This is the talk page for discussing improvements to the Dunning–Kruger effect article. This is not a forum for general discussion of the article's subject.
Article policies
Find sources: Google (books · news · scholar · free images · WP refs) · FENS · JSTOR · TWL
Archives: 1, 2, 3, 4, 5, 6. Auto-archiving period: 90 days
Dunning–Kruger effect has been listed as one of the Social sciences and society good articles under the good article criteria. If you can improve it further, please do so. If it no longer meets these criteria, you can reassess it.
This article is rated GA-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Daily pageviews of this article: graphs are temporarily disabled; until they are enabled again, visit the interactive graph at pageviews.wmcloud.org
Dunning–Kruger effect has been linked from multiple high-traffic websites. All prior and subsequent edits to the article are noted in its revision history.
The article otherwise largely fails to communicate the degree to which the DK effect is pseudoscience. The statement "the statistical explanation interprets these findings as statistical artifacts" needs to be expanded and made much more prominent to explain why the effect is simply autocorrelation and should not be the basis for any cognitive or metacognitive claims despite its appeal. The autocorrelation claim is easy to understand and should be a convincing argument for changing the first paragraph to make clear that while the concept is appealing, it is not based on a valid statistical methodology and should not be taken too seriously.
Simply put, for any sample of test scores on a 0-10 scale, the likelihood that someone who scores 0 will overestimate their performance is necessarily higher than for someone who scores 10. The reverse is also true: anyone who scores 10 will necessarily underestimate their performance more than someone who scores 0.
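The bounded-scale point above can be illustrated with a small hypothetical simulation (the 0-10 scale and uniform draws are illustrative assumptions, not taken from any cited study): even when self-estimates carry no information at all, the lowest scorers can only overestimate and the highest scorers can only underestimate.

```python
import random

random.seed(1)

# Hypothetical illustration: test scores and self-estimates are drawn
# independently and uniformly on a 0-10 scale, so by construction there
# is no metacognitive effect whatsoever.
n = 100_000
scores = [random.randint(0, 10) for _ in range(n)]
estimates = [random.randint(0, 10) for _ in range(n)]

def mean_error(target_score):
    """Average (self-estimate - score) among people with a given score."""
    errors = [e - s for s, e in zip(scores, estimates) if s == target_score]
    return sum(errors) / len(errors)

# A 0-scorer cannot underestimate and a 10-scorer cannot overestimate,
# so the bound alone produces a Dunning-Kruger-like pattern.
print(mean_error(0))   # roughly +5: pure overestimation at the bottom
print(mean_error(10))  # roughly -5: pure underestimation at the top
```

Nothing about cognition is modelled here; the pattern falls out of the bounded scale alone.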
The ironies are replete, as pointed out in the article: "there is a delightful irony to the circumstances of their [Dunning and Kruger's] blunder. Here are two Ivy League professors arguing that unskilled people have a 'dual burden': not only are unskilled people 'incompetent' ... they are unaware of their own incompetence. [...] In their seminal paper, Dunning and Kruger are the ones broadcasting their (statistical) incompetence by conflating autocorrelation for a psychological effect."
The popularity of the DK effect may be an interesting study in how bad science can take hold in the popular mind given how many people seem to take it seriously without considering the fatal flaws in the methodology used to identify the alleged phenomenon. DK also serves as an example of how bad science can get through the scientific peer review process, especially if it comes from a highly reputable institution. -- Chassin ( talk) 16:10, 7 January 2023 (UTC)
"the empirical data when analyzed correctly falsifies them both equally" Which reliable source says so? Paradoctor ( talk) 20:25, 7 January 2023 (UTC)
I looked into this; the archived discussion doesn't seem to be particularly convincing on why not to mention dissent here. The scientific articles quoted by Fix seem rather convincing (if not damning) on the maths. But I get that they haven't been cited as often as the Dunning-Kruger article they're pointing at. I'm pretty sure I can't get away with AFDing the article or something crazy like that. But... I do think that NPOV allows me to put the countervailing point of view that Dunning-Kruger's paper is bad because (given sources claim) they messed up their maths. -- Kim Bruning ( talk) 00:34, 27 November 2023 (UTC) (Even if they didn't mess up their maths, they definitely did maths in a way that has been confusing to skilled scientists. They may have ended up confusing themselves; this seems plausible based on the cited sources. Either way, not Wikipedia's battle: but for sure we can write that not everyone thinks the effect is real!)
So I might be a little rusty. What are the exact policy reasons for removal of each of the sources? The published papers demonstrate that you can get the Dunning-Kruger graph from random noise (oops). The web source confirmably summarizes the papers, and thus can usefully be seen as a secondary source. Usually when people actually dig in and read sources, they also take a minute of extra time to post their findings on the talk page (or link to where it was previously discussed). But I'll go read them again just to be sure; did I miss anything? -- Kim Bruning ( talk) 11:23, 27 November 2023 (UTC)
Annnd... just came back from reading the papers; in particular, Nuhfer et al. 2017 concludes with: "Because of insufficient attention to Numeracy, Current prevalent explanations of the nature of human self-assessment seem to rest on a tenuous foundation."
Due to the replication crisis in (among others) psychology, we're likely to see many papers like these going forward. Maybe I'm late to this party: is there standing Wikipedia policy when it comes to bad replications or methodological flaws? Else I'd just apply NPOV, and at least report that there have been reported issues with a particular study. (whether the report is correct or not is a different story, but it got published, so we can say it has and by whom.) -- Kim Bruning ( talk) 11:40, 27 November 2023 (UTC)
Three refereed papers on this topic have convenient Wikidata entries:
Boud ( talk) 21:05, 27 November 2023 (UTC) (PS: I see that the sfn structure for citation is used... Boud ( talk) 21:29, 27 November 2023 (UTC))
"On the basis of a sample ... contrary to the Dunning-Kruger hypothesis. Additionally, the association ... contrary to the Dunning-Kruger hypothesis. It is concluded that, although the phenomenon described by the Dunning-Kruger hypothesis may be to some degree plausible for some skills, the magnitude of the effect may be much smaller than reported previously." In other words, they found evidence contrary to the Dunning-Kruger hypothesis while accepting that a small effect for some skills might exist (since they didn't do tests for all well-known skills). Something should go up to the lead, but in proportion to the length of this section in the body, so currently it would have to be a very brief sentence. Waiting to see how this section develops would make sense: there is no deadline. Boud ( talk) 22:08, 27 November 2023 (UTC)
"In one of the most highly replicable findings in social psychology, Kruger and Dunning showed that participants who performed worse in tests of humour, reasoning, and grammar were also more likely to overestimate their performance." This is a high-quality source (Nature Human Behaviour) that is more recent. Phlsph7 ( talk) 08:36, 28 November 2023 (UTC)
When scientists describe something as a "statistical artifact," they are often implying significant doubt about the validity of the observed phenomenon as a true effect.
"The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities." This is true even if that overestimation is caused by regression to the mean: very incompetent people cannot underestimate their competence because their competence is already at the bottom, so they can only judge it correctly or overestimate it, which, on average, means that they overestimate it. -- Hob Gadling ( talk) 11:01, 29 November 2023 (UTC)
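The regression-to-the-mean account discussed above can be sketched with a hypothetical simulation (the normal-noise model is an illustrative assumption, not taken from any cited paper): even if everyone's self-assessment is unbiased, grouping by the noisy measured score makes the bottom quartile look overconfident and the top quartile look underconfident.

```python
import random
import statistics

random.seed(0)

# Hypothetical model: self-assessment is unbiased for everyone
# (estimate = true skill + independent noise), and the test score
# is likewise just true skill plus independent noise.
n = 100_000
true_skill = [random.gauss(0, 1) for _ in range(n)]
score = [t + random.gauss(0, 1) for t in true_skill]
estimate = [t + random.gauss(0, 1) for t in true_skill]

# Group by measured score, as the original study design does.
pairs = sorted(zip(score, estimate))
q = n // 4
bottom_gap = statistics.mean(e - s for s, e in pairs[:q])
top_gap = statistics.mean(e - s for s, e in pairs[-q:])

# Selecting on a noisy score induces the apparent pattern: the bottom
# quartile was partly selected for bad luck, the top for good luck.
print(bottom_gap)  # positive: bottom quartile "overestimates"
print(top_gap)     # negative: top quartile "underestimates"
```

The gap appears without any metacognitive deficit in the model, which is exactly the statistical-artifact objection; it says nothing by itself about whether a real effect exists on top of it.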
So I picked up this edit by User:Dagelf (https://en.wikipedia.org/?title=Dunning%E2%80%93Kruger_effect&diff=1212672204&oldid=1212638810); looks like some additional sources from The Usual Suspects. I think the consensus so far seems to be to put this under statistics for now, at least until/unless more scientists start to agree. I'm not married to the wording there, except that the word 'autocorrelation' should probably be in the article *somewhere* at least. This group of statisticians did write a number of peer-reviewed articles on the topic, after all.
btw.. In general, I think it's appropriate to post a rationale on the talk page if you're editing/reverting good-faith edits where the reasoning might not be immediately obvious. Reverting with "This should be discussed[...]" is slightly ironic. ;-) -- Kim Bruning ( talk) 01:39, 9 March 2024 (UTC)
References
The result was: promoted by Bruxton ( talk) 15:10, 27 August 2023 (UTC)
References
Sources
Improved to Good Article status by Phlsph7 ( talk). Self-nominated at 14:19, 24 August 2023 (UTC). Post-promotion hook changes for this nom will be logged at Template talk:Did you know nominations/Dunning–Kruger effect; consider watching this nomination, if it is successful, until the hook appears on the Main Page.
General: Article is new enough and long enough
Policy compliance:
Hook: Hook has been verified by provided inline citation
QPQ: Done.
Overall: Epicgenius ( talk) 14:38, 25 August 2023 (UTC)
Is Wikipedia:List_of_citogenesis_incidents#Terms_that_became_real true? If so, then there must be a reliable source that talks about this. CactiStaccingCrane ( talk) 19:05, 20 January 2024 (UTC)
There is disagreement about whether incompetent people really overestimate their competence? From people who know what regression to the mean is? But regression to the mean predicts the effect. -- Hob Gadling ( talk) 11:27, 9 March 2024 (UTC)
"but it doesn't seem to hold for everyone" Duh. It's a statistical effect; of course it does not. What sort of reasoning is that? And "the original study was flawed" has no connection to "There is also disagreement about whether the effect is real at all". -- Hob Gadling ( talk) 13:48, 11 March 2024 (UTC)
"In one of the most highly replicable findings in social psychology, Kruger and Dunning showed that participants who performed worse in tests of humour, reasoning, and grammar were also more likely to overestimate their performance." There are different ways to explain this, but there are very few reliable sources that claim that there is nothing there. Even statistical explanations usually acknowledge this. For example, Gignac & Zajenkowski 2020 hold that statistics only explain some part of the effect, and Nuhfer et al. 2017 only deny that the effect is "pronounced". Phlsph7 ( talk) 08:41, 10 March 2024 (UTC)
I really appreciate the time Phlsph7 put in to improve the page!
Meanwhile, somewhere along the way we lost the recent comments by Gaze (one of the 'et al.' in Nuhfer et al.). I'll leave it here as a note in case we need it again later. -- Kim Bruning ( talk) 14:04, 10 March 2024 (UTC)