This article is rated C-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
The article lead immediately refers to "instances" without providing any context or explanation of what "instances" are being referenced here.
dr.ef.tymac ( talk) 00:56, 6 August 2021 (UTC)
In the Astronomy community, precision and recall are often called completeness and purity (examples: Lochner et al. (2016) - page 8, Boone (2019)). I would like to include them with the other synonyms in the table. Does anyone know how to include them? Cyber Mulher ( talk) 01:50, 5 February 2021 (UTC)
In my opinion, the current use of the graphical illustration is not optimal. The German version of this article [1] uses the same picture, but additionally has two variants of the image (see [2]) which illustrate the individual concepts (P & R). The German article also uses the same colors in the True/false positive/negative table as in the image, further illustrating the connections. We should make those changes once we've agreed on how to do the merger (see below). Tobi Kellner ( talk) 19:34, 1 July 2008 (UTC)
I still don't understand the illustration. Perhaps a more thorough explanation of the color and arrows would help. —Preceding unsigned comment added by Toahi ( talk • contribs) 00:43, 27 February 2010 (UTC)
I am suggesting that Precision (information retrieval) and Recall (information retrieval) be merged into this article. A similar merger between sensitivity and specificity is being discussed at Talk:Sensitivity_and_specificity#Merger_proposal, and it seems like the consensus is heading toward a merger. WDavis1911 ( talk) 18:26, 6 June 2008 (UTC)
I absolutely agree. We just went through a similar debate at Talk:Relevance (information retrieval) and I'm happy with how we redirect discussion of performance measures to Information retrieval#Performance_measures, which in turn points to this entry. Dtunkelang ( talk) 04:53, 10 June 2008 (UTC)
I completely agree. I started this article in September 2007, probably because I found that there was no English article that matched the German one on precision and recall [3]. Now I realize that we have a lot of redundant information, with the separate articles on Precision (information retrieval) and Recall (information retrieval) as well as the section Information retrieval#Performance_measures. I think there is a point to be made for having a discussion of precision and recall in one place, rather than simply having two separate articles, though, as the two terms are so closely related that it seems to make sense to explain them together. But maybe this article should have clear separate sections with effectively the contents of Precision (information retrieval) and Recall (information retrieval) for those looking for just a definition of one of the terms alone. Tobi Kellner ( talk) 19:34, 1 July 2008 (UTC)
I have had a go at doing the merger, basically just copying the text from the other pages. But I am a bit inexperienced, so I am not sure what to do with the other pages, Precision (information retrieval) and Recall (information retrieval). OZJ ( talk) 16:27, 24 June 2009 (UTC)
Mind the fact that there is now an Evaluation measures (information retrieval) page. This requires a bit of coordination. i⋅am⋅amz3 ( talk) 00:49, 18 March 2018 (UTC)
Would it be appropriate to say something about the relationship between precision/recall and soundness/completeness? 129.240.71.209 ( talk) 12:35, 18 May 2010 (UTC)
This page is very confusing. Here is a link to a good explanation:
http://newadonis.creighton.edu/hsl/searching/Recall-Precision.html —Preceding unsigned comment added by 171.66.73.218 ( talk) 00:10, 6 April 2011 (UTC)
I think the first sentence "Precision and recall are two widely used statistical classifications." is imprecise and potentially confusing given the relation of these terms to statistical classification. More accurately, they are metrics of performance for statistical classifiers, not really "statistical classifications". -- Jludwig ( talk) 06:23, 10 June 2010 (UTC)
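To illustrate the point above, here is a minimal sketch (function names are illustrative, not from the article) of precision and recall as performance metrics computed from a binary classifier's confusion-matrix counts:

```python
# Precision and recall as performance metrics for a binary classifier,
# computed from true positive / false positive / false negative counts.

def precision(tp, fp):
    # Fraction of predicted positives that are actually positive.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that the classifier predicted positive.
    return tp / (tp + fn)

# Example: 8 true positives, 2 false positives, 4 false negatives.
print(precision(8, 2))  # 0.8
print(recall(8, 4))     # ~0.667
```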
I noticed that Fall-out in the table of metrics for definition points to a wiki on information retrieval rather than a definition for fallout or a class of metrics covering it. — Preceding unsigned comment added by 2620:0:1009:18:DCAF:5105:B547:5A5E ( talk) 19:16, 22 August 2018 (UTC)
The equations don't include cases when the denominator is 0. Are precision and recall just undefined in these cases? Khatchad ( talk) 23:09, 7 March 2011 (UTC)
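Regarding the question above: yes, precision is mathematically undefined (0/0) when the classifier predicts no positives, and recall when there are no actual positives. A small sketch of how libraries conventionally handle this (the fallback value is a convention, not a mathematical fact; for example, scikit-learn exposes it as a `zero_division` parameter):

```python
# Precision is undefined when tp + fp == 0 (the classifier predicted no
# positives). A common convention is to return 0 (or 1) in that case.

def safe_precision(tp, fp, zero_division=0.0):
    # Fall back to the chosen conventional value on a zero denominator.
    if tp + fp == 0:
        return zero_division
    return tp / (tp + fp)

print(safe_precision(0, 0))  # 0.0 by the chosen convention
print(safe_precision(3, 1))  # 0.75
```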
The article on accuracy and precision talks about pretty much the same concepts as this article, but does so differently, and completely fails to mention precision. This article covers all the concepts, but doesn't have the nice diagrams (e.g. the bullseye) of the former. But essentially, they're pretty much about the same thing. Thus, a proposal to merge these two articles. Good idea, yes or no?
Failing the merge proposal, the whole, ahem, mess of related articles could benefit from more cross-links and sharing: e.g. accuracy and precision fails to link to this one when defining recall, and instead links to sensitivity (tests) for recall. And likewise, as one chases around the various links in this (non-)cluster of articles. Arghh, so e.g. at the bottom of sensitivity and specificity is a table, defining precision, recall, accuracy, and many others, but completely forgetting to mention F1! The article on information retrieval points to this one as the "main article" on precision, recall, and fall-out, but this article never mentions fall-out. So maybe not just a merge, but a coherent rationalization of the whole cluster of related topics? linas ( talk) 19:21, 2 June 2012 (UTC)
The relationship between sensitivity and specificity to precision depends on the proportion of positive cases in the population, also known as prevalence; with fixed sensitivity and specificity, precision rises with increasing prevalence.
This seems to be wrong: if the prevalence increases, the precision doesn't necessarily increase or decrease, since the precision is the number of true positives over the total number of positives. Let me know if I misunderstood. PeepleLikeYou ( talk) 11:28, 15 August 2020 (UTC)
The quote is correct. The precision depends on the prevalence. One way to see this is to imagine what happens if the prevalence goes to 0 while the sensitivity and specificity remain constant. Since the precision is the number of true positives over the total number of positives, as you said, it must fall to 0 too (there can be no true positives if the prevalence is 0). Tobycrisford ( talk) 11:25, 7 January 2022 (UTC)
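The argument in this thread can be made concrete with a short numerical sketch (the formula is the standard Bayes'-theorem expression for positive predictive value; the numbers are illustrative): with sensitivity and specificity held fixed, precision rises monotonically with prevalence and goes to 0 as prevalence goes to 0.

```python
def ppv(sensitivity, specificity, prevalence):
    # Precision (positive predictive value) via Bayes' theorem:
    # P(D | +) = sens*prev / (sens*prev + (1 - spec)*(1 - prev)).
    tp_rate = sensitivity * prevalence
    fp_rate = (1 - specificity) * (1 - prevalence)
    return tp_rate / (tp_rate + fp_rate)

# With sensitivity and specificity fixed at 0.9, precision rises with prevalence.
for prev in (0.001, 0.01, 0.1, 0.5):
    print(prev, round(ppv(0.9, 0.9, prev), 4))
```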
Fellow Wikipedians: I've proposed some changes to the formula infobox transcluded into this article, with the goal of trimming down its overpowering (if not excessive) width. My original message with some explanatory notes is at Template talk:Confusion matrix terms#Template_width, and you can see the revised template layout I've proposed by viewing its sandbox version.
There have been no responses over there in well over two months, and since the changes I'm proposing are significant enough to possibly be contentious, I wanted to invite any interested Wikipedians to discuss them over at the template's talk page. Thanks! FeRDNYC ( talk) 00:05, 5 January 2022 (UTC)
https://maxkleiner1.medium.com/bayes-theorem-confusion-matrix-b6f9ee3864a0 Biggerj1 ( talk) 14:52, 2 September 2022 (UTC)
This article was the subject of a Wiki Education Foundation-supported course assignment, between 20 January 2023 and 15 May 2023. Further details are available on the course page. Student editor(s): AbigailG23 ( article contribs).
— Assignment last updated by AbigailG23 ( talk) 05:13, 9 April 2023 (UTC)
Many of the referenced articles appear several times (different reference numbers). It would be nice to consolidate those. 194.230.147.195 ( talk) 20:17, 30 April 2023 (UTC)
It seems odd to me that there is no precision-recall figure on this page, as this visualizes the trade-off of precision and recall. There is one on the F-score page. Ramajoepanda ( talk) 14:08, 18 January 2024 (UTC)