This redirect does not require a rating on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The contents of the Specificity (tests) page were merged into Sensitivity and specificity and it now redirects there. For the contribution history and old versions of the merged article please see its history.
In the gene structure prediction literature, specificity has traditionally been computed as Sp = TP / (TP + FP). That is, specificity is the proportion of predicted coding nucleotides that are actually coding. 22:31, 15 Aug 2004
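For concreteness, this usage can be sketched in Python. The counts below are made-up illustrative numbers; note that this "specificity" coincides with positive predictive value (precision), not with the TN/(TN+FP) definition used elsewhere on the page.

```python
# Hypothetical nucleotide-level counts from a gene predictor (illustrative only).
tp = 800  # predicted coding, actually coding
fp = 200  # predicted coding, actually non-coding

# "Specificity" as traditionally used in gene structure prediction:
# the fraction of predicted coding nucleotides that are actually coding.
sp = tp / (tp + fp)
print(sp)  # 0.8
```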
I came to this page looking for a continuous interpretation of specificity (for instrumentation). For example, if you built an instrument to measure the salt content of a solution, it might (by imperfect design) also register the amount of sugar in the sample. Suppose the actual instrument reading was [reading] = 0.99*[true salt concentration] + 0.01*[sugar concentration]. Is there a concept of specificity that characterizes this kind of imperfection? 69.159.205.193 14:07, 15 February 2006 (UTC)
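The questioner's linear cross-response model can be written out directly. The "selectivity" metric below is an illustrative assumption, one ad-hoc way to quantify the imperfection, not a standard definition from the literature:

```python
# The hypothetical instrument model from the question above:
# reading = 0.99 * [true salt concentration] + 0.01 * [sugar concentration]
def reading(salt, sugar):
    return 0.99 * salt + 0.01 * sugar

# One ad-hoc (assumed, non-standard) continuous analogue of specificity:
# the fraction of the instrument's response coefficients attributable
# to the target analyte rather than the interferent.
target_coeff, interferent_coeff = 0.99, 0.01
selectivity = target_coeff / (target_coeff + interferent_coeff)
print(round(selectivity, 6))  # 0.99
```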
This is based on http://www.musc.edu/dc/icrebm/sensitivity.html

**Information Retrieval Basics**

| Measure | Formula | Plain-English reading |
| --- | --- | --- |
| True Positives | TP | Number of P's that you called P |
| True Negatives | TN | Number of N's that you called N |
| False Positives | FP | Number of N's that you called P (Type I errors) |
| False Negatives | FN | Number of P's that you called N (Type II errors) |
| Positives | P = TP + FN | Number of P's |
| Negatives | N = TN + FP | Number of N's |
| Data set | A = P + N | Number of P's and N's |
| Sensitivity | TP/P | Proportion of P's that you called P (recall in IR) |
| Specificity | TN/N | Proportion of N's that you called N |
| False Positive Rate | FP/N | Proportion of N's that you called P |
| False Negative Rate | FN/P | Proportion of P's that you called N |
| Positive Predictive Value | TP/(TP + FP) | Proportion of those you called P that are P (precision in IR) |
| Negative Predictive Value | TN/(TN + FN) | Proportion of those you called N that are N |
| Prevalence | P/A | Proportion of data that are P |
| F-Measure | 2·Rec·Pre/(Rec + Pre) | Harmonic mean of precision and recall |
-- Jettlogic
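The definitions above can be sketched in Python as a minimal reference implementation; the counts in the usage example are made up for illustration:

```python
def confusion_metrics(tp, tn, fp, fn):
    """Compute the standard confusion-matrix metrics from raw counts."""
    p = tp + fn           # total actual positives
    n = tn + fp           # total actual negatives
    rec = tp / p          # sensitivity (recall in IR)
    pre = tp / (tp + fp)  # positive predictive value (precision in IR)
    return {
        "sensitivity": rec,
        "specificity": tn / n,
        "false_positive_rate": fp / n,
        "false_negative_rate": fn / p,
        "ppv": pre,
        "npv": tn / (tn + fn),
        "prevalence": p / (p + n),
        "f_measure": 2 * rec * pre / (rec + pre),  # harmonic mean
    }

# Illustrative counts only.
m = confusion_metrics(tp=90, tn=80, fp=20, fn=10)
print(m["sensitivity"])      # 0.9
print(m["specificity"])      # 0.8
print(round(m["ppv"], 3))    # 0.818
```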
See Talk:Sensitivity (tests) regarding the past wish list for a simpler description that sets out what the measure is before launching into mathematical jargon. I have also added a table here, and a worked example in Sensitivity (tests). The table is now consistent across Sensitivity, Specificity, PPV & NPV, with the relevant row or column for each calculation highlighted. David Ruben Talk 02:44, 11 October 2006 (UTC)
I like the table! I think it would be helpful to explicitly say somewhere that "Power = Sensitivity" (which follows from your equations), but I did not know how to edit the linked-to example myself. If you agree, can you perhaps add this somewhere? Best wishes, David (wp07 at kreil.org). (23:20, 11 June 2007 User:141.244.140.159)
I would like to clarify in my mind the difference between the two concepts. At the moment, specificity redirects to specificity (tests), but am I wrong in believing that in general terms the two words are synonymous? LouisBB ( talk) 06:11, 22 May 2008 (UTC)