The best and most helpful analogy for I/II errors I've ever heard is the courtroom analogy, where a type I error is represented by a wrongfully convicted innocent, and a type II error by a wrongfully acquitted criminal. — Preceding unsigned comment added by Cpt5mann ( talk • contribs) 22:30, 21 May 2020 (UTC)
The reason that I have suggested the merge is that each of these three pieces desperately need to be understood within the context of the other two.
I have done what I can to make the "matrix" which the combined three will inhabit as clear as I possibly can.
The further work, to some extent, needs someone with a higher level of statistical understanding than I possess. However, I am certain that, now that I have supplied all of the "historical" information, supported by the appropriate references, the statistical descriptions and merging will be a far easier task.
Also, I am certain that, once the articles are merged, and once all of the statistical discussions have become symmetrical in form and content, the average reader will find the Wiki far more helpful than it is at present. My best to you all Lindsay658 22:01, 23 June 2006 (UTC)
What do others think about this merger? I wanted to point a friend to an article about "false positives". Even though the information in the article is statistically correct, it makes little sense if you talk about false positives in the context of e-mail spam filters or anti-virus checks. Imagine you want to explain to someone what a false positive is in the case of an anti-virus check and you point him to this site. He will see all the math talk and run - but that is not necessary: the concept of a false positive can be understood even if you have no clue about statistics. E.g., here is an explanation which has nothing to do with statistics, but is perfectly understandable for people who are not mathematically inclined: http://service1.symantec.com/sarc/sarc.nsf/info/html/what.false.positive.html
Wouldn't it be better to create an article about "false positives" and let it point to this article for further details? 79.199.225.197 ( talk) 15:40, 4 May 2009 (UTC)
This article was proposed for deletion on the basis of qualms about its name. I agree that it should be moved, and suggest Type I and Type II errors, but I see no reason to delete. Septentrionalis 21:32, 2 July 2006 (UTC)
Heeding the advice and guidance of Arthur Rubin and Septentrionalis this article is being re-titled and re-organized in a way that far better represents the overall situation. Lindsay658 22:45, 3 July 2006 (UTC)
It seems like the following proposed rearrangement of The Table of Error Types would be more consistent with the TP/FP/FN/TN graphic in the Error Rate section and be more consistent with the location of TP/FP/FN/TN and Type I and II errors in the Confusion_matrix page:
| Table of error types | Null hypothesis (H0) is false | Null hypothesis (H0) is true |
|---|---|---|
| Reject H0 | Correct inference (true positive) (probability = 1−β) | Type I error (false positive) (probability = α) |
| Don't reject H0 | Type II error (false negative) (probability = β) | Correct inference (true negative) (probability = 1−α) |
I believe this is consistent with the spirit of Xavier maxime's comment below, except that this recognizes that the above table is written from the perspective of H0 rather than H1, as it probably should be. Alternatively, the whole table could be represented in the context of both H0 and H1:
| Table of error types | Alternate hypothesis (H1) is true | Alternate hypothesis (H1) is false |
|---|---|---|
| Reject H0 (H1 accepted) | Correct inference (true positive) (probability = 1−β) | Type I error (false positive) (probability = α) |
| Don't reject H0 | Type II error (false negative) (probability = β) | Correct inference (true negative) (probability = 1−α) |
This makes the order of "condition positive" and "condition negative" columns more consistent with all the Wikipedia pages that contain the confusion matrix information. I believe this version of the table makes the concepts more easily mentally connected with both hypothesis testing and machine learning contexts. This version of the table also puts the table in the context of BOTH H0 and H1 which is really the only context for which Type I and II errors are meaningful. Sdavern ( talk) 20:45, 16 March 2020 (UTC)
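As a sanity check on the proposed layout (a sketch of my own, not from the article; the function name is mine), the cell labels follow mechanically from the decision about H0 and the truth of H1:

```python
# Sketch for this discussion (names are my own): label each cell of the
# proposed table from the decision about H0 and whether H1 is actually true.
def cell_label(reject_h0: bool, h1_true: bool) -> str:
    if reject_h0:
        # Rejecting H0 asserts an effect: correct if H1 is true.
        return "true positive (1-beta)" if h1_true else "false positive / type I (alpha)"
    # Not rejecting H0 asserts no effect: correct if H1 is false.
    return "false negative / type II (beta)" if h1_true else "true negative (1-alpha)"

# Reading order of the proposed table: "H1 true" column first, then "H1 false".
table = [[cell_label(r, t) for t in (True, False)] for r in (True, False)]
```

Running this reproduces exactly the second table above, which is the reason I find the H1-first column order easier to connect to the confusion-matrix pages.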
The Table of Error Types is inconsistent with the text and with other Wikipedia pages. In particular, the True Positive and True Negative appear to be reversed. This page: /info/en/?search=Sensitivity_and_specificity states that the true positive is the power of the test (i.e. 1 - beta), which is not what is in the table. Could you please check and/or clarify what is meant here? - Date: 28 March 2018 — Preceding unsigned comment added by Xavier maxime ( talk • contribs) 09:18, 28 March 2018 (UTC)
The term true negative is requested at Wikipedia:Requested_articles/Mathematics. I think it would be most appropriate to link the two terms true negative and true positive to this article. They are however not explained here, so they need to be added. I think it would be a good thing to create a confusion matrix at the beginning of this article, and explain each term with a reference to the matrix. I'm going to do the linkup when I finish writing this. If no opposition is uttered before Tuesday, I will do the proposed changes. Bfg 15:18, 16 July 2006 (UTC)
Lindsay. I admit to not reading the entire article, just skimming. I've done a reread of the article upon reading your comments. I'll try to cover your points:
After reading the article, I also propose that the article be reorganized in the following way:
I copied the article improvement suggestions here to separate them from the very lengthy discussion about footnotes. Bfg 12:14, 7 August 2006 (UTC)
I am currently contemplating how this could best be done, an example should be useful both from a hypothesis testing perspective and a Bayesian classification perspective. I am considering making up an example where we look at the hypothesis testing as a two class Bayesian classification problem. Does that sound reasonable? Bfg 13:07, 7 August 2006 (UTC)
The following is a draft proposal for a case study, meant to come before Various proposals for further extension Bfg 13:36, 18 August 2006 (UTC)
In medicine one often discusses the possibility of large-scale medical screening (see below).
Given a hypothetical, idealized screening situation, the following is true:
From these data, we can form a confusion matrix
| | Correct well (H0) | Correct ill (HA) |
|---|---|---|
| Classified well | 989,010 (true negative) | 10 (false negative) |
| Classified ill | 9,990 (false positive) | 990 (true positive) |
From this we may read the following:
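(Not part of the draft itself, just a check I ran on its numbers; the variable names are my own.) With the counts above, the standard rates work out as follows:

```python
# Check of the draft's screening numbers (variable names are my own).
tn, fn = 989_010, 10      # classified well: actually well / actually ill
fp, tp = 9_990, 990       # classified ill:  actually well / actually ill

population  = tn + fn + fp + tp        # 1,000,000 people screened
prevalence  = (tp + fn) / population   # fraction actually ill
sensitivity = tp / (tp + fn)           # P(classified ill | actually ill)
specificity = tn / (tn + fp)           # P(classified well | actually well)
ppv         = tp / (tp + fp)           # P(actually ill | classified ill)
```

So even with 99% sensitivity and 99% specificity, only about 9% of those flagged as ill actually are, because the condition is rare; that is the usual caveat about large-scale screening and seems worth stating explicitly in the case study.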
Under frequentist theory one would always refer to one hypothesis as the null hypothesis, and the other as an alternative hypothesis. Under Bayesian theory, however, there is no such preference among the hypotheses; the notions of false positive and false negative become connected to the hypothesis one is currently discussing. Referring to the confusion matrix above, the false negatives become the sum of the off-diagonal column elements belonging to the respective hypothesis, while the false positives become the sum of the off-diagonal row elements. The true positives become the on-diagonal element, while true negative makes little sense.
In my opinion, the article as currently written systematically reverses the usual meanings of false negative and false positive.
In my experience, the usual notion of false positive is in situations such as this: One is testing to see if a patient has a disease, or if a person has taken illegal drugs. The null hypothesis is that no disease exists, or no drugs were taken. A test is administered, and a result is obtained. Suppose the test indicates that the disease condition exists or that the illegal drugs were taken. Then, if the patient is disease-free, or the drugs were not taken, this is a "false positive" in the usual meaning of the term. If for example under the null hypothesis, 5% of the subjects nonetheless test positive, then it is usually said that "the type I error rate is 0.05." That is, 5% of the subjects have falsely tested positive.
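To make the "5% of the subjects have falsely tested positive" point concrete with a hypothetical frequentist setup (my own illustration, not from the article): for a one-sided z-test at significance level 0.05, the fraction of null-hypothesis subjects whose statistic exceeds the critical value is exactly the type I error rate.

```python
from statistics import NormalDist  # Python 3.8+ standard library

# Hypothetical one-sided z-test at alpha = 0.05: under H0 (disease-free,
# no drugs taken) the test statistic is standard normal.
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)    # critical value, about 1.645
type_i_rate = 1 - NormalDist().cdf(z_crit)  # P(test positive | H0 true)
```

The positive test outcome is the rejection of the null, so the false positives are exactly the type I errors under this usage, which is the convention I'm arguing the article should follow.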
This observation is contrary to how the article is written.
Similarly, the notion of false discovery rate is connected with the rate of falsely "discovering" effects that are solely due to chance under the null hypothesis.
The article attempts to account for this by mentioning ambiguity of testing for innocence versus testing for guilt; while there is some rationale for this (especially under a Bayesian interpretation, where all hypotheses are treated the same), it is not usually the situation in frequentist theory, where one hypothesis is usually distinguished as the obvious null hypothesis. Usually this is the "default" hypothesis that one is trying to reject, e.g., that the accused is innocent, that the patient is disease-free, that the athlete is not using illegal drugs.
Therefore, I seriously propose that the article be edited to reflect properly the usual uses of these terms. As it stands, any Freshman statistics student would be immediately confused when comparing this article to what his statistics textbook states. Bill Jefferys 02:03, 11 August 2006 (UTC)
Reading the article more carefully, I find that it is inconsistent even within itself about "false negative" and "false positive". Whereas the lead-in (incorrectly) identifies a false negative as "rejecting a null hypothesis that should have been accepted", later on in the article, in the "medical screening" section, rejecting a null hypothesis that should have been accepted (i.e., deciding that the individual being screened has the condition when he does not -- the null hypothesis being that the individual does not have the condition), is correctly identified as a false positive.
This is going to take a lot of care to edit to make the whole article consistent with both usual terminology and to make it self-consistent.
I note in passing that under frequentist theory, one never "accepts" the null hypothesis. Rather, one fails to reject the null hypothesis. When the article is edited to remove these problems, this should also be fixed. Bill Jefferys 23:44, 11 August 2006 (UTC)
Hi, thanks for the feedback. I've been a bit busy of late, but if I can find the time I'll go at it. Some of the later stuff is fine, but the introduction and some of the earlier stuff need to be fixed. Bill Jefferys 13:59, 14 August 2006 (UTC)
Hi, I've made the changes I am aware of that should be made. There was also some confusion on sensitivity and power which I've also fixed. However, as this was done in one grand edit, I may have missed some things or incorrectly changed some others. Part of the confusion is that the article is written in such a way that it's not always clear what one should consider the null hypothesis. Thus, when discussing computer identity authentication, I've presumed that a positive test result corresponds with identifying the subject logging in as authentic, whereas a negative result corresponds with identifying the subject as an imposter. Please check my work. Bill Jefferys 21:47, 14 August 2006 (UTC)
For the paragraph:
Given that you seem to be saying that:
I am certain that it could easily be re-written in the form:
Also, it seems that "the error not rejecting a null hypothesis" should either be:
Best Lindsay658 03:05, 18 August 2006 (UTC)
I hear what you're saying and indeed the double negative bothers me as well. The problem is that according to standard (frequentist) hypothesis testing theory, one never "accepts" the null hypothesis; one only "fails to reject" it. I haven't been able to think of a good way around this that is both technically correct and easy to understand. Any help here would be appreciated.
In a Bayesian or decision-theoretic context, it wouldn't be an issue, but as long as we're considering standard hypothesis testing, it is. Bill Jefferys 11:36, 18 August 2006 (UTC)
An anonymous editor [63.167.255.231] noted on the main page that:
I have moved the comment here because comments don't belong on the main page. The editor here has some justice behind his comments. Indeed, this is a problem of decision theory since not only must the Type I/II error rate be taken into account, but also the loss function that applies, since the loss on detaining a harmless passenger is much less than the loss on allowing a terrorist passenger on board.
I think that the right way to approach this in the main article is to point out (and link to) the connection to decision theory, perhaps using this example as a springboard. But since this article is explaining only Type I/II errors, it isn't the place to give a full explication of decision theory. Perhaps this can be added to the list of "to-dos". Bill Jefferys 23:55, 21 August 2006 (UTC)
It isn't quite true that one normally tries to make the Type I and Type II error rates equal. Usually one tries to balance the two against each other, i.e., choosing a test that will have an appropriate Type I error rate, and then picking other factors (e.g., the number of cases tested) that will guarantee a desired Type II error rate. I'm not aware that anyone slavishly decides that a study will be designed so that Type I and Type II error rates will be equal.
That said, it is still the case that in decision problems, the Type I/II error rates are part, but not all of the problem. For one thing, Type I/II error rates assume that the null/alternative hypothesis is correct, and take no account of the probability that the null/alternative hypothesis is actually the state of nature. This is something that is only reflected in a Bayesian context, through the prior. Secondly, as I point out above, the cost of making a particular decision, given the probability that the state of nature is what it is (e.g., terrorist, innocuous traveller) has to be taken into account. None of these has anything to do with the definition of what is a Type I/II error. They are important considerations that have to be considered when one is making a decision, but they don't reflect directly on the definition of the terms being described by this article.
Thus, I think that it's appropriate to mention this issue briefly with links to the appropriate articles on decision theory and loss function, for example, but it is also appropriate to use the security checking example as one that describes what Type I/II errors are, which is of course the point of the article. Bill Jefferys 01:47, 22 August 2006 (UTC)
I removed the statement that size is equal to power. This is wrong; in fact size is the maximum probability of Type I error, that is, the smallest level of significance of the test. In many cases size is equal to α (for example, a test statistic having a continuous distribution and a point null hypothesis), but in general it is sup_{θ∈Θ0} P_θ(T ≥ c), where c is the critical value. Btyner 22:11, 3 November 2006 (UTC)
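For the record, the definitions I'm relying on can be written out (standard notation; this transcription is my own, not from the article):

```latex
% Size of a test with rejection region R and null parameter set \Theta_0:
\text{size} \;=\; \sup_{\theta \in \Theta_0} P_\theta\left(X \in R\right)
% For a continuous test statistic T with critical value c and point null \theta_0:
\text{size} \;=\; P_{\theta_0}(T \ge c) \;=\; \alpha,
\qquad
\text{power} \;=\; P_{\theta_1}(T \ge c) \;=\; 1 - \beta .
```

Size and power coincide only in degenerate cases, which is why the removed statement was wrong.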
Once again, the truth table got changed to an invalid state. The problem is obviously that True and False are used here in two different senses (i.e. the name of the actual condition and the validity of the test result); and so some people look at the intersection of the False column and the Negative row and think the cell should contain "False Negative". I've tried to make this blatantly clear, though some will think this solution is belabored. If you can think of a better way to get this idea across, please leap in. Trevor Hanson 05:19, 13 December 2006 (UTC)
There seems to be some inconsistent use of capitalization of the word "type" in this article. It seems there is confusion on the web too. Does anyone know if "type" should be capitalized when appearing in the middle of a sentence? 194.83.138.183 10:40, 8 March 2007 (UTC)
There seems to be no reason to capitalize 'type'. Let's proceed with small letters. Loard ( talk) 15:41, 13 April 2011 (UTC)
This article, currently, is very math and statistics heavy and is not useful as a cross reference from other articles that talk about false negatives and false positives.— C45207 | Talk 20:45, 12 March 2007 (UTC)
From the article:
Type II error, also known as an "error of the second kind", a β error, or a "false negative": the error of accepting a null hypothesis when the alternative hypothesis is the true state of nature.
It's my understanding that in a hypothesis test, you never accept the null. You either reject the null (i.e. accept the alternative) or you refrain from rejecting the null. So the above sentence incorrectly defines "type II error". However, a "false negative" can indeed refer to a situation where an assertive declaration is made, e.g. "You do not have cancer."
So a type II error and a false negative aren't necessarily the same thing.
Despite the Wikipedia guideline to be bold, I'm posting this to the talk page rather than making a correction to the article itself, in part because my knowledge of statistics isn't all that deep and there may well be further subtleties here that I'm missing. (In other words, I'd rather risk a type II error than a type I error. ;-) If you can affirm, qualify, or refute what I've said here, please do. Thanks. Fillard 16:40, 4 May 2007 (UTC)
I've made some minor tweaks to the section on null hypotheses. If anyone wants to check out the background to these changes (it necessitated a bit of head scratching and a visit to a library!), there are comments over at Talk:Null hypothesis#Formulation of null hypotheses and User_talk:Lindsay658#Null_hypotheses. -- Sjb90 11:35, 14 June 2007 (UTC)
Phaedrus273 ( talk) 04:14, 14 February 2014 (UTC): I have a major issue with the definitions of "Null Hypothesis" on this page and, less so, on the null hypothesis page. There are two types of null hypothesis:
Type 1 H0 defined as: the experiment is inconclusive
Type 2 H0 defined as: there is no correlation between the two parameters.
To assert a type 2 null hypothesis deductively implies that the experiment is "complete"; in other words, the methodology was perfect, the measuring instruments were 100% accurate, and every confounding factor was fully accounted for. In many areas of research we cannot say this, particularly in psychology. For example, Schernhammer et al. [1] state, "Findings from this study indicate that job stress is not related to any increase in breast cancer risk". In fact their trial contained many serious flaws and so demonstrated nothing. A claim of rejection of a null hypothesis is predicated on a complete and flawless study, which is very often not the case. [1] Schernhammer, ES, Hankinson, SE, Rosner, B, Kroenke, CH, Willett, WC, Colditz, GA, Kawachi, I. Job Stress and Breast Cancer Risk. Am J Epidemiol, 2004 160(11):1079-1086
Phaedrus273 - Paul Wilson
In one of the elaborations of the presence or absence of errors, the example of pregnancy testing is given.
On the basis that there is a well-established medical/physiological/psychological condition known as "false pregnancy" (see pseudocyesis) I suggest that it would be far better to choose a domain other than pregnancy for the example; because it could well be construed that, rather than testing for the presence/absence of a pregnancy (i.e., the presence of a foetus within the woman's uterus), it was actually testing for the presence/absence of a "false pregnancy" (viz., pseudocyesis with no foetus present). Lindsay658 01:13, 2 August 2007 (UTC)
The section "Understanding Type I and Type II Errors" was created by Varuag doos on 10 December 2006 (19:27 edit). The second paragraph, starting with the lead-in "Associated section," was (at the time I deleted it) substantially the same as in that original contribution. To help the discussion, here is the deleted paragraph in its entirety:
At this point one might have wanted to, say, provide a better segue into this paragraph, getting rid of the awkward "Associated section" lead in. One might also have corrected the fourth sentence (the one beginning with "There is little chance"), which makes it sound as if random samples can "result in" one or another kind of "population" (the term "population" has a technical meaning in statistics and is not synonymous with "statistical sample"). However, my real problem is that I simply don't understand three out of five sentences in this paragraph (I do understand sentences #1 and #3). There are some "differences between two populations" that random samples are supposed to be "averaging out." What are then the differences that persist "post treatment"? (While we're at it, what is this "treatment," enclosed in scare quotes?) I find the whole second sentence (the one beginning with "The argument is") very confusing. And finally, what is the point of this paragraph? Is it simply to warn that Type-I errors are always possible? Or is it that Type-II errors are always possible? In either case, it's too trivial a point to deserve a paragraph. If the intention is to say something else, then the whole paragraph should definitely be re-written by someone who understands what this something else is supposed to be. Reuqr 05:25, 14 October 2007 (UTC)
In quality control applications, these are also known as producer's risk (the risk of rejecting a good item) and consumer's risk (the risk of accepting a bad item). —Preceding unsigned comment added by 70.168.79.54 ( talk) 13:48, 29 July 2010 (UTC)
I have heard many statisticians complain that "type I" and "type II" error are bad names for statistical terminology, because they are easily confused, and the names themselves carry no explanatory meaning. Has anyone encountered a published source making a statement to this effect? Are there any alternative terms that have been proposed in the literature? This would make an important topic to include in the article, if this sort of thing has gone on. There may be a parallel here with what has happened in other areas of mathematics, such as where Baire category terminology is being replaced by more descriptive terms, e.g. "of the first category" being replaced by "meagre". Cazort ( talk) 18:57, 7 December 2007 (UTC)
I struggled with remembering which way round these are when I was doing courses in psychology, and discovered a mnemonic to help in this which I've added to the page. I hope they meet others' approval.
Meltingpot ( talk) 21:03, 6 October 2009 (UTC)
I thought that the following should appear here (originally at [2]) for the ease of others. Lindsay658 ( talk) 21:21, 22 February 2008 (UTC)
Hi there,
Over on the null hypothesis talk page, I've been canvassing for opinions on a change that I plan to make regarding the formulation of a null hypothesis. However I've just noticed your excellent edits on Type I and type II errors. In particular, in the null hypothesis section you say:
The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression Ho -- associated with an increasing tendency to incorrectly read the expression's subscript as a zero, rather than an "O" (for "original") -- has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis". That is, they incorrectly understand it to mean "there is no phenomenon", and that the results in question have arisen through chance.
Now I know the trouble with stats in empirical science is that everyone is always feeling their way to some extent -- it's an inexact science that tries to bring sharp definition to the real world! But I'm really intrigued to know what you're basing this statement on -- I'm one of those people who has always understood the null hypothesis to be a statement of null effect. I've just dug out my old undergrad notes on this, and that's certainly what I was taught at Cambridge; and it's also what my stats reference (Statistical Methods for Psychology, by David C. Howell) seems to suggest. In addition, whenever I've been an examiner for public exams, the markscheme has tended to state the definition of a null as being a statement of null effect.
I'm a cognitive psychologist rather than a statistician, so I'm entirely prepared to accept that this may be a common misconception, but was wondering whether you could point me towards some decent reference sources that try to clear this up, if so! —The preceding unsigned comment was added by Sjb90 ( talk • contribs) 11:07, 16 May 2007 (UTC).
[4], [5], and [6]. I really didn't have a lot to work with.
Are the 2 examples under the Computer Security section backwards? Mingramh ( talk) 14:15, 2 April 2008 (UTC)
I suggest removing this example table. I'm not clear on what legal system actually uses a test for "innocence" where the presumption (null hypothesis) is "not innocent". It could be argued that civil law uses this; however, there it's not really consistent with a hypothesis test at all, since the basis is more a "preponderance of the evidence". It could also be argued that the de facto situation in a kangaroo court uses a "not innocent" null hypothesis, but in that case I don't believe it's consistent with a hypothesis test either, since the "truth" is not what they're after. I have never heard of someone being found "not innocent" by even a kangaroo court. Garykempen ( talk) 19:41, 22 March 2009 (UTC)
I'm going to be studying this article, and in the process will clean up a few minor things. The references to a "person" getting pregnant are silly, for example. Check the wiki page on pregnancy: only women get pregnant. Etc.. Brad ( talk) 01:50, 25 August 2008 (UTC)
I've changed the two "In other words" lines that try to put the false negative and false positive hypotheses in layman's terms, in order to clarify them a bit. In their previous form they said "In other words, a false positive indicates a Positive Inference is False." This is both poor grammar and very confusing. A false positive doesn't "indicate" anything; a false positive is a statistical mistake - a misreading of the auguries, if you will. I've also taken out the Winnie-the-Pooh capitalization and inserted "actually" to make it clear that the false positive (or negative) stands in contradiction to reality. Ivan Denisovitch ( talk) 13:41, 29 November 2010 (UTC)
Sensitivity and Specificity are essentially measures of type-I and type-II error - I think the articles could be merged. Wjastle ( talk) 14:43, 10 October 2008 (UTC)
Both are jargon, and are essentially the same thing, but one takes a positive view (how good the test is) and the other negative (how bad the test is). -- Rumping ( talk) 17:39, 8 November 2008 (UTC)
Is home testing for AIDS accurate? —Preceding unsigned comment added by 70.119.131.29 ( talk) 23:15, 30 July 2009 (UTC)
As far as I remember, the null hypothesis in database searches is that documents are NOT relevant unless proven otherwise. This is thus exactly the opposite of what is said in the article. I tend to remove the whole section, but I think I will wait for some time to see whether citations for one or the other interpretation can be given. 194.94.96.194 ( talk) 08:43, 9 December 2009 (UTC)
Consider yourself in a market, wanting to buy an apple. Seeing an apple at the fruit shop, you have to decide whether the apple is healthy or unhealthy. There will be four cases:
In cases 1 and 4 you made no mistake; your decision was correct. But in cases 2 and 3 there is an error. The error in case 2 is more harmful, it affects you more, because you have purchased an unhealthy apple. So this is a type II error. The error of case 3 is not so crucial, not so harmful, because you have only passed up the apple. So this is a type I error.
Loard ( talk) 15:49, 13 April 2011 (UTC)
I have removed the following:
This scenario provided by User 174.21.120.17 is an excellent example of (so to speak) "perfect choice" consequent upon a "flawed" selection process, and has nothing to do with type I and type II errors.
By contrast, a type I or type II error would be one that is consequent upon a flawed selection choice that was made from a (so to speak) "perfect" selection process. Lindsay658 ( talk) 22:32, 27 March 2011 (UTC)
There are two articles discussing related, but non-synonymous terms: this one and Sensitivity and specificity. Consensus (look: Merge proposal) seems to be that they should stay separate. This article takes care of the technical side of things (formulas, alternative definitions), while Sensitivity and specificity discusses issues arising in the application of statistical methods to fields such as medicine, physics and "airport security check science". I have edited the summary accordingly, also adding a (slightly clumsy) "about" template redirecting "applied" readers to Sensitivity and specificity. The rest of the article should follow.
I would like to discuss following improvements:
Loard ( talk) 17:47, 13 April 2011 (UTC)
While there are portions of great substance in this article, overall it leaves this reader -- who has seen many presentations of this subject over the years, and three further today -- with the impression of flailing about with the subject, or perhaps experimenting with it. It is like a teacher converting their lecture notes into a textbook after teaching through a course for the first time, rather than after teaching the subject for many years.
So, though rarely would I say this, I feel less solidly informed on this subject (honestly, more confused) after perusing the wiki article than I did when first coming to the site (after having looked at 2-4 minute YouTube videos to see the state of that pedagogic art). Here, one is left with the sense of having gulped large mouthfuls and swallowed after only brief chewing -- which gets the job done, but is not great in terms of nutrition (or avoiding GI discomfort). I ascribe the impression largely to the polyglot authorship.
So, I'd suggest that the wiki group that's created this page:
(1) Agree to have one significant and authoritative contributor take on the task of a major edit;
(2) That the individual do that major edit by first distilling the article down to the best available single pedagogic approach to presenting the T-1/T-2 definitions and fundamentals, **including standard graphics/tables the best teachers use** (!);
(3) Expanding to add a second standard but alternative (textbook-worthy) approach to the explanation, but one clearly tied to the first approach so that readers get two different attempts ("angles") at explaining the fundamental concepts;
(4) That next provided might be a **limited**, clear, authoritative set of examples, with focus on standard examples appearing in at least one major source text, which can then be cited so that deeper discussion and understanding can be pursued;
(5) That all other extraneous subject matter be limited or eliminated, e.g., in taking the reader from standard Type I/II thinking to the current state of the statistical art (what most statisticians believe and apply, right now), rather than listing encyclopedically every additional proposed error type, even the humorous -- one short insight-filled paragraph (please!);
(6) That historical aspects be reduced to one such brief paragraph, rather than serving as an alternative parallel development of the fundamentals (every chemist mentions its origin with Kekulé when teaching about benzene's aromaticity; no one spends more than a line or two on it, because our understanding of the concept is all the deeper for the time passed, and because his target audience was not like current audiences in any way); and
(7) That referencing and notes be reviewed for focus, relevance, and scholarly value; e.g., the citation of the high school AP stats prep website should probably go.
These are my suggestions as a teacher, writer, and knowledgeable outsider on this; they are prompted by my inability to recommend the article, as is, to young people needing to understand these errors.
Note: The only editorial change I made was to remove, as tangential, the reference to types of error in science, because that section begged the question of how those types relate to this article's Type I and Type II errors, and the connecting explanatory text was completely missing. (!)
Prof D. Meduban ( talk) 00:45, 9 June 2011 (UTC)
The type I and type II definitions are totally wrong according to several books and other websites — Preceding unsigned comment added by K2k1984 ( talk • contribs) 10:05, 15 July 2011 (UTC)
In the section "Type II error", under the section "Statistical test theory", rejecting the null hypothesis is equated with proving it false. This is inaccurate: a significance test can reject a null hypothesis at some level, but rejection is a probabilistic decision, not a proof that the hypothesis is false.
I rewrote the introduction, firstly because the terms "false positive" and "false negative" come from other testing areas and are not specifically used in statistical test situations, and secondly because the example that was given could hardly be understood as an example of a statistical test. Nijdam ( talk) 09:15, 13 October 2011 (UTC)
I've noticed that "false positive" and "false negative" direct to this article. This means either this article should clearly treat all the terms and the different contexts they refer to, or a separate article should treat "false positive" and "false negative". Nijdam ( talk) 20:28, 14 November 2011 (UTC)
In the section "Understanding Type I and Type II errors," the last sentence includes "3.4 parts per million (0.00002941 %)," in which the parenthetical percentage seems to be in error. If you calculate 3.4/1000000, the answer is 3.4E-6, or 0.00034%. Gilmore.the.Lion ( talk) 15:06, 21 October 2011 (UTC)
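For what it's worth, the arithmetic in Gilmore.the.Lion's comment can be checked directly (an illustrative sketch only, not article content):

```python
# 3.4 parts per million, converted to a percentage.
fraction = 3.4 / 1_000_000   # 3.4 ppm as a plain fraction
percent = fraction * 100     # as a percentage: ~0.00034%, not 0.00002941%
```

So the parenthetical figure in the article is indeed off by more than a factor of ten.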
I got to this article via the false positive redirect, hoping for some basic explanation of what a false positive is and what the significance of them is. Instead I get something about a "Type I error" and an "error of the first kind" that just makes my brain hurt trying to understand what it is saying - and I've taken an undergraduate-level course in philosophical logic in the past and have been told on several occasions I have above average comprehension. Wikipedia is a general encyclopaedia not an encyclopaedia for professional statisticians. Thryduulf ( talk) 13:11, 9 December 2011 (UTC)
As the original contributor, I have written a new introduction and have moved the previous introduction down the page. I think that, as I have written it, most of the "complaints" about the technical complexity of the article will stop -- given that we can now say that the other stuff is connected with the imprecise, catachrestical extension of the technical terms (by people who are too lazy to create their own) into these other areas in which, it seems, it is not desirable to speak in direct plain English, and one must be as distant, obscure, and jargon-laden as possible. Lindsay658 ( talk) 00:48, 21 December 2011 (UTC)
There has been editing over and over about the example of the wolf. The null hypothesis, "there is no wolf", is rejected when someone cries "wolf".
So far so good. But what about the terms (excessive) "skepticism" and "credulity"? Normally a test is skeptical, i.e. one only believes there is a wolf if one sees one. Because of this skepticism a type I error is not made easily. As a consequence a type II error is at hand, and may only be avoided by opting for credulity, i.e. choosing a reliable investigator, who only cries "wolf" when he is pretty sure there is one. If we want to use the words credulity and skepticism in connection with the types of error, we may say: as a consequence of "credulity" an error of type I is sometimes made, and as a consequence of "skepticism" an error of type II is sometimes made. Nijdam ( talk) 07:58, 12 April 2012 (UTC)
It is never good practice to make many edits at the same time, as it is rather difficult to see which ones are acceptable and which are not. So make suggestions here on the talk page before changing the article. Nijdam ( talk) 07:04, 5 September 2012 (UTC)
I wouldn't know how to reject one edit and accept a later one. Nijdam ( talk) 07:56, 6 September 2012 (UTC)
A few years ago, I came up with a mnemonic device to help people remember what Type I and Type II errors are. Just remember "apparition" and "blind".
Apparition starts with an "A", as does "alpha", which is another term for Type I error. An apparition is a ghost, i.e. you're seeing something (a defect or a difference) that isn't there.
Blind starts with a "B", as does beta, which is another term for Type II error. Blind means you're not seeing something (a defect or a difference) that is there. — Preceding unsigned comment added by WaltGary ( talk • contribs) 08:38, 6 October 2012 (UTC)
Anyway: I think there is a real issue with the "Memory formulas" in the Table of error types. It is totally confusing if you look at the table and read off the intersection of (H0) valid/true and (H0) rejected, which is the Type I error, but the formula says: "Type I = false result but accept it". This seems to be talking about the H1 hypothesis in each case. To clarify, it should definitely be prefixed, as in "Type I = H1 false, but accepted" and "Type II = H1 true, but rejected", or changed completely. What do others think? best -- Daimpi ( talk) 14:32, 7 November 2015 (UTC)
I've just seen this on Stack Overflow, and thought it was a good way to remember which error was which;
For those experiencing difficulty correctly identifying the two error types, the following mnemonic is based on the fact that (a) an "error" is false, and (b) the Initial letters of "Positive" and "Negative" are written with a different number of vertical lines:
A Type I error is a false POSITIVE; and P has a single vertical line. A Type II error is a false NEGATIVE; and N has two vertical lines.
With this, you need to remember that a false positive means rejecting a true null hypothesis and a false negative is failing to reject a false null hypothesis.
https://stats.stackexchange.com/questions/1610/is-there-a-way-to-remember-the-definitions-of-type-i-and-type-ii-errors/1620#1620 Meltingpot ( talk) 09:42, 30 September 2018 (UTC)
I personally do not agree with the section called "Consequences", where the article discusses NASA. You wrote: "For example, NASA engineers would prefer to throw out an electronic circuit that is really fine (null hypothesis H0: not broken; reality: not broken; action: thrown out; error: type I, false positive) than to use one on a spacecraft that is actually broken (null hypothesis H0: not broken; reality: broken; action: use it; error: type II, false negative). In that situation a type I error raises the budget, but a type II error would risk the entire mission."
I thought the definitions are the following: a Type I error is rejecting the null hypothesis when it is really true; a Type II error is accepting the null hypothesis when it is really not true. Null hypothesis: the hypothesis the researcher is testing is not true; there is no statistical significance. Research hypothesis: there is statistical significance.
Basically the section about NASA could have been interpreted with two scenarios, for instance:
Scenario 1 (alternative scenario):
Electronic circuit is good: research hypothesis.
Electronic circuit is not good: null hypothesis.
Type I error: concluding the circuit is good when it is really not good (rejecting the null hypothesis).
Type II error: concluding the circuit is not good when it is really good (accepting the null hypothesis).
Scenario 2 (your scenario):
Electronic circuit is broken: research hypothesis.
Electronic circuit is not broken: null hypothesis.
Type I error: concluding the circuit is broken when it is really not broken (rejecting the null hypothesis).
Type II error: concluding the circuit is not broken when it is really broken (accepting the null hypothesis).
I was totally confused by this section of the article. Please inform me if I am wrong; maybe I didn't understand what you meant by this section. — Preceding unsigned comment added by 67.80.92.202 ( talk) 22:26, 3 April 2013 (UTC)
The moral of The Boy Who Cried Wolf is that if you abuse people's trust, they won't trust you when it matters. It is about deliberate deceit, not about being mistaken (i.e. an error!). It's not intuitive and it's a terrible metaphor; it really doesn't help make things clearer. Why not use an example that is actually about making an error?
Lucaswilkins ( talk) 17:05, 25 October 2013 (UTC)
Hello fellow Wikipedians,
I have just added archive links to 2 external links on Type I and type II errors. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— cyberbot II Talk to my owner:Online 14:23, 18 February 2016 (UTC)
Hello fellow Wikipedians,
I have just added archive links to 2 external links on Type I and type II errors. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— cyberbot II Talk to my owner:Online 03:10, 1 March 2016 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Type I and type II errors. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 13:18, 30 December 2016 (UTC)
In the section on "Statistical significance", it currently reads: "the higher the significance level, the less likely it is that the phenomena in question could have been produced by chance alone."
I think that should be: the lower the significance level ...
Lasse Kliemann ( talk) 07:47, 9 July 2017 (UTC)
The link to de.wikipedia is wrong (it should be " https://de.wikipedia.org/wiki/Fehler_1._und_2._Art") and the link to simple.wikipedia is missing (even though Wikidata has the correct information) — Preceding unsigned comment added by 194.209.78.123 ( talk) 12:26, 22 October 2018 (UTC)
I've been doing some minor edits on the diagram caption, namely defining the acronyms. Now I realize that the diagram is really 3 diagrams - the "upper diagram" (with the overlapping blue TN bell curve and red TP bell curve), the "lower diagram" (with the y-axis being labelled P(TP), TPR, and the x-axis being labelled P(FP), FPR), and the 2x2 table in the upper right. The caption only describes the top diagram, and there is no explanation whatsoever for the bottom diagram, nor for the 2x2 table in the upper right. I think the diagram should be split up into 3 separate diagrams (upper diagram, lower diagram, and 2x2 table), each being described by a well-written caption. This would be in the spirit of answering the criticism that "This article may be too technical for most readers to understand."
Also, x, X, Y, and Ŷ are not defined in the caption. — Preceding unsigned comment added by UpDater ( talk • contribs) 21:25, 24 October 2021 (UTC)
For ease of learning and symmetry of terminology, there has to also exist typification of truths.
Type I truth being the true positive ( Power of a test).
Type II truth being the true negative.
Then "Type I" and "Type II" would have more semantic utility: "Type I" would be about "positives", and "Type II" about "negatives". — Preceding unsigned comment added by 78.56.218.15 ( talk) 00:24, 9 June 2022 (UTC)
Wouldn't it be better to create an article about "false positives" and let it point to this article for further details? 79.199.225.197 ( talk) 15:40, 4 May 2009 (UTC)
This article was proposed for deletion on the basis of qualms about its name. I agree that it should be moved, and suggest Type I and Type II errors, but I see no reason to delete. Septentrionalis 21:32, 2 July 2006 (UTC)
Heeding the advice and guidance of Arthur Rubin and Septentrionalis this article is being re-titled and re-organized in a way that far better represents the overall situation. Lindsay658 22:45, 3 July 2006 (UTC)
It seems like the following proposed rearrangement of The Table of Error Types would be more consistent with the TP/FP/FN/TN graphic in the Error Rate section and be more consistent with the location of TP/FP/FN/TN and Type I and II errors in the Confusion_matrix page:
| Table of error types | Null hypothesis (H0) is false | Null hypothesis (H0) is true |
|---|---|---|
| Decision about H0: reject | Correct inference (true positive) (probability = 1−β) | Type I error (false positive) (probability = α) |
| Decision about H0: don't reject | Type II error (false negative) (probability = β) | Correct inference (true negative) (probability = 1−α) |
I believe this is consistent with the spirit of Xavier maxime's comment below except that this recognizes that the above table is written from the perspective of H0 rather than H1 as it probably should be. Alternately, the whole table could be represented in the context of both H0 and H1:
| Table of error types | Alternate hypothesis (H1) is true | Alternate hypothesis (H1) is false |
|---|---|---|
| Decision about H0: reject H0 (H1 accepted) | Correct inference (true positive) (probability = 1−β) | Type I error (false positive) (probability = α) |
| Decision about H0: don't reject H0 | Type II error (false negative) (probability = β) | Correct inference (true negative) (probability = 1−α) |
This makes the order of "condition positive" and "condition negative" columns more consistent with all the Wikipedia pages that contain the confusion matrix information. I believe this version of the table makes the concepts more easily mentally connected with both hypothesis testing and machine learning contexts. This version of the table also puts the table in the context of BOTH H0 and H1 which is really the only context for which Type I and II errors are meaningful. Sdavern ( talk) 20:45, 16 March 2020 (UTC)
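For illustration only, the decision/truth mapping in the tables above can be sketched as a small lookup (labels follow the H0-perspective table; the dictionary and function name are my own, not from the article):

```python
# Outcome of a hypothesis test as a function of (decision, true state of H0).
# H1 true is equivalent to H0 false, so this covers both table layouts.
OUTCOMES = {
    ("reject H0",       "H0 false"): "correct inference (true positive), probability 1 - beta",
    ("reject H0",       "H0 true"):  "Type I error (false positive), probability alpha",
    ("don't reject H0", "H0 false"): "Type II error (false negative), probability beta",
    ("don't reject H0", "H0 true"):  "correct inference (true negative), probability 1 - alpha",
}

def outcome(decision, truth):
    """Look up the outcome label for a decision/truth pair."""
    return OUTCOMES[(decision, truth)]
```

Reading the four cells off a lookup like this makes it easy to check that any rearranged table still assigns Type I to "reject a true H0" and Type II to "fail to reject a false H0".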
The Table of Error Types is inconsistent with the text and with other wikipedia pages. In particular the True Positive and True Negative appear to be reversed. This page:
/info/en/?search=Sensitivity_and_specificity states that the true positive is the power of the test (i.e. 1 − beta), which is not what is in the table. Could you please check and/or clarify what is meant here? - Date: 28 March 2018 — Preceding unsigned comment added by Xavier maxime ( talk • contribs) 09:18, 28 March 2018 (UTC)
The term true negative is requested at Wikipedia:Requested_articles/Mathematics. I think it would be most appropriate to link the two terms true negative and true positive to this article. They are, however, not explained here, so they need to be added. I think it would be a good thing to create a confusion matrix at the beginning of this article, and explain each term with a reference to the matrix. I'm going to do the linkup when I finish writing this. If no opposition is uttered before Tuesday, I will make the proposed changes. Bfg 15:18, 16 July 2006 (UTC)
Lindsay, I admit to not reading the entire article, just skimming it. I've now reread the article in light of your comments. I'll try to cover your points:
After reading the article, I also propose that the article be reorganized in the following way:
I copied the article improvement suggestions here to separate them from the very lengthy discussion about footnotes. Bfg 12:14, 7 August 2006 (UTC)
I am currently contemplating how this could best be done, an example should be useful both from a hypothesis testing perspective and a Bayesian classification perspective. I am considering making up an example where we look at the hypothesis testing as a two class Bayesian classification problem. Does that sound reasonable? Bfg 13:07, 7 August 2006 (UTC)
The following is a draft proposal for a case study, meant to come before Various proposals for further extension Bfg 13:36, 18 August 2006 (UTC)
In medicine one often discusses the possibility of large-scale medical screening (see below).
Given a hypothetical, idealized screening situation, the following is true:
From these data, we can form a confusion matrix
| | Correct well (H0) | Correct ill (HA) |
|---|---|---|
| Classified well | 989,010 | 10 |
| Classified ill | 9,990 | 990 |

Legend (position in the matrix):
- True negative: classified well, correct well (989,010)
- False negative: classified well, correct ill (10)
- False positive: classified ill, correct well (9,990)
- True positive: classified ill, correct ill (990)
From this we may read the following:
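(For illustration, the standard rates implied by the matrix above can be computed directly; this sketch is mine, not part of the draft, and uses the counts TN = 989,010; FN = 10; FP = 9,990; TP = 990:)

```python
# Counts taken from the confusion matrix above.
tn, fn, fp, tp = 989_010, 10, 9_990, 990

sensitivity = tp / (tp + fn)                   # P(classified ill | ill)   = 0.99
specificity = tn / (tn + fp)                   # P(classified well | well) = 0.99
ppv = tp / (tp + fp)                           # P(ill | classified ill)   ~ 0.09
prevalence = (tp + fn) / (tn + fn + fp + tp)   # 0.001 (1 in 1,000 are ill)

# Note the base-rate effect: despite 99% sensitivity and specificity,
# roughly 91% of positive results are false positives, because the
# condition is rare.
```

This is exactly the kind of reading the case study is meant to support: the false positives swamp the true positives when the prevalence is low.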
Under frequentist theory one would always refer to one hypothesis as the null hypothesis and the other as an alternative hypothesis. Under Bayesian theory, however, there is no such preference among the alternative hypotheses; the notions of false positive and false negative become connected to the hypothesis one is currently discussing. Referring to the confusion matrix above, the false negatives become the sum of the off-diagonal column elements belonging to the respective hypothesis, while the false positives become the sum of the off-diagonal row elements. The true positives become the on-diagonal element, while true negative makes little sense.
In my opinion, the article as currently written systematically reverses the usual meanings of false negative and false positive.
In my experience, the usual notion of false positive is in situations such as this: One is testing to see if a patient has a disease, or if a person has taken illegal drugs. The null hypothesis is that no disease exists, or no drugs were taken. A test is administered, and a result is obtained. Suppose the test indicates that the disease condition exists or that the illegal drugs were taken. Then, if the patient is disease-free, or the drugs were not taken, this is a "false positive" in the usual meaning of the term. If for example under the null hypothesis, 5% of the subjects nonetheless test positive, then it is usually said that "the type I error rate is 0.05." That is, 5% of the subjects have falsely tested positive.
This observation is contrary to how the article is written.
Similarly, the notion of false discovery rate is connected with the rate of falsely "discovering" effects that are solely due to chance under the null hypothesis.
The article attempts to account for this by mentioning ambiguity of testing for innocence versus testing for guilt; while there is some rationale for this (especially under a Bayesian interpretation, where all hypotheses are treated the same), it is not usually the situation in frequentist theory, where one hypothesis is usually distinguished as the obvious null hypothesis. Usually this is the "default" hypothesis that one is trying to reject, e.g., that the accused is innocent, that the patient is disease-free, that the athlete is not using illegal drugs.
Therefore, I seriously propose that the article be edited to reflect properly the usual uses of these terms. As it stands, any Freshman statistics student would be immediately confused when comparing this article to what his statistics textbook states. Bill Jefferys 02:03, 11 August 2006 (UTC)
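(An aside, illustrative only: the "5% of subjects falsely test positive" point above is easy to demonstrate by simulation. The setup below, a two-sided z-test with known variance on data generated under the null, is my own sketch, not from the article.)

```python
import math
import random

random.seed(1)

z_crit = 1.96          # two-sided test at alpha = 0.05
n, n_trials = 30, 4000
rejections = 0

for _ in range(n_trials):
    # The null hypothesis is true here: data are N(0, 1), so every
    # rejection is a false positive (Type I error).
    sample_mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    z = sample_mean * math.sqrt(n)   # known sigma = 1
    if abs(z) > z_crit:
        rejections += 1

type_i_rate = rejections / n_trials  # should come out near 0.05
```

Under the null, about 5% of the simulated studies "test positive", matching the nominal Type I error rate, which is precisely the usual meaning of false positive that the comment above defends.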
Reading the article more carefully, I find that it is inconsistent even within itself about "false negative" and "false positive". Whereas the lead-in (incorrectly) identifies a false negative as "rejecting a null hypothesis that should have been accepted", later on in the article, in the "medical screening" section, rejecting a null hypothesis that should have been accepted (i.e., deciding that the individual being screened has the condition when he does not -- the null hypothesis being that the individual does not have the condition), is correctly identified as a false positive.
This is going to take a lot of care to edit to make the whole article consistent with both usual terminology and to make it self-consistent.
I note in passing that under frequentist theory, one never "accepts" the null hypothesis. Rather, one fails to reject the null hypothesis. When the article is edited to remove these problems, this should also be fixed. Bill Jefferys 23:44, 11 August 2006 (UTC)
HI, thanks for the feedback. I've been a bit busy of late, but if I can find the time I'll go at it. Some of the later stuff is fine, but the introduction and some of the earlier stuff needs to be fixed. Bill Jefferys 13:59, 14 August 2006 (UTC)
Hi, I've made the changes I am aware of that should be made. There was also some confusion about sensitivity and power which I've also fixed. However, as this was done in one grand edit, I may have missed some things or incorrectly changed others. Part of the confusion is that the article is written in such a way that it's not always clear what one should consider the null hypothesis. Thus, when discussing computer identity authentication, I've presumed that a positive test result corresponds with identifying the subject logging in as authentic, whereas a negative result corresponds with identifying the subject as an imposter. Please check my work. Bill Jefferys 21:47, 14 August 2006 (UTC)
For the paragraph:
Given that you seem to be saying that:
I am certain that it could easily be re-written in the form:
Also, it seems that "the error not rejecting a null hypothesis" should either be:
Best Lindsay658 03:05, 18 August 2006 (UTC)
I hear what you're saying and indeed the double negative bothers me as well. The problem is that according to standard (frequentist) hypothesis testing theory, one never "accepts" the null hypothesis; one only "fails to reject" it. I haven't been able to think of a good way around this that is both technically correct and easy to understand. Any help here would be appreciated.
In a Bayesian or decision-theoretic context, it wouldn't be an issue, but as long as we're considering standard hypothesis testing, it is. Bill Jefferys 11:36, 18 August 2006 (UTC)
An anonymous editor [63.167.255.231] noted on the main page that:
I have moved the comment here because comments don't belong on the main page. The editor here has some justice behind his comments. Indeed, this is a problem of decision theory since not only must the Type I/II error rate be taken into account, but also the loss function that applies, since the loss on detaining a harmless passenger is much less than the loss on allowing a terrorist passenger on board.
I think that the right way to approach this in the main article is to point out (and link to) the connection to decision theory, perhaps using this example as a springboard. But since this article is explaining only Type I/II errors, it isn't the place to give a full explication of decision theory. Perhaps this can be added to the list of "to-dos". Bill Jefferys 23:55, 21 August 2006 (UTC)
It isn't quite true that one normally tries to make the Type I and Type II error rates equal. Usually one tries to balance the two against each other, i.e., choosing a test that will have an appropriate Type I error rate, and then picking other factors (e.g., the number of cases tested) that will guarantee a desired Type II error rate. I'm not aware that anyone slavishly decides that a study will be designed so that Type I and Type II error rates will be equal.
That said, it is still the case that in decision problems, the Type I/II error rates are part, but not all of the problem. For one thing, Type I/II error rates assume that the null/alternative hypothesis is correct, and take no account of the probability that the null/alternative hypothesis is actually the state of nature. This is something that is only reflected in a Bayesian context, through the prior. Secondly, as I point out above, the cost of making a particular decision, given the probability that the state of nature is what it is (e.g., terrorist, innocuous traveller) has to be taken into account. None of these has anything to do with the definition of what is a Type I/II error. They are important considerations that have to be considered when one is making a decision, but they don't reflect directly on the definition of the terms being described by this article.
Thus, I think that it's appropriate to mention this issue briefly with links to the appropriate articles on decision theory and loss function, for example, but it is also appropriate to use the security checking example as one that describes what Type I/II errors are, which is of course the point of the article. Bill Jefferys 01:47, 22 August 2006 (UTC)
I removed the statement that size is equal to power. This is wrong; in fact, size is the maximum probability of a Type I error, that is, the smallest level of significance of the test. In many cases the size is equal to α (for example, a test statistic having a continuous distribution and a point null hypothesis), but in general it is the supremum, over the null hypothesis, of the probability that the test statistic exceeds c, where c is the critical value. Btyner 22:11, 3 November 2006 (UTC)
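Btyner's point shows up concretely with a discrete test statistic, where the attainable size falls strictly below the nominal level; the sign-test example below is a hypothetical illustration of my own:

```python
from math import comb

# H0: p = 0.5 with n = 10 Bernoulli trials (e.g., a sign test).
# Reject when X <= 1 or X >= 9. Because X is discrete, the size of the
# test (the probability of rejecting under H0) cannot equal 0.05 exactly.
n = 10
p0 = 0.5
reject_region = [0, 1, 9, 10]

# Under H0 every outcome k has probability comb(n, k) * 0.5**n.
size = sum(comb(n, k) * p0**n for k in reject_region)  # 22/1024 ~ 0.0215
```

Here the nominal level 0.05 is only an upper bound on the probability of a Type I error; the actual size of the test is about 0.0215.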
Once again, the truth table got changed to an invalid state. The problem is obviously that True and False are used here in two different senses (i.e. the name of the actual condition and the validity of the test result); and so some people look at the intersection of the False column and the Negative row and think the cell should contain "False Negative". I've tried to make this blatantly clear, though some will think this solution is belabored. If you can think of a better way to get this idea across, please leap in. Trevor Hanson 05:19, 13 December 2006 (UTC)
There seems to be some inconsistent use of capitalization of the word "type" in this article. It seems there is confusion on the web too. Does anyone know if "type" should be capitalized when appearing in the middle of a sentence? 194.83.138.183 10:40, 8 March 2007 (UTC)
There seems to be no reason to capitalize 'type'. Let's proceed with small letters. Loard ( talk) 15:41, 13 April 2011 (UTC)
This article, currently, is very math and statistics heavy and is not useful as a cross reference from other articles that talk about false negatives and false positives.— C45207 | Talk 20:45, 12 March 2007 (UTC)
From the article:
Type II error, also known as an "error of the second kind", a β error, or a "false negative": the error of accepting a null hypothesis when the alternative hypothesis is the true state of nature.
It's my understanding that in a hypothesis test, you never accept the null. You either reject the null (i.e. accept the alternative) or you refrain from rejecting the null. So the above sentence incorrectly defines "type II error". However, a "false negative" can indeed refer to a situation where an assertive declaration is made, e.g. "You do not have cancer."
So a type II error and a false negative aren't necessarily the same thing.
Despite the Wikipedia guideline to be bold, I'm posting this to the talk page rather than making a correction to the article itself, in part because my knowledge of statistics isn't all that deep and there may well be further subtleties here that I'm missing. (In other words, I'd rather risk a type II error than a type I error. ;-) If you can affirm, qualify, or refute what I've said here, please do. Thanks. Fillard 16:40, 4 May 2007 (UTC)
I've made some minor tweaks to the section on null hypotheses. If anyone wants to check out the background to these changes (it necessitated a bit of head scratching and a visit to a library!), there are comments over at Talk:Null hypothesis#Formulation of null hypotheses and User_talk:Lindsay658#Null_hypotheses. -- Sjb90 11:35, 14 June 2007 (UTC)
Phaedrus273 ( talk) 04:14, 14 February 2014 (UTC) I have a major issue with the definitions of "null hypothesis" on this page and, to a lesser extent, on the null hypothesis page. There are two types of null hypothesis:
Type 1 H0 defined as: the experiment is inconclusive
Type 2 H0 defined as: there is no correlation between the two parameters.
To assert a type 2 null hypothesis deductively implies that the experiment is "complete"; in other words, the methodology was perfect, the measuring instruments were 100% accurate, and every confounding factor was fully accounted for. In many areas of research we cannot say this, particularly in psychology. For example, Schernhammer et al [1] state, "Findings from this study indicate that job stress is not related to any increase in breast cancer risk". In fact their trial contained many serious flaws and so demonstrated nothing. A claim of rejecting a null hypothesis is predicated on a complete and flawless study, which is very often not the case. [1] Schernhammer, ES, Hankinson, SE, Rosner, B, Kroenke, CH, Willett, WC, Colditz, GA, Kawachi, I. Job Stress and Breast Cancer Risk. Am J Epidemiol, 2004 160(11):1079-1086
Phaedrus273 - Paul Wilson
In one of the elaborations of the presence or absence of errors, the example of pregnancy testing is given.
On the basis that there is a well-established medical/physiological/psychological condition known as "false pregnancy" (see pseudocyesis) I suggest that it would be far better to choose a domain other than pregnancy for the example; because it could well be construed that, rather than testing for the presence/absence of a pregnancy (i.e., the presence of a foetus within the woman's uterus), it was actually testing for the presence/absence of a "false pregnancy" (viz., pseudocyesis with no foetus present). Lindsay658 01:13, 2 August 2007 (UTC)
The section "Understanding Type I and Type II Errors" was created by Varuag doos on 10 December 2006 (19:27 edit). The second paragraph, starting with the lead-in "Associated section," was (at the time I deleted it) substantially the same as in that original contribution. To help the discussion, here is the deleted paragraph in its entirety:
At this point one might have wanted to, say, provide a better segue into this paragraph, getting rid of the awkward "Associated section" lead in. One might also have corrected the fourth sentence (the one beginning with "There is little chance"), which makes it sound as if random samples can "result in" one or another kind of "population" (the term "population" has a technical meaning in statistics and is not synonymous with "statistical sample"). However, my real problem is that I simply don't understand three out of five sentences in this paragraph (I do understand sentences #1 and #3). There are some "differences between two populations" that random samples are supposed to be "averaging out." What are then the differences that persist "post treatment"? (While we're at it, what is this "treatment," enclosed in scare quotes?) I find the whole second sentence (the one beginning with "The argument is") very confusing. And finally, what is the point of this paragraph? Is it simply to warn that Type-I errors are always possible? Or is it that Type-II errors are always possible? In either case, it's too trivial a point to deserve a paragraph. If the intention is to say something else, then the whole paragraph should definitely be re-written by someone who understands what this something else is supposed to be. Reuqr 05:25, 14 October 2007 (UTC)
In quality control applications, these are also known as producer's risk (the risk of rejecting a good item) and consumer's risk (the risk of accepting a bad item). —Preceding unsigned comment added by 70.168.79.54 ( talk) 13:48, 29 July 2010 (UTC)
I have heard many statisticians complain about how "type I" and "type II" error are bad names for statistical terminology, because they are easily confused, and the names themselves carry no explanatory meaning. Has anyone encountered a published source making a statement to this effect? Are there any alternative terms that have been proposed in the literature? This would make an important topic to include in the article, if this sort of thing has gone on. It seems there may be a parallel here between this and what has happened in other areas of mathematics, such as where Baire category terminology is being replaced by more descriptive terms, e.g. "of the first category" being replaced by meagre. Cazort ( talk) 18:57, 7 December 2007 (UTC)
I struggled with remembering which way round these are when I was doing courses in psychology, and discovered a mnemonic to help with this, which I've added to the page. I hope it meets others' approval.
Meltingpot ( talk) 21:03, 6 October 2009 (UTC)
I thought that the following should appear here (originally at [2] for the ease of others. Lindsay658 ( talk) 21:21, 22 February 2008 (UTC)
Hi there,
Over on the null hypothesis talk page, I've been canvassing for opinions on a change that I plan to make regarding the formulation of a null hypothesis. However I've just noticed your excellent edits on Type I and type II errors. In particular, in the null hypothesis section you say:
The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression Ho -- associated with an increasing tendency to incorrectly read the expression's subscript as a zero, rather than an "O" (for "original") -- has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis". That is, they incorrectly understand it to mean "there is no phenomenon", and that the results in question have arisen through chance.
Now I know the trouble with stats in empirical science is that everyone is always feeling their way to some extent -- it's an inexact science that tries to bring sharp definition to the real world! But I'm really intrigued to know what you're basing this statement on -- I'm one of those people who has always understood the null hypothesis to be a statement of null effect. I've just dug out my old undergrad notes on this, and that's certainly what I was taught at Cambridge; and it's also what my stats reference (Statistical Methods for Psychology, by David C. Howell) seems to suggest. In addition, whenever I've been an examiner for public exams, the markscheme has tended to state the definition of a null as being a statement of null effect.
I'm a cognitive psychologist rather than a statistician, so I'm entirely prepared to accept that this may be a common misconception, but was wondering whether you could point me towards some decent reference sources that try to clear this up, if so! —The preceding unsigned comment was added by Sjb90 ( talk • contribs) 11:07, 16 May 2007 (UTC).
[4], [5], and [6]. I really didn't have a lot to work with.
Are the 2 examples under the Computer Security section backwards? Mingramh ( talk) 14:15, 2 April 2008 (UTC)
I suggest removing this example table. I'm not clear on what legal system abjectly uses a test for "innocence" where the presumption (null hypothesis) is on "not innocent". It could be argued that civil law uses this; however, there it's not really consistent with a hypothesis test at all since the basis is more a "preponderance of doubt". It could also be argued that the de facto situation in a kangaroo court uses a "not innocent" null hypothesis, but in that case I don't believe it's consistent with a hypothesis test either since the "truth" is not what they're after. I never heard of someone being found "not innocent" by even a kangaroo court. Garykempen ( talk) 19:41, 22 March 2009 (UTC)
I'm going to be studying this article, and in the process will clean up a few minor things. The references to a "person" getting pregnant are silly, for example. Check the wiki page on pregnancy: only women get pregnant. Etc.. Brad ( talk) 01:50, 25 August 2008 (UTC)
I've changed the two "In other words" lines that try to put the false negative and false positive hypotheses in layman's terms, in order to clarify them a bit. In their previous form they said "In other words, a false positive indicates a Positive Inference is False." This is both poor grammar and very confusing. A false positive doesn't "indicate" anything; a false positive is a statistical mistake - a misreading of the auguries, if you will. I've also taken out the Winnie-the-Pooh capitalization and inserted "actually" to make it clear that the false positive (or negative) stands in contradiction to reality. Ivan Denisovitch ( talk) 13:41, 29 November 2010 (UTC)
Sensitivity and Specificity are essentially measures of type-I and type-II error - I think the articles could be merged. Wjastle ( talk) 14:43, 10 October 2008 (UTC)
Both are jargon, and are essentially the same thing, but one takes a positive view (how good the test is) and the other negative (how bad the test is). -- Rumping ( talk) 17:39, 8 November 2008 (UTC)
Is home testing for AIDS accurate? —Preceding unsigned comment added by 70.119.131.29 ( talk) 23:15, 30 July 2009 (UTC)
As far as I remember, the null hypothesis in database searches is that documents are NOT relevant unless proven otherwise. This is thus exactly the opposite of what is said in the article. I tend to remove the whole section, but I think I will wait for some time to see whether citations for one or the other interpretation can be given. 194.94.96.194 ( talk) 08:43, 9 December 2009 (UTC)
Consider yourself in a market wanting to buy an apple. Seeing an apple at a fruit shop, you have to decide: is the apple healthy or unhealthy? There will be four cases:
In cases 1 and 4 you made no mistake; your decision was correct. But in cases 2 and 3 there is an error. The error in case 2 is more harmful; it affects you more, because you have purchased an unhealthy apple. So this is a type II error. The error of case 3 is not so crucial, not so harmful, because you have merely passed over the apple. So this is a type I error.
Loard ( talk) 15:49, 13 April 2011 (UTC)
I have removed the following:
This scenario provided by User 174.21.120.17 is an excellent example of (so to speak) "perfect choice" consequent upon a "flawed" selection process, and has nothing to do with type I and type II errors.
By contrast, a type I or type II error would be one that is consequent upon a flawed selection choice that was made from a (so to speak) "perfect" selection process. Lindsay658 ( talk) 22:32, 27 March 2011 (UTC)
There are two articles discussing related, but non-synonymous terms: this one and Sensitivity and specificity. Consensus (look: Merge proposal) seems to be that they should stay separate. This article takes care of the technical side of things (formulas, alternative definitions), while Sensitivity and specificity discusses issues arising in the application of statistical methods to fields such as medicine, physics and "airport security check science". I have edited the summary accordingly, also adding a (slightly clumsy) "about" template redirecting "applied" readers to Sensitivity and specificity. The rest of the article should follow.
I would like to discuss following improvements:
Loard ( talk) 17:47, 13 April 2011 (UTC)
While there are portions of great substance in this article, overall it leaves this reader -- who has seen many presentations of this subject over the years, and three further today -- with the impression of a flailing about with the subject, or perhaps experimenting with it. It is like a teacher converting their lecture notes into a textbook after the first time teaching through a course, rather than after teaching the subject for many years.
So, though rarely would I say this, I feel less solidly informed on this subject (honestly, feeling more confused) after perusing the wiki article than I did when first coming to the site (after having looked at 2-4 min YouTube videos to see state of that pedagogic art). Here, one is left with the sense of the results of gulping large mouthfuls and swallowing with only brief chewing -- which gets the job done, but is not great in terms of nutrition (or avoiding GI discomfort). I ascribe the impression as likely being due to the polyglot authorship.
So, I'd suggest that the wiki group that's created this page:
(1) Agree to have one significant and authoritative contributor take on the task of a major edit;
(2) That the individual do that major edit by first distilling the article down to the best available single pedagogic approach to presenting the T-1/T-2 definitions and fundamentals, **including standard graphics/tables the best teachers use** (!);
(3) Expanding to add a second standard but alternative (textbook-worthy) approach to the explanation, but one clearly tied to the first approach so that readers get two different attempts ("angles") at explaining the fundamental concepts;
(4) That next be provided a **limited**, clear, authoritative set of examples, focusing on standard examples appearing in at least one major source text, which can then be cited so that deeper discussion and understanding can be pursued;
(5) That all other extraneous subject matter be limited or eliminated, e.g., in taking the reader from standard Type I/II thinking to the current state of the statistical art (what most statisticians believe and apply, right now), rather than listing encyclopedically every additional proposed error type, even the humorous -- one short insight-filled paragraph (please!);
(6) That historical aspects be reduced to one such brief paragraph, rather than serving as an alternative parallel development of the fundamentals (every chemist mentions its origin with Kekule when teaching about benzene's aromaticity; no one spends more than a line or two on it, because our understanding of the concept is all the deeper for the time passed, and because the target audience of his writing is not like current audiences in any way); and
(7) That referencing and notes be reviewed for focus, relevance, and scholarly value; e.g., the citation of the high school AP stats prep website should probably go.
These are my suggestions as a teacher and writer and as a knowledgeable outsider on this, and they are prompted by my inability to recommend the article, as is, to young people needing to understand these errors.
Note: The only thing I did editorially was to remove as tangential the reference to types of error in science, because that section begged the question of the relation of those types to this article's type I and type II errors, where the connection (explanatory text) was completely missing. (!)
Prof D. Meduban ( talk) 00:45, 9 June 2011 (UTC)
The type I and type II definitions are totally wrong according to many books and other websites — Preceding unsigned comment added by K2k1984 ( talk • contribs) 10:05, 15 July 2011 (UTC)
In the section "Type II error", under the section "Statistical test theory", rejecting the null hypothesis is equated with proving it false. This is inaccurate: rejecting H0 only means the data are unlikely under H0 at the chosen significance level; it does not prove H0 false.
I rewrote the introduction, firstly because the terms "false positive" and "false negative" come from other test areas and are not specifically used in statistical test situations, and secondly because the example that was given could hardly be understood as an example of a statistical test. Nijdam ( talk) 09:15, 13 October 2011 (UTC)
I've noticed that "false positive" and "false negative" redirect to this article. This means either this article should clearly treat all the terms and the different contexts they refer to, or a separate article should treat "false positive" and "false negative". Nijdam ( talk) 20:28, 14 November 2011 (UTC)
In the section "Understanding Type I and Type II errors," the last sentence includes "3.4 parts per million (0.00002941 %)," in which the parenthetical percentage seems to be in error. If you calculate 3.4/1000000, the answer is 3.4E-6, or 0.00034%. Gilmore.the.Lion ( talk) 15:06, 21 October 2011 (UTC)
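Gilmore.the.Lion's correction is easy to verify with one line of arithmetic:

```python
ppm = 3.4
fraction = ppm / 1_000_000  # "parts per million" as a plain fraction
percent = fraction * 100    # a fraction times 100 is a percentage

print(f"{ppm} ppm = {fraction:.1e} = {percent:.5f}%")
# 3.4 ppm = 3.4e-06 = 0.00034%
```

So the parenthetical percentage should indeed read 0.00034%, not 0.00002941%.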
I got to this article via the false positive redirect, hoping for some basic explanation of what a false positive is and what the significance of them is. Instead I get something about a "Type I error" and an "error of the first kind" that just makes my brain hurt trying to understand what it is saying - and I've taken an undergraduate-level course in philosophical logic in the past and have been told on several occasions I have above average comprehension. Wikipedia is a general encyclopaedia not an encyclopaedia for professional statisticians. Thryduulf ( talk) 13:11, 9 December 2011 (UTC)
As the original contributor, I have written a new introduction, and have moved the previous introduction down the page. I think that, as I have written it, most of the "complaints" about the technical complexity of the article will stop -- given that we can now say that the other stuff is connected with the imprecise catachrestical extension of the technical terms (by people that are too lazy to create their own) into these other areas in which, it seems, it is not desirable to speak in direct plain English, and that one must be as distant, obscure, and as jargon-laden as possible. Lindsay658 ( talk) 00:48, 21 December 2011 (UTC)
There has been editing over and over about the example of the wolf.
It is rejected when someone cries "wolf".
So far so good. But what about the terms (excessive) "skepticism" and "credulity"? Normally a test is skeptical, i.e. one only believes there is a wolf if one sees one. Because of this skepticism, a type I error is not made easily. As a consequence, a type II error is at hand, and may only be avoided by opting for credulity, i.e. choosing a reliable investigator who only cries "wolf" when he is pretty sure there is one. If we want to use the words credulity and skepticism in connection with the types of error, we may say: as a consequence of "credulity" an error of type I is sometimes made, and as a consequence of "skepticism" an error of type II is sometimes made. Nijdam ( talk) 07:58, 12 April 2012 (UTC)
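The skepticism/credulity trade-off described here is the standard alpha/beta trade-off: moving a single decision threshold lowers one error rate while raising the other. A minimal sketch under purely illustrative assumed distributions (evidence score ~ N(0,1) with no wolf, ~ N(2,1) with a wolf):

```python
import random

random.seed(1)

# Illustrative model: the watcher sees a noisy "evidence score" and
# cries "wolf" whenever it exceeds a threshold. A higher threshold
# means more skepticism; a lower threshold means more credulity.
no_wolf = [random.gauss(0.0, 1.0) for _ in range(100_000)]
wolf    = [random.gauss(2.0, 1.0) for _ in range(100_000)]

for threshold in (0.5, 1.0, 1.5):
    alpha = sum(x > threshold for x in no_wolf) / len(no_wolf)  # type I rate
    beta  = sum(x <= threshold for x in wolf) / len(wolf)       # type II rate
    print(f"threshold {threshold}: alpha = {alpha:.3f}, beta = {beta:.3f}")
```

Raising the threshold (more skepticism) shrinks alpha but grows beta, which matches the description above.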
It is never good practice to make many edits at the same time, as it is rather difficult to see which ones are acceptable and which are not. So, make suggestions here on the talk page before changing the article. Nijdam ( talk) 07:04, 5 September 2012 (UTC)
I wouldn't know how to reject one edit and accept a later one. Nijdam ( talk) 07:56, 6 September 2012 (UTC)
A few years ago, I came up with a mnemonic device to help people remember what Type I and Type II errors are. Just remember "apparition" and "blind".
Apparition starts with an "A", as does "alpha", which is another term for a type I error. An apparition is a ghost, i.e. you're seeing something (a defect or a difference) that isn't there.
Blind starts with a "B", as does "beta", which is another term for a type II error. Blind means you're not seeing something (a defect or a difference) that is there. — Preceding unsigned comment added by WaltGary ( talk • contribs) 08:38, 6 October 2012 (UTC)
Anyway: I think there is a real issue with the <<Memory formulas>> in the Table of error types. It seems totally confusing if you look at the table and read off the intersections (H0) valid/true and (H0) rejected, which is the type I error, but the formula says: "Type-1 = False result but accept it". This seems to be talking about the H1 hypothesis in each case. To clarify, "H1" should definitely be added in front, as in "Type-1 = H1 false, but accept it" and "Type-2 = H1 true but rejected it", or the formulas should be changed completely. What do others think? best -- Daimpi ( talk) 14:32, 7 November 2015 (UTC)
I've just seen this on Stack Overflow, and thought it was a good way to remember which error was which;
For those experiencing difficulty correctly identifying the two error types, the following mnemonic is based on the fact that (a) an "error" is false, and (b) the Initial letters of "Positive" and "Negative" are written with a different number of vertical lines:
A Type I error is a false POSITIVE; and P has a single vertical line. A Type II error is a false NEGATIVE; and N has two vertical lines.
With this, you need to remember that a false positive means rejecting a true null hypothesis and a false negative is failing to reject a false null hypothesis.
https://stats.stackexchange.com/questions/1610/is-there-a-way-to-remember-the-definitions-of-type-i-and-type-ii-errors/1620#1620 Meltingpot ( talk) 09:42, 30 September 2018 (UTC)
I personally do not agree about the section called : Consequences. Where the article discusses NASA. You guys wrote: "For example, NASA engineers would prefer to throw out an electronic circuit that is really fine (null hypothesis H0: not broken; reality: not broken; action: thrown out; error: type I, false positive) than to use one on a spacecraft that is actually broken (null hypothesis H0: not broken; reality: broken; action: use it; error: type II, false negative). In that situation a type I error raises the budget, but a type II error would risk the entire mission."
I thought the definitions are the following. Type I error: rejecting the null hypothesis when it is really true. Type II error: accepting the null hypothesis when it is really not true. Null hypothesis: the hypothesis the researcher is testing is not true; there is no statistical significance. Researcher hypothesis: there is statistical significance.
Basically the section about NASA could have been interpreted with two scenarios, for instance:
Scenario 1: alternative scenario
Electronic circuit is good: researcher hypothesis.
Electronic circuit is not good: null hypothesis.
Type I error: electronic circuit is good, when it's really not good. Rejecting the null hypothesis.
Type II error: electronic circuit is not good, when it's really good. Accepting the null hypothesis.
Scenario 2: your scenario
Electronic circuit is broken: researcher hypothesis.
Electronic circuit is not broken: null hypothesis.
Type I error: electronic circuit is broken, when it's really not broken. Rejecting the null hypothesis.
Type II error: electronic circuit is not broken, when it's really broken. Accepting the null hypothesis.
I was totally confused by this section of the article. Please inform me if I am wrong, or maybe I didn't understand what you meant by this section. — Preceding unsigned comment added by 67.80.92.202 ( talk) 22:26, 3 April 2013 (UTC)
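Both scenarios in the comment above are internally consistent; the error labels simply follow from whichever statement is chosen as H0. A small sketch making that mechanical (the function and its labels are mine, for illustration only):

```python
def error_type(h0_true: bool, reject_h0: bool) -> str:
    """Classify a decision against reality, relative to a chosen H0."""
    if reject_h0 and h0_true:
        return "type I (false positive)"
    if not reject_h0 and not h0_true:
        return "type II (false negative)"
    return "correct decision"

# Article's scenario, with H0 = "circuit is not broken":
# reality: not broken (H0 true); action: throw it out (reject H0).
print(error_type(h0_true=True, reject_h0=True))    # type I (false positive)
# reality: broken (H0 false); action: use it (fail to reject H0).
print(error_type(h0_true=False, reject_h0=False))  # type II (false negative)
```

Swap the null (H0 = "circuit is broken") and the same physical mistakes get the opposite labels, which is the likely source of the confusion.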
The moral of The Boy Who Cried Wolf is that if you abuse people's trust they won't trust you when it matters. It is about deliberate deceit, not being mistaken (i.e. an error!). It's not intuitive and it's a terrible metaphor. It really doesn't make things clearer. Why not use an example which is actually about making an error?
Lucaswilkins ( talk) 17:05, 25 October 2013 (UTC)
Hello fellow Wikipedians,
I have just added archive links to 2 external links on Type I and type II errors. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— cyberbot II Talk to my owner:Online 14:23, 18 February 2016 (UTC)
Hello fellow Wikipedians,
I have just added archive links to 2 external links on Type I and type II errors. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
Cheers.— cyberbot II Talk to my owner:Online 03:10, 1 March 2016 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Type I and type II errors. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
Cheers.— InternetArchiveBot ( Report bug) 13:18, 30 December 2016 (UTC)
In the section on "Statistical significance", it currently reads: the higher the significance level, the less likely it is that the phenomena in question could have been produced by chance alone.
I think that should be: the lower the significance level ...
Lasse Kliemann ( talk) 07:47, 9 July 2017 (UTC)
The link to de.wikipedia is wrong (it should be "https://de.wikipedia.org/wiki/Fehler_1._und_2._Art") and the link to simple.wikipedia is missing (even though Wikidata has the correct information) — Preceding unsigned comment added by 194.209.78.123 ( talk) 12:26, 22 October 2018 (UTC)
I've been doing some minor edits on the diagram caption, namely defining the acronyms. Now I realize that the diagram is really 3 diagrams - the "upper diagram" (with the overlapping blue TN bell curve and red TP bell curve), the "lower diagram" (with the y-axis being labelled P(TP), TPR, and the x-axis being labelled P(FP), FPR), and the 2x2 table in the upper right. The caption only describes the top diagram, and there is no explanation whatsoever for the bottom diagram, nor for the 2x2 table in the upper right. I think the diagram should be split up into 3 separate diagrams (upper diagram, lower diagram, and 2x2 table), each being described by a well-written caption. This would be in the spirit of answering the criticism that "This article may be too technical for most readers to understand."
Also, x, X, Y, and Ŷ are not defined in the caption. — Preceding unsigned comment added by UpDater ( talk • contribs) 21:25, 24 October 2021 (UTC)
For ease of learning and symmetry of terminology, there should also exist a typification of truths.
Type I truth being the true positive ( Power of a test).
Type II truth being the true negative.
Then "Type I" and "Type II" would have more semantic utility, in that "Type I" would be about "positives" and "Type II" about "negatives". — Preceding unsigned comment added by 78.56.218.15 ( talk) 00:24, 9 June 2022 (UTC)
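The symmetry asked for here already exists implicitly in the standard confusion matrix, where the two correct outcomes (TP, TN) sit alongside the two errors (FP, FN). A minimal sketch; the "type I/II truth" labels in the comments follow the proposal above and are not standard terminology:

```python
from collections import Counter

def confusion(actual, predicted):
    """Count TP/TN/FP/FN for paired boolean labels."""
    cells = Counter()
    for a, p in zip(actual, predicted):
        if a and p:
            cells["TP"] += 1   # "type I truth" in the proposal above
        elif not a and not p:
            cells["TN"] += 1   # "type II truth" in the proposal above
        elif not a and p:
            cells["FP"] += 1   # type I error
        else:
            cells["FN"] += 1   # type II error
    return cells

actual    = [True, True, False, False, True, False]
predicted = [True, False, False, True, True, False]
print(dict(confusion(actual, predicted)))
```

TP is the true positive (the power-related cell) and TN the true negative, so the four cells already give the proposed typification without any new terms.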