From Wikipedia, the free encyclopedia
  • Promote case study. Medical screening is one of the oldest uses of statistics.
  • Adding a more rigorous mathematical foundation to the formula sections. Rice - Mathematical Statistics and Data Analysis sets up such a framework. See pp 16, 300.
  • Rework the introduction, again, to be less confusing, especially eliminating the possible misinterpretation of double negatives. (I find some of the examples useful, like the tradeoff in airport security between the risks of false negatives and false positives. What's missing in the examples is tying them back to which outcome is a Type I error and which is a Type II error. While I follow the example easily, I don't yet completely trust my mapping of the example onto the error types, which I am left to do on my own, in part because the text in the initial definition of the types is confusing, appearing to rely on double negatives for the definition. But I'm not sure.) AugustMohr ( talk) 23:41, 10 August 2019 (UTC)AugustMohr reply


I think there is a fundamental error in stating that the probability of a Type I error equals alpha. I refer you to the discussion in the article about the calculation of the average speed of a car. At alpha = 0.05 the critical speed is stated to be 121.6 km/h, so a driver travelling exactly at the 120 km/h limit still has a 5% chance of being measured above 121.6 km/h and fined. The fundamental point missing here is that many cars will be well in excess of 121.6 km/h, and nearly 100% of these will be rightly fined, with very few falsely fined. Based on this, the rate of false positives among fined drivers should not be equal to alpha: rather, it should be equal to or less than alpha. Hardly any text that I have come across addresses this problem. Rather oddly, as the precision of the measurement improves, the proportion of false positives does not seem to change, but the total number of false positives increases. This is because many drivers who would not have been fined with a less precise measurement are now fined, since the critical speed has been brought closer to 120 km/h; this enlarges the group of drivers at risk of a false positive. So improving the experiment leads to a greater number of people being falsely accused of driving above the speed limit, while at the same time reducing the number of drivers who get away without a fine.
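The distinction above (alpha is the fining probability for a driver exactly at the limit, not the share of innocents among all fined drivers) can be checked with a quick simulation. This is only a sketch under assumed values not given in the article: normal measurement error with standard deviation about 0.973 km/h, chosen so the one-sided 5% critical value lands at roughly 121.6 km/h, and a hypothetical uniform distribution of true speeds.

```python
import numpy as np

# Assumed values (not from the article): limit 120 km/h, normal
# measurement error with sd 0.973 km/h, so the one-sided 5% critical
# value is 120 + 1.645 * 0.973 ~= 121.6 km/h.
rng = np.random.default_rng(0)
LIMIT, SD, ALPHA, N = 120.0, 0.973, 0.05, 1_000_000
critical = LIMIT + 1.645 * SD  # ~= 121.6 km/h

# Case 1: every driver travels exactly at the limit (null hypothesis
# true at the boundary). The fraction fined is the Type I error rate.
measured = LIMIT + rng.normal(0.0, SD, N)
frac_fined_at_limit = np.mean(measured > critical)  # close to ALPHA

# Case 2: a hypothetical mixed population, many well above the limit.
true_speed = rng.uniform(115.0, 135.0, N)
measured = true_speed + rng.normal(0.0, SD, N)
fined = measured > critical
# Share of fined drivers whose true speed was at or under the limit:
# far below ALPHA, since most fined drivers genuinely sped.
false_positive_share = np.mean(true_speed[fined] <= LIMIT)

print(frac_fined_at_limit, false_positive_share)
```

In the mixed population the share of wrongly fined drivers comes out well under 5%, illustrating the "equal to or less than alpha" point: alpha is a worst case attained only when every tested driver sits exactly on the boundary.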

