for real x > 0. Here $\mathrm{B}$ is the beta function. In many applications, the parameters $d_1$ and $d_2$ are positive integers, but the distribution is well-defined for positive real values of these parameters.
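As a sanity check on this density, here is a minimal standard-library Python sketch (the helper name `f_pdf` is ours, not a standard API) that assembles the pdf from the beta function, written via gamma functions:

```python
import math

def f_pdf(x, d1, d2):
    """Density of the F-distribution, assembled from the beta function.

    f(x; d1, d2) = sqrt((d1*x)**d1 * d2**d2 / (d1*x + d2)**(d1 + d2))
                   / (x * B(d1/2, d2/2))        for real x > 0.
    Naive powers like (d1*x)**d1 can overflow for large degrees of
    freedom; this sketch targets small d1, d2 only.
    """
    if x <= 0:
        return 0.0
    # B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)
    beta = math.gamma(d1 / 2) * math.gamma(d2 / 2) / math.gamma((d1 + d2) / 2)
    return math.sqrt((d1 * x) ** d1 * d2 ** d2 / (d1 * x + d2) ** (d1 + d2)) / (x * beta)
```

Midpoint-rule integration of `f_pdf` over a fine grid should give total mass close to 1, which is an easy way to check the reconstruction.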
In instances where the F-distribution is used, for example in the analysis of variance, independence of the two chi-squared variates in the numerator and denominator might be demonstrated by applying Cochran's theorem.
Equivalently, the random variable of the F-distribution may also be written
$$X = \frac{s_1^2/\sigma_1^2}{s_2^2/\sigma_2^2},$$
where $s_1^2 = \frac{S_1^2}{d_1}$ and $s_2^2 = \frac{S_2^2}{d_2}$, $S_1^2$ is the sum of squares of $d_1$ random variables from normal distribution $N(0, \sigma_1^2)$ and $S_2^2$ is the sum of squares of $d_2$ random variables from normal distribution $N(0, \sigma_2^2)$.
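This construction can be checked by simulation. Below is a hedged standard-library sketch (the helper `f_draw` and the particular variance values are illustrative assumptions) that builds the ratio from sums of squared normal draws; the variances cancel, so the empirical mean should approach the known F-distribution mean d2/(d2-2):

```python
import random

random.seed(0)

def f_draw(d1, d2, sigma1=2.0, sigma2=0.5):
    """One draw of (s1^2/sigma1^2) / (s2^2/sigma2^2).

    S1^2 is the sum of squares of d1 draws from N(0, sigma1^2) and
    s1^2 = S1^2/d1 (likewise for S2^2, s2^2); the sigmas cancel, so
    the result follows F(d1, d2) regardless of the variances used.
    """
    S1_sq = sum(random.gauss(0, sigma1) ** 2 for _ in range(d1))
    S2_sq = sum(random.gauss(0, sigma2) ** 2 for _ in range(d2))
    return (S1_sq / d1 / sigma1 ** 2) / (S2_sq / d2 / sigma2 ** 2)

d1, d2 = 6, 10
draws = [f_draw(d1, d2) for _ in range(100_000)]
mean = sum(draws) / len(draws)   # mean of F(d1, d2) is d2/(d2-2) for d2 > 2
```

The choice of unequal variances in the sketch is deliberate: it demonstrates that only the degrees of freedom, not the underlying normal variances, determine the distribution of the ratio.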
In a frequentist context, a scaled F-distribution therefore gives the probability $p(s_1^2/s_2^2 \mid \sigma_1^2, \sigma_2^2)$, with the F-distribution itself, without any scaling, applying where $\sigma_2^2$ is being taken equal to $\sigma_1^2$. This is the context in which the F-distribution most generally appears in F-tests: where the null hypothesis is that two independent normal variances are equal, and the observed sums of some appropriately selected squares are then examined to see whether their ratio is significantly incompatible with this null hypothesis.
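A minimal sketch of such an F-test, assuming normal samples and substituting a crude numerical integration of the density for a library cdf (the names `variance_ratio_test`, `f_cdf`, and `f_pdf` are ours, not a standard API):

```python
import math
import random

def f_pdf(x, d1, d2):
    """F-distribution density via the beta function (small d1, d2 only)."""
    if x <= 0:
        return 0.0
    beta = math.gamma(d1 / 2) * math.gamma(d2 / 2) / math.gamma((d1 + d2) / 2)
    return math.sqrt((d1 * x) ** d1 * d2 ** d2 / (d1 * x + d2) ** (d1 + d2)) / (x * beta)

def f_cdf(x, d1, d2, steps=20_000):
    """Crude midpoint-rule integration of the density over (0, x)."""
    h = x / steps
    return sum(f_pdf((i + 0.5) * h, d1, d2) for i in range(steps)) * h

def variance_ratio_test(sample1, sample2):
    """Two-sided F-test of H0: both normal samples share one variance."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    s1_sq = sum((v - m1) ** 2 for v in sample1) / (n1 - 1)  # sample variances
    s2_sq = sum((v - m2) ** 2 for v in sample2) / (n2 - 1)
    F = s1_sq / s2_sq                  # examined against F(n1 - 1, n2 - 1)
    cdf = f_cdf(F, n1 - 1, n2 - 1)
    return F, 2 * min(cdf, 1 - cdf)    # two-sided p-value

# Illustrative data: the second sample has three times the spread,
# so the null hypothesis of equal variances should be rejected.
random.seed(1)
a = [random.gauss(0, 1) for _ in range(30)]
b = [random.gauss(0, 3) for _ in range(30)]
F, p = variance_ratio_test(a, b)
```

In practice a library routine would supply the cdf; the integration here only keeps the sketch self-contained.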
The quantity $X$ has the same distribution in Bayesian statistics, if an uninformative rescaling-invariant Jeffreys prior is taken for the prior probabilities of $\sigma_1^2$ and $\sigma_2^2$.[9] In this context, a scaled F-distribution thus gives the posterior probability $p(\sigma_2^2/\sigma_1^2 \mid s_1^2, s_2^2)$, where the observed sums $s_1^2$ and $s_2^2$ are now taken as known.
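Under that reading, posterior draws of the variance ratio can be sketched by rescaling F variates by the observed ratio of mean squares; in the sketch below the mean squares and degrees of freedom are illustrative assumptions, not values from the text:

```python
import random

random.seed(2)

# Illustrative observed mean squares and degrees of freedom (assumptions).
d1, d2 = 8, 12
s1_sq, s2_sq = 2.5, 1.1

def f_variate(d1, d2):
    """One F(d1, d2) draw built from scaled sums of squared unit normals."""
    c1 = sum(random.gauss(0, 1) ** 2 for _ in range(d1)) / d1
    c2 = sum(random.gauss(0, 1) ** 2 for _ in range(d2)) / d2
    return c1 / c2

# Rearranging X = (s1^2/sigma1^2)/(s2^2/sigma2^2) with s1^2, s2^2 now fixed
# gives sigma2^2/sigma1^2 = X * s2^2/s1^2, i.e. a scaled F-distribution.
ratio_draws = sorted(f_variate(d1, d2) * s2_sq / s1_sq for _ in range(50_000))
lo = ratio_draws[int(0.025 * len(ratio_draws))]   # 95% credible interval
hi = ratio_draws[int(0.975 * len(ratio_draws))]
```

The interval (lo, hi) then summarizes the posterior uncertainty about the variance ratio given the observed mean squares.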
^ Lazo, A.V.; Rathie, P. (1978). "On the entropy of continuous probability distributions". IEEE Transactions on Information Theory. 24 (1). IEEE: 120–122. doi:10.1109/tit.1978.1055832.
^ a b Johnson, Norman Lloyd; Samuel Kotz; N. Balakrishnan (1995). Continuous Univariate Distributions, Volume 2 (Second Edition, Section 27). Wiley. ISBN 0-471-58494-0.
^ Mood, Alexander; Franklin A. Graybill; Duane C. Boes (1974). Introduction to the Theory of Statistics (Third ed.). McGraw-Hill. pp. 246–249. ISBN 0-07-042864-6.