Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 12

Interpreting how significant the findings are when χ²(1) values are given in a study

I'd like help interpreting how significant the findings are when χ²(1) values are given; I read Pearson's_chi-squared_test and couldn't thoroughly grok it. I get the gist, I think: χ²(1) = 32 is strong evidence, χ²(1) = 9.4 is very weak evidence, but I'm not confident in my interpretation. I think I do understand pretty well what a p-value is and how to interpret it (e.g. p < 0.05 is a commonly used and often fairly-criticized-as-arbitrary threshold).

I would appreciate help interpreting how strong the findings are given the χ²(1) values in this study.

I think it would be good if Pearson's_chi-squared_test had more info on interpreting χ² values; that feels particularly warranted given the great detail it has on how to calculate χ².

Thanks! (If I'm not mistaken the paper makes no attempt to adjust/correct for confounding factors, though it does try to identify and quantify some potential factors.) -- Elvey( tc) 07:00, 12 October 2016 (UTC) reply

Isn't the chi-squared just a correlation test under the assumption that observations are independent? If the assumption is correct, then all you care about is the p-value with its usual caveats. I assume from your post that you have some familiarity with Bayesian inference; just keep in mind that there also is publication bias. ( This is probably better than any article on the subject.) Tigraan Click here to contact me 08:45, 12 October 2016 (UTC) reply
In the context of the paper you cited, the χ²(1) is a statistical measure of correlation in a 2×2 contingency table. The larger the χ²(1), the more the entries in the table deviate from statistical independence. The p-value in this case is the probability that, by chance, two factors that were independent managed to produce a table with a given value of χ²(1) or larger. So qualitatively, the larger the χ²(1), the smaller the p-value. The paper itself gives both the χ²(1) value and the corresponding p-value (or a limit for the p-value) for each of the hypotheses tested, and there isn't much interpretation of χ²(1) beyond the p-value it implies. Since there are at least 20 or so tests in the paper, a threshold of significance of p < 0.05 implies that on average one of those tests is going to reach that threshold of significance just by chance. To guard against false positives like this, researchers may apply a multiple-testing correction to the p-value threshold. -- Mark viking ( talk) 10:25, 12 October 2016 (UTC) reply
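To make the χ²(1)-to-p-value relationship concrete, here is a minimal sketch in Python (assuming SciPy is available); the statistics 32 and 9.4 are the example values from the question above, not numbers taken from the paper.

# Minimal sketch: converting a chi-squared statistic with 1 degree of
# freedom into the p-value it implies. The statistics used here (32 and
# 9.4) are the example values from the question, not from the paper.
from scipy.stats import chi2

for stat in (32.0, 9.4):
    # Survival function: P(X >= stat) for a chi-squared variable with df = 1.
    p = chi2.sf(stat, df=1)
    print(f"chi2(1) = {stat:>5}  ->  p = {p:.3g}")

# With roughly 20 tests at a p < 0.05 threshold, about 20 * 0.05 = 1 test
# is expected to cross the threshold by chance alone, as noted above.

Running this gives p on the order of 10⁻⁸ for χ²(1) = 32 and p ≈ 0.002 for χ²(1) = 9.4, so even the "weaker" of the two example values clears the usual 0.05 threshold comfortably.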
Aye, but things like Bonferroni correction have plenty of critics and detractors. One simple issue is that I may publish further analysis and statistical tests on the same data that Crane et al. used. Shall they then update each of their assertions each time future work is published? Is it reasonable that my publishing a new test in the future should turn some of their positive results negative?
I don't really want or need to get into a whole new discussion of the multiple comparisons problem, but since you brought it up I thought the OP might like to read some on this topic as well. SemanticMantis ( talk) 13:59, 12 October 2016 (UTC) reply
I wholeheartedly agree that multiple comparisons, and what exactly a researcher would consider significant, are subtle, research-design-dependent issues. Thanks for expanding a bit on that. Bonferroni is just one way of doing this, and there are others, such as controlling the false discovery rate. Considering how to combine results from other studies is another level of complexity and is the topic of meta-analysis. -- Mark viking ( talk) 17:00, 12 October 2016 (UTC) reply
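As a concrete illustration of the corrections being discussed, here is a short Python sketch applying a Bonferroni threshold and the Benjamini-Hochberg false-discovery-rate procedure; the p-values are made up purely for illustration and are not from the paper.

# Minimal sketch of two multiple-testing corrections mentioned above.
# The p-values below are invented for illustration, not from the paper.
pvals = [0.001, 0.004, 0.012, 0.031, 0.048, 0.09, 0.2, 0.44]
alpha = 0.05
m = len(pvals)

# Bonferroni: compare each p-value against alpha / m.
bonferroni_hits = [p for p in pvals if p <= alpha / m]

# Benjamini-Hochberg: sort the p-values, find the largest k with
# p_(k) <= (k / m) * alpha, and declare the k smallest significant.
ranked = sorted(pvals)
k = max((i + 1 for i, p in enumerate(ranked) if p <= (i + 1) / m * alpha),
        default=0)
bh_hits = ranked[:k]

print("Bonferroni-significant p-values:", bonferroni_hits)
print("Benjamini-Hochberg-significant p-values:", bh_hits)

With these invented numbers Bonferroni keeps only the two smallest p-values while Benjamini-Hochberg keeps three, which is the usual pattern: controlling the false discovery rate is less conservative than controlling the family-wise error rate.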
Thanks, Mark viking, User:Tigraan & SemanticMantis for the answers and pointers to further reading. Knowledge expanded. And xkcd is always a blast!-- Elvey( tc) 07:40, 13 October 2016 (UTC) reply

NP-hardness

Let L be the following problem: the set of all formulae that have as many satisfying assignments as unsatisfying assignments.

Is this problem NP-hard? — Preceding unsigned comment added by 31.154.81.30 ( talk) 17:44, 12 October 2016 (UTC) reply

I can show it's coNP-hard. Recall that SAT (all formulae that have a satisfying assignment) is NP-complete. Given a formula φ over n variables, choose a variable y not appearing in the formula and let ψ = φ ∨ y. If φ has s satisfying assignments, then ψ has 2^n + s satisfying assignments and 2^n − s unsatisfying assignments, so ψ ∈ L if and only if φ is unsatisfiable. So this is a poly-time m-reduction from coNP (from the complement of SAT) to L, giving us a poly-time Turing reduction from NP to L.-- 130.195.253.18 ( talk) 00:50, 14 October 2016 (UTC) reply
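A small brute-force check of the reduction above, sketched in Python (the formula representation and helper names are mine, purely for illustration): ψ = φ ∨ y lands in L exactly when φ is unsatisfiable.

# Brute-force check of the reduction sketched above on tiny formulas.
# Formulas are represented as Python functions of a tuple of booleans;
# the representation and names are illustrative, not from the thread.
from itertools import product

def count_sat(formula, n):
    """Number of satisfying assignments of a formula over n variables."""
    return sum(1 for bits in product((False, True), repeat=n) if formula(bits))

def in_L(formula, n):
    """Membership in L: as many satisfying as unsatisfying assignments."""
    return count_sat(formula, n) == 2 ** (n - 1)

# phi ranges over variables x0, x1; psi adds a fresh variable y = x2 and
# sets psi = phi OR y.
phi_unsat = lambda b: b[0] and not b[0]          # unsatisfiable
phi_sat   = lambda b: b[0] or b[1]               # satisfiable

for name, phi in (("unsatisfiable phi", phi_unsat), ("satisfiable phi", phi_sat)):
    psi = lambda b, phi=phi: phi(b) or b[2]      # psi = phi OR y
    print(name, "-> psi in L:", in_L(psi, 3))
# Expected output: True for the unsatisfiable phi, False for the satisfiable one.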
Thank you! 31.154.81.30 ( talk) 07:19, 14 October 2016 (UTC) reply

And how about this one: the set of all formulae that have an even number of satisfying assignments. Is it NP-hard or coNP-hard? 31.154.81.30 ( talk) 07:19, 14 October 2016 (UTC) reply

This is ⊕P-complete; ⊕P is not known to contain or be contained in NP. 37.248.166.117 ( talk) 12:04, 15 October 2016 (UTC) reply
Thank you! (I've never heard of this class before) 31.154.81.30 ( talk) —Preceding undated comment added 10:31, 16 October 2016 (UTC) reply

Derivative when working with the field of rational functions

Suppose we have the field of rational functions with complex coefficients, ℂ(x), and we have a function f ∈ ℂ(x). Can we define the derivative of f with respect to x? I do not mean the derivation of differential algebra, I mean the derivative as defined by a quotient and taking the limit. As ℂ(x) is a field, a quotient is well-defined, but it's not obvious to me how to take a limit.-- Leon ( talk) 18:44, 12 October 2016 (UTC) reply

If you have a topology on the field, you can take a limit, but in finite fields and probably some other fields, you don't have that. But I'll take a stab at answering this, although I don't know whether this will be in accord with any standard usages and conventions. Suppose ƒ(x) = p(x)/q(x) where p(x) and q(x) are polynomials. Form the difference quotient
(ƒ(x + h) − ƒ(x))/h = (p(x + h)/q(x + h) − p(x)/q(x))/h = (p(x + h)q(x) − p(x)q(x + h))/(h·q(x + h)·q(x)).
The numerator in that last fraction is 0 when h = 0; therefore it is divisible by h, and so h can be canceled from the numerator and the denominator. The value of the resulting quotient when h = 0 could be termed the "limit". Michael Hardy ( talk) 00:24, 13 October 2016 (UTC) reply
For polynomials this would be the same as the formal derivative. For rational functions it would be the same as combining the formal derivative with the quotient rule, which I suppose you could also call the formal derivative, though the article doesn't mention it. -- RDBury ( talk) 07:46, 13 October 2016 (UTC) reply
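As a concrete check of the "cancel h, then set h = 0" recipe above, here is a short sketch using SymPy's exact rational-function arithmetic (assuming Python with SymPy; the particular f is my own example). It forms the difference quotient in ℂ(x, h), cancels the common factor h, sets h = 0, and compares the result with the quotient-rule formal derivative.

# Sketch of the "cancel h, then set h = 0" recipe described above,
# using exact rational-function arithmetic. The example f is illustrative;
# any element of C(x) works the same way.
import sympy as sp

x, h = sp.symbols('x h')
f = (x**2 + 1) / (x - 3)                     # an example element of C(x)

# Difference quotient, treated as a rational function in x and h.
# cancel() puts it over a common denominator and removes the factor h.
dq = sp.cancel((f.subs(x, x + h) - f) / h)
derivative = sp.cancel(dq.subs(h, 0))        # "set h = 0" after cancelling

# Compare with the quotient-rule / formal derivative of f.
print(derivative)
print(sp.simplify(derivative - sp.diff(f, x)) == 0)   # expected: True

Because all the arithmetic here happens in the field ℂ(x, h), no topology or analytic limit is ever invoked; the "limit" is just evaluation at h = 0 after the algebraic cancellation.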