Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
September 20
When will Goldbach's conjecture be tested?
Since the "odds" of a violation of Goldbach's conjecture are 10^−3700 (see page history for deleted post), when will it be tested up to 10^4000 to ensure that no counterexamples are found in the expected range?
Hcobb (talk) 03:36, 20 September 2011 (UTC)
That's not how it works. The figure of 10^−3700 is given for an even number of a specific magnitude, which as I understand it is on the order of a few thousands. For larger numbers the probability is smaller. By the time you get to the even number 10000, the probability is maybe 10^−10000 (a made-up number), and so on. These very low probabilities mean the opposite of what you're implying: we're more confident of Goldbach's conjecture, not less confident and in need of searching harder to be convinced. --
Meni Rosenfeld (talk) 07:41, 20 September 2011 (UTC)
Your first post [1] was removed because it didn't ask a question.
Goldbach's comet says: "the probability of zero pairs for any one E, in the range considered here, is of order 10^−3700." The range in the graphs of the article is 1,160,000 to 1,540,000. My own calculations also indicate that for the middle of this range, around 1,350,000, the odds are of order 10^−3700. Your question apparently assumes that the odds are the same for each even integer, but in fact the odds decrease quickly as the integer grows. My quick estimate says the odds are 10^−15 for 1,000, 10^−75 for 10,000, 10^−440 for 100,000, 10^−2900 for 1,000,000, and 10^−20500 for 10,000,000. They are far smaller for numbers beyond the search limit of 26×10^17 a week ago.
[2] The sum of the odds taken over all numbers above 26×10^17 is extremely small, so we think Goldbach's conjecture is highly likely to be true. However, the calculation of the odds makes assumptions that seem reasonable and fit the existing data very well, but could theoretically turn out to be wrong.
PrimeHunter (talk) 10:33, 20 September 2011 (UTC)
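The flavour of heuristic behind such estimates can be sketched in a few lines. Treating "n − p is prime" as a roughly independent event with probability ~1/ln n for each of the ~(n/2)/ln n prime candidates p gives a Poisson-style estimate of the chance that an even n has no Goldbach pair. This crude version (my own sketch, omitting the Hardy–Littlewood constant and the correction for small primes dividing n) gives smaller magnitudes than the figures quoted above, but shows the same rapid decay:

```python
from math import log

def log10_no_pair_odds(n):
    """Crude heuristic log10 of the probability that the even number n
    has NO Goldbach pair: about (n/2)/ln(n) candidate primes p, each
    with chance ~1/ln(n) that n - p is also prime, treated as Poisson."""
    expected_pairs = (n / 2) / log(n) ** 2
    return -expected_pairs / log(10)  # log10 of exp(-expected_pairs)

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(n, log10_no_pair_odds(n))
```

The point is the trend: the log-odds of a counterexample fall off nearly linearly in n, so the tail sum over all untested even numbers is dominated by the first few terms past the search limit.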
Even 10^10000 is only a ten-thousand-digit number and will fit handily in the CPU cache of any laptop. (It's a little more than 4 KB.) So why not build a quantum prime solver of this modest size? (Assuming quantum computers actually work, of course.)
Hcobb (talk) 10:55, 20 September 2011 (UTC)
We don't have useful quantum computers, and I don't know whether there is an algorithm which would aid Goldbach's conjecture significantly. If we could instantly determine whether a number is prime, the project that reached 26×10^17 would still have to examine each even number and would only become a little faster. By the way, existing computers and algorithms can handle individual numbers the size of 10^10000 = 47717 + (10^10000 − 47717). I just found the probable prime 10^10000 − 47717 in 15 minutes on a PC. It is almost certainly prime, but it would take much longer to prove primality.
PrimeHunter (talk) 11:44, 20 September 2011 (UTC)
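A search of that shape can be sketched with a standard Miller–Rabin probable-prime test (not necessarily the tool PrimeHunter used), here at a toy size of 100 digits rather than 10,000 so it runs in moments:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probable-prime test. A composite passes all
    rounds with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

# Toy analogue of 10**10000 - 47717: find the smallest odd offset k
# such that 10**100 - k is a probable prime.
k = 1
while not is_probable_prime(10**100 - k):
    k += 2
print(k)
```

Proving primality (rather than testing probable primality) of a 10,000-digit number is the genuinely expensive part, as noted above; Miller–Rabin only gives overwhelming statistical confidence.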
Integrating a ring of charge in a 1/r field
Re #Distribution of charge in 1,2,3 dimensions on symmetrical bodies (second part) - I got stuck (no surprises there) - I was trying to sum (integrate) the combined effect of a ring (circle of radius r) of "charge" generating a 1/r field at distance r, at an offset f from the centre of the ring.
I got the integral between 0 and π of (f − r cos θ)·r / (f² + r² − 2fr cos θ), which I boiled down to the integral of
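For what it's worth, that integral can be checked numerically. A simple midpoint-rule sketch (my own, not from the thread) suggests it comes out to πr/f for f > r and exactly 0 for f < r, a 2D analogue of the shell theorem: the ring looks like a point charge from outside and exerts no net force inside.

```python
from math import pi, cos

def ring_field(f, r, n=100_000):
    """Midpoint-rule estimate of
    integral_0^pi  (f - r cos t) * r / (f^2 + r^2 - 2 f r cos t)  dt
    for a ring of radius r and a test point offset f from the centre."""
    h = pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += (f - r * cos(t)) * r / (f * f + r * r - 2 * f * r * cos(t))
    return total * h
```

Standard integrals of the form ∫ dθ/(a − b cos θ) = π/√(a² − b²) (with a = f² + r², b = 2fr, so √(a² − b²) = |f² − r²|) give the closed form directly, matching the numerical check.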
Laplacian correction for discriminative naive Bayes
In a naive Bayesian classifier with discriminative learning, is it appropriate to increase the amount of Laplacian correction with the logarithm of the dimensionality, as the Weka package DMNBtext does? If so, does this still hold as dimensions with less and less entropy (e.g. rare words being encountered for the first time) are added? I ask this because I'm testing a straight conversion of DMNBtext in a chat-spam filter, and finding that its probabilities are too close to 50% (i.e. it's not confident enough in its classifications).
NeonMerlin 12:17, 20 September 2011 (UTC)
I don't know enough to give a straight answer, but maybe I can offer an opinion if you clarify the context and notation.
Do I understand correctly that DMNBtext assumes that, in addition to the observed words, there are prior words divided evenly among the d words in the vocabulary, for each example? This does sound like a lot, especially if there aren't many real words in each example. I'm not sure about the theory that supports the logarithmic value, but it is undoubtedly based on a specific prior on the distribution of word frequencies, which may not match the real distribution.
Also, why would it give 50% rather than whatever the prior class frequencies are?
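To make the effect under discussion concrete, here is a sketch of Laplace-style smoothing with the correction scaled by log(d). The log(d) scaling is one reading of what DMNBtext does, not a confirmed description of its source; the point is just that a larger correction pulls every per-word probability toward the uniform 1/d, which compresses the likelihood ratios the classifier multiplies together:

```python
from math import log

def smoothed_word_probs(counts, vocab_size, alpha):
    """Laplace-style smoothing: p(w|c) = (count_w + alpha) / (total + alpha * d).
    alpha acts as a pseudo-count added to every word in the vocabulary."""
    total = sum(counts.values())
    denom = total + alpha * vocab_size
    return {w: (c + alpha) / denom for w, c in counts.items()}

counts = {"free": 30, "meeting": 5}  # hypothetical per-class word counts
d = 10_000                           # hypothetical vocabulary size

fixed = smoothed_word_probs(counts, d, alpha=1.0)
scaled = smoothed_word_probs(counts, d, alpha=log(d))  # ~9.2 pseudo-counts per word
```

With only 35 observed words against roughly 9.2 × 10,000 pseudo-counts, the scaled version leaves every p(w|c) close to 1/d, so posteriors drift toward the class prior, consistent with the "not confident enough" symptom described above.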
Chaos theory deals with something quite different from noise, though it may sometimes appear to be noise, and yes, a recurrence relation would be a good idea for the question! It can be quite reasonable to say something exhibits both chaos and random noise.
Dmcq (talk) 18:20, 21 September 2011 (UTC)
Yes, those are what the equations were supposed to look like, and the percent sign is modulo. There is no recursion. Just specify in and is the input. --
Melab±1☎ 20:43, 21 September 2011 (UTC)