Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
I'm in the process of implementing the Miller–Rabin primality test and I'd like to allow the user to pass a "confidence" parameter in the form of a number on the interval (0, 1], with 1 meaning "absolutely certain", 1/2 a 50% level of certainty, and so on. Now the article states that the number of bases which must be tested in order for the test to be deterministic (assuming the Generalized Riemann Hypothesis is indeed true) is 2 ln^2 N, then goes on to say that the error term for a given number K of trials is 4^(-K). I take that to mean that after, say, 3 trials there would be a 4^(-3) = 1.5625% chance that the test returns a false result, or conversely a 1 - 4^(-3) = 98.4375% level of certainty. What confuses me, though, is that the size of N is not taken into account here. I would expect, for example, a confidence level of 100% to translate to 2 ln^2 N trials. In other words, how am I supposed to merge these together to obtain an equation which converts a confidence level into a minimum number of trials needed? Earl of Arundel ( talk) 16:48, 30 August 2018 (UTC)
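For what it's worth, here is a minimal sketch of how I'm thinking of mapping the confidence value to a round count under the 4^(-K) error model, falling back to the GRH-conditional deterministic bound when the confidence is exactly 1. The function name and structure are just my own illustration:

```python
import math

def rounds_for_confidence(confidence, n):
    """Map a confidence level in (0, 1] to a number of Miller-Rabin
    rounds, assuming the per-round error bound 4^(-K)."""
    if not 0.0 < confidence <= 1.0:
        raise ValueError("confidence must be in (0, 1]")
    if confidence == 1.0:
        # Deterministic variant (conditional on GRH): test every base
        # a <= 2 * ln(N)^2 instead of K random bases.
        return math.floor(2 * math.log(n) ** 2)
    # Solve 1 - 4^(-K) >= confidence for K, i.e. K >= -log4(1 - confidence).
    # Floating-point rounding can add one extra round when 1 - confidence
    # is an exact power of 4; that only increases confidence, so it is
    # harmless.
    return math.ceil(-math.log(1.0 - confidence, 4))

# Example: 98.4375% confidence corresponds to 3 (or, due to rounding,
# possibly 4) rounds.
print(rounds_for_confidence(0.984375, 10**30))
```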
As described above in my earlier post, the Damgård–Landrock–Pomerance paper gives a maximum error term of K^(3/2) · 2^T · T^(-1/2) · 4^(2 - sqrt(TK)) [where K is the number of bits in N and T is the number of trials]. The really unusual thing about that is the fact that [1] the error bound actually shrinks as K grows and [2] when I plug into the equation, say, 5 trials for a number containing anywhere between 8 and 18446744073709552000 bits (!!!) I obtain an error term somewhere on the order of 10^(-40) in either case. Can that possibly be right? The fourth answer, provided by user Warren MacEvoy in this Stack Overflow thread, seems to support such a conclusion, but it just seems too unbelievable to be true. Earl of Arundel ( talk) 22:28, 30 August 2018 (UTC)
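As a sanity check, here is a direct evaluation of the bound, assuming I've transcribed the exponent 2 - sqrt(TK) correctly (the paper attaches side conditions on K and T that I'm glossing over here):

```python
import math

def dlp_error_bound(k, t):
    """Damgard-Landrock-Pomerance bound on the Miller-Rabin error
    probability for a random K-bit odd number tested with T rounds:
        p(K, T) <= K^(3/2) * 2^T * T^(-1/2) * 4^(2 - sqrt(T*K))
    The bound can exceed 1 (i.e. be vacuous) when T*K is small, and
    it shrinks rapidly as K grows."""
    return k ** 1.5 * 2.0 ** t * t ** -0.5 * 4.0 ** (2 - math.sqrt(t * k))

# How the bound behaves at T = 5 across a range of bit sizes:
for k in (8, 64, 256, 1024):
    print(k, dlp_error_bound(k, 5))
```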