This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Plugging in 3, 10 and 7.1 for the variables base, exponent and modulus yields 1.74 instead of 6. — Preceding unsigned comment added by 223.206.114.1 ( talk) 04:43, 13 August 2011 (UTC)
Could discuss top-down vs. bottom-up exponentiation.
Could add optimizations (for "top-down") that are faster but use more memory. Examples: bit windowing (or sliding windows). — Preceding unsigned comment added by 63.202.83.229 ( talk • contribs) February 20, 2005
modpow is not complete: for all x, (x^0 mod 1) is 0, not 1. Because e is zero, the loop is skipped and a result of 1 is returned. One general fix for this is:
return result % m;
Another would be a fairly thorough conditional statement. — Preceding unsigned comment added by 72.19.80.208 ( talk • contribs) November 12, 2005
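The suggested fix can be sketched in Python (a hypothetical modpow, not the article's exact code); with `result % m` as the return value, the e = 0, m = 1 case comes out as 0, as the comment requires:

```python
def modpow(b, e, m):
    # Square-and-multiply. The final "% m" makes the m == 1 case
    # return 0 rather than 1, which fixes the (x^0 mod 1) complaint.
    result = 1
    b %= m
    while e > 0:
        if e & 1:
            result = (result * b) % m
        e >>= 1
        b = (b * b) % m
    return result % m
```

With e = 0 the loop body never runs, so without the final reduction the function would always return 1, even though any value mod 1 is 0.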
A base of 10, an exponent of 3 and a modulus of 7.1 (1000 % 7.1) yields 1.74 instead of 6. — Preceding unsigned comment added by 223.206.114.1 ( talk) 01:32, 13 August 2011 (UTC)
In the implementation of modexp_leftToRight, A = A * A % m; does not have any parens, but A = (A * g) % m; does. Operator precedence following C/C++ rules would interpret A * A % m as A * (A % m). Is that the correct order for this algorithm, or should it be parenthesized as A = (A * A) % m;? -- Pfagerburg 22:53, 20 March 2006 (UTC)
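For what it's worth, in C and C++ the `*` and `%` operators share the same precedence level and associate left to right, so `A * A % m` already parses as `(A * A) % m`; the extra parentheses change nothing. Python follows the same grouping rule, which a quick check confirms (the inequality below holds for these particular values, not universally):

```python
A, m = 7, 5
# * and % share a precedence level and associate left to right,
# so the unparenthesized form equals the explicitly grouped one.
assert A * A % m == (A * A) % m    # 49 % 5 == 4
# For these values, grouping the other way gives a different answer:
assert A * (A % m) == 14           # 7 * (7 % 5), not 4
```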
b = (b * b) % m;
In the algorithm shown, during the last pass of the loop, the square of b is calculated but never used. A speed-up is to make the squaring conditional on the value of e:
while (e > 0) {
    if ((e & 1) > 0)
        result = (result * b) % m;
    e >>= 1;
    if (e > 0)
        b = (b * b) % m;
}
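The proposed loop, transcribed to Python as a sketch; the guard before the squaring skips the final square of b, whose result would otherwise be computed and discarded:

```python
def modpow_skip_last_square(b, e, m):
    # Binary (square-and-multiply) method, with the squaring guarded
    # so the last, unused square of b is never computed.
    result = 1
    b %= m
    while e > 0:
        if e & 1:
            result = (result * b) % m
        e >>= 1
        if e > 0:            # skip the final, unused squaring
            b = (b * b) % m
    return result
```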
Bignum modpow(Bignum base, Bignum exponent, Bignum modulus) {
Bignum result = 1;
while (exponent > 0) {
if ((exponent%2) == 1) {
// exponent is odd, multiply base^(exponent-1) by the base to get base^exponent
result = (result * base) % modulus;
--exponent;
}
exponent /= 2;
base = (base * base) % modulus;
}
return result;
}
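A direct Python transcription of the Bignum pseudocode above (Python's arbitrary-precision ints stand in for Bignum); a sketch, not the article's code. Note the `--exponent` step is redundant: integer division by 2 discards the low bit whether or not it was cleared first.

```python
def modpow_bignum_style(base, exponent, modulus):
    result = 1
    while exponent > 0:
        if exponent % 2 == 1:
            # exponent is odd: fold one factor of base into result
            result = (result * base) % modulus
            exponent -= 1   # redundant: exponent // 2 drops this bit anyway
        exponent //= 2
        base = (base * base) % modulus
    return result
```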
The article currently states b^(e-1) (mod m), but the algorithm as shown doesn't do b (mod m), which isn't necessary anyway if b < m. I don't understand the point of "b < m". Would it be OK to replace that sentence, or is there something I'm missing? -- 70.189.77.59 17:36, 16 October 2006 (UTC)
B. Mearns *, KSC
19:00, 1 March 2007 (UTC)

What restrictions do b, e, and m need to satisfy in order to avoid an ambiguous system? (An ambiguous system allows two different b values to map to the same c value, i.e. allows two different plaintexts to produce the same ciphertext -- Bob would be unable to figure out which plaintext message Alice intended to send.) (Is there some other article that discusses ambiguity of modular exponentiation?)
To be unambiguous, b must satisfy 0 ≤ b < m because of the pigeonhole principle. But 0 ≤ b < m is not sufficient -- for example, both
. Is it enough to make e prime ( RSA mentions that the prime 2^16 + 1 = 65537 "is a commonly used value for e"), and make sure m is not a multiple of e? -- 70.189.77.59 17:36, 16 October 2006 (UTC)
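A small brute-force search (my own illustration, not from the article) suggests the answer is no: e prime with m not a multiple of e is not enough. With e = 3 and m = 8, several residues cube to the same value. The usual sufficient condition, as in RSA, is m squarefree and gcd(e, λ(m)) = 1.

```python
from collections import defaultdict

def collisions(e, m):
    # Group residues b in [0, m) by b^e mod m; any bucket holding
    # more than one b is an ambiguity ("two plaintexts, one ciphertext").
    buckets = defaultdict(list)
    for b in range(m):
        buckets[pow(b, e, m)].append(b)
    return {c: bs for c, bs in buckets.items() if len(bs) > 1}

# e = 3 is prime and m = 8 is not a multiple of 3, yet the map collides:
print(collisions(3, 8))    # {0: [0, 2, 4, 6]} -- four residues cube to 0
print(collisions(3, 15))   # {} -- 15 is squarefree and gcd(3, lcm(2,4)) = 1
```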
Please, someone remove that section. The algorithm is just a rewritten version of the C# code. Contrary to the claim, it is inefficient and not suited for hardware implementation. It seems the only reason it is there is to please the Python zealots. 213.80.27.34 ( talk) 12:09, 28 November 2007 (UTC)
Fractional modulus used with the binary algorithm yielding incorrect results? Given base = 10, exponent = 3 and a modulus of 7.1, it returns 1.74 instead of 6.
result = 1
Loop 1: result = (1 * 10) MOD 7.1 = 2.9; exp >>= 1 → 1; base = (10 * 10) MOD 7.1 = 0.6
Loop 2: result = (2.9 * 0.6) MOD 7.1 = 1.74; exp >>= 1 → 0
result = 1.74 — Preceding unsigned comment added by 223.206.114.1 ( talk) 15:02, 12 August 2011 (UTC)
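The trace above can be reproduced with `math.fmod`. The reason it disagrees with the direct computation: the identity (a·b) mod m = ((a mod m)·(b mod m)) mod m, which the binary method relies on, holds only for integer m. For a real-valued m the multiple of m discarded at each reduction is not an integer multiple, so intermediate reductions change the answer. A sketch reproducing both numbers:

```python
import math

def binary_modpow_float(b, e, m):
    # Binary method applied blindly with a real-valued modulus.
    result = 1.0
    while e > 0:
        if e & 1:
            result = math.fmod(result * b, m)
        e >>= 1
        b = math.fmod(b * b, m)
    return result

direct = math.fmod(10 ** 3, 7.1)            # about 6.0
binary = binary_modpow_float(10, 3, 7.1)    # about 1.74
```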
This isn't completely obvious (to me!). Is there a link to another Wikipedia page that might show why this is always the case? I can see that it is, now that I've tried it out on paper... Thanks for the great article, by the way! Lionfish0 ( talk) 17:12, 26 January 2012 (UTC)
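The one-line reason, for integer m: write a = q₁m + r₁ and b = q₂m + r₂; then ab = (q₁q₂m + q₁r₂ + q₂r₁)m + r₁r₂, and since the parenthesized coefficient is an integer, ab ≡ r₁r₂ (mod m). An exhaustive check over small integers, for reassurance:

```python
# Exhaustive check of the identity (a*b) % m == ((a % m) * (b % m)) % m,
# which square-and-multiply relies on, over small integer ranges.
ok = all(
    (a * b) % m == ((a % m) * (b % m)) % m
    for m in range(1, 20)
    for a in range(50)
    for b in range(50)
)
print(ok)   # True
```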
Given that the article states that "even when the numbers involved are enormous", it should take into account that multiplication of large numbers may not be computable directly on the processor. Thus, the stated runtimes should depend on the efficiency of the multiplication algorithm, as in http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations.
Per.Zut ( talk) 10:14, 5 August 2013 (UTC)
In general, if one needs to calculate b^e mod m for arbitrary given b, e and m, then it makes sense first to reduce the exponent e modulo φ(m) (when gcd(b, m) = 1) because of Euler's theorem. In many applications, e.g. in cryptography, you will already have e < φ(m), and this reduction does not improve efficiency. — Preceding unsigned comment added by Wstomv ( talk • contribs) 22:11, 21 January 2016 (UTC)
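The reduction described above can be sketched as follows (the exponent 433 is a value I picked for illustration; `phi` is a naive totient helper, fine for small m). Since gcd(b, m) = 1, Euler's theorem gives b^φ(m) ≡ 1 (mod m), so only e mod φ(m) matters:

```python
from math import gcd

def phi(m):
    # Euler's totient by direct count; adequate for small m.
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

b, e, m = 4, 433, 497          # gcd(4, 497) == 1; illustrative values
assert gcd(b, m) == 1
reduced = e % phi(m)           # phi(497) = phi(7 * 71) = 6 * 70 = 420, so reduced == 13
assert pow(b, e, m) == pow(b, reduced, m)
```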
I rewrote the example of raising a number to the 13th power.
First, percent signs didn't belong there, because the percent sign is the modulus operator only in specific computer languages. Second, I found the example with the four bullet points very hard to follow.
So, I wrote the basic algorithm following the example of raising a number to the 13th power. The original example kept squaring the original base b; I used the otherwise unnecessary variable x so the explanation could state which power of b was being computed. I then followed the algorithmic description with a rewrite of the original example with the numbers b = 4, e = 13 and modulus m = 497. I don't feel this numeric example makes things clearer, but I'll let other people decide whether to delete it. MathPerson ( talk) 15:07, 6 April 2017 (UTC)
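The numeric example mentioned above (b = 4, e = 13, m = 497) can be checked mechanically; 13 = 8 + 4 + 1 in binary, so three of the repeated squares get multiplied in. A sketch:

```python
b, e, m = 4, 13, 497
# Repeated squaring: b^1, b^2, b^4, b^8 (mod m)
squares = [b % m]
for _ in range(3):
    squares.append(squares[-1] ** 2 % m)
# 13 = 0b1101, so multiply in b^1, b^4 and b^8
result = 1
for i, bit in enumerate(reversed(bin(e)[2:])):   # low bit first
    if bit == "1":
        result = result * squares[i] % m
print(result)            # 445
print(pow(b, e, m))      # 445, via Python's built-in three-argument pow
```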
This ultra-naive algorithm is not a "method", and it has zero citations. I barely understand why it's worth mentioning, let alone dedicating multiple paragraphs and a fully worked-out example, with pseudocode to boot. The only purpose this can possibly serve is conceptual understanding; it is not used in the real world. Can it be condensed? Rubydesic ( talk) 07:53, 10 February 2024 (UTC)