Thank you, Groupthink, for writing this material. But I wonder if it should not be part of the Computational complexity page. See my comment on that discussion page. If you are keen, that page could benefit from your efforts. Scottcraig 17:22, 30 June 2007 (UTC)
I strongly support this merger. I should have caught that that page existed and punched up that page instead of creating this page. Ah well. Ironically, whoever created the other page cited the exact same textbook I used for my reference. :-) At any rate, I'd push for the content of analysis of algorithms to be merged into run-time analysis and not vice-versa. Groupthink 23:38, 8 July 2007 (UTC)
I'm going on a University WikiBreak starting on the 21st. If I can't get the merger done by then, then any editor who'd care to take over has my blessing. Groupthink 21:40, 15 August 2007 (UTC)
Quoting: Note that Big O notation can only be applied to run-times that are monotonically increasing. The following algorithm, expressed as pseudocode, cannot be said to have a Big O classification:
if n is prime
    return n
else if n is even
    while n > 1
        n = n / 2
else
    while n > 0
        n = n - 50
This runs in O(n) time, no? Skippydo 02:05, 17 August 2007 (UTC)
Who says? Skippydo 05:00, 17 August 2007 (UTC)
Note that Big O notation can only be applied to run-times that are monotonically increasing. The following algorithm, expressed as pseudocode, cannot be said to have a Big O classification:
if n is prime
    return n
else if n is even
    while n > 1
        n = floor(n / 2)
else
    while n > 0
        n = n - 50
The above example definitely cannot be expressed using Θ-notation, but I'm unsure if Big-O and Ω are formally restricted to monotonically increasing functions or not. Groupthink 06:12, 17 August 2007 (UTC)
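For concreteness, here is a rough Python rendering of the quoted pseudocode (my own translation, not from the article; the primality test is counted as a single step and the even branch uses floor division, as in the corrected quote above). Counting loop iterations as the "running time" shows that the count is not monotonic in n, yet it is still bounded by a linear function of n:

def is_prime(n):
    # naive trial division, counted as one step as in the pseudocode
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def steps(n):
    count = 1                    # the initial prime test
    if is_prime(n):
        return count             # prime: constant number of steps
    elif n % 2 == 0:
        while n > 1:             # even: halving loop, about log2(n) iterations
            n = n // 2
            count += 1
    else:
        while n > 0:             # odd composite: about n/50 iterations
            n = n - 50
            count += 1
    return count

for n in [9, 10, 11, 25, 64, 1024, 1025]:
    print(n, steps(n))

The printed counts jump up and down as n increases (primes are cheap, odd composites are expensive), so the running time is not monotonically increasing, but it never exceeds roughly n/50 + log2(n) + 2 steps, which is O(n).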
Quoting: The specific amount of time to carry out a given instruction ... can be considered to be an abstract unit of time. This very powerful statement is made but not, in my opinion, utilized. Since we are dealing with abstract units of time, the values can be said to be bounded by 1 unit. Therefore our running time is at most,
Thoughts? Skippydo 16:21, 17 August 2007 (UTC)
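Presumably the intended bound is along these lines (the coefficients t_i and the quadratic form are only an illustration, not taken from the article): if an analysis gives a running time such as

<math>T(n) = t_1 + t_2 n + t_3 n^2,</math>

and every instruction cost t_i is at most one abstract unit, then

<math>T(n) \le 1 + n + n^2,</math>

so the unknown machine-dependent constants drop out of the bound.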
Note:
The analysis given above for the space consumption is possibly wrong, because the memory space reserved is not growing at a rate of 2^n in the input size n.
Let us validate the above statement with a simple proof.
Assume that the file size is 1 byte and the memory reserved is also 1 byte.
For N = 1, the space allocated is 1 byte.
Suppose the size is increased to 2:
for N = 2, the space allocated is doubled to 2*1 = 2 bytes, as per the definition in line #2;
for N = 3, the space allocated is doubled to 2*2 = 4 bytes, as per the definition in line #2;
for N = 4, the space allocated remains the same, because we already have enough space.
As a summary we can say:
S(1) = 1 (Initial condition)
S(2) = 2*1 = 2 (Doubled as per Line #2)
S(3) = 2*2 = 4 (Doubled as per Line #2)
S(4) = 4 (No change)
S(5) = 4*2 = 8 (Doubled as per Line #2)
S(6) = 8 (No change)
...
S(8) = 8 (No change)
S(9) = 8*2 = 16 (Doubled as per Line #2)
S(10) = 16 (No change)
...
For an input file size of 4, the reserved space is only 4 bytes and not 16 bytes (2^4). So at any time, the reserved memory space is less than double the input file size. So we can say the space complexity for the above problem is O(2n), and since constants do not matter in Big-O notation, we can say S(N) = O(n).
Linear growth rate.
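Stated in closed form (assuming the file grows one unit at a time, as in the table), the values above say that the reserved space is always the smallest power of two that is at least the file size N:

<math>S(N) = 2^{\lceil \log_2 N \rceil} \le 2N,</math>

which is linear, O(N), rather than exponential, O(2^N).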
—Preceding unsigned comment added by 125.17.107.170 (talk • contribs)
NOTE:
The above analysis is possibly wrong and the growth rate is not exponential, regardless of the size of the file.
There are two problems with the above algorithm.
1) Why do we need to double the amount of memory reserved for every 1 GB increase in file size if enough memory is already allocated for the file?
E.g., suppose the file size is 1 GB and the reserved space is 2 GB. Now, as the above algorithm stands, if the file grows by 1 GB we need not double the reserved space to 4 GB, because we already have 2 GB reserved and the file size is also 2 GB. So the above algorithm does not hold.
2) If the growth rate were exponential as explained above, then
for a file size of 1 GB the reserved space should be 2^1 GB, and
for a file size of 2 GB the reserved space should be 2^2 GB.
But this is practically impossible and is not the case, so we cannot say the algorithm grows at an exponential rate.
So we can modify the above algorithm to:
1 While file is still open
2     Let n = file size
3     If n > memory reserved
4         Double the amount of memory reserved
For the above algorithm, assume the file size is 1 byte and reserved memory space is also 1 byte
S(1) = 1
S(2) = 1*2 = 2 (Doubled as per Line #4)
S(3) = 2*2 = 4 (Doubled as per Line #4)
S(4) = 4 (No change)
S(5) = 4*2 = 8 (Doubled as per Line #4)
S(6) = 8 (No change)
...
S(9) = 8*2 = 16 (Doubled as per Line #4)
S(10) = 16 (No change)
...
At any point in time, the reserved space is less than double the input file size; that is, it is less than 2N for an input file size of N. So the space complexity of the above algorithm is O(2N), and since constants do not matter in Big-O notation, we can simplify it to O(N).
Linear Growth Rate.
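As a sanity check, here is a minimal Python sketch of the modified pseudocode (my own translation; the growing file is modelled as a counter that increases one byte at a time, starting from the 1-byte reservation assumed above):

def reserved_after(file_size):
    reserved = 1                        # start with 1 byte reserved, as assumed above
    for n in range(1, file_size + 1):   # the file grows one byte at a time
        if n > reserved:                # line #3: file has outgrown the reservation
            reserved = reserved * 2     # line #4: double the reserved memory
    return reserved

for n in range(1, 11):
    print(n, reserved_after(n))         # reproduces S(1) through S(10) above

Every value printed satisfies reserved_after(n) < 2*n, so the reservation stays within a constant factor of the file size: O(N), not O(2^N).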
—Preceding unsigned comment added by 125.17.107.170 (talk • contribs)
File Size    Memory Reserved
=============================
100 MB       10 MB   (approx. 2^3 MB)
200 MB       20 MB   (approx. 2^4 MB)
300 MB       40 MB   (approx. 2^5 MB)
...          ...
1000 MB      5120 MB (approx. 2^12 MB)
...          ...
n MB         1.25 * 2^n MB
The article heading "run-time analysis" suggests an analysis performed at run time - as in "Performance analysis". (I have now added a 'do not confuse with' sentence at the start of the existing article to help with this.)
The wording in the article did not make it sufficiently clear that this is an estimation technique, not related to an algorithm's actual performance. If merging is to take place I suggest it is NOT with this article's current name!
There were, until recently, almost no XREFs between important related topics:
etc.
I have added some XREF's but there may be many more that also relate to performance generally that I have not discovered yet. ken ( talk) 06:29, 21 July 2008 (UTC)