This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3 | Archive 4 |
I'd like to put in some mention of computer algorithms and their Big O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and implications for computing (faster computers don't compensate for big-O differences, etc). Think this should be part of this write-up, or separate but linked?
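To illustrate the "faster computers don't compensate" point above, here is a small sketch (the function names and the 100x speedup figure are my own, chosen for illustration): even a machine 100 times faster running an O(n^2) sort falls behind an O(n log n) sort once n is large.

```python
# Sketch: idealized step counts, not real timings. A 100x-faster machine
# running an O(n^2) algorithm still loses to an O(n log n) algorithm at
# large n, because the gap between n^2 and n log n grows without bound.
import math

def quadratic_steps(n, speedup=1.0):
    return n * n / speedup          # e.g. selection sort on a fast machine

def loglinear_steps(n):
    return n * math.log2(n)         # e.g. merge sort on a slow machine

n = 1_000_000
fast_quadratic = quadratic_steps(n, speedup=100.0)
slow_loglinear = loglinear_steps(n)
print(fast_quadratic > slow_loglinear)  # the O(n log n) algorithm still wins
```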
f(n) ∈ o(g(n)): "small o" (also written "small omicron" or "small oh"); f is dominated by g asymptotically, i.e. for every k > 0 there exists n_0 such that |f(n)| ≤ k·g(n) for all n > n_0.
19:46, 22 September 2009 (UTC)--Musatov
I notice no-one has added O(log log n) time in the section "Orders of common functions".
"Big O notation"? It sounds like the name was created with the intention of teaching it to eight-year-olds... I can't believe that this is the accepted terminology. Unless I'm mistaken, "big O" stands for order (or some foreign equivalent), and in my experience in maths it's always been referred to as such. When I heard order notation formally introduced as "Big O notation" for the first time in computer science, I burst out laughing... You people are all mad :p. -- 203.206.183.160 14:38, 14 September 2006 (UTC)
Are the lim-sup and lim-inf definitions correct? I haven't seen them before. Reference this in the article, perhaps. Connelly 00:15, 9 August 2005 (UTC)
The lim-sup and lim-inf based definitions have vanished (in the table with the definitions). I do not know why (no comment was given at the appropriate changes), but the present lim based definitions of are clearly wrong. Example: Consider the function for even n and for odd n. Clearly, it should be . But does not exist. Further, the definition of does not consider the absolute value of . Then e.g. but (in contrast to the other definitions where a negation of f does not matter).
In the older revision link everything is correct. So unless there was a reason for the change I do not see, I think the table should be reverted to the old state. -- DniQ 11:06, 11 March 2006 (UTC)
(the below moved from "Incorrect use of limits in table?" which is discussing the same thing):
How about a note on amortized complexities? For instance, inserts and lookups in hashtables are O(1) amortized, in contrast with array lookup, which is always O(1).
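The amortized idea suggested above can be made concrete with the textbook doubling-array example (my own illustration, not taken from the article): an individual append may cost O(n) when a resize happens, but the total cost over n appends stays O(n), so each append is O(1) amortized.

```python
# Sketch of amortized analysis: appending into a doubling array.
# A resize copies all current elements, but resizes happen only at
# sizes 1, 2, 4, 8, ..., so total copies over n appends are < n.

def total_append_cost(n):
    """Total unit operations (copies + writes) for n appends."""
    cost, capacity, size = 0, 1, 0
    for _ in range(n):
        if size == capacity:      # resize: copy everything, double capacity
            cost += size
            capacity *= 2
        cost += 1                 # the append itself
        size += 1
    return cost

# Total cost stays under 3n, so the amortized cost per append is O(1).
print(total_append_cost(1024) / 1024)
```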
doesn't O(n log n) have some special name ? -- Taw
I've heard (and taught) "quasilinear" (because f = O(x^a) for all a > 1, as close to 1 as desired), and think this is quite standard and also reasonable. (I plan to add this term on the main page if there is no protest against.) — MFH: Talk 13:02, 24 May 2005 (UTC)
I added loglinear, as that's how I've seen it in several books, and at least one other user has seen that (Stormix). Chris Connett 21:02, 21 December 2005 (UTC)
Well, you can also write it in soft-O, as in Õ(n). Is that "special"? -- Rovenhot 00:46, 23 June 2006 (UTC)
I remember it as coming from the Latin word ordo. It makes a bit more sense as Landau and Bachmann, who brought this notation to us, were both German. Obviously, it means the same, but I can see the difference. :) -- Chexum
The German word for order/ordo would be Ordnung, so that's another possibility...but as Chexum said, the point is moot and these words are probably cognates anyway -- Magnus N.
May the ordo disambiguation link to this page?-- I hate to register 11:58, 10 March 2007 (UTC)
(Forgive the lack of mathematical notation...) Technically speaking, isn't it "f(x) is an element of O(g(x))"? In the article, "equals" is used. I think big O is the set of functions with the appropriate property. Doradus 13:06, 1 Aug 2003 (UTC)
Technically that is correct. The thing is that the "=" notation is long established; I would say most authors I have seen use "=" when writing about asymptotic bounds. I know that when I initially studied this, I would change the "=" to "∈" in my head, since by context the latter is correct. However, I, apparently like others, became accustomed to the "=" notation that is (I think) more widely used.
In light of this, I notice that the article mixes these two freely. I think we should pick one or the other, and I nominate "=". If no one objects I'll change it, but add an explanation similar to the one above to the appropriate article (since this applies to all asymptotic notation, not just big-O) Chris Connett 20:59, 21 December 2005 (UTC)
My interpretation is that O(g(x)) denotes an unspecified (or anonymous) function that is asymptotically bounded by a multiple of g, rather than the class of all such functions. This is similar to how we use the constant of integration (+C) in calculus. We say that the most general antiderivative of cos(x) is sin(x)+C, but almost nobody says that C denotes the set of all constant functions -- rather, it is an "arbitrary constant". So what is wrong with saying that O(1) is an "arbitrary bounded function" and writing sin(x) = O(1)? David Radcliffe ( talk) 22:30, 17 March 2008 (UTC)
In my case, I first met the "=" notation, and later, upon seeing the much more intuitive "∈" notation, I immediately got a better understanding of the matter and adopted the latter notation. As this topic is a part of mathematics, I think the article ought not to adopt an erroneous mathematical notation just because the majority of computer scientists is using it. I suggest that Wikipedia's article on the subject should not encourage the mentioned abuse of notation, but act as a guide and avoid it. Also, the reader should be able to learn about the inconsistencies of use in practice by reading about it, not by guessing it. (As in the paragraph Matters of notation) 129.241.157.9 ( talk) 22:25, 25 September 2008 (UTC)
What I've learned in two classes is to write "f(x) is O(n)", the = looks very wrong. 84.209.125.101 ( talk) 12:27, 24 January 2010 (UTC)
I noticed the statement that if
f(n,m) = n^2 + m^2 + O(n+m)
that this is like saying there exists N,C s.t. for all n,m >= N
f(n,m) <= n^2 + m^2 + C(n+m)
I would have thought that the O(n+m) is to be bounded in absolute value, i.e.
|f(n,m) - n^2 - m^2| <= C|n+m|
which leads to n^2 + m^2 - C|n+m| <= f(n,m) <= n^2 + m^2 + C|n+m|
Is this correct?
Steffan.
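For what it's worth, the usual convention agrees with Steffan's reading: a big-O term appearing in an equation bounds the remainder in absolute value, which makes the statement two-sided. In symbols:

```latex
% The usual reading of f(n,m) = n^2 + m^2 + O(n+m):
% there exist C > 0 and N such that for all n, m \ge N,
\left| f(n,m) - n^2 - m^2 \right| \le C\,(n+m),
% which is equivalent to the two-sided bound
n^2 + m^2 - C\,(n+m) \;\le\; f(n,m) \;\le\; n^2 + m^2 + C\,(n+m).
```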
What does
where o is little o, mean?
The question is, should Wikipedia itself use Θ instead of O when both are correct, and the former is intended? (e.g. in discussing topics like the FFT and sorting.)
I have a computer-science background, and I know the difference between these two...however, I also know that Θ(...) is essentially unknown outside of computer-science circles, while O(...) is widely used and, informally, more-or-less understood to mean Θ. At a certain point, when usage becomes widespread enough, it ceases to be "incorrect" (it's only notation, after all).
One argument is that, since Θ is so little-known, its appearance will only confuse most people...thus, one should use O as long as it is correct (even if it is weaker than necessary) except in cases where one wishes to explicitly distinguish between the two. On the other hand, one could use Θ wherever possible, making the symbol a link to this page...requiring an extra page of reading for people not familiar with it, but hoping that most people will simply guess that the symbol looks like O so it means what they expect.
— Steven G. Johnson 07:00, 28 Nov 2003 (UTC)
I just added O(n^n) to the Common orders of functions table, mainly because I know it's often discussed in calculus classes. But I've never been able to find a name for this function (i.e., n^n or x^x). Does anyone have a name and source? - dcljr 04:53, 9 Jul 2004 (UTC)
Your edit seems to have been reverted, I don't know why (a comment could have been placed here). I think this could indeed be mentioned, but maybe we should wait for a name. I think for all that grows faster than exponential, one eventually takes the log and discusses the growth class of the latter. i.e.:
— MFH: Talk 21:52, 21 Jun 2005 (UTC)
It sounds like a special case of a polynomial. I've never encountered a name for this, but you could call it an nth degree polynomial. -- Colonel panic 04:21, 24 September 2005 (UTC)
I do not think so. Polynomials have a fixed degree. What's more confusing: n^n grows faster than exponential, whereas polynomials grow slower than exponential. -- NeoUrfahraner
If I recall correctly, it can be proven that O(n^n) = O(n!), using Stirling's approximation. So you could just call it "factorial order." Pmdboi 02:52, 25 May 2006 (UTC)
I thought I saw a name for O) -- that is, exp(polynomial). Has anyone seen a name for this, or a name for O for that matter? CRGreathouse ( t | c) 04:34, 20 September 2006 (UTC)
What do the formal properties mean? In particular, what is the meaning of O(fg) = O(f)O(g)? -- NeoUrfahraner 14:11, 18 Mar 2005 (UTC)
The only possible meaning of this would be that if h=O(fg), f'=O(f), g'=O(g), then h = O(f' g'), but the previous notation is indeed not completely well defined/correct. (The other way round, i.e. O(f) O(g) = O(fg) would be correct, however.) — MFH: Talk 13:46, 24 May 2005 (UTC)
Thank you -- NeoUrfahraner 06:25, 27 May 2005 (UTC)
What I am missing on this page is remarks on the relation between the various notions, in particular between Big-O and little-o. Intuitively, it seems that f=o(g) iff f=O(g) and g!=O(f). The "only if" direction of this claim is clearly true, but I am not so sure about the "if" direction. Is it true? If yes, it should be noted in the section discussing Big o and little o. If it is not true, there should be a brief remark why this is the case.
After a bit more thinking, I see now that the converse of the above does not hold. Take the following functions:
f(n) = n if n is even, and 1 if n is odd
g(n) = 0.5 n
Then f = O(g), g != O(f), and f != o(g). I guess the converse holds only for monotonic functions.
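A quick numerical check of a counterexample of this shape (I use f(n) = n for even n and 1 for odd n, with g(n) = n/2, so each of the three claims can be verified directly):

```python
# Checking that f = O(g) and g != O(f) together do not force f = o(g).
# f(n) = n for even n, 1 for odd n; g(n) = n / 2.

def f(n):
    return n if n % 2 == 0 else 1

def g(n):
    return n / 2

ns = range(2, 10_000)
# f = O(g): the ratio f/g is bounded (by 2).
print(max(f(n) / g(n) for n in ns))            # stays at 2
# g != O(f): on odd n the ratio g/f is unbounded.
print(max(g(n) / f(n) for n in ns if n % 2))   # grows with n
# f != o(g): f/g does not tend to 0 (it equals 2 on every even n).
print(f(9998) / g(9998))                       # still 2
```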
This should really be moved to order of growth, asymptotic growth of a function, or something to that effect. Describing orders of growth in an article called "Big O notation" makes about as much sense as describing limits in an article called "limit notation", addition in an article called "summation notation", or for that matter mathematics under "mathematical notation". Of course, the notation must be described, but that's not the primary purpose of the article. Fredrik | talk 11:40, 10 November 2005 (UTC)
In the "Multiple Variables" section, shouldn't the second equation,
\forall n, m>N:f(n,m) \le n^2 + m^3 + C(n+m).
end in just "+ C.", since C is just some constant and doesn't depend on n and m?
You find Õ instead of Θ (theta) at many places in the article. is this correct? -- Abdull 12:50, 3 March 2006 (UTC)
No, that's wrong. Õ is soft-O, which means that logarithmic factors are ignored. Big-Theta, Θ, means that the function is guaranteed to have that particular order--no more, no less. -- Rovenhot 00:52, 23 June 2006 (UTC)
Two comments on notation.
McKay 11:36, 15 June 2006 (UTC)
I'm a mathematician, and I can honestly say I have never seen the "element of" notation used in connection with the big O notation. Is this something that's common in computer science? If not, it should be deleted from the article.-- 209.43.9.143 19:19, 19 June 2006 (UTC)
I see a merge suggestion on the top of the page but I don't think there is a case for it? Should it be removed then? -- Evanx( tag?) 05:54, 22 June 2006 (UTC)
Definitely merge. —Steven G. Johnson 13:37, 6 July 2006 (UTC)
Definitely merge under the title "Landau notation" and make this page a redirect. 141.20.53.108 18:00, 7 September 2006 (UTC)
I think that this article should stay where it is. Merging content from Landau notation can be done as needed. CRGreathouse ( t | c) 13:01, 20 September 2006 (UTC)
Should not the small o be used in the example where sublinear is written? helohe (talk) 14:23, 28 July 2006 (UTC)
So how do you pronounce "O(n)"? [1] says it's pronounced big oh of n. Maybe we should add that? -- Felix Wiemann 18:35, 9 August 2006 (UTC)
This claim is wrong:
A counterexample is to take g(n)=1/n if n is odd, and g(n)=n if n is even. A correct sufficient condition would be g(n)=Ω(1) but Ω has not been introduced in the article yet. McKay 00:58, 11 August 2006 (UTC)
Already covered:
I have yet to see this answered, and answering it would solve many questions I have when reading this article:
My gut reaction is, "No." But is there some formal answer as to why this may or may not be the case?
Are there any objections, corrections, or suggestions to this modification of the table?
Notation | Name | Example |
---|---|---|
O(1) | constant | Determining if a number is even or odd |
O(log* n) | iterated logarithmic | The find algorithm of Hopcroft and Ullman on a disjoint-set data structure |
O(log n) | logarithmic | Finding an item in a sorted list |
O((log n)^c) | polylogarithmic | Deciding if a number is prime with the AKS primality test |
O(n) | linear | Finding an item in an unsorted list |
O(n log n) | linearithmic, loglinear, or quasilinear | Sorting a list with heapsort |
O(n^2) | quadratic | Sorting a list with insertion sort |
O(n^c), c > 1 | polynomial, sometimes called algebraic | Finding the shortest path on a weighted digraph with the Floyd-Warshall algorithm |
O(c^n) | exponential, sometimes called geometric | Finding the (exact) solution to the traveling salesperson problem |
O(n!) | factorial, sometimes called combinatorial | Determining if two logical statements are equivalent [2] |
O(2^2^n) | double exponential | Finding a complete set of AC-unifiers [3] |
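As a sanity check on the ordering in the proposed table, the growth classes can be compared numerically (a small sketch of my own; the sample point n = 100 is arbitrary):

```python
# Evaluate representatives of each growth class at n = 100 and confirm
# they already appear in the table's order (slowest-growing first).
import math

n = 100
growth = {
    "log n": math.log2(n),
    "n": n,
    "n log n": n * math.log2(n),
    "n^2": n ** 2,
    "2^n": 2 ** n,
    "n!": math.factorial(n),
}
values = list(growth.values())
print(values == sorted(values))  # True: same order as the table
```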
Changes:
Things that might need work:
General notes:
CRGreathouse ( t | c) 07:31, 20 September 2006 (UTC)
F.R., April 3 2008:
I merged asymptotic notation and Landau notation into this page, following the longstanding merge tags, as those pages were (almost) complete subsets of the information on this page, and this page had received by far the most attention of the three (and thus should have its history preserved).
However, now that they are merged, there is an argument for moving this page and its history to asymptotic notation, as that title seems broader.
On the other hand, since this page had received the most attention of the different titles, it seems that editors have already voted with their feet. Also, in common usage, the term "Big O notation" seems to be a synecdoche for asymptotic notation in general.
What do people think?
—Steven G. Johnson 15:40, 26 September 2006 (UTC)
The article at the moment uses a real mess of different notations. This won't do. We should choose a single notation and use it throughout except for a section that describes alternative notations. The only suitable default notation is the one using "=" because that is what is used 99.9% of the time in professional mathematics and CS. Anyone disagree? McKay 08:17, 16 November 2006 (UTC)
In the properties section there is a paragraph on the effects of changing units. This implies that doing so can affect the order of the algorithm, which sounds counter-intuitive.
It also says (in part): "If, however, an algorithm runs in the order of 2^n, replacing n with cn gives 2^{cn} = (2^c)^n. This is not equivalent to 2^n (unless, of course, c=1)." I thought the actual base of the exponent was irrelevant, and we simply used base 2 as a convenience for computer nerds who are used to thinking in binary. 2^c is still a constant base, and so it's still O(2^n).
Any comments? Am I right? Gazimon 11:07, 5 December 2006 (UTC)
With the way is defined, it would almost seem to confuse comparisons of the running time of algorithms. Suppose we have two functions, and . I could, quite rightly according to the definition and the examples presented, claim that and that . Now, when deciding which algorithm is the fastest, you would think that would be the best to implement, when, clearly, this is not actually the case.
Perhaps we could add some comment about usefulness -- namely that in order to compare two functions, say and , we find minimal, 'monic' expressions (i.e. without any arbitrary constants -- for the sake of cleanliness: it is really seen that ) for and before proclaiming that and . Then, we can come to the conclusion that .
I hope this makes sense. 137.205.139.149 16:24, 12 January 2007 (UTC)
I'm not a mathematician, but I like math. I also like Unix, and the big O notation comes up a lot in Unix. My understanding of it is: for a function f(n), O(f(n)) is the maximum number of iterations that may be required to compute f(n).
It seems to match the examples given in the article:
Notation | Name | Example |
---|---|---|
O(1) | constant | Determining if a number is even or odd |
O(log n) | logarithmic | Finding an item in a sorted list with the binary search algorithm |
O(n) | linear | Finding an item in an unsorted list |
Did I get this right?
If I did, what the £%*µ! does THIS mean?
And this?
1. What's g? I assume that g(x) is the same as O(f(x))
2. Same thing for M. What is M? It makes sense in the mathematical example, but not in the practical ones, the ones in Common orders of functions.
The second definition, I can sort of make out:
But I don't understand how this relates to the examples I gave above. (If they are indeed right.)
If that vulgarisation of the mathematical definition is correct, how can I apply it to the sign-up sheet or the phone book examples? Or for that matter to the odd and even problem? How can I apply the two mathematical definitions to those problems?
The closest I can get is with the sign-up sheet example. f(x) is defined as follows. There are n names on a list. Each x is a person looking for someone on the sign-up sheet, reading from top to bottom. In the best-case scenario, the person they're looking for is the first, so they only have to read one name. In the worst case, they are looking for the last name and have to read all of the names on the list. The maximum number of operations required is n. Thus Big O of f(x) is O(n). Would x be the first person to read all of the names? If there are 14 names on the list, f(x) will always be less than or equal to 14. But in my example, how do x and n relate to the mathematical definition? In the mathematical definition, both f and g use the same argument.
Eje211 17:31, 12 January 2007 (UTC)
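To connect the sign-up-sheet example above to the formal definition: reading the sheet top to bottom is a linear search, and counting comparisons shows the worst case is exactly n, which is the g(n) = n (with constant M = 1) in the definition. A sketch, with names of my own choosing:

```python
def linear_search(names, target):
    """Return (index, comparisons) for a top-to-bottom scan of the sheet."""
    comparisons = 0
    for i, name in enumerate(names):
        comparisons += 1
        if name == target:
            return i, comparisons
    return -1, comparisons

sheet = [f"person{i}" for i in range(14)]     # a 14-name sign-up sheet
print(linear_search(sheet, "person0")[1])     # best case: 1 comparison
print(linear_search(sheet, "person13")[1])    # worst case: 14 comparisons
# The comparison count never exceeds n = len(sheet), so the running
# time f satisfies |f(n)| <= 1 * n, i.e. f(n) = O(n).
```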
Big-O notation is just a way of being able to compare expected run times without getting bogged down in detail. Suppose you need to run an algorithm on a set of n inputs. Perhaps the actual number of 'operations' (however you choose to define that term) that would need to be carried out is something like 2n^2 + 3n - 5. This is really only interesting when n gets large, in which case the only term that really matters is the n^2 term, which dwarfs the others. And in fact, the constant coefficient isn't really that important, either. So we just say that the algorithm runs in 'order n squared' time, or O(n^2).
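The claim in the reply above can be checked directly: 2n^2 + 3n - 5 sits under a constant multiple of n^2 for all n, which is exactly what O(n^2) asserts (a quick sketch; the witness constant C = 3 is one valid choice):

```python
# 2n^2 + 3n - 5 <= 3n^2 for every n >= 1, because n^2 - 3n + 5 > 0
# always (its discriminant 9 - 20 is negative). So the operation count
# is O(n^2) with C = 3, and the n^2 term dominates for large n.

def ops(n):
    return 2 * n * n + 3 * n - 5

print(all(ops(n) <= 3 * n * n for n in range(1, 100_000)))  # True
# The ratio ops(n) / n^2 tends to the leading coefficient 2.
print(round(ops(10**6) / (10**6) ** 2, 3))
```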
Is "algebraic integer" really used as a term for O(n^k), 0<k<1? If not, what term should be used? I like the recent addition, assuming it's otherwise correct, but the term doesn't look right at all. Help? CRGreathouse ( t | c) 00:21, 19 January 2007 (UTC)
May I ask, McKay: In what context is n^c (n an integer, c a rational number) called a "rational integer"? Certainly not in any part of mathematics that I've ever encountered. Daqu ( talk) 17:06, 3 June 2008 (UTC)
what would mean that ?
and the fact that for big x then ?
could we say something about f(x) in both cases? -- Karl-H 23:02, 25 January 2007 (UTC)
The table currently contains the functions in this order:
Notation | Name |
---|---|
O(log n) | logarithmic |
O(n) | linear |
O(n log n) | linearithmic |
O(n^2) | quadratic |
O(n^c), c > 1 | polynomial |
Is it in fact the case that O(n^c), 1 < c < 2, grows faster than O(n^2)?
There is a legitimate reason why we use ∈ instead of =: there is normally more than one function that is Big O of another function. Consider:
f(n) = n
and
g(n) = 2n
Both f(n) and g(n) are Big O of n.
If we write f(n) = O(n) and g(n) = O(n), are we then going to say f(n) = g(n)? -- CBKAtTopsails 16:13, 25 April 2007 (UTC)
There appears to be a lot of abuse of notation in this article that needs to be cleaned up to make it easier to read particularly for people who are not familiar with the subject. I've done a few. Who is going to volunteer to finish the rest? -- CBKAtTopsails 16:27, 25 April 2007 (UTC)
Even though this is about a 'scientific' or 'mathematical' subject, the summary should be much more approachable. I believe it should be changed to something like:
Big O notation roughly estimates the runtime, a relative measure of computer instructions, of a function in terms of n where n is the size of the input.
This is much more clear and less cryptic. No one without significant education is going to understand the current summary, let alone the page. It's like someone is trying to validate their extended education by making this simple topic more complex than it actually is. This should be simplified to meet Wikipedia standards. -- 65.189.189.23 22:40, 5 March 2007 (UTC)
I agree completely that we should be as clear as possible for beginners. Unfortunately, your suggested text won't work.
First of all, the use in computer science is just a specialization of its use in mathematical analysis. But even assuming you're only talking about its application in complexity theory and the analysis of algorithms, it has several serious problems:
In sum, though I agree completely that we need to make an effort to be simple, we also have to be accurate. I'll be happy to work with you and other editors to improve the article. -- Macrakis 00:14, 6 March 2007 (UTC)
How about: Big O notation is often used to estimate an algorithm's performance, usually speed or memory usage, typically as a function of the size of the input. Big O notation is also used in many other scientific and mathematical fields to provide similar estimations.
Slight variation.. less terse but more common English replacements for: estimate, function, input. Going to wait a bit and ask around office of programmers and then will change. Big O notation is often used to characterize an algorithm's performance, usually in terms of how processing time or working space requirements grow as the number of items to be processed grows. Big O notation is used in a similar way in many other scientific and mathematical fields. — Preceding unsigned comment added by 72.14.228.137 ( talk) 21:48, 26 July 2011 (UTC)
In "Formal Definition" it says "for all Xo". Shouldn't it be "exists Xo"? Italo Tasso 22:11, 27 May 2007 (UTC)
I still don't think the formal definition is quite right. It should be: for some M > 0, there exists x_0 such that |f(x)| <= M * |g(x)| for all x > x_0. It seems to be currently stating the converse. Anyone agree? —Preceding unsigned comment added by 144.26.117.1 ( talk) 22:17, 3 September 2008 (UTC)
I agree that the formal definition is incorrect. At least according to Erdelyi (Asymptotic Expansions, Dover, 1956). Erdelyi says (page 5) that for x in R (R a Hausdorff space) we write phi = O(psi) if there exists a constant (i.e. a number independent of x) A, such that |phi| <= A |psi| for all x in R.
There is no mention of a limit point, yet. Erdelyi goes on to write: phi = O(psi) as x-> xo if there exists a constant A and a neighborhood U of xo so that |phi| <= A |psi| for all x common to U and R.
On a related (but different) note: I confess to being bothered by the fact that in Kevorkian and Cole (Perturbation Methods in Applied Mathematics) they define O differently. K&C say: phi, psi, etc are scalar functions of the variable x (which may be a vector) and the scalar parameter epsilon. The variable x ranges over some domain D and epsilon belongs to some interval I. THen they define big O: Let x be fixed. We say phi = O(psi) in I if there exists a k(x) such that |phi| <= k(x) |psi| for all epsilon in I.
Erdelyi goes on to say: "If the functions involved in an order relation depend on parameters, in general also the constant A, and the neighborhoods U, U_epsilon involved in the definitions will depend on the parameters. If A, U, U_epsilon may be chosen to be independent of the parameters, the order relation is said to hold uniformly in the parameters." I'm not sure what the difference is between "x" and "a parameter". —Preceding unsigned comment added by 65.19.15.124 ( talk) 16:25, 29 November 2009 (UTC)
I believe that, in general, both addition and multiplication are of order log(N). This is surely worthy of mention, as these are rather commonly used operations (and people tend to assume that they take constant time). Pog 16:06, 1 November 2007 (UTC)
Hello,
in the beginning of the article it is said that capital theta notation is "described below". It is not. I could remove this "described below" phrase, but it would be nice if someone actually describes it. 87.228.109.83 16:22, 10 November 2007 (UTC)
Wouldn't it be simpler to define big Omega and big Theta as follows?
Is that correct? —Preceding unsigned comment added by 84.221.201.63 ( talk) 04:40, 3 April 2008 (UTC)
I would guess that many (most?) readers who visit this page want information about big O notation from a computer, not maths, perspective.
Can some kind, clever person give a simple, authoritative explanation with examples?
Sam Dutton 22:16, 14 November 2007 (UTC)
More than a year ago, someone changed the notation
f(n) = O(g(n))
into
f(n) ∈ O(g(n))
and nobody complained (I think). I want to complain now, as almost no serious publications in mathematics or CS use the "∈" notation. In fact the most common usage is "=". Who will object to changing it now? McKay ( talk) 20:49, 14 April 2008 (UTC)
I find this article maddening because until one gets to the "formal definition" section -- which is admirably clear -- there is nothing that can even be called an informal definition. Before there is even an informal definition, I don't want to know anything else about big-oh notation. (The rough description in the opening sentence is accurate, but much too vague to be of any help. All the intervening stuff up to the "formal definition" is just plain annoying, absent any clear definition of what is being discussed.) Daqu ( talk) 17:00, 3 June 2008 (UTC)
The article says
but these expressions are meaningless or their meaning has not been defined in the article: we have defined the meaning of
but not the meaning of
In analytic number theory (which is what I do) typically O(f(x))=O(g(x)) just means f(x)=O(g(x)). It is used in long strings of equalities. Jordan toronto ( talk) 19:00, 24 June 2008 (UTC)
The first three claims in the new column in the second table are wrong, and the last one says nothing new. That makes 2/6. The previous version of this column was also useless in my opinion. I suggest deleting it completely. McKay ( talk) 12:43, 28 July 2008 (UTC)
Since the first two are still wrong, I'm going to remove them and leave the cells blank for the moment. I added the column (not knowing there had been a previous attempt) because I think the analogy between O, theta, and omega and the <,>,<=,>=, etc symbols makes them much easier to understand. Feel free to remove the column, but if you could somehow replace the top two with statements that are both mathematically correct and still make connection (as CRGreathouse did), I think that would be ideal. mcs ( talk) 21:44, 28 July 2008 (UTC)
The comment(s) below were originally left at Talk:Big O notation/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.
Geometry guy 14:14, 21 May 2007 (UTC)
Just what are "tight bounds"? It is the only reference I can find in this article on "Big O". 71.117.209.73 23:16, 21 June 2007 (UTC)Charles Pergiel chuck.pergiel@verizon.net
Last edited at 23:16, 21 June 2007 (UTC). Substituted at 20:04, 2 May 2016 (UTC)
![]() | This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | Archive 2 | Archive 3 | Archive 4 |
I'd like to put in some mention of computer algorithms and their Big O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and implications for computing (faster computers don't compensate for big-O differences, etc). Think this should be part of this write-up, or separate but linked?
for some k1, k2, \exists k_1,k_2>0, n_0 \; \forall n>n_0. f(n) \in o(g(n)), Small Omicron; Small O; Small Oh, f is dominated by g asymptotically, |f(n)| \le ...
19:46, 22 September 2009 (UTC)--Musatov
I notice no-one has added (in the section "Orders of common functions" - O(log log n) time.
"Big O notation"? It sounds like the name was created with the intention of teaching it to eight-year-olds... I can't believe that this is the accepted terminology. Unless I'm mistaken, "big O" stands for order (or some foreign equivalent), and in my experience in maths it's always been referred to as such. When I heard order notation formally introduced as "Big O notation" for the first time in computer science, I burst out laughing... You people are all mad :p. -- 203.206.183.160 14:38, 14 September 2006 (UTC)
Are the lim-sup and lim-inf definitions correct? I haven't seen them before. Reference this in the article, perhaps. Connelly 00:15, 9 August 2005 (UTC)
The lim-sup and lim-inf based definitions have vanished (in the table with the definitions). I do not know why (no comment was given at the appropriate changes), but the present lim based definitions of are clearly wrong. Example: Consider the function for even n and for odd n. Clearly, it should be . But does not exist. Further, the definition of does not consider the absolute value of . Then e.g. but (in contrast to the other definitions where a negation of f does not matter).
In the older revision link everything is correct. So unless there was a reason for the change I do not see, I think the table should be reverted to the old state. -- DniQ 11:06, 11 March 2006 (UTC)
(the below moved from "Incorrect use of limits in table?" which is discussing the same thing):
How about a note on amortized complexities? For instance, inserts and lookups in hashtables are O(1) amortized, in constrast with array lookup which is always O(1).
doesn't O(n log n) have some special name ? -- Taw
I've heard (and taught) "quasilinear" (because f=O(xa) for all a>1, as close to 1 as desired), and think this is quite standard and also reasonable. (I plan to add this term on the main page if there is no protest against.) — MFH: Talk 13:02, 24 May 2005 (UTC)
I added loglinear, as that's how I've seen it in several books, and at least one other user has seen that (Stormix). Chris Connett 21:02, 21 December 2005 (UTC)
Well, you can also write it in soft-O, as in Õ(n). Is that "special"? -- Rovenhot 00:46, 23 June 2006 (UTC)
I remembering it as coming from the Latin word ordo. It makes a bit more sense as Landau, and Bachmann, bringing this notation to us, both were Germans. Obviously, it means the same, but I can see the difference. :) -- Chexum
The german word for order/ordo would be Ordnung, so that's another possibility...but as Chexum said, the point is moot and these words are probably cognates anyway -- Magnus N.
May the ordo disambiguation link to this page?-- I hate to register 11:58, 10 March 2007 (UTC)
(Forgive the lack of mathematical notation...) Technically speaking, isn't it "f(x) is an element of O(g(x))"? In the article, "equals" is used. I think big O is the set of functions with the appropriate property. Doradus 13:06, 1 Aug 2003 (UTC)
Technically that is correct. The thing is that the "=" notation is long established; I would say most authors I have seen use "=" when writing about asymtotic bounds. I know that when I initially studied this, I would change the "=" to "" in my head, since by context the latter is correct. However, I, apparently like others, became accustomed to "=" notation that is (I think) more widely used.
In light of this, I notice that the article mixes these two freely. I think we should pick one or the other, and I nominate "=". If no one objects I'll change it, but add an explanation similar to the one above to the appropriate article (since this applies to all asymptotic notation, not just big-O) Chris Connett 20:59, 21 December 2005 (UTC)
My interpretation is that O(g(x)) denotes an unspecified (or anonymous) function that is asymptotically bounded by a multiple of g, rather than the class of all such functions. This is similar to how we use the constant of integration (+C) in calculus. We say that the most general antiderivative of cos(x) is sin(x)+C, but almost nobody says that C denotes the set of all constant functions -- rather, it is an "arbitrary constant". So what is wrong with saying that O(1) is an "arbitrary bounded function" and writing sin(x) = O(1)? David Radcliffe ( talk) 22:30, 17 March 2008 (UTC)
In my case, I first met the "=" notation, and later, upon seeing the much more intuitive "∈" notation, I immediately got a better understanding of the matter and adopted the latter notation. As this topic is a part of mathematics, I think the article ought not to adopt an erroneous mathematical notation just because the majority of computer scientists is using it. I suggest that Wikipedia's article on the subject should not encourage the mentioned abuse of notation, but act guiding and avoid it. Also, the reader should be able to learn about the inconsistencies of use in practice by reading about it, not by guessing it. (As in the paragraph Matters of notation) 129.241.157.9 ( talk) 22:25, 25 September 2008 (UTC)
What I've learned in two classes is to write "f(x) is O(n)", the = looks very wrong. 84.209.125.101 ( talk) 12:27, 24 January 2010 (UTC)
I noticed the statement that if
f(n,m) = n^2 + m^2 + O(n+m)
that this is like saying there exists N,C s.t. for all n,m >= N
f(n,m) <= n^2 + m^2 + C(n+m)
I would have thought that the O(n+m) is to be bounded in absolute value, i.e.
|f(n,m) - n^2 - m^2| <= C|n+m|
which leads to n^2 + m^2 - C|n+m| <= f(n,m) <= n^2 + m^2 + C|n+m|
Is this correct?
Steffan.
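Steffan's two-sided reading can be spot-checked numerically. This sketch uses a made-up f(n, m) (illustrative only) chosen so that f(n,m) - n^2 - m^2 really is O(n + m), and verifies the bound |f(n,m) - n^2 - m^2| <= C|n+m| with hypothetical witnesses C and N:

```python
# Numeric sketch of the two-sided (absolute-value) reading of
# f(n,m) = n^2 + m^2 + O(n+m). The function f below is invented
# for illustration; C = 4 and N = 10 are hand-picked witnesses.

def f(n, m):
    return n**2 + m**2 + 3*(n + m) - 7   # error term is O(n+m)

C, N = 4, 10
for n in range(N, 60):
    for m in range(N, 60):
        err = abs(f(n, m) - n**2 - m**2)
        assert err <= C * abs(n + m)      # |f - n^2 - m^2| <= C|n+m|
print("bound holds for C=4, n,m >= 10")
```

This only demonstrates the definition on a finite range, of course; it is not a proof.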
What does
where o is little o, mean?
The question is, should Wikipedia itself use Θ instead of O when both are correct, and the former is intended? (e.g. in discussing topics like the FFT and sorting.)
I have a computer-science background, and I know the difference between these two...however, I also know that Θ(...) is essentially unknown outside of computer-science circles, while O(...) is widely used and, informally, more-or-less understood to mean Θ. At a certain point, when usage becomes widespread enough, it ceases to be "incorrect" (it's only notation, after all).
One argument is that, since Θ is so little-known, its appearance will only confuse most people...thus, one should use O as long as it is correct (even if it is weaker than necessary) except in cases where one wishes to explicitly distinguish between the two. On the other hand, one could use Θ whereever possible, making the symbol a link to this page...requiring an extra page of reading for people not familiar with it, but hoping that most people will simply guess that the symbol looks like O so it means what they expect.
— Steven G. Johnson 07:00, 28 Nov 2003 (UTC)
I just added O(n^n) to the Common orders of functions table, mainly because I know it's often discussed in calculus classes. But I've never been able to find a name for this function (i.e., n^n or x^x). Does anyone have a name and source? - dcljr 04:53, 9 Jul 2004 (UTC)
Your edit seems to have been reverted, I don't know why (a comment could have been placed here). I think this could indeed be mentioned, but maybe we should wait for a name. I think for all that grows faster than exponential, one eventually takes the log and discusses the growth class of the latter. i.e.:
— MFH: Talk 21:52, 21 Jun 2005 (UTC)
It sounds like a special case of a polynomial. I've never encountered a name for this, but you could call it an nth degree polynomial. -- Colonel panic 04:21, 24 September 2005 (UTC)
I do not think so. Polynomials have a fixed degree. What's more confusing: n^n grows faster than exponential, whereas polynomials grow slower than exponential. -- NeoUrfahraner
If I recall correctly, it can be proven that O(n^n) = O(n!), using Stirling's approximation. So you could just call it "factorial order." Pmdboi 02:52, 25 May 2006 (UTC)
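A quick numeric check of the Stirling point above. Strictly, n! = o(n^n) (the ratio n!/n^n tends to 0), but log(n!) ~ n log n = log(n^n) by Stirling, so the two agree "up to taking logs" — which is presumably the sense meant:

```python
import math

# log(n!) ~ n log n - n (Stirling), while log(n^n) = n log n, so
# the ratio of the logarithms tends to 1 even though n!/n^n -> 0.

for n in (10, 100, 1000):
    log_ratio = math.lgamma(n + 1) / (n * math.log(n))        # -> 1
    ratio = math.exp(math.lgamma(n + 1) - n * math.log(n))    # -> 0
    print(n, round(log_ratio, 3), ratio)
```

So whether "O(n^n) = O(n!)" is right depends on whether one compares the functions or their logarithms.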
I thought I saw a name for O(exp(poly(n))) -- that is, exp(polynomial). Has anyone seen a name for this, or a name for O(n^n) for that matter? CRGreathouse ( t | c) 04:34, 20 September 2006 (UTC)
What do the formal properties mean? In particular, what is the meaning of O(fg) = O(f)O(g)? -- NeoUrfahraner 14:11, 18 Mar 2005 (UTC)
The only possible meaning of this would be that if h=O(fg), f'=O(f), g'=O(g), then h = O(f' g'), but the previous notation is indeed not completely well defined/correct. (The other way round, i.e. O(f) O(g) = O(fg) would be correct, however.) — MFH: Talk 13:46, 24 May 2005 (UTC)
Thank you -- NeoUrfahraner 06:25, 27 May 2005 (UTC)
What I am missing on this page is remarks on the relation between the various notions, in particular between Big-O and little-o. Intuitively, it seems that f=o(g) iff f=O(g) and g!=O(f). The "only if" direction of this claim is clearly true, but I am not so sure about the "if" direction. Is it true? If yes, it should be noted in the section discussing Big o and little o. If it is not true, there should be a brief remark why this is the case.
After a bit more thinking, I see now that the converse of the above does not hold. Take the following functions:
f(n) = 2^n if n is even, and n if n is odd
g(n) = 0.5 n
Then f=O(g), g!=O(f), and f!=o(g). I guess the converse holds only for monotonic functions.
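This even/odd idea can be checked numerically. Note that with f(n) = 2^n on even n (as written above), f is in fact NOT O(g); a variant that does exhibit all three relations — f = O(g), g != O(f), f != o(g) — is the illustrative choice f(n) = n for even n, 1 for odd n, with g(n) = 0.5n:

```python
# Sketch: f = O(g) (ratio f/g bounded), g != O(f) (ratio g/f
# unbounded), yet f != o(g) (ratio f/g does not tend to 0).

def f(n): return n if n % 2 == 0 else 1
def g(n): return 0.5 * n

ratios_fg = [f(n) / g(n) for n in range(1, 1000)]
ratios_gf = [g(n) / f(n) for n in range(1, 1000)]

print(max(ratios_fg))        # bounded by 2        -> f = O(g)
print(max(ratios_gf))        # grows with n        -> g != O(f)
print(max(ratios_fg[-10:]))  # still 2 for large n -> f != o(g)
```

The ratio f/g keeps returning to 2 on even n, so it is bounded but does not vanish — exactly the gap between O and little-o.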
This should really be moved to order of growth, asymptotic growth of a function, or something to that effect. Describing orders of growth in an article called "Big O notation" makes about as much sense as describing limits in an article called "limit notation", addition in an article called "summation notation", or for that matter mathematics under "mathematical notation". Of course, the notation must be described, but that's not the primary purpose of the article. Fredrik | talk 11:40, 10 November 2005 (UTC)
In the "Multiple Variables" section, shouldn't the second equation,
∀ n, m > N: f(n,m) ≤ n^2 + m^3 + C(n+m)
end in just "+ C.", since C is just some constant and doesn't depend on n and m?
You find Õ instead of Θ (theta) at many places in the article. is this correct? -- Abdull 12:50, 3 March 2006 (UTC)
No, that's wrong. Õ is soft-O, which means that logarithmic factors are ignored. Big-Theta, Θ, means that the function is guaranteed to have that particular order--no more, no less. -- Rovenhot 00:52, 23 June 2006 (UTC)
Two comments on notation.
McKay 11:36, 15 June 2006 (UTC)
I'm a mathematician, and I can honestly say I have never seen the "element of" notation used in connection with the big O notation. Is this something that's common in computer science? If not, it should be deleted from the article.-- 209.43.9.143 19:19, 19 June 2006 (UTC)
I see a merge suggestion on the top of the page but I don't think there is a case for it? Should it be removed then? -- Evanx( tag?) 05:54, 22 June 2006 (UTC)
Definitely merge. —Steven G. Johnson 13:37, 6 July 2006 (UTC)
Definitely merge under the title "Landau notation" and make this page a redirect. 141.20.53.108 18:00, 7 September 2006 (UTC)
I think that this article should stay where it is. Merging content from Landau notation can be done as needed. CRGreathouse ( t | c) 13:01, 20 September 2006 (UTC)
Should not the small o be used in the example where sublinear is written? helohe (talk) 14:23, 28 July 2006 (UTC)
So how do you pronounce "O(n)"? [1] says it's pronounced big oh of n. Maybe we should add that? -- Felix Wiemann 18:35, 9 August 2006 (UTC)
c) 04:37, 20 September 2006 (UTC)
This claim is wrong:
A counterexample is to take g(n)=1/n if n is odd, and g(n)=n if n is even. A correct sufficient condition would be g(n)=Ω(1) but Ω has not been introduced in the article yet. McKay 00:58, 11 August 2006 (UTC)
Already covered:
I have yet to see this answered, and answering it would solve many questions I have when reading this article:
My gut reaction is, "No." But is there some formal answer as to why this may or may not be the case?
Are there any objections, corrections, or suggestions to this modification of the table?
Notation | Name | Example |
---|---|---|
O(1) | constant | Determining if a number is even or odd |
O(log* n) | iterated logarithmic | The find algorithm of Hopcroft and Ullman on a disjoint set |
O(log n) | logarithmic | Finding an item in a sorted list |
O((log n)c) | polylogarithmic | Deciding if a number is prime with the AKS primality test |
O(n) | linear | Finding an item in an unsorted list |
O(n log n) | linearithmic, loglinear, or quasilinear | Sorting a list with heapsort |
O(n2) | quadratic | Sorting a list with insertion sort |
O(nc), c > 1 | polynomial, sometimes called algebraic | Finding the shortest path on a weighted digraph with the Floyd-Warshall algorithm |
O(cn) | exponential, sometimes called geometric | Finding the (exact) solution to the traveling salesperson problem |
O(n!) | factorial, sometimes called combinatorial | Determining if two logical statements are equivalent [2] |
double exponential | Finding a complete set of AC-unifiers [3] |
Changes:
Things that might need work:
General notes:
CRGreathouse ( t | c) 07:31, 20 September 2006 (UTC)
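The ordering in the proposed table can be sanity-checked by evaluating representative functions at a moderate n; a rough sketch (the functions stand in for the table rows, with n = 20 chosen so that the asymptotic ordering has already kicked in):

```python
import math

# Evaluate one representative per growth class and confirm the
# table's rows really are listed in increasing order of growth.

rates = [
    ("constant",     lambda n: 1),
    ("logarithmic",  lambda n: math.log(n)),
    ("linear",       lambda n: n),
    ("linearithmic", lambda n: n * math.log(n)),
    ("quadratic",    lambda n: n ** 2),
    ("exponential",  lambda n: 2 ** n),
    ("factorial",    lambda n: math.factorial(n)),
]

n = 20
values = [fn(n) for _, fn in rates]
assert values == sorted(values)   # rows are in growth order
for name, fn in rates:
    print(f"{name:12} {fn(n):,.0f}")
```

At small n the ordering can be violated (e.g. 2^3 < 3^2), which is exactly why Big O statements only constrain behavior beyond some n0.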
F.R., April 3 2008:
I merged asymptotic notation and Landau notation into this page, following the longstanding merge tags, as those pages were (almost) complete subsets of the information on this page, and this page had received by far the most attention of the three (and thus should have its history preserved).
However, now that they are merged, there is an argument for moving this page and its history to asymptotic notation, as that title seems broader.
On the other hand, since this page had received the most attention of the different titles, it seems that editors have already voted with their feet. Also, in common usage, the term "Big O notation" seems to be a synecdoche for asymptotic notation in general.
What do people think?
—Steven G. Johnson 15:40, 26 September 2006 (UTC)
The article at the moment uses a real mess of different notations. This won't do. We should choose a single notation and use it throughout except for a section that describes alternative notations. The only suitable default notation is the one using "=" because that is what is used 99.9% of the time in professional mathematics and CS. Anyone disagree? McKay 08:17, 16 November 2006 (UTC)
In the properties section there is a paragraph on the effects of changing units. This implies that doing so can affect the order of the algorithm, which sounds counter-intuitive.
It also says (in part): "If, however, an algorithm runs in the order of 2^n, replacing n with cn gives 2^(cn). This is not equivalent to 2^n (unless, of course, c=1)." I thought the actual base of the exponent was irrelevant, and we simply used base 2 as a convenience for computer nerds who are used to thinking in binary. 2^c is still a constant base, and so it's still exponential.
Any comments? Am I right? Gazimon 11:07, 5 December 2006 (UTC)
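A quick numeric sketch of why the article's claim holds (the point the question above is probing): for polynomials, replacing n with cn only changes the constant factor, but for exponentials it changes the O-class, because 2^(cn) = (2^c)^n and the ratio 2^(cn)/2^n = 2^((c-1)n) is unbounded:

```python
# (cn)^2 / n^2 is the constant c^2, so (cn)^2 = O(n^2).
# 2^(cn) / 2^n = 2^((c-1)n) grows without bound for c > 1,
# so 2^(cn) is NOT O(2^n) -- "exponential" is not one class.

c = 2
poly_ratios = [(c * n) ** 2 / n ** 2 for n in range(1, 50)]
exp_ratios = [2 ** (c * n) / 2 ** n for n in range(1, 50)]

print(set(poly_ratios))   # a single constant: same O-class
print(exp_ratios[:5])     # 2, 4, 8, ...: ratio is unbounded
```

So "still exponential" is true informally, but 2^(2n) and 2^n are genuinely different Big O classes.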
With the way is defined, it would almost seem to confuse comparisons of the running time of algorithms. Suppose we have two functions, and . I could, quite rightly according to the definition and the examples presented, claim that and that . Now, when deciding which algorithm is the fastest, you would think that would be the best to implement, when, clearly, this is not actually the case.
Perhaps we could add some comment about usefulness -- namely that in order to compare two functions, say and , we find minimal, 'monic' expressions (i.e. without any arbitrary constants -- for the sake of cleanliness: it is really seen that ) for and before proclaiming that and . Then, we can come to the conclusion that .
I hope this makes sense. 137.205.139.149 16:24, 12 January 2007 (UTC)
I'm not a mathematician, but I like math. I also like Unix, and the big O notation comes up a lot in Unix. My understanding of it is: for a function f(n), O(f(n)) is the maximum number of iterations that may be required to compute f(n).
It seems to match the examples given in the article:
Notation | Name | Example |
---|---|---|
O(1) | constant | Determining if a number is even or odd |
O(log n) | logarithmic | Finding an item in a sorted list with the binary search algorithm |
O(n) | linear | Finding an item in an unsorted list |
Did I get this right?
If I did, what the £%*µ! does THIS mean?
And this?
1. What's g? I assume that g(x) is the same as O(f(x))
2. Same thing for M. What is M? It makes sense in the mathematical example, but not in the practical ones, the ones in Common_orders_of_functions.
The second definition, I can sort of make out:
But I don't understand how this relates to the examples I gave above. (If they are indeed right.)
If that vulgarisation of the mathematical definition is correct, how can I apply it to the sign-up sheet or the phone book examples? Or for that matter to the odd and even problem? How can I apply the two mathematical definitions to those problems?
The closest I can get is with the sign-up sheet example. f(x) is defined as follows. There are n names on a list. Each x is a person looking for someone on the sign-up sheet, reading from top to bottom. In the best-case scenario, the person they're looking for is the first, so they only have to read one name. In the worst case, they are looking for the last name and have to read all of the names on the list. The maximum number of operations required is n. Thus Big O of f(x) is O(n). Would x be the first person to read all of the names? If there are 14 names on the list, f(x) will always be less than or equal to 14. But in my example, how do x and n relate to the mathematical definition? In the mathematical definition, both f and g use the same argument.
Eje211 17:31, 12 January 2007 (UTC)
Big-O notation is just a way of being able to compare expected run times without getting bogged down in detail. Suppose you need to run an algorithm on a set of n inputs. Perhaps the actual number of 'operations' (however you choose to define that term) that would need to be carried out is something like 2n^2 + 3n - 5. This is really only interesting when n gets large, in which case the only term that really matters is the n^2 term, which dwarfs the others. And in fact, the constant coefficient isn't really that important, either. So we just say that the algorithm runs in 'order n squared' time, or O(n^2).
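The "only the n^2 term matters" point in the reply above is easy to see numerically: the ratio T(n)/n^2 settles down to the leading coefficient as n grows, which is exactly the information Big O throws away. A minimal sketch:

```python
# For T(n) = 2n^2 + 3n - 5, the ratio T(n)/n^2 approaches the
# constant 2 -- so lower-order terms and the coefficient vanish
# into the "C" of the definition, and we just write O(n^2).

def T(n):
    return 2 * n**2 + 3 * n - 5

for n in (10, 1_000, 1_000_000):
    print(n, T(n) / n**2)   # approaches 2
```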
Is "algebraic integer" really used as a term for O(n^k), 0<k<1? If not, what term should be used? I like the recent addition, assuming it's otherwise correct, but the term doesn't look right at all. Help? CRGreathouse ( t | c) 00:21, 19 January 2007 (UTC)
May I ask, McKay: In what context is n^c (n an integer, c a rational number) called a "rational integer"? Certainly not in any part of mathematics that I've ever encountered. Daqu ( talk) 17:06, 3 June 2008 (UTC)
what would mean that ?
and the fact that for big x then ?
could we say something about f(x) in both cases? -- Karl-H 23:02, 25 January 2007 (UTC)
The table currently contains the functions in this order:
Notation | Name |
---|---|
O(log n) | logarithmic |
O(n) | linear |
O(n log n) | linearithmic |
O(n^2) | quadratic |
O(n^c), c > 1 | polynomial |
Is it in fact the case that O(n^c), 1 < c < 2, grows faster than O(n^2)?
There is a legitimate reason why we use "∈" instead of "=": there is normally more than one function that is Big O of another function. Consider two functions f_1 and f_2.
Both f_1 and f_2 may be Big O of the same g.
If we write f_1(n) = O(g(n)) and f_2(n) = O(g(n)), are we then going to say f_1(n) = f_2(n)? -- CBKAtTopsails 16:13, 25 April 2007 (UTC)
There appears to be a lot of abuse of notation in this article that needs to be cleaned up to make it easier to read particularly for people who are not familiar with the subject. I've done a few. Who is going to volunteer to finish the rest? -- CBKAtTopsails 16:27, 25 April 2007 (UTC)
Even though this is about a 'scientific' or 'mathematical' subject, the summary should be much more approachable. I believe it should be changed to something like:
Big O notation roughly estimates the runtime, a relative measure of computer instructions, of a function in terms of n where n is the size of the input.
This is much clearer and less cryptic. No one without significant education is going to understand the current summary, let alone the page. It's like someone is trying to validate their extended education by making this simple topic more complex than it actually is. This should be simplified to meet wikipedia standards. -- 65.189.189.23 22:40, 5 March 2007 (UTC)
I agree completely that we should be as clear as possible for beginners. Unfortunately, your suggested text won't work.
First of all, the use in computer science is just a specialization of its use in mathematical analysis. But even assuming you're only talking about its application in the complexity theory and the analysis of algorithms, it has several serious problems:
In sum, though I agree completely that we need to make an effort to be simple, we also have to be accurate. I'll be happy to work with you and other editors to improve the article. -- Macrakis 00:14, 6 March 2007 (UTC)
How about: Big O notation is often used to estimate an algorithm's performance, usually speed or memory usage, typically as a function of the size of the input. Big O notation is also used in many other scientific and mathematical fields to provide similar estimations.
Slight variation.. less terse but more common English replacements for: estimate, function, input. Going to wait a bit and ask around office of programmers and then will change. Big O notation is often used to characterize an algorithm's performance, usually in terms of how processing time or working space requirements grow as the number of items to be processed grows. Big O notation is used in a similar way in many other scientific and mathematical fields. — Preceding unsigned comment added by 72.14.228.137 ( talk) 21:48, 26 July 2011 (UTC)
In "Formal Definition" it says "for all Xo". Shouldn't it be "exists Xo"? Italo Tasso 22:11, 27 May 2007 (UTC)
I still don't think the formal def. is quite right. It should be: for some M > 0, there exists x0 such that |f(x)| <= M * |g(x)| for all x > x0. It seems to be currently stating the converse. Anyone agree? —Preceding unsigned comment added by 144.26.117.1 ( talk) 22:17, 3 September 2008 (UTC)
I agree that the formal definition is incorrect. At least according to Erdelyi (Asymptotic Expansions, DOver 1956). Erdelyi says (page 5) that for x in R (R a Hausdorff space) we write phi = O(psi) if there exists a constant (i.e. a number independent of x) A, such that |phi| <= A |psi| for all x in R.
There is no mention of a limit point, yet. Erdelyi goes on to write: phi = O(psi) as x-> xo if there exists a constant A and a neighborhood U of xo so that |phi| <= A |psi| for all x common to U and R.
On a related (but different) note: I confess to being bothered by the fact that in Kevorkian and Cole (Perturbation Methods in Applied Mathematics) they define O differently. K&C say: phi, psi, etc. are scalar functions of the variable x (which may be a vector) and the scalar parameter epsilon. The variable x ranges over some domain D and epsilon belongs to some interval I. Then they define big O: Let x be fixed. We say phi = O(psi) in I if there exists a k(x) such that |phi| <= k(x) |psi| for all epsilon in I.
Erdelyi goes on to say: "If the functions involved in an order relation depend on parameters, in general also the constant A, and the neighborhoods U, U_epsilon involved in the definitions will depend on the parameters. If A, U, U_epsilon may be chosen to be independent of the parameters, the order relation is said to hold uniformly in the parameters." I'm not sure what the difference is between "x" and "a parameter". —Preceding unsigned comment added by 65.19.15.124 ( talk) 16:25, 29 November 2009 (UTC)
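The quantifier order being debated in this thread ("there exist M and x0 such that the bound holds for all x > x0") can be illustrated with a numeric spot-check. This sketch uses hypothetical witnesses M and x0 for a toy pair f, g; it demonstrates the shape of the definition, not a proof:

```python
# f = O(g) iff THERE EXIST M > 0 and x0 such that
# |f(x)| <= M * |g(x)| FOR ALL x > x0.

def holds(f, g, M, x0, xs):
    """Check the bound over a finite sample of x > x0."""
    return all(abs(f(x)) <= M * abs(g(x)) for x in xs if x > x0)

f = lambda x: 3 * x + 100
g = lambda x: x

xs = range(1, 10_000)
print(holds(f, g, M=4, x0=100, xs=xs))  # True: 3x+100 <= 4x once x > 100
print(holds(f, g, M=4, x0=0, xs=xs))    # False: bound fails for small x
```

Note how the same M works once x0 is large enough — swapping the quantifiers (demanding the bound for all x0) would state something quite different, which is the "converse" complaint above.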
I believe that, in general, both addition and multiplication are of order log(N). This is surely worthy of mention, as these are rather commonly used operations (and people tend to assume that they take constant time). Pog 16:06, 1 November 2007 (UTC)
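The claim above is about bit (or digit) complexity: schoolbook addition touches each digit once, and the number of digits of N is about log N. A small sketch, counting decimal digits as a stand-in for bit operations (the helper name is made up for illustration):

```python
# Schoolbook addition of two numbers processes each digit once,
# so its cost grows with the digit count -- i.e. O(log N) in the
# *value* N, not O(1). Decimal digits proxy for bit operations.

def digit_ops_to_add(a, b):
    """Digits processed by schoolbook addition (ignoring carries)."""
    return max(len(str(a)), len(str(b)))

for k in (1, 10, 100, 1000):
    n = 10 ** k
    print(k, digit_ops_to_add(n, n))   # grows like log10(n) = k
```

The "constant time" intuition only holds for fixed-width machine words; for arbitrary N the cost is logarithmic in N (and schoolbook multiplication is quadratic in the digit count).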
Hello,
in the beginning of the article it is said that capital theta notation is "described below". It is not. I could remove this "described below" phrase, but it would be nice if someone actually describes it. 87.228.109.83 16:22, 10 November 2007 (UTC)
Wouldn't it be simpler to define big Omega and big Theta as follows?
Is that correct? —Preceding unsigned comment added by 84.221.201.63 ( talk) 04:40, 3 April 2008 (UTC)
I would guess that many (most?) readers who visit this page want information about big O notation from a computer, not maths, perspective.
Can some kind, clever person give a simple, authoritative explanation with examples?
Sam Dutton 22:16, 14 November 2007 (UTC)
More than a year ago, someone changed the notation
f(n) = O(g(n))
into
f(n) ∈ O(g(n))
and nobody complained (I think). I want to complain now, as almost no serious publications in mathematics or CS use "∈". In fact the most common usage is "=". Who will object to changing it now? McKay ( talk) 20:49, 14 April 2008 (UTC)
I find this article maddening because until one gets to the "formal definition" section -- which is admirably clear -- there is nothing that can be even called an informal definition. Before there is even an informal definition, I don't want to know anything else about big-oh notation. (The rough description in the opening sentence is accurate, but much too vague to be of any help. All the intervening stuff up to the "formal definition" is just plain annoying, absent any clear definition of what is being discussed.) Daqu ( talk) 17:00, 3 June 2008 (UTC)
The article says
O(f(x)) = O(g(x))
but these expressions are meaningless or their meaning has not been defined in the article: we have defined the meaning of
f(x) = O(g(x))
but not the meaning of
O(f(x)) = O(g(x))
In analytic number theory (which is what I do) typically O(f(x))=O(g(x)) just means f(x)=O(g(x)). It is used in long strings of equalities. Jordan toronto ( talk) 19:00, 24 June 2008 (UTC)
The first three claims in the new column in the second table are wrong, and the last one says nothing new. That makes 2/6. The previous version of this column was also useless in my opinion. I suggest deleting it completely. McKay ( talk) 12:43, 28 July 2008 (UTC)
Since the first two are still wrong, I'm going to remove them and leave the cells blank for the moment. I added the column (not knowing there had been a previous attempt) because I think the analogy between O, theta, and omega and the <,>,<=,>=, etc symbols makes them much easier to understand. Feel free to remove the column, but if you could somehow replace the top two with statements that are both mathematically correct and still make connection (as CRGreathouse did), I think that would be ideal. mcs ( talk) 21:44, 28 July 2008 (UTC)
The comment(s) below were originally left at Talk:Big O notation/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.
Geometry guy 14:14, 21 May 2007 (UTC)
Just what are "tight bounds"? The only reference I can find in this article on "Big O". 71.117.209.73 23:16, 21 June 2007 (UTC) Charles Pergiel chuck.pergiel@verizon.net |
Last edited at 23:16, 21 June 2007 (UTC). Substituted at 20:04, 2 May 2016 (UTC)