This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3 | Archive 4 |
Would it be worth mentioning somewhere in the article that two algorithms can have the same complexity, yet one may be significantly faster in real-world implementations? For example, if algorithm A takes n seconds and algorithm B takes 10^9*n seconds, obviously A is a billion times faster even though both algorithms are O(n) (even Θ(n)). Though this may be obvious to those of us familiar with the notation or with limits in general, it may be entirely unobvious to novices. Thoughts? I'm not sure if this would be encyclopedic enough to warrant inclusion. Mickeyg13 ( talk) 21:22, 18 September 2008 (UTC)
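For reference, the definitional check behind that example: taking C = 10^9 in the definition of O,
$$10^9 \cdot n \;\le\; C \cdot n \quad\text{for all } n \ge 1,$$
so both running times satisfy f(n) = O(n). The notation deliberately absorbs the constant factor, which is exactly why two O(n) algorithms can differ by a factor of a billion in practice.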
This is not right. There might not be any untruncated series, or, as in the case highlighted (asymptotic expansions), the untruncated series might diverge. An O() term is used to indicate how accurately the truncated series approximates the original function, not how accurately it approximates the infinite series. Here is my attempt:
McKay ( talk) 13:39, 19 September 2008 (UTC)
Thanks for the improvement to my wording. — David Eppstein ( talk) 14:40, 19 September 2008 (UTC)
I just read the article, as I've heard the term O(n) several times by hackers when referring to things like filesystem performance.
However, I don't understand it one bit. The article looks like it's been written for those who already understand it, which is crap. Can somebody write it in a format that's actually understandable by human beings?
I'm not stupid. I've written a ton of simple algorithms, do PHP+MySQL programming as a hobby, and have written C and Delphi in the past. I did an IQ test recently, and it came out as 121. Yet I cannot understand this article. Something is fundamentally wrong. -- 121.44.28.236 ( talk) 23:03, 11 March 2009 (UTC)
I think there is too much "stuff" in this article that is only marginally useful. IMHO it would be more helpful if it were pared down a little. I would be happy to try to make changes along these lines by axing some stuff, but I'm not sure of the etiquette for undoing other people's work. (It is also slightly inconsistent in its use of O(.) and Theta(): e.g., in the "Orders of common functions" table, the implication is clearly that the example algorithms have at least the running time shown.) Alex Selby ( talk) 23:31, 17 April 2009 (UTC)
(Just watching recent edits.) I didn't know about the "super-exponential" in the tetration sense, but in practice (or at least in computer science and analysis, where Big O notation is used) AFAIK "superexponential" does usually mean 'something that grows faster than exponential'. Shreevatsa ( talk) 14:52, 16 July 2009 (UTC)
That section heading made me think of complex numbers. Perhaps we could reword this. Professor M. Fiendish, Esq. 07:25, 25 August 2009 (UTC)
I believe that (not the other way around). So the table is wrong. Am I wrong? — Preceding unsigned comment added by 189.122.210.182 ( talk) 22:59, 24 September 2009 (UTC)
Shouldn't we add that the product property is valid only when ? Here it says that given and then . This is false if (and maybe also if x goes to any scalar k?): let's say that we have and ( and constants). Now, clearly and . However, , and by the definition given as ; more importantly, , which invalidates the property. In case I'm getting it all wrong, please do correct me; that would be very helpful! :) -- Roberto →@me 17:20, 18 November 2009 (UTC)
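For reference, a sketch of the usual argument for the product property in the x → ∞ case, where it does hold: if $|f_1(x)| \le C_1 |g_1(x)|$ and $|f_2(x)| \le C_2 |g_2(x)|$ for all $x \ge x_0$, then
$$|f_1(x) f_2(x)| \;\le\; C_1 C_2 \, |g_1(x) g_2(x)| \quad\text{for all } x \ge x_0,$$
so $f_1 f_2 = O(g_1 g_2)$. The argument only needs the two bounds to hold simultaneously beyond some point, which is where any restriction on the limit process would enter.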
Quasipolynomial time is a notion which is rapidly becoming common usage among algorithm designers, especially those working in approximation algorithms. However, such a definition is missing here. I am adding it to the table, and I added it to the blurb on subexponential time. -- deeparnab —Preceding unsigned comment added by Deeparnab ( talk • contribs) 17:25, 1 December 2009 (UTC)
We have that $f(x) = O(g(x))$ (as $x \to \infty$) iff
$$\limsup_{x\to\infty} \left| \frac{f(x)}{g(x)} \right| < \infty.$$
Should we also add similar conditions for big-Omega and big-Theta bounds? I was surprised to not find them in the article.
If I recall correctly the appropriate conditions are $f(x) = \Omega(g(x))$ iff
$$\liminf_{x\to\infty} \left| \frac{f(x)}{g(x)} \right| > 0,$$
and $f(x) = \Theta(g(x))$ iff
$$0 < \liminf_{x\to\infty} \left| \frac{f(x)}{g(x)} \right| \le \limsup_{x\to\infty} \left| \frac{f(x)}{g(x)} \right| < \infty.$$
These conditions can be useful in practice for finding asymptotic bounds on some less friendly functions. 24.224.217.167 ( talk) 02:44, 18 February 2010 (UTC)
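As a quick illustration of how these conditions get used, take $f(n) = 2n^2 + n\sin n$:
$$\lim_{n\to\infty} \left| \frac{2n^2 + n\sin n}{n^2} \right| = 2,$$
so both the limsup and the liminf equal 2, giving $f(n) = \Theta(n^2)$ despite the oscillating lower-order term.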
In the section on little-o, the clause
while the former has to be true for at least one constant M the later need to be true for any positive constant.
doesn't make clear just what "the former" and "the latter" refer to, especially since the definition of Big-Oh does not appear there for comparison.
Also, "latter" is misspelled, and some of the language in this section just ain't English:
...if and only if, for any positive constant M, exist a constant x0, such that...
should be
...if and only if, for every positive constant M, there exists a constant x0, such that...
and
...the latter need to be true ...
should be
...the latter need be true ...
I also changed "for any" to "for every" in the first correction above, following Jim Baumgartner's advice many years ago at Dartmouth: since "any" can be ambiguous, it's safest to avoid using it at all, especially since we don't really need it; "some" and "every" suffice. So I also prefer
... the latter need be true for every positive constant.
As a general comment, I've always encouraged students to pronounce f = O(g) as "f is of order no more than g" so as to discourage the kind of misuse of Big-Oh that Knuth objects to in [6].
Finn (John T) ( talk) 14:14, 28 May 2010 (UTC)
In the little o notation section, $o(f) \subsetneq O(f)$ is mentioned as a fact. As it's the only place in that section where $\subsetneq$ appears, I can only assume that someone went out of his way to say that it's a proper subset, that is, for any $f$ there is a function bounded but not dominated by $f$. Now, this is clearly true for all $f$ except the constant zero function (or something locally identical to it for all sufficiently large $x$), which means that it's pretty much true whenever you would use it, but I can't see anything that strictly disallows $f$ to be constantly zero. Have I missed something? 85.226.206.92 ( talk) 05:42, 29 July 2010 (UTC)
I've been studying algorithms at a fairly simple level and was looking for some helpful information on Big-O - but I scanned this page and very quickly gave up - and I wonder how many others do too. A simple table showing the various common O's in order (similar to this one I eventually found: http://leepoint.net/notes-java/algorithms/big-oh/bigoh.html) and perhaps a graph showing them visually would be a huge help to everyone looking for introductory information on the topic - otherwise it just puts people with little mathematical background off the topic. —Preceding unsigned comment added by Essentialtech ( talk • contribs) 22:14, 1 November 2010 (UTC)
This page needs to address the meaning of subscripts on the O somewhere, as in $f(x) = O_\varepsilon(g(x))$,
meaning that the constant implicit in the big O notation is allowed to depend on ε. RobHar ( talk) 22:24, 13 April 2011 (UTC)
This links to an article which defines a convex cone as a certain kind of subset of a vector space. If the statement is true, it would be interesting to see more discussion of the vector space of big O sets: how exactly it's defined. Or does the writer only mean "somewhat analogous to a convex cone". Dependent Variable ( talk) 16:23, 17 April 2011 (UTC)
The last user to revert my edit says in the edit summary "that's not the standard usage of the term; that definition would not allow log(x) = O(x^e) for e > 0, for example."
x→∞ always means both directions, that is, x→+∞ and x→-∞. This is the standard, at least in calculus and mathematical analysis. If there is another convention in computer programming, you should point it out and reference it. I think you most likely confused the real number x with the natural number n, the latter being the one used most often in programming; since n is always non-negative, n→∞ equals n→+∞. For instance, log(x) = O(x^α) is indeed not correct, but log(n) = O(n^α) is.-- Netheril96 ( talk) 03:12, 21 May 2011 (UTC)
x → ∞ in complex analysis can mean that x is going off to infinity in any direction, and in that field O() might cover any such path to infinity. Often you see statements like "f(x) = O(g(x)) if x → ∞ with arg(x) in some interval". In real analysis, usually x → ∞ means x → +∞ and x → -∞ is different. So Netheril96 has a point, though I'm not sure of the best way to handle it in the article. McKay ( talk) 03:47, 25 May 2011 (UTC)
This issue was raised by me before but no action was taken. Now I propose to take action:
So, I'm going to change it unless someone makes a pretty good case for keeping . McKay ( talk) 02:56, 31 May 2011 (UTC)
The article is a bit obscure about the origin of the notation. It mentions Hardy as the one proposing it, but there is no reference given. The Hardy-Littlewood paper Some Problems of Diophantine Approximation that is mentioned in the relevant section does not use this notation as far as I can see. The Order of Infinity paper of Hardy also does not mention this notation; it uses to say that , which is what the article says the meaning of is. Did I miss something in one of those papers? Dorian in the skies ( talk) 08:03, 10 June 2011 (UTC)
The article at the moment defines Ω() and Θ() using absolute values. No source is given. It makes perfect sense to use the absolute value for O(), as most sources do, but it is less clear for Ω() and Θ(). Knuth's famous SIGACT article uses absolute value for O() only. Can someone please check Vol 1 of TAOCP (3rd edn), section 1.2.11.1? To be precise, I'm wondering why we define as
rather than
People in asymptotic combinatorics like me usually use the second version, for example we write for a function decaying faster than some negative power. People in algorithms only ever use it for lower bounds on positive things so it makes no difference to them. McKay ( talk) 06:11, 22 June 2011 (UTC)
This article may be too technical for most readers to understand. (July 2011)
I simply want to be able to compare the efficiency of computer programs and, in particular, sorting algorithms. This is something that is useful for people who want to understand information technology. This article presupposes a knowledge of mathematics and mathematical notation that the reader might not have. For example, the articles on sorting algorithms use the 'Ω' symbol, so I went to this article to look up the meaning of 'Ω' but this article explains it using a formal definition. (It is especially annoying when textbook authors and Wikipedia editors alike refer to difficult mathematical concepts as "trivial.") I understand the need for precision in formal definitions, but can you please also provide informal definitions that are actually comprehensible? (As I see it, the point of a general encyclopædia is to impart knowledge to the public, rather than merely to share it amongst experts.) 69.251.180.224 ( talk) 17:44, 30 June 2011 (UTC)
People who learn about the O-notation often expect it to induce a total quasi-order, at least on ascending functions on the positive numbers, which means that if f and g are two such functions, at least one of $f = O(g)$ or $g = O(f)$ holds. This is not true in general, but there are papers that prove it for functions composed using some basic operations (like sums, products, exponents) and even discuss order-theoretic properties of this set. However, I cannot recall the bibliographic references. I for one would be thankful if somebody could help in including such information in the article.
AmirOnWiki ( talk) 10:13, 12 July 2011 (UTC)
The article currently states (in the Generalizations and related usages section):
Could someone who knows more about this topic rewrite this to be more precise and specify what those "quite general spaces" are, perchance? I'd be quite interested. Thanks! 82.82.131.70 ( talk) 23:01, 22 September 2011 (UTC)
but for programming, wouldn't it be easier to just give f(x,y,z,...) where the parameters to the function are the parameters to the algorithm? e.g.
function badSum(number x, number y, number z) {
    number sum = 0;
    for (number i = x; i > 0; i--) {
        sum = sum + 1;
    }
    sum = sum + y + z;
    return sum;
}
would have
f(x,y,z) = x+2
or the like?
Forgive me if I'm stupid. — Preceding unsigned comment added by 75.72.68.21 ( talk) 01:26, 20 April 2012 (UTC)
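A minimal sketch of that idea in Python, with a hypothetical bad_sum helper, under the assumption that we count loop iterations and additions as the cost measure:

def bad_sum(x, y, z):
    """Sum x + y + z the slow way; the cost depends on x only."""
    steps = 0            # count of 'basic operations' performed
    total = 0
    for _ in range(x):   # x iterations, one addition each
        total += 1
        steps += 1
    total += y + z       # two more additions, independent of input size
    steps += 2
    return total, steps  # steps == x + 2, matching f(x, y, z) = x + 2

So the multivariate cost function here is f(x, y, z) = x + 2 = O(x), which is the kind of comparison the comment above is after.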
Please do not move pages
-- Eloquence 15:12 29 May 2003 (UTC)
-- Taku 15:16 29 May 2003 (UTC)
Sure. -- Taku 15:26 29 May 2003 (UTC)
I also vote against the move to "Big oh" until / unless cites are presented to show that it is now the common use: maybe my CS training is old-fashioned, but I recall the use of "big O" throughout. However, show me the evidence, and I'll be willing to change my mind. The Anome 15:27 29 May 2003 (UTC)
Here is the result of my research. I couldn't find ten books containing either big O notation or big oh notation, but what the common usage is seems apparent.
Except for two books, all the books I grabbed at random above use big oh notation. -- Taku 19:11 29 May 2003 (UTC)
I think some of them may be saying how to pronounce big-O. Still, Knuth is usually pretty canonical. Can we call the article "O-notation", and direct both big-O and big-oh here? The Anome 19:23 29 May 2003 (UTC)
-- Taku 19:33 29 May 2003 (UTC)
Actually no,
Only three out of 8 refer exclusively to "Big Oh". The others clarify that O-notation is pronounced "Big Oh"; this is to avoid confusion with a zero. We already avoid this by having the text "with a capital letter O, not a zero". I see no reason to change the title from "Big O notation", since this is what Google likes the most, it is perfectly correct, and it does not require us to fix any double redirects (the way I know Taku's moves, he probably won't do it himself). However, a redirect here is of course acceptable.
The page should only be moved to asymptotic notation (note singular) if it actually covers other types of this notation, not in the expectation that it will at some point. -- Eloquence 19:37 29 May 2003 (UTC)
-- Taku 19:46 29 May 2003 (UTC)
No, in case of a much-linked page, it's preferable to
It is not desirable to leave Wikipedia in an inconsistent state (in this case, 45 links that suddenly show the user a confusing redirect page, because of the double redir caused by your move) because your move would then be "easier to revert". It should not have to be reverted, because it was properly discussed first.
You have the annoying habit of simply moving pages around without prior discussion, in the expectation that people will complain if they disagree. That's correct, they will complain, but they will also get pissed. If you want to avoid that, discuss first, move later. -- Eloquence 19:52 29 May 2003 (UTC)
Sorry, I think the comment above is not good. Hostility is the last thing we want. I can see that oftentimes I piss you off; for example, the last time with the data dump. We should be able to have fun, not fight. So my apologies for moving this article suddenly; I have decided that I won't move any articles at all, because I don't want to risk annoying people. You may think that is extreme, but I think it is safe for me because I am often careless. You can see the statement about this on my user page. -- Taku 03:10 30 May 2003 (UTC)
Anyway, the above is completely off-topic. Eloquence, you claim that the number of Google hits for "Big-O notation" is twice that for "Big-oh notation". It is true (by the way, "big o-notation" gets 6,450 while "big oh-notation" gets 2,990). But my impression is that although big O outweighs big-oh, pages with good and formal writing seem to use big-oh notation. First, it is consistent with the other notations, big-omega and big-theta. Omega and theta are the pronunciations of those letters, but so is big-O. Besides, see the list above. Only one textbook uses Big-O notation. I think it is logical to think that the sentence like
Big-Oh notation is used in "Big Java, 2nd Edition" by Cay Horstman on page 712. Superslacker87 11:49, 15 July 2006 (UTC)
Two important points:
1. Search engine "tests" are not Wikipedic: refer to WP:SET
2. This notation hasn't originated in computer science, so the constant reference to computer science needs some justification. I wonder whether WP attracts an unrepresentatively large number of computer scientists, rather than (say) pure mathematicians, thus skewing the discussion.
—DIV ( 138.194.12.224 ( talk) 06:42, 30 August 2012 (UTC))
I know this is a really minor point, but can we standardize the italicization of O? Is it O(n) or O(n)? It's displayed both ways on this page. — Caesura 19:17, 20 Nov 2004 (UTC)
$O(n)$. That's just a big O in "math mode", but it would be rendered in italics, like a function name in TeX's math mode. The equivalent wiki <math> markup would be <math>O(n)</math>, which is rendered as O(n), or it could be approximated by ''O(n)'', which is rendered as O(n). — AlanBarrett 16:46, 15 Dec 2004 (UTC)
\mathcal{O}, probably because it's shorter. — 145.120.10.71 12:12, 14 April 2007 (UTC)
As far as I can see, the logical formatting would be roman, not italic, if the symbol is treated as a function. See references cited at Italic_type#When_to_use, with URLs. —DIV ( 138.194.12.224 ( talk) 06:38, 30 August 2012 (UTC))
If I'm not mistaken O(n^c), 0 < c < 1 can be moved to the top of the table of function growth ordering as it always grows slower than O(1). Is that right/wrong? — Preceding unsigned comment added by 124.169.2.38 ( talk) 01:51, 4 August 2012 (UTC)
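For reference, a quick limit check: for $0 < c < 1$,
$$\lim_{n\to\infty} \frac{n^c}{1} = \infty \qquad\text{and}\qquad \lim_{n\to\infty} \frac{n^c}{n} = \lim_{n\to\infty} n^{c-1} = 0,$$
so $n^c$ grows strictly faster than any constant but strictly slower than $n$: it belongs between $O(1)$ and $O(n)$ in the table, not above $O(1)$.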
Under "Formal Definition", the statement:
$$\limsup_{x \to a} \left| \frac{f(x)}{g(x)} \right| < \infty$$
only holds for functions whose values are in the same field, since the two values must be compatible if you want to divide one by the other.
To have the statement hold for functions with arbitrary normable values (like, for example, two different metric vector spaces), the absolute value bars should go around each of the functions f and g, instead of their quotient:
$$\limsup_{x \to a} \frac{|f(x)|}{|g(x)|} < \infty$$ — Preceding unsigned comment added by Soulpa7ch ( talk • contribs) 18:47, 19 November 2012 (UTC)
This page is large, so any objections to setting it for autoarchive at 90 days? Glrx ( talk) 17:31, 20 November 2012 (UTC)
I notice that the section on "Little-o notation" has a hyphen. I personally prefer the hyphen and think it makes more sense. Also, if you look at "Big O" disambiguation page, there are a lot of things referred to as "Big O" and no others which are "Big-O". I would suggest renaming this "Big-O notation," and having "Big O notation" redirect to the renamed page. Natkuhn ( talk) 03:51, 25 December 2012 (UTC)
As someone wanting to discover big O notation, I find this article far too complex far too early in the text. I really do believe that the only people able to understand this article are those who are already expert in its use and thus don't need it. Thus the article is redundant. It would be much better if it started off a bit gentler. Right now I believe it does more harm than good, as it scares people off wanting to learn big O notation. Scottie UK. 01:00, 04 March 2013 (GMT) — Preceding unsigned comment added by Scottie UK ( talk • contribs)
Just started loling after stumbling upon this. But it seems to be real. big-O, little-o, f little-o g, serious? Why would you say that? What's wrong with "f grows slower than g"? Too much complexity to summarize? Calling it omicron couldn't be too hard either. Please rename the article to Landau notation. 91.51.114.77 ( talk) 06:10, 15 January 2013 (UTC)
While the historical notation for asymptotic notation is pretty horrible, saying "f grows slower than g" is entirely imprecise.
Having the name Landau notation as an alias probably isn't so bad, but most people know the topic by the name "Big Oh notation" and not by any other. Tac-Tics ( talk) 22:46, 11 March 2013 (UTC)
Before May 28, this article strongly favoured the computer scientists' version of big-Omega over the number theorists', which was bad. But the discussion now has heavy editorializing against Knuth's introduction of the computer scientists' version. Others have remarked on this (in edit comments and tags) but nobody has changed it, perhaps since it's so well referenced. So I am taking that editorializing out, while leaving in the facts (most of which were already elsewhere in the page). — Toby Bartels ( talk) 16:45, 22 June 2013 (UTC)
Seems like "infinitely many" in the table for the big omega description is not very descriptive. Shouldn't it be "for all n" rather than for infinitely many n? Richard Giuly ( talk) 21:32, 14 August 2013 (UTC)
In Big_O_notation#Other_arithmetic_operators, it says that can be replaced with . Shouldn't this be since ? Made the edit accordingly
50.136.177.227 ( talk) 16:17, 25 September 2013 (UTC)
There is no algorithm that "equals" big O of anything. Big O is a function from a set of functions to another set of functions. So when one writes O(expression), this denotes a set of functions. It is inconsistent to write that any algorithm equals O(expression), because an algorithm is not a set. Furthermore, even if that issue were fixed, it is imprecise and misleading to say that an algorithm is in a big O set. The algorithm itself cannot be compared with the elements of a big O set, which are mathematical functions. It is more precise to indicate that a mathematical function that expresses a property of the algorithm is in some big O set.
For example, it would be incorrect to say that mergesort is in O(n log n), but it would be correct to say that the work of mergesort as a function of input size n is in O(n log n). — Preceding unsigned comment added by 128.237.218.106 ( talk) 11:33, 16 October 2013 (UTC)
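For reference, the standard derivation behind that example: writing T(n) for the work of mergesort on inputs of size n,
$$T(n) = 2\,T(n/2) + \Theta(n) \quad\Longrightarrow\quad T(n) = \Theta(n \log n) \subseteq O(n \log n),$$
so it is the cost function T, a mathematical function of n, that lies in the set O(n log n), not the algorithm itself.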
(Below "Family of Bachmann-Landau notation" table)
This "mnemonics" business appears very unclear and clumsy to me. I strongly suspect it is the private production of a well-intentioned wikipedia contributor. Neither Bachmann, nor Landau, nor Knuth (at least in the references cited) mentioned anything close to what is claimed below the "Bachmann-Landau" notation table. Knuth uses the word "mnemonic" in his paper, but only to refer to the fact that the symbol O is commonly used and has become a reference. I note in passing that the "mnemonic" concerning the Omega symbol refers to the Knuth version of this symbol, and not to the Hardy-Littlewood version (which is the only one Landau knew): so who in the world is supposed to have devised these mnemonic "recipes"? I still think a precise reference is very much needed to justify the conservation of this part in the article. Sapphorain ( talk) 21:21, 3 September 2013 (UTC)
Currently in this article, the notation f(x) is used to denote a function. In the articles Function (mathematics), Limit (mathematics) etc., f is used to denote a function (a rule for mapping numbers to numbers) and f(x) is unambiguously used to denote its value (a number). These are two different approaches to the notation of functions: in the first approach (used in this article), the letter f denotes a dependent variable or (physical) quantity, and when talking about the function's behavior, one must always say what variables are regarded as the independent variables or experiment parameters that it depends on; in the second approach (in Function (mathematics) etc.), f is the name of a function, where function is defined as a rule mapping objects to other objects. Perhaps the notation of functions should be unified in Wikipedia? 90.190.113.12 ( talk) 12:34, 9 January 2014 (UTC)
Sorry if I missed it, but I couldn't find specified anywhere what base of log is being used. Does it not matter since it is only comparing orders? Or is it assumed to be base 2 since it's computer science? Thanks for any clarification. — Preceding unsigned comment added by 68.250.141.172 ( talk) 20:06, 23 January 2014 (UTC)
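For reference, the usual reason the base is left unspecified: logarithms to different bases differ only by a constant factor,
$$\log_a n = \frac{\log_b n}{\log_b a} = \frac{1}{\log_b a} \cdot \log_b n,$$
and big O absorbs constant factors, so $O(\log n)$ denotes the same class whatever the base (2, e, or 10).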
Not only is the paper incorrectly cited, but if you read the paper (available here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1091181/pdf/pnas01947-0022.pdf), the letter Omega does not appear.
Discussion of the definition of limit superior as x approaches infinity is also absent from the paper, which is a relief because it appears nonsensical to me. How can x approach infinity from above?
I'm inclined to strike this section from the page.
Thoughts?
76.105.173.109 ( talk) 20:49, 17 February 2014 (UTC)
I am not a native English speaker. I was wondering if this article should mention the proper, or any, way to say Big-O in spoken English. For instance, I don't know which one is better: "this algorithm has a Big-O of n squared," "this algorithm has a complexity of n squared," or "this algorithm is n squared."
This discussion seems to be settled here Negrulio ( talk) 20:24, 10 June 2014 (UTC)
E.g. f(x) = O(x^2) would be stated as "f(x) has order x squared". It may be a North American way of calling this, but no one in Australia or Britain ever calls this "Big O Notation". It sounds very childish. We simply refer to this as "Order Notation", which is what the original author called it.
In the box on the top right, the caption should have an equals sign. — Preceding unsigned comment added by 81.129.12.48 ( talk) 13:07, 21 August 2014 (UTC)
The source "Big O Notation (MIT Lecture)" references Wikipedia. It seems the citation refers to only a small part of the material in the lecture notes. This is, of course, a Bad Thing, but the source seems otherwise sound. Is it ok to keep?
Wootery ( talk) 16:03, 1 September 2014 (UTC)
The article is hard to read, sorry. Too much information, many duplications. I suggest at least moving part of section 6.4 "multiple usage" (the non-symmetry properties of the notation) to the properties section, since this seems to be an important property under the given non-symmetric definition. Sources are lacking. I fixed a mathematical mistake. -- Yaroslav Nikitenko ( talk) 15:39, 8 October 2014 (UTC)
Reorganization of a WP article is tricky and I have not read this article carefully enough to make such a recommendation. Having said that, it seems to me that the article starts at a rather abstract level. Keep in mind that I am judging from the narrow perspective of teaching first year college engineering students. I like to link to WP articles in my teaching, and your organization is no problem because I can link down to the subsection I need. In this case it would be:
Big_O_notation#Infinitesimal_asymptotics
I usually make these links out of Wikiversity (e.g. Physics equations). In your article, I made the example more explicit by showing first and second order expansions for the same expression. I hope you don't mind. If you like the edit, and also make the request, I can try to align the two text sections embedded in the equations.
Yours truly,
-- guyvan52 ( talk) 15:03, 31 January 2015 (UTC)
Sapphorain reverted my remarks on a deviating definition by Sipser, stating that Sipser.1997's definition is equivalent to that of Graham.Knuth.Patashnik.1994 (given before in the article). She/he is probably right, as I didn't think much about this issue.
However, the article says about Graham.Knuth.Patashnik.1994's definition:
If g(x) is nonzero, or at least becomes nonzero beyond a certain point, the relation f(x) = o(g(x)) is equivalent to
$$\lim_{x\to\infty} \frac{f(x)}{g(x)} = 0,$$
while Sipser.1997 says f(n) ∈ o(g(n)) if
$$\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0,$$
i.e. he doesn't require g(x) to become nonzero beyond a certain point.
Moreover, the article says about Graham.Knuth.Patashnik.1994's definition:
g itself is not [in little-o of g], unless it is identically zero near ∞
while Sipser.1997 says:
g(n)∉o(g(n)) for all g
i.e. he doesn't require g not to be identically zero near ∞.
For these reasons, I felt (and still feel) that both definitions slightly differ.
In particular, Sapphorain's statement (in the edit summary) "a function is never a o of itself (in either definition!)" appears to challenge the restriction "unless it is identically zero near ∞" made in the article; if s/he is right, that restriction is confusing and should be removed. - Jochen Burghardt ( talk) 15:44, 7 March 2015 (UTC)
You are right: g can't be 0 infinitely often when the limit exists. I overlooked that Sipser (see "Further reading" in the article for the full reference) requires the range of f and g to be the set of positive (i.e. >0) reals. Sorry for the confusion. - Jochen Burghardt ( talk) 18:52, 7 March 2015 (UTC)
Is it its own class? Or where does it belong? -- RokerHRO ( talk) 13:55, 19 March 2015 (UTC)
74.111.162.230 ( talk) 16:04, 1 May 2015 (UTC) Using exponential identities it can be shown that x^x = e^(x ln x), so it is faster than an exponential but slower than e^(x^(1+ε)) for any ε > 0. It is about as fast as the factorial, as explained here.
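For reference, comparing logarithms makes the ordering explicit:
$$\ln(e^x) = x, \qquad \ln(x^x) = x \ln x, \qquad \ln\!\big(e^{x^{1+\varepsilon}}\big) = x^{1+\varepsilon},$$
and since $x = o(x \ln x)$ and $x \ln x = o(x^{1+\varepsilon})$ for every $\varepsilon > 0$, the three functions grow in that order. Stirling's formula, $\ln(x!) = x \ln x - x + O(\ln x)$, shows that $x!$ and $x^x$ have logarithms of the same leading order.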
It is a fact that some consider the usual way of using the O notation an abuse of notation, but it is also a fact that some others don't. It is not the role of wikipedia to teach its readers what they should consider. Sapphorain ( talk) 19:35, 7 July 2015 (UTC)
I suppressed a reference in the lead to a so-called "MIT Lecture". I'm very skeptical regarding this denomination, but there is another reason for the deletion: The source given at the end of the "lecture" is … the big oh page of wikipedia! (and an old version, as it begins by stating that the symbol O was invented by Landau, which is false). Sapphorain ( talk) 08:57, 10 July 2015 (UTC)
The section "The Hardy–Littlewood definition" contains this sentence:
I'm a bit horrified to see inequality relations used with little-o at all, but if one must try to derive a meaning for it I see ≤ and ≥ rather than < and >. McKay ( talk) 04:52, 31 August 2015 (UTC)
This article is not consistent wrt. "big O" vs "Big O" (case). What should it be? -- Mortense ( talk) 10:04, 5 February 2016 (UTC)
I have suppressed the two references in the lead, which were (and this is an understatement) very imprecise. The first one, while correctly reporting the first use of O by Bachmann, and its adoption by Landau in 1909, asserted it was "included in a more elaborate notation which included o(.), etc". It is true that Landau adopted the symbol O and invented the symbol o in 1909. But that's it. So the last "etc" in the author's assertion indicates only one thing: that he himself never read Landau's book. The second one asserted that both symbols (o and O) were first introduced by Bachmann in 1894. Which is false. So I replaced these references by Bachmann's and Landau's books. Sapphorain ( talk) 20:36, 3 July 2016 (UTC)