This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3
Madhava didn't invent the Taylor series, but he may have discovered the equivalent series expansion for a few limited cases, which is very different (but still impressive):
Madhava discovered the series equivalent to the Maclaurin expansions of sin x, cos x, and arctan x around 1400, which is over two hundred years before they were rediscovered in Europe. Details appear in a number of works written by his followers such as Mahajyanayana prakara which means Method of computing the great sines. In fact this work had been claimed by some historians such as Sarma (see for example [2]) to be by Madhava himself but this seems highly unlikely and it is now accepted by most historians to be a 16th century work by a follower of Madhava. This is discussed in detail in [4].
That quote is taken from [1], which is also the first external link listed on the Madhava article. I left in the credit to Madhava despite the fact that the above source calls into question whether he or one of his followers (two centuries later) discovered the aforementioned examples, because I'm in no position to weigh the validity of such claims. I think it's still significant enough to merit mention, since these examples from Indian mathematicians do seem to be the earliest known examples.
Sounds good to me. -- Pranathi 22:32, 30 September 2005 (UTC)
Hi, I'm not positive I've found an error so I'm just referring it to you for checking. In the proof of the multivariable form of Taylor's theorem, I believe that a parameterizing variable 't' has been assigned a false value. In the article it is assigned a value of zero, while I think that it should have the value 'one'. I'm only a lowly undergrad, so I'm most likely wrong here, but all the same, I'd appreciate it if you would check it out and let me know if it does indeed need correcting.
I'm also a little leery of the explanation for the coefficient of 'a' in the same proof: i!*C(i,alpha) does not equal 1/alpha! by my intuition, but instead equals 1/[alpha!*(i-alpha)!]. Perhaps my confusion stems from a sloppy transition from n=1 to n=N. This seems probable, but then it would need considerable re-writing.
http://en.wikipedia.org/wiki/Taylor%27s_theorem P.S. I'd really appreciate feedback, Thanks! -- student4life 04:06, 9 February 2006 (UTC) Also I'm going to edit a few errors in the explicit taylor series expansions of both ln(1+x) and e^x/sinx. student4life 22:03, 9 February 2006 (UTC)
Hey, I think you are right about this. I am also an undergraduate, and don't have that much experience in this area, but to my knowledge there is always only one solution to , but if one wanted the sum over vectors that have an absolute value of 1, then there would be many solutions and it would actually need to be put in summation notation. --RETROFUTURE
In the final example given, exp(x)/sin(x), the Maclaurin series for each of the functions are used and we are told to compare powers of x to evaluate the unknown coefficients; however, the coefficient of x^0 on the RHS is 1, while the coefficient of x^0 on the LHS is 0. This expansion is not a Taylor series. —Preceding unsigned comment added by 137.219.45.123 ( talk • contribs)
Series expansion redirects to this article. My opinion is that there are several kinds of series expansions, with the Taylor series being one of them. Therefore wouldn't it be better to have its own article about series expansion? -- Abdull 13:46, 5 June 2006 (UTC)
I have a feeling that the Taylor series doesn't have to have an infinitely differentiable function. Such a series would simply end long before infinity, which isn't a crime as far as I'm concerned. I'll strike it from the definition if I get a confirmation.
Second question: what does a do? When it says that the function is "around the point x= " something, what does that mean? How does one choose a? These variables need to be better defined; I'll start a bit. Fresheneesz 10:46, 29 March 2006 (UTC)
Fresheneesz, I disagree with your replacing real functions in Taylor's series with complex functions. If you want to be fully general, the function can take values in any Banach space, but that is beside the point. Let us stick to the most widespread case, that being functions of a real variable. I told you about this many times before; please do not try to be most concise, most general, etc. It harms the understanding of the article by people who don't know this stuff. Oleg Alexandrov ( talk) 17:27, 29 March 2006 (UTC)
Let f(x) = 1/(x^2 + 1). The Taylor series about x = 0 converges only in a disk of radius 1 about the center 0. But why? f(x) is perfectly well-behaved on all of the real numbers. The answer is that f(z) = 1/(z^2 + 1) becomes infinite as z -> i (or -i) in the complex numbers. Since Taylor series always converge in a circular disk in the complex plane, that disk's radius cannot exceed 1 (or it would include ±i, where the function is undefined!). Daqu 22:57, 13 July 2006 (UTC)
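A quick numeric sketch (illustrative only, plain Python) shows both behaviours: the partial sums of the Maclaurin series of 1/(1+x^2) settle down for |x| < 1 and blow up for |x| > 1, exactly as the complex poles at ±i predict:

```python
# Partial sums of the Maclaurin series of f(x) = 1/(1 + x^2),
# which is sum_{k>=0} (-1)^k x^(2k).  Inside |x| < 1 the partial
# sums approach f(x); outside, the terms grow without bound because
# of the poles at x = +i and x = -i in the complex plane.

def partial_sum(x, n):
    """Sum of the first n terms of the series for 1/(1 + x^2)."""
    return sum((-1) ** k * x ** (2 * k) for k in range(n))

inside = partial_sum(0.5, 40)        # |x| < 1: converges
exact = 1 / (1 + 0.5 ** 2)           # = 0.8
print(abs(inside - exact))           # tiny

outside = partial_sum(1.5, 40)       # |x| > 1: diverges
print(abs(outside))                  # huge
```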
The first example ends by saying Expanding by using multinomial coefficients gives the requisite Taylor series. Actually, using multinomial coefficients is not enough: to really get the coefficients we need to add infinitely many terms, a thing which should at least be mentioned. I think it would be better to substitute the cosine with the sine in the example, in order to get finitely many terms to add. 62.94.48.91 09:56, 28 August 2006 (UTC)
I did a partial revert of your changes to the Taylor series article. Your changes introduced some mistakes. A Taylor series is not a sum of derivatives, it is a sum of terms with each term being a derivative times a power over a factorial. Also, not all trigonometric functions are globally analytic, like the tangent function. Also, you introduced a subtle mistake by implying that partial sums are always a good approximation to an infinitely differentiable function. That is true only for analytic functions, and only then just in a range. You can reply here if you have any comments. Thanks. Oleg Alexandrov ( talk) 04:24, 1 November 2006 (UTC)
Most textbooks give error bounds -- either in terms of an integral or the value of one of the derivatives at a point in the interval between a and x. Why do we not have them here? JRSpriggs 12:55, 10 December 2006 (UTC)
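For reference, a minimal sketch (plain Python, with e^x as the example function) of the derivative-based bound mentioned above, in its standard Lagrange form |R_n(x)| <= M |x|^(n+1)/(n+1)!:

```python
import math

# Lagrange form of the remainder for e^x about a = 0:
# |R_n(x)| <= M * |x|^(n+1) / (n+1)!, where M bounds |f^(n+1)|
# between 0 and x.  For e^x on [0, 1] we can take M = e.

x, n = 1.0, 6
p_n = sum(x ** k / math.factorial(k) for k in range(n + 1))  # Taylor polynomial
actual_error = abs(math.e - p_n)
bound = math.e * x ** (n + 1) / math.factorial(n + 1)
print(actual_error, bound)   # the bound is a modest overestimate
```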
I think it is necessary to include information regarding the difference between a Taylor series and a Taylor polynomial. They are not the same.
A Taylor series is an INFINITE series of terms which, in the limit, will be EQUIVALENT to the stated function, whether it be sin x, cos x, e^x... etc.
A Taylor polynomial is a defined number of terms as specified by the notation Pn(x), where n is the given number of terms. Because n is defined as a finite number, Pn(x) will be EQUAL to the expanded series only to that degree, and therefore will not be equal to the Taylor series. It will be an approximation of it.
Please verify this information. EDIT...I messed up what was in bold...fixed now—The preceding unsigned comment was added by 24.229.193.72 ( talk) 16:10, 25 February 2007 (UTC).
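A small numeric illustration of the distinction (a sketch in plain Python; P_n here is the Taylor polynomial of e^x about 0, so the series itself is only recovered in the limit):

```python
import math

def taylor_poly_exp(x, n):
    """Taylor polynomial P_n(x) of e^x about 0: sum_{k=0}^{n} x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Each P_n is only an approximation; the error shrinks as n grows.
for n in (2, 5, 10):
    print(n, abs(taylor_poly_exp(1.0, n) - math.e))
```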
Do we really need to transpose the gradient vector?
rather than
Which one is the convention? Jackzhp 21:36, 11 April 2007 (UTC)
As it is right now, it states that Taylor series can be used as partial sums to approximate the function, but wouldn't it also be useful to say that as an infinite sum it can be used to show convergence and as an infinite sum Taylor series exactly are the function? RageGarden 18:35, 19 April 2007 (UTC)
The most common name for the series is Taylor series, although it's often called MacLaurin series when used at 0. The article states all this correctly, but my question is: why did this way of naming arise? Sure, the MacLaurin series is a "special case" of the "more general" Taylor series, but it is so only in a very superficial sense. If you want the Taylor series expansion for e.g. sine at a, you can get it with the MacLaurin expansion by simple translation - that is, get the MacLaurin series expansion of sin(x-a). Given that, as articles says, MacLaurin's result was published earlier than Taylor's, why is the most common name the Taylor series? For the uninitiated it would seem as if Taylor unwittingly has taken the credit of MacLaurin's discovery (if you believe that the first person to discover something is in some way special) simply by stating the theorem in a more popular way. 82.103.195.147 11:09, 12 August 2006 (UTC)
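The translation argument can be checked numerically; a sketch (plain Python, sin as the example), where the series about a is built directly from the cycling derivatives of sin and agrees with sin(x):

```python
import math

# Taylor series of sin about a, evaluated at x.  Its coefficients are the
# derivatives of sin at a, which cycle: sin, cos, -sin, -cos.  This is the
# same as the Maclaurin series of g(u) = sin(u + a) evaluated at u = x - a.

def sin_taylor_about_a(x, a, n=20):
    cycle = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(cycle[k % 4] * (x - a) ** k / math.factorial(k)
               for k in range(n))

a, x = 0.7, 1.3
print(sin_taylor_about_a(x, a), math.sin(x))   # agree to machine precision
```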
I reverted some edits ( link). Here is why:
Jesper Carlstrom 07:19, 14 May 2007 (UTC)
My spelling checker (the one built into Firefox), does not recognize British spellings. JRSpriggs 07:25, 16 May 2007 (UTC)
I'm trying to understand the history section (from today). What on earth does "the second-order Taylor series approximations of the sine and cosine functions" mean? The second-order term is 0 - is this the discovery? Could that be stated in the language of the time? Moreover, what does this mean: "the power series of the radius, diameter, circumference, angle θ, π and π/4, along with rational approximations of π, and infinite continued fractions." What is a power series of the radius? What is the power series of θ? What does infinite continued fractions have to do with this? I seriously begin to wonder if someone is making fun of us. Jesper Carlstrom 11:57, 21 May 2007 (UTC)
"as the degree of the taylor series rises" is not nice because a power series has no degree.
The editors mainly consider real analysis rather than complex analysis, but they are not explicit about it.
In complex analysis a convergent taylor series always converges to the function value f(x).
In real analysis a convergent taylor series may converge to a value different from the function value f(x).
Bo Jacoby 23:23, 22 July 2007 (UTC).
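The standard counterexample behind the last point is f(x) = e^(−1/x²) (with f(0) = 0): every derivative at 0 vanishes, so the Maclaurin series is identically zero; it converges everywhere, but equals f only at x = 0. A quick sketch:

```python
import math

# f is smooth everywhere, and all derivatives at 0 are 0, so its
# Maclaurin series sums to 0 for every x -- yet f(x) > 0 for x != 0.

def f(x):
    return 0.0 if x == 0 else math.exp(-1.0 / x ** 2)

maclaurin_value = 0.0             # the series sums to 0 for every x
print(f(0.5), maclaurin_value)    # f(0.5) is about 0.018, the series gives 0
```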
I noticed the following comment associated with the Taylor series formula: As stated below, the Taylor series need not equal the function. So please don't write f(x)=... here Current formula
However, should the amendment be made so that it satisfies the statement above?
-- Zven 22:45, 13 July 2007 (UTC)
I think something on the error estimates for a truncated series would be useful. That's exactly what I'm looking for right now. User:NeilenMarais
I'm looking for the exact same thing. 69.140.90.164 01:15, 10 January 2006 (UTC)
I'm studying for a test, and looking for the same thing too. For now I'll stop being lazy and read my textbook. Maybe I can add something on the subject later when I have time. Eumedemito 03:26, 21 October 2007 (UTC)
Can anyone support this claim:
"The Taylor series need not in general be a convergent series, but often it is." Randomblue 20:57, 15 November 2007 (UTC)
In the Convergence section, I think these sentences are unclear:
Examples of which functions? Functions that are analytic? Entire? Both? I think it's unclear as currently written. -- Kweeket Talk 00:30, 30 November 2007 (UTC)
Perhaps the following is worthy of addition into the article?
An alternative, more compact notation for the multivariable Taylor series.
Let be a function of real variables. Define the vectors and . If is infinitely differentiable at the point , then the Taylor series expansion for about the point is:
Where is the gradient vector of .
Note therefore that is a differential operator. Saran T. ( talk) 11:46, 8 May 2008 (UTC)
There was a huge ruckus at my school when I asked my maths teacher what this would be; he claimed this is inevaluable. I looked up some sites and it's widely stated that the Gaussian function (integral of e^(-x^2)) is evaluated using the Taylor series expansion of e^(-x^2). My simple question is: if Taylor expansions accept complex arguments, would it be possible to substitute x by xi (i being the square root of -1) and reduce the Gaussian function expansion to that of e^(x^2), thereby evaluating the above integral term by term? Leif edling ( talk) 18:00, 23 April 2008 (UTC)
Rightly pointed out there, Lambiam. But unfortunately the error function is way beyond our syllabus at high school level (in fact it was probably beyond our math teacher's scope because he obviously knew nothing about it :P). Using the convergent power series expansion seems logical enough, as does using the error function. But is the error function valid for an indefinite integral? Leif edling ( talk) 00:53, 29 April 2008 (UTC)
A new problem has arisen; a few mathematics textbooks here in India say that this integral "cannot be evaluated", along with a few other standard forms, e.g. x tan x (which can also, apparently, be evaluated as an infinite power series utilizing the Taylor series expansion). Isn't the statement "cannot be evaluated" wrong on the part of the authors? Leif edling ( talk) 07:42, 15 May 2008 (UTC)
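For what it's worth, the term-by-term integration discussed above is easy to carry out numerically; a sketch (plain Python) comparing it with the error function from the standard library:

```python
import math

# Term-by-term integration of the Maclaurin series of e^(-x^2):
# e^(-t^2) = sum_k (-1)^k t^(2k) / k!, so
# integral_0^x e^(-t^2) dt = sum_k (-1)^k x^(2k+1) / (k! * (2k+1)).
# This is how the (non-elementary) integral is evaluated in practice.

def integral_series(x, terms=30):
    return sum((-1) ** k * x ** (2 * k + 1) / (math.factorial(k) * (2 * k + 1))
               for k in range(terms))

# math.erf is (2/sqrt(pi)) times this integral, so the two should agree.
x = 1.0
print(integral_series(x), math.sqrt(math.pi) / 2 * math.erf(x))
```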
I suggest to split the list into a new article.-- 79.111.200.210 ( talk) 17:14, 28 July 2008 (UTC)
The Parker-Sochacki method is a recent advance in finding Taylor series which are solutions to differential equations. This algorithm is an extension of the Picard iteration.
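As a toy illustration of the Picard-iteration idea (a sketch only, not the Parker-Sochacki algorithm itself): for y' = y, y(0) = 1, each iteration y_{n+1}(x) = 1 + ∫₀ˣ y_n(t) dt extends the Taylor polynomial of the solution e^x by one term:

```python
from fractions import Fraction

# Picard iteration for y' = y, y(0) = 1.  Polynomials are lists of
# coefficients, lowest degree first, kept exact with Fraction.

def picard_step(coeffs):
    """Integrate the polynomial term by term and add the initial condition 1."""
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

y = [Fraction(1)]          # y_0(x) = 1
for _ in range(5):
    y = picard_step(y)
print(y)                   # coefficients 1, 1, 1/2, 1/6, 1/24, 1/120 of e^x
```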
Could someone please modify the first 2 equations? The first one should say f(x) = f(a) + f'(a)*(x-a)/1! + ... instead of simply f(a) + ... I understand that you can get the "f(x) =" idea from the sentence, but the equation should be written correctly and completely. The second equation, the one with the sigma, should also have "f(x) = ..." in front of it. For more info, see the link below. Marius 82.208.174.72 ( talk) 23:44, 18 July 2008 (UTC)
http://mathworld.wolfram.com/TaylorSeries.html
You *CANNOT* do a T.S.E. around x=0, for said function is not differentiable at the point of interest. —Preceding unsigned comment added by 71.146.134.150 ( talk) 07:36, 23 August 2008 (UTC)
That's exactly the point. f(0) is *NOT DEFINED*. Of course if one wanted to say (f(x) = ... for x ≠ 0, and f(x) = 0 for x = 0) to plug up your little discontinuity (at which point continuity and differentiability all work out per definition), then that's fine, but that type of cavalier penmanship has no place in any sort of a mathematics forum. I'm not sure if this type of pathological function has a place in this article; just go pick rectangle(x) if you wanted to come up with something where the T.S.E. doesn't exactly make sense or is less than meaningful. —Preceding unsigned comment added by 71.146.134.150 ( talk) 21:41, 26 August 2008 (UTC)
In the introduction to the Taylor expansion it is stated, that the formulation is also valid for functions of complex variables. Does this mean, in practice, that one does not have to separate the complex variable z in its real and imaginary content for the Taylor expansion? One can just write the expansion in z itself and in the end everything goes well right? In the example:
where a and b are complex variables and c and d are complex constants, one could therefore write the first order Taylor expansion as
It might be helpful to include some remarks on the use of complex functions and maybe possible restrictions in their application? —Preceding unsigned comment added by Ddeklerk ( talk • contribs) 07:34, 29 October 2007 (UTC)
When is an operator, everyone knows what means. The problem with the recent edits is that if applied to , the result is not what it should be, because multiplication by and do not commute, so the powers of are not correct for Taylor's formula. Also I don't think this kind of non-classical stuff should appear (if ever) as early as the Definition section. -- Bdmy ( talk) 13:04, 17 April 2009 (UTC)
By the way, I believe the original problem was with the multi-dimensional version, which can be written using only an operator exponential, without redefining the exponential (see MathWorld: Taylor Series). Note that my problem isn't so much with the exponential, but the flippant introduction of a bunch of new notation without proper context. Perhaps a section on "connection with the operator exponential" would be useful? Thanks! Plastikspork ( talk) 17:28, 18 April 2009 (UTC)
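One concrete, finite case of the operator exponential: for a polynomial p the series Σ h^k p^(k)(a)/k! terminates, and it reproduces p(a+h) exactly, which is the shift-operator identity exp(h d/da) p = p(a+h). A sketch (plain Python; the cubic p(x) = x³ − 2x is just a made-up example, with its derivatives written out by hand):

```python
import math

# Shift-operator identity for a polynomial: the finite sum
# sum_k h^k p^(k)(a) / k! equals p(a + h) exactly, since the
# derivatives of a cubic vanish from order four onward.
# Example: p(x) = x^3 - 2x, so p' = 3x^2 - 2, p'' = 6x, p''' = 6.

def p(x):
    return x ** 3 - 2 * x

a, h = 1.5, 0.8
derivs = [p(a), 3 * a ** 2 - 2, 6 * a, 6.0]   # p(a), p'(a), p''(a), p'''(a)
shifted = sum(d * h ** k / math.factorial(k) for k, d in enumerate(derivs))
print(shifted, p(a + h))   # identical up to rounding
```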
What about the following addition to the text, " ... " ? (In agreement with the suggestion of another user I would like to add it to the article, if you consider it an improvement; otherwise it should be kept in the discussion section.)
" Relation to the exponential function and a generalization
There is a relation of the Taylor series to the exponential function (see above). Namely, by analogy with the series with real numbers or complex ones, one can formally write where represents the derivation operator, i.e. —Preceding unsigned comment added by 87.160.75.55 ( talk) 08:56, 22 April 2009 (UTC)
Since the text was not yet ready when it was commented on, I repeat:
What about the following addition to the text, " ... " ? (In agreement with the suggestion of another user I would like to add it to the article, if you consider it an improvement; otherwise it should be kept in the discussion section.)
" Relation to the exponential function
There is a relation of the Taylor series to the exponential map (see above). Namely, by analogy with the series with real numbers α and β or complex ones, and with Euler's number e (= one can formally write where represents the derivative-operator, i.e.
Generalization
In fact, this is not only formal, but leads to important generalizations. E.g., the exponential, without the function f to which it is applied, may be interpreted as an abstract representation of the one-dimensional translation group, since the argument of any function f and its derivatives is shifted from a to x, corresponding to a translation of one-dimensional objects by a-x. This group is a so-called Lie group, i.e. it has analogous derivative properties as the function f. More general Lie groups, e.g. SO(m) with m=2,3,..., the group of rotations of the m-dimensional real space, or the corresponding group SU(m) for the space of complex numbers, can be described by similar expressions, e.g. where the are the generating operators of the group; they correspond to (the quantity i is the imaginary unit) and are represented e.g. by matrices. In contrast, the real resp. complex site-dependent "gauge fields" in physical theories describe the local strength of the action of the group and correspond to the variable (x-a). Of course at the same time the functions f are replaced by vector functions. "
Remark: All this is no "theory invention", but well known, e.g. from Wikipedia articles on Lie groups and Gauge fields, although not everyone sees all the interrelations. Rather, everyone should get used to the idea that even if he or she at present does not understand an item, he or she can learn, and should be informed if necessary. This is the Wikipedia idea. Usually the understanding comes with time, and through interaction. —Preceding unsigned comment added by 87.160.75.55 ( talk) 10:56, 22 April 2009 (UTC) .
- In the hurry, I also forgot to sign; sorry, and best regards, 87.160.75.55 ( talk) 11:38, 22 April 2009 (UTC)
I have never seen the above-mentioned formalism. But I do know, for example, makes sense as a so-called differential operator of infinite order, or pseudo-differential operator. Maybe there is no relation. But I share the same problem as Bdmy: I don't think the formalism obtained by the power series representation of the exponential function belongs in this article. The article should discuss the Taylor series in the usual sense, that is, the one in real analysis. -- Taku ( talk) 13:11, 4 May 2009 (UTC)
(This part has been deleted in April 2009) 81.247.77.249 ( talk) 10:01, 5 June 2009 (UTC)
THE FOLLOWING SHOULD BE EXPLAINED: "Another reason why the Taylor series is the natural power series for studying a function f is given by the probabilistic interpretation of Taylor series. Given the value of f and its derivatives at a point a, the Taylor series is in some sense the most likely function that fits the given data." 212.123.27.210 ( talk) 11:50, 16 July 2009 (UTC)
The diagram and statement that the Taylor series of log(1+x) only converges in a small region need to be clarified. This is actually only the Maclaurin series; yet log(1+x) is analytic at all values of x except x=-1. Also the definition of analytic needs to be clarified a bit. It should be explained more precisely how analytic means that for any point x_0 the Taylor series based at x_0 converges in a neighbourhood of x_0. —Preceding unsigned comment added by 137.205.56.18 ( talk) 13:52, 11 November 2009 (UTC)
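The re-expansion point can be demonstrated numerically; a sketch (plain Python; the series about x0 comes from writing 1+x = (1+x0)(1+u) with u = (x−x0)/(1+x0), so it converges whenever |x−x0| < 1+x0):

```python
import math

# Taylor series of log(1+x) about a point x0 > -1:
# log(1+x) = log(1+x0) + sum_{k>=1} (-1)^(k+1) u^k / k,  u = (x-x0)/(1+x0).
# The Maclaurin series (x0 = 0) only reaches |x| < 1, but re-centering
# at x0 = 2 easily covers x = 3.

def log1p_taylor(x, x0, n=60):
    u = (x - x0) / (1 + x0)
    return math.log(1 + x0) + sum((-1) ** (k + 1) * u ** k / k
                                  for k in range(1, n))

print(log1p_taylor(3.0, 2.0), math.log(4.0))   # agree closely
```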
What does the ! operator mean? SharkD ( talk) 12:45, 31 August 2009 (UTC)
'!' represents the factorial operation, which multiplies a positive integer by every smaller positive integer down to 1. For example, if you were to evaluate 5!, the result would be: 5 × 4 × 3 × 2 × 1 = 120. 82.178.109.148 ( talk) 13:38, 9 April 2010 (UTC)
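In code, Python's standard library has this built in:

```python
import math

# n! multiplies n by every smaller positive integer down to 1.
print(math.factorial(5))   # 5*4*3*2*1 = 120
```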
At the beginning of the examples section, it says "The Maclaurin series for any polynomial is the polynomial itself." That doesn't seem right. The Maclaurin series for x is 2x. For x^2 + 2x, it's 5x^2 + 4x. It would seem that the powers of the terms are the same, but the coefficients are different. Syndrome ( talk) 21:26, 4 January 2009 (UTC)
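A quick check from the definition c_k = f^(k)(0)/k! suggests the coefficients do come back unchanged, because the k! produced by repeatedly differentiating x^k cancels the 1/k! in the series. A sketch for p(x) = x² + 2x (derivatives written out by hand):

```python
import math

# Maclaurin coefficients of p(x) = x^2 + 2x from the definition
# c_k = p^(k)(0) / k!.  The derivatives are p'(x) = 2x + 2 and
# p''(x) = 2, with everything beyond that identically zero.

derivs_at_0 = [0, 2, 2]     # p(0), p'(0), p''(0)
coeffs = [d / math.factorial(k) for k, d in enumerate(derivs_at_0)]
print(coeffs)               # constant, x, and x^2 coefficients: 2x + x^2 back
```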
At the Calculation of Taylor series section, first example: the arithmetic isn't clear enough. Or, should I say, the last step is wrong. Francisco —Preceding undated comment added 08:50, 29 June 2010 (UTC).
The example in the Taylor series in several variables section is very textbookish, and anyway is not a very good example since it only goes out to second order and essentially is only a routine calculus exercise. I suggest that the example be removed, or at the very least changed to something more suitable. Compare with the examples in Computing Taylor series which are actually needed to show the kinds of techniques one uses in practice. Sławomir Biały ( talk) 14:05, 7 July 2010 (UTC)
I suggest
Hello, I'm a student learning about Taylor Series and I was wondering if there can be some additional pages that derive the Taylor Series for say cos(x). It would be interesting to see it done. —Preceding unsigned comment added by 69.255.197.49 ( talk • contribs)
I am just starting here at Wikipedia, so I still need experience in my word choice, flow, voice, et cetera. I created the 'A Casual Proof' part. I thought of the proof myself, but it isn't very formal, so someone could make it at least a little more formal. If you think this is unnecessary, then I suppose we could remove it. Otherwise, polish it if you will. And getting that darn derivative to not intersect the f would be a nice help too, if someone knows how to do this.
Can someone tell me why this was taken out? Was it unnecessary because there was a proof on the Taylor's theorem page? RETROFUTURE 01:46, 18 July 2006 (UTC)
Okay then. RETROFUTURE 16:10, 18 July 2006 (UTC)
A proof is essential to the article. Please put it back. BriEnBest ( talk) 21:52, 19 October 2010 (UTC)
If the cosmological redshift is assumed to be Doppler shift then the universe looks as if it were expanding, and its expansion as accelerating with acceleration about dH/dt = −0.5(H0)^2, where H is the Hubble parameter, H0 is its value at t=0, t is time, and RE is Einstein's radius (the radius of curvature of space).
It is possible to demonstrate though, with simple Newtonian math, that if energy is conserved globally then the universe couldn't be expanding, since then the cosmological redshift Z(r) comes out as a special type of relativistic time dilation resulting in redshift Z(r) = exp(r/RE) − 1, and the acceleration of this alleged "expansion" comes out as the second term of the Taylor series of the presented Z(r) around r=0, which was observed in 1998 by the Supernova Cosmology Project team, less than one standard deviation off the dH/dt presented above.
So if one assumes that our universe is a stationary Einstein's universe then the cosmological redshift can't be the redshift resulting from Doppler shift, but it might be the redshift resulting from the (relativistic) dynamical friction of photons (slowing down of proper time in deep space with distance from observer), and then one must come to the conclusion that the observed cosmological redshift Z(r) brings in a natural way, as the second term of its Taylor series, the "acceleration of expansion of the universe", as a difference between the Taylor series of the "observed expansion" Z(r) and the uniform expansion (Zu = r/RE + (r/RE)^2 + ...) presently observed but considered by some cosmologists to be an action of " dark energy". Jim ( talk) 01:00, 20 December 2010 (UTC)
As a PhD Chemical Engineer I speak from the distinctly biased POV of a reader, not editor, of mathematical articles. I'd like to gauge consensus about changing the example for the multivariate expansion:
to
to render it more readily comprehensible. I'd suggest that examples such as this second order expansion are designed to allow a glimpse of the practical application of a theory for technical nonspecialists so the verbosity is justified.
Thanks for the great article and please consider this request. Doug ( talk) 20:04, 26 March 2011 (UTC)
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page. |
Archive 1 | Archive 2 | Archive 3 |
Madhava didn't invent the Taylor series, but he may have discovered the equivalent series expansion for a few limited cases, which is very different (but still impressive):
Madhava discovered the series equivalent to the Maclaurin expansions of sin x, cos x, and arctan x around 1400, which is over two hundred years before they were rediscovered in Europe. Details appear in a number of works written by his followers such as Mahajyanayana prakara which means Method of computing the great sines. In fact this work had been claimed by some historians such as Sarma (see for example [2]) to be by Madhava himself but this seems highly unlikely and it is now accepted by most historians to be a 16th century work by a follower of Madhava. This is discussed in detail in [4].
That quote is taken from [1], which is also the first external link listed on the Madhava article. I left in the credit to Madhava despite the fact that the above source calls into question whether he or one of his followers (two centuries later) discovered the aforementioned examples, because I'm in no position to weigh the validity of such claims. I think it's still significant enough to merit mention, since these examples from Indian mathematicians do seem to be the earliest known examples.
Sounds good to me. -- Pranathi 22:32, 30 September 2005 (UTC)
Hi, I'm not positive I've found an error so I'm just referring it to you for checking. In the proof of the multivariable form of Taylor's theorem, I believe that a paramaterizing variable 't' has been assigned a false value. In the article it is assigned a value of zero, while I think that is should be of value 'one'. I'm only a lowely undergrad, so I'm most likely wrong here, but all the same, I'd appreciate it if you would check it out and let me know if it does indeed need correcting.
I'm also a little leery of the explanation for the coefficiant of 'a' in same proof: i!*C(i,alpha) does not equal 1/alpha! by my intuition, but instead equals 1/[(alpha!*(i-alpha)!]. Perhaps my confusion stems from a sloppy transition from n=1 to n=N. This seems probable, but then would need considerable re-writing.
http://en.wikipedia.org/wiki/Taylor%27s_theorem P.S. I'd really appreciate feedback, Thanks! -- student4life 04:06, 9 February 2006 (UTC) Also I'm going to edit a few errors in the explicit taylor series expansions of both ln(1+x) and e^x/sinx. student4life 22:03, 9 February 2006 (UTC)
Hey, I think you are right about this. I am also an undergraduate, and dont have that much experience in this area, but with my knowledge their is always only one solution to ,but if one wanted the sum over vectors that have an absolute value of 1, then there would be many solutions and would actually need to be put in summation notation. --RETROFUTURE
In the final example given exp(x)/sin(x), the Maclaurin series for each of the functions are used and we are told to compare powers of x to evaluate the unknown coefficients, however the coefficient of x^0 on the RHS is 1, while the coefficient of x^0 on the LHS is 0. This expansion is not a Taylor series. —Preceding unsigned comment added by 137.219.45.123 ( talk • contribs)
Series expansion redirects to this article. My opinion is that there are several kinds of series expansions, with the Taylor series being one of them. Therefore wouldn't it be better to have an own article about series expansion? -- Abdull 13:46, 5 June 2006 (UTC)
I have a feeling that the taylor series doesn't have to have an infinitely differentiable function. Such a series would simply end long before infinity, which isn't a crime as far as i'm concerned. I'll strike it from the definition if I get a confirmation.
Second question: what does a do? When it says that the function is "around the point x= " something what does that mean? How does one choose a? These variables need to be better defined, i'll start a bit. Fresheneesz 10:46, 29 March 2006 (UTC)
Fresheneesz, I disagree with your replacing real functions in Taylor's series with complex functions. If you want to be full general, the function can take values in any Banach space, but that is besides the point. Let us stick to the most widespread case, that being functions of a real variable. I told you about this many times before, please do not try to be most concise, most general, etc. It harms the understanding of the article by people who don't know this stuff. Oleg Alexandrov ( talk) 17:27, 29 March 2006 (UTC)
Let f(x) = 1/(x^2 + 1). The Taylor series about x = 0 converges only in a disk of radius 1 about the center 0. But why? f(x) is perfectly well-behaved on all of the real numbers. The answer is that f(z) = 1/(z^2 + 1) becomes infinite as z -> i (or -i) in the complex numbers. Since Taylor series always converge in a circular disk in the complex plane, that disk's radius cannot exceed 1 (or it would include ±i, where the function is undefined!). Daqu 22:57, 13 July 2006 (UTC)
The first example ends by saying Expanding by using multinomial coefficients gives the requisite Taylor series. Actually, using multinomial coefficients is not enough: to really get the coefficients we need to add infinitely many terms, a thing which should at least be mentioned. I think it would be better to substitute the cosinus with the sinus in the example, in order to get finitely many terms to add. 62.94.48.91 09:56, 28 August 2006 (UTC)
I did a partial revert of your changes to the Taylor series article. Your changes introduced some mistakes. A Taylor series is not a sum of derivatives, it is a sum of terms with each term being a derivative times a power over a factorial. Also, not all trignometric functions are globally analytic, like the tangent function. Also, you introduced a subtle mistake by implying that partial sums are always a good approximation to an infinitely differentiable function. That is true only for analytic functions, and only then just in a range. You can reply here if you have any comments. Thanks. Oleg Alexandrov ( talk) 04:24, 1 November 2006 (UTC)
Most text books give error bounds -- either in terms of an integral or the value of one of the derivatives at a point in the interval between a and x. Why do we not have them here? JRSpriggs 12:55, 10 December 2006 (UTC)
I think it is necessary to include information regarding the difference between a taylor series and a taylor polynomial. They are not the same.
A Taylor series is an INFINITE series of terms which, as they approach the nth term, will be EQUIVALENT to the stated function, whether it be sin x, cos x, ex... etc
A Taylor polynomial is a defined number of terms as specified by the notation, Pn(x), where n is the given amount of terms. Because n is defined as a finite number, Pn(x) will be EQUAL to that expanded series to that degree, and therefore will not be equal to the taylor series. It will be an approximation of it.
Please verify this information. EDIT...I messed up what was in bold...fixed now—The preceding unsigned comment was added by 24.229.193.72 ( talk) 16:10, 25 February 2007 (UTC).
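The distinction above can be illustrated numerically; here is a Python sketch (an illustration, not proposed article text) showing the Taylor polynomials P_n of sin about 0 approaching, but never equalling, sin x:

```python
import math

# Taylor polynomial P_n of sin about 0: finitely many terms, hence only an
# approximation; the full (infinite) Taylor series equals sin x for every x.
def sin_taylor_poly(x, n):
    # sum of terms (-1)^k x^(2k+1)/(2k+1)! up to degree n
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n // 2 + 1) if 2 * k + 1 <= n)

x = 1.0
for n in (1, 3, 5, 7):
    print(n, abs(sin_taylor_poly(x, n) - math.sin(x)))  # error shrinks as n grows
```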
Do we really need to transpose the gradient vector?
rather than
Which one is the convention? Jackzhp 21:36, 11 April 2007 (UTC)
As it is right now, it states that partial sums of the Taylor series can be used to approximate the function, but wouldn't it also be useful to say that the infinite sum can be used to show convergence, and that as an infinite sum the Taylor series is exactly the function? RageGarden 18:35, 19 April 2007 (UTC)
The most common name for the series is Taylor series, although it's often called MacLaurin series when used at 0. The article states all this correctly, but my question is: why did this way of naming arise? Sure, the MacLaurin series is a "special case" of the "more general" Taylor series, but it is so only in a very superficial sense. If you want the Taylor series expansion for e.g. sine at a, you can get it with the MacLaurin expansion by simple translation - that is, get the MacLaurin series expansion of sin(x-a). Given that, as articles says, MacLaurin's result was published earlier than Taylor's, why is the most common name the Taylor series? For the uninitiated it would seem as if Taylor unwittingly has taken the credit of MacLaurin's discovery (if you believe that the first person to discover something is in some way special) simply by stating the theorem in a more popular way. 82.103.195.147 11:09, 12 August 2006 (UTC)
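For what it's worth, the translation argument above is easy to check numerically. A small Python sketch (illustration only), using the fact that the k-th derivative of sin at a is sin(a + k*pi/2):

```python
import math

# The Taylor series of sin about a is just the Maclaurin series of
# g(t) = sin(a + t) evaluated at t = x - a.
def sin_taylor_about(x, a, terms=20):
    return sum(math.sin(a + k * math.pi / 2) / math.factorial(k) * (x - a) ** k
               for k in range(terms))

a = 1.2
print(abs(sin_taylor_about(2.0, a) - math.sin(2.0)))  # agrees to machine precision
```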
I reverted some edits ( link). Here is why:
Jesper Carlstrom 07:19, 14 May 2007 (UTC)
My spelling checker (the one built into Firefox), does not recognize British spellings. JRSpriggs 07:25, 16 May 2007 (UTC)
I'm trying to understand the history section (from today). What on earth does "the second-order Taylor series approximations of the sine and cosine functions" mean? The second-order term is 0 - is this the discovery? Could that be stated in the language of the time? Moreover, what does this mean: "the power series of the radius, diameter, circumference, angle θ, π and π/4, along with rational approximations of π, and infinite continued fractions." What is a power series of the radius? What is the power series of θ? What does infinite continued fractions have to do with this? I seriously begin to wonder if someone is making fun of us. Jesper Carlstrom 11:57, 21 May 2007 (UTC)
"as the degree of the Taylor series rises" is not nice, because a power series has no degree.
The editors mainly consider real analysis rather than complex analysis, but they are not explicit about it.
In complex analysis a convergent Taylor series always converges to the function value f(x).
In real analysis a convergent Taylor series may converge to a value different from the function value f(x).
Bo Jacoby 23:23, 22 July 2007 (UTC).
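The standard real-analysis example behind the last point can be sketched in a few lines of Python (illustration only):

```python
import math

# The classic counterexample: f(x) = exp(-1/x^2) for x != 0, f(0) = 0.
# All derivatives of f at 0 vanish, so its Maclaurin series is identically 0:
# it converges everywhere, but to the wrong value at every x != 0.
def f(x):
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

taylor_value = 0.0  # every coefficient of the Maclaurin series is 0
print(f(0.5), taylor_value)  # f(0.5) = exp(-4), but the series gives 0
```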
I noticed the following comment associated with the Taylor series formula: "As stated below, the Taylor series need not equal the function. So please don't write f(x) = ... here." Current formula:
However, should the amendment be made which satisfies the statement above?
-- Zven 22:45, 13 July 2007 (UTC)
I think something on the error estimates for a truncated series would be useful. That's exactly what I'm looking for right now. User:NeilenMarais
I'm looking for the exact same thing. 69.140.90.164 01:15, 10 January 2006 (UTC)
I'm studying for a test, and looking for the same thing too. For now I'll stop being lazy and read my textbook. Maybe I can add something on the subject later when I have time. Eumedemito 03:26, 21 October 2007 (UTC)
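Until something makes it into the article: the usual textbook result is the Lagrange error bound |R_n(x)| <= M |x-a|^(n+1)/(n+1)!, where M bounds |f^(n+1)| on the interval. A Python sketch (illustration only, with exp about a = 0 and M = e on [0, 1]):

```python
import math

# Lagrange remainder bound for exp about a = 0 on [0, x]:
# |R_n(x)| <= M * x^(n+1) / (n+1)!, with M = e bounding |f^(n+1)| on [0, 1].
def exp_taylor(x, n):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 6
actual_error = abs(math.exp(x) - exp_taylor(x, n))
bound = math.e * x ** (n + 1) / math.factorial(n + 1)
print(actual_error, bound)  # the true error never exceeds the bound
```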
Can anyone support this claim:
"The Taylor series need not in general be a convergent series, but often it is." Randomblue 20:57, 15 November 2007 (UTC)
In the Convergence section, I think these sentences are unclear:
Examples of which functions? Functions that are analytic? Entire? Both? I think it's unclear as currently written. -- Kweeket Talk 00:30, 30 November 2007 (UTC)
Perhaps the following is worthy of addition into the article?
An alternative, more compact notation for the multivariable Taylor series.
Let $f\colon\mathbb{R}^n\to\mathbb{R}$ be a function of $n$ real variables. Define the vectors $\mathbf{x}=(x_1,\ldots,x_n)$ and $\mathbf{a}=(a_1,\ldots,a_n)$. If $f$ is infinitely differentiable at the point $\mathbf{a}$, then the Taylor series expansion for $f$ about the point $\mathbf{a}$ is:
$$f(\mathbf{x})=\sum_{k=0}^{\infty}\frac{\bigl((\mathbf{x}-\mathbf{a})\cdot\nabla\bigr)^{k}f(\mathbf{a})}{k!},$$
where $\nabla$ is the gradient operator.
Note therefore that $(\mathbf{x}-\mathbf{a})\cdot\nabla$ is a differential operator. Saran T. ( talk) 11:46, 8 May 2008 (UTC)
There was a huge ruckus at my school when I asked my maths teacher what this would be; he claimed it cannot be evaluated. I looked up some sites, and it is widely stated that the Gaussian integral (the integral of e^(-x^2)) is evaluated using the Taylor series expansion of e^(-x^2). My simple question is: if Taylor expansions accept complex arguments, would it be possible to substitute xi for x (i being the square root of -1) and reduce the expansion of e^(-x^2) to that of e^(x^2), thereby evaluating the above integral term by term? Leif edling ( talk) 18:00, 23 April 2008 (UTC)
Rightly pointed out there, Lambiam. But unfortunately the error function is way beyond our syllabus at high school level (in fact it was probably beyond our maths teacher's scope, because he obviously knew nothing about it :P). Using the convergent power series expansion seems logical enough, as does using the error function. But is the error function valid for an indefinite integral? Leif edling ( talk) 00:53, 29 April 2008 (UTC)
A new problem has arisen: a few mathematics textbooks here in India say that this integral "cannot be evaluated", along with a few other standard forms, e.g. x tan x (which can also, apparently, be evaluated as an infinite power series using the Taylor series expansion). Isn't the statement "cannot be evaluated" wrong on the part of the authors? Leif edling ( talk) 07:42, 15 May 2008 (UTC)
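The accurate statement is "cannot be evaluated in elementary closed form"; term-by-term integration of the Taylor series does evaluate it, as this Python sketch illustrates (illustration only; the closed form uses the error function erf from the standard library):

```python
import math

# Integrating the Taylor series of exp(-t^2) term by term gives
# integral_0^x exp(-t^2) dt = sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1)),
# which equals (sqrt(pi)/2) * erf(x).
def integral_series(x, terms=40):
    return sum((-1) ** n * x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
               for n in range(terms))

x = 1.0
print(integral_series(x))                    # series value
print(math.sqrt(math.pi) / 2 * math.erf(x))  # agrees with the erf closed form
```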
I suggest to split the list into a new article.-- 79.111.200.210 ( talk) 17:14, 28 July 2008 (UTC)
The Parker-Sochacki method is a recent advance in finding Taylor series which are solutions to differential equations. This algorithm is an extension of the Picard iteration.
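For readers unfamiliar with the Picard iteration mentioned above, here is a minimal Python sketch of plain Picard iteration for y' = y, y(0) = 1 (only an illustration of the underlying idea, not the Parker-Sochacki algorithm itself):

```python
from fractions import Fraction

# Picard iteration for y' = y, y(0) = 1: each iterate integrates the previous
# polynomial term by term and adds the initial value, reproducing one more
# Taylor coefficient of exp per step.
def picard_step(coeffs):
    # integrate the polynomial sum c_k x^k term by term, then prepend y(0) = 1
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

y = [Fraction(1)]  # y_0(x) = 1
for _ in range(5):
    y = picard_step(y)

print([str(c) for c in y])  # ['1', '1', '1/2', '1/6', '1/24', '1/120']
```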
Could someone please modify the first two equations? The first one should say f(x) = f(a) + f'(a)(x-a)/1! + ... instead of simply f(a) + .... I understand that you can get the "f(x) =" idea from the sentence, but the equation should be written correctly and completely. The second equation, the one with the sigma, should also have "f(x) = ..." in front of it. For more info, see the link below. Marius 82.208.174.72 ( talk) 23:44, 18 July 2008 (UTC)
http://mathworld.wolfram.com/TaylorSeries.html
You *CANNOT* do a T.S.E. around x = 0, for said function is not differentiable at the point of interest. —Preceding unsigned comment added by 71.146.134.150 ( talk) 07:36, 23 August 2008 (UTC)
That's exactly the point. f(0) is *NOT DEFINED*. Of course if one wanted to say (f(x) = ... x =! 0, and f(x) = 0, x=0) to plug up your little discontinuity (at which point, continuity, differentiability all works out per definition), then that's fine, but that type of cavalier penmanship has no place in any sort of a mathematics forum. I'm not sure if this type of pathological function has a place in this article; just go pick rectangle(x) if you wanted to come up with something where the T.S.E. doesn't exactly make sense or is less than meaningful. —Preceding unsigned comment added by 71.146.134.150 ( talk) 21:41, 26 August 2008 (UTC)
In the introduction to the Taylor expansion it is stated that the formulation is also valid for functions of complex variables. Does this mean, in practice, that one does not have to separate the complex variable z into its real and imaginary parts for the Taylor expansion? One can just write the expansion in z itself, and in the end everything goes well, right? In the example:
where a and b are complex variables and c and d are complex constants, one could therefore write the first order Taylor expansion as
It might be helpful to include some remarks on the use of complex functions, and maybe on possible restrictions in their application? —Preceding unsigned comment added by Ddeklerk ( talk • contribs) 07:34, 29 October 2007 (UTC)
When is an operator, everyone knows what means. The problem with the recent edits is that if applied to , the result is not what it should be, because multiplication by and do not commute, so the powers of are not correct for Taylor's formula. Also, I don't think this kind of non-classical material should appear (if at all) as early as the Definition section. -- Bdmy ( talk) 13:04, 17 April 2009 (UTC)
By the way, I believe the original problem was with the multi-dimensional version, which can be written using only an operator exponential, without redefining the exponential (see MathWorld: Taylor Series). Note that my problem isn't so much with the exponential as with the flippant introduction of a bunch of new notation without proper context. Perhaps a section on the connection with the operator exponential would be useful? Thanks! Plastikspork ( talk) 17:28, 18 April 2009 (UTC)
What about the following addition to the text, " ... " ? (In agreement with the suggestion of another user, I would like to add it to the article if you consider it an improvement; otherwise it should be kept in the discussion section.)
" Relation to the exponential function and a generalization
There is a relation of the Taylor series to the exponential function (see above). Namely, by analogy with the series with real numbers or complex ones, one can formally write where represents the derivation operator, i.e. —Preceding unsigned comment added by 87.160.75.55 ( talk) 08:56, 22 April 2009 (UTC)
Since the text was not yet ready when it was commented on, I repeat:
What about the following addition to the text, " ... " ? (In agreement with the suggestion of another user, I would like to add it to the article if you consider it an improvement; otherwise it should be kept in the discussion section.)
" Relation to the exponential function
There is a relation of the Taylor series to the exponential map (see above). Namely, by analogy with the series with real numbers α and β or complex ones, and with Euler's number e, one can formally write where represents the derivative operator, i.e.
Generalization
In fact, this is not only formal, but leads to important generalizations. E.g., the exponential, without the function f to which it is applied, may be interpreted as an abstract representation of the one-dimensional translation group, since the argument of any function f and its derivatives is shifted from a to x, corresponding to a translation of one-dimensional objects by a-x. This group is a so-called Lie group, i.e. it has derivative properties analogous to those of the function f. More general Lie groups, e.g. SO(m) with m = 2, 3, ..., the group of rotations of m-dimensional real space, or the corresponding group SU(m) for the space of complex numbers, can be described by similar expressions, e.g. where the are the generating operators of the group; they correspond to (the quantity i is the imaginary unit) and are represented e.g. by matrices. In contrast, the real resp. complex site-dependent "gauge fields" in physical theories describe the local strength of the action of the group and correspond to the variable (x-a). Of course, at the same time the functions f are replaced by vector functions. "
Remark: All this is no "theory invention" but is well known, e.g. from the Wikipedia articles on Lie groups and gauge fields, although not everyone sees all the interrelations. Rather, everyone should get used to the idea that even if he or she does not at present understand an item, he or she can learn, and should be informed if necessary. That is the Wikipedia idea. Usually understanding comes with time, and through interaction. —Preceding unsigned comment added by 87.160.75.55 ( talk) 10:56, 22 April 2009 (UTC) .
- In the hurry, I also forgot to sign; sorry, and best regards, 87.160.75.55 ( talk) 11:38, 22 April 2009 (UTC)
I have never seen the above-mentioned formalism. But I do know, for example, that makes sense as a so-called differential operator of infinite order, or pseudo-differential operator. Maybe there is no relation. But I share Bdmy's concern: I don't think the formalism obtained from the power series representation of the exponential function belongs in this article. The article should discuss the Taylor series in the usual sense, that is, the one in real analysis. -- Taku ( talk) 13:11, 4 May 2009 (UTC)
(This part has been deleted in April 2009) 81.247.77.249 ( talk) 10:01, 5 June 2009 (UTC)
THE FOLLOWING SHOULD BE EXPLAINED: "Another reason why the Taylor series is the natural power series for studying a function f is given by the probabilistic interpretation of Taylor series. Given the value of f and its derivatives at a point a, the Taylor series is in some sense the most likely function that fits the given data." 212.123.27.210 ( talk) 11:50, 16 July 2009 (UTC)
The diagram and the statement that the Taylor series of log(1+x) only converges in a small region need to be clarified. This is actually only the Maclaurin series; yet log(1+x) is analytic at all values of x except x = -1. Also, the definition of analytic needs to be clarified a bit: it should be explained more precisely that analytic means that for any point x_0 the Taylor series based at x_0 converges to the function in a neighbourhood of x_0. —Preceding unsigned comment added by 137.205.56.18 ( talk) 13:52, 11 November 2009 (UTC)
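A quick numerical illustration of that limited region of convergence (a Python sketch, not article text):

```python
import math

# Maclaurin series of log(1+x): sum_{n>=1} (-1)^(n+1) x^n / n, convergent only
# for -1 < x <= 1, even though log(1+x) is analytic everywhere except x = -1.
def log1p_series(x, terms):
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

print(abs(log1p_series(0.5, 60) - math.log(1.5)))  # inside (-1, 1]: converges
print(abs(log1p_series(2.0, 60)))                  # outside: partial sums blow up
```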
What does the ! operator mean? SharkD ( talk) 12:45, 31 August 2009 (UTC)
'!' denotes the factorial operation, which multiplies a positive integer by every smaller positive integer down to 1. For example, 5! = 5 × 4 × 3 × 2 × 1 = 120. 82.178.109.148 ( talk) 13:38, 9 April 2010 (UTC)
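In code, for example (an illustrative sketch; note that by convention 0! = 1, which is what makes the constant term f(a)/0! of a Taylor series work out):

```python
import math

# n! multiplies n by every smaller positive integer; the empty product 0! is 1.
def factorial(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(5), math.factorial(5))  # both 120
```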
At the beginning of the examples section, it says "The Maclaurin series for any polynomial is the polynomial itself." That doesn't seem right. The Maclaurin series for x is 2x. For x^2 + 2x, it's 5x^2 + 4x. It would seem that the powers of the terms are the same, but the coefficients are different. Syndrome ( talk) 21:26, 4 January 2009 (UTC)
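A quick check suggests the factor 1/k! is what went missing in the computation above; with it, the coefficients of the original polynomial come back. A minimal Python sketch for p(x) = x^2 + 2x (illustration only, derivatives entered by hand):

```python
import math

# Maclaurin coefficients of p(x) = x^2 + 2x are c_k = p^(k)(0)/k!.
# p(0) = 0, p'(0) = 2, p''(0) = 2, and all higher derivatives vanish,
# so the series is 0 + 2x + (2/2!)x^2 = x^2 + 2x: the polynomial itself.
derivs_at_0 = [0, 2, 2]  # p, p', p'' evaluated at 0
coeffs = [d / math.factorial(k) for k, d in enumerate(derivs_at_0)]
print(coeffs)  # [0.0, 2.0, 1.0], i.e. 2x + x^2
```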
In the Calculation of Taylor series section, first example: the arithmetic isn't clear enough. Or should I say, the last step is wrong. Francisco —Preceding undated comment added 08:50, 29 June 2010 (UTC).
The example in the Taylor series in several variables section is very textbookish, and anyway is not a very good example since it only goes out to second order and essentially is only a routine calculus exercise. I suggest that the example be removed, or at the very least changed to something more suitable. Compare with the examples in Computing Taylor series which are actually needed to show the kinds of techniques one uses in practice. Sławomir Biały ( talk) 14:05, 7 July 2010 (UTC)
I suggest
Hello, I'm a student learning about Taylor Series and I was wondering if there can be some additional pages that derive the Taylor Series for say cos(x). It would be interesting to see it done. —Preceding unsigned comment added by 69.255.197.49 ( talk • contribs)
I am just starting here at Wikipedia, so I still need experience in my word choice, flow, voice, et cetera. I created the 'A Casual Proof' part. I thought of the proof myself, but it isn't very formal, so someone could make it at least a little more formal. If you think this is unnecessary, then I suppose we could remove it. Otherwise, polish it if you will. And getting that darn derivative to not intersect the f would be a nice help too, if someone knows how to do this.
Can someone tell me why this was taken out? Was it unnecessary because there was a proof on the Taylor's theorem page? RETROFUTURE 01:46, 18 July 2006 (UTC)
Okay then. RETROFUTURE 16:10, 18 July 2006 (UTC)
A proof is essential to the article. Please put it back. BriEnBest ( talk) 21:52, 19 October 2010 (UTC)
If the cosmological redshift is assumed to be a Doppler shift, then the universe looks as if it were expanding, with the expansion accelerating at about dH/dt = -0.5(H_0)^2, where H is the Hubble parameter, H_0 is its value for t = 0, t is time, and R_E is Einstein's radius (the radius of curvature of space).
It is possible to demonstrate, though, with simple Newtonian math, that if energy is conserved globally then the universe couldn't be expanding, since then the cosmological redshift Z(r) comes out as a special type of relativistic time dilation resulting in redshift Z(r) = exp(r/R_E) - 1, and the acceleration of this alleged "expansion" comes out as the second term of the Taylor series of the presented Z(r) around r = 0, which was observed in 1998 by the Supernova Cosmology Project team, less than one standard deviation off the dH/dt presented above.
So if one assumes that our universe is a stationary Einstein universe, then the cosmological redshift can't result from a Doppler shift, but it might result from the (relativistic) dynamical friction of photons (a slowing of proper time in deep space with distance from the observer), and then one must conclude that the observed cosmological redshift Z(r) brings in a natural way, as the second term of its Taylor series, the "acceleration of expansion of the universe", as a difference between the Taylor series of the "observed expansion" Z(r) and the uniform expansion (Z_u = r/R_E + (r/R_E)^2 + ...) presently observed but considered by some cosmologists to be the action of " dark energy". Jim ( talk) 01:00, 20 December 2010 (UTC)
As a PhD Chemical Engineer I speak from the distinctly biased POV of a reader, not editor, of mathematical articles. I'd like to gauge consensus about changing the example for the multivariate expansion:
to
to render it more readily comprehensible. I'd suggest that examples such as this second-order expansion are designed to allow technical nonspecialists a glimpse of the practical application of a theory, so the verbosity is justified.
Thanks for the great article and please consider this request. Doug ( talk) 20:04, 26 March 2011 (UTC)