
**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

My calculus book explained how to form Maclaurin polynomials but said nothing about why they work. Over the past couple of days I've been turning it over in my head trying to figure it out. I haven't found a proof, but it's beginning to make sense. I thought it might make an interesting discussion for anyone who has never seen the proof or isn't advanced enough to understand it yet.

Say the nth derivative of a function evaluated at zero is 5. If the function were a polynomial with a limited number of terms, the nth derivative could be the constant 5 itself. Let's integrate to find the (n − 1)th derivative: 5x + c. If the (n − 1)th derivative evaluated at zero were 7, then c would be 7.

OOH THIS IS FUN! Let's do it again! Integrate to find the (n − 2)th derivative: we get 5/2 x^2 + 7x + C. If at zero the (n − 2)th derivative is 14, then C would be 14. So we've found the (n − 2)th derivative.

Now let's assume the n we spoke of had a value of 3; then the (n − 3)th derivative would be the 0th derivative, or the function itself. So let's integrate the (n − 2)th derivative to get 5/6 x^3 + 7/2 x^2 + 14x + C. If at zero the function had a value of 17, then C would be 17. So the function would look like

5/3! x^3 + 7/2 x^2 + 14x + 17, if it were in fact a polynomial of degree n.
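The integrate-three-times walk-through can be checked in reverse with a tiny sketch (plain Python; the helper name is made up for illustration): differentiate the finished cubic repeatedly and read off the constant term at each stage, which should recover the four given values.

```python
def diff_poly(coeffs):
    """Differentiate a polynomial given as [a0, a1, a2, ...], meaning a0 + a1*x + a2*x^2 + ..."""
    return [k * c for k, c in enumerate(coeffs)][1:]

# f(x) = 17 + 14x + 7/2 x^2 + 5/6 x^3, built above by integrating three times
p = [17, 14, 7 / 2, 5 / 6]
values_at_zero = []
while p:
    values_at_zero.append(p[0])  # a polynomial's value at x = 0 is its constant term
    p = diff_poly(p)
print(values_at_zero)  # [17, 14, 7.0, 5.0] — the four given derivative values
```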

OK, now let's look back at what happened. Instead of saying (n − 1), (n − 2) or (n − 3) for each scenario, let's just say u for whatever derivative we're working with at the time. The uth derivative evaluated at zero was always some constant c (where u is some number between n and 0). This constant is integrated u times and appears in the final function. Anyone familiar with calculus can see that this term would end up being (c x^u) / u!.

So the function of x would be the summation of the uth derivative at 0 multiplied by x^u / u!, from u = infinity (ideally) down to u = 0. We could reverse the order of the sum (which won't affect the answer) and sum from u = 0 to u = infinity, and we would end up with:

f(0)/0! + f'(0)x/1! + f''(0)x^2/2! + f'''(0)x^3/3! + ...
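That sum can be sketched in a few lines (a hypothetical helper, not from the thread; the derivative values at 0 are supplied by hand):

```python
import math

def maclaurin_eval(derivs_at_0, x):
    """Sum f^(u)(0) * x^u / u! over the supplied derivative values at 0."""
    return sum(d * x ** u / math.factorial(u) for u, d in enumerate(derivs_at_0))

# Every derivative of e^x is e^x, so all its derivatives at 0 equal 1,
# and the sum approximates e^1:
approx_e = maclaurin_eval([1] * 15, 1.0)
print(approx_e)  # very close to math.e
```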

Well, what do you know? The Maclaurin polynomial! Like I said, this is not a proof. We made a bunch of assumptions, such as assuming the function was a polynomial when it may not be. We just used the clues to "mold" a polynomial with similar characteristics (or exact ones when evaluated at 0), but I suppose the numerous stipulations tie the curve down in various places, and leave little room for the function to stray far from the polynomial at the in-between points. Also, we assumed the nth derivative was a constant when it may not be at all; the nth derivative could have been 5 cos(x). However, if n is very, very large, then this mistake should have little effect on the final function. Why? Because the limit of x^n/n! as n approaches infinity is in fact zero. (I forget how you prove this, but I remember doing it. Hmm... gotta review.) Anyway, that doesn't prove the series converges, but the series would have to diverge if that limit were not zero. So it's at least consistent.
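The fact quoted above, that x^n/n! goes to 0 for any fixed x, is easy to see numerically: each new term multiplies the previous one by x/n, which drops below 1 once n exceeds x.

```python
import math

x = 5.0
terms = [x ** n / math.factorial(n) for n in range(41)]
# The terms grow while n < x (each ratio x/n > 1), peak near n = 5,
# then shrink toward zero as the factorial takes over.
print(max(terms), terms[40])  # the peak is about 26; the tail is vanishingly small
```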

Anyway, this doesn't fully explain or prove why they work, but I think it gives you a pretty good idea of HOW they work. Like peeking under the hood of a car: it doesn't reveal everything, but it helps you understand it at least on a basic level. Whoever this Mr. Maclaurin was, he was a complete and utter genius who could probably move things with his mind.

*Last edited by mikau (2006-06-30 11:40:48)*

A logarithm is just a misspelled algorithm.

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

To me the biggest mystery is still why the function's characteristics at f(0) can be used to determine its behavior anywhere, if it's approximated to enough terms. Like I said, if the polynomial matches the behavior at zero to such a high degree, then its behavior for other numbers relatively close to zero can't be too far off. Still, I'm trying to find an argument for why it works in other areas just because we matched its behavior in one particular spot (0).

A logarithm is just a misspelled algorithm.

Offline

**John E. Franklin****Member**- Registered: 2005-08-29
- Posts: 3,588

Very interesting, mikau. I like analogies and things that are not complete proofs, because they help you remember the equations, and it's a big step toward understanding something. What is your favorite usage of Maclaurin series? What is a usage of Maclaurin series that is too hard to use?

**igloo** **myrtilles** **fourmis**

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

I suppose my favorite usage would be for approximating sine and cosine, as I am always infatuated with trig functions.

What usage would I say is the hardest? All of them, really. Differentiating and evaluating at zero gets really tedious with some functions. Once I had a problem in calc that took me nearly an hour and a half to get right: http://www.mathsisfun.com/forum/viewtopic.php?id=2367 Differentiating this expression multiple times and evaluating at zero was a NIGHTMAAAARE! WAY too easy to mess up on. So I guess the worst-case scenario is when the derivatives keep getting bigger or don't follow some obvious pattern.

*Last edited by mikau (2006-07-01 11:07:41)*

A logarithm is just a misspelled algorithm.

Offline

**John E. Franklin****Member**- Registered: 2005-08-29
- Posts: 3,588

Cool man, have a nice fourth in NJ. I'm in NH.

**igloo** **myrtilles** **fourmis**

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

I'm not in NJ, I'm in PA.

A logarithm is just a misspelled algorithm.

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

I'm in NJ! Cool, we're all in about the same area (of the globe).

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

That must mean New Hampshire isn't far from Pennsylvania....

*Last edited by mikau (2006-07-04 12:59:34)*

A logarithm is just a misspelled algorithm.

Offline

**John E. Franklin****Member**- Registered: 2005-08-29
- Posts: 3,588

Yeah, New York and Vermont separate NH and PA.

**igloo** **myrtilles** **fourmis**

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,316

Good Job, Mikau!

Well, I guess we really have something in common: I self-studied calculus too. Like you, I'm a skeptic who believes in proving things instead of just trusting them, so I read many books and spent a good amount of time on the subject.

The proof in my book was complex, but I found the core idea: the polynomial should share the 0th derivative (the function value), then the 1st derivative, ..., up to the nth derivative at the expansion point with the function it approximates.

This rule is what produces the 0!, 1!, 2!, ..., n! in the denominators of the coefficients.

On how well it approximates the function, I have no idea. And I'm very doubtful, because Taylor polynomials do not always converge.

**X'(y-Xβ)=0**

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

I think my book stated that the error is always less than or equal to the value of one additional term (the term after the last term you used). I think this applied to all convergent series as well, but I'm not sure.
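That bound does hold for alternating series like sine's: the error of a truncated sum is at most the first omitted nonzero term. A quick numerical check, using a throwaway helper:

```python
import math

def maclaurin_sin(x, n_terms):
    # Derivatives of sin at 0 cycle through 0, 1, 0, -1
    cycle = [0, 1, 0, -1]
    return sum(cycle[u % 4] * x ** u / math.factorial(u) for u in range(n_terms))

x = 0.8
approx = maclaurin_sin(x, 6)                 # keeps terms up through x^5
first_omitted = x ** 7 / math.factorial(7)   # next nonzero term in the series
error = abs(math.sin(x) - approx)
print(error <= first_omitted)  # True
```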

One thing I've noticed: you know how a polynomial function always has at most one fewer turning point than its degree? (Makes sense, since differentiating decreases the degree by 1, and the derivative has at most n − 1 roots.) Well, when you approximate a sine or cosine curve using Maclaurin polynomials, understandably, adding an extra term creates another hump in the sine curve. In a sine curve a hump appears every 180 degrees (pi radians), so each additional term should extend the "accurate domain" by roughly pi radians.
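Whether the gain is exactly pi radians per term I'm not sure, but the widening itself is easy to see numerically. Here is a rough sketch (tolerance and step size chosen arbitrarily) that finds how far from 0 each truncation stays within 0.1 of sin:

```python
import math

def sin_poly(x, nonzero_terms):
    # First `nonzero_terms` nonzero terms of the sine series: x - x^3/3! + x^5/5! - ...
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(nonzero_terms))

def accurate_up_to(nonzero_terms, tol=0.1, step=0.01):
    # Walk outward from 0 until the polynomial drifts more than `tol` from sin
    x = 0.0
    while abs(sin_poly(x, nonzero_terms) - math.sin(x)) < tol:
        x += step
    return x

for n in (2, 3, 4, 5):
    print(n, round(accurate_up_to(n), 2))  # the accurate range widens with each term
```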

Another thing I've noticed is that with alternating series, ending on a positive or a negative term has a great effect on accuracy (at least for approximations with few terms). I'd like to know a little more about when this occurs and whether to end with a positive or negative term.

A logarithm is just a misspelled algorithm.

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,316

About the error: I guess your book uses a limit proof. A limit proof itself implies that the two are locally virtually the same: the numerator (the error) shrinks fast enough to match the small denominator (the x displacement) before both reach zero. So your book may, after the proof, give an example evaluating f(a + 0.m) by the Maclaurin series of f starting from a.

Here the Maclaurin series plays a more accurate role than the single-derivative (tangent-line) approach to approximating the value of a function near some point. And this role holds regardless of convergence or divergence.

A more powerful application of Maclaurin series is to evaluate a function at any point, at any distance from the expansion point. For example, you can calculate sin(10) from its Taylor series. But this application requires the series to be convergent; simply put, the Taylor series from 0 evaluated at 10 should not go to infinity as more and more terms are added. Our calculators use this application. And they may deal with tangents by equating them with sines divided by cosines, to avoid the divergence of the Taylor series of tan(x).
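Since the sine series converges for every x, even sin(10) comes out right if you add enough terms, despite huge intermediate terms (a quick sketch, not how calculators actually do it; real implementations reduce the argument first):

```python
import math

def maclaurin_sin(x, n_terms):
    # Derivatives of sin at 0 cycle through 0, 1, 0, -1
    cycle = [0, 1, 0, -1]
    return sum(cycle[u % 4] * x ** u / math.factorial(u) for u in range(n_terms))

# Individual terms near n = 10 are in the thousands, yet they cancel
# down to sin(10) once enough terms are included:
print(abs(maclaurin_sin(10.0, 60) - math.sin(10.0)))  # a very small number
```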

Yes, the Taylor series of sine is very interesting, and you give a nice argument for how many times it would turn. Nice reasoning! I haven't investigated it myself, though I saw the same effect in a software plot.

**X'(y-Xβ)=0**

Offline