 

Topic review (newest first)
So far all I can say is that Taylor's theorem already states that somewhere, so you are using a theorem to prove the linear approximation result.
Yes.
Why?
Hi;
Good to learn, thanks!
Hi;
That depends on a lot of things. Least squares is for discrete data. Fitting continuous data or functions can be done using collocation, Taylor series, or Fourier series.
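A tiny sketch of the collocation idea next to a Taylor line (the target f = eˣ and the interval [−1, 1] are just illustrative choices, not anything from the thread):

```python
import math

f = math.exp

# Collocation for a line: force exact agreement at two chosen points.
x0, x1 = -1.0, 1.0
slope = (f(x1) - f(x0)) / (x1 - x0)

def colloc(x):
    """Line agreeing with f at the collocation points x0 and x1."""
    return f(x0) + slope * (x - x0)

def taylor(x):
    """First-order Taylor line at 0: f(0) + f'(0)*x = 1 + x."""
    return 1.0 + x

xs = [i / 50 - 1 for i in range(101)]  # grid on [-1, 1]
max_coll = max(abs(f(x) - colloc(x)) for x in xs)
max_tayl = max(abs(f(x) - taylor(x)) for x in xs)
print(max_coll, max_tayl)
```

On the whole interval the collocation line does better than the Taylor line here, which is the point: Taylor is only tuned to behavior near one point.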
Yes. This is what I meant when I said "But that's just one way of looking at the error". You can "define" your error to be whatever you want (as long as it's logical). You can define the error to be the sum of the absolute values of the differences (like I first said), but then you run into nondifferentiability problems because of the absolute value, and hence the minimization/maximization tools of calculus cannot be applied. That is why one uses least squares: so the error is differentiable.
I think that has already been done. You take the sum of the squares of the errors; this is called least squares. Squares are used because they are simpler than absolute values for computation.
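A small numeric sketch of that computational convenience (the data values are made up): the sum of squared errors is differentiable, so setting its derivative to zero gives the mean in closed form, whereas the sum of absolute errors is not differentiable at the data points and is minimized by the median instead.

```python
data = [1.0, 2.0, 3.0, 10.0]

def sse(c):
    """Sum of squared errors: differentiable in c, so calculus applies.
    d/dc sum((x - c)^2) = -2 sum(x - c) = 0 gives c = mean(data)."""
    return sum((x - c) ** 2 for x in data)

def sae(c):
    """Sum of absolute errors: not differentiable at the data points."""
    return sum(abs(x - c) for x in data)

mean = sum(data) / len(data)  # closed-form least-squares minimizer
median = 2.5                  # any point between 2.0 and 3.0 works here

print(sse(mean), sae(mean), sae(median))
```

The mean minimizes the squared error but not the absolute error; the median does better for the latter, and there is no closed-form calculus route to it.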
Me neither. I was just shooting ideas of what came to mind.
But that's just one way of looking at the error, I guess.
First of all, I didn't say "the error OF the whole interval", I said "the error ON the whole interval". My idea was: I know what the error between F and L is at a single point. Then I wondered: how do I measure the error between F and L over the whole interval I? A plausible answer (although of course it may be wrong, hence why I'm asking here) was to take the error between F and L at every single point of the interval and add them up (this is exactly the integral over I of |F − L|). This is then what I will call the error between F and L over the interval I. Then, by looking at this integral, I'm assigning an "error" over the whole interval I to the line L.
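That integral is easy to check numerically. A minimal sketch (the function x², the point c = 1, the competing slope, and the interval are my own choices for illustration) computing ∫_I |F − L| for the tangent line and one other line through (c, F(c)):

```python
def interval_error(f, line, a, b, n=10_000):
    """Approximate the integral of |f(x) - line(x)| over [a, b]
    with a midpoint Riemann sum."""
    dx = (b - a) / n
    mids = (a + (i + 0.5) * dx for i in range(n))
    return sum(abs(f(x) - line(x)) for x in mids) * dx

f = lambda x: x * x
tangent = lambda x: 1 + 2.0 * (x - 1)  # tangent line at c = 1 (slope f'(1) = 2)
other   = lambda x: 1 + 2.5 * (x - 1)  # a different line through (1, f(1))

err_t = interval_error(f, tangent, 0.9, 1.1)
err_o = interval_error(f, other, 0.9, 1.1)
print(err_t, err_o)
```

On this small interval the tangent's total error is far smaller than the other line's, matching the intuition of "adding up the pointwise errors".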
Hi;
Hello,
That is exactly how I do it.
To be simple, take the case of a function from R to R and a point c in R. What does it mean exactly that the derivative gives THE best linear approximation? I guess it means that, in the set of all lines passing through (c, f(c)), the line with slope f'(c), i.e., the tangent line, is the one that provides the best approximation on a small interval I around c. However, how do we decide when one line is a "better approximation" than another? Of course, we must speak of the error between the function values and the values of the line on the interval I, but how do we formalize this?
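One standard way to formalize "best": the tangent is the only line through (c, f(c)) whose worst-case error on [c − h, c + h] shrinks faster than h itself, i.e. max |f(x) − L(x)| = o(h) as h → 0; any other line's error stays proportional to h. A quick numeric sketch of this (f = eˣ, c = 0, and the rival slope 1.1 are my own illustrative choices):

```python
import math

f = math.exp
c = 0.0
f_c = f(c)        # f(0) = 1
slope_tan = f(c)  # f'(x) = e^x, so f'(0) = 1

def max_error(slope, h, n=1001):
    """Max |f(x) - line(x)| over [c - h, c + h], where the line passes
    through (c, f(c)) with the given slope (sampled on a grid)."""
    ts = [h * (2 * i / (n - 1) - 1) for i in range(n)]
    return max(abs(f(c + t) - (f_c + slope * t)) for t in ts)

# Tangent error shrinks like h^2, so error/h -> 0 as h -> 0;
# any other slope m leaves error/h -> |m - f'(c)| instead.
for h in (0.1, 0.01, 0.001):
    print(h, max_error(slope_tan, h) / h, max_error(1.1, h) / h)
```

As h shrinks, the tangent's error/h ratio goes to zero while the other line's ratio settles near |1.1 − 1| = 0.1; that limit statement is exactly the definition of differentiability rearranged.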