Discussion about math, puzzles, games and fun.
 

#1 2010-06-18 13:00:45
Infinite Iterations

A few weeks ago I had an interesting idea: an infinite iteration. Well, I guess I can't say I had the idea; I'm sure someone has researched it before under a different name (if you have any idea what that name is, please let me know). I googled it and couldn't find anything, so I decided to make it my "project".

I wasn't really sure exactly how to tackle this problem. It's really weird. I had done a couple of things with looking at patterns, but then I came up with an efficient way. My idea for solving this is, ironically, tied to the zeros of polynomial functions and Newton's Method. It stands to reason that for any function A (regardless of how we define A), Newton's Method

x_{n+1} = x_n - A(x_n)/A'(x_n)

will always find a zero, if it exists. So we can write the identity

lim_{n→∞} x_n = r,  where A(r) = 0.

For the sake of simplicity, let us say N(x) = x - A(x)/A'(x). So I can convey this idea, I have invented notation: $^n g(x) means g applied to x n times, and $^∞ g(x) is the limit of that as n → ∞. Note that $^{n+1} g(x) = g($^n g(x)). Alternatively (in my notation) we can express Newton's Method as:

A($^∞ N(x)) = 0.

Now let's say we wanted to find $^∞ f(x) for any function f. If we make a substitution of f(x) for N(x) = x - A(x)/A'(x), we see that A($^∞ f(x)) = 0. So it is apparent that the infinite iteration of f lies at the zeros of A. Since we made that substitution above, we need to know what A is in terms of f. Setting them equal gives us:

f(x) = x - A(x)/A'(x)  ⇒  A'(x)/A(x) = 1/(x - f(x)).

This is a first-order forcibly exact differential equation! Solving for A gives us:

A(x) = C·e^{∫ dx/(x - f(x))}.

Using our identity above we see that A($^∞ f(x)) = 0. Dividing out C and taking the ln of both sides leaves:

∫^{$^∞ f(x)} dx/(x - f(x)) = -∞.

I have tested this identity on as many functions as I can and have yet to find a counterexample, although my proof here is sketchy. If rewritten properly, I do not believe I have made any errors. I want to name this "Alec's Identity", but I'm sure it probably already exists. If you know of something like this (or even this very formula) that already exists, let me know. For now I'm pretty much just jumping up and down with excitement; I'm a bit overly enthusiastic about math. If this is in fact something new, what can I do with it? I'm only 15. Should I publish it? Is it not unique enough to be published?
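Here is a quick numeric sanity check I can sketch (in Python; the function names are my own, and I'm using f(x) = cos(x) as the test case):

```python
import math

def infinite_iteration(f, x0, steps=1000):
    """Apply f over and over (my "$^inf f") and return the limiting value."""
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

f = math.cos
q = infinite_iteration(f, 0.5)

# The identity predicts the integrand 1/(x - f(x)) blows up exactly at q,
# which means q - f(q) = 0, i.e. q is a fixed point of f.
print(q)         # ~0.739085 for cos
print(q - f(q))  # ~0.0
```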
Am I crazy for even thinking this hasn't been done before? What do all of you guys think?

P.S. I'm sorry about my spelling and grammar. English is not my strong suit.

Twitter: http://twitter.com/AlecBeta Blog: http://AlecBeta.us.to

#2 2010-06-18 14:17:41
Re: Infinite Iterations

Hi aleclarsen12;

In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

#3 2010-06-20 10:20:53
Re: Infinite Iterations

I am saying that for any function f, the infinite iteration (fixed point?) will be the solution for Q in

∫^Q dt/(t - f(t)) = -∞.

This holds true with f(x) = cos(x). Consider that, since an integral is just an infinite summation of infinitesimals, in order for it to diverge toward negative infinity either one of the "pieces" is ∞ or each piece is not in fact infinitesimal. We see with this integral that, since the dependent variable t is in the denominator, in order to "overpower" the infinitesimal the denominator must become 0. So what value of t causes the denominator to be 0? We know this must be the value for Q, because this is the only time one of the pieces can diverge. We don't have to worry about the lower bound of the integral, because we know that the solution at that point is finite and is "sucked in" to the ∞.

I'm sorry my explanation is so bad. I have never formally taken a Calculus class. As a result, virtually everything I know about upper-level math beyond Algebra II I have had to teach myself from articles I have read online or any textbooks I've been able to borrow. On top of that, I have a hard time remembering the names of theorems, postulates or axioms (Geometry was a nightmare), so I end up inventing my own terminology. Don't hesitate to let me know if I need to clarify something or if I'm just outright wrong. Thanks!

Twitter: http://twitter.com/AlecBeta Blog: http://AlecBeta.us.to

#4 2010-06-20 11:09:47
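To make the "denominator must vanish" argument from post #3 concrete, here is a small Python sketch (my own throwaway check, not part of the original posts): it finds the zero of t - cos(t) by bisection and compares it with plain iteration of cos.

```python
import math

def bisect(g, lo, hi, tol=1e-12):
    """Root of g on [lo, hi] by bisection; assumes g changes sign there."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Zero of the integrand's denominator t - cos(t):
root = bisect(lambda t: t - math.cos(t), 0.0, 1.0)

# Plain iteration of cos reaches the same value:
x = 0.5
for _ in range(1000):
    x = math.cos(x)

print(root, x)  # both ~0.739085
```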
Re: Infinite Iterations

Hi;
This is not necessarily true. Take for instance the sum and the integral.
I can't recommend inventing your own, but as your sophistication increases that will disappear by itself. I don't do much pulling people up on their terminology. Frankly, I am not qualified to do so; there are others here who are much more qualified for that.

#5 2010-06-21 12:16:37
Re: Infinite Iterations

That example is different because it is not the integral itself that diverges; it is the limit of it that does. If we use my "made-up method" of figuring out when the actual integrand diverges, we see that the infinitesimal is "overpowered" when the denominator is zero. This turns out to be true! Evaluating the integral and substituting that value does in fact cause the function to diverge. Again, I'm sorry about my terminology. This makes sense in my mind, but I can't come up with a good way to explain it. Are you following what I am saying?

Also, no, I do not use Maple. I chose the "$" notation because it was convenient to type into LaTeX. When I'm working in my notebook I usually draw an upside-down delta to represent an infinite iteration and an upside-down delta with a number written inside to show a bounded iteration. Again, all of this is invented notation. I'm sure there is probably a real way of doing it... I'm just not aware. I do use WolframAlpha to help me do a lot of my integral evaluations (because let's face it: I'm lazy).

Twitter: http://twitter.com/AlecBeta Blog: http://AlecBeta.us.to

#6 2010-06-21 12:53:32
Re: Infinite Iterations

Hi Alec;

#7 2010-06-21 13:05:13
Re: Infinite Iterations

Right. I'm not denying that the integral does diverge at infinity. I'm saying that it is not a counterexample to my statement, because when you place the infinity in the bounds it is the limit causing divergence, not the integral itself. Using my previous statement, we see that the integral in this case diverges at x = 0, because when we divide by 0 the infinitesimal is "overpowered". Thus we are left with a non-infinitesimal value, causing the integral (at that point) to diverge. We can see this holds true by evaluating the integral and substituting zero into the natural logarithm. It does in fact diverge toward ∞ (as my statement predicted). I'm not questioning the validity of the Zeta Function.

Last edited by aleclarsen12 (2010-06-21 13:06:06)

Twitter: http://twitter.com/AlecBeta Blog: http://AlecBeta.us.to

#8 2010-06-21 13:48:18
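The "substituting zero into the natural logarithm" step can be seen numerically with a throwaway Python check (my own, not from the original posts): the integral of dt/t from eps to 1 equals -ln(eps), which grows without bound as eps shrinks toward 0.

```python
import math

# The antiderivative of 1/t is ln(t), so the integral of dt/t from eps to 1
# is -ln(eps): it blows up as eps -> 0+, which is exactly the divergence
# caused by the denominator hitting zero.
for eps in (1e-2, 1e-4, 1e-8, 1e-16):
    print(eps, -math.log(eps))
```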
Re: Infinite Iterations

Some corrections about statements involving Newton's method:

This is not true. Quite clearly, the function has to be differentiable, but of course I assume that was implied. More importantly, Newton's method will only work locally. For one such function there is only one zero, at x = 0. For x_0 < 0.5, Newton's method will converge. For x_0 > 0.5, it will diverge to infinity. For x_0 = 0.5, it will oscillate between 0.5 and -0.5, never converging. The best you can say about Newton's method is the following: convergence is guaranteed if g is a differentiable function with continuous derivative, x_0 is close enough to the zero, and the derivative at that point is nonzero.

This is how I think you want to define \tilde{f}:

\tilde{f}(x) = lim_{n→∞} f^n(x), where f^n is f composed with itself n times.

Now it should be clear such a limit doesn't always exist; indeed it would be quite rare for most functions.

I'm not sure what A is; you introduce it without saying anything about it. Is that a recursive definition? An infinitely recursively defined function is not well-defined. Then you have the differential equation involving A which you solve, and this would seem to imply that the above is not the definition of A. So what is it?

Next, you assume that A has a zero. But if A had a zero, then your work in solving the differential equation would be entirely invalid! You can't simply assume that A has a zero when your work requires you to divide by that zero.

As for your final identity, I don't know if I am reading the symbols correctly, but it doesn't seem to work for f(x) = x at the point x = 1.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

#9 2010-06-21 13:57:29
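The local-convergence point is easy to see numerically. A Python sketch (my own; I use f(x) = arctan(x), a standard example of this behaviour, where the threshold sits near 1.39 rather than 0.5):

```python
import math

def newton(f, df, x0, steps=60):
    """Run Newton's method from x0; return the final iterate, or inf on blow-up."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
        if abs(x) > 1e12:
            return float('inf')   # treat as divergence
    return x

f = math.atan
df = lambda x: 1.0 / (1.0 + x * x)

print(newton(f, df, 0.5))  # converges to the root at 0
print(newton(f, df, 2.0))  # blows up: started too far from the root
```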
Re: Infinite Iterations

Many functions will give counterexamples: e^x with x = 2; then \tilde{f} = ∞, so your identity cannot hold there.

#10 2010-06-21 14:01:10
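Checking the e^x counterexample with a quick Python run (my own throwaway check): the iterates explode almost immediately, so the limit has no finite value.

```python
import math

# Iterate f(x) = e^x starting from x = 2: each application explodes,
# so the "infinite iteration" has no finite limit here.
x = 2.0
for _ in range(10):
    try:
        x = math.exp(x)
    except OverflowError:     # e^1618.17... is already past float range
        x = float('inf')
        break

print(x)
```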
Re: Infinite Iterations

Hi;
Sorry, I forgot about that, as we started discussing other things. Ricky is absolutely right: Newton's iteration is somewhat notorious. It will not always converge to a zero. Sometimes Newton's will head off to the complex plane; other times it will oscillate. Still other times it will converge to extrema which are not roots. When functions have zeros that are multiple or extremely close together, Newton's method will have major problems.