
**aleclarsen12** (Member)
- Registered: 2008-06-01
- Posts: 36

A few weeks ago I had an interesting idea:

An infinite iteration. Well, I guess I can't say I had the idea; I'm sure someone has researched it before under a different name (if you have any idea what that name is, please let me know). I googled it and couldn't find anything, so I decided to make it my "project".

I wasn't really sure how to tackle this problem. It's really weird. I tried a couple of things with looking at patterns, but then I came up with an efficient approach. My idea for solving this is, ironically, tied to the zeros of polynomial functions and Newton's Method.

It stands to reason that for any function g

(regardless of how we define x_0) we will always find a zero, if it exists. So we can write the identity:

For the sake of simplicity, let us say

. So that I can convey this idea, I have invented notation. Note that . Alternatively (in my notation) we can express Newton's Method as:

Now let's say we wanted to find

for any function . If we make a substitution of for , we see that . So it is apparent that the infinite iteration of lies at the zeros of .

Since we made that substitution above, we need to know what

is in terms of . Setting them equal gives us:

This is a first-order, forcibly exact differential equation! Solving for

gives us:

Using our identity above we see that

. Dividing out and taking the ln of both sides leaves:

I have tested this identity on as many functions as I can and have yet to find a counterexample. My proof here is sketchy, but if it were rewritten properly, I do not believe it would contain any errors.
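The nesting idea is easy to sanity-check numerically. Below is a short Python sketch (the helper names `newton_step` and `iterate` are my own, and g(x) = x² - 2 is just a sample function, not one from the derivation above): repeatedly applying Newton's step drives x to a zero of g.

```python
# Repeatedly apply the Newton step N(x) = x - g(x)/g'(x).
# If the nested iteration settles down, its limit is a zero of g.

def newton_step(g, dg, x):
    return x - g(x) / dg(x)

def iterate(g, dg, x0, n=50):
    """Apply the Newton step n times, approximating the infinite iteration."""
    x = x0
    for _ in range(n):
        x = newton_step(g, dg, x)
    return x

# Sample function: g(x) = x^2 - 2, whose positive zero is sqrt(2).
root = iterate(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ≈ 1.41421356...
```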

I want to name this "Alec's Identity" but I'm sure it probably already exists. If you know of something like this (or even this very formula) that already exists let me know. For now I'm pretty much just jumping up and down with excitement. I'm a bit overly enthusiastic about math.

If this is in fact something new, what can I do with it? I'm only 15. Should I publish it? Is it not unique enough to be published? Am I crazy for even thinking this hasn't been done before? What do all of you guys think?

P.S. I'm sorry about my spelling and grammar. English is not my strong suit.

Twitter: http://twitter.com/AlecBeta

Blog: http://AlecBeta.us.to


**bobbym** (Administrator)
- From: Bumpkinland
- Registered: 2009-04-12
- Posts: 105,355

Hi aleclarsen12;

I believe the integral is correct. I just don't understand the result of the infinite nesting of the function. That should produce a fixed point, which is not necessarily a zero of the function. Try g(x) = cos(x). That should produce the fixed point 0.7390851332151606416... I mean, being a fixed point, it is a root of cos(x) = x, but not a zero of cos(x).
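Numerically this is easy to see; here is a minimal Python sketch of plain fixed-point iteration (my own code, not aleclarsen12's notation):

```python
import math

# Iterating x -> cos(x) settles at the unique solution of cos(x) = x
# (the Dottie number): a fixed point of cos, but not a zero of cos.
x = 1.0
for _ in range(200):
    x = math.cos(x)
print(x)  # ≈ 0.7390851332151607
```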

What are you saying here? What am I missing?

You can try googling for fixed points, to see if your stuff has been done before. There is a huge amount of literature on fixed points, so don't be discouraged if someone got there first.

For instance, Newton's iteration and other iterative schemes depend on fixed points to find roots of equations.

http://www.stat.uiowa.edu/ftp/atkinson/ … ec_3-4.pdf

Interesting facts about fixed points in geometry.

**In mathematics, you don't understand things. You just get used to them.**
**If it ain't broke, fix it until it is.**
**No great discovery was ever made without a bold guess.**


**aleclarsen12** (Member)
- Registered: 2008-06-01
- Posts: 36

I am saying that for any function

. The infinite iteration (fixed point?) will be the solution for in the integrand: . This holds true with f(x) = cos(x). Consider that, since an integral is just an infinite summation of infinitesimals, in order for it to diverge toward negative infinity either one of the "pieces" must be -∞ or each piece is not in fact infinitesimal.

We see with this integral that, since the dependent values are within the denominator, in order to "overpower" the infinitesimal the denominator must become 0. So what value of "t" causes the denominator to be 0?

We know this must be the value of Q, because this is the only time one of the pieces can diverge. We don't have to worry about the lower bound of the integral, because we know the solution at that point is finite and is "sucked in" by the -∞.

I'm sorry my explanation is so bad. I have never formally taken a Calculus class. As a result, virtually everything I know about upper-level math beyond Algebra II I have had to teach myself from articles I have read online or any textbooks I've been able to borrow. On top of that, I have a hard time remembering the names of theorems, postulates, or axioms (Geometry was a nightmare). So I end up inventing my own terminology. Don't hesitate to let me know if I need to clarify something or if I'm just outright wrong.

Thanks!


**bobbym** (Administrator)
- From: Bumpkinland
- Registered: 2009-04-12
- Posts: 105,355

Hi;

Yes, the integral is correct.

Consider that, since an integral is just an infinite summation of infinitesimals, in order for it to diverge toward negative infinity either one of the "pieces" must be -∞ or each piece is not in fact infinitesimal.

This is not necessarily true. Take for instance the sum and the integral.
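One standard instance: in the harmonic series ∑ 1/n, every term tends to zero, yet the partial sums diverge; they grow like ln(n), just as ∫ dx/x does. A quick Python check (the helper name `harmonic` is mine):

```python
import math

# Partial sums of the harmonic series 1 + 1/2 + 1/3 + ... + 1/n.
# Each "piece" 1/n shrinks to zero, yet the sums grow without bound,
# tracking ln(n) -- the same behavior as the integral of 1/x.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, harmonic(n), math.log(n))
```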

So I end up inventing my own terminology. Don't hesitate to let me know if I need to clarify something or if I'm just out-right wrong.

I can't recommend inventing your own, but as your sophistication increases that will disappear by itself. I don't do much correcting of people's terminology; frankly, I am not qualified to do so. There are others here who are much more qualified for that.

For theorems, go over the proof and try to understand it. Try to understand and remember what the theorem says. It is more valuable to be able to use the theorem when required. You don't have to memorize everything; that's what books and Wikipedia are for!

Back to your idea: I am equipped to evaluate and converse with you on the computational aspects of it. I have done some work with fixed points, cobwebs and such. I am less able to help out with correctness of jargon and concepts. I do think your ideas are interesting enough to discuss.

I noticed you use the sequence operator from Maple ( $ ) . Do you have access to a CAS and are you learning one?


**aleclarsen12** (Member)
- Registered: 2008-06-01
- Posts: 36

That example is different because it is not the integral itself that diverges; it is the limit of it that does.

If we use my "made-up method" of figuring out when the actual integrand diverges, we see that the infinitesimal is "overpowered" when

. This turns out to be true! Evaluating the integral and substituting that value does in fact cause the function to diverge.

Again, I'm sorry about my terminology. This makes sense in my mind, but I can't come up with a good way to explain it. Are you following what I am saying?

Also, no, I do not use Maple. I chose the "$" notation because it was convenient to type in LaTeX. When I'm working in my notebook, I usually draw an upside-down delta to represent an infinite iteration and an upside-down delta with a number written inside to show a bounded iteration. Again, all of this is invented notation; I'm sure there is probably a real way of doing it... I'm just not aware of it. I do use WolframAlpha to help me do a lot of my integral evaluations (because let's face it: I'm lazy).


**bobbym** (Administrator)
- From: Bumpkinland
- Registered: 2009-04-12
- Posts: 105,355

Hi Alec;

I think that the limit of that integral also diverges.


**aleclarsen12** (Member)
- Registered: 2008-06-01
- Posts: 36

Right. I'm not denying that the integral does diverge at infinity. I'm saying that it is not a counterexample to my statement:

Consider that, since an integral is just an infinite summation of infinitesimals, in order for it to diverge toward negative infinity either one of the "pieces" must be -∞ or each piece is not in fact infinitesimal.

Because when you place the infinity in the bounds, it is the **limit** causing divergence, not the **integral** itself. Using my previous statement, we see that the **integral** in this case diverges at

We can see this holds true by evaluating the integral and substituting zero into the natural logarithm. It does in fact diverge toward -∞ (as my statement predicted).

I'm not questioning the validity of the Zeta Function.

*Last edited by aleclarsen12 (2010-06-20 15:06:06)*


**Ricky** (Moderator)
- Registered: 2005-12-04
- Posts: 3,791

Some corrections about statements involving Newton's method:

It stands to reason that for any function g (regardless of how we define x_0) we will always find a zero, if it exists.

This is not true. Quite clearly, the function has to be differentiable, but of course I assume that was implied. More importantly, however, Newton's method will only work locally. For the function

There is only one zero, at x = 0. For x_0 < 0.5, Newton's method will converge. For x_0 > 0.5, it will diverge to infinity. For x_0 = 0.5, it will oscillate between 0.5 and -0.5, never converging.

The best you can say about Newton's method is the following: convergence is guaranteed if g is a differentiable function with continuous derivative, x_0 is close enough to the zero, and the derivative at that point is nonzero.
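A concrete illustration of that local failure (my own example): for g(x) = x^(1/3), the only zero is x = 0, but the Newton step simplifies to x → -2x, so every nonzero starting point flips sign and doubles in magnitude at each step. A short Python sketch:

```python
import math

# Newton's method on g(x) = cbrt(x): algebraically the step
# x - g(x)/g'(x) reduces to -2x, so the iterates oscillate in sign
# and diverge for any nonzero starting point.
def cbrt(x):
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def newton_step(x):
    g = cbrt(x)
    dg = (1.0 / 3.0) * abs(x) ** (-2.0 / 3.0)
    return x - g / dg

x = 0.5
trajectory = []
for _ in range(6):
    x = newton_step(x)
    trajectory.append(x)
print(trajectory)  # signs alternate, magnitudes double: -1, 2, -4, ...
```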

This is how I think you want to define \tilde{f}:

Now it should be clear that such a limit doesn't always exist; indeed, it would be quite rare for most functions. I'm not sure what A is; you introduce it without saying anything about it. Is that a recursive definition? An infinitely recursively defined function is not well-defined. Then you have the differential equation involving A, which you solve, and this would seem to imply that the above is not the definition of A. So what is it?

Next you assume that A has a zero. But if A had a zero, then your work in solving the differential equation:

would be entirely invalid! You can't simply assume that A has a zero when your work requires you to divide by that zero.

As for your final identity, I don't know if I am reading the symbols correctly, but it doesn't seem to work for f(x) = x at the point x=1.


**Ricky** (Moderator)
- Registered: 2005-12-04
- Posts: 3,791

Many functions will give counterexamples: take e^x with x = 2; then \tilde{f} = ∞ and


**bobbym** (Administrator)
- From: Bumpkinland
- Registered: 2009-04-12
- Posts: 105,355

Hi;

It stands to reason that for any function g (regardless of how we define x_0) we will always find a zero, if it exists.

Sorry, I forgot about that, as we started discussing other things. Ricky is absolutely right; Newton's iteration is somewhat notorious. It will not always converge to a zero. Sometimes Newton's will head off into the complex plane; other times it will oscillate. Still other times it will converge to extrema which are not roots. When a function has zeros that are multiple or extremely close together, Newton's method will have major problems.

I urge you to treat your idea as a fixed-point problem, where f(x) = x. Then you will be on firm mathematical ground.

