
**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

For any value "a" less than 1 but greater than zero, a^infinity = 0, correct?

Take a function like f(x) = [cos(x)]^infinity. Anywhere on the function except where f(x) = 1, the function will be zero, correct? But at x = 0, the function would JUMP to 1! (and at any values of x where cos x = 1) How can that be possible? There must be some midpoint on the graph between y = 1 and y = 0, but then there would be some number a less than 1 but greater than zero such that a^infinity does NOT equal zero! (CREEPYYYY!!!) So would the graph literally break itself? Or would there be some infinitesimal where the function has a value between 1 and 0? Hmm... wonder what [cos(1/infinity)]^infinity would be...

Another weird thing to think about which this topic brings up: the function cos(x) will fluctuate from 1 to -1. When it's -1, what will the function output be? (-1)^infinity? Is that -1 or 1? Is infinity odd or even?
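A quick numerical sketch of this (Python; the sample points are my own, not from the thread) makes the jump visible:

```python
import math

# For any fixed 0 < a < 1, a**n collapses toward 0 as n grows.
for a in (0.5, 0.9, 0.99):
    print(a, a**1000)

# At x = 0 we have cos(0) = 1, so [cos(x)]**n stays at exactly 1...
print(math.cos(0.0)**1000)

# ...but even slightly away from x = 0 the powers head toward 0.
print(math.cos(0.1)**1000)

# And (-1)**n never settles down: it alternates forever.
print((-1)**1000, (-1)**1001)
```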

*Last edited by mikau (2006-07-09 13:47:49)*

A logarithm is just a misspelled algorithm.

Offline

**MathsIsFun****Administrator**- Registered: 2005-01-21
- Posts: 7,608

Interesting! Play with it: Plot of cos(x)^1000 vs cos(x)^1001
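A rough numeric version of that comparison (Python; my own sample point, not MathsIsFun's plot): near cos x = -1, the even and odd exponents disagree in sign.

```python
import math

x = math.pi               # cos(pi) = -1
even = math.cos(x)**1000  # even power -> close to +1
odd = math.cos(x)**1001   # odd power  -> close to -1
print(even, odd)
```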

"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

Precisely! Just look at the difference (+1) can make! Now get out there and vote!

A logarithm is just a misspelled algorithm.

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

mikau, if you've come across this by just thinking about math (i.e. not reading it in a book or from a professor), then I am really impressed. Even more so since (I think) you've never taken a reals course.

As a disclaimer, we are talking about the real numbers here, and thus there do not exist any infinitesimals or values larger than infinity.

a^infinity = 0, correct?

Not to nit-pick or anything, and I'm pretty sure you know what you mean when you say the above, but it should read:

a^n approaches 0 as n approaches infinity

Which is, of course, correct for values of a between 0 and 1. Like I said, that isn't entirely nit-picking, as using that phrase will have value when we try more complex examples such as:

Take a function like f(x) = [cos(x)]^infinity. Anywhere on the function except where f(x) = 1, the function will be zero, correct? But at x = 0, the function would JUMP to 1!

Let's apply the same terminology:

f_n(x) approaches 0 as n approaches infinity at any point where f(x) != 1. This is called a sequence of functions. It works just like a sequence of numbers; just replace the numbers with functions.

And they are really *weird*. I mean really, really weird. And you just showed the weird thing about them:

How can that be possible? There must be some midpoint on the graph between y = 1 and y = 0, but then there would be some number a less than 1 but greater than zero such that a^infinity does NOT equal zero! (CREEPYYYY!!!) So would the graph literally break itself?

And yes, it sounds like there must be. But there isn't. The graph is not continuous. Let me introduce some terminology to make talking about these easier:

When you have a sequence of functions f_n(x), where n is a natural number, it is possible that, as n approaches infinity, the sequence of functions converges to one function. We call this function f(x).

So for example: f_n(x) = x/n. As n approaches infinity, f_n(x) approaches 0 for all values x, and so we say f(x) = 0.

Another example: f_n(x) = x - 1/n. As n approaches infinity, f_n(x) approaches x, so we say that f(x) = x.
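A sketch of these two examples in Python (the sample point x = 3 is my own, arbitrary choice):

```python
# f1_n(x) = x/n converges pointwise to 0; f2_n(x) = x - 1/n converges to x.
def f1(x, n):
    return x / n

def f2(x, n):
    return x - 1 / n

x = 3.0
for n in (10, 1000, 100_000):
    print(n, f1(x, n), f2(x, n))
```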

Now here is the weird part. It is possible for f_n(x) to converge to a function f(x), and to have f_n(x) be continuous for every single n, but to have f(x) be *discontinuous*. Weird, I know. Your cosine is an example of such a function.

The first function you named does exactly the same thing:

f_n(x) = x^n

For this function:

f(x) =

0 for 0 ≤ x < 1

1 for x = 1

diverges for x > 1

As you can see, it's discontinuous at 1.
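A numeric sketch of that discontinuity (taking n = 10,000 as an arbitrary stand-in for "large"):

```python
# f_n(x) = x**n for large n: essentially 0 on [0, 1), exactly 1 at x = 1.
n = 10_000
for x in (0.0, 0.5, 0.9, 0.999, 1.0):
    print(x, x**n)
```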

I'll post more on the subject tomorrow, if you would like. But I'll let that brew in your mind for now as it is late and I can't seem to find my Real Analysis book.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

As I always claim, using the concept of Real Infinity is using guesswork instead of logic, and is self-defeating.

**X'(y-Xβ)=0**

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

Are you saying limits shouldn't be used, George?

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

Thanks, Ricky! Yes I came up with this while eating dinner at a local pizza shop ( I said it before: I'm such a nerd) and I wanted to tear off my clothes and run down the street screaming "Eureka!" :-D

No I have not yet taken any courses on "reals".

I think what George,Y means is that we could never actually test this to see if the function would in fact JUMP; we can only use our reason to search for a logical conclusion, but really the topic is completely abstract.

And yes, I would like for you to post more on the subject. :-)

A logarithm is just a misspelled algorithm.

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

I mean the Real Infinity concept can only be "used" after Approaching method. Without approaching, equating approaching with being can cause fault.

As we discussed earlier, Ricky, the being is the part inferring approaching to be being after the approaching proof, always.

**X'(y-Xβ)=0**

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

Simply put, to mikau: do not use an independent, non-variable infinity or infinitesimal at first, and that will save you from lots of errors.

**X'(y-Xβ)=0**

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

Ok, I think I'm starting to understand what you are saying, George. It was "Without approaching, equating approaching with being can cause fault." that got me. I've never heard the words "Real Infinity" (nor can I find them used on the internet), so I think the language barrier was the thing that got me.

But you are, of course, right. It is a very dangerous pitfall to think that approaching means the same as being. The way I always describe limits is to say that it is what would happen if you ever got to infinity, but you can never get to infinity anyway. So in that sense, it is sort of useless, but what it tells you is what happens to a graph so that you don't have to write it out.

But anyways, back to sequences of functions. I should make a post on it later tonight, after I find that darn reals book.

"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

heheh!

Yeah, I know, but I didn't have enough energy at the time. Technically you should say the limit of 1/n as n approaches infinity is zero; I just didn't have the energy to say it, lol.

Anyways, you got my point. It is rather fascinating that you can make a non-piecewise function discontinuous without causing division by zero, or taking the square root or log of a negative number.

Good luck finding that book, Ricky! I await your post with great anticipation!

A logarithm is just a misspelled algorithm.

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

By putting reals in quotes, mikau, I take it you've never really heard of real analysis before. So let me take a brief minute to explain what it is. Newton started off calculus, an entirely new form (well, not really) of math. Then some more funky dudes with wigs on continued studying it. But it wasn't really till the 1800's that they started noticing weird things. There were all sorts of crazy functions being defined, and they had really weird properties. They started seeing contradictions in different problems; some theorems weren't working as expected. So a few guys sat down and said, "All right, let's take this back to square 1, and start off from the beginning, making sure we are careful." And so Real Analysis was born. It is called real analysis because it is the study of the real numbers and functions on the real numbers, trying to find out their properties and analyzing what they do.

To sum up, Real Analysis is basically "Careful Calculus", making sure everything has a rigorous definition. So you won't hear things like, "A continuous function is one I can draw without lifting up my pencil."

Ok, start off with some basic definitions:

Pointwise convergence: If f_n(x) converges to a point y as n approaches infinity, we say that it converges pointwise to y. In this way, f(x) is just taking all those y points and combining them into a function.

So consider f_n(x) = (x^2 + nx) / n.

Now I will prove an earlier statement. I will do the proof, and then an explanation of it. I suggest you read the proof, then the explanation, then go back and reread the proof.

Prove f_n(x) = x^n converges pointwise to 0 for all values 0 < x < 1

Proof: Let a_n be a sequence such that a_n = {a, a^2, a^3, ...}, for some number 0 < a < 1. Let ∈ > 0. Then what we must show is that there exists N in the natural numbers such that |a_n| < ∈ for every n ≥ N.

If we show that a_n is constantly decreasing and bounded below by 0 (i.e. does not contain a value less than 0), then this will be sufficient to show that such a number exists.

So first, we must show that a_n is decreasing. We do this by induction. Since a < 1, a*a < a (multiply both sides by a). So that is the base case. Assume that a_n > a_n+1. Now we must show that a_n+1 > a_n+2. Since 0 < a_n, and 0 < a, 0 < a_n*a, or rather, 0 < a_n+1. But since a_n > a_n+1, a_n*a > a_n+1*a since 0 < a, and so a_n+1 > a_n+2. Thus, 0 < a_n+2 < a_n+1, and so a_n is always decreasing.

Now we must show a_n is bounded below by 0. We already did that with 0 < a_n+2. So it holds that for every ∈ > 0, there exists some N in the natural numbers such that for every n ≥ N, |a_n| < ∈.

And now I claim that we're done. But what the heck did we just do? Well, we showed that if you go out far enough to the right (or rather, to a higher value of n), then for any positive real number you pick, there will be a point past which every single a_n is less than that number. So this means that if I pick ∈ = 0.00000000001, I can still find an N such that for all n's past it, a^n < ∈. And the same thing will happen if I pick ∈ = 0.00000000000000000000000000000000000001. And the same for ∈ = 10^-1000000. So this proves that the farther I go out to the right, the closer my sequence gets to 0, and thus, 0 is the limit of the sequence.

Now all that's left is to show that 1^n = 1 as n goes to infinity, which is easy enough. So we have shown that the function f(x) (what f_n(x) approaches) is discontinuous, even though every f_n(x) itself is continuous, as all polynomials are.
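The ε-N game in this proof can also be played numerically. A small sketch (Python; `first_N` is my own helper name, not from the thread) that, given ε, finds an N that does the job:

```python
import math

def first_N(a, eps):
    """Smallest N with a**N < eps, for 0 < a < 1, found via logarithms.
    From a**N < eps: N > log(eps)/log(a), since log(a) < 0 flips the inequality."""
    return math.floor(math.log(eps) / math.log(a)) + 1

a, eps = 0.9, 1e-11
N = first_N(a, eps)
print(N, a**N, a**(N - 1))  # a**N is below eps; one step earlier it was not
```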

I'm sure you must have some questions, so I'll stop here for now.

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

I probably would have questions if I understood it a little better. I'm just struggling to keep up. I've never really dealt with induction before.

It still seems somewhat incomplete. It does seem logical that a^infinity would equal zero in any case, but actually cos(1/10000)^10000 is very close to 1 and not zero. At a certain point, when n gets big enough, there should be a turning point; but perhaps if a certain ratio is maintained between the power and the infinitesimal distance from 1, it actually converges to a point between 0 and 1.

Think of it this way: a number like 0.99^n will reduce itself much more slowly than 0.5^n as n gets larger. For relatively low values of n it won't be reduced much, but for large values of n it becomes more significant. But what if the distance from 1 is determined in some way by n? Such as (1 - 1/n)^n. As n gets very large, the power may not necessarily have such a big impact, because as the power reduces the function more, the 1/n makes it even slower to reduce. Where is the balance? The limit. Invert the sign of the 1/n part and we get the definition of e: (1 + 1/n)^n.

If the distance from 1 is some function of the power, I do believe it may be possible that the closeness to 1 may override or balance out the reducing strength of the power. Maybe not with a^n, but perhaps with cos(1/n)^n.
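mikau's balance idea checks out numerically (Python; the sample n values are my own): (1 - 1/n)^n settles at 1/e, strictly between 0 and 1, while cos(1/n)^n heads to 1.

```python
import math

# (1 - 1/n)**n settles at 1/e, between 0 and 1:
for n in (10, 1000, 100_000):
    print(n, (1 - 1/n)**n)
print("1/e =", 1 / math.e)

# cos(1/n)**n tends to 1: cos(1/n) is roughly 1 - 1/(2*n**2), and that
# distance from 1 shrinks faster than the exponent n can exploit it.
for n in (10, 1000, 100_000):
    print(n, math.cos(1/n)**n)
```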

A logarithm is just a misspelled algorithm.

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

Ok, so let me take a few giant steps back. That first post was really to see how much you understand, and I think I got a pretty good idea.

For induction, it follows a simple pattern:

Base Case

Inductive Assumption

Show that, given the assumption, the statement is true for one more

The base case is simply giving an example. For this intro, I will use a simple set of numbers:

This simple set looks like {1, 2, 3, 4, 5, ...}

So a_0 = 1, a_1 = 2, and so on.

What we wish to prove is that a_n < a_n+1 for all n in the natural numbers. It seems pretty obvious, no? But how do we prove it?

First, note that 1 < 2. That is, a_0 < a_1. This is the base case. Now we make our inductive assumption. a_n < a_n+1. It seems like we are assuming the conclusion, but we aren't. I'll come back to this. Since a_n < a_n+1, it must be that (a_n) + 1 < (a_n+1) + 1. But we also know that (a_n)+1 = a_n+1 and that (a_n+1) + 1 = a_n+2. So it holds that a_n+1 < a_n+2.

Now I can claim that a_n < a_n+1 for all n in the naturals. And here's why. We first showed that a_0 < a_1. We also showed that *if* a_n < a_n+1, *then* it must be true that a_n+1 < a_n+2.

So since a_0 < a_1, it must be that a_1 < a_2. And since a_1 < a_2, it must be that a_2 < a_3. And since a_2 < a_3... and so on. Since this pattern must continue for all natural n, we have shown it to be true for all natural numbers.

Induction is extremely important in real analysis, as we always deal with sets where the elements have some sort of relationship to one another, just like the set {1, 2, 3, 4...}.

Here is a formal induction proof:

Prove that

for all n ∈ N.

Proof: This proof is by induction on n.

Note that

. Thus, the base case holds.

Assume that

. Now it must be shown that

, as we are just taking off the last number in the summation and writing it separately (remember the inductive assumption). And so

. Therefore, by PMI,

for all n ∈ N. (PMI stands for Principle of Mathematical Induction.)

Any questions about induction? If not, try a few good induction proofs.

Prove that

Prove that

Prove that for any number 0 < a < 1, a^n > a^(n+1).
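The third exercise can at least be spot-checked numerically before attempting the induction (a finite check only; the induction proof is what covers all n):

```python
# Spot-check the claim: for 0 < a < 1, a**n > a**(n+1).
for a in (0.1, 0.5, 0.99):
    for n in range(1, 50):
        assert a**n > a**(n + 1), (a, n)
print("spot-check passed")
```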

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

Probably it's not the language barrier, Ricky.

If a Real or Reached Infinity does not exist or is not valid, then

0.999...!=1

and any sum of a series is invalid, like the form of

1+1/2+1/4+1/8+...=2

Or the "=" here is not in the usual sense, and it's an "imaginary" equating.

**X'(y-Xβ)=0**

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

I think I understood most of that, Ricky. I mean, it makes sense that it would be bounded above by 1 and below by 0, and I have little interest in proving such an obvious fact, though most mathematicians do. (No offense or anything; your math knowledge is clearly quite extensive!)

I'm more interested in proving whether or not it could converge to some intermediate value. Like I said, cos(1/10000)^10000 is actually quite close to 1 and not zero, but this may be just because the exponent is not large enough to dominate over the slowing effect of extreme closeness to 1. Perhaps with a larger value the balance will shift, but what if the closeness to 1 is determined by a function of the power? The balance may never be able to shift! Perhaps there is some function "f" of x such that the limit of [1 - f(x)]^x = 0.5, where 0 < f(x) < 1 for all x.
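mikau's hunch is realizable once the base is allowed to depend on the exponent. A sketch (Python; the choice f(x) = ln(2)/x is mine, not from the thread): with it, (1 - f(x))^x tends to exp(-ln 2) = 0.5.

```python
import math

def g(x):
    # With f(x) = log(2)/x, (1 - f(x))**x -> exp(-log 2) = 0.5 as x grows.
    return (1 - math.log(2) / x)**x

for x in (10, 1000, 100_000):
    print(x, g(x))
```

Note this does not contradict the fixed-base result: letting the base vary with the exponent changes the problem, which is exactly Ricky's point in the reply below.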

A logarithm is just a misspelled algorithm.

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

[1 - f(x)]^x = 0.5

By making "n" (which is f(x) in your equation) depend on x, you are completely changing the problem. You have to set n in stone, so to speak, and then show that, as x approaches infinity, (1 - n)^x = 0.5. n can't vary with x. And again, I state that no such real number exists. I'll formulate a very clear and careful proof, possibly tomorrow.

0.999...!=1

and any sum of a series is invalid, like the form of

1+1/2+1/4+1/8+...=2

Language barrier, again, I think. By writing an infinite series, you are already implying that you are talking about limits, George.

Offline

**John E. Franklin****Member**- Registered: 2005-08-29
- Posts: 3,588

Thanks, I am gradually learning from you guys, but this equation has one mistake.

The summation grows one step ahead of the n up 2.

Maybe this is better?

**igloo** **myrtilles** **fourmis**

Offline

**John E. Franklin****Member**- Registered: 2005-08-29
- Posts: 3,588

One more thing. Are the following two things equivalent, with and without parentheses?

**igloo** **myrtilles** **fourmis**

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

I think your wording is misleading, Ricky. "Bounded below zero" seems to imply that the bound is some negative number. Should it be "bounded above zero"?

Also, you should prove that 0 is the maximum of all possible bounds, including -1, -0.5, and so on.

With this proof added, you can say that the series converges to 0.

To Mikau's question: an intermediate position is not possible, for any position of yours, like 0.000001, is just a good candidate for ∈; hence sufficiently large Ns will guarantee the sequence is always moving away from it.

I have another proof, on decreasing distance, to show it. But since it's invented by myself and no textbook I've ever seen has adopted it, I would like to see Ricky's proof first, to know if mainstream maths has already got a solution.

___________________________________

It's my mistake to use a blablabla without specification.

My "..." actually means the moving series, like the {0.9, 0.99, 0.999, ...} type.

I don't accept the concept that "all" of the digits are 9, because this "all" involves reached infinity.

**X'(y-Xβ)=0**

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

For x with 0 < x < 1, x^n converges to 0.

Proof of:

for any given number ∈ > 0, there exists a number N such that any n larger than N gives x^n < ∈.

x^N < ∈ requires N > log_x(∈) (the inequality direction flips because log x is negative), hence such an N exists.

Proven.

But it's not the proof I previously referred to.

**X'(y-Xβ)=0**

Offline

**mikau****Member**- Registered: 2005-08-22
- Posts: 1,504

Let's say the limit of f(x)^x as x approaches infinity is 0.5, and 0 <= f(x) <= 1 for all x, and f(x) is continuous...

Change the function: the limit of cos(x)^n as n approaches infinity. Since n in this case is not a function of x, a ratio is not maintained; but since cos(x) fluctuates from 0 to 1 and is continuous, at some point cos(x) must equal f(x) (as far as y height), since f(x) is also between 0 and 1 and continuous. And at that point, cos(x)^infinity should equal 0.5. It must be a value unfathomably close to 1, but it's still enough to suggest the function is continuous.

*Last edited by mikau (2006-07-14 04:52:57)*

A logarithm is just a misspelled algorithm.

Offline

**Zhylliolom****Real Member**- Registered: 2005-09-05
- Posts: 412

Consider this:

If we were to graph the values of [cos x]^n as n goes to infinity,

we would get 0 for every value of *x* except for *x* such that cos *x* = ±1, since ±1^∞ is indeterminate. Thus, we would have a line tracing the *x*-axis with point discontinuities at all *x* = nπ for n ∈ **Z**.

Offline

**Ricky****Moderator**- Registered: 2005-12-04
- Posts: 3,791

I think your wording is misleading, Ricky. "Bounded below zero" seems to imply that the bound is some negative number. Should it be "bounded above zero"?

Also, you should prove that 0 is the maximum of all possible bounds, including -1, -0.5, and so on.

With this proof added, you can say that the series converges to 0.

Right you are. Or rather, you would be, if there weren't a trick, which I get to below.

First, let's go over the definition of a limit as a function goes to infinity. Formal definition:

The limit of a function f(n) as n goes to infinity is L, if and only if for every real number ∈ > 0, there exists an N in the natural numbers such that for every n ≥ N, |f(n) - L| < ∈.

Now whether you know it or not, this is the definition you use when you do limits in calculus. But when you do calculus, you are using limit laws, which are proven through this definition.

Let's just do a simple example.

----------------------------------------------------------------------

Prove that f(n) = 1/n goes to 0 as n goes to infinity.

Proof: What we wish to show is that for every ∈ > 0, there exists an N in the naturals such that for all n ≥ N, |f(n) - 0| < ∈, or rather |f(n)| < ∈.

Let ∈ > 0. Also, let N > 1/∈. Then f(N) = 1/N. Since 1/∈ < N, it must be that 1 < N∈, and so 1/N < ∈ since both N and ∈ are positive. Since n ≥ N, it must be that 1/n ≤ 1/N, and so 1/n ≤ 1/N < ∈. Thus, 1/n < ∈ for every n ≥ N.

Therefore, f(n) = 1/n goes to 0 as n goes to infinity.

----------------------------------------------------------------------
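The proof's recipe can be executed numerically (Python; eps is an arbitrary sample of mine): pick any N > 1/∈ and check that 1/n stays below ∈ from there on.

```python
import math

eps = 1e-6
N = math.floor(1 / eps) + 1   # any N > 1/eps works
for n in (N, 2 * N, 10 * N):
    assert 1 / n < eps
print(N, 1 / N)
```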

Ok, not too bad right?

monotonely decreasing: a function such that if n > m, then it must be that f(n) < f(m). As the name implies, this is just a function that is always decreasing as the n's get higher.

Now it should be clear that if a function is monotonely decreasing and bounded below, then that function converges to some number, above or at that bound. It's a fairly simple proof, but plain to see as well. Ask me if you want to see it.

Also, it should be noted that if lim f(n) is L as n goes to infinity, then the lim f(n+1) as n goes to infinity is also L. This is a very easy proof because all you have to say is that n+1 > n. Again, ask me if you wish to see it.

Without further ado:

----------------------------------------------------------------------

Let 0 < a < 1 and let f(n) = a^n. The sequence a^n is monotonely decreasing (shown in my earlier post by induction) and bounded below by 0, so f(n) converges to some limit L as n goes to infinity. But lim f(n+1) = lim f(n) = L, and f(n+1) = a^(n+1) = a·a^n = a·f(n), so taking limits on both sides gives L = a·L. Thus L(1 - a) = 0, and since a ≠ 1, it must be that L = 0.

QED

Edit: And furthermore, something I just saw, is that this proof makes complete sense if a = 1, because then the limit doesn't have to be 0! Hah, don't you just love it when everything comes together so well?
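One way to see the "trick": if L is the limit of a^n, then a^(n+1) = a·a^n has the same limit, so L = aL, which forces L = 0 whenever a ≠ 1. A numerical illustration (my own sketch, with an arbitrary a and n):

```python
import math

# For large n, a**(n+1) and a * a**n agree, illustrating L = a*L in the limit.
a, n = 0.7, 200
lhs = a**(n + 1)
rhs = a * a**n
print(lhs, rhs)
assert math.isclose(lhs, rhs, rel_tol=1e-9)
```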

Offline

**George,Y****Member**- Registered: 2006-03-12
- Posts: 1,306

Great proof in LaTeX! Your proof

is ideal because it requires virtually no prerequisites.

Good effort, Ricky!

Concerning the claim that a series monotonely decreasing and bounded below has a limit: if your proof does not require a property of the real numbers, I would like to see it.

To Mikau: that is not a fair game. Even when you choose 0.999999999 as a, n can be large enough to lower a^n down to 0.0000000001 - the definition of limit is powerful enough to beat you.

*Last edited by George,Y (2006-07-15 03:03:50)*

**X'(y-Xβ)=0**

Offline