Now I understand my method can be used for harder problems. I'll give you this one:
You throw a die multiple times. What's the average number of throws required to obtain two consecutive equal numbers?
For example, one may have:
1 2 3 6 5 1 1 -> stop
I'm waiting for a solution...
For the second question, direct computation gives 7:
F[] := (r = n = 0;
  Do[n += 2 Random[Integer, {0, 1}] - 1;  (* one +1/-1 random walk step *)
   If[n == 0, r++], {i, 1, 100}];         (* count returns to 0 over 100 steps *)
  Return[r])
Mean[Table[F[],{i,1,10000}]]
Now here is my really simple argument:
OK. Call the average number of throws P.
Now look at this:
         +--> H  X (stop)
    +--> H
    |    +--> T
0 --+
    |
    +--> T
This is the beginning of the graph of our possible first cases - it begins at 0, then goes to H or T, each with probability 1/2, etc.
Now, here's my reasoning:
1) If on the first move you go to T (you throw a tail), which happens with probability 1/2, then on average you finish in P+1 throws in total (because you have already made one throw and this situation is exactly the same as at the beginning).
This adds (1/2)(P+1) to the expectation.
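(The quoted argument stops after case 1; presumably the remaining cases go the same way: with probability 1/4 you throw H then T and are back at the start after 2 throws, giving P+2 on average, and with probability 1/4 you throw H then H and stop after exactly 2 throws. Putting it together:

P = (1/2)(P + 1) + (1/4)(P + 2) + (1/4)(2) = (3/4)P + 3/2

so P = 6, in agreement with the corrected simulation further down.)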
My simulation is showing a convergence to 2.5.
CoinToss[] := (r = 2; previous = Random[Integer, {0, 1}];
  current = Random[Integer, {0, 1}];
  While[previous ≠ 1 && current ≠ 1,
   r++; previous = current; current = Random[Integer, {0, 1}]];
  Return[r])

sum = 0; Do[sum += CoinToss[], {i, 0, 100000}]; sum/100000.0
Ooh! Mathematica code!
That's fine!
You are the first person I've seen writing Mathematica code except me.
The problem is interesting, but I think something similar has been posted before...
mathsy is on the right track, but he must have a mistake somewhere...
[EDIT]Sorry mathsy, you're right, I'm wrong! (again)
I got the answer 6 in an interesting way; I'll post it.
To Ricky - what a mistake! It certainly is not easy to recognise, but here it is:
----
While[previous ≠ 1 &&
current ≠ 1,
-----
This should not be AND, this should be OR!
Now:
CoinToss[] := (r = 2; previous = Random[Integer, {0, 1}];
  current = Random[Integer, {0, 1}];
  While[previous ≠ 1 || current ≠ 1,
   r++; previous = current; current = Random[Integer, {0, 1}]];
  Return[r])
sum = 0; Do[sum += CoinToss[], {i, 0, 100000}]; sum/100000.0
which works just fine. And it gives 6!
There will be regions in which the convergence goes in the wrong direction, but these will only be extreme cases.
Actually, my method seems to work:

f(x, y) = sin(x) + cos(0.3x) - y = 0
∂f/∂x = cos(x) - 0.3 sin(0.3x)
∂f/∂y = -1

When you click a point (only do it above the graph), it uses the partial derivatives (normalised as a vector) to step through space until it crosses the graph, then calculates the distance from the start point. The slower you make it step, the more accurate the distance is.
http://denvish.net/ulf/010807/63500_ite.php
At the moment, I'm using Euler integration to step through space using the derivatives, but if I used a better method, it would reduce the number of steps needed for the same accuracy again.
I believe that under normal circumstances this is a lot faster than trying to find all (x,y) or (x,y,z) whose normal passes through the point in order to find the distance.
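(This is not luca's actual code, which is behind the link above - just a minimal C sketch of the stepping idea as I understand it: walk along the normalised gradient of f(x, y) = sin(x) + cos(0.3x) - y until the sign of f flips, then report the straight-line distance travelled. The step size, step cap and helper names are my own illustrative choices.)

#include <math.h>
#include <stdio.h>

static double f(double x, double y)  { return sin(x) + cos(0.3 * x) - y; }
static double fx(double x, double y) { (void)y; return cos(x) - 0.3 * sin(0.3 * x); }
static double fy(double x, double y) { (void)x; (void)y; return -1.0; }

/* Walk from (x, y) along the normalised gradient toward the curve f = 0
   in small Euler steps; stop when the sign of f flips and return the
   straight-line distance from the start point. */
static double distance_to_curve(double x, double y, double step, int max_steps)
{
    double x0 = x, y0 = y;
    double sign0 = (f(x, y) >= 0.0) ? 1.0 : -1.0;
    for (int i = 0; i < max_steps; i++) {
        double gx = fx(x, y), gy = fy(x, y);
        double len = sqrt(gx * gx + gy * gy);
        if (len == 0.0) break;                         /* flat spot: give up */
        x -= sign0 * step * gx / len;                  /* move so f heads toward 0 */
        y -= sign0 * step * gy / len;
        if (((f(x, y) >= 0.0) ? 1.0 : -1.0) != sign0)  /* crossed the curve */
            return sqrt((x - x0) * (x - x0) + (y - y0) * (y - y0));
    }
    return -1.0;                                       /* never reached the curve */
}

int main(void)
{
    /* a point above the graph, as in the applet */
    printf("%f\n", distance_to_curve(1.0, 3.0, 0.001, 1000000));
    return 0;
}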
Great! Fascinating!
It's really good, but the implicit surface is another thing. When you have the equation of the curve, that's the way. And one small thing: you said that if you click below the curve, the point goes down. So you need to determine in which direction you will go, because in general the curve may be pretty complex, with twists etc.
And one last piece of advice: you can change the step during the evaluation. For example, if your point is far enough away (f(x,y,z) much greater than 0), then you could use a bigger step.
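(One possible way to code that suggestion, reusing the f from the sketch above; the scaling and the clamping constants are illustrative assumptions, not anything stated in the posts.)

/* Scale the step with |f| (needs <math.h> for fabs): the point moves fast
   while far from the curve and slows down near it; clamp so it never
   stalls or overshoots badly. */
static double adaptive_step(double x, double y, double base_step)
{
    double s = base_step * fabs(f(x, y));
    if (s < base_step)         s = base_step;          /* lower bound */
    if (s > 100.0 * base_step) s = 100.0 * base_step;  /* upper bound */
    return s;
}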
To measure code by the number of lines is to measure a car by how simple its engine is.
OK. Let's say simplicity is one main thing; the other is effectiveness. We may write:
WORTH = SIMPLICITY*EFFECTIVENESS
But I want to apologize, because I realised it can be made much simpler from a computer scientist's point of view. We may use approximations of the derivatives.
For example, here's code for an approximate derivative:

double epsilon = 1e-8; /* very tiny, but big enough not to crash the double library */

double derivative(double (*f)(double), double x) {
    /* forward-difference approximation to f'(x) */
    return (f(x + epsilon) - f(x)) / epsilon;
}
That should give good numerical results. We could make it better if we use Taylor series, but let's leave this for now.
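(For what it's worth, here is a sketch of the Taylor-series improvement hinted at above, reusing epsilon from the snippet before; a central difference has error O(epsilon^2) instead of O(epsilon). The function name is my own.)

double derivative_central(double (*f)(double), double x) {
    /* central-difference approximation to f'(x): the first-order
       Taylor terms cancel, leaving an O(epsilon^2) error */
    return (f(x + epsilon) - f(x - epsilon)) / (2.0 * epsilon);
}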
OK. We have a differentiating method. Now what?
JaneFairfax, in her last post, pointed out the whole 2D theory needed for solving the problem.
We need a tiny generalization to tackle the 3D case.
But first, a little MATH:
Gradient of a two-variable function
typedef struct { double x, y; } Vector2D;

Vector2D gradient(double (*f)(double, double), double x, double y) {
    double f_x_y = f(x, y); /* value at the base point */
    Vector2D g = { (f(x + epsilon, y) - f_x_y) / epsilon,
                   (f(x, y + epsilon) - f_x_y) / epsilon };
    return g;
}
(sorry for the messy c-like language - I hope you understand)
Now, for our gradient (u, v), if we take u^2 + v^2, it will be zero only for (0, 0). This can be computed by:
double norm(Vector2D v) {
    return v.x * v.x + v.y * v.y;
}
Here we found an effective function to represent the derivative of the distance from the point to the surface, namely
norm(gradient(f,x,y))
and the question is... to find ALL its roots.
The only way I can think of requires O(n^2) time...
If you have a more effective way, for example O(1), the problem will be solved!
This is something like Mihailescu's theorem: http://en.wikipedia.org/wiki/Mih%C4%83i … 7s_theorem
We CAN implement differentiation of a function, then we CAN use Newton's method, for example, to find the roots, and we CAN compare them, but these CANs (especially the first one) will require much (MUCH!) more code than my six lines.
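(Just to make the comparison concrete, here is a minimal sketch of what that Newton's-method step might look like, reusing derivative() and epsilon from the snippets above; the tolerance and iteration cap are arbitrary choices of mine.)

#include <math.h>

double newton_root(double (*f)(double), double x0) {
    double x = x0;
    for (int i = 0; i < 100; i++) {
        double fx = f(x);
        if (fabs(fx) < 1e-10) break;     /* close enough to a root */
        double d = derivative(f, x);     /* numerical derivative from above */
        if (d == 0.0) break;             /* flat: the Newton step is undefined */
        x -= fx / d;                     /* Newton update */
    }
    return x;
}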
Good job!
Very interesting interpretation!
PS: "curve" in 3D is "surface".
Woah woah woah. Any time you can replace an O(n²) algorithm with an O(1) one, do it. Always use mathematical formulas rather than approximations.
I have to run out to dinner, but I'll make a post on this when I get back or possibly tomorrow morning.
I agree. But...
When you cannot, the O(n^2) way is better than nothing!
Or we may use some non-trivial approach using generating functions instead.
First we find the generating function of
...and A(a,b) means that the coordinates of A are a and b, just as you may have seen r(O,R) - a circle with radius R and centre O.
If it were a surface z = f(x,y), I could create a distance function d(x,y) = (f(x,y)-pZ)^2 + (y-pY)^2 + (x-pX)^2,
differentiate it and solve for the (x,y) where the derivative equals 0, then choose the minimum of d(x,y) over those values, but that's not exactly a method that can be done generically, and for implicit equations too, on a computer.
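(A hedged C sketch of that distance function, just to make the notation concrete; f, pX, pY and pZ are assumed to be defined elsewhere, and the names are taken straight from the formula above.)

/* squared distance from the query point (pX, pY, pZ)
   to the point (x, y, f(x, y)) on the surface z = f(x, y) */
double d(double x, double y) {
    double dz = f(x, y) - pZ;
    return dz * dz + (y - pY) * (y - pY) + (x - pX) * (x - pX);
}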
Why should you differentiate? If you have the distance function, just make two "for" loops to estimate the minimum:
double min = DBL_MAX;  /* from <float.h>; li, lj are the search bounds, step is the grid spacing */
for (double i = 0; i < li; i += step)
    for (double j = 0; j < lj; j += step) {
        double d_i_j = d(i, j);
        if (d_i_j < min) min = d_i_j;
    }
It makes me laugh!
Will it beat Mathematica AND Maple?
I don't think so.
No, no, no - it's good, but it certainly won't beat them.
OK. For example, consider the current topic in the integral marathon - can MM3 tell you that
theoretically-experimental - agrees with the theory AND with the experiments.
Complex analysis, you say? Does that mean it has an elementary solution that can be evaluated to produce a real number result?
No, the solution involves complex numbers and the polylogarithm function
"Is the Gamma function something we just accept (like Taylor series) because it's too difficult to understand..."
Noo!
In mathematics we don't "just accept" things! Sometimes things can seem complicated at first look, but they are built with concrete axiomatic mathematical logic. Taylor series may look weird, but they are something very fundamental - the extensions of functions to complex values are defined mostly by them!
The Gamma function isn't an exception. It's just confusing at the beginning because there are new terms. And I think it would be better if you start exploring the Gamma function with some more user-friendly text. I'm not saying that Wolfram's page is not good - no, but it's mainly for reference and it's highly formalized.
I was at a programming school recently.
One of the topics was TOH - someone wrote the program and ran it, and it generated a 20 GB output file!
OK. This is interesting.
I'm going to ask you to explain something, but if you already know it - don't tell!
There's an amazing fact: in tables of statistical data, the digit 1 is the first digit in about 30% of the cases!
You may think this is not true.
But wait a minute - I'm not lying to you!
I'm asking you to explain this strange phenomenon.
What is this two-variable Gamma?
Is it some extension of the original Euler Gamma?
mikau, here's my point of view:
In an old calculus book the complex numbers were explained not as numbers of the form a+bi, but as ordered pairs (a,b) with some basic definitions.
For example (a,b) + (c,d) = (a+c, b+d), etc.
With the complex numbers we want to "extend" the reals, but if we have a real function, we may extend it to the complex numbers in many ways.
So we may want to preserve some of the properties of the function.
It turns out that the Taylor expansion is a pretty important characteristic of a (differentiable) function. Many complex-valued functions are defined by a Taylor series. We use the same series for the real and the complex function, because it preserves the function's properties.
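(A standard illustration of that point, not from the original post: the exponential is extended to complex numbers by keeping its real Taylor series,

e^z = 1 + z + z^2/2! + z^3/3! + ...

which agrees with the usual exponential when z is real and defines e^z for every complex z.)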
But there are, of course, many functions which are defined over the complex plane in other ways and have interesting properties - the zeta function and others.
You can't just borrow and put things like that.
After borrowing you should manipulate it a little bit and fit it to your case, so nobody realizes what happened.
What fixed point theory?
Do you mean a fixed point theorem for functions in some weird mathematical spaces (such as the Banach fixed point theorem)?
If you want the area of this, it's simple:
You don't have to use arctan in the integral, just for a point.
The point where the line and the tan curve intersect has x-coordinate arctan(3).
Now if you draw a vertical line from this point, the area of the part below the tan curve will be exactly
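(The post stops mid-sentence; here is a hedged worked computation under the assumption that the area meant is the region under y = tan x from x = 0 up to that intersection point x = arctan 3 - the bounds are my guess, not something stated in the thread.

Integral of tan x dx = -ln|cos x| + C, and cos(arctan 3) = 1/sqrt(10),
so the area from 0 to arctan 3 is -ln(1/sqrt(10)) = (1/2) ln 10 ≈ 1.151.)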