Math Is Fun Forum

  Discussion about math, puzzles, games and fun.

#1 2009-07-05 09:05:26

juriguen
Member
Registered: 2009-07-05
Posts: 59

Least Squares minimization problem

Hi everyone


I am new to this forum, but I expect to be posting quite often! Hopefully not always about my own doubts :)


Let's get into the problem. I am trying to understand an algorithm developed by P. Debevec in the paper "Recovering High Dynamic Range Radiance Maps from Photographs" (the title is just for anyone who is curious).

The fact is that I have to code it, which is rather easy, since the author provides the main (and most complex) routine, which solves the problem of minimizing an objective function of interest.

However, what I cannot completely understand is how he derives the parameters of the final system of equations from the objective function.


First things first :) The function in the paper is the following:

O = \sum_{i=1}^{N} \sum_{j=1}^{P} \left\{ w(Z_{ij}) \left[ g(Z_{ij}) - \ln E_i - \ln \Delta t_j \right] \right\}^2 + \lambda \sum_{z=1}^{254} \left[ w(z) \, g''(z) \right]^2

The first term (the double sum) is, according to the author, the data-fitting function to minimize, while the second term is a smoothing factor. Both are weighted by w() to give less credit to the extreme values of z (or Z).

In short, z, or Z_ij, are the pixel values of an image for a certain color channel. Maybe that much information is not important: they are simply integer values in [0, 255].

Then, i indexes the pixel within a photo (the photos are matrices, but they have been flattened into arrays by placing one row after another).

Also, j indexes the photo. We actually have a set of P exposures of the same scene, or P photos, all of size N pixels, stored in Z as an (N×P) matrix.

Moreover, the author introduces the ln() so as to be able to separate the unknowns E_i from the known exposure times \Delta t_j (inside the logarithm, the product becomes a sum).

The next relation is also known, and is already used in the cost function:

g(Z_{ij}) = \ln E_i + \ln \Delta t_j

which was actually derived from the expression:

Z_{ij} = f(E_i \, \Delta t_j)

by introducing ln() and calling

g = \ln f^{-1}

So, to conclude, does anybody know how to solve the least squares problem given by the O function? The unknowns are both g(Z_ij), or call them g_ij, and ln(E_i), or x_i.

My problem is that some unknowns carry the double index, while others carry only the index i. Moreover, the smoothing equations are a weird "addition", in the sense that I don't know why they are used, but they make the solution better.


As a clue, the author seems to arrive at the following solution:

w(Z_{ij}) \left[ g(Z_{ij}) - \ln E_i \right] = w(Z_{ij}) \ln \Delta t_j

\lambda \, w(z) \left[ g(z-1) - 2 g(z) + g(z+1) \right] = 0, \qquad z = 1, \dots, 254

where the overdetermined system contains N×P linear equations coming from the minimization of the cost function, and 254 more equations introducing the smoothing factor.

Note also that the following approximation is used to work with the discrete values:

g''(z) \approx g(z-1) - 2 g(z) + g(z+1)

Thanks in advance,
Jose

PS: Hope this is not too long!
PS2: The LaTeX tutorial is really nice! First post and first LaTeX lines! :)

Last edited by juriguen (2009-07-05 21:13:46)


“Make everything as simple as possible, but not simpler.” -- Albert Einstein


#2 2009-07-06 03:32:38

Ricky
Moderator
Registered: 2005-12-04
Posts: 3,791

Re: Least Squares minimization problem

I don't have much time, so I must be brief.  If you need anything in more detail, let me know.

Look at the equation for your objective function.  If you knew the perfect function f, then the first summation would be zero.  So the typical trick is "let's pretend we have the perfect function f and see what happens".  In other words, the author sets the terms in the first summation to zero, and derives the first line of his solution.

But that isn't quite good enough.  There are millions of functions that could satisfy that, and this is where the second summation comes in.  The author wants a simple function, a function that is as linear as possible (linear is exactly the same thing as having a zero 2nd derivative).  To achieve the 2nd line in the author's solution, he sets this term to zero and then uses the discrete approximation.

Now minimizing O will define a function f that matches what you want, and is also going to tend to be linear.
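
To make that concrete, here is a rough Matlab-style sketch of how the two families of equations can be stacked into one overdetermined linear system and solved in the least squares sense. This is only my own illustration, not the routine from the paper: the hat-shaped weighting, the lambda argument and the extra row fixing g(128) = 0 (to pin down the arbitrary scale of the solution) are my assumptions.

[code]
% Sketch: recover g(z) and ln(E_i) from Z (N x P matrix of pixel values 0..255)
% and lnDt (P x 1 vector of log exposure times).
function [g, lnE] = solve_response(Z, lnDt, lambda)
    [N, P] = size(Z);
    w = @(z) min(z, 255 - z) + 1;            % assumed hat weighting (never exactly zero)
    n = 256;                                 % number of g(z) unknowns, z = 0..255
    A = zeros(N*P + 254 + 1, n + N);         % data rows + smoothness rows + one scale row
    b = zeros(size(A, 1), 1);
    k = 1;
    for i = 1:N                              % data equations: w*[g(Z_ij) - ln(E_i)] = w*ln(Dt_j)
        for j = 1:P
            wij = w(Z(i, j));
            A(k, Z(i, j) + 1) = wij;         % +1 because pixel value 0 lives in column 1
            A(k, n + i) = -wij;
            b(k) = wij * lnDt(j);
            k = k + 1;
        end
    end
    A(k, 129) = 1;                           % fix the scale: g(128) = 0
    k = k + 1;
    for z = 1:254                            % smoothness: lambda*w(z)*[g(z-1) - 2g(z) + g(z+1)] = 0
        wz = lambda * w(z);
        A(k, z) = wz;   A(k, z + 1) = -2 * wz;   A(k, z + 2) = wz;
        k = k + 1;
    end
    x = A \ b;                               % least squares solution of the overdetermined system
    g = x(1:n);
    lnE = x(n + 1:end);
end
[/code]

The backslash gives the least squares solution; pinv(A) * b, which uses the SVD, gives the same thing. For real image sizes you would of course only use a subset of the pixels, or build A as a sparse matrix.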


"In the real world, this would be a problem.  But in mathematics, we can just define a place where this problem doesn't exist.  So we'll go ahead and do that now..."


#3 2009-07-06 04:06:15

juriguen
Member
Registered: 2009-07-05
Posts: 59

Re: Least Squares minimization problem

Great!


Thanks a lot for your explanation, Ricky. I had suspected that the author was simply setting the first term in the O function to zero, for two reasons:


- The first, obviously, is that I had his solution, even if only as Matlab code, but it was straightforward enough, haha.


- The second is that, for an optimization problem, there was something "missing" in the formulation. I mean, if I rewrite the objective function like this:

O = \sum_{k} w_k \left( \mathbf{f}_k^{T} \mathbf{a} - y_k \right)^2

where \mathbf{a} stacks all the unknowns (the g values and the \ln E_i), y_k are the known \ln \Delta t_j, and w_k the corresponding weights, then it resembles the following regular-notation optimization problem:

\min_{\mathbf{a}} \; (F \mathbf{a} - \mathbf{y})^{T} W (F \mathbf{a} - \mathbf{y})

where the diagonal of W contains the weights, and F is a matrix whose columns contain the basis functions with which we want to fit the data.

However, what I considered "missing" is the set of basis functions, so F = 1.

In this case, the solution is clearly as expected,

\mathbf{a} = (F^{T} W F)^{-1} F^{T} W \mathbf{y}
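
Just to check my own understanding numerically, here is a tiny Matlab example of that regular weighted least squares problem; the basis functions, data and weights are completely made up.

[code]
% Toy weighted least squares: minimize (F*a - y)' * W * (F*a - y) over a.
F = [ones(5,1), (1:5)'];                 % basis functions: a constant and a linear term
y = [1.1; 2.9; 7.2; 9.1; 10.8];          % made-up observations
W = diag([1 1 4 4 1]);                   % diagonal weights: trust the middle points more

a  = (F' * W * F) \ (F' * W * y);        % weighted normal equations
a2 = pinv(sqrt(W) * F) * (sqrt(W) * y);  % same answer via the SVD (pinv)
[/code]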


Is this formulation correct? I have to explain it, and I would like to reuse the same development for another optimization problem :)


Regarding the smoothing equations, double thanks! I actually had no clue on that!


By the way, can you recommend any good source/reference, either on the Internet or a book, where I can learn more about this topic?


Jose


“Make everything as simple as possible, but not simpler.” -- Albert Einstein


#4 2009-07-07 01:59:50

Ricky
Moderator
Registered: 2005-12-04
Posts: 3,791

Re: Least Squares minimization problem

juriguen wrote: "Is this formulation correct?"

Yes.

juriguen wrote: "By the way, can you recommend any good source/reference, either on the Internet or a book, where I can learn more about this topic?"

Unfortunately, no.  Most of what I know about numerical analysis and optimization I have picked up as I went.  The only thing I could recommend is Numerical Recipes:

http://www.nr.com/

as this is a book my professor loved.  But I don't think it will teach you much about the subject; it will only give you popular algorithms.


"In the real world, this would be a problem.  But in mathematics, we can just define a place where this problem doesn't exist.  So we'll go ahead and do that now..."


#5 2009-07-07 09:23:17

juriguen
Member
Registered: 2009-07-05
Posts: 59

Re: Least Squares minimization problem

Thanks a lot again


The "Numerical Recipes" resource seems really interesting, so I will have a look at it. Furthermore, I would really like to learn some algorithm development in C/C++, so even better smile


But for now, I would like to "abuse" your knowledge a little further, if you have time to answer.

The problem I would like to relate to the first one in this thread is the following. It is a similar approach to the one explained above, but by another author (Mark A. Robertson, "Dynamic Range Improvement Through Multiple Exposures", if anybody is curious).


In this paper the author follows a statistical approach to address the same problem, but he uses a slightly different notation and does not introduce logarithms.

First of all, he arrives at the following objective function to minimize:

O(I, \mathbf{x}) = \sum_{i=1}^{P} \sum_{j=1}^{N} w(y_{ij}) \left[ I(y_{ij}) - t_i \, x_j \right]^2

where (note the indices swap roles with respect to the first paper) y_ij is the value of pixel j in exposure i, t_i is the known exposure time of exposure i, x_j is the unknown irradiance at pixel j, I() is the unknown inverse of the camera response f(), and w() is again a weighting function giving less credit to extreme pixel values.

He introduces the constraint that I will have the intermediate value I_128 = 1.


And the solution he proposes is obtained by a form of Gauss-Seidel relaxation, minimizing first with respect to I and then with respect to x. In my notation (the paper does not write iteration superscripts, so the (k) indexing below is mine), one sweep from iteration k-1 to k is:

1)  I_m^{(k)} = \frac{1}{|E_m|} \sum_{(i,j) \in E_m} t_i \, x_j^{(k-1)}, \qquad E_m = \{ (i,j) : y_{ij} = m \}

2)  I^{(k)} \leftarrow I^{(k)} / I_{128}^{(k)}

3)  x_j^{(k)} = \frac{\sum_i w(y_{ij}) \, t_i \, I^{(k)}(y_{ij})}{\sum_i w(y_{ij}) \, t_i^{2}}


My question is (since I am not familiar with optimization problems nor, therefore, with iterative solutions to them): is it possible to formulate this problem in a similar way to the first one I posted? Then I guess it would be possible to find the solution directly using SVD, as Debevec proposed for the first post's problem...


Thanks in advance
Jose

Last edited by juriguen (2009-07-07 20:37:41)


“Make everything as simple as possible, but not simpler.” -- Albert Einstein


#6 2009-07-08 11:41:37

Ricky
Moderator
Registered: 2005-12-04
Posts: 3,791

Re: Least Squares minimization problem

It looks like x_j is unknown, is that correct?


"In the real world, this would be a problem.  But in mathematics, we can just define a place where this problem doesn't exist.  So we'll go ahead and do that now..."


#7 2009-07-08 19:30:17

juriguen
Member
Registered: 2009-07-05
Posts: 59

Re: Least Squares minimization problem

Sorry for answering so late!

Exactly, in this case the unknowns are again the function, f() or its inverse I(), and x_j.

Thanks!


“Make everything as simple as possible, but not simpler.” -- Albert Einstein


#8 2009-07-09 08:09:30

Sarah12
Guest

Re: Least Squares minimization problem

Welcome to the forum. Hope you enjoy the forum!! :cool:

#9 2009-07-09 10:48:28

Ricky
Moderator
Registered: 2005-12-04
Posts: 3,791

Re: Least Squares minimization problem

juriguen wrote: "Is it possible to formulate this problem in a similar way to the first one I posted?"

In short: No.

In your first problem you are able to find a system of equations, and you can solve algebraically for the unknowns.  Here, the method of iterating is how you find the solution.  There is absolutely no need to try to convert this into an algebraic equation that you can solve.  Even if it were possible, even if it wasn't too hard to do, it would still probably be a bad idea.

You start out with some guess for the {x_j} elements, and then iterate.  After you do steps 1, 2, and 3 you will have a (hopefully) better set of {x_j} elements.  Then doing steps 1, 2, and 3 again will give you an even better set of {x_j}.  And you do it again, and again, and again, until you're satisfied with the result.  This is iteration.

Now you'll notice the "hopefully" in there.  It may end up that the new set of {x_j} that you get is actually worse.  If this happens, you typically try another guess and hope for the best.  The probability that your set {x_j} will get better is known as "stability": in a highly stable problem almost any guess will work.  In an unstable problem, you have to have a very good guess for iteration to work.

You measure whether or not your set {x_j} is "getting better" by plugging things in to your objective function.  If the result gets lower, great.  If the result gets higher, time to try a new guess.
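
Just to show the mechanics, here is a rough Matlab-style sketch of the kind of loop involved, using the notation from your post. The initial guesses, the particular weighting, the iteration count and the stopping rule are all my own arbitrary choices, not the paper's.

[code]
% Rough sketch of the relaxation. Assumed given: Y, a P x N matrix of pixel
% values 0..255 (P exposures, N pixels), and t, a P x 1 vector of exposure times.
[P, N] = size(Y);
w = exp(-4 * ((0:255)' - 127.5).^2 / 127.5^2);   % some weighting favouring mid-range values
I = linspace(0, 2, 256)';                        % initial guess for the inverse response
x = ones(N, 1);                                  % initial guess for the irradiances
Oprev = inf;
for k = 1:200
    for m = 0:255                                % step 1: minimize with respect to I
        [ii, jj] = find(Y == m);
        if ~isempty(ii), I(m + 1) = mean(t(ii) .* x(jj)); end
    end
    I = I / I(129);                              % step 2: normalize so that I_128 = 1
    for j = 1:N                                  % step 3: minimize with respect to x
        wj = w(Y(:, j) + 1);
        x(j) = sum(wj .* t .* I(Y(:, j) + 1)) / sum(wj .* t.^2);
    end
    O = sum(sum(w(Y + 1) .* (I(Y + 1) - t * x').^2));   % objective value: it should keep going down
    if abs(Oprev - O) < 1e-8 * O, break; end     % stop once the improvement is negligible
    Oprev = O;
end
[/code]

If O goes up instead of down, that is the instability I mentioned, and you would restart from a different guess.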

Now if you want an explanation of how the iteration works, I have a fairly good idea.  Except for step 2, that is; I haven't been able to figure out why that's in there.

Edit: There is an error in step 2, is there not?  The RHS should be "k-1" instead of "k", or alternatively, the LHS should be "k+1" instead of "k".


"In the real world, this would be a problem.  But in mathematics, we can just define a place where this problem doesn't exist.  So we'll go ahead and do that now..."


#10 2009-07-09 20:29:50

juriguen
Member
Registered: 2009-07-05
Posts: 59

Re: Least Squares minimization problem

Hi again!


I had an idea of how the iterative solution works, since the author explains it in his paper, and I have worked with similar problems in the past. However, I like to read a properly justified and more detailed explanation than the one given by the author, or than the vague idea I had.


Regarding the solution to the problem, I think steps 1 and 3 are the minimizations of the objective function with respect to I and x, respectively.

And the second step is a normalization condition imposed by the author. I know the superscript is strange in this case, but in the paper none is used, and I tried to write it down myself.

I will try to find a better way of typing it, but what I mean is that right after step 1, the solution found for I is normalized by its value at position 128. I could simply write a more complicated formula in step 1, dividing the expression by the same expression evaluated at m = 128. But I thought the way I wrote it was clearer, and I do not change k again, since I consider that the iteration changes from k-1 to k in the first step.
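
In other words, folding the normalization into step 1 would give something like this (again, my own way of writing it):

I_m^{(k)} = \frac{ \frac{1}{|E_m|} \sum_{(i,j) \in E_m} t_i \, x_j^{(k-1)} }{ \frac{1}{|E_{128}|} \sum_{(i,j) \in E_{128}} t_i \, x_j^{(k-1)} }

which is just step 1 divided by the same expression evaluated at m = 128.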


Hope this helps! I had already computed the solution using Matlab before posting my doubts, and compared it with the first problem I posted.

Both work pretty well, but I was wondering if I could explain them as having more characteristics in common, from a theoretical point of view. That's why I posted :)

Following your advice, I will explain each method separately!


Thank you again
Jose


“Make everything as simple as possible, but not simpler.” -- Albert Einstein


#11 2009-07-10 03:43:26

Ricky
Moderator
Registered: 2005-12-04
Posts: 3,791

Re: Least Squares minimization problem

Ah I see now, I was thinking they were two separate problems.

There is no way I would say one is by any means better than the other.  The iterative method can be unstable, so you may have to take a whole lot of guesses before something works.  But a system of equations can be unstable as well: a small change in the equations can result in a huge change in the solution, and this can cause many numerical errors and headaches.

N×P linear equations can be a huge system, and as far as efficiency goes it seems the iterative solution has it beat.  The other advantage of iteration is that you can choose how accurate you want your results to be.  The more times you iterate, the better the accuracy you get, but you may not need all that much accuracy.  In this respect, iteration is much more flexible than the linear-algebra approach.

Of course the other question is how their solutions compare, but I'll leave that to you.


"In the real world, this would be a problem.  But in mathematics, we can just define a place where this problem doesn't exist.  So we'll go ahead and do that now..."


#12 2009-07-10 08:17:47

juriguen
Member
Registered: 2009-07-05
Posts: 59

Re: Least Squares minimization problem

Hi again


I guess this may be the last message in this thread, since I now know what I needed, which is much more than I knew before posting :)

So thanks a lot for your help.


As far as I have tested, both methods have turned out to be quite stable and accurate. However, the first one, solving the overdetermined system of equations, has proven faster.

The reason is that the first proposed method finds the solution for the mapping function f, or g, using only a subset of the pixels, while the second iterates over all the values of f, or I, and finds the solution for all the pixels.

Obviously the first method still needs a final step after solving the system of equations to provide the values for all pixels, but it is still faster, even though I accelerated the iterative method as much as I could by embedding C MEX code in the Matlab program.
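
For reference, that final step is just a weighted average over the exposures for each pixel. A minimal sketch of what I do (my own code, reusing g, the N×P matrix Z and the log exposure times lnDt from the first problem; the simple hat weighting is just what I used for testing):

[code]
% Final step: once g is known, estimate ln(E_i) for every pixel as a weighted
% average over the P exposures. Z is N x P, lnDt is P x 1, g is 256 x 1.
[N, P] = size(Z);
w = @(z) min(z, 255 - z) + 1;          % hat weighting used for testing
num = zeros(N, 1);
den = zeros(N, 1);
for j = 1:P
    wj  = w(Z(:, j));
    num = num + wj .* (g(Z(:, j) + 1) - lnDt(j));
    den = den + wj;
end
lnE = num ./ den;                      % log radiance for all N pixels
[/code]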


Hope I will post again soon, and maybe not to ask, but to help!

Jose


“Make everything as simple as possible, but not simpler.” -- Albert Einstein

