
Crank-Nicolson scheme in Matlab

luc

Hi everyone,

I have a gradient descent problem of the following form:

\(\psi_{n+1} = \psi_{n} + \alpha\left(\nabla\psi_{n} * D^{2}\psi_{n}\right)\)

I am trying this on a 256x256 image grid where everything is uniformly spaced with dx = dy = 1, and I am using a step size of 0.5 with plain gradient descent.
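For concreteness, here is a minimal Matlab sketch of one such explicit update. It assumes (as clarified later in the thread) that the last term combines the gradient with the determinant of the Hessian; the product is taken here as gradient magnitude times det(D^2 psi) so the update stays scalar-valued, which may differ from what the paper intends:

% One explicit gradient-descent update on a 256x256 grid, dx = dy = 1.
% ASSUMPTION: the update term is |grad(psi)| .* det(Hessian(psi)).
psi   = rand(256);                   % placeholder initial estimate
alpha = 0.5;                         % fixed step size from the post

[gx, gy]   = gradient(psi);          % central differences in the interior
[gxx, gxy] = gradient(gx);
[gyx, gyy] = gradient(gy);
detH = gxx.*gyy - gxy.*gyx;          % Hessian determinant at each pixel
gmag = sqrt(gx.^2 + gy.^2);          % gradient magnitude

psi = psi + alpha * (gmag .* detH);  % one descent step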

Somewhere down the line the algorithm becomes very unstable: I see some artefacts appearing, then the whole thing falls apart and never converges.

Searching the internet, I found that people recommend the Crank-Nicolson scheme for these kinds of systems. However, I am having trouble formulating my problem in that scheme.

Would anyone know how I can structure this problem using the CN scheme? Also, is there a way to determine the optimal step-size parameter so as not to cause instability at each iteration?

Thanks,

Luca
 
What is the problem you are trying to solve?

It is not a time-dependent problem, so CN cannot be applied.
 
The problem is actually an image processing one, where the equation above is a gradient descent solution to a convex function.

As for the CN bit: can't the step size of the GD be viewed as the time parameter, since each iteration can be seen as a flow towards the solution?
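For reference, this is the analogy being proposed: view the update as an explicit Euler step of a gradient flow, with Crank-Nicolson as the trapezoidal alternative. A sketch only, writing \(F(\psi) = \nabla\psi * D^{2}\psi\); whether it applies here is exactly what is disputed below:

\[
\frac{\partial\psi}{\partial t} = F(\psi), \qquad
\psi_{n+1} = \psi_{n} + \alpha\,F(\psi_{n}) \quad \text{(explicit Euler, the current scheme)},
\]
\[
\psi_{n+1} = \psi_{n} + \frac{\alpha}{2}\bigl[F(\psi_{n}) + F(\psi_{n+1})\bigr] \quad \text{(Crank-Nicolson, implicit in } \psi_{n+1}\text{)}.
\]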

Thanks,
Luc
 
Hi,
Image processing is not my area, but are you solving a Laplace-Beltrami kind of elliptic equation?


I think you are using an iterative method (but with no time dependence?)

A link to the equation would be useful.
Daniel
 
Hi Daniel,

Thanks for the reply. Yes, the system is iterative but has no time dependence.

It is a convex reformulation of an old problem, and the equation is a gradient descent type of formulation; the last term involves the Hessian. The equation is in my original post, but here it is again:

\(\psi_{n+1} = \psi_{n} + \alpha\left(\nabla\psi_{n} * D^{2}\psi_{n}\right)\)

I am looking for advice on how to solve this. It is, I guess, a badly behaved convex problem: after some iterations (depending on the step-size parameter) I get rapid oscillations, which could be due to a CFL-type violation. I read somewhere that the Crank-Nicolson technique might be the way to go, but I do not know much about it and am having difficulty translating my problem into a CN framework.

Thanks,

Luc
 
Hello,

Thanks for the reply. So, the equation in my original post is set up as a gradient descent system and represents one iteration of the gradient descent.

\(\psi_{n+1}\) is the current estimate.
\(\psi_{n}\) is the previous estimate.
\(\alpha\) is the step size of the gradient descent. I arbitrarily set this to some value right now, and that could be a source of instability as well.
The last part of the equation just needs the gradient and the determinant of the Hessian of the variable \(\psi_{n}\).

What I was observing when I tried to solve this was that after some iterations I would start seeing rapid oscillations, which suggested to me that maybe something like a CFL violation was going on, and I was wondering if there was a more stable way to solve this system.

There is no time component here, but I was assuming that the step size of the gradient descent could be seen as a temporal variable, though that is probably stupid.

Maybe something like the Lax scheme is the way to go, but I am not too clear on that either. It seems that I should use central differences throughout (which is fine) and can use forward and backward differences at the edges, but I am at a loss as to how to choose the step size and how that would relate to the Lax or CN scheme.
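As a concrete illustration of that discretisation, here is a sketch of the x-derivative with central differences in the interior and one-sided differences at the two edges (with unit spacing, this is also what Matlab's built-in gradient() computes):

% Central differences in the interior, one-sided at the edges (dx = 1).
n  = size(psi, 2);
gx = zeros(size(psi));
gx(:, 2:n-1) = (psi(:, 3:n) - psi(:, 1:n-2)) / 2;   % central, interior
gx(:, 1)     =  psi(:, 2) - psi(:, 1);              % forward, left edge
gx(:, n)     =  psi(:, n) - psi(:, n-1);            % backward, right edge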

Thanks,

Luc
 
The analogy does not hold: step size, CFL and CN are not applicable at all. You seem to be solving a minimisation problem, but as far as I can see you do not confirm this.

Is your problem this?(!)

http://en.wikipedia.org/wiki/Gradient_descent

Then your 'step-size' is a parameter that you compute with a 1D solver, e.g. Brent's method.
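In Matlab terms, a sketch of that idea: minimise the cost along the current descent direction with fminbnd, Matlab's Brent-type 1D minimiser. Here J is a hypothetical handle to the cost being minimised and d the current descent direction; neither name comes from the thread:

% Choose the step by a 1D line search instead of fixing alpha.
% J: hypothetical cost handle; d: current descent direction.
phi1d = @(a) J(psi + a * d);      % cost along the search direction
alpha = fminbnd(phi1d, 0, 1);     % best step in [0, 1]
psi   = psi + alpha * d;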

I am sure Matlab has it.
 
Yes, you are right. Can you comment on the validity of using a Lax-type numerical scheme (I am not clear on what this entails here) to solve this problem?

I am guessing the Lax scheme is also not valid here, as there is no time component, but I should compute the derivatives and the Hessian using central differencing to ensure stability.

Thanks,
Luc
 
Let me ask a question:

What is the Lax scheme and how do you get to it?
Why would it be useful in this context?

I asked you a vital question in my previous post which needs to be answered.

Your problem statement is now ambiguous.
 
Hi,

Thanks for the reply. In this case, I know that the function is convex, so I just have to walk down the slope and the minimum I find will be the right one. In that sense, the only task is to get down the hill quickly.

I was using a simple gradient descent which I coded in Matlab. The step size was fixed, and I noticed some artefacts in the images after a few iterations. This made me think that the algorithm was numerically unstable. Also, it was taking too long when I made the step size smaller.

The Lax scheme was mentioned in the paper I am trying to implement, but the authors do not give much detail.

Would you happen to have any advice regarding that? The function is convex. What would be the best approach to find the minimum? I tried using fminunc in Matlab with this, but it just takes way too long. To give you an idea, it is a 512x512 image (so there is a \(\psi\) vector for each voxel...).
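One standard middle ground between a fixed step and a full fminunc solve is a backtracking (Armijo) line search: start from a generous step and halve it until the cost has decreased sufficiently. A minimal sketch, again with hypothetical names J (cost handle) and g (its gradient at psi), neither taken from the thread:

% Backtracking (Armijo) line search; d = -g is the descent direction.
c     = 1e-4;                     % typical sufficient-decrease constant
alpha = 1;                        % optimistic initial step
J0    = J(psi);
d     = -g;
slope = sum(g(:) .* d(:));        % directional derivative (negative)
while J(psi + alpha * d) > J0 + c * alpha * slope
    alpha = alpha / 2;            % shrink until sufficient decrease
end
psi = psi + alpha * d;

Each step then costs only a handful of cost evaluations, while a too-large step (the likely source of the oscillations above) is rejected automatically.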

Thanks,
Luc
 