
model validation

Lun

After developing the theoretical model, how is it validated?

Once we have the model, we implement it as a program. How is the program tested to make sure the computation is correct (bug-free)?

I wonder how these things are done in the industry.

Thanks !
 
Ken,

Are these actual practices in the industry? Or are they just proposed theories, with each company using its own approach?
 
Lun, it may be obvious, but let me re-emphasize this: every model is incorrect in reality, and accurate only insofar as its assumptions hold. Moreover, you can test the model's robustness by using a Monte Carlo simulation. I hope this helps.
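As an illustration of the kind of cross-check being discussed (my own sketch, not from the thread; all names and parameter values are illustrative): price a European call by Monte Carlo simulation and compare the result with the Black-Scholes closed-form price, which serves as an independent benchmark computed under the same assumptions.

```python
import math
import random


def bs_call(S, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)


def mc_call(S, K, r, sigma, T, n_paths=200_000, seed=42):
    """Monte Carlo price of the same call under lognormal (GBM) dynamics."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal stock price under risk-neutral GBM
        ST = S * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        payoff_sum += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths


analytic = bs_call(100, 100, 0.05, 0.2, 1.0)
simulated = mc_call(100, 100, 0.05, 0.2, 1.0)
# The two routes should agree to within Monte Carlo standard error;
# a large discrepancy would point to a bug in one of the implementations.
```

Because the two prices are computed by entirely different methods, agreement between them is evidence (though not proof) that both implementations are correct.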
 
I understand your points. My point is that human error is very common both when (1) we develop a model and (2) we write a program. How can we catch these mistakes through "testing"?

I mean, it's unrealistic to expect that we never make mistakes. The fact is we always make mistakes, and we need to find them. If there were no mistakes, Microsoft wouldn't need service packs.
 
@Lun: The OCC docs (particularly the latter one, which was produced in conjunction with the Fed) represent best practice. They're the standard to which all banks have to adhere. Like 186,282 miles per second: it's not just a good idea, it's the law.
 
Ken,

Model validation is a kind of scientific study of models (we call it "risk management"). Can we put it that way?

And it is usually done by a separate party, so that everything is independent of the party responsible for development and implementation. But can two independent parties think the same way and get the same result? For example, given the same goal, different banks can develop different models to achieve it; doesn't that contradict the idea of a single correct answer?

From the doc, I find
"Developers should be able to demonstrate that such data and information are suitable for the model and that they are consistent with the theory behind the approach and with the chosen methodology."
That seems a little open to interpretation. I mean, there should be more than one valid way to achieve what is defined, right?

My feeling is that the doc is still abstract; it is an overview rather than something strictly defined. I understand that room has to be left, but I want to know the details (or an example) of how validation is actually done. Say, what are the exact steps you would take to fulfill the regulation? I know nothing, which is why I am looking for examples (or a step-by-step guide).

Am I thinking along the right track? Thanks ....
 
To put it more simply: say I want a program to calculate the sum of the Fibonacci numbers
1 + 1 + 2 + 3 + 5 + 8 + 13 + 21 + .... (for 50 terms)
Without knowing the answer in advance, how can I know that the value returned by my program is correct (bug-free)?

Well, one might suggest comparing with market data, but the point is that I haven't yet made sure my program is bug-free. Can values returned by the program be trusted and compared against market data?

If we do compare with market data and find differences, where does the error come from? A problem in the program? A problem in the model? I mean, how can we track down the source of error, or make sure we have made no human error?

Is there anything we can check against before comparing with the market data?
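One standard answer to this kind of question is to check the output against an independently known property of the result. For the Fibonacci example there is a classic identity, F(1) + F(2) + ... + F(n) = F(n+2) − 1, which lets us verify the sum by a route entirely different from the summation itself. A minimal Python sketch (function names are my own, purely illustrative):

```python
def fib_sum_loop(n):
    """Sum the first n Fibonacci numbers by direct accumulation."""
    a, b = 1, 1
    total = 0
    for _ in range(n):
        total += a
        a, b = b, a + b
    return total


def fib(k):
    """k-th Fibonacci number (F(1) = F(2) = 1), computed independently."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a


n = 50
# Cross-check: the identity F(1) + ... + F(n) = F(n+2) - 1
# is derived mathematically, not from the code under test.
assert fib_sum_loop(n) == fib(n + 2) - 1
```

If the two routes disagree, at least one implementation is buggy; if they agree for many values of n, confidence in both rises. The same idea carries over to pricing models: check put-call parity, known closed-form limits, or a Monte Carlo benchmark before ever touching market data.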
 