
The basics of ARCH / GARCH

Hi,

I am looking to introduce some volatility filters into my MACD trading strategy and have encountered ARCH and GARCH. As a relative newbie I would like to find a layman's explanation of what these are and learn more about exactly how these can be used.

I have searched the internet and read the Wikipedia pages etc., but somehow these don't give a real understanding of how these techniques can be used in practice. If anyone knows of any examples that are accompanied by an explanation, or something similar, that would be absolutely ideal.

Thanks for your help in advance.

Taylrl
 
What do you mean by 'in practice'? What problem do you want to solve?
 
Well, I currently have a MACD model set up and running and am quite interested in applying some volatility filters. I have encountered ARCH and various forms of GARCH in the literature online, but would just like some guidance on how I could actually incorporate one of these models into a MACD strategy. A lot of the explanations I can find are very academic and theoretical; what I would like is some kind of step-by-step approach to how I can implement one of these models and what impact it can have upon my strategy.

Failing that, some assistance in understanding these models from a theoretical perspective would be great.
 
Thanks for the link to the code.

My issue isn't with being able to calculate the values however (there are lots of nice libraries and packages out there to help me do so). My issue is with a more fundamental understanding of what is going on regarding GARCH itself. Basically, I know it can be used to predict volatility but what is actually going on eludes me.
 
I would start with a good book on the foundations of GARCH.
 
This is probably a bit too simple for you, but when I was getting GARCH-related questions in interviews I started here. I used that video as a platform; I find that very often one just needs a simple "aha" moment before delving into the deeper mathematics. After this I read a great stats book on auto-regressive models. I will try to recall the name and post it here.
 
Chapter 49, "Overview of volatility modeling" in "Paul Wilmott on Quantitative Finance" is as simple as it gets.
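
To make the mechanics concrete, here is a minimal sketch of the GARCH(1,1) variance recursion in Python. This is not that chapter's notation or code; the parameter values below are made up purely for illustration, and in practice omega, alpha and beta are estimated by maximum likelihood (e.g. with a package such as arch).

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1): today's conditional variance is a weighted mix of a
    constant (omega), yesterday's squared return (the "ARCH" term, weight
    alpha) and yesterday's conditional variance (the "GARCH" term, weight beta)."""
    returns = np.asarray(returns, dtype=float)
    var = np.empty_like(returns)
    var[0] = returns.var()  # crude initialisation with the sample variance
    for t in range(1, len(returns)):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    return var

# Made-up daily returns, just to show the recursion running.
rets = np.random.default_rng(0).normal(0.0, 0.01, size=500)
sigma2 = garch11_variance(rets, omega=1e-6, alpha=0.08, beta=0.90)
print(np.sqrt(sigma2[-5:]))  # last few conditional volatility estimates
```

The intuition is visible in the loop: a large return today pushes tomorrow's forecast variance up, and the beta term makes that effect decay only slowly, which is the volatility clustering these models are designed to capture.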
 
Thanks for all your responses.

I think you are right, Ken; ARIMA seems to be a good place to start.

Looking into this, I used the methodology given on this site: http://financetrain.com/how-to-calculate-stocks-autocorrelation-in-excel/ to create the Excel sheet with the autocorrelation column, included in the files I have uploaded and named "Autocorrelation from Financetrain".

I am now quite confused, as this value is very different from the ACF given in the example "Correlogram" file from http://www.spiderfinancial.com/support/documentation/numxl/tips-and-tricks/a-correlogram-tale, which is in fact expressed as a percentage.

I am sure this is simple and I am missing the point somewhere. It would be great if someone who understands these things could help me to understand the explanations on either site.
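
My working guess, for anyone following along, is that the two sites may simply be computing slightly different things: the Excel recipe looks like a plain Pearson correlation between the series and its lagged copy, while a correlogram usually reports the "standard" sample ACF (deviations from the overall mean, scaled by the full-sample variance), and the percentage is presumably just that number times 100, so 5% would be an autocorrelation of 0.05. A small sketch on made-up data, in case someone can confirm:

```python
import numpy as np

def acf_standard(x, lag):
    """'Standard' sample autocorrelation at a given lag: covariance with the
    lagged series divided by the full-sample variance, both using the overall
    mean (what correlogram tools typically report)."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.sum(xm[lag:] * xm[:-lag]) / np.sum(xm ** 2)

def acf_pearson(x, lag):
    """Plain Pearson correlation between x_t and x_{t-lag}
    (roughly what an Excel CORREL-based recipe gives)."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[lag:], x[:-lag])[0, 1]

prices = np.cumsum(np.random.default_rng(1).normal(0, 1, size=250)) + 100  # made-up data
for lag in (1, 2, 5):
    print(lag, round(acf_standard(prices, lag), 4), round(acf_pearson(prices, lag), 4))
```

The two numbers are usually close but not identical, because the Pearson version recomputes the mean and variance on each sub-range.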

Many thanks!
 
That would be amazing!

Am I correct in thinking that determining stationarity would be a good place to start? Or maybe learning how to apply an Augmented Dickey-Fuller (ADF) test? If someone could suggest a good place to start, I will go away, try an example and see how I get on. It would then be great if I could come back with any questions or points for discussion.

Thanks so much.
 
Thanks for the feedback.

There seems to be some interest in doing something as a group in this area. I suppose these models and their applications are new to many people (myself included), so maybe the initial approach (as Richard suggested) would be that each of us takes one aspect, works it out in detail and tells the others.

What do you think?

As a forum, you can use my site www.datasimfinancial.com or QN, whichever you prefer.

Andy Nguyen
 
How could this discussion thread be set up? There will be multiple expectations and approaches; IMHO we need to agree on what the scope of the thread is, what people deliver, and what the 'deliverables' are ...

Ideas?
 
I think I will start by looking at AR.

Before I start: I know these things can easily be done using statistical packages, but I like to know what is actually going on before just using them, and that is basically the purpose of all that follows.

I am reading that in order to fit an AR model, the data series must be stationary. This means that the mean, variance, autocorrelation etc. do not vary in time. As you can imagine, if this is the case then making predictions is far simpler. The data can be said to be stationary if there is a unit root, which can be tested using the Dickey-Fuller test. In order that the data be stationary (if it isn't already, and it probably won't be with raw financial data), it has to be stationarized, which has to be done through the use of mathematical transformations (don't worry, it can be easier than it sounds). The prediction can then be made and the result un-transformed in order to obtain a result for your original data. One of the simplest ways to do this is through a method known as "differencing", whereby the differences in the values of the time series are compared at varying lags. If the differences are now stationary and auto-correlated with values at earlier time periods, then we can bring in our forecasting model (AR, ARMA, ARIMA etc.).
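
To keep myself honest, here is a rough sketch of those steps in Python with statsmodels, on made-up data (the random-walk "prices" and the single AR lag are arbitrary; this just illustrates difference, then test, then fit, and is not a finished model):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import AutoReg

# Made-up "prices": a random walk in logs, so the level should have a unit root.
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=1000)))

# Step 1: difference (here, log returns) to get something plausibly stationary.
log_returns = np.diff(np.log(prices))

# Step 2: Augmented Dickey-Fuller test; the null hypothesis is a unit root,
# so a small p-value is evidence *against* a unit root.
adf_level = adfuller(prices)
adf_diff = adfuller(log_returns)
print("ADF p-value, levels:     ", round(adf_level[1], 4))
print("ADF p-value, log returns:", round(adf_diff[1], 4))

# Step 3: fit a simple AR model to the (hopefully stationary) returns.
ar_fit = AutoReg(log_returns, lags=1).fit()
print(ar_fit.params)  # intercept and AR(1) coefficient
```

On a random walk like this, the levels typically fail to reject the ADF null of a unit root while the differenced series rejects it strongly, which matches the logic above.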

Now, I know that choosing the order of the AR model is very important and, if not chosen correctly, it can lead to an estimator that is basically useless. I am looking at ways of testing which order is most appropriate, namely using the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF). I am just thinking that, in theory, I could compute these functions at infinitely many different lags... is that the point? This is going a bit off track, but does anyone have some code, or could help me write some code, that could do that? Ideally it would be good to know where any auto-correlation sits for all possible lags, but that sounds like another project in its own right.
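
For what it's worth, you can't literally look at infinitely many lags: with n observations the sample ACF/PACF is only meaningful out to a modest fraction of n, so in practice people inspect a few dozen lags and/or let an information criterion pick the order. A rough sketch with statsmodels (the simulated AR(2) data and the lag counts are made up for illustration):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.tsa.ar_model import ar_select_order

# Made-up stationary series: an AR(2) process, so the PACF should cut off after lag 2.
rng = np.random.default_rng(7)
n = 1000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

nlags = 20
sample_acf = acf(x, nlags=nlags)    # index 0 is lag 0 (always 1.0)
sample_pacf = pacf(x, nlags=nlags)
print("ACF  lags 1-5:", np.round(sample_acf[1:6], 3))
print("PACF lags 1-5:", np.round(sample_pacf[1:6], 3))

# Rough 95% band for "no autocorrelation at this lag": +/- 1.96 / sqrt(n).
print("approx. significance band:", round(1.96 / np.sqrt(n), 3))

# Let an information criterion (AIC here) choose the AR order directly.
selection = ar_select_order(x, maxlag=10, ic="aic")
print("AIC-selected AR lags:", selection.ar_lags)
```

For an AR(2) process like the simulated one, the PACF should fall inside the band after lag 2 while the ACF tails off more gradually; that contrast ("cuts off" versus "tails off") is essentially how the ACF and PACF are read when choosing an AR order.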

I was going to go on, but that's a start for now. I will continue with determining the order using the ACF and PACF in another post soon.

If anyone knows more than me about this or realizes that I'm off track with my understanding please let me know. It would be great to have my (currently minimal) understanding confirmed or hear any tricks or easy explanations.
 
Daniel - I had that post sitting ready to go while you posted yours, and have dived in. I've just been trying to tackle it head on and get my teeth into something.

Basically I would like to arrive at a proper understanding of GARCH models, having more or less started from nowhere - could that be some kind of scope?

I think we will have to decide upon the deliverables as we go, as I currently have no idea what to expect. At the moment I think I have found that the data needs to be 1. stationary and 2. auto-correlated. Maybe we could share some data and each work on one of those very basic points to begin with?

Let me know.
 
I think I will start by looking at AR.
I am reading that in order to fit an AR model, the data series must be stationary. This means that the mean, variance, autocorrelation etc. do not vary in time. As you can imagine, if this is the case then making predictions is far simpler. The data can be said to be stationary if there is a unit root, which can be tested using the Dickey-Fuller test. In order that the data be stationary (if it isn't already, and it probably won't be with raw financial data), it has to be stationarized, which has to be done through the use of mathematical transformations (don't worry, it can be easier than it sounds).

The above is not quite right;
- there are many kinds of non-stationarity (non-stationarity merely means that the probability distribution of the series is not translation-invariant, nothing more)
- one very particular example (quite mild, too) of non-stationary time-series (one of infinitely many) would be I(1) time-series, i.e., integrated of order one (having a unit root); a.k.a. unit-root, a.k.a. difference-stationary -- because the first difference is stationary (this is why this case is so mild: in general you can't be sure you can do anything to make a non-stationary process stationary),
- the tests mentioned here, so-called "unit root tests", are for this very particular example of non-stationarity (and only for this very particular example of non-stationarity), nothing more,
- unit root tests include DF (1976), ADF (1979), PP (1988) (and dozens more),
- the original Dickey–Fuller (DF) test is not asymptotically valid if there's serial correlation present in the error terms (it doesn't allow for serial correlation in the first differences),
- augmented Dickey–Fuller test (ADF) fixes the above,
- as for Phillips–Perron (PP): "there is now a good deal of evidence that PP tests perform less well in finite samples than ADF tests" (Davidson and MacKinnon, "Econometric Theory and Methods"), so you might as well ignore it,
- you might also consider KPSS -- while the above tests are to test for unit root as the null hypothesis (H0), KPSS has it as the alternative hypothesis (H1).

It's best not to use general-case-name and rare-and-special-case-name (like non-stationary and I(1)/with-unit-root/difference-stationary) terms interchangeably :)

Also avoid the related derived terms -- I probably wouldn't say "stationarized", since in general there's nothing you can do to non-stationary time-series to make them stationary (well, other than applying somewhat lossy transformations, like multiplying everything by zero ;]); again, if you're dealing with the special case of I(1)/difference-stationary then say so, and then it becomes obvious what to do with difference-stationary time-series, right?
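
In case a concrete run helps, here is a minimal sketch of how the opposite null hypotheses play out with statsmodels (the random-walk data are made up, and the KPSS p-values are interpolated within table bounds, so read the exact numbers loosely):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=1000))   # I(1): has a unit root
first_diff = np.diff(random_walk)                # its first difference is stationary

for name, series in [("random walk", random_walk), ("first difference", first_diff)]:
    adf_p = adfuller(series)[1]                             # H0: unit root (non-stationary)
    kpss_p = kpss(series, regression="c", nlags="auto")[1]  # H0: (level-)stationary
    print(f"{name:17s}  ADF p={adf_p:.3f}  KPSS p={kpss_p:.3f}")

# Expected pattern: the random walk fails to reject the ADF null (unit root)
# and rejects the KPSS null (stationarity); the first difference does the reverse.
```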
 