
The Future of Quant Jobs

1) Is it likely that the job of a quant could be completely (or partially) performed by an artificial intelligence system in the near future (the next 5-30 years)?
2) Would the event described in 1) lead to banks and other financial institutions replacing 'human quants' with 'machine quants'?
3) What skills could an aspiring quant learn now in order to guard against the event described in 1)?
4) Given the possible events above, is it risky to position oneself (in terms of extra reading, university courses, etc.) to become a quantitative analyst?
5) What additional tasks might a quant of the future (e.g. in 20 years' time) have to be able to perform in order to be superior to a machine quant?
6) What additional abilities might a quant of the future have to possess in order to be superior to a machine quant?

Many Thanks,
-Porimasu
 
Very interesting questions. If you look at the auto industry, you will see that autonomous, self-driving cars are already happening. It's not a question of if but when they become the future of driving, and AI is the technology behind them.
You can be certain that AI will replace many jobs in many industries, finance included. You can't escape it; just try to stay one step ahead of it.
I don't have any specific suggestion for your situation, but I believe we will need fewer workers in the finance industry, and the fittest will survive.

This will be an ongoing discussion as we face the advent of AI going forward. Glad you asked. I'm sure other members currently working have the same worries as well.
 
As one example, the Wells Fargo people have assembled a new team, including a stats professor from Michigan. The team is working on reinforcement learning, and the objective is to make many, if not most, of the Wells Fargo quants redundant. This dynamic is inexorable. As for whether a human can be superior to a machine, look at games like chess and Go. In chess, no human player can compete against Rybka, Stockfish, or Komodo -- let alone AlphaZero. The same now holds for Go, where no human can compete against AlphaGo.

The way of the future is probably reinforcement learning -- Black-Scholes and the like are becoming very passé.
 
In the future, nobody will have to work; everyone will get 30K in pocket money.
 
The way of the future is probably reinforcement learning -- Black-Scholes and the like are becoming very passé.

New fads always tend to eclipse current technologies. But how long will it take for this one to become mainstream, if it does not peter out in the meantime?

In computing, ~80% of software is COBOL.

The question is: what is the size of the niche market that AI will take over?
 
Do you mean finance and insurance? 80% of computing power in COBOL is a little bit of a stretch.
I'm thinking of the database stuff (account processing).

Even if a new technology comes along, that does not mean it will be accepted. At the end of the 80s, many people thought AI was ready for action.

edit:

A portion of this Reuters article about the Pentagon's inability to manage paying soldiers properly mentions that their payroll program has 'seven million lines of Cobol code that hasn't been updated.' It goes on to mention that the documentation has been lost, and no one really knows how to update it well. In trying to replace the program, the Pentagon spent a billion dollars and wasn't successful.
 
"AI" seems to have become a 'catch-all' term. Many of its algorithms have been around for years in other fields.
 
First impressions
CS researchers seem to like making up new terms for well-known existing concepts (maths folk don't!), for example reinforcement learning (RL) (of which I have 0.0001 knowledge), which seems to be based on a lot of earlier work such as agent technology (AT). We did C# work on AT 15 years ago; we made mobile agents that could move around a network and do things :) (BTW, agent technology never took off -- very difficult stuff.)

So, is RL a stripped-down version of AT? n-armed bandits with deterministic action-rewards are not very intelligent..(?)

Caveat: I know very little :)
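
For concreteness, the n-armed bandit is about the simplest RL setting there is. Below is a minimal epsilon-greedy sketch (Python/NumPy; the arm count, reward distributions, and epsilon are all invented for illustration). Note how little 'intelligence' is in it: a running-mean estimate per arm plus occasional random exploration.

```python
# Minimal epsilon-greedy n-armed bandit (illustrative values throughout).
import numpy as np

rng = np.random.default_rng(42)
true_means = rng.normal(0.0, 1.0, size=10)  # hidden mean reward of each arm
estimates = np.zeros(10)                    # running reward estimate per arm
counts = np.zeros(10)                       # pulls per arm
epsilon = 0.1                               # exploration probability

for t in range(10_000):
    # Explore with probability epsilon, otherwise exploit the best estimate.
    arm = int(rng.integers(10)) if rng.random() < epsilon else int(np.argmax(estimates))
    reward = rng.normal(true_means[arm], 1.0)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print("best true arm:", int(np.argmax(true_means)),
      "| agent's pick:", int(np.argmax(estimates)))
```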
 
First impressions
CS researchers seem to like making up new terms for well-known existing concepts (maths folk don't!), for example reinforcement learning (RL) (of which I have 0.0001 knowledge), which seems to be based on a lot of earlier work such as agent technology (AT). We did C# work on AT 15 years ago; we made mobile agents that could move around a network and do things :) (BTW, agent technology never took off -- very difficult stuff.)

So, is RL a stripped-down version of AT? n-armed bandits with deterministic action-rewards are not very intelligent..(?)

Caveat: I know very little :)
What is agent technology? Are you talking about this?

Agent-based model - Wikipedia
 
As one example, the Wells Fargo people have assembled a new team, including a stats professor from Michigan. The team is working on reinforcement learning, and the objective is to make many, if not most, of the Wells Fargo quants redundant. This dynamic is inexorable. As for whether a human can be superior to a machine, look at games like chess and Go. In chess, no human player can compete against Rybka, Stockfish, or Komodo -- let alone AlphaZero. The same now holds for Go, where no human can compete against AlphaGo.

The way of the future is probably reinforcement learning -- Black-Scholes and the like are becoming very passé.

This is the opposite of what will happen in the future - it is spoken like somebody who has never worked in financial markets or does not understand them.

Reinforcement learning will not replace option pricing models. If you say Black-Scholes because you believe that is what traders and quants actually use, then you are an idiot and have not understood how option pricing has worked since... 1987. Machine learning will have its uses, but it will not replace the existing structure of how quants work. Quants work in Q-world - the risk-neutral measure world - where the drift is given to you (risk-free rate, collateral rate, whatever) and you calibrate vol to ensure E[f(S_T)] matches the market. Machine learning is used in P-world - the physical measure world - where drift and vol are unknown and you must estimate them using statistical methods. Using machine learning methods in Q-world is stupid; it is not needed.
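
To make the Q-world point concrete, here is a minimal sketch (Python with SciPy; the spot, strike, rate, and market quote are all invented) of what 'calibrate vol so E[f(S_T)] matches the market' means in the simplest case: backing an implied vol out of a single call quote. There is no statistical estimation anywhere - the drift is the given rate r, and calibration is root-finding, not learning.

```python
# Implied-vol calibration in Q-world: solve for sigma so the model price
# matches one observed market quote. All inputs are invented.
from math import exp, log, sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price; the Q-world drift is the given rate r."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

S, K, T, r = 100.0, 105.0, 0.5, 0.02  # spot, strike, expiry, rate (assumed)
market_price = 4.10                   # observed quote (invented)

# Calibration = one-dimensional root-finding, not statistical estimation.
implied_vol = brentq(lambda s: bs_call(S, K, T, r, s) - market_price, 1e-6, 5.0)
print(f"implied vol: {implied_vol:.4f}")
```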

Here is a more detailed explanation of why machine learning will not change what quants in investment banks do - let us call them pricing quants. (I ignore the simpletons - market risk people, credit people, legal risk, etc.) Post the 1987 crash, pricing quants wanted to ensure the theoretical probability distribution for an asset matched the empirical distribution, which has fat tails, non-linear variance, and so on. This led to the rise of 'exotic' pricing models such as variance gamma, jump-diffusion models, Black-Scholes with time subordination - all kinds of funky stuff. Even in this area you would still use hardly any machine learning; you do need to estimate the drift of your asset, but that is a P-world problem - you must ensure the theoretical distribution matches the empirical one. PCA and other methods could be used to determine which theoretical distribution is 'best' - hence some machine learning.

With the rise of exotic derivatives, whose hedges are vanilla derivatives, pricing quants (and traders, etc.) no longer care about the statistical features of the underlying market data or asset; they care about the features of the hedges - the vanilla derivatives - and those will always be priced with Black/Bachelier, which are used just to convert vols to prices and vice versa. Where does machine learning come into this? It does not. Rebonato has a good paper on this - a review of interest rate derivatives modelling, or something like that.
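
The 'convert vols to prices and vice versa' point as a sketch (Python with SciPy; the forward, strike, and vol are invented, and discounting is dropped to keep it short): a Bachelier (normal-vol) pricer and its numerical inverse, round-tripping a vol through a price and back.

```python
# Bachelier (normal) model as a pure quoting convention: vol -> price -> vol.
from math import sqrt

from scipy.optimize import brentq
from scipy.stats import norm

def bachelier_call(F, K, T, sigma_n):
    """Bachelier call on a forward, undiscounted."""
    d = (F - K) / (sigma_n * sqrt(T))
    return (F - K) * norm.cdf(d) + sigma_n * sqrt(T) * norm.pdf(d)

F, K, T = 0.025, 0.030, 1.0              # forward rate, strike, expiry (assumed)
price = bachelier_call(F, K, T, 0.0080)  # quote an 80bp normal vol as a price
vol = brentq(lambda s: bachelier_call(F, K, T, s) - price, 1e-8, 1.0)
print(f"price: {price:.6f} | recovered vol: {vol:.6f}")
```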

Machine learning algorithms being able to beat humans at chess proves that... machine learning algorithms can beat humans at chess. Nothing more. Using a few examples of machine learning successes to convince people that quants will be made redundant is... farcical.
 
"AI" seems to have become a 'catch-all' term. Many of its algorithms have been around for years in other fields.
Just remember that whenever somebody talks about machine learning or AI or whatever the new buzz terms are, they are talking about statistics. That is all it is. Applying statistics and mathematics to solve a problem used to be called applied statistics or applied mathematics, but for some reason it is now called machine learning. It is silly. It is like in physics when somebody mentions 'quantum mechanics' - rarely are they talking about a specific feature of quantum mechanics as opposed to classical mechanics or just physics in general.
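
A minimal illustration of that point (Python with NumPy and scikit-learn; the data is synthetic): the 'machine learning' linear model and classical least squares produce identical coefficients.

```python
# "Machine learning" linear regression vs. classical least squares:
# same model, older name. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

ml_coefs = LinearRegression().fit(X, y).coef_            # the "ML" route
Xd = np.column_stack([np.ones(len(X)), X])               # intercept column
ols_coefs = np.linalg.lstsq(Xd, y, rcond=None)[0][1:]    # classical OLS

print(np.allclose(ml_coefs, ols_coefs))  # True
```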
 
"AI" seems to have become a 'catch-all' term. Many of its algorithms have been around for years in other fields.

AI has become another stale buzzword. You have to get into the details. The more an area of activity can be structured according to a set of rules, the more it's amenable to being taken over by computer-driven algorithms -- hardly a deep insight. This particularly holds for games like chess and Go but also (arguably) for much of humdrum quant work -- again, not an earth-shattering insight. The less it can be structured, the more scope there is for human-machine symbiosis, where the machine does the grunt work but the human points it in the right direction, mediates its work, and provides judgment and evaluation. Again, most quants know this.

It's when you get into the detail of the algorithms that drive chess engines like Stockfish and Komodo that things become interesting. Some of the basic stuff -- minimax and alpha-beta -- has been around for decades but newer ideas keep getting added and make all the difference between the top engines and those further down the list. Get into the detail; the rest is just idle talk.
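
For the curious, that decades-old core is small enough to sketch (Python; the nested lists below stand in for real positions and a real evaluation function): minimax with alpha-beta pruning over a toy game tree.

```python
# Minimax with alpha-beta pruning over a toy game tree. Leaves are static
# evaluations; inner lists are positions. A real engine adds move ordering,
# transposition tables, quiescence search, and a far better evaluation.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):          # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: prune the rest
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                   # alpha cutoff: prune the rest
            break
    return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]        # toy tree, root to move = max
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6
```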
 
Just remember that whenever somebody talks about machine learning or AI or whatever the new buzz terms are, they are talking about statistics. ... It is like in physics when somebody mentions 'quantum mechanics' - rarely are they talking about a specific feature of quantum mechanics as opposed to classical mechanics or just physics in general.

I think, to be more precise, it's like when someone with very little or no physics training does this. You wouldn't catch people in my theoretical physics or math undergrad pulling that shit with quantum mechanics, and, tbh, even those from my undergrad that I've kept in touch with who didn't go on to work in stats or AI have spotted these issues with terminology.

Another issue is that if a mathematician or physicist challenges this talk or buzz, it's easier for uninformed people to write it off as that mathematician or physicist 'underestimating him/herself' and to say 'who are you to question this big talker I look up to with a crick in my neck?'. Family don't seem to like it when I ask if they have an actual opinion or idea that isn't borrowed from someone else. That's precisely the problem at the Christmas dinner table or wherever: people forget that whoever they are talking to actually works with this stuff, and their lack of respect, fragile egos, and inability to look before they leap all kick in.
 
Some of the basic stuff -- minimax and alpha-beta -- has been around for decades but newer ideas keep getting added and make all the difference between the top engines and those further down the list. Get into the detail; the rest is just idle talk.
I have touched on AI and I see this - the actual maths isn't new; tbh, some of it stretches back to Fourier. Even some of the work on designing efficient algorithms isn't new - it goes back to the 80s, as you allude to. The real work seems to be in experimenting with the design of structures such as neural networks, or in finding other structures that work - at least that is my experience with it.
 
Just remember that whenever somebody talks about machine learning or AI or whatever the new buzz terms are, they are talking about statistics. That is all it is. Applying statistics and mathematics to solve a problem used to be called applied statistics or applied mathematics, but for some reason it is now called machine learning. It is silly. It is like in physics when somebody mentions 'quantum mechanics' - rarely are they talking about a specific feature of quantum mechanics as opposed to classical mechanics or just physics in general.
That was my feeling all along. BTW, nothing wrong with statistics, but morphing it into AI/ML/DL borders on a kind of intellectual dishonesty.

An interesting one is that fancy learning-rate parameter in gradient descent... it is the same as the step size in steepest descent, which you compute with a 1D solver such as Brent's method.
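
A quick sketch of that remark (Python with SciPy; the objective and starting point are invented): fixed-learning-rate gradient descent next to classical steepest descent, where the 'learning rate' at every step is just the 1D minimum found by Brent's method.

```python
# Fixed learning rate vs. steepest descent with a Brent line search,
# on a deliberately ill-conditioned quadratic. Illustrative values only.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    return x[0]**2 + 10.0 * x[1]**2

def grad(x):
    return np.array([2.0 * x[0], 20.0 * x[1]])

x_fixed = np.array([5.0, 5.0])
x_brent = np.array([5.0, 5.0])
for _ in range(50):
    x_fixed = x_fixed - 0.05 * grad(x_fixed)        # fixed "learning rate"
    d = -grad(x_brent)                              # steepest-descent direction
    # Brent's method picks the step size along d at every iteration.
    step = minimize_scalar(lambda t: f(x_brent + t * d), bracket=(0.0, 1.0)).x
    x_brent = x_brent + step * d

print(f"fixed eta: f = {f(x_fixed):.2e} | Brent line search: f = {f(x_brent):.2e}")
```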

And trying to solve PDEs by AI/DL is a solution looking for a problem, IMO. Maybe I'm missing something.

(I am a veteran - cough, I wrote my first PL/I program on an IBM/360 in 1972 - so many buzzwords have come and gone. Funnily, AI is more like a sine wave: it keeps reappearing every so often.)
 
I developed some basic expert and agent systems in the past. These fields did not take off.
Reinforcement learning is - roughly speaking - trying to apply these same fields. So, where does the enthusiasm come from?
 
I have touched on AI and I see this - the actual maths isn't new; tbh, some of it stretches back to Fourier. Even some of the work on designing efficient algorithms isn't new - it goes back to the 80s, as you allude to. The real work seems to be in experimenting with the design of structures such as neural networks, or in finding other structures that work - at least that is my experience with it.
I think that without Gauss, Lagrange, and Laplace there would be no AI.
 