
Quantitative Interview questions and answers

If I can prove that the factorial moments of X (or Y, for that matter) are of the form \(\lambda^k\), which is what one gets in the Poisson case, would that be considered a proof?

I'm not sure what you mean by "the factorial moments of X are of the form \(\lambda^k\)"?
 
I'm not sure what you mean by "the factorial moments of X are of the form \(\lambda^k\)"?
If X is Poisson distributed with parameter \(\lambda\) then \(E[X(X-1)(X-2)...(X-k+1)] = \lambda^k\). Could this somehow be tied to the probability generating function of X to conclude that it is indeed Poisson distributed; in other words, would the factorial moment property imply a Poisson distribution?
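For reference, the factorial-moment identity itself follows directly from the Poisson PGF:

\[
f(z)=E[z^X]=e^{\lambda(z-1)},\qquad
f^{(k)}(z)=\lambda^k e^{\lambda(z-1)},\qquad
f^{(k)}(1)=E[X(X-1)\cdots(X-k+1)]=\lambda^k,
\]

since for any PGF with radius of convergence greater than 1, \(f^{(k)}(1)\) is exactly the k-th factorial moment.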
 
Well done!
Very good to use the identity theorem that way.

Indeed, it seems we need to prove that f is holomorphic outside the unit circle, because we need an accumulation point inside an open connected set on which f is holomorphic.

You can see that b=-a because all PGFs satisfy f(1)=1.

Maybe we can prove that the function f is independent of 0<p<1.

Or we have to use the Identity Theorem with your idea for other p, which, as you say, doesn't seem easy...

Well done again.

Good point that b=-a is automatic.
 
I have a solution for p=1/2 (it might work for other p, I didn't check), but sadly it doesn't use the Identity Theorem.

So we don't use complex analysis, and instead consider f as a function on I=]-1,1[ (infinitely differentiable, with f(1)=1).

Let's show that f is strictly positive on I.

f is clearly strictly positive on [0,1[ as the PGF of a non-zero random variable (strictly positive on ]0,1[, and f(0)>0 because f(0)=f(1/2)^2).

Assume there exists z0 in ]-1,0[ such that f(z0)=0.
Then f((z0+1)/2)=0, which is not possible because (z0+1)/2>0.

Thus there exists a function g on ]-1,1[ such that f=exp(g), with

g(x)=2g((x+1)/2)

and g''(x)=1/2 g''((x+1)/2).

Consequently g''(x)=1/(2^n) g''((x+2^n-1)/2^n). (@)

f''(1) is a strictly positive number because P(X+Y>1)>0 by assumption on X and Y.

Thus g''(1) < infinity,

and by (@), letting n increase to infinity,
g''(x)=0.

And we're done. What do you think?

Didn't try for p!=1/2.
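To spell out the final step (using the earlier remark that b=-a is automatic): once \(g''\equiv 0\) on \(I\), \(g\) is affine, so

\[
g(x)=ax+b \;\Rightarrow\; f(x)=e^{ax+b},\qquad
f(1)=1 \;\Rightarrow\; b=-a,\qquad
f(x)=e^{a(x-1)},
\]

which is exactly the PGF of a Poisson(\(a\)) random variable.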
 
f''(1) < infinity is not so easy to get, actually...

It should hold if and only if X+Y has finite variance.

What do you guys think?
 
As Milo said, f'/f is constant on {1/2, 3/4, ...}.

Let's say f'(1)/f(1)=a. Thus f'(1)=a (a>0).

We also have f''(1)=1/2 (f''(1) f(1) + f'(1)^2).

So f''(1)=a^2.

Then g''(1)=f''(1)-f'(1)^2 < infinity.
 
I have a solution for p=1/2 (it might work for other p, I didn't check), but sadly it doesn't use the Identity Theorem.
...

I can't understand why g''(1) must be defined, even as a left derivative (since 1 is not in the interior of the domain). When we talk about \(f^{(n)}(1)\), this uses term-by-term differentiation of the series defining f.
 
I can't understand why g''(1) must be defined, even as a left derivative (since 1 is not in the interior of the domain). When we talk about \(f^{(n)}(1)\), this uses term-by-term differentiation of the series defining f.

Yes, I must agree.

You cannot "switch" the limit, as g'' is not necessarily continuous at 1.

Let u_n(x)=g''((x-1+2^n)/2^n).

Then |u_n(x)| < exp(sum_k k(k-1) P(X+Y=k-2)), which should be finite if and only if X+Y has finite variance.

And then you can conclude that g'' is equal to 0...

By the way, I tried to prove that f is entire, but haven't managed to find a way so far.
 
As Milo said, f'/f is constant on {1/2, 3/4, ...}.

Let's say f'(1)/f(1)=a. Thus f'(1)=a (a>0).

We also have f''(1)=1/2 (f''(1) f(1) + f'(1)^2).

So f''(1)=a^2.

...

I think you are on the right track, and that @Hi Rez is possibly already there:

If X is Poisson distributed with parameter \(\lambda\) then \(E[X(X-1)(X-2)...(X-k+1)] = \lambda^k\). Could this somehow be tied to the probability generating function of X to conclude that it is indeed Poisson distributed; in other words, would the factorial moment property imply a Poisson distribution?

If the pgf \(f\) is also entire, or at least has radius of convergence > 1, then \(f^{( k )}(1)=E[X(X-1)(X-2)...(X-k+1)] = \lambda^k\) would imply that \(f(x)=e^{\lambda(x-1)}\) because they have the same Taylor series about 1.

@Hi Rez seemed to be claiming that he proved \(f^{( k )}(1)=\lambda^k\), so if we can show f has radius of convergence > 1, then this would prove that \(X\) is Poisson distributed.
 
To finish what we started: Given \(0<p<1\), from @zlatancrew we have
\(f(z)=f(pz+q)f(qz+p)\qquad(1)\).

Assume that \(E[X+Y]=f'(1)<\infty\). @zlatancrew pointed out that we have already proved it for \(p=\frac12\).

Let \(\lambda=f'(1)\).

As @Hi Rez suggested:

Claim 1. \(f^{( n )}(1)=\lambda^n\) for all \(n\).
Proof. Take \(n\ge1\). Then using the General Leibniz Rule on (1) gives
\(f^{( n + 1 )}(z)=\sum_{k=0}^{n + 1}{n+1\choose k}p^k f^{(k)}\bigl(pz+(1-p)\bigr)(1-p)^{n+1-k}f^{(n+1 - k)}\bigl((1-p)z+p\bigr).\)
Plugging in 1 and applying induction:
\(f^{(n+1)}(1)=p^{n+1}f^{(n+1)}(1)+\sum_{k=1}^n {n+1\choose k}p^k (1-p)^{n+1-k}\lambda^{n+1}+(1-p)^{n+1}f^{(n+1)}(1)\\
=(p^{n+1}+(1-p)^{n+1})f^{(n+1)}(1)+\lambda^{n+1}(1-p^{n+1}-(1-p)^{n+1}),\)
proving that \(f^{(n+1)}(1)=\lambda^{n+1}.\) QED

Claim 2. \(f\) is entire.
Proof. Write \(f(z)=a_0+a_1 z + a_2 z^2+\ldots\). Since \(f\) has nonnegative coefficients, \(f^{(n)}\) is nondecreasing on \([0,1]\), so \(f^{(n)}(0)\le f^{(n)}(1)=\lambda^n\). Then using Stirling's formula/inequality, which implies \(n!\ge\bigl(\frac n e\bigr)^n\),
\(a_n=\frac{f^{(n)}(0)}{n!}\le \frac{\lambda^n}{n!}\le \bigl(\frac{\lambda e}n\bigr)^n\).

Hence \(\lim_{n\to\infty}a_n^{\frac1n}=0\), proving that \(f\) has an infinite radius of convergence by the root test. QED

From my last post, it follows from these claims that \(f(z)=e^{\lambda(z-1)}\) proving that \(X+Y\) is Poisson distributed.
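As a quick numeric sanity check (a sketch of my own, not part of the argument above): the Poisson PGF does satisfy the functional equation (1), because the exponents add up, \(\lambda(pz+q-1)+\lambda(qz+p-1)=\lambda(z-1)\). This can be verified at a few points:

```python
import cmath

def pgf(z, lam):
    """PGF of a Poisson(lam) variable: E[z^X] = exp(lam*(z-1))."""
    return cmath.exp(lam * (z - 1))

lam, p = 1.7, 0.3          # arbitrary test values (my choice)
q = 1 - p
for z in (0.0, 0.5, -0.8, 1.0, 0.2 + 0.4j):
    lhs = pgf(z, lam)
    rhs = pgf(p * z + q, lam) * pgf(q * z + p, lam)
    # equation (1): f(z) = f(pz+q) * f(qz+p)
    assert abs(lhs - rhs) < 1e-12, (z, lhs, rhs)
print("equation (1) holds at all test points")
```

Of course this only confirms that the Poisson family is a solution of (1); the claims above are what show it is the only one.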

For a full solution we still need to show that \(E[X]<\infty\) for \(p\ne\frac12\). Any ideas?
 
Very well done @MiloRambaldi

I think my argument was not correct for f'(1)<infinity,
because I implicitly used continuity of f at 1-, which is not guaranteed.
 
For a full solution we still need to show that \(E[X]<\infty\) for \(p\ne\frac12\). Any ideas?

f'(z)=pf'(pz+q)f(qz+p)+qf(pz+q)f'(qz+p).

Thus f'(1), which is well defined because f' is a power series with non-negative coefficients (a real number or equal to infinity), is such that:

f'(1)=pf'(q)f(p)+qf(q)f'(p).

But f'(p) and f(p) are real numbers because f converges for |z|<1 and p belongs to ]0,1[. Furthermore, if f is a power series converging for |z|<alpha, then all of its derivatives converge for |z|<alpha (theorem).

Same for q.

Thus f'(1) < infinity.

Is it correct?
 
f'(z)=pf'(pz+q)f(qz+p)+qf(pz+q)f'(qz+p).

Thus f'(1), which is well defined because f' is a power series with non-negative coefficients (a real number or equal to infinity), is such that:

f'(1)=pf'(q)f(p)+qf(q)f'(p),
...
Is it correct?

No, you plugged in z=0 on the RHS. Sorry, you need to be more careful than that ;)

We can actually use the same idea as for \(p=\frac12\): Let \(g(z)=(\log f(z))'=\frac{f'(z)}{f(z)}\), which is defined on \([0,1)\) since \(f(0)>0\), as you pointed out earlier. From
\(g(z)=pg(pz+q)+qg(qz+p)\),
we know that for all \(0\le z < 1\), either \(g(z)\ge g(pz+q)\) or \(g(z)\ge g(qz+p)\) since \(p+q=1\).
Let \(z_0=0\) and \(z_{n+1}=p z_n+q\) or \(z_{n+1}=q z_n+p\) so that
\(g(z_{n+1})\le g(z_n)\).
Since \(1-(pz+q)=p(1-z)\) and \(1-(qz+p)=q(1-z)\), we have \(1-z_{n+1}\le\max(p,q)(1-z_n)\), so \(z_n\to 1\).

As you pointed out, \(f'(1)=\lim_{z\to 1^-}f'(z)\) either exists or equals infinity. But since
\(g(0)\ge\lim_{n\to\infty}g(z_n)=\lim_{z\to1^-}\frac{f'(z)}{f(z)}=\lim_{z\to1^-}f'(z)=f'(1)\), we have \(f'(1)<\infty\).
 
4. 3 points are randomly drawn on a circle. What is the probability of them being on the same semi-circle ?
Answer: 1/2 or 50%

5. A unit length is broken off into 3 pieces. What is the probability of them forming a triangle ?
Answer: 1/8 or 12.5%


--> I think there is a slight discrepancy here... the solutions to question 4 and question 5 must be related as ans Q5 = 1 - ans Q4.

The way I think about it: we know that 3 randomly chosen line segments will form a triangle IFF the sum of any 2 sides is greater than the 3rd side.

Think of choosing points on a circle as breaking up a line segment: choosing 3 points on a circle gives you 3 arcs. If all 3 points lie on the same semicircle, then the sum of two of the arcs is smaller than the third one, so they cannot make a triangle.

This is correct, Q4=3/4, and Q5=1/4.

For Q4, consider the first point at theta=0, and the second point at theta_1, where theta_1<180. Then the three points will lie on a semicircle if the third point falls outside of the region (180, 180+theta_1). The probability of this is (360-theta_1)/360. The total probability is the integral of 1-theta_1/360 from theta_1=0 to 180, all divided by the integration interval, which is 180. This gives us 3/4, but we must remember that theta_1<180 only half of the time. If we look at the other half of cases, where theta_1>180, the probability integrand changes to (540-theta_1)/360 and we recover a total probability of 3/4 again. So the answer is (1/2)(3/4)+(1/2)(3/4)=3/4.

For Q5, consider the first break at the position x, where 0<x<1/2. Then a triangle requires the second break falls in the region (1/2, 1/2+x). The probability of this is x/L, or just x, since the rod has unit length. Then the probability is the integral of x from x=0 to 1/2 divided by the integration interval which is 1/2. This gives us 1/4, but like in Q4, only half of the possibilities satisfy 0<x<1/2. If we include cases where 1/2<x<1, the probability integrand becomes x-1/2, and we recover a probability of 1/4 again. So the answer is (1/2)(1/4)+(1/2)(1/4)=1/4.
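The two corrected answers can be checked with a quick Monte Carlo simulation (a sketch of my own; the function names are mine, not from the thread):

```python
import random

def same_semicircle(n_trials=200_000):
    """Estimate P(3 uniform points on a circle lie on one semicircle)."""
    hits = 0
    for _ in range(n_trials):
        a = sorted(random.random() for _ in range(3))
        # Arc gaps between consecutive points (circumference normalized to 1);
        # all three points lie on a common semicircle iff some gap >= 1/2.
        gaps = (a[1] - a[0], a[2] - a[1], 1 - a[2] + a[0])
        hits += max(gaps) >= 0.5
    return hits / n_trials

def triangle_prob(n_trials=200_000):
    """Estimate P(two uniform breaks of a unit rod give a triangle)."""
    hits = 0
    for _ in range(n_trials):
        x, y = sorted((random.random(), random.random()))
        a, b, c = x, y - x, 1 - y          # the three pieces
        hits += (a + b > c) and (b + c > a) and (a + c > b)
    return hits / n_trials

print(same_semicircle())  # close to 0.75
print(triangle_prob())    # close to 0.25
```

With 200,000 trials the estimates land within a fraction of a percent of 3/4 and 1/4.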
 
1. Find \(x\) if \(x^{x^{x^{\ldots}}}=2\)

2. Find all real and complex roots of \(x^6=64\)

3. The hour and minute hands of a clock meet at 12 o'clock. When will they first meet again?

4. 3 points are randomly drawn on a circle. What is the probability of them being on the same semicircle?

5. A unit length is broken into 3 pieces. What is the probability of them forming a triangle?

6. Calculate \(\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\ldots}}}} \)

7. There are 14 identical-looking balls. 13 of them have the same weight while one is heavier than the rest. What is the minimum number of weighings needed to identify the heaviest ball? How do you generalize to n balls?
I'm sure these have already been done, but I thought I'd chime in to practice my LaTeX.

1. If \(x^{x^{x^{\ldots}}}=2\),
then taking logs of both sides gives
\(x^{x^{x^{\ldots}}}\log x=\log 2\),
so \(\log x=\log(2)/2\)
and \(x=\exp(\log(2)/2)=\sqrt{2}\).
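A quick iteration (my own check, not from the thread) confirms that the power tower with \(x=\sqrt2\) indeed converges to 2:

```python
import math

x = math.exp(math.log(2) / 2)   # the solution above, equal to sqrt(2)
t = 1.0
for _ in range(100):            # partial towers: x, x**x, x**x**x, ...
    t = x ** t
print(round(t, 6))  # -> 2.0
```

(One should also check convergence: the infinite tower \(x^{x^{\cdots}}\) converges for \(1\le x\le e^{1/e}\approx1.4447\), and \(\sqrt2\) is in that range.)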

2. If \(x^{6}=64=2^{6}\), the six roots are \(x=2e^{ik\pi/3}\) for \(k=0,\ldots,5\), i.e. \(2,\ 1+i\sqrt3,\ -1+i\sqrt3,\ -2,\ -1-i\sqrt3,\ 1-i\sqrt3\); the real roots are \(\pm2\).

3. Let t denote the number of minutes past 12 o'clock. Measured in minute-marks on the dial, the minute hand is at \(m(t)=t \bmod 60\) and the hour hand at \(h(t)=t/12 \bmod 60\).
The hands coincide when \(t=t/12+60k\) for some integer \(k\ge0\), i.e. \(t=\frac{720k}{11}\). The solution \(k=0\), \(t=0\) corresponds to the hands meeting at 12 o'clock; the next is \(k=1\), \(t=720/11\approx 65.4545\) minutes, i.e. 1 hour, 5 minutes and about 27 seconds past 12.

6. Write \(x=\sqrt{2+\sqrt{2+\ldots}}\), so \(x=\sqrt{2+x}\), i.e. \(x^2-x-2=0\), which has roots \(x=2\) and \(x=-1\). Since \(x\ge0\), the answer is \(x=2\).
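Numerically iterating \(s\mapsto\sqrt{2+s}\) (my own check, assuming this is question 6, the nested radical) confirms convergence to the positive root 2 of \(s^2-s-2=0\):

```python
import math

s = 0.0
for _ in range(60):             # partial radicals: sqrt(2), sqrt(2+sqrt(2)), ...
    s = math.sqrt(2 + s)
print(round(s, 6))  # -> 2.0
```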
 
There is a tough question (IMO):
Given 2 sets of numbers, each with 9999 elements. Set 1 has average 2.5 and deviation 0.25. Set 2 has average 2.6 and deviation 0.25. What is the difference between the two sets?
 
1. Find \(x\) if \(x^{x^{x^{\ldots}}}=2\)
...
Andy, can you recommend a good book with these kind of math brainteasers? Thanks a mil..
 
There is a tough question (IMO):
Given 2 sets of numbers, each with 9999 elements. Set 1 has average 2.5 and deviation 0.25. Set 2 has average 2.6 and deviation 0.25. What is the difference between the two sets?
If you think about this from a stats perspective, ignoring the number 9999, the answer comes to mind straight away. If you have x1~N(x,y) and x2~N(x+a,y), where I've used a normal because it's easy to visualise, then the only difference is that the distribution of x2 is shifted to the right by a>0. So a valid answer would be that each value in the second data set is 0.1 greater than the corresponding value in the first data set.
 
There is a tough question (IMO):
Given 2 sets of numbers, each with 9999 elements. Set 1 has average 2.5 and deviation 0.25. Set 2 has average 2.6 and deviation 0.25. What is the difference between the two sets?
What do you mean by 'difference'?
 