# Asian Option Pricing

Goal: estimate $$C = \mathbb{E}\left[e^{-rT} \cdot \max\left(\frac{1}{k} \sum_{i=1}^k S(t_i)-K,\ 0 \right)\right]$$

- $t_0 = 0 < t_1 < \dots < t_k = T$: the monitoring dates
- $r > 0$: a known constant (the risk-free rate)
- $K > 0$: a known constant (the strike)
- $S$: a CIR stochastic process, following the stochastic differential equation $dS_t=\alpha(b-S_t)dt+\sigma\sqrt{S_t}dW_t$
- $W_t$: a Brownian motion
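As a concrete illustration, a standard Monte-Carlo pricer for this payoff can be sketched as follows. This is a minimal sketch, not this project's implementation: the full-truncation Euler scheme for the CIR dynamics and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def cir_paths(n_paths, k, T, s0, alpha, b, sigma, rng):
    """Simulate CIR paths on a grid of k equal steps with an Euler scheme.

    Full truncation (clipping S at 0 inside the drift and diffusion) keeps
    the square-root argument non-negative. Returns an array of shape
    (n_paths, k) holding S(t_1), ..., S(t_k).
    """
    h = T / k
    s = np.full(n_paths, s0, dtype=float)
    out = np.empty((n_paths, k))
    for i in range(k):
        dw = rng.normal(0.0, np.sqrt(h), size=n_paths)
        s = s + alpha * (b - np.maximum(s, 0.0)) * h \
              + sigma * np.sqrt(np.maximum(s, 0.0)) * dw
        out[:, i] = s
    return out

def asian_call_mc(n_paths, k, T, r, K, s0, alpha, b, sigma, seed=0):
    """Standard Monte-Carlo estimate of the Asian call price.

    Returns the estimate and its standard error.
    """
    rng = np.random.default_rng(seed)
    paths = cir_paths(n_paths, k, T, s0, alpha, b, sigma, rng)
    payoff = np.exp(-r * T) * np.maximum(paths.mean(axis=1) - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

# Illustrative parameters (assumed, not taken from this project):
price, stderr = asian_call_mc(n_paths=100_000, k=64, T=1.0, r=0.05,
                              K=1.0, s0=1.0, alpha=2.0, b=1.0, sigma=0.3)
```

The estimator's standard error shrinks as $O(1/\sqrt{N})$, which is precisely what the variance-reduction methods below improve on.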

## Monte-Carlo methods

- Standard Monte-Carlo
- Multi-Level Monte-Carlo
- Randomized Quasi-Monte-Carlo
- Multi-Level Randomized Quasi-Monte-Carlo
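To illustrate the Randomized Quasi-Monte-Carlo variant: instead of i.i.d. Gaussian increments, each path is driven by one point of a scrambled Sobol' sequence in dimension $k$, and averaging over independent scramblings restores unbiasedness and gives an error estimate. A minimal sketch, assuming SciPy is available; the parameters are illustrative, not this project's:

```python
import numpy as np
from scipy.stats import qmc, norm

def asian_call_rqmc(m, k, T, r, K, s0, alpha, b, sigma, n_rand=16, seed=0):
    """Randomized QMC estimate of the Asian call under Euler-discretised CIR.

    Each of the 2**m scrambled Sobol' points in [0,1)^k drives the k Brownian
    increments of one path; n_rand independent scramblings give an unbiased
    combined estimate and a standard error.
    """
    h = T / k
    estimates = []
    for rep in range(n_rand):
        sob = qmc.Sobol(d=k, scramble=True, seed=seed + rep)
        u = sob.random_base2(m=m)                 # (2**m, k) uniforms
        u = np.clip(u, 1e-12, 1 - 1e-12)          # guard norm.ppf at 0 or 1
        z = norm.ppf(u)                           # map to standard normals
        s = np.full(2 ** m, s0, dtype=float)
        avg = np.zeros(2 ** m)
        for i in range(k):
            s = s + alpha * (b - np.maximum(s, 0.0)) * h \
                  + sigma * np.sqrt(np.maximum(s, 0.0)) * np.sqrt(h) * z[:, i]
            avg += s / k
        estimates.append(np.exp(-r * T) * np.maximum(avg - K, 0.0).mean())
    return np.mean(estimates), np.std(estimates, ddof=1) / np.sqrt(n_rand)

# Illustrative parameters (assumed, not taken from this project):
price, err = asian_call_rqmc(m=10, k=16, T=1.0, r=0.05,
                             K=1.0, s0=1.0, alpha=2.0, b=1.0, sigma=0.3)
```

The inverse-CDF mapping (`norm.ppf`) is used rather than Box–Muller so that each Sobol' coordinate controls exactly one Brownian increment.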

## Multilevel Monte-Carlo

From the paper *Multilevel Monte Carlo Path Simulation* by Michael B. Giles, Professor of Scientific Computing, University of Oxford.
Let $P$ denote the payoff $f(S(T))$ and $\hat P_l$ denote the approximation of $P$ obtained with a numerical discretisation of timestep $h_l$.
Then: $\mathbb{E}[\hat P_L] = \mathbb{E}[\hat P_0] + \sum\limits_{l=1}^L\mathbb{E}[\hat P_l - \hat P_{l-1}]$
Let $\hat Y_0$ be an estimator of $\mathbb{E}[\hat P_0]$ and $\hat Y_l$ an estimator of $\mathbb{E}[\hat P_l - \hat P_{l-1}]$ for $l > 0$.
The idea of Multilevel Monte-Carlo is to estimate the option price as $$\hat Y=\sum\limits_{l=0}^{L}\hat Y_l= \sum\limits_{l=0}^{L}\frac{1}{N_l}\sum\limits_{i=1}^{N_l}\left(\hat P_l^{(i)} - \hat P_{l-1}^{(i)}\right)$$ with the convention $\hat P_{-1} \equiv 0$.
Like the Standard Monte-Carlo method, this estimator is unbiased for $\mathbb{E}[\hat P_L]$, while achieving a much smaller variance: most samples are drawn at the cheap coarse levels, and the correction terms $\hat P_l - \hat P_{l-1}$ have small variance because both payoffs are computed from the same Brownian path.
As we show in this project, this variance can be decreased even further by combining the Multilevel Monte-Carlo method with Randomized Quasi-Monte-Carlo.
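The telescoping estimator above can be sketched as follows. This is a simplified illustration, not this project's implementation: the refinement factor $M$, the per-level sample sizes $N_l$, and all model parameters are assumptions chosen for illustration, and the coupling reuses the fine path's Brownian increments (summed in groups of $M$) on the coarse path.

```python
import numpy as np

def mlmc_level(n, l, M, T, r, K, s0, alpha, b, sigma, rng):
    """Draw n coupled samples of P_l - P_{l-1} (with P_{-1} = 0).

    The fine path uses M**l Euler steps; the coarse path reuses the same
    Brownian increments summed in groups of M, which couples the two payoffs.
    """
    def euler_avg(dw, h, k):
        # Full-truncation Euler for CIR; returns the running average of S.
        s = np.full(n, s0, dtype=float)
        avg = np.zeros(n)
        for i in range(k):
            s = s + alpha * (b - np.maximum(s, 0.0)) * h \
                  + sigma * np.sqrt(np.maximum(s, 0.0)) * dw[:, i]
            avg += s / k
        return avg

    kf = M ** l
    dw_fine = rng.normal(0.0, np.sqrt(T / kf), size=(n, kf))
    pf = np.exp(-r * T) * np.maximum(euler_avg(dw_fine, T / kf, kf) - K, 0.0)
    if l == 0:
        return pf
    kc = kf // M
    dw_coarse = dw_fine.reshape(n, kc, M).sum(axis=2)   # coupled increments
    pc = np.exp(-r * T) * np.maximum(euler_avg(dw_coarse, T / kc, kc) - K, 0.0)
    return pf - pc

def mlmc_price(N, L, M, T, r, K, s0, alpha, b, sigma, seed=0):
    """Sum the level estimators Y_l = mean of the coupled differences."""
    rng = np.random.default_rng(seed)
    return sum(mlmc_level(N[l], l, M, T, r, K, s0, alpha, b, sigma, rng).mean()
               for l in range(L + 1))

# Illustrative: geometrically decreasing sample sizes across 4 levels (M = 4)
price = mlmc_price(N=[40_000, 10_000, 2_500, 625], L=3, M=4,
                   T=1.0, r=0.05, K=1.0, s0=1.0, alpha=2.0, b=1.0, sigma=0.3)
```

In Giles' method the $N_l$ are chosen adaptively from the estimated level variances; fixed sizes are used here only to keep the sketch short.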

## Comparison

| ϵ | MC Variance | MLMC Variance | QMLMC Variance | MC CPU (s) | MLMC CPU (s) | QMLMC CPU (s) |
|---|---|---|---|---|---|---|
| $1 \times 10^{-4}$ | $1.1 \times 10^{-4}$ | $5.1 \times 10^{-8}$ | $1.4 \times 10^{-8}$ | 8.1 | 19110.2 | 38678.6 |
| $1 \times 10^{-3}$ | $1.2 \times 10^{-4}$ | $1.2 \times 10^{-6}$ | $2.6 \times 10^{-7}$ | 4.9 | 16.7 | 59.7 |
| $4 \times 10^{-3}$ | $1.7 \times 10^{-4}$ | $1.7 \times 10^{-5}$ | $1.0 \times 10^{-6}$ | 3.9 | 6.9 | 43.1 |
- MC = Standard Monte-Carlo
- MLMC = Multilevel Monte-Carlo
- QMLMC = Multi-Level Randomized Quasi-Monte-Carlo

The table shows that Multilevel Monte-Carlo greatly reduces the variance, but at the cost of much higher complexity and computational time, especially at the tightest tolerance.
A compromise must therefore be struck between CPU time and the desired variance.