Christos Delivorias1
Aberdeen Asset Management, 40 Princes Street, Edinburgh, EH2 2BY, UK
Email: Christos.Delivorias@aberdeen-asset.com

Abstract
We present a comparative study of the performance of implementations of the Heston stochastic volatility model on different acceleration platforms. Our implementation of this model uses quasi-random variates from the Numerical Algorithms Group (NAG) random number library to reduce the simulation variance, as well as Leif Andersen's Quadratic Exponential discretisation scheme. The model was implemented in Matlab, which was then ported to Graphics Processing Unit (GPU) and Techila platforms. The Field Programmable Gate Array (FPGA) code was based on C++. The model was tested against a 2.3 GHz Intel Core i5 Central Processing Unit (CPU), a Techila grid server hosted on Microsoft's Azure cloud, a GPU node hosted by Boston Ltd, and an FPGA node hosted by Maxeler Technologies Ltd. Timing data was collected and compared against the CPU baseline, to provide quantifiable acceleration benefits for all the platforms.
3.1 Introduction
The computational complexity requirements of calculating financial derivatives' prices and risk values have increased dramatically following the most recent financial crisis in 2008. The requirements of counterparty risk assessment have also introduced taxing calculations that impose an additional levy in computational time. The need for rapid calculation is two-fold: first, the ability to price an option on a given underlying is essential in a fast-paced market environment that relies on dynamic hedging for portfolio immunisation on large books. The second use is in a relatively new sector of the financial market that deals with High-Frequency Trading (HFT). This sector relies on extremely fast computations in order to make algorithmic decisions based on current market information. This chapter will focus on the pricing of options rather than the aspects of HFT. The interested reader may wish to look into some of the negative aspects of HFT as explained by [3].
This work was a joint project between the University of Edinburgh and Scottish Widows Investment Partnership (SWIP)1 and aimed to explore the possibilities of accelerating financial engineering models, with real-life applications in the markets. The goal was to evaluate the same model on different platforms, and assess the benefits of acceleration that each platform provided.
This chapter is organised as follows. Section 3.2 introduces the model used to evaluate the computational performance of a well-known model of the evolution of equity prices. This model has a known analytical solution which can serve as a cross-check of correctness. Such a model can also be solved numerically via Markov Chain Monte Carlo (MCMC) simulation, which is explained in Sect. 3.3. The variance reduction using quasi-random numbers, as well as the discretisation scheme, are also expanded upon in this section. Section 3.4 goes into more detail on the acceleration platforms of FPGA, GPU, and the Techila Cloud. Section 3.5 details the efficiency and the accuracy of the implementation of the Heston model in Matlab, and finally Sect. 3.6 presents the experimental results and the conclusion.
3.2 Heston’s Stochastic Volatility Model
The Heston model extends the Black-Scholes (BS) model by taking into account a stochastic volatility that is mean-reverting and is also correlated with the asset price. It assumes that both the asset price and its volatility are determined by a joint stochastic process.
Definition 3.2.1.
The Heston model that defines the price of the underlying $S_t$ and its stochastic volatility $v_t$ at time $t$ is given by

$$dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^S, \qquad dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^v, \tag{3.1}$$

where the two standard Brownian Motions (BMs) $W_t^S$ and $W_t^v$ are correlated by a coefficient $\rho$, $\kappa$ is the rate of reversion of the volatility to $\theta$ – the long-term variance –, $\xi$ is the volatility of volatility, and $\mu$ is the expected rate of return of the asset.
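Before turning to the QE scheme of Sect. 3.3.2, a naive discretisation helps fix ideas. The following Python sketch (the chapter's own implementation is in MATLAB; all parameter values here are hypothetical) simulates one Heston path with a full-truncation Euler step:

```python
import math
import random

def heston_euler_path(S0, v0, mu, kappa, theta, xi, rho, T, n_steps, rng=random):
    """Simulate one Heston path with a full-truncation Euler scheme.

    Returns the terminal asset price. The variance is floored at zero
    inside the square roots so the scheme stays well defined even when
    the discretised variance dips negative.
    """
    dt = T / n_steps
    S, v = S0, v0
    for _ in range(n_steps):
        z1 = rng.gauss(0.0, 1.0)
        # correlate the second Brownian increment with coefficient rho
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        v_pos = max(v, 0.0)  # full truncation
        S *= math.exp((mu - 0.5 * v_pos) * dt + math.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * math.sqrt(v_pos * dt) * z2
    return S
```

Full truncation (flooring $v$ at zero inside the square roots) is one common way to handle the negative variances a plain Euler step can produce; the QE scheme discussed later addresses the same problem with far less bias.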
The Heston model extends the BS model by providing a dynamic stochastic volatility, as described in Eq. (3.1). This model has a semi-analytic formula that can be exploited to derive an integral solution. Additionally, if the Feller condition [5] is upheld, this process will produce strictly positive volatility with probability 1; this is described in Lemma 3.2.1.
Lemma 3.2.1.
If the parameters of the Heston model obey the condition

$$2\kappa\theta \ge \xi^2,$$

then the stochastic process $v_t$ will produce volatility such that $\Pr(v_t > 0,\ \forall t) = 1$, since the upward drift is large enough to strongly reflect the process away from the origin.
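In code, the Feller condition is a one-line check. A small illustrative helper (not part of the chapter's implementation):

```python
def feller_condition_holds(kappa, theta, xi):
    """Feller condition: 2*kappa*theta >= xi**2 keeps the variance strictly positive."""
    return 2.0 * kappa * theta >= xi ** 2
```

Such a guard is useful when calibrated Heston parameters are fed into a simulator, since parameter sets violating the condition allow the variance process to touch zero.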
The stochastic nature of this model provides two advantages for evaluating it with a Monte Carlo simulation. The first is that, in its original form, it possesses a closed-form analytical solution that can be used to evaluate the bias of the numerical solution. The second is that its path-dependent nature can accommodate more complex path behaviours, e.g. barrier options, and Affine Jump Diffusion (AJD).
3.3 Quasi-Monte Carlo Simulation
The name Monte Carlo (MC) derives from the eponymous city's association with games of chance. The premise of chance is utilised within the simulation in order to draw a random sample from the overall probability space. If the sample is as close to truly random as possible, then it can be taken as representative of the entire probability space. The law of large numbers guarantees [8] that the estimate will converge to the true expectation as the number of random draws tends to +∞. Given a certain number of random draws, the likely magnitude of the error can be derived by the central limit theorem.
The Feynman–Kac theorem is the connective tissue between the Partial Differential Equation (PDE) form of the stochastic model and its approximation by a Monte Carlo simulation. By this theorem it is possible to approximate the solution of a certain form of Stochastic Differential Equation (SDE) by simulating random paths and taking their expectation as the solution of the original PDE.
As an example we can return to the BS model, where the price of the option depends on the expected value of the payoff. In order to calculate the expected value of f it is possible to run a MC simulation with N paths,2 in order to approximate the actual price $C$ of the call option with the simulated price $\hat{C}_N$,

$$\hat{C}_N = \frac{1}{N}\sum_{i=1}^{N} e^{-rT}\max\!\left(S_0\,e^{(r-\sigma^2/2)T + \sigma\sqrt{T}\,Z_i} - K,\ 0\right), \tag{3.2}$$

where $r$ is the risk-free interest rate, $T$ is the time to maturity of the option, $\sigma$ is the volatility, $K$ is the strike price at maturity date $T$, $S_0$ is the spot price at $t = 0$, and $Z_i$ are Gaussian Random Variables (RVs). By the strong law of large numbers we have,

$$\lim_{N\to\infty}\hat{C}_N = C \quad \text{almost surely.} \tag{3.3}$$
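Equation (3.2) can be transcribed almost directly. The sketch below (Python rather than the chapter's MATLAB; all parameter values are hypothetical) compares the MC estimate against the closed-form BS price:

```python
import math
import random

def bs_call_closed_form(S0, K, r, sigma, T):
    """Black-Scholes European call price, for checking the MC estimate."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def bs_call_mc(S0, K, r, sigma, T, n_paths, rng):
    """Estimate the call price as in Eq. (3.2): average of discounted payoffs."""
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += math.exp(-r * T) * max(ST - K, 0.0)
    return total / n_paths
```

With enough paths the two prices agree to within the Monte Carlo standard error, which shrinks as $1/\sqrt{N}$ in line with Eq. (3.3).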
3.3.1 Variance Reduction with Quasi-Random Numbers
There are two major avenues to take in order to reduce variance in a MC simulation: one is to take advantage of specific features of the problem domain to adjust or correct simulation outputs; the other is to directly reduce the variability of the simulation inputs. In this section we introduce the variance reduction technique of quasi-random numbers. This is a procedure that reduces the variance of the simulation by sampling from variates of lower variance. Such numbers can be sampled from so-called "low-discrepancy" sequences. A sequence's discrepancy is a measure of its uniformity and is defined by Definition 3.3.1 (see [6]).
Definition 3.3.1.
Given a set of points $x_1, x_2, \cdots, x_N \in I^S$, the s-dimensional unit cube, and a subset $G \subset I^S$, define the counting function $S_N(G)$ as the number of points $x_i \in G$. For each $x = (x_1, x_2, \cdots, x_S) \in I^S$, let $G_x$ be the rectangular s-dimensional region

$$G_x = [0, x_1) \times [0, x_2) \times \cdots \times [0, x_S),$$

with volume $x_1 x_2 \cdots x_S$. Then the discrepancy of the points $x_1, x_2, \cdots, x_N$ is given by

$$D_N^* = \sup_{x \in I^S} \left| S_N(G_x) - N\,x_1 x_2 \cdots x_S \right|.$$
The discrepancy of the distribution compares the number of sample points found in a sub-volume of a multi-dimensional space against the number of points that should be in that volume were the distribution uniform.
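In one dimension the supremum in Definition 3.3.1 has a closed form, which makes for a compact illustration (a sketch we add for intuition; here the discrepancy is normalised by N so it lies in [0, 1]):

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy (normalised by N) of a 1-D point set in [0,1).

    Uses the classical closed form for dimension one: with the points sorted,
    D*_N = max_i max( i/N - x_(i), x_(i) - (i-1)/N ).
    """
    xs = sorted(points)
    n = len(xs)
    return max(
        max((i + 1) / n - x, x - i / n)
        for i, x in enumerate(xs)
    )
```

For the maximally uniform set $x_i = (2i-1)/(2N)$ this returns $1/(2N)$, while a clustered set scores close to 1.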
There are a few sequences that are used to generate quasi-random variates. The NAG libraries provide three sequence generators: the sequences of [7, 9] and [4] are implemented in MATLAB with the NAG functions g05yl and g05ym (Fig. 3.1).
Fig. 3.1 The Niederreiter quasi-random uniform variates on the left, versus the pseudo-random uniform variates on the right. The NAG library was used for the quasi-random variates
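The NAG generators themselves are proprietary; as a library-free stand-in, the classic Van der Corput/Halton construction (not one of the NAG sequences, but the same low-discrepancy idea) can be sketched as:

```python
def van_der_corput(n, base=2):
    """n-th element (n >= 1) of the Van der Corput sequence in the given base:
    reverse the base-b digits of n about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

def halton(n, dim):
    """n-th point of the Halton sequence, using the first `dim` primes as bases."""
    primes = [2, 3, 5, 7, 11, 13]
    return tuple(van_der_corput(n, b) for b in primes[:dim])
```

Plotting halton(n, 2) for successive n fills the unit square far more evenly than pseudo-random draws, mirroring the contrast shown in Fig. 3.1.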
3.3.2 Discretisation Scheme
In 2005, [1] proposed a new scheme to discretise the stochastic volatility and the price of an underlying asset. This scheme takes advantage of the fact that a non-central χ2 sampled variate can be approximated by a related distribution that is moment-matched to the conditional first and second moments of the non-central χ2 distribution.
As Andersen points out, the cubic transformation of the Normal RV, although a more accurate representation of the distribution close to 0, introduces negative values of the variance. Thus the quadratic representation is adopted, with a special case for when we have low values of V(t).
Therefore, when V(t) is sufficiently large, we get

$$V(t+\Delta t) = a\,(b + Z_V)^2, \tag{3.4}$$

where $Z_V$ is an $N(0,1)$ Gaussian RV, and $a$, $b$ are scalars that will be determined by moment-matching. For the complementary low values of V(t) the distribution can – asymptotically – be approximated by

$$\Pr\big(V(t+\Delta t) \in [x, x+dx]\big) \approx \big(p\,\delta(0) + \beta(1-p)e^{-\beta x}\big)\,dx, \quad x \ge 0, \tag{3.5}$$

where $\delta$ is the Dirac delta-function, strongly reflective at 0, and $p$ and $\beta$ are positive scalars to be calculated. The scalars $a$, $b$, $p$, $\beta$ depend on the parameters of the Heston model and the time granulation $\Delta t$, and will be calculated by moment-matching against the exact distribution.
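Andersen's moment-matching formulas give these parameters in closed form. The helper below is a Python sketch of that calculation (the cut-off Ψ_c = 1.5 is the value motivated later in this section; all names are ours):

```python
import math

def qe_match_params(m, s2, psi_c=1.5):
    """Moment-match the QE scheme parameters (Andersen's closed forms).

    Given the conditional mean m and variance s2 of the exact distribution,
    returns ('quadratic', a, b) when Psi <= psi_c, matching Eq. (3.4),
    or ('exponential', p, beta) otherwise, matching Eqs. (3.5)-(3.7).
    """
    psi = s2 / m**2
    if psi <= psi_c:
        inv = 2.0 / psi
        b2 = inv - 1.0 + math.sqrt(inv) * math.sqrt(inv - 1.0)
        a = m / (1.0 + b2)
        return ('quadratic', a, math.sqrt(b2))
    p = (psi - 1.0) / (psi + 1.0)
    beta = (1.0 - p) / m
    return ('exponential', p, beta)
```

A quick sanity check: in the quadratic branch the matched moments satisfy $a(1+b^2) = m$ and $2a^2(1+2b^2) = s^2$ exactly, and in the exponential branch $(1-p)/\beta = m$.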
To sample from these distributions there are two cases to take into account:

1. Sample $Z_V$ from the $N(0,1)$ Gaussian RV and calculate V from Eq. (3.4).
2. For small values of V the inverse of Eq. (3.5) will be used. The inverse of the distribution function is

$$\Psi^{-1}(u; p, \beta) = \begin{cases} 0, & 0 \le u \le p,\\[2pt] \beta^{-1}\ln\!\left(\dfrac{1-p}{1-u}\right), & p < u \le 1, \end{cases} \tag{3.6}$$

and the value of V can then be sampled from

$$V(t+\Delta t) = \Psi^{-1}(U_V; p, \beta), \tag{3.7}$$

where $U_V$ is a uniform RV.

The rule for deciding which discretisation of V to use depends on the non-centrality of the distribution, and can be triaged based on the value of Ψ,

$$\Psi = \frac{s^2}{m^2}, \tag{3.8}$$

where $m$ and $s^2$ are the conditional mean and variance of the exact distribution we are matching.
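Equations (3.6) and (3.7) translate into a few lines of Python (an illustrative transcription, not the chapter's MATLAB code):

```python
import math

def qe_inverse_psi(u, p, beta):
    """Inverse of the distribution function in Eq. (3.6):
    returns 0 for u <= p, otherwise a value from the exponential tail."""
    if not 0.0 <= u <= 1.0:
        raise ValueError("u must lie in [0, 1]")
    if u <= p:
        return 0.0
    return math.log((1.0 - p) / (1.0 - u)) / beta
```

Sampling V then amounts to drawing a uniform $U_V$ and evaluating qe_inverse_psi(U_V, p, beta), which returns exactly 0 with probability $p$, as Eq. (3.5) prescribes.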
What Andersen showed is that the quadratic scheme of Eq. (3.4) can only be moment-matched for Ψ ≤ 2, and similarly the exponential scheme of Eq. (3.7) can only be moment-matched for Ψ ≥ 1. It follows that there is an interval Ψ ∈ [1, 2] where the two schemes overlap. Andersen accordingly chooses the midpoint of this interval as the cut-off point between the schemes; thus the cut-off is Ψ_c = 1.5.
Having defined the discretisation process for the Quadratic Exponential (QE) scheme, with Eqs. (3.4) and (3.7), and the cut-off discriminator, what is left is to calculate the remaining parameters a, b, p, β for each case. The algorithm for this process is detailed in Algorithm 1.
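Putting the pieces together, one variance-update step of the QE scheme might be sketched in Python as follows (the chapter's Algorithm 1 is in MATLAB; the conditional moments m and s² used here are the standard closed-form expressions for the CIR variance process, and all names are ours):

```python
import math
import random

def qe_variance_step(v, kappa, theta, xi, dt, rng, psi_c=1.5):
    """One QE update V(t) -> V(t+dt) for the Heston variance process.

    The exact conditional mean m and variance s2 of the CIR process are
    moment-matched to either the quadratic (Eq. 3.4) or the exponential
    (Eq. 3.7) scheme, switched on Psi = s2/m^2 at the cut-off psi_c.
    """
    e = math.exp(-kappa * dt)
    m = theta + (v - theta) * e
    s2 = (v * xi**2 * e / kappa) * (1.0 - e) \
         + (theta * xi**2 / (2.0 * kappa)) * (1.0 - e) ** 2
    psi = s2 / m**2
    if psi <= psi_c:                      # quadratic scheme, Eq. (3.4)
        inv = 2.0 / psi
        b2 = inv - 1.0 + math.sqrt(inv) * math.sqrt(inv - 1.0)
        a = m / (1.0 + b2)
        z = rng.gauss(0.0, 1.0)
        return a * (math.sqrt(b2) + z) ** 2
    p = (psi - 1.0) / (psi + 1.0)         # exponential scheme, Eq. (3.7)
    beta = (1.0 - p) / m
    u = rng.random()
    return 0.0 if u <= p else math.log((1.0 - p) / (1.0 - u)) / beta
```

Iterating this step, together with a matching update for log S, produces one Heston path per simulation; by construction the variance samples are always non-negative.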
3.4 Implementations of the Algorithm on Different Platforms

3.4.1 CPU Baseline Model in Matlab
The algorithm described in Algorithm 1 is implemented in MATLAB, and is used for numerical comparisons of acceleration. It will be used henceforth as the baseline calculation cost. All the implementations explained below will be compared against this implementation.