The purpose of this handbook is to provide a survey of the important concepts and methods of computational finance. A glance at the table of contents reveals a wide range of articles written by experts in various subfields. The articles are expository, taking the reader from the basic concepts to the current research trends.
1.2.1 Organization
After this introductory part, this handbook is divided into four parts: “Pricing Models”, “Statistical Inference in Financial Models”, “Computational Methods”, and “Software Tools”. The chapters in each part generally range from more basic topics to more specialized ones, but in many cases there is no obvious sequence of topics. There are often considerable interrelationships between a chapter in one part and chapters in other parts of this handbook.
1.2.2 Asset Pricing Models (Part II)
The second part begins with an article by Gentle and Härdle that surveys the general approaches to modeling asset prices. The next three chapters address specific approaches. First, Detemple and Rindisbacher consider general diffusion models, and then Figueroa-López discusses diffusion models with a superimposed jump component, which also allows for stochastic volatility and clustering of volatility.
Next, Hafner and Manner discuss multivariate time series models, such as GARCH and linear factor models, that allow for stochastic volatility.
The next two chapters in Part II address pricing of derivatives. Fengler reviews the basic Black-Scholes-Merton (BSM) option pricing formula for stock options, and then discusses the concept of implied volatility, which derives from an inverse of the formula using observed prices of options. Especially since 1987, it has been observed that a plot of implied volatility versus moneyness exhibits a convex shape, or “smile”. The volatility smile, or volatility surface when a term structure dimension is introduced, has been a major impetus for the development of option pricing models. For other derivatives, the term structure of implied volatility or its relationship to moneyness has not been as thoroughly investigated. Li explores the “smile” of implied volatility in the context of interest rate derivatives.
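As an illustration of the inversion just described, implied volatility can be recovered numerically from the BSM formula, for example by simple bisection. The following is a minimal sketch, not a method from the chapter; the parameter values and the bisection bracket are illustrative assumptions.

```python
import math

def bs_call(S, K, r, T, sigma):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the BSM formula by bisection: find sigma with bs_call(...) = price.
    Works because the call price is strictly increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Evaluating `implied_vol` on observed option prices across strikes, and plotting the result against moneyness, traces out the volatility smile discussed above.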
Financial markets can be built on anything that varies. If volatility varies, then it can be monetized. In the final chapter of Part II, Härdle and Silyakova discuss the market in variance swaps.
1.2.3 Statistical Inference in Financial Models (Part III)
While Part II addressed the descriptive properties of financial models, the chapters in Part III consider issues of statistical inference, that is, estimation and testing, with these models. The first chapter in this section, by Kan, develops criteria for evaluating the correspondence of asset pricing models to actual observed prices, and then discusses statistical methods of comparing one model with another. The next two chapters consider the general problem of estimation of the probability density of asset prices, both under the assumption that the prices conform to the risk-neutral valuation principle. The first of these chapters, by Kraetschmer and Grith, uses parametric models, and the other chapter, by Härdle, Grith, and Schienle, uses nonparametric and semiparametric models.
A topic that is receiving a great deal of attention currently is value at risk (VaR). Chen and Lu provide a survey of recent developments in estimation of VaR and discuss the robustness and accuracy of the methods, comparing them using simulated data and backtesting with real data.
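One of the simplest estimators in this family is historical-simulation VaR, the empirical quantile of past losses. The following is an illustrative sketch only; Chen and Lu's chapter treats far more refined estimators, and the confidence level here is an arbitrary choice.

```python
def historical_var(returns, level=0.95):
    """Historical-simulation VaR: the loss threshold exceeded with
    probability roughly 1 - level under the empirical distribution."""
    losses = sorted(-r for r in returns)   # losses in ascending order
    idx = int(level * len(losses))         # index of the empirical quantile
    idx = min(idx, len(losses) - 1)
    return losses[idx]
```

Backtesting, as discussed in the chapter, then asks whether actual losses exceed this threshold about as often as the level predicts.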
An important parameter in any financial model is the volatility, whether it is assumed to be constant or stochastic. In either case, data-based estimates of its magnitude or of its distribution are necessary if the model is to be used. (As noted above, the model can be inverted to provide an “implied volatility”, but this is not of much value for the primary purpose for which the model was developed.) The basic statistic for estimation of the volatility is “realized volatility”, which is just the standard deviation of a sample of returns. The sample is actually a sequence, and hence cannot be considered a random sample. Furthermore, the sampling interval has a very large effect on the estimator. While certain statistical properties of the realized volatility require ever-increasing frequencies, other effects (“noise”) become confounded with the volatility at high frequencies. Christian Pigorsch, Uta Pigorsch, and Popov address the general problem of estimation of the volatility using realized volatility at various frequencies.
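In its simplest form, the realized-volatility statistic mentioned above can be computed from a price sequence as the annualized standard deviation of log returns. This is a sketch only; the annualization factor of 252 trading days is a common convention, not a prescription from the chapter, and it ignores the noise issues the chapter addresses.

```python
import math

def realized_vol(prices, periods_per_year=252):
    """Annualized realized volatility: sample standard deviation of log returns."""
    rets = [math.log(prices[i] / prices[i - 1]) for i in range(1, len(prices))]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / (n - 1)   # sample variance
    return math.sqrt(var * periods_per_year)
```

Applying this estimator at different sampling frequencies to the same price record illustrates the confounding effect described above: at very high frequencies the estimate is inflated by microstructure noise.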
Bjursell and Gentle discuss the problem of identifying jumps in a jump-diffusion model. Their focus is energy futures, particularly in brief periods that include the release of official build statistics.
Several of the chapters in Parts II and III use simulation to illustrate the points being discussed. Simulation is also one of the most useful tools for statistical inference in financial models. In the final chapter of Part III, Yu discusses various simulation-based methods for use with financial time series models.
1.2.4 Computational Methods (Part IV)
Many financial models require extensive computations for their analysis. Efficient numerical methods have thus become an important aspect of computational finance.
As we indicated above, statistical models generally consist of systematic components (“signals”) and random components (“noise”), and a primary aspect of statistical analysis is to identify the effects of these components. An important method of doing this is filtering. In the first chapter of Part IV, Fulop describes filtering techniques in the setting of a hidden dynamic Markov process, which underlies many financial models.
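A concrete special case of such filtering is the scalar Kalman filter, in which the hidden state follows a linear Gaussian dynamic and is observed through additive noise. The following toy sketch is illustrative only; the random-walk state model and the parameter names are assumptions, not Fulop's setting.

```python
def kalman_filter(obs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x_t = x_{t-1} + w_t
    observed as y_t = x_t + v_t, with Var(w_t) = q and Var(v_t) = r."""
    x, p = x0, p0
    estimates = []
    for y in obs:
        p = p + q                   # predict: state uncertainty grows
        k = p / (p + r)             # Kalman gain
        x = x + k * (y - x)         # update with the new observation
        p = (1.0 - k) * p           # posterior uncertainty shrinks
        estimates.append(x)
    return estimates
```

Relative to the raw observations, the filtered estimates average out the observation noise while still tracking the slowly moving hidden state, which is exactly the signal/noise separation described above.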
The stochastic components of financial models are often assumed to have some simple parametric form, and so fitting the probability model to empirical data merely involves the estimation of parameters of the model. Use of nonparametric models often results in greater fidelity of the model to observational reality. The greatest problem in fitting probability models to empirical data, however, occurs when multiple variables are to be modeled. The simple assumption of independence of the variables often leads to gross underestimation of risk. Simple variances and covariances do not adequately capture the relationships. An effective method of modeling the relationships of the variables is by use of copulae. These are not as simple to fit to data as are variances and covariances, especially if the number of variables is large. Okhrin discusses the use of copulae and numerical methods for fitting high-dimensional copulae to data.
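To make the idea concrete, the simplest member of this family is the Gaussian copula, which induces dependence between uniform marginals through a correlated normal vector. The bivariate sketch below uses an assumed correlation of 0.7; fitting high-dimensional copulae to data, the subject of Okhrin's chapter, is considerably more involved.

```python
import math, random

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_sample(n, rho, seed=42):
    """Draw n pairs (u1, u2) of uniforms whose joint dependence is a
    Gaussian copula with correlation rho: correlate two standard
    normals, then map each through the normal CDF."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        x1 = z1
        x2 = rho * z1 + math.sqrt(1.0 - rho**2) * z2   # 2x2 Cholesky step
        out.append((norm_cdf(x1), norm_cdf(x2)))
    return out
```

Each uniform coordinate can then be pushed through any marginal quantile function, so the dependence structure is modeled separately from the marginals, which is the point made above about going beyond variances and covariances.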
The next two chapters in Part IV discuss numerical methods for the solution of differential equations in finance. Forsyth and Vetzal describe the numerical solution of nonlinear deterministic partial differential equations (PDEs), and Sauer discusses numerical methods for stochastic differential equations (SDEs). Both of these chapters focus on the financial applications of these equations.
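The workhorse scheme in the stochastic setting is the Euler-Maruyama method. The sketch below applies it to geometric Brownian motion, dS = rS dt + sigma S dW; the parameter values are illustrative assumptions, and higher-order schemes are the subject of Sauer's chapter.

```python
import math, random

def euler_maruyama_gbm(S0, r, sigma, T, n_steps, rng):
    """Simulate one geometric Brownian motion path with the
    Euler-Maruyama discretization:
    S_{t+dt} = S_t + r*S_t*dt + sigma*S_t*sqrt(dt)*Z,  Z ~ N(0, 1)."""
    dt = T / n_steps
    S = S0
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        S += r * S * dt + sigma * S * math.sqrt(dt) * z
    return S
```

Averaging a discounted payoff over many such simulated paths gives a Monte Carlo price; refining the time step dt reduces the discretization bias.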
One of the most important problems in computational finance is the development of accurate and practical methods for pricing derivatives. Seydel discusses lattice or tree-based methods, and Kwok, Leung, and Wong discuss the use of discrete Fourier transforms implemented by the fast Fourier transform (FFT).
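The lattice methods mentioned here can be illustrated with the classical Cox-Ross-Rubinstein binomial tree for a European call. This is a textbook sketch with arbitrary parameter values, not the treatment given in Seydel's chapter.

```python
import math

def crr_call(S0, K, r, sigma, T, n):
    """Price a European call on a Cox-Ross-Rubinstein binomial lattice:
    n time steps, up factor u = exp(sigma*sqrt(dt)), d = 1/u, and
    risk-neutral up-probability p."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # payoffs at the terminal nodes (j = number of up moves)
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction through the lattice
    for step in range(n, 0, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(step)]
    return values[0]
```

As n grows, the lattice price converges to the BSM value, and the same backward induction extends naturally to American-style early exercise, which is one reason tree methods remain popular.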
Some of the earliest studies in computational finance led to the development of dynamic programming. This continues to be an important tool in computational finance. Huang and Guo discuss its use in hedging strategies, and Breton and Frutos describe the use of approximation of dynamic programs for derivative pricing.
An important concern about any model or inference procedure is the robustness to unusual situations. A model that serves very well in “normal” times may be completely inappropriate in other regimes. Evaluation of models involves “stress testing”; that is, assessment of the validity of the model in unusual situations, such as bubble markets or extended bear markets. Overbeck describes methods of stress testing for risk management.
One of the earliest problems to which modern computational methods were addressed is that of selection of an optimal portfolio, given certain characteristics of the available securities and restrictions on the overall risk. Rindisbacher and Detemple discuss portfolio optimization in the context of modern pricing models.
As Yu discussed in Part III, simulation-based methods have widespread applications in financial models. The efficiency of these computationally intensive methods can be greatly increased by the use of better methods of covering the sample space. Rather than simulating randomness of sampling, it is more efficient to proceed through the sample space deterministically in a way that guarantees a certain uniformity of coverage. In the next chapter of Part IV, Niederreiter describes the concepts of low-discrepancy simulation, and discusses how quasi-Monte Carlo methods can be much more efficient than ordinary Monte Carlo.
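A small illustration of the contrast: a two-dimensional Halton sequence (bases 2 and 3) covers the unit square far more evenly than pseudo-random points, so sample averages converge faster. This sketch uses the simplest such construction; the theory of discrepancy is treated in depth in Niederreiter's chapter.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-b digits of i
    about the radix point to get a point in [0, 1)."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f
        i //= base
        f /= base
    return x

def halton_2d(n):
    """First n points of the 2-D Halton low-discrepancy sequence."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, n + 1)]
```

Averaging a smooth integrand such as f(u, v) = u*v over these points estimates its integral over the unit square (which is exactly 0.25) with error shrinking roughly like (log n)^2/n, versus the n^(-1/2) rate of plain Monte Carlo.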
An important tool in computational finance is statistical learning; that is, the identification of rules for classification of features of interest. There are various approaches to statistical learning, and in the last chapter of Part IV, Lee, Yeh, and Pao discuss support vector machines, which is one of the most useful of the methods of classification.
1.2.5 Software Tools (Part V)
Financial modeling and analysis require good software tools. In Part V Gentle and Martinez briefly discuss the various types of software available for financial applications and then proceed to discuss one specific software package, Matlab.
This flexible and powerful package is widely used not only for financial analyses but also for a range of scientific applications. Another software package, which is open source and freely distributed, is R. Nolan discusses R and gives several examples of its use in computational finance.