MARKET RISK MEASUREMENT: VALUE AT RISK

25.12.1 Definition of Value at Risk

Value at risk (VaR) calculates the worst loss that might be expected at a given confidence level over a given time period under normal market conditions. VaR is, essentially, an expansion and application of modern portfolio analysis as developed over the last half century by Harry Markowitz and many others. There is now a wealth of information on VaR and a recommended reading list is given in Appendix 15.

25.12.2 Value at Risk

VaR is one of the most common measurements of market risk in the financial sector. It gives a fixed probability (or confidence level) that any losses suffered by the portfolio over the holding period will be less than the limit established by VaR; by the same token, there is a fixed probability that the losses will be worse. Critically, the VaR limit neither indicates how severe the losses could be nor specifies the worst possible loss. VaR simply states how likely (or unlikely) it is that the VaR measure will be exceeded. VaR is now recognised as a standard measure of market risk, expressed in terms of the money that might actually be lost. For instance, a bank might report that the daily VaR of its trading portfolio is £50 million at the 95% confidence level. Described another way, there is only a 5% chance that a daily loss greater than £50 million will occur under normal market conditions. The appeal of VaR relates to its ability to provide a consistent and comparable measure of risk across all products and business streams. Cossin (2005) makes the valid observation that as risk models become more complex, board members sometimes unfortunately treat them as black boxes.

The purpose of the modelling is to inform decision makers. If the model builder is unable to communicate how the model arrives at its answers, if the board members cannot grasp the statistical basis of the model, or if both description and comprehension are poor, the value of modelling is either diluted or lost altogether, and board members revert to instinctive beliefs and judgement.
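To make the bank example above concrete, the following is a minimal Python sketch of how a one-day 95% VaR figure of roughly £50 million might arise under a normal-returns assumption. The portfolio value and volatility are hypothetical inputs chosen for illustration, not figures from the text.

```python
from statistics import NormalDist

# Hypothetical inputs, chosen so the result lands near £50 million.
portfolio_value = 2_000_000_000  # assumed £2bn trading portfolio
daily_volatility = 0.0152        # assumed 1.52% daily return volatility
confidence = 0.95

# Under a normal-returns assumption, the one-day 95% VaR is the loss
# at the 5th percentile of the daily return distribution.
z = NormalDist().inv_cdf(1 - confidence)  # approx -1.645
var_95 = -z * daily_volatility * portfolio_value

print(f"One-day 95% VaR: £{var_95:,.0f}")
# Reading: on 95% of days the loss should not exceed this figure; on
# roughly 1 day in 20 it may, and VaR says nothing about how much worse.
```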

25.12.3 VaR Model Assumptions

Before the financial crisis VaR models were based on seven assumptions.

1. Short observation periods (of, say, 12 months) provide a sufficient time span to make robust, defensible predictions about likely future events.

2. Robust inferences can be drawn from past asset-price volatility to guide thinking about the probability of future events.

3. The distribution of likely values will be approximately normal (shaped like a bell curve), as values will be grouped equally above and below the mean and then tail off symmetrically.

4. Risk can be reliably projected to enable businesses to make informed investment decisions and implement controls according to an investor’s or firm’s appetite for risk.

5. The actions of one firm are independent of the actions of other firms in the same market, and a single player is incapable of affecting market equilibriums by inducing similar and simultaneous behaviour.

6. Top management and boards understand how VaR models are constructed and the reliance that can be placed on the results, and are able to assess and appropriately exercise judgement over the risks being taken.

7. The increasing sophistication of VaR models keeps pace with the increasingly complex securitised credit market and makes participation in it safe, through the ability to both measure and manage risk.

Unfortunately, in times of crisis none of these assumptions holds. The distribution of asset-price volatility has much fatter tails – that is, the likelihood of extreme events in asset-price swings is much higher than normal distribution models, including VaR, would predict. In addition, as has become painfully apparent, volatility today can exceed anything that has occurred in the past. The limitations of banks’ risk models were starkly exposed in August 2007. As recorded in the Financial Times at the time, the Chief Financial Officer of Goldman Sachs, David Viniar, stated that the bank witnessed a vast shift in risk exposure repeatedly, one day after another. In other words, the unthinkable was happening on a daily basis. Alan Greenspan stated, when testifying to the Congressional Committee for Oversight and Government Reform in October 2008, that while the modern risk management paradigm had held sway for some time, “the whole intellectual edifice . . . collapsed in the summer [of 2007] because the data inputted into the risk management models generally covered only the past two decades, a period of euphoria”. Nocera (2009) wrote that while VaR was very popular and heavily relied upon by risk managers and banks, it exacerbated the crisis by giving a false sense of security to bank executives and regulators. He portrayed VaR as both easy to misunderstand and dangerous when misunderstood. The FSA identified four problem areas in the application of VaR in the Turner Review (FSA 2009), described in Box 25.6. These failed assumptions mean that reliance on such models can lead to calamitous consequences.
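The fat-tail point can be made quantitative with a short sketch: under a Student-t distribution (a standard fat-tailed alternative, here with 3 degrees of freedom as an illustrative choice, rescaled to unit variance) large moves are vastly more probable than the normal distribution implies. This is an illustration of the general point, not a model used in the text.

```python
import math
from scipy.stats import norm, t

df = 3                            # illustrative fat-tailed choice
scale = math.sqrt(df / (df - 2))  # std dev of t(3), used to rescale to unit variance

# Probability of a move beyond k standard deviations, normal vs fat-tailed.
for k in (3, 5, 10):
    p_normal = norm.sf(k)            # upper tail of N(0, 1)
    p_fat = t.sf(k * scale, df=df)   # upper tail of unit-variance t(3)
    print(f"{k}-sigma move: normal {p_normal:.1e}, t(3) {p_fat:.1e}, "
          f"ratio {p_fat / p_normal:,.0f}x")
```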

Box 25.6 Contribution of VaR to the global financial crisis.

The financial crisis has revealed, however, severe problems with these [sophisticated mathematical] techniques. They suggest at very least the need for significant changes in the way that VAR-based methodologies have been applied: some, however, pose more fundamental questions about our ability in principle to infer future risk from past observed patterns.

Four categories of problem can be distinguished:

Short observation periods. Measures of [VaR] were often estimated using relatively short periods of observation e.g. 12 months. As a result they introduced significant procyclicality, with periods of low observed risk driving down measures of future prospective risk, and thus influencing capital commitment decisions which were for a time self-fulfilling. At very least much longer time periods of observations need to be used.

Non-normal distributions. However, even if much longer time periods (e.g. ten years) had been used, it is likely that estimates would have failed to identify the scale of risks being taken. Price movements during the crisis have often been of a size whose probability was calculated by models (even using longer term inputs) to be almost infinitesimally small. This suggests that the models systematically underestimated the chances of small probability high impact events. Models frequently assume that the full distribution of possible events, from which the observed price movements are assumed to be a random sample, is normal in shape. But there is no clear robust justification for this assumption and it is possible that financial market movements are inherently characterized by fat-tail distributions. This implies that any use of VAR models needs to be buttressed by the application of stress test techniques which consider the impact of extreme movements beyond those which the model suggests are at all probable. Deciding just how stressed the stress test should be, is, however, inherently difficult, and not clearly susceptible to any mathematical determination.

Systemic versus idiosyncratic risk. One explanation of fat-tail distributions may lie in the importance of systemic versus idiosyncratic risk i.e. the presence of “network externalities”. The models used implicitly assume that the actions of the individual firm, reacting to market price movements, are both sufficiently small in scale as not themselves to affect the market equilibriums, and independent of the actions of other firms. But this is a deeply misleading assumption if it is possible that developments in markets will induce similar and simultaneous behaviour by numerous players. If this is the case, which it certainly was in the financial crisis, [VaR] measures of risk may not only fail adequately to warn of rising risk, but may convey the message that risk is low and falling at the precise time when systemic risk is high and rising. According to [VaR] measures, risk was low in spring 2007: in fact the system was fraught with huge systemic risk. This suggests that stress tests may need (i) to be defined as much by regulators in the light of macro-prudential concerns, as by firms in the light of idiosyncratic concerns; and (ii) to consider the impact of second order effects i.e. the impact on one bank of another bank’s likely reaction to the common systemic stress.

Non-independence of future events; distinguishing risk and uncertainty. More fundamentally, however, it is important to realise that the assumption that past distribution patterns carry robust inferences for the probability of future patterns is methodologically insecure. It involves applying to the world of social and economic relationships a technique drawn from the world of physics, in which a random sample of a definitively existing universe of possible events is used to determine the probability characteristics which govern future random samples. But it is unclear whether this analogy is valid when applied to economic and social relationships, or whether instead, we need to recognise that we are dealing not with mathematically modellable risk, but with inherent “Knightian” uncertainty. This would further reinforce the need for a macro-prudential approach to regulation. But it would also suggest that no system of regulation could ever guard against all risks/uncertainties, and that there may be extreme circumstances in which the backup of risk socialisation (e.g. of the sort of government intervention now being put in place) is the optimal and the only defence against system failure.

Source: FSA (2009, Section 1.4(iii)).

25.12.4 Use of VaR to Limit Risk

Even after the heavy criticism that VaR received in the wake of the financial crisis, it continues to be used today. The approach adopted depends on the lessons learned and a board’s receptiveness to risk management.

Financial institutions continue to manage market risk in different ways, depending on how risk management has evolved over time and the working practices of the risk management specialists employed to support its implementation. At the time of writing, market risk management at JPMorgan Chase is an independent risk management function, aligned primarily with each of the firm’s business segments. Market Risk works in partnership with the business segments to identify and monitor market risks throughout the firm as well as to define market risk policies and procedures. The risk management function is headed by the firm’s chief risk officer. Market Risk seeks to facilitate efficient risk/return decisions, reduce volatility in operating performance and make the firm’s market risk profile transparent to senior management, the board of directors and regulators. Market Risk is responsible for the following functions:

• establishing a comprehensive market risk policy framework;

• independent measurement, monitoring and control of business segment market risk;

• definition, approval and monitoring of limits;

• performance of stress testing and qualitative risk assessments.

Its application of VaR is described in Box 25.7.

Box 25.7 Application of VaR: an example

JPMorgan Chase’s primary statistical risk measure, VaR, estimates the potential loss from adverse market moves in a normal market environment and provides a consistent cross-business measure of risk profiles and levels of diversification. VaR is used for comparing risks across businesses, monitoring limits, and as an input to economic capital calculations.

Each business day, as part of its risk management activities, the Firm undertakes comprehensive VaR calculations that include the majority of its market risks. These VaR results are reported to senior management.

To calculate VaR, the Firm uses historical simulation, based on a one-day time horizon and an expected tail-loss methodology, which measures risk across instruments and portfolios in a consistent and comparable way. The simulation is based on data for the previous 12 months. This approach assumes that historical changes in market values are representative of future changes; this assumption may not always be accurate, particularly when there is volatility in the market environment. For certain products, such as lending facilities and some mortgage-related securities for which price-based time series are not readily available, market-based data are used in conjunction with sensitivity factors to estimate the risk. It is likely that using an actual price-based time series for these products, if available, would impact the VaR results presented. In addition, certain risk parameters, such as correlation risk among certain instruments, are not fully captured in VaR. In the third quarter of 2008, the Firm revised its reported IB Trading and credit portfolio VaR measure to include additional risk positions previously excluded from VaR, thus creating a more comprehensive view of the Firm’s market risks. In addition, the Firm moved to calculating VaR using a 95% confidence level to provide a more stable measure of the VaR for day-to-day risk management. The Firm intends to present VaR solely at the 95% confidence level commencing in the first quarter of 2010, as information for two complete year-to-date periods will then be available.

Source: JPMorgan Chase & Co., Annual Report (2009).
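As a hedged illustration of the “expected tail-loss” idea mentioned in the box, the sketch below computes both a 95% historical VaR and the average loss on the days that breach it, from a simulated P&L history. In practice the inputs would be the revalued portfolio’s actual daily gains and losses; all figures here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated stand-in for 12 months (~252 trading days) of daily P&L;
# a fat-tailed draw is used purely for illustration.
pnl = rng.standard_t(df=4, size=252) * 5_000_000

confidence = 0.95
# Historical-simulation VaR: the loss at the 5th percentile of P&L.
var = -np.percentile(pnl, 100 * (1 - confidence))
# Expected tail loss: the mean loss on the days that breach the VaR.
expected_tail_loss = -pnl[pnl <= -var].mean()

print(f"95% VaR: £{var:,.0f}; expected tail loss: £{expected_tail_loss:,.0f}")
```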

25.12.5 Calculating Value at Risk

There are three common methods of calculating VaR, and each has its own benefits and drawbacks. They are known as the historical method, variance–covariance (or analytical) method and Monte Carlo simulation.

Historical Simulations Method

Historical simulation is intuitive, easy to understand and is the simplest and most transparent method of calculation. The fundamental assumption of the historical simulations method is that the past performance of a portfolio is a good indicator of the near future; in other words, the recent past will reproduce itself in the near future. The method involves running the current portfolio across a set of historical price changes to yield a distribution of changes in portfolio value, and computing a percentile (the VaR). There is no need to estimate the volatilities (and to some degree the correlations) between the various assets, as they are implicitly captured by the actual daily realisations of the assets. Proponents argue that the fat tails of the distribution and other extreme events are captured, as long as they are contained in the dataset and the dataset covers a sufficiently long period. The main disadvantages of this method are that it relies completely on a particular historical dataset and its idiosyncrasies, and that it cannot handle sensitivity analyses easily. In addition, if a simulation is run in a bull market the VaR may be underestimated; conversely, if a simulation is run after a crash, the falling returns which the portfolio has experienced may distort VaR. The method may also not be computationally efficient when the portfolio contains complex securities or a very large number of instruments.
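A minimal sketch of the historical simulations method, under stated assumptions: a hypothetical two-asset portfolio is re-priced under each of 252 historical daily return vectors (simulated here in place of a real price history), and the VaR is read off as a percentile of the resulting P&L distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical current holdings (in £) and a stand-in for 12 months of
# historical daily returns; a real implementation would load the actual
# price history instead of simulating one.
positions = np.array([60_000_000.0, 40_000_000.0])
hist_returns = rng.multivariate_normal(
    mean=[0.0, 0.0],
    cov=[[1.0e-4, 0.5e-4],
         [0.5e-4, 4.0e-4]],
    size=252,
)

# Re-price the *current* portfolio under each historical day's moves,
# yielding a distribution of hypothetical one-day P&L outcomes.
pnl = hist_returns @ positions

# The 95% historical VaR is the loss at the 5th percentile.
var_95 = -np.percentile(pnl, 5)
print(f"One-day 95% historical VaR: £{var_95:,.0f}")
```

Note that the volatilities and correlations never appear explicitly: as the text says, they are implicit in the historical data itself.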

Variance–Covariance or Analytical Method

This method assumes that portfolio returns are normally distributed, which requires estimating the expected return and standard deviation of returns for each asset. A distribution is described as normal if there is a high probability that any observation from the population sample will have a value close to the mean, and a low probability of having a value far from the mean. The VaR model uses the normal curve to estimate the losses that an institution may suffer over a given time period. The main benefits of the variance–covariance method are that it requires very few parameters, it is easy to implement and it is quick to run computations (with an appropriate mapping of the risk factors). However, as the number of securities in a portfolio increases, these calculations can become unwieldy. As a result, a simplifying assumption of zero expected return is sometimes made. This assumption has little effect on the outcome for short-term (daily) VaR calculations but is inappropriate for longer-term measures of VaR. The principal disadvantage of the method is its significant assumption that price changes in the financial markets follow a normal distribution, which can be unrealistic.
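A minimal sketch of the variance–covariance calculation for the same hypothetical two-asset portfolio: estimated volatilities and a correlation matrix (all assumed figures) are combined into a portfolio P&L standard deviation, and the VaR is a normal quantile of that, using the zero-expected-return simplification noted above.

```python
import numpy as np
from statistics import NormalDist

# Assumed inputs: £ holdings, daily return volatilities, correlation.
positions = np.array([60_000_000.0, 40_000_000.0])
vols = np.array([0.01, 0.02])
corr = np.array([[1.00, 0.25],
                 [0.25, 1.00]])

# Covariance matrix of daily returns, then the portfolio P&L std dev.
cov = np.outer(vols, vols) * corr
portfolio_sigma = float(np.sqrt(positions @ cov @ positions))

# Parametric 95% VaR under normal returns and zero expected return.
z = NormalDist().inv_cdf(0.95)  # approx 1.645
var_95 = z * portfolio_sigma
print(f"One-day 95% parametric VaR: £{var_95:,.0f}")
```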

Monte Carlo Method

Monte Carlo simulation involves developing a spreadsheet-based model for future stock price returns and running multiple hypothetical trials or simulations based on the model. As with historical simulation, Monte Carlo simulation allows the risk manager to use actual historical distributions for risk factor returns rather than having to assume normal returns. It is a flexible tool that can be quickly updated. Each simulation is one potential outcome. A number of trials are run, and it is the statistical analysis of this group of aggregated trials that enables predictions to be made about price volatility. Given the speed of current computers, more complex models populated with extensive data can be run moderately quickly. Each time the simulation is run the result is different, as it will be the summation of a new set of random numbers drawn from the distribution of each variable. This method is more realistic than the previous two and therefore more likely to estimate VaR accurately. However, to many uninformed users Monte Carlo simulation appears to have an inherently opaque or “black box” nature, and hence they are sceptical of its merits. Those who are conversant with the method are able, depending on the software used, to view the inputs and the statistical analysis undertaken and understand how the results were derived.
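A minimal Monte Carlo sketch, under stated assumptions: one-day returns for a single hypothetical position are drawn many times from an assumed return model (a normal model here for brevity; a fitted fat-tailed model could be substituted), and the VaR is read off the simulated P&L distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
portfolio_value = 100_000_000.0   # assumed £100m position
mu, sigma = 0.0, 0.012            # assumed daily drift and volatility

# Each draw is one hypothetical trial; the model generating the draws
# is the analyst's choice and is the crux of the method.
n_trials = 100_000
sim_pnl = portfolio_value * rng.normal(mu, sigma, size=n_trials)

# The 95% Monte Carlo VaR is the 5th percentile loss of the trials.
var_95 = -np.percentile(sim_pnl, 5)
print(f"One-day 95% Monte Carlo VaR: £{var_95:,.0f}")
```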
