
An application of alternative risk measures to trading portfolios


An Application of Alternative Risk Measures to Trading Portfolios

Abstract

In practice, the tools of extreme value theory (EVT) are applied to a varied sample of trading portfolios across different sectors, sensitive to one or multiple risk factors. A detailed analysis of the tail of the empirical profit & loss distribution is performed, with an emphasis on the estimates of value-at-risk and expected shortfall. The concept of expected shortfall is also used as a measure of the sensitivity of the portfolio to risk factors, thus allowing us to determine the main drivers of risk.

Being involved with the market directly and on a daily basis, as well as considering the recent events in the Russian market - more specifically, the Yukos case - provided the opportunity to observe a real example where historical VaR fails to be coherent.

Cornelia Glavan

Supervisor: Prof. Dr. Uwe Schmock, Institute for Financial and Actuarial Mathematics, Vienna University of Technology
Supervisor: Dr. Andreas Bitz, Head Market Risk Control, UBS Investment Bank, Switzerland


Acknowledgements

A master thesis is often perceived as the result of an individual effort. This is hardly the case here, as the following study is the result of team work. The paper was written during an internship with Market Risk Control at UBS Investment Bank, Zurich.

I am highly appreciative of the guidance and insights into the business from my supervisor Andreas Bitz in particular, and of the help from the whole Market Risk Control team in general. Sincere thanks to Roberto Frassanito and Michael Rey for their most useful explanations and remarks.

I am very grateful to Prof. Uwe Schmock for all the help with the mathematical theory.

And last but not least, many thanks to all my friends and family for their help and support.


Contents

Abstract
Overview
1 Mathematical Theory
 1.1 Expected Shortfall
 1.2 Generalized Pareto Distribution
 1.3 Method of Block Maxima
 1.4 Historical Approach
2 Coherence and VaR
 2.1 Academic Example
 2.2 Practical Example
 2.3 Exploring the Tail
3 Equity Portfolios
 3.1 One Risk Factor
  3.1.1 UBS stocks
  3.1.2 Comparative analysis
 3.2 Two Risk Factors
4 Currency and Fixed Income Portfolios: Multiple Risk Factors
 4.1 Currency Portfolios
 4.2 Fixed Income Portfolios
  4.2.1 One Risk Factor: the Yield
  4.2.2 Multiple Risk Factors
5 Mixed Multiple Risk Factor Portfolio
Conclusions: Why Expected Shortfall?
References
Appendix B
Appendix C
Appendix D
Appendix E


Overview

Following Basel I rules, value-at-risk (VaR) has been established as one of the main risk measures. Although widely used by financial institutions, the risk measure is heavily criticized in the academic world for not being sub-additive, i.e. the risk of a portfolio as a whole can be larger than the sum of the stand-alone risks of its components when measured by VaR. Consequently, VaR may fail to justify diversification, and it does not take into account the severity of an incurred damage event. As a response to these deficiencies, the notion of coherent risk measures was introduced. The best-known coherent risk measure is expected shortfall (ES), the expected loss given that the loss exceeds VaR. A more detailed theoretical explanation is given in the first chapter, which comprises the mathematical theory used in this paper.

The current method at most financial institutions for estimating VaR is based on a historical framework. In order to challenge this method, extreme value theory (EVT) is used to estimate both value-at-risk and expected shortfall. The EVT method focuses on modelling the tail behaviour of a loss distribution using only extreme values rather than all the data. This method, based on the generalized Pareto distribution (GPD), has the theoretical background that allows fitting the tail of the losses to a certain class of distributions.

The second chapter provides two examples demonstrating the advantages of expected shortfall over VaR: the first is an artificial example, while the second is taken from practice and follows exactly the framework currently used in estimating VaR at most financial institutions. The last example in this chapter represents a practical case where extreme value theory can be applied to financial data using a different method, that of block maxima. This method is applied in order to give a better understanding of event risk and is able to provide answers to questions like: "How rare is an event of obtaining a return as low as or lower than a certain loss?"

Further, the paper contains the results of applying EVT to equity portfolios, currency portfolios and fixed income portfolios. The third chapter discusses the results of applying EVT to equity portfolios consisting of a single position: long stock and short stock. In these cases we have only one risk factor - the price log returns. To facilitate a better understanding of the behaviour of the generalized Pareto distribution when applied to equity portfolios, a representative collection covering a wide range of stocks was chosen: SMI and DAX, the well-diversified European indices; Nasdaq, the American index, much more volatile than its European counterparts; the stocks of a highly liquid financial institution; and the highly volatile stocks of ABB, Disetronic and Yukos, which underwent a lot of distress in the time period considered.

Next, the analysis is done on positions of being long or short an option on the SMI. Two risk factors are involved in these cases: the price log returns of the underlying and the absolute returns of the implied volatility. EVT is applied in estimating the risk measures for the profit and loss function (P&L) when the risk factors are considered individually, as well as for the aggregated P&L. Given the multitude of risk factors involved, the historical P&L is preferred because it makes no assumptions about the correlation between them.

The fourth chapter discusses the results of estimating VaR and ES for some representative portfolios from the currency and fixed income sectors.

In an attempt to cover a broad gamut of currency portfolios, two low-volatility cases (JPY/USD and USD/EUR) and a highly volatile, low-liquidity case characterized by an emerging-market currency (USD/TRL) are considered. The generalized Pareto distribution is fitted to both the upper and the lower tail of the distributions.

The chapter continues with a summary of the results of fitting the tail to a GPD for fixed income portfolios; two different bonds are considered. The main risk factors to which the P&L is mapped are the spread to the Treasury curve in the first case, and the LIBOR curve and the spread in the second. The idea behind expected shortfall is used to measure the sensitivity of the aggregated P&L to moves of the different risk factors.


The fifth chapter considers a hypothetical portfolio containing a variety of financial instruments, covering all the different businesses discussed in the paper. The historical approach is used to compute the P&Ls mapped on the different risk factors, and expected shortfall is used as the main risk measure, which allows us to make no assumptions about the correlation between risk factors and ensures that cases of incoherence are avoided. The concept of expected shortfall is successfully used to determine the main drivers of risk in the portfolio. This tool is employed to measure the sensitivity of the overall portfolio to individual risk factors, thus giving a clear view of the risk and potentially pointing to hedging strategies.


Chapter 1 Mathematical Theory

In this chapter the basic mathematical definitions necessary for understanding the paper are given, along with a detailed description of the methods used. The reader may skip this chapter for now and come back to check the notions used while going through the paper; references are provided throughout to make the relevant parts easier to find.

Let X be a real-valued random variable (r.v.) on a fixed probability space. X is considered the random profit of the portfolio, so we are mainly interested in losses, i.e. low values of X. The exact mathematical definitions of some risk measures are given below. Throughout, α denotes the confidence level (usually 0.95 or 0.99).

Definition 1: Value-at-Risk: gives the maximum loss such that with probability at most (1−α) we exceed it:

$$\mathrm{VaR}_\alpha(X) = -\inf\{x \in \mathbb{R} : P[X \le x] \ge 1-\alpha\}$$

Observation: this is actually the lower (1−α)-quantile of X, taken with a minus sign.

Definition 2: Tail Conditional Loss: the expected loss given that the loss exceeds VaR:

$$\mathrm{TCL}_\alpha(X) = -E\big[X \mid X \le -\mathrm{VaR}_\alpha(X)\big]$$

Expected shortfall is coherent, but when defined as the tail conditional loss (TCL) above, the property of coherence is lost, as shown in the first example of the next chapter.

Before proceeding with the definition of expected shortfall, it is worth giving the exact characteristics of the property of coherence, especially since intuitively this property is expected to be fulfilled by any function that assigns a number as a measure of risk. Before introducing coherence we give the strict definition of a risk measure:

Definition 3: Risk Measure

Let V be a set of real random variables on some probability space such that E[X⁻] < ∞ for all X ∈ V, where X⁻ = −min(X, 0). Then a function ρ which gives a real number for any random variable in V is called a risk measure.


The property of Coherence: let ρ be a risk measure.

(i) monotonous: X ∈ V, X ≤ 0, then ρ(X) ≥ 0.
Given that a loss is certain to occur, the risk function should reflect this.

(ii) sub-additive: X, Y, X+Y ∈ V, then ρ(X+Y) ≤ ρ(X) + ρ(Y).
The diversification effect: the risk of a portfolio as a whole should be smaller than the sum of the stand-alone risks of its components.

(iii) positively homogeneous: X ∈ V, h > 0, hX ∈ V, then ρ(hX) = hρ(X).
The risk increases proportionally with the magnitude of the portfolio, given that the weights of the assets stay the same.

(iv) translation invariant: X ∈ V, a a real number, then ρ(X + a) = ρ(X) − a.
If a certain gain (loss) is added to the portfolio, then its risk decreases (increases) by the same amount as the gain (loss).

Definition 4: Expected Shortfall:

$$\mathrm{ES}_\alpha(X) = -\frac{1}{1-\alpha}\Big(E\big[X\,\mathbf{1}_{\{X \le x^{(\alpha)}\}}\big] - x^{(\alpha)}\big(P[X \le x^{(\alpha)}] - (1-\alpha)\big)\Big),$$

where x^{(α)} = inf{x ∈ ℝ : P[X ≤ x] ≥ 1−α} is the lower (1−α)-quantile of X.

Expected shortfall is always coherent. An example showing that VaR and TCL are not coherent is provided in the second chapter. VaR, on the other hand, satisfies most of the properties of a coherent risk measure, except sub-additivity. VaR is sub-additive, for example, when the distribution of X is elliptical, as is the case for the normal and Student-t distributions. For more details on the coherence of VaR for elliptical distributions the reader is referred to [8].
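The sub-additivity failure can be made concrete with a small numerical sketch (an illustrative construction of my own, not the thesis's academic example): two independent loans, each earning 2 with probability 0.97 and losing 100 with probability 0.03, evaluated with the quantile convention of Definition 1.

```python
import itertools

def var(dist, alpha):
    """Discrete VaR: VaR_alpha(X) = -inf{x : P[X <= x] >= 1 - alpha}.
    dist is a list of (value, probability) pairs for the profit X."""
    cum = 0.0
    for x, p in sorted(dist):
        cum += p
        if cum >= 1 - alpha:
            return -x

# One defaultable loan: profit +2 with prob 0.97, -100 with prob 0.03.
loan = [(2.0, 0.97), (-100.0, 0.03)]

# Two independent loans: convolve the two distributions.
portfolio = [(x + y, px * py)
             for (x, px), (y, py) in itertools.product(loan, loan)]

print(var(loan, 0.95))       # -2.0: no loss at the 95% level
print(2 * var(loan, 0.95))   # -4.0: sum of the stand-alone risks
print(var(portfolio, 0.95))  # 98.0: the aggregate risk is far larger
```

Each loan alone has VaR(95%) = −2, yet the two-loan portfolio has VaR(95%) = 98, so VaR(X+Y) > VaR(X) + VaR(Y): diversification appears to increase the measured risk.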

1.2 Generalized Pareto Distribution

Extreme value theory goes back to the late 1920s, but only recently has it gained recognition as a practical and useful tool for estimating tail risk. By the middle of the last century econometricians had already discovered the non-normal behaviour of financial markets, yet the normality assumption is still widely used in the financial industry because it is easy to implement. Using the EVT method we look only at extreme losses, and under the assumption that the losses occur independently we also have the theoretical background to do so. Here we discuss the generalized Pareto distribution used in estimating the tail; this method is also referred to in many articles as peaks over threshold. A short account of the theory is given, including the Balkema and de Haan theorem on which this part of extreme value theory is built.

A threshold u is chosen, and the losses which lie beyond it are fitted to a GPD. The tools for choosing the threshold and the method used in this paper to estimate the GPD are discussed further below.

Let X1, X2, X3, …, Xn be the losses. We assume they are independent and identically distributed, and let F(x) = P(X1 ≤ x) be their distribution function.

Definition 5: Let xF be the right end point of the distribution F


Definition 6: For any u < x_F, we define the distribution function of the excesses over the threshold u by

$$F_u(x) = P(X - u \le x \mid X > u) = \frac{F(x+u) - F(u)}{1 - F(u)}.$$

Comment: choosing the threshold u smaller than the right end point of the loss distribution ensures that the probability of a loss exceeding it is positive.

We can now discuss the maximum domain of attraction (MDA) conditions.

MDA conditions: Using all of the assumptions above, let M_n = max{X1, X2, …, Xn}. Suppose that there exist a sequence of strictly positive numbers (a_n)_{n∈ℕ} and a sequence of real numbers (b_n)_{n∈ℕ} such that the sequence of transformed maxima (M_n − b_n)/a_n converges in distribution:

$$P\left(\frac{M_n - b_n}{a_n} \le x\right) = F^n(a_n x + b_n) \to H_\xi(x), \quad \text{for every continuity point } x \text{ of } H_\xi, \quad (1.2.3)$$

where H_ξ is a non-degenerate distribution function.

Comment: Reflecting on the MDA conditions the following question arises: "How do the normalizing sequences (a_n)_{n∈ℕ} and (b_n)_{n∈ℕ} influence the limiting distribution H_ξ?" The answer is that the limit law is uniquely determined up to affine transformations. The proof is provided in the book "Modelling Extremal Events for Insurance and Finance" by P. Embrechts, C. Klüppelberg and T. Mikosch, Theorem A1.5.

Now we are ready to state the fundamental theorem.

Theorem (Balkema and de Haan, 1974)

Under the MDA conditions, the generalized Pareto distribution is the limiting distribution of the excesses as the threshold tends from below to the right end point. That is, there exists a positive function β(u) such that

$$\lim_{u \to x_F}\ \sup_{0 \le x < x_F - u}\ \big| F_u(x) - G_{\xi,\beta(u)}(x) \big| = 0,$$

where the generalized Pareto distribution is

$$G_{\xi,\beta}(x) = \begin{cases} 1 - \left(1 + \xi x/\beta\right)^{-1/\xi}, & \xi \ne 0, \\ 1 - e^{-x/\beta}, & \xi = 0, \end{cases} \quad (1.2.5)$$

where x ≥ 0 when ξ ≥ 0, and 0 ≤ x ≤ −β(u)/ξ when ξ < 0.

We say that F belongs to the maximum domain of attraction of H_ξ, where H_ξ is the generalized extreme value distribution (GEVD):

$$H_\xi(x) = \begin{cases} \exp\{-(1 + \xi x)^{-1/\xi}\}, & \xi \ne 0, \\ \exp\{-e^{-x}\}, & \xi = 0, \end{cases} \quad (1.2.6)$$

where 1 + ξx > 0.

In fact this is the limiting distribution of the transformed maxima from (1.2.3). The parameter ξ is called the shape parameter, and it is an indicator of the fatness of the tail.


1.2.1 Properties of the tail of the distribution F:

If ξ > 0, we say that F belongs to the MDA of the Fréchet distribution. Gnedenko showed (1943) that in this case the tail decays like a power function. In this class are heavy-tailed distributions like the Pareto, log-gamma, Cauchy and Student-t.

If ξ = 0, F is in the MDA of the Gumbel distribution; this is the case of distributions whose tail decays like the exponential function, for example the normal, exponential and lognormal.

If ξ < 0, this is the class of the Weibull distribution; in this case F has support bounded above, like the beta distribution.

1.2.2 Tail fitting

For x > u the upper tail of the distribution can be written as

$$1 - F(x) = (1 - F(u))\,(1 - F_u(x - u)). \quad (1.2.7)$$

It can be shown that F(x), for x > u, can also be approximated by a GPD with the same shape parameter ξ as G_{ξ,β(u)}, the GPD fitted to the distribution of the excesses.

Method used to fit the tail:

We have a sample of iid (independent and identically distributed) losses x1, x2, x3, …, xn. We choose a threshold u and look at the extreme losses, those for which x > u. Assume there are k losses bigger than u, written x'1, x'2, …, x'k. Define y_i = x'_i − u for every i from 1 to k. We now regard y1, y2, …, yk as a sample from a GPD with parameters ξ, β, which we want to estimate.
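The steps above can be sketched in a few lines (a simulated Student-t sample stands in for a portfolio's losses; the 5% tail fraction is an assumption in line with the threshold rule discussed later for the mean excess plot):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Illustrative stand-in data: heavy-tailed daily losses (positive = loss);
# in the thesis these would come from the negated P&L of a trading portfolio.
losses = rng.standard_t(df=4, size=2000)

# Threshold chosen so that roughly 5% of the losses exceed it.
u = np.quantile(losses, 0.95)
excesses = losses[losses > u] - u          # y_i = x'_i - u

# Maximum-likelihood GPD fit to the excesses (location fixed at 0).
xi, _, beta = genpareto.fit(excesses, floc=0)
print(f"u = {u:.3f}, xi_hat = {xi:.3f}, beta_hat = {beta:.3f}")
```

For Student-t data the fitted shape parameter should come out positive, consistent with the Fréchet class described in 1.2.1.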

We use the log-likelihood function to estimate the parameters. Under the assumption that ξ ≠ 0, the log-likelihood function is:

$$l(\xi,\beta) = -k\ln\beta - \left(1 + \frac{1}{\xi}\right)\sum_{i=1}^{k}\ln\!\left(1 + \frac{\xi}{\beta}\,y_i\right). \quad (1.2.8)$$
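A minimal sketch of maximizing this log-likelihood numerically (the data are synthetic Lomax-distributed excesses, whose true GPD shape is ξ = 1/4; the optimizer and starting values are my own choices):

```python
import numpy as np
from scipy.optimize import minimize

def gpd_negloglik(params, y):
    """Negative of l(xi, beta) = -k ln(beta) - (1 + 1/xi) * sum ln(1 + xi*y_i/beta)."""
    xi, beta = params
    if beta <= 0:
        return np.inf
    z = 1 + xi * y / beta
    if np.any(z <= 0):           # outside the GPD support
        return np.inf
    k = len(y)
    loglik = -k * np.log(beta) - (1 + 1 / xi) * np.sum(np.log(z))
    return -loglik

# Hypothetical excesses over a threshold (in practice: y_i = x'_i - u).
rng = np.random.default_rng(1)
y = rng.pareto(4, size=500)      # Lomax(4): a GPD with xi = 0.25, beta = 0.25

res = minimize(gpd_negloglik, x0=[0.1, 1.0], args=(y,), method="Nelder-Mead")
xi_hat, beta_hat = res.x
print(f"xi_hat = {xi_hat:.3f}, beta_hat = {beta_hat:.3f}")
```

The estimates should land near the true values (0.25, 0.25), up to sampling noise.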

Maximizing it yields the estimates ξ̂ and β̂, and the resulting estimate of the tail of the loss distribution is

$$\hat F(x) = 1 - \frac{k}{n}\left(1 + \hat\xi\,\frac{x-u}{\hat\beta}\right)^{-1/\hat\xi}, \quad \text{for } x > u. \quad (1.2.9)$$

With X the profit and k the number of losses exceeding u out of n observations, the obtained estimates for value-at-risk and expected shortfall are:

$$\widehat{\mathrm{VaR}}_\alpha = u + \frac{\hat\beta}{\hat\xi}\left[\left(\frac{n}{k}(1-\alpha)\right)^{-\hat\xi} - 1\right], \quad \text{provided } \frac{k}{n} > 1-\alpha,$$

$$\widehat{\mathrm{ES}}_\alpha = \frac{\widehat{\mathrm{VaR}}_\alpha}{1-\hat\xi} + \frac{\hat\beta - \hat\xi u}{1-\hat\xi}, \quad \text{for the case } \hat\xi < 1. \quad (1.2.10)$$
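Turning fitted GPD parameters into VaR and ES point estimates is then a two-line computation; a sketch with hypothetical parameter values (u, ξ̂, β̂, k, n below are illustrative, not taken from the thesis):

```python
def gpd_var_es(u, xi, beta, n, k, alpha):
    """Point estimates of VaR and ES from a GPD fitted to the k losses
    exceeding u, out of n observations (valid for xi < 1, k/n > 1 - alpha)."""
    var = u + (beta / xi) * (((n / k) * (1 - alpha)) ** (-xi) - 1)
    es = var / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var, es

# Hypothetical fit: threshold 2.0, shape 0.2, scale 0.6, with k = 100 of
# n = 2000 losses above the threshold.
var99, es99 = gpd_var_es(u=2.0, xi=0.2, beta=0.6, n=2000, k=100, alpha=0.99)
print(f"VaR(99%) = {var99:.4f}, ES(99%) = {es99:.4f}")
```

Note that ES always exceeds VaR, and the gap widens as the shape parameter ξ̂ approaches 1.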


1.2.3 Mean Excess Function

As can be seen from the description above, we need to choose a threshold. What is the best method for choosing it? On one hand, the more points we have above the threshold, the more data we have for estimating the tail; on the other hand, the theorem tells us the threshold should tend to the right end point.

In this case a good indicator for choosing the threshold is the mean excess plot. Suppose the threshold excess X − u follows a GPD with parameters ξ and β, under the necessary restrictions as in (1.2.5). Then for any u' > u the mean excess function is

$$e(u') = E[X - u' \mid X > u'] = \frac{\beta + \xi(u'-u)}{1-\xi}, \quad (1.2.11)$$

which is a linear function of y = u' − u. If we take the empirical mean excess function

$$e_n(u') = \frac{\sum_{i=1}^{n}(x_i - u')^{+}}{\#\{i : x_i > u'\}}$$

and plot it for each u', we choose u so that for u' > u the plot looks like a line. In most of the cases studied the conclusion was to choose u' so that 95% of the sample points lie above u'.
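The empirical mean excess function can be sketched as follows (an exponential sample is used for illustration because its theoretical mean excess is constant, the ξ = 0 case; sample size and grid are assumptions):

```python
import numpy as np

def mean_excess(losses, thresholds):
    """Empirical mean excess e_n(u') = sum((x_i - u')+) / #{i : x_i > u'}."""
    losses = np.asarray(losses)
    return np.array([(losses[losses > u] - u).mean() for u in thresholds])

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=5000)   # exponential: e(u') should be ~1

grid = np.quantile(x, np.linspace(0.5, 0.98, 25))
e = mean_excess(x, grid)
print(np.round(e, 2))    # roughly flat, as expected for a ξ = 0 tail
```

In practice one plots `e` against `grid` and picks u where the curve becomes approximately linear; a positive slope then points to a heavy (Fréchet-type) tail.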

1.3 Method of Block Maxima

In this paper a wide range of stocks is used, and there will be cases in which the tail estimates obtained are sensitive to some very extreme losses. In such cases a natural question arises: is this loss an event risk? How rare is this loss and what is the probability of it happening again? EVT can be successfully used to answer this question, and it even allows us to loosen the assumption of the losses being independent. The general idea is to group the losses in blocks, by month or by quarter, and then fit the maximum losses from each block to the generalized extreme value distribution (1.2.6). This is due to the Fisher-Tippett theorem, which tells us that if F (the distribution of losses) belongs to a maximum domain of attraction, then the block maxima follow a generalized extreme value distribution. An example using this method is provided in the second chapter, while here we continue with a small description of the theory applied in it.

One of the important assumptions allowing the fitting of block maxima2 to a generalized extreme value distribution (see 1.2.6) is that the losses are independent and identically distributed. Further, the effect of relaxing the iid assumption to stationary processes3 is considered. With additional assumptions it can be shown that normalized block maxima indeed follow a GEV distribution asymptotically.
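The block-maxima fit itself is a one-liner with scipy (simulated data shaped like monthly blocks of daily losses; note that scipy's `genextreme` uses the opposite sign convention for the shape, c = −ξ relative to H_ξ in (1.2.6)):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Illustrative data: 43 "months" of 22 daily losses each (these block counts
# mirror the Yukos example later in the paper; the data are simulated).
daily = rng.standard_t(df=4, size=(43, 22))
block_maxima = daily.max(axis=1)

# Maximum-likelihood GEV fit; scipy's shape c corresponds to -xi in the text.
c, mu, sigma = genextreme.fit(block_maxima)
xi = -c
print(f"xi = {xi:.3f}, mu = {mu:.3f}, sigma = {sigma:.3f}")
```

The sign flip is easy to miss: a heavy Fréchet-type tail (ξ > 0 in the text's notation) appears as a negative `c` in scipy.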

We assume (X_n) to be our stationary process, F the marginal distribution of the X_i, and (Y_n) the associated iid process with the same marginal distribution F. The conditions on (X_n) under which the maxima of (X_n) have exactly the same limiting behaviour as the maxima of (Y_n) are as follows:

i) the stationary series (X_n) shows only weak long-range dependence, so that we can assume block maxima are independent;

ii) it shows no inclination to form clusters of large values.

Then the maxima of the two series have identical limiting behaviour.

2 in our case monthly minima will be considered as block maxima

3 a stationary process is one which is time invariant: for any h1 < h2 < … < hn and t > 0 we have (X_{h1}, X_{h2}, …, X_{hn}) = (X_{h1+t}, X_{h2+t}, …, X_{hn+t}) in distribution


Condition (i) is defensible for financial time series, while the anti-clustering condition is not. Since the above theory is not directly applicable, a more detailed analysis of the issue of clustering is required. For this purpose the extremal index of a stationary process is defined.

Definition 7: Extremal index of a stationary process. (The intuition is that the maximum of n observations from a stationary series with extremal index θ behaves like the maximum of nθ observations from the associated iid series.)

Let 0 ≤ θ ≤ 1 and suppose that for every τ > 0 there is a sequence of real numbers (u_n(τ))_{n>0} such that

$$\lim_{n\to\infty} n\,P(X_1 > u_n(\tau)) = \tau \quad \text{and} \quad \lim_{n\to\infty} P(M_n \le u_n(\tau)) = e^{-\theta\tau}.$$

Then we say that the stationary series (X_n)_{n>0} has extremal index θ.

For more details, please see the paper by A J McNeil "Calculating Quantile Risk Measures for Financial Return Series using Extreme Value Theory" The basic result used in the analysis follows:

Theorem 8: Let (M_n) be the maxima of the stationary series (X_i) and (L_n) the maxima of (Y_i) (the associated iid series), and let θ > 0 be the extremal index of the stationary series. Then

$$P\{(L_n - b_n)/a_n \le x\} \to H(x)$$

for a non-degenerate H(x) if and only if

$$P\{(M_n - b_n)/a_n \le x\} \to H^{\theta}(x),$$

with H^θ(x) also non-degenerate.

This is the justification required to fit the GEVD to the block maxima of a stationary time series that shows a tendency of large values to form clusters.

We divide our series into k blocks with n observations in each block. Let (M_n) be the maxima of the blocks. Using the assumption that the maxima are independent, we fit them to a generalized extreme value distribution, and using the maximum likelihood method we estimate the parameters ξ, µ, σ of the GEV, where µ and σ are the location and scale parameters of the extreme value distribution:

$$H_{\xi,\mu,\sigma}(x) = H_\xi\!\left(\frac{x-\mu}{\sigma}\right),$$

where H_ξ(x) is defined in (1.2.6). For more information about the maximum likelihood function used in estimating the parameters, the reader is advised to see the book "Modelling Financial Time Series with S-PLUS", pp. 130-143.

To avoid any confusion, we will use the following notation: for the stationary series, X is the loss, M the maxima and F the marginal distribution; for the associated iid series, Y is the loss, L the maxima and G the marginal distribution.

In the case when the sample is iid, to estimate R_{n,k}, the level we expect to exceed in one block every k blocks (the probability of having a return as low as or lower than R_{n,k} once in k blocks is 1/k), we use the fact that R_{n,k} is just the (1 − 1/k)-quantile of the distribution of the block maxima:

$$P\{L_n \le R_{n,k}\} = 1 - \frac{1}{k}.$$

The probability of obtaining a return as low as or lower than R_{n,k} on a single day is then 1 − (1 − 1/k)^{1/n}. Taking Y to be the random variable representing the losses and G its marginal distribution:

$$P\{L_n \le R_{n,k}\} = G^n(R_{n,k}) = 1 - \frac{1}{k} \quad\Longrightarrow\quad P\{Y > R_{n,k}\} = 1 - \left(1 - \frac{1}{k}\right)^{1/n}.$$


However, our losses are not iid, and in this case we use the extremal index and Theorem 8 to obtain an approximation of the probability of exceeding the return level R_{n,k}, where F is the marginal distribution of our stationary series:

$$P\{M_n \le R_{n,k}\} \approx F^{n\theta}(R_{n,k}) = 1 - \frac{1}{k} \quad\Longrightarrow\quad P\{X > R_{n,k}\} \approx 1 - \left(1 - \frac{1}{k}\right)^{1/(n\theta)}.$$

1.4 Historical Approach

The major advantage of historical simulation is that the method is completely nonparametric and does not depend on any assumptions about the distribution and correlations of the risk factors. The method is robust and intuitive and, as such, forms the basis for the Basle 1993 proposals on market risk and is the most commonly used method in practice. Still, the historical-simulation method can be subject to some criticism: only one path is used, and the assumption is that the past is a good estimate of what may happen in the immediate future.

The reader interested in a detailed description of the method used in calculating the historically simulated P&L is referred to the book "Risk Management" by M. Crouhy, D. Galai and R. Mark, pp. 206-212.

The empirical distribution of the loss is taken as the simulated historical P&L. In all the cases considered, a generalized Pareto distribution is fitted to the tail of the historical P&L and from it the estimates for VaR and expected shortfall are drawn.

Historical VaR(α) is the (1−α)-quantile of the empirical distribution of the profit and loss, taken with a minus sign. For example, when the historical P&L is simulated over a time period of 5 years, VaR(99%) is the 14th worst case, and historical ES(99%) is the average of the 14 losses bigger than or equal to VaR(99%).
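A sketch of this computation (simulated normal P&L stands in for the 5-year scenario set of roughly 1305 business days; the scenario count is an assumption):

```python
import numpy as np

def hist_var_es(pnl, alpha=0.99):
    """Historical VaR and ES from a simulated P&L series (profits positive):
    VaR is minus the empirical (1-alpha)-quantile; ES averages the losses
    at least as bad as VaR."""
    pnl = np.sort(np.asarray(pnl))
    m = int(np.ceil(len(pnl) * (1 - alpha)))   # size of the tail
    tail = pnl[:m]
    return -tail[-1], -tail.mean()

rng = np.random.default_rng(4)
pnl = rng.normal(0, 1, size=1305)
var99, es99 = hist_var_es(pnl)
print(f"VaR(99%) = {var99:.3f}, ES(99%) = {es99:.3f}, tail size = {int(np.ceil(1305 * 0.01))}")
```

With 1305 scenarios, ceil(1305 · 1%) = 14, matching the "14th worst case" convention in the text.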


Chapter 2 Coherence and VaR

In order to demonstrate some of the affirmations made in the previous chapter about coherence, examples showing the advantages of expected shortfall over other risk measures are provided below. This is also necessary in order to better understand the intuition behind it.

2.2 A Practical Example

The following real example is taken from emerging markets, although similar situations can be encountered in developed markets as well. It was noticed in the wake of the recent events in the Russian markets, more specifically the Yukos affair.

4 a number for VaR of 100 means a loss of this magnitude

5 Value-At-Risk at 99% confidence level

6 Tail Conditional Loss at 99% confidence level

7 Expected Shortfall at 99% confidence level


Let us consider the following two portfolios: the first consists of 24 Yukos stocks, and the other contains one stock of Sberbank. The method employed in calculating the P&L is the historical approach, so here we talk about historical VaR and historical ES. The time horizon considered is February 7, 2003 to November 4, 2003 (from February 7, 2003 onwards the RTS, the Russian Stock Exchange index, has been calculated using its current methodology).

The graph below, figure 2.2.1, shows the scatter plot of the returns of Yukos's and Sberbank's share prices. Looking closely at the plot, the return from October 27, 2003 represents the VaR99 for the Sberbank returns and the worst case for the Yukos returns (represented by the square), while the one from October 30, 2003 is the VaR99 for the Yukos returns and the worst case for the Sberbank returns (represented by the circle).

This is an unusual situation, in which the same date represents the date of the worst case for one company and the VaR date for the other, especially considering that they are from different industries: one is an oil giant and the other a commercial bank. But in this case we are dealing with an amalgam of different factors. On October 27, 2003 the CEO of Yukos, Mikhail Khodorkovsky, was arrested, and on October 30, 2003 the Russian government seized a controlling stake in Yukos. These events, besides having a very strong negative impact on Yukos stock prices, sent strong shock waves through the Russian stock market. This is a highly volatile and very poorly diversified market. Of the 59 companies listed on the RTS (as of November 4, 2003), the six largest are all fuel, metallurgy or energy companies, with Yukos carrying the biggest weight, about 21.69% as of November 4, 2003⁸; the seventh company (by market value) is Sberbank with a market weight of 3.68%, all the others having weights of less than 2%. In fact, the correlation coefficient between Yukos and the RTS index over the same time horizon is 0.88, an extremely high value, illustrating the weak diversification of the Russian market.

Figure 2.2.1: scatter plot of Yukos and Sberbank log returns

8 The other five companies with the biggest weights are:

Surgutneftegas, fuel industry (13.69%)

LUKOIL, fuel industry (13.37%)

MMC “Norilsk Nickel”, metallurgy (9.11%)

Sibneft, fuel industry (8.17%)

United Energy System of Russia, electric energy (7.97%)


For the aggregated portfolio, the VaR date (at the 99% confidence level) is October 30, 2003. The risk figures for these portfolios are shown below.

Table 2.2.2: Historical risk measures at the 99% confidence level for Portfolio 1 (24 Yukos shares; VaR date 30.10.2003), Portfolio 2 (1 Sberbank share; VaR date 27.10.2003), the sum of the risks of the individual portfolios, and the risk of the aggregated portfolio (VaR date 30.10.2003)

Even though this example arose out of endogenous conditions, caution should be exercised when using historical VaR to measure risk. The factors that had the most influence, and should be on one's radar, are high volatility and low diversification. These are more likely to occur in emerging markets, especially when they are highly sensitive to events, hence particular attention should be paid when all these features are present.

The main conclusion drawn from the above examples is that expected shortfall shows clear advantages over VaR. Moreover, since the techniques for calculating VaR are already widely used in practice, it would not be difficult to complement VaR with expected shortfall, especially since these examples indicate that VaR does not correctly reflect the diversification effect.

2.3 Exploring the Tail

In this section we apply the method of block maxima to Yukos stock prices. This method is able to answer the question: how rare is the event of obtaining a return as low as −15%, as on October 27, 2003, when the CEO of Yukos, Mikhail Khodorkovsky, was arrested? Before continuing, it may be useful to read the theoretical part of the method of block maxima, summarised in the first chapter, section 1.3. Because the events related to Yukos were the most recent of all those considered in the paper, the choice was made to apply the method of block maxima to estimate how rare the event of obtaining a return as low as the one from October 27, 2003 is for Yukos stock prices.

The data cover the time period from May 4, 2000 to November 28, 2003. The graphs representing the stock prices and the log returns can be seen in appendix B, figures B1 and B2. Before proceeding with the fitting of the GEV, the stationarity of the process needs to be checked. For this purpose the KPSS⁹ test was used, and the Yukos returns passed it¹⁰. For a detailed description of the testing procedure we refer the reader to [5]. The autocorrelation function can also provide a better picture of the dependence; for the Yukos log returns it displays low long-range dependence. In this case the dependence is mostly up to lag 11, so we can assume that monthly maxima are independent.

In order to fit the GEV distribution, the data was split into blocks by month. A total of 43 blocks was obtained. The estimated parameters of the generalized extreme value distribution are ξ = 0.107, σ = 0.022 and µ = 0.035¹¹. For more details the reader is advised to see section 1.3.

Before going further in estimating the extremal index for the Yukos log returns, a small description of the method used is given for the general case. After dividing the data into k blocks of length n, a high threshold u is set, and then, according to Embrechts, the natural asymptotic estimator of θ (the extremal index) is

9 the Kwiatkowski, Phillips, Schmidt and Shin test checks for stationarity of a process; it is based on the KPSS statistic

10 it gave a KPSS test statistic of 0.414, in which case the null hypothesis (stationarity around a constant) is not rejected at the 5% level


$$\hat\theta = \frac{1}{n}\,\frac{\ln\left(1 - K/k\right)}{\ln\left(1 - N/(kn)\right)},$$

where N is the total number of observations exceeding u and K is the number of blocks in which at least one observation exceeds it:

θ̂  0.84  0.77  0.79  0.77  0.70  0.73  0.69  0.72

Table 2.3: Estimates of the extremal index for a range of thresholds
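The blocks estimator is simple to implement; the sketch below applies it to an illustrative AR(1) series with heavy-tailed innovations, chosen (as an assumption, not the thesis's data) because its extremes arrive in clusters:

```python
import numpy as np

def extremal_index_blocks(x, u, n):
    """Blocks estimator: theta_hat = ln(1 - K/k) / (n * ln(1 - N/(k*n))),
    with k blocks of length n, N exceedances of u in total, and K blocks
    containing at least one exceedance."""
    x = np.asarray(x)
    k = len(x) // n
    blocks = x[: k * n].reshape(k, n)
    N = np.sum(blocks > u)
    K = np.sum(blocks.max(axis=1) > u)
    return np.log(1 - K / k) / (n * np.log(1 - N / (k * n)))

# Illustrative clustered series: AR(1) with Student-t innovations.
rng = np.random.default_rng(5)
eps = rng.standard_t(df=4, size=43 * 22)
x = np.empty_like(eps)
x[0] = eps[0]
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + eps[t]

u = np.quantile(x, 0.98)
print(f"theta_hat = {extremal_index_blocks(x, u, n=22):.2f}")
```

An estimate below 1 indicates clustering of extremes; 1/θ̂ is then the average cluster size, as used in the analysis that follows.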

We assume that the extremal index is 0.73, and so the average cluster size is about 1.37

Under the assumption of the returns being iid, a return of −15% or lower is a once-in-5.25-years event¹² (63 months, k = 63); the graph with the monthly minima is in appendix B, figure B3. This means that the 63-month return level corresponds to the 0.99923¹³ quantile of the marginal distribution when the returns are assumed iid, whereas using the extremal index the probability of exceeding the return level is 0.00099¹⁴. Since there are 1371 trading days in 5.25 years, we expect roughly 0.00099 · 1371 ≈ 1.38 actual days on which returns of this magnitude occur.

Result: in five years and three months, one stress month with daily returns as low as or lower than −15% is expected. Within this stress month, we expect the large returns to occur in clusters of an average size of 1.37 days.

Equivalently, in 5.25 years we expect approximately 1.38 actual days on average on which losses of 15% or more occur.
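The footnote arithmetic behind these numbers can be reproduced directly; the sketch below evaluates the return-level quantile with and without the extremal index (small differences from the rounded figures in the text are rounding effects):

```python
k, n, theta = 63, 22, 0.73  # months, trading days per month, extremal index

# Quantile of the marginal distribution matching the 63-month return level
q_iid = (1 - 1 / k) ** (1 / n)             # iid assumption (cf. footnote 13)
q_dep = (1 - 1 / k) ** (1 / (n * theta))   # with clustering (cf. footnote 14)

p_exceed = 1 - q_dep              # probability of exceeding the return level
expected_days = p_exceed * 1371   # trading days in 5.25 years
print(q_iid, p_exceed, expected_days)  # ~0.9993, ~0.000996, ~1.37
```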

Conclusion: As can be seen, the method of block maxima tells us that this is not an extremely unlikely event. As a comparison, under the assumption of normally distributed returns¹⁵, a return lower than −15% is a once-in-40,000-years event¹⁶, which is not at all a reliable number. The normal assumption heavily underestimates the risk, since it ignores the fact that the empirical tail is much fatter than the normal one. The method of block maxima, by contrast, has a sound theoretical background and does not rely on the assumption of independent returns, thus providing a better and more precise tool for estimating event risk. Moreover, since a return as low as −15% was actually observed in the three and a half years of data, the obtained result can be compared with the observed values.

12 The 95% confidence interval of the 63-month return level is [0.11, 0.27], computed using the likelihood ratio method.

13 0.99923 = (1 − 1/k)^(1/n), where n = 22 (trading days in one month) and k = 63.

14 0.00099579 = 1 − 0.99900421 = 1 − (1 − 1/k)^(1/(nθ)).

15 In fact the series fails both the Shapiro-Wilk and Jarque-Bera tests for normality.

16 The standard deviation of the sample is 0.029. The return −15% is 5.2 standard deviations away, and the probability of obtaining such a return or lower, assuming the returns are normally distributed, is 0.000000097.


Chapter 3 Equity Portfolios

This chapter presents the findings obtained by applying different methods of calculating risk measures to equity, together with a comparative analysis showing the advantages of extreme value theory. The chapter is divided into two parts. The first part considers a simple case with a single risk factor, where the positions in the portfolio are long and short stock. The second part extends the analysis to two risk factors, with a portfolio of vanilla options on stocks.

3.1 One Risk Factor

To facilitate a better understanding of the behaviour of the generalized Pareto distribution when applied to equity portfolios, a representative collection covering a wide range of stocks was chosen:

1 SMI and DAX, the European indices with moderate volatility

2 Nasdaq, the American index that is highly diversified but much more volatile than its European counterparts

3 The stocks of UBS, a highly liquid financial institution that managed to maintain its high ratings notwithstanding the many mergers and crises it has gone through

4 The highly volatile stocks of ABB, Disetronic and Yukos, which have undergone a lot of distress in the time period considered

Stock      | Sector    | Major events         | Volatility
UBS        | Financial | Mergers / LTCM crisis | Medium
ABB        | Industry  | Liquidity crisis     | High/cyclical
Disetronic | Med Tech  | Tech bubble          | High
Yukos      | Energy    | Political events     | High

Table 3.1.1: Equity stocks

To interpret the results of the analysis, a short history of the major events with a significant impact on the UBS stock price in the analysed time period needs to be considered. UBS AG became reality on July 1, 1998 after a merger between the Union Bank of Switzerland (Zurich) and the Swiss Bank Corporation (Basel), initially announced on December 8, 1997. In the last decade of the twentieth century, SBC took over foreign finance firms (1995: O'Connor & Associates, Chicago; Brinson


Partners, Inc., Chicago; S.G. Warburg plc, London). UBS AG merged with PaineWebber Inc. on November 3, 2000. The most significant crisis for UBS AG in the time period considered happened on September 25, 1998, when its share price dropped 15% due to the loss of 1.2 billion Swiss francs, mostly as a result of the near collapse of the U.S. hedge fund Long-Term Capital Management.

The portfolio considered is long stock. The numbers are given in relative terms: the loss is a percentage of the value of the portfolio.

In fitting the tail to the generalized Pareto distribution, definition (1.2.5) from chapter 1, a threshold of 0.02 was chosen, mainly by inspecting the mean excess plot, definition (1.2.12) from chapter 1 (see appendix C, figure C3): beyond the threshold 0.02 the mean excess plot looks approximately linear. As a result, out of a total of 3000 data points, 281 returns (9.36%) exceed the threshold. Fitting the exceedances by maximum likelihood yields the following parameters:

ξ = 0.18, with a standard error of 0.074 (42%)

β = 0.012, with a standard error of 0.0011 (9.5%)
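With these parameters, the standard GPD tail formulas for VaR and expected shortfall (see McNeil [2, 3]) can be evaluated directly. The sketch below uses the rounded values quoted above, so the results (about 5.3% and 7.5% at the 99% level) are indicative and land inside the confidence intervals reported in Table 3.1.1.2 rather than exactly on the point estimates:

```python
def gpd_var_es(u, xi, beta, n, n_u, q):
    """GPD tail estimators:
    VaR_q = u + (beta/xi) * (((n/n_u) * (1 - q)) ** (-xi) - 1)
    ES_q  = VaR_q / (1 - xi) + (beta - xi*u) / (1 - xi)   (requires xi < 1)
    """
    var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
    es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var_q, es_q

# Rounded parameters quoted above: threshold 0.02, 281 of 3000 exceedances
var99, es99 = gpd_var_es(u=0.02, xi=0.18, beta=0.012, n=3000, n_u=281, q=0.99)
print(f"VaR 99% ~ {var99:.2%}, ES 99% ~ {es99:.2%}")  # ~5.31% and ~7.49%
```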

Before proceeding further with the GPD, it should be mentioned that VaR was also estimated with other methods: the historical approach, the assumption of normally distributed returns, and a fit of the entire distribution to a Student-t distribution. This class of distributions was chosen because, according to most articles, financial time series are more likely to follow a Student-t distribution than a normal one, being more fat-tailed.

When fitting the distribution to a Student-t, the maximum-likelihood estimate of the degrees of freedom (dof) was 3. The estimates for VaR are shown in the table below. As can be seen, the results obtained from the GPD are closer to the historical estimates than those obtained by fitting the entire distribution to a Student-t or normal distribution.

Table 3.1.1.1: Estimates of VaR using different methods
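As a sketch of the two parametric alternatives just mentioned: the thesis does not report the fitted location and scale of the Student-t, so the example below assumes a zero mean and a variance-matched scale built from the sample standard deviation of 0.018 (footnote 17); both are assumptions for illustration only.

```python
from scipy import stats

sigma = 0.018   # unbiased sample standard deviation (footnote 17)
nu = 3          # ML estimate of the degrees of freedom quoted above
q = 0.99

# Normal VaR: q-quantile of N(0, sigma^2)
var_normal = stats.norm.ppf(q, scale=sigma)

# Student-t VaR with a variance-matched scale (an assumption here):
# Var[t_nu] = nu/(nu-2), so scale = sigma * sqrt((nu-2)/nu)
scale_t = sigma * ((nu - 2) / nu) ** 0.5
var_t = stats.t.ppf(q, df=nu) * scale_t
print(f"normal VaR ~ {var_normal:.2%}, t(3) VaR ~ {var_t:.2%}")
```

The fatter t tail pushes the 99% quantile noticeably above the normal one, in line with the discussion above.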

Further, in order to test the stability of the risk measures obtained this way with respect to outliers, a similar analysis was performed on the same series of log returns, this time excluding the worst case (the 15% drop in the stock price on September 25, 1998 due to the LTCM crisis), and then the three worst cases. The table below (3.1.1.2) shows the shape parameter (ξ) and the risk figures, together with the 95% statistical confidence intervals.

                 ξ      VaR (l bound / estimate / u bound)   ES (l bound / estimate / u bound)   ES/VaR
all data         0.18   4.8% / 5.17% / 5.7%                  6.3% / 7.2% / 9%                    1.41¹⁸
without min      0.13   4.7% / 5% / 5.5%                     6.1% / 6.9% / 8.3%                  1.38
without 3 mins   0.08   4.7% / 4.9% / 5.3%                   5.8% / 6.5% / 7.8%                  1.33

Table 3.1.1.2: Estimates of risk measures using GPD. Portfolio: long stock UBS

As can be seen from the table, the estimate of the ξ parameter declines, a sign that the tail becomes less fat once the most extreme points are excluded. The table also includes the ratio between expected shortfall and value-at-risk, included in order to emphasize the fatness of the tail; as a benchmark we can take the ratio between ES and VaR in the case of the normal distribution,

17 The estimate for the standard deviation used was the unbiased estimator, equal to 0.018.

18 For the normal distribution this ratio is 1.146.


References

[1] C. Acerbi & D. Tasche (2002), "On the Coherence of Expected Shortfall".

[2] A. J. McNeil (1996), "Estimating the Tail of Loss Severity Distributions using Extreme Value Theory".

[3] A. J. McNeil (1998), "Calculating Quantile Risk Measures for Financial Return Series using Extreme Value Theory".

[4] F. Delbaen (2002), "Coherent Risk Measures on General Probability Spaces".

[5] E. Zivot & J. Wang (2003), "Modeling Financial Time Series with S-PLUS", Springer.

[6] P. Embrechts, C. Klüppelberg & T. Mikosch (1997), "Modelling Extremal Events for Insurance and Finance", Springer.

[7] M. Crouhy, D. Galai & R. Mark (2000), "Risk Management", McGraw-Hill.

[8] P. Embrechts, A. J. McNeil & D. Straumann (1999), "Correlation: Pitfalls and Alternatives".
