
Nonparametric estimation of copulas of financial time series






NONPARAMETRIC ESTIMATION OF COPULAS
CAO JIANFEI
NATIONAL UNIVERSITY OF SINGAPORE
2003

NONPARAMETRIC ESTIMATION OF COPULAS
CAO JIANFEI
(B.Sc., Nankai University)
A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF SCIENCE
DEPARTMENT OF STATISTICS AND APPLIED PROBABILITY
NATIONAL UNIVERSITY OF SINGAPORE
2004

Acknowledgements

For the completion of this thesis, I would like to express my heartfelt gratitude to my supervisor, Associate Professor Chen Song Xi. I deeply appreciate his patient guidance, help and encouragement throughout these two years; his careful reading and valuable comments were essential to this thesis. I dedicate the thesis to my dearest girlfriend and my family, who gave me the support and understanding I needed to complete my studies at NUS. I would also like to thank all my friends for their help and support.

Contents

Summary
1 Introduction
  1.1 Problem and Purpose
  1.2 Methodology
  1.3 Main Results and Contents
2 Copulas and Extreme Value Theory
  2.1 Copulas
  2.2 Archimedean Copulas
  2.3 Method of Moment to Estimate θ
  2.4 Extreme Value Theory
3 Kernel Smoothing Method
  3.1 Kernel Estimator
  3.2 Joint Distribution Function Estimation
  3.3 Estimate K_C(t), λ(t) and ϕ(t)
  3.4 Copula Estimation
  3.5 Bandwidth Selection
4 Simulation and Discussion
5 Case Study and Conclusion
  5.1 Case Study
    5.1.1 Case 1: Daily Returns of Dow Jones and Hang Seng
    5.1.2 Case 2: Daily Returns of S&P 500 and NASDAQ
  5.2 Conclusion
Bibliography
Appendix

Summary

Copulas, especially Archimedean copulas, are well suited to modelling non-elliptically distributed multivariate data with various dependence structures, such as multivariate financial return series, and they are very useful tools for integrated risk management. This thesis considers using the kernel smoothing method to estimate bivariate copulas and applies them to financial time series. We present the theoretical inference on how to use the kernel smoothing method to estimate the joint distribution functions, the generator functions and the copula functions. Bandwidth selection is important for the kernel estimators; we use a plug-in method to select optimal global bandwidths. Numerical simulations are presented to verify the theoretical results. The applications concern modelling the daily returns of the S&P 500 composite index and the NASDAQ composite index, and the daily returns of the Dow Jones Industrials index and the Hang Seng index, with the Gumbel family of Archimedean copulas. Extending bivariate Archimedean copulas to the multivariate case is discussed at the end.

List of Tables

2.1 One-parameter families of Archimedean copulas
5.1 τ̂_{X,Y} and θ̂ under the Gumbel and Frank families with different sample sizes and different pairs of data

List of Figures

2.1 An example of a copula C(u, v)
2.2 Contour plots of C(u, v) = Π(u, v), C(u, v) = W(u, v) and C(u, v) = M(u, v)
2.3 An example of a strictly decreasing function and its pseudo-inverse function
4.1 λ̂(t) with the 95% variability bands under the Gumbel family
4.2 Squared bias and variance of λ̂(t) under the Gumbel family
4.3 ϕ̂(t) with the 95% variability bands under the Gumbel family
4.4 Squared bias and variance of ϕ̂(t) under the Gumbel family
4.5 The true value and the parametric estimator of C(u, v) under the Gumbel family, and the kernel estimator of C(u, v)
4.6 The true value of C(u, v) under the Gumbel family, the kernel estimator of C(u, v) and F(x, y)
4.7 One-dimensional plots: the true value of C(u, v) under the Gumbel family and the kernel estimator of C(u, v)
4.8 Squared bias and variance of the one-dimensional kernel estimator of C(u, v)
5.1.1 The kernel estimator of λ(t) with the 95% variability bands
5.1.2 The kernel estimator of ϕ(t) with 95% variability bands
5.1.3 The kernel estimators of C(u, v) and F(x, y)
5.1.4 The one-dimensional estimators of C(u, v): the method of moment estimator of C(u, v) under the Gumbel family and the kernel estimator of C(u, v) with 95% variability bands
5.1.5 The one-dimensional estimators of C(u, v): the method of moment estimator of C(u, v) under the Frank family and the kernel estimator of C(u, v) with 95% variability bands
5.1.6 The one-dimensional estimators of C(u, v): the method of moment estimator of C(u, v) under the Clayton family and the kernel estimator of C(u, v) with 95% variability bands
5.1.7 The method of moment estimator of C(u, v) under the Gumbel family and the kernel estimator of C(u, v)
5.1.8 The method of moment estimator of C(u, v) under the Frank family and the kernel estimator of C(u, v)
5.2.1 The kernel estimator of λ(t) with the 95% variability bands
5.2.2 The kernel estimator of ϕ(t) with 95% variability bands
5.2.3 The kernel estimators of C(u, v) and F(x, y)
5.2.4 The one-dimensional estimators of C(u, v): the method of moment estimator of C(u, v) under the Gumbel family and the kernel estimator of C(u, v) with 95% variability bands
5.2.5 The one-dimensional estimators of C(u, v): the method of moment estimator of C(u, v) under the Frank family and the kernel estimator of C(u, v) with 95% variability bands
5.2.6 The one-dimensional estimators of C(u, v): the method of moment estimator of C(u, v) under the Clayton family and the kernel estimator of C(u, v) with 95% variability bands
5.2.7 The method of moment estimator of C(u, v) under the Gumbel family and the kernel estimator of C(u, v)
5.2.8 The method of moment estimator of C(u, v) under the Frank family and the kernel estimator of C(u, v)
5.2.9 The kernel estimator C(u, v) of Dow Jones and Hang Seng and the kernel estimator C(u, v) of S&P and NASDAQ

[Figure 5.2.8 panels: contour plots for N = 128, 249, 495 and 1000; axes u (S&P) and v (NASDAQ); legend: parametric estimator of the Frank family, nonparametric estimator.]
Figure 5.2.8: The method of moment estimator of C(u, v) under the Frank family and the kernel estimator of C(u, v).
[Figure 5.2.9 panels: contour plots for N = 123, 239, 468 and 948; axes u and v; legend: Independent, NP estimator of C(u, v) from Dow Jones and Hang Seng, NP estimator of C(u, v) from S&P and NASDAQ, Completely positive association.]
Figure 5.2.9: The kernel estimator C(u, v) of Dow Jones and Hang Seng and the kernel estimator C(u, v) of S&P and NASDAQ.

Bibliography

[1] Cai, Z. W., and G. G. Roussas (1998): "Efficient estimation of a distribution function under quadrant dependence", Scandinavian Journal of Statistics, 25, 211-224.
[2] Chavez-Demoulin, V., and P. Embrechts (2001): "Smooth Extremal Models in Finance and Insurance", www.math.ethz.ch/~baltes/ftp/papers.html.
[3] Embrechts, P. (2000): "Extreme Value Theory: Potential and Limitations as an Integrated Risk Management Tool", www.math.ethz.ch/~baltes/ftp/papers.html.
[4] Embrechts, P. (2001): "Extremes in Economics and the Economics of Extremes", www.math.ethz.ch/~baltes/ftp/papers.html.
[5] Embrechts, P. (2003): "Dynamic copula models for multivariate high-frequency data in finance", www.math.ethz.ch/~baltes/ftp/papers.html.
[6] Embrechts, P., A. Höing, and A. Juri (2003): "Using Copulae to bound the Value-at-Risk for functions of dependent risks", Finance & Stochastics, 7(2), 145-167.
[7] Embrechts, P., C. Klüppelberg, and T. Mikosch (1997): Modelling Extremal Events for Insurance and Finance, Springer: Berlin.
[8] Embrechts, P., F. Lindskog, and A. McNeil (2001): "Modelling Dependence with Copulas and Applications to Risk Management", in Handbook of Heavy Tailed Distributions in Finance, ed. by S. T. Rachev, Elsevier/North-Holland (2003), pp. 329-384.
[9] Embrechts, P., A. McNeil, and D. Straumann (1999): "Correlation and Dependence in Risk Management: Properties and Pitfalls", in Risk Management: Value at Risk and Beyond, ed. by M. Dempster and H. K. Moffatt, Cambridge University Press (2001), pp. 176-223.
[10] Embrechts, P., S. I. Resnick, and G. Samorodnitsky (1999): "Extreme Value Theory as a Risk Management Tool", www.math.ethz.ch/~baltes/ftp/papers.html.
[11] Epanechnikov, V. (1969): "Nonparametric estimates of a multivariate probability density", Theory of Probability and its Applications, 14, 153-158.
[12] Fermanian, J. D., D. Radulovic, and M. Wegkamp (2002): "Weak convergence of empirical copula process", http://www.crest.fr/doctravail/document/2002-06.pdf.
[13] Fermanian, J. D., and O. Scaillet (2003): "Nonparametric estimation of copulas for time series", www.hec.unige.ch/professeurs/SCAILLET_Olivier/pages_web/pdfs/copula.pdf.
[14] Genest, C., K. Ghoudi, and L. P. Rivest (1993): "A semiparametric estimation procedure of dependence parameters in multivariate families of distributions", Biometrika, 82, 543-552.
[15] Genest, C., and L. P. Rivest (1993): "Statistical Inference Procedures for Bivariate Archimedean Copulas", Journal of the American Statistical Association, Vol. 88, Issue 423.
[16] Hollander, M., and D. A. Wolfe (1999): Nonparametric Statistical Methods, Wiley: New York.
[17] Joe, H. (1997): Multivariate Models and Dependence Concepts, Chapman & Hall.
[18] Johnson, R. A., and D. W. Wichern (2002): Applied Multivariate Statistical Analysis, fifth edition, Prentice Hall.
[19] Kimberling, C. (1974): "A probabilistic interpretation of complete monotonicity", Aequationes Mathematicae, 10, 152-164.
[20] Longin, F. M. (2000): "From value at risk to stress testing: The extreme value approach", Journal of Banking and Finance, 24, 1097-1130.
[21] Marshall, A., and I. Olkin (1967): "A multivariate exponential distribution", Journal of the American Statistical Association, 62, 30-44.
[22] Nelsen, R. B. (1999): An Introduction to Copulas, Springer: New York.
[23] Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery (1992): Numerical Recipes in C: The Art of Scientific Computing, second edition, Cambridge University Press.
[24] Reiss, R. D., and M. Thomas (2001): Statistical Analysis of Extreme Values: with Application to Insurance, Finance, Hydrology and Other Fields, second edition, Birkhäuser Verlag.
[25] Sheather, S. J., and M. C. Jones (1991): "A reliable data-based bandwidth selection method for kernel density estimation", Journal of the Royal Statistical Society, Series B, 53, 898-916.
[26] Shi, J., and T. Louis (1995): "Inferences on the association parameter in copula models for bivariate survival data", Biometrics, 51, 1384-1399.
[27] Silverman, B. W. (1986): Density Estimation for Statistics and Data Analysis, Chapman & Hall.
[28] Simonoff, J. S. (1996): Smoothing Methods in Statistics, Springer: New York.
[29] Wand, M. P., and M. C. Jones (1995): Kernel Smoothing, Chapman & Hall/CRC.

Appendix

Appendix 1: Proof of Lemma 2.1

From Definition 2.3, since $u, v \in I$, we have $\varphi(u) + \varphi(0) \ge \varphi(0)$, so $\varphi^{[-1]}(\varphi(u) + \varphi(0)) = 0$, which implies that
\[ C(u, 0) = \varphi^{[-1]}(\varphi(u) + \varphi(0)) = 0. \]
Similarly, we can show that $C(0, v) = \varphi^{[-1]}(\varphi(0) + \varphi(v)) = 0$. Since $\varphi(1) = 0$, we have $\varphi(u) + \varphi(1) = \varphi(u)$, and for $u \in I$, $\varphi^{[-1]}(\varphi(u)) = u$, which implies that
\[ C(u, 1) = \varphi^{[-1]}(\varphi(u) + \varphi(1)) = \varphi^{[-1]}(\varphi(u)) = u. \]
Similarly, $C(1, v) = \varphi^{[-1]}(\varphi(1) + \varphi(v)) = \varphi^{[-1]}(\varphi(v)) = v$. So the function $C$ satisfies the first condition of a copula, that is, $C$ is grounded (see Definition 2.1).

Suppose $C$ satisfies (2.4). Select any two points $(u_1, v_1), (u_2, v_2) \in I^2$ with $u_1 \le u_2$ and $v_1 \le v_2$. Since $C(0, v_2) = 0 \le v_1 \le v_2 = C(1, v_2)$, there must be a $t \in I$ such that $C(t, v_2) = v_1$, that is,
\[ \varphi^{[-1]}(\varphi(t) + \varphi(v_2)) = v_1 \;\Rightarrow\; \varphi\big(\varphi^{[-1]}(\varphi(t) + \varphi(v_2))\big) = \varphi(v_1). \]
Since $v_1 \in I$, $\varphi(v_1) \le \varphi(0)$. If $\varphi^{[-1]}(\varphi(t) + \varphi(v_2)) = 0$, we would get $\varphi(v_1) = \varphi(0)$, which is a contradiction. So $\varphi^{[-1]}(\varphi(t) + \varphi(v_2)) = \varphi^{-1}(\varphi(t) + \varphi(v_2))$, and thus
\[ \varphi^{[-1]}(\varphi(t) + \varphi(v_2)) = v_1 \;\Rightarrow\; \varphi(t) + \varphi(v_2) = \varphi(v_1). \]
Then
\[ C(u_2, v_1) - C(u_1, v_1) = \varphi^{[-1]}(\varphi(u_2) + \varphi(v_1)) - \varphi^{[-1]}(\varphi(u_1) + \varphi(v_1)). \]
Because $\varphi(t) + \varphi(v_2) = \varphi(v_1)$, we get
\begin{align*}
C(u_2, v_1) - C(u_1, v_1)
&= \varphi^{[-1]}(\varphi(u_2) + \varphi(v_2) + \varphi(t)) - \varphi^{[-1]}(\varphi(u_1) + \varphi(v_2) + \varphi(t)) \\
&= \varphi^{[-1]}\big(\varphi(\varphi^{[-1]}(\varphi(u_2) + \varphi(v_2))) + \varphi(t)\big) - \varphi^{[-1]}\big(\varphi(\varphi^{[-1]}(\varphi(u_1) + \varphi(v_2))) + \varphi(t)\big) \\
&= C(C(u_2, v_2), t) - C(C(u_1, v_2), t) \\
&\le C(u_2, v_2) - C(u_1, v_2) \qquad \text{(because of (2.4))},
\end{align*}
which gives
\[ C(u_2, v_2) - C(u_1, v_2) - C(u_2, v_1) + C(u_1, v_1) = V_C([u_1, u_2] \times [v_1, v_2]) \ge 0. \]
Thus $C$ is grounded and 2-increasing.

Conversely, suppose $C$ is grounded and 2-increasing. Then
\[ V_C([u_1, u_2] \times [v, 1]) \ge 0 \;\Rightarrow\; C(u_2, 1) - C(u_2, v) - C(u_1, 1) + C(u_1, v) \ge 0 \;\Rightarrow\; C(u_2, v) - C(u_1, v) \le u_2 - u_1. \]
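The construction C(u, v) = ϕ^{[-1]}(ϕ(u) + ϕ(v)) and the 2-increasing property proved above are easy to check numerically. The sketch below is not part of the thesis: it builds the copula from the Gumbel generator ϕ(t) = (−ln t)^θ used in the case studies and verifies that every grid cell has non-negative C-volume; the value θ = 2 and the grid resolution are arbitrary illustrative choices.

```python
import numpy as np

theta = 2.0  # illustrative Gumbel parameter (theta >= 1)

def phi(t):
    # Gumbel generator: phi(t) = (-ln t)^theta
    return (-np.log(t)) ** theta

def phi_inv(s):
    # inverse generator; for a strict generator the pseudo-inverse equals the inverse
    return np.exp(-s ** (1.0 / theta))

def C(u, v):
    # Archimedean construction C(u, v) = phi^{-1}(phi(u) + phi(v))
    return phi_inv(phi(u) + phi(v))

# check the 2-increasing property: V_C([u1,u2] x [v1,v2]) >= 0 on a grid
grid = np.linspace(0.01, 0.99, 50)
U, V = np.meshgrid(grid, grid, indexing="ij")
Cuv = C(U, V)
# C-volume of every grid cell
vol = Cuv[1:, 1:] - Cuv[:-1, 1:] - Cuv[1:, :-1] + Cuv[:-1, :-1]
print("min cell C-volume:", vol.min())  # should be >= 0 up to rounding error
```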
Appendix 2: Proof of Theorem 2.3

Let $t \in I$ and $w = \varphi(t)$, so $w \le \infty$. Divide $I$ into the two intervals $[0, t]$ and $[t, 1]$, and let $\{t = t_0, t_1, \dots, t_n = 1\}$ be a partition of $[t, 1]$ induced by the partition $\{0, \tfrac{w}{n}, \dots, \tfrac{kw}{n}, \dots, w\}$ of $[0, w]$, where $n$ is a fixed positive integer and $k = 0, 1, \dots, n$. Because $\varphi$ is continuous and strictly decreasing, $\varphi : [t, 1] \to [0, w]$ and
\[ t_k = \varphi^{[-1]}\!\left(\frac{(n-k)w}{n}\right), \qquad \varphi(t_j) = \frac{(n-j)w}{n}, \]
so that
\[ C(t_j, t_k) = \varphi^{[-1]}(\varphi(t_j) + \varphi(t_k)) = \varphi^{[-1]}\!\left(\frac{(n-j)w}{n} + \frac{(n-k)w}{n}\right) = \varphi^{[-1]}\!\left(w + \frac{n-j-k}{n}\,w\right). \]
Because $C(u, v) = \varphi^{[-1]}(\varphi(u) + \varphi(v))$ and $\varphi$ is continuous and strictly decreasing, $C(u, v) \le t$ if and only if $\varphi(u) + \varphi(v) \ge \varphi(t)$. Let $A_C(t) = \{(u, v) \in I^2 : C(u, v) \le t\}$, the region on and below the curve $\varphi(u) + \varphi(v) = \varphi(t)$. If $0 \le u \le t$, then $\varphi(u) \ge \varphi(t)$ and $v$ can be any point in $I$; thus $A_C(t)$ contains the region $A_1 = [0, t] \times [0, 1]$. If $t \le u \le 1$, then $\varphi(u) \in [0, w]$. Taking $u = t_k$,
\[ \varphi(u) + \varphi(v) \ge \varphi(t) \;\Rightarrow\; v \le \varphi^{[-1]}(\varphi(t_0) - \varphi(u)) = \varphi^{[-1]}\!\left(w - \frac{(n-k)w}{n}\right) = t_{n-k}, \]
and taking $u = t_{k-1}$, similarly $v \le \varphi^{[-1]}(\varphi(t_0) - \varphi(t_{k-1})) = t_{n-k+1}$. This means that, for $u \in [t_{k-1}, t_k]$, the admissible $v$ lie between $[0, t_{n-k}]$ and $[0, t_{n-k+1}]$, so the part of $A_C(t)$ over this strip is approximated by the rectangle $R_k = [t_{k-1}, t_k] \times [0, t_{n-k+1}]$, $k = 1, \dots, n$. Let $S_n = \bigcup_{k=1}^{n} R_k$; the interiors of the $R_k$ are disjoint. The $C$-measure of the part $A_2$ of $A_C(t)$ with $t \le u \le 1$ is $\lim_{n \to \infty} V_C(S_n)$, because
\begin{align*}
V_C(R_k) &= V_C([t_{k-1}, t_k] \times [0, t_{n-k+1}]) = C(t_k, t_{n-k+1}) - C(t_{k-1}, t_{n-k+1}) - C(t_k, 0) + C(t_{k-1}, 0) \\
&= \varphi^{[-1]}\!\left(w + \frac{n-k-(n-k+1)}{n}\,w\right) - \varphi^{[-1]}\!\left(w + \frac{n-(k-1)-(n-k+1)}{n}\,w\right)
= \varphi^{[-1]}\!\left(w - \frac{w}{n}\right) - \varphi^{[-1]}(w),
\end{align*}
so that
\[ V_C(S_n) = \sum_{k=1}^{n} V_C(R_k) = -w\,\frac{\varphi^{[-1]}(w - \tfrac{w}{n}) - \varphi^{[-1]}(w)}{-\tfrac{w}{n}}
\;\Rightarrow\; \lim_{n \to \infty} V_C(S_n) = -w\,\frac{\partial \varphi^{[-1]}(w)}{\partial w} = \frac{-w}{\varphi'(\varphi^{[-1]}(w))} = \frac{-\varphi(t)}{\varphi'(t)}. \]
Then
\[ K_C(t) = V_C\{(u, v) \in I^2 : C(u, v) \le t\} = V_C(A_1) + V_C(A_2) = V_C([0, t] \times [0, 1]) + \lim_{n \to \infty} V_C(S_n) = t - \frac{\varphi(t)}{\varphi'(t)}. \]

Appendix 3: Derivations of (3.2), (3.3) and (3.4)

With $\hat F(x) = \frac{1}{n}\sum_{i=1}^{n} G\!\left(\frac{x - X_i}{h}\right)$,
\[ E\{\hat F(x)\} = E\left\{G\!\left(\frac{x - X_1}{h}\right)\right\} = \int_{-\infty}^{\infty} G\!\left(\frac{x-y}{h}\right) f(y)\,dy = \int G(u) f(x - hu)\,h\,du = \int K(u) F(x - hu)\,du, \]
the last step by integration by parts. Because $\int K(u)u\,du = 0$ and, assuming $f'$ is bounded near $x$,
\[ E\{\hat F(x)\} = \int K(u)\left\{F(x) - h u f(x) + \tfrac{1}{2}h^2 u^2 f'(x) + o(h^2)\right\} du = F(x) + \tfrac{1}{2}h^2 \sigma_K^2 f'(x) + o(h^2), \]
where $\sigma_K^2 = \int K(u) u^2\,du$. Hence
\[ \mathrm{Bias}\{\hat F(x)\} = E\{\hat F(x)\} - F(x) = \tfrac{1}{2}h^2 \sigma_K^2 f'(x) + o(h^2). \]
For the variance,
\[ \mathrm{Var}\{\hat F(x)\} = \frac{1}{n}\,\mathrm{Var}\left\{G\!\left(\frac{x - X_1}{h}\right)\right\} = \frac{1}{n} E\left\{G^2\!\left(\frac{x - X_1}{h}\right)\right\} - \frac{1}{n} E^2\left\{G\!\left(\frac{x - X_1}{h}\right)\right\}. \tag{A.1} \]
Integrating by parts and using $G(\infty) = 1$, $G(-\infty) = 0$, $F(\infty) = 1$, $F(-\infty) = 0$ (so that the boundary term vanishes), and then $\int G(u)K(u)\,du = \int G\,dG = \tfrac{1}{2}$,
\[ E\left\{G^2\!\left(\frac{x - X_1}{h}\right)\right\} = 2\int G(u) K(u) F(x - hu)\,du = 2\int G(u)K(u)\{F(x) - h u f(x) + O(h^2)\}\,du = F(x) - 2 h f(x) b_K + o(h), \tag{A.2} \]
where $b_K = \int G(u) K(u) u\,du$. Substituting (A.2) and the expansion of $E\{\hat F(x)\}$ into (A.1), we have
\[ \mathrm{Var}\{\hat F(x)\} = \frac{1}{n}\{F(x) - 2 h f(x) b_K + o(h)\} - \frac{1}{n}\left\{F(x) + \tfrac{1}{2}h^2\sigma_K^2 f'(x) + o(h^2)\right\}^2 = \frac{1}{n}\left[F(x) - F^2(x)\right] - \frac{2h}{n} f(x) b_K + o(n^{-1}h). \]
Therefore
\[ \mathrm{MSE}\{\hat F(x)\} = \mathrm{Bias}^2\{\hat F(x)\} + \mathrm{Var}\{\hat F(x)\} = \frac{1}{n}\left[F(x) - F^2(x)\right] - \frac{2h}{n} f(x) b_K + \frac{h^4}{4}\sigma_K^4 \left(f'(x)\right)^2 + o(h^4) + o(n^{-1}h), \]
\[ \mathrm{MISE}\{\hat F(x)\} = \int \mathrm{MSE}\{\hat F(x)\}\,dx = \frac{1}{n}\int \left[F(x) - F^2(x)\right] dx - \frac{2h}{n}\, b_K + \frac{h^4}{4}\, R(f')\,\sigma_K^4 + o(h^4) + o(n^{-1}h), \]
where $R(f') = \int (f'(x))^2\,dx$. Setting $\partial\,\mathrm{AMISE}/\partial h = 0$ gives the optimal global bandwidth
\[ h^{*} = \left(\frac{2 b_K}{R(f')\,\sigma_K^4}\right)^{1/3} n^{-1/3}. \]
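The optimal bandwidth above depends on the unknown R(f'). A simple way to make it operational, not necessarily the plug-in rule of Section 3.5 (which is not reproduced in this extract), is a normal-reference version: for the Gaussian kernel, b_K = 1/(2√π) and σ_K² = 1, and under an N(0, σ²) reference density R(f') = 1/(4√π σ³), so h* = 4^{1/3} σ n^{−1/3} ≈ 1.59 σ n^{−1/3}. The sketch below computes this bandwidth and the smoothed estimator F̂(x); the Gaussian reference and the use of the sample standard deviation as the scale estimate are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

def plug_in_bandwidth(x):
    """Normal-reference bandwidth for the kernel distribution estimator:
    h* = (2 b_K / (R(f') sigma_K^4))^(1/3) n^(-1/3), with a Gaussian kernel."""
    n = len(x)
    sigma = np.std(x, ddof=1)            # scale estimate (assumed Gaussian reference)
    b_K = 1.0 / (2.0 * np.sqrt(np.pi))   # int G(u) K(u) u du for the Gaussian kernel
    sigma_K4 = 1.0                       # sigma_K^2 = 1 for the Gaussian kernel
    R_fprime = 1.0 / (4.0 * np.sqrt(np.pi) * sigma**3)
    return (2.0 * b_K / (R_fprime * sigma_K4)) ** (1.0 / 3.0) * n ** (-1.0 / 3.0)

def F_hat(t, x, h):
    """Kernel-smoothed distribution estimator: (1/n) sum_i G((t - X_i)/h)."""
    t = np.atleast_1d(t)
    return norm.cdf((t[:, None] - x[None, :]) / h).mean(axis=1)

# usage sketch on simulated heavy-tailed data (stand-in for daily returns)
rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=500)
h = plug_in_bandwidth(x)                 # roughly 1.59 * sigma * n^(-1/3)
print(h, F_hat(np.array([0.0, 1.0]), x, h))
```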
Appendix 4: Derivations of (3.7), (3.8), (3.9) and (3.10)

The following gives the details of the bias and variance of $\hat F(x, y)$ and the optimal global bandwidths $h_x^{*}$ and $h_y^{*}$, where $\hat F(x, y) = \frac{1}{n}\sum_{i=1}^{n} G\!\left(\frac{x - X_i}{h_x}\right) G\!\left(\frac{y - Y_i}{h_y}\right)$. First,
\[ E\{\hat F(x, y)\} = E\left\{G\!\left(\frac{x - X_1}{h_x}\right) G\!\left(\frac{y - Y_1}{h_y}\right)\right\} = \int\!\!\int G\!\left(\frac{x-s}{h_x}\right) G\!\left(\frac{y-t}{h_y}\right) f(s, t)\,ds\,dt = \int\!\!\int F(s, t)\, \frac{1}{h_x h_y} K\!\left(\frac{x-s}{h_x}\right) K\!\left(\frac{y-t}{h_y}\right) ds\,dt, \]
by integration by parts in $s$ and $t$ (the boundary terms over $s, t \in [-\infty, \infty]$ vanish). Let $u = (x-s)/h_x$, $v = (y-t)/h_y$ and $h = \max(h_x, h_y)$. Note that $\int K(u)u\,du = 0$, $\int\!\int K(u)K(v)uv\,du\,dv = 0$, $\int\!\int K(u)K(v)\,du\,dv = 1$ and $\int u^2 K(u)\,du = \sigma_K^2$. By the Taylor expansion
\[ F(x - h_x u, y - h_y v) = F(x, y) - \frac{\partial F}{\partial x} h_x u - \frac{\partial F}{\partial y} h_y v + \frac{1}{2}\left\{\frac{\partial^2 F}{\partial x^2} h_x^2 u^2 + 2\frac{\partial^2 F}{\partial x \partial y} h_x h_y u v + \frac{\partial^2 F}{\partial y^2} h_y^2 v^2\right\} + o(h^2), \]
we obtain
\[ E\{\hat F(x, y)\} = \int\!\!\int F(x - h_x u, y - h_y v) K(u) K(v)\,du\,dv = F(x, y) + \frac{1}{2}\sigma_K^2\left\{h_x^2 \frac{\partial^2 F}{\partial x^2} + h_y^2 \frac{\partial^2 F}{\partial y^2}\right\} + o(h^2), \]
and hence
\[ \mathrm{Bias}\{\hat F(x, y)\} = \frac{1}{2}\sigma_K^2\left\{h_x^2 \frac{\partial^2 F}{\partial x^2} + h_y^2 \frac{\partial^2 F}{\partial y^2}\right\} + o(h^2). \]
Let $b_K = \int G(u) K(u) u\,du$. For the variance,
\[ \mathrm{Var}\{\hat F(x, y)\} = \frac{1}{n} E\left\{G^2\!\left(\frac{x - X_1}{h_x}\right) G^2\!\left(\frac{y - Y_1}{h_y}\right)\right\} - \frac{1}{n} E^2\left\{G\!\left(\frac{x - X_1}{h_x}\right) G\!\left(\frac{y - Y_1}{h_y}\right)\right\}, \]
and, again integrating by parts in both variables (boundary terms vanish),
\[ E\left\{G^2\!\left(\tfrac{x - X_1}{h_x}\right) G^2\!\left(\tfrac{y - Y_1}{h_y}\right)\right\} = 4\int\!\!\int G(u)K(u)G(v)K(v)\, F(x - h_x u, y - h_y v)\,du\,dv = F(x, y) - 2 h_x \frac{\partial F}{\partial x} b_K - 2 h_y \frac{\partial F}{\partial y} b_K + o(h), \]
so that
\[ \mathrm{Var}\{\hat F(x, y)\} = \frac{1}{n} F(x, y)\{1 - F(x, y)\} - \frac{2 h_x}{n} \frac{\partial F}{\partial x} b_K - \frac{2 h_y}{n} \frac{\partial F}{\partial y} b_K + o(n^{-1} h). \]
Therefore
\[ \mathrm{MSE}\{\hat F(x, y)\} = \frac{F(x, y)\{1 - F(x, y)\}}{n} - \frac{2 h_x}{n}\frac{\partial F}{\partial x} b_K - \frac{2 h_y}{n}\frac{\partial F}{\partial y} b_K + \frac{h_x^4}{4}\sigma_K^4\left(\frac{\partial^2 F}{\partial x^2}\right)^{2} + \frac{h_y^4}{4}\sigma_K^4\left(\frac{\partial^2 F}{\partial y^2}\right)^{2} + \frac{h_x^2 h_y^2}{2}\sigma_K^4 \frac{\partial^2 F}{\partial x^2}\frac{\partial^2 F}{\partial y^2} + o(h^4) + o(n^{-1} h), \]
and, integrating over $(x, y)$,
\[ \mathrm{MISE}\{\hat F(x, y)\} = \frac{1}{n}\int\!\!\int F(1 - F)\,dx\,dy - \frac{a_1}{n} h_x - \frac{b_1}{n} h_y + \frac{a_2}{4} h_x^4 + \frac{b_2}{4} h_y^4 + \frac{c}{2} h_x^2 h_y^2 + o(h^4) + o(n^{-1} h), \]
where
\[ a_1 = 2 b_K \int\!\!\int \frac{\partial F}{\partial x}\,dx\,dy, \quad b_1 = 2 b_K \int\!\!\int \frac{\partial F}{\partial y}\,dx\,dy, \quad a_2 = \sigma_K^4 \int\!\!\int \left(\frac{\partial^2 F}{\partial x^2}\right)^{2} dx\,dy, \quad b_2 = \sigma_K^4 \int\!\!\int \left(\frac{\partial^2 F}{\partial y^2}\right)^{2} dx\,dy, \quad c = \sigma_K^4 \int\!\!\int \frac{\partial^2 F}{\partial x^2}\frac{\partial^2 F}{\partial y^2}\,dx\,dy. \]
Differentiating the asymptotic MISE,
\[ \frac{\partial\,\mathrm{AMISE}}{\partial h_x} = -\frac{a_1}{n} + a_2 h_x^3 + c\, h_x h_y^2 = 0, \tag{A.3} \]
\[ \frac{\partial\,\mathrm{AMISE}}{\partial h_y} = -\frac{b_1}{n} + b_2 h_y^3 + c\, h_x^2 h_y = 0. \tag{A.4} \]
Taking (A.3) $\times\, b_1$ minus (A.4) $\times\, a_1$ gives
\[ a_2 b_1 h_x^3 - a_1 c\, h_x^2 h_y + b_1 c\, h_x h_y^2 - a_1 b_2 h_y^3 = 0. \tag{A.5} \]
From (A.5), without loss of generality, we may write the relationship between $h_x$ and $h_y$ as
\[ h_x = c_1 h_y, \tag{A.6} \]
where $c_1$ is a positive constant. Plugging (A.6) into (A.3) and (A.4), we obtain
\[ \frac{a_1}{n} = \left(a_2 + \frac{c}{c_1^2}\right) h_x^3 \;\Rightarrow\; h_x^{*} = \left(\frac{a_1}{a_2 + c/c_1^2}\right)^{1/3} n^{-1/3} = O(n^{-1/3}); \qquad \frac{b_1}{n} = \left(b_2 + c\, c_1^2\right) h_y^3 \;\Rightarrow\; h_y^{*} = \left(\frac{b_1}{b_2 + c\, c_1^2}\right)^{1/3} n^{-1/3} = O(n^{-1/3}). \]
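A direct implementation of F̂(x, y) with bandwidths of the derived O(n^{−1/3}) order might look like the following sketch. The Gaussian kernel and the simple scale-based bandwidth constants are illustrative assumptions; they are not the constants a1, b1, a2, b2, c that the optimal formulas above would require.

```python
import numpy as np
from scipy.stats import norm

def bivariate_kernel_cdf(x, y, X, Y, hx=None, hy=None):
    """F_hat(x, y) = (1/n) * sum_i G((x - X_i)/hx) * G((y - Y_i)/hy),
    with G the integral of the Gaussian kernel (standard normal CDF)."""
    n = len(X)
    # illustrative bandwidths of the derived order O(n^(-1/3))
    if hx is None:
        hx = np.std(X, ddof=1) * n ** (-1.0 / 3.0)
    if hy is None:
        hy = np.std(Y, ddof=1) * n ** (-1.0 / 3.0)
    return np.mean(norm.cdf((x - X) / hx) * norm.cdf((y - Y) / hy))

# usage sketch: estimate the joint distribution of two simulated return series
rng = np.random.default_rng(1)
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=1000)
X, Y = Z[:, 0], Z[:, 1]
print(bivariate_kernel_cdf(0.0, 0.0, X, Y))  # roughly P(X <= 0, Y <= 0)
```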
…and Archimedean copulas. We also discuss the method of moment estimation of the parameters of Archimedean copulas and briefly describe what EVT is. In Chapter 3, we give the details of the kernel smoothing method for estimating bivariate copulas, which includes estimating the marginal distribution functions, the joint distribution function, K_C(t), λ(t), ϕ(t) and the copula functions. We discuss the issue of band-…

…smoothing estimation. In Chapter 4, we display the simulations of the kernel smoothing estimation for the Gumbel family of Archimedean copulas and explain the results. In Chapter 5, we apply the kernel smoothing method to estimate the copula functions of financial series, including the daily returns of the S&P 500 composite index, the NASDAQ composite index, the Dow Jones … return of the Hang Seng index. Then we give the conclusions and discuss how to extend bivariate Archimedean copulas to multivariate cases. Most of the proofs are given in the Appendix.

Chapter 2: Copulas and Extreme Value Theory. This chapter is divided into three parts. In the first part, we describe the definition and properties of copulas; Sklar's theorem is introduced, which is the most important theorem for copulas. … properties of Archimedean copulas. The general expressions of Archimedean copulas and their generator functions are discussed. At the end, we give an algorithm for generating a data set from a given Archimedean copula. In the third part, we give a brief description of extreme value theory. Throughout the thesis, we use Dom F to denote the domain of a function F and Ran F for the range of F. 2.1 Copulas: An …

…chapters. As mentioned above, Archimedean copulas are fit for non-elliptically distributed data with different dependence structures. Common Archimedean copulas have closed forms, and many parametric families of copulas are Archimedean copulas (Nelsen, 1999; Embrechts, Lindskog and McNeil, 2001). The disadvantage of Archimedean copulas is that there are some difficult…

…Marshall-Olkin copulas, elliptical copulas and Archimedean copulas. Marshall-Olkin copulas focus on survival data (Marshall and Olkin, 1967; Nelsen, 1999). Elliptical copulas are the copulas of elliptically distributed random vectors, such as Gaussian copulas and t-copulas. The marginal distributions of elliptical copulas are elliptical and of the same type; for example, the marginal distributions of Gaussian copulas…

…three one-parameter families of Archimedean copulas are all strict Archimedean copulas. The following are some properties of Archimedean copulas. Properties of Archimedean copulas: let C be an Archimedean copula with generator function ϕ. Then (1) C is symmetric, i.e. C(u, v) = C(v, u) for all u, v ∈ I; and (2) C is associative, i.e. C(C(u, v), w) = C(u, C(v, w)) for all u, v, w ∈ I. Proof: C(u, v) …

…formulate the method of moment estimator of θ. For example, the generator function of the Gumbel family is ϕ(t) = (−ln t)^θ, so that
\[ \lambda(t) = \frac{\varphi(t)}{\varphi'(t)} = \frac{t \ln t}{\theta}, \qquad \tau_{X,Y} = 1 + 4\int_0^1 \lambda(t)\,dt = 1 + \frac{4}{\theta}\int_0^1 t \ln t\,dt = \frac{\theta - 1}{\theta} \;\Rightarrow\; \theta = \frac{1}{1 - \tau_{X,Y}}. \tag{2.14} \]
Hence, the method of moment estimator of θ is
\[ \hat\theta = \frac{1}{1 - \hat\tau_{X,Y}}, \tag{2.15} \]
where $\hat\tau_{X,Y}$ is the empirical estimator of Kendall's tau given in (2.12).
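Equation (2.15) is straightforward to compute from data. The sketch below estimates Kendall's tau for a bivariate sample and inverts (2.14) for the Gumbel family; it uses scipy.stats.kendalltau as the empirical estimator, which may differ slightly from the estimator in (2.12) (not reproduced in this extract), for example in how ties are handled.

```python
import numpy as np
from scipy.stats import kendalltau

def gumbel_theta_from_tau(x, y):
    """Method of moment estimator for the Gumbel family:
    tau = (theta - 1)/theta  =>  theta_hat = 1/(1 - tau_hat)."""
    tau_hat, _ = kendalltau(x, y)
    if tau_hat <= 0:
        raise ValueError("Gumbel family requires positive dependence (tau > 0).")
    return 1.0 / (1.0 - tau_hat)

# usage sketch on simulated positively dependent data
rng = np.random.default_rng(2)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=500)
print(gumbel_theta_from_tau(z[:, 0], z[:, 1]))  # theta_hat >= 1 under positive dependence
```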
…structure. For multivariate non-elliptical financial data, copulas are good tools for risk assessment and dependence evaluation. For example, we can use copulas to bound the VaR of dependent risks (Embrechts, Höing and Juri, 2003). The purpose of this thesis is to apply Archimedean copulas and the kernel smoothing method to estimate the distribution functions of multivariate, non-elliptically distributed data without …
