Université Paris 13 Nord — Institut Galilée
Laboratoire Analyse, Géométrie et Applications, UMR 7539

THÈSE — thesis submitted to obtain the degree of Doctor of Université Paris 13, Discipline: Mathematics, presented and publicly defended by Ngoc Khue Tran on 18 September 2014.

Title: LAN property for jump-diffusion processes with discrete observations via Malliavin calculus

Thesis advisor: Eulalia Nualart
Thesis co-advisor: Arturo Kohatsu-Higa

Jury
President: Jean-Stéphane Dhersin, Université Paris 13-Nord.
Referees: Emmanuel Gobet, École Polytechnique; Jean Jacod, Université Pierre et Marie Curie (Paris 6).
Examiners: Emmanuelle Clément, Université Paris-Est Marne-la-Vallée; Yueyun Hu, Université Paris 13-Nord; Arturo Kohatsu-Higa, Ritsumeikan University; Eva Löcherbach, Université de Cergy-Pontoise; Eulalia Nualart, Universitat Pompeu Fabra and Barcelona GSE; Anthony Réveillac, INSA de Toulouse.

Typeset with the thloria class.

Acknowledgements

First of all, I wish to thank my thesis advisor, Eulalia Nualart, immensely for having accepted me as a student despite my very weak knowledge of the subject at that time, and for having trusted me throughout these four years of doctoral work. She guided my first steps in the Malliavin calculus as well as in the world of research. I am extremely grateful to her for her kindness, her patience, her perseverance, and all the energy she invested in supervising this thesis. I must also thank her for having funded several long stays in Barcelona, where a large part of this thesis was carried out. It is thanks to Eulalia that I had the opportunity to meet and work with my thesis co-advisor, Arturo Kohatsu-Higa. It is with deep gratitude that I thank him for his generosity, his great availability and his scientific rigour. I greatly appreciated both his scientific expertise and his pedagogical talent. Part of this thesis was carried out at his institution in Japan, thanks to two stays funded by his research project. I would like to thank my two advisors once more for having devoted their time to helping me write this text. Their wise advice, their precious support, and their encouragement in the difficult and sometimes discouraging moments allowed me to complete this work. It is a very great honour for me to be their student, and a privilege I shall never forget.

I particularly wish to thank my referees Jean Jacod, Emmanuel Gobet and Nakahiro Yoshida for their many constructive remarks and suggestions on the previous version of this manuscript, which allowed me to improve its quality considerably. I am very grateful to them. I warmly thank Jean-Stéphane Dhersin, who does me the honour of chairing this thesis jury, and Yueyun Hu, Eva Löcherbach, Emmanuelle Clément and Anthony Réveillac, who do me the honour of taking part in the jury.

I could not have pursued my graduate studies in Paris, and this thesis could not have been carried out, without the funding of the Vietnamese government. I take this opportunity to thank the staff of "Projet 322", the CROUS de Créteil and Campus France for having always accompanied me through the many administrative procedures. I also thank Lionel Schwartz for the additional funding through the LIA CNRS Formath Vietnam programme and the ARCUS MAE/IDF Vietnam project, as well as for a trip to Vietnam.
I am grateful to the members of the Faculty of Fundamental Sciences of Pham Van Dong University for their precious support. Throughout these four years I have felt indebted to the Laboratoire d'Analyse, Géométrie et Applications, where I greatly appreciated the warm atmosphere and the excellent scientific environment. My thanks go to all the members of LAGA, and particularly to the Probability and Statistics team which welcomed me. A huge thank you to Isabelle, Yolande, Jean-Philippe and Philippe for the administrative procedures as well as their kindness. I thank the Probability and Statistics team and the doctoral school of the Institut Galilée for giving me several wonderful opportunities to take part in the École d'Été de Probabilités de Saint-Flour, the CIMPA school in Tunisia, and other conferences. I am also indebted to Maria Rosa Laorden i Dels, Yui Makino and Dai Taguchi for their great help in organising my trips to Barcelona and Japan. I have a very special thought for Josefina Dexeus, who welcomed me into her home and prepared delicious dinners during my first and last stays in Barcelona. I do not forget to thank all my Vietnamese friends in France, whose list is very long: Vu, The Cuong, Tran Cuong, Phong, Quang, Diep, Thi, Tien, Phuong, Nguyen, Nga, Ng, Chien, Hoan, Duong, Vi, Mai, Dung, ..., with whom I had the pleasure of sharing very good moments in everyday life during these years. I think particularly of Quoc Hung, Anh Tuan and Van Tuan for their great help from my very first days in Paris. Finally, I cannot finish this page without thanking, from the bottom of my heart, my parents, my brother and my sister, who have constantly supported and encouraged me throughout my journey. I dedicate this thesis to them.

Résumé

Dans cette thèse nous appliquons le calcul de Malliavin afin d'obtenir la propriété de normalité asymptotique locale (LAN) à partir d'observations discrètes de certains processus de diffusion uniformément elliptiques avec sauts. Dans le Chapitre 2 nous révisons la preuve de la propriété de normalité mixte asymptotique locale (LAMN) pour des processus de diffusion avec sauts à partir d'observations continues, et comme conséquence nous obtenons la propriété LAN en supposant l'ergodicité du processus. Dans le Chapitre 3 nous établissons la propriété LAN pour un processus de Lévy simple dont les paramètres de dérive et de diffusion ainsi que l'intensité sont inconnus. Dans le Chapitre 4, à l'aide du calcul de Malliavin et des estimées de densité de transition, nous démontrons que la propriété LAN est vérifiée pour un processus de diffusion à sauts dont le coefficient de dérive dépend d'un paramètre inconnu. Finalement, dans la même direction, nous obtenons dans le Chapitre 5 la propriété LAN pour un processus de diffusion à sauts où les deux paramètres inconnus interviennent dans les coefficients de dérive et de diffusion.

Mots-clés : calcul de Malliavin ; estimateur asymptotiquement efficace ; estimation paramétrique ; normalité asymptotique locale ; normalité mixte asymptotique locale ; processus de Lévy ; processus de diffusion avec sauts.

Abstract

In this thesis we apply the Malliavin calculus in order to obtain the local asymptotic normality (LAN) property from discrete observations for certain uniformly elliptic diffusion processes with jumps. In Chapter 2 we review the proof of the local asymptotic mixed normality (LAMN) property for diffusion processes with jumps from continuous observations and, as a consequence, we derive the LAN property when supposing the ergodicity of the process.
In Chapter 3 we establish the LAN property for a simple Lévy process whose drift and diffusion parameters as well as its intensity are unknown. In Chapter 4, using techniques of the Malliavin calculus and estimates of the transition density, we prove that the LAN property is satisfied for a jump-diffusion process whose drift coefficient depends on an unknown parameter. Finally, in the same direction, we obtain in Chapter 5 the LAN property for a jump-diffusion process where two unknown parameters determine the drift and diffusion coefficients.

Keywords: Malliavin calculus; asymptotically efficient estimator; parametric estimation; local asymptotic normality; local asymptotic mixed normality; Lévy process; diffusion process with jumps.

Table des matières

1 Introduction
  1.1 Some basics on parametric statistical inference
  1.2 Motivation and model setting
  1.3 Goal of the thesis and its main results
    1.3.1 LAMN property for continuous observations of jump-diffusion processes
    1.3.2 LAN property for a simple Lévy process
    1.3.3 LAN property for a jump-diffusion process: drift parameter
    1.3.4 LAN property for a jump-diffusion process: drift and diffusion parameters
  1.4 Main techniques
    1.4.1 Malliavin calculus approach
    1.4.2 Large deviation principle and Girsanov's theorem
    1.4.3 Central limit theorem for triangular arrays
2 LAMN property for continuous observations of diffusion processes with jumps
  2.1 Introduction
  2.2 LAMN property for jump-diffusion processes
  2.3 Girsanov's theorem
  2.4 LAN property for ergodic diffusion processes with jumps
  2.5 Examples
    2.5.1 Example
    2.5.2 Example
3 LAN property for a simple Lévy process
  3.1 Introduction and main result
  3.2 Preliminaries
  3.3 Proof of Theorem 3.1.1
    3.3.1 Expansion of the log-likelihood ratio
    3.3.2 Negligible contributions
    3.3.3 Main contribution: LAN property
  3.4 Conclusion and final comments
  3.5 Maximum likelihood estimator
4 LAN property for a jump-diffusion process: drift parameter
  4.1 Introduction and main result
  4.2 Preliminaries
  4.3 Proof of Theorem 4.1.1
    4.3.1 Expansion of the log-likelihood ratio
    4.3.2 Negligible contributions
    4.3.3 Main contributions: LAN property
  4.4 Maximum likelihood estimator for Ornstein-Uhlenbeck process with jumps
5 LAN property for a jump-diffusion process: drift and diffusion parameters
  5.1 Introduction and main result
  5.2 Preliminaries
  5.3 Proof of Theorem 5.1.1
    5.3.1 Expansion of the log-likelihood ratio
    5.3.2 Negligible contributions
    5.3.3 Main contributions: LAN property
Bibliographie

Chapitre 1 — Introduction

This thesis is concerned with the application of the Malliavin calculus to the study of the local asymptotic normality (LAN) property from discrete observations for a class of uniformly elliptic diffusion processes with jumps. We also go over the proof of the local asymptotic mixed normality (LAMN) property from continuous observations for jump-diffusion processes. The importance of such a property in parametric estimation is characterized by the convolution theorem, which allows us to define asymptotically efficient estimators, and by the minimax theorem, which gives the lower bound for the asymptotic variance of estimators. The aim of this introduction is to recall several concepts of asymptotic statistical inference, to provide some motivation for the study of this property, to give the main results contained in the thesis, and to explain in detail the main techniques used to obtain these results.
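For orientation, in its simplest one-dimensional form with local rate 1/√n, the LAN property at a point θ_0 can be written as the following generic expansion of the log-likelihood ratio. This display is only a schematic sketch with placeholder notation u, Δ_n(θ_0), I(θ_0); the precise rates and information matrices appropriate to each model of this thesis are stated chapter by chapter.

\[
  \log \frac{dP^n_{\theta_0 + u/\sqrt{n}}}{dP^n_{\theta_0}}(X^n)
  = u\,\Delta_n(\theta_0) - \frac{u^2}{2}\, I(\theta_0) + o_{P^n_{\theta_0}}(1),
  \qquad
  \Delta_n(\theta_0) \xrightarrow{\ \mathcal{L}(P^n_{\theta_0})\ } \mathcal{N}\bigl(0, I(\theta_0)\bigr).
\]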
1.1 Some basics on parametric statistical inference

Consider an R^n-valued random vector X^n = (X_1, ..., X_n) defined on a probability space (Ω, F, P), whose structure will be described later, such that the probability law of X^n depends on a parameter θ = (θ_1, ..., θ_k) ∈ Θ, where the parameter space Θ is an open subset of R^k. We are interested in the case where X^n corresponds to the discrete observations of a stochastic process X = (X_t)_{t≥0} defined on (Ω, F, P) and adapted to a filtration (F_t)_{t≥0}, which satisfies a stochastic differential equation with jumps having a Brownian component. We then denote by P_θ the probability law of X on the Skorokhod space (D(R_+, R), B(R_+, R)), where D(R_+, R) denotes the space of càdlàg functions from R_+ to R and B(R_+, R) its associated Borel σ-algebra, and by E_θ and E the expectation with respect to P_θ and P, respectively. Convergence in P_θ-probability and in P_θ-law are denoted as usual, and similarly for convergence in P-probability and in P-law.

Let X_n be the sample space containing all the possible values of X^n, and B(X_n) the Borel σ-algebra of observable events. Let (P^n_θ)_{θ∈Θ} be the family of probability laws defined on (X_n, B(X_n)) and induced by X^n : Ω → X_n ⊂ R^n. The triplet (X_n, B(X_n), (P^n_θ)_{θ∈Θ}) is called a parametric statistical model, which we denote by (P^n_θ)_{θ∈Θ}. The parametric statistical model (D(R_+, R), B(R_+, R), (P_θ)_{θ∈Θ}) is defined similarly, which we denote by (P_θ)_{θ∈Θ}. We denote by E^n_θ the expectation with respect to P^n_θ.

Our objective is to estimate the parameter θ on the basis of the observations X^n. For this, let us introduce the following concepts. A statistic is any measurable function T : X_n → R^m which does not depend on θ. Moreover, any statistic T : X_n → Θ is called an estimator of the parameter θ. The bias of an estimator T(X^n) of θ is defined as b_θ(T(X^n)) = E^n_θ[T(X^n)] − θ. An estimator T(X^n) of θ is said to be unbiased if b_θ(T(X^n)) = 0. The asymptotic bias of an estimator T(X^n) of θ is defined as lim_{n→∞} b_θ(T(X^n)); if lim_{n→∞} b_θ(T(X^n)) = 0, the estimator T(X^n) is said to be asymptotically unbiased.

The definition of the Fisher information matrix depends on the notion of the score function, which plays a central role in parametric statistical inference. In order for this notion to be well-defined, it is necessary to impose certain conditions on the Radon-Nikodym density p_n(x; θ), x ∈ X_n, of P^n_θ with respect to a dominating measure μ_n. We shall use the following definition.

Definition 1.1.1. The parametric statistical model (P^n_θ)_{θ∈Θ} is regular if
i) there exists a σ-finite positive measure μ_n on (X_n, B(X_n)) such that for all θ ∈ Θ the probability measure P^n_θ is absolutely continuous with respect to μ_n, and the Radon-Nikodym density p_n(x; θ) = (dP^n_θ/dμ_n)(x) is continuous on Θ for μ_n-almost all x ∈ X_n;
ii) the function p_n(x; θ) is differentiable in θ in L²(μ_n) for all θ ∈ Θ;
iii) the L²(μ_n)-derivative of p_n(x; θ) is continuous in L²(μ_n).

Note that the regularity of the parametric statistical model (P_θ)_{θ∈Θ} is defined similarly.

Definition 1.1.2. Let (P^n_θ)_{θ∈Θ} be a regular parametric statistical model. The likelihood function and log-likelihood function based on X^n are defined as random functions of θ as follows: L_n(θ) = p_n(X^n; θ) and ℓ_n(θ) = log p_n(X^n; θ). The score function is given by the gradient ∇_θ ℓ_n(θ) = ∇_θ log p_n(X^n; θ).

Let us now give some consequences of this regularity condition (see Lemmas 7.1 and 7.2 of [28, Chapter I]).

Lemma 1.1.1. Let (P^n_θ)_{θ∈Θ} be a regular parametric statistical model.
1. The Fisher information matrix of the model, defined as I_n(θ) = E^n_θ[∇_θ ℓ_n(θ) ∇_θ ℓ_n(θ)^T] = E^n_θ[∇_θ log p_n(X^n; θ) ∇_θ log p_n(X^n; θ)^T], exists and is continuous on Θ.
2. Let T : X_n → R^m be a statistic such that E^n_θ[|T(X^n)|²] is bounded in a neighborhood of θ. Then the function θ ↦ E^n_θ[T(X^n)] is continuously differentiable in this neighborhood, and
∇_θ E^n_θ[T(X^n)] = ∇_θ ∫_{X_n} T(x) p_n(x; θ) μ_n(dx) = ∫_{X_n} T(x) ∇_θ p_n(x; θ) μ_n(dx).
3. Taking T(X^n) = 1 in 2., we obtain that E^n_θ[∇_θ ℓ_n(θ)] = 0. Therefore, I_n(θ) = Var^n_θ(∇_θ ℓ_n(θ)). Furthermore, if the second order derivative ∇²_θ ℓ_n(θ) exists, then I_n(θ) = −E^n_θ[∇²_θ ℓ_n(θ)].
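As a toy numerical illustration of Lemma 1.1.1 (the score has zero mean and its variance equals the Fisher information), the following Python sketch checks both identities by Monte Carlo for a hypothetical model of n i.i.d. N(θ, σ²) observations with σ known. The model and all numerical values are illustrative only and are not taken from the thesis.

import numpy as np

def score(theta, x, sigma):
    """Score of n i.i.d. N(theta, sigma^2) observations:
    d/dtheta log p_n(x; theta) = sum_i (x_i - theta) / sigma^2."""
    return np.sum(x - theta) / sigma**2

rng = np.random.default_rng(0)
theta0, sigma, n, n_mc = 1.0, 0.5, 50, 20000

# Monte Carlo replications of the score evaluated at the true parameter.
scores = np.array([score(theta0, theta0 + sigma * rng.standard_normal(n), sigma)
                   for _ in range(n_mc)])

print("E[score]    ~", scores.mean())     # close to 0 (item 3 of Lemma 1.1.1)
print("Var[score]  ~", scores.var())      # close to the Fisher information
print("I_n(theta0)  =", n / sigma**2)     # exact value n / sigma^2 for this toy model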
Let (P_θ)_{θ∈Θ} be a regular parametric statistical model, and let μ be the measure on the space (D(R_+, R), B(R_+, R)) from Definition 1.1.1 such that for all θ ∈ Θ, p(x; θ) = (dP_θ/dμ)(x). The likelihood function and log-likelihood function based on X are defined as random functions of θ as follows: L(θ) = p(X; θ) and ℓ(θ) = log p(X; θ). The score function is given by the gradient ∇_θ ℓ(θ) = ∇_θ log p(X; θ). The Fisher information matrix of this model is defined as
I(θ) = E_θ[∇_θ ℓ(θ) ∇_θ ℓ(θ)^T] = E_θ[∇_θ log p(X; θ) ∇_θ log p(X; θ)^T] = ∫_{D(R_+,R)} ∇_θ log p(x; θ) ∇_θ log p(x; θ)^T p(x; θ) μ(dx).

We are now interested in using the Malliavin calculus in order to write the score function as a conditional expectation involving the Skorohod integral (see [13, Theorem 3.3] for k = 1). We refer to Nualart [57] for a detailed exposition of the classical Malliavin calculus on the Wiener space. We now recall briefly the Malliavin calculus for Lévy processes developed by León et al. in [50] and Petrou in [61], which will be applied in the thesis. In all that follows, the observed process is X = (X_t)_{t≥0}, which is driven by a Brownian motion B and a compensated Poisson random measure Ñ. In Chapters 3-5, to avoid confusion with the observed process X, we introduce an independent copy of X, denoted by Y, which is driven by a Brownian motion W and a compensated Poisson random measure M̃, and it is with respect to W that the Malliavin calculus will be applied. Therefore, Y can be considered as the theoretical process.

Definition 1.1.3. Let us define a Brownian motion B = (B_t)_{t≥0} on the canonical probability space (Ω_1, F^1, P_1) with its natural filtration {F^1_t}_{t≥0}, a Poisson random measure N(dt, dz) on the canonical probability space (Ω_2, F^2, P_2) with intensity measure ν(dz)dt and its natural filtration {F^2_t}_{t≥0}, another Brownian motion W = (W_t)_{t≥0} on the canonical probability space (Ω_3, F^3, P_3) with its natural filtration {F^3_t}_{t≥0}, and another Poisson random measure M(dt, dz) on the canonical probability space (Ω_4, F^4, P_4) with intensity measure ν(dz)dt and its natural filtration {F^4_t}_{t≥0}. Then (Ω, F, P) is defined as the canonical product probability space, where Ω = Ω_1 × Ω_2 × Ω_3 × Ω_4, F = F^1 ⊗ F^2 ⊗ F^3 ⊗ F^4, P = P_1 ⊗ P_2 ⊗ P_3 ⊗ P_4, and F_t = F^1_t ⊗ F^2_t ⊗ F^3_t ⊗ F^4_t. Therefore, on this space the canonical process represents (B, N, W, M), which are consequently mutually independent. We denote by Ω̃ = Ω_1 × Ω_2, F̃ = F^1 ⊗ F^2, P̃ = P_1 ⊗ P_2, F̃_t = F^1_t ⊗ F^2_t, and by Ω̂ = Ω_3 × Ω_4, F̂ = F^3 ⊗ F^4, P̂ = P_3 ⊗ P_4, F̂_t = F^3_t ⊗ F^4_t. We denote by E, Ẽ and Ê the expectation with respect to P, P̃ and P̂, respectively. Observe that Ω = Ω̃ × Ω̂, F = F̃ ⊗ F̂, P = P̃ ⊗ P̂, F_t = F̃_t ⊗ F̂_t, and E = Ẽ Ê.

On the filtered probability space (Ω, F, {F_t}_{t∈[0,T]}, P), consider a two-dimensional centered square integrable Lévy process Z = (Z^1, Z^2) = (Z_t)_{t∈[0,T]} given by
Z^1_t = σ B_t + ∫_0^t ∫_{R_0} z (N(ds, dz) − ν(dz)ds),
Z^2_t = σ W_t + ∫_0^t ∫_{R_0} z (M(ds, dz) − ν(dz)ds),
where σ > 0 is constant and R_0 := R \ {0}.
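To fix ideas about the kind of discretely observed process considered in this thesis, the following Python sketch simulates observations X_{t_0}, ..., X_{t_n} with t_k = kΔ_n of an SDE driven, as above, by a Brownian motion and a (compound Poisson) jump part, using an Euler-type scheme. The linear drift, the constant coefficients and all numerical values are hypothetical choices made for the illustration; they are not the thesis's model.

import numpy as np

def simulate_discrete_observations(theta, sigma, lam, x0, n, delta, seed=None):
    """Euler-type simulation of X_{t_k}, t_k = k*delta, for
    dX_t = b(theta, X_t) dt + sigma dB_t + dJ_t,
    with illustrative linear drift b(theta, x) = -theta * x and J a compound
    Poisson process of rate lam with standard normal jump sizes."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = -theta * x[k] * delta                            # b(theta, x) * Delta_n
        gauss = sigma * np.sqrt(delta) * rng.standard_normal()   # sigma * (B_{t_{k+1}} - B_{t_k})
        n_jumps = rng.poisson(lam * delta)                       # jumps in (t_k, t_{k+1}]
        jumps = rng.standard_normal(n_jumps).sum() if n_jumps else 0.0
        x[k + 1] = x[k] + drift + gauss + jumps
    return x

# Example: n = 1000 observations with sampling step Delta_n = 1/100.
path = simulate_discrete_observations(theta=1.0, sigma=0.5, lam=2.0,
                                      x0=0.0, n=1000, delta=0.01, seed=42)
print(path[:5])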
The compensated Poisson random measures are denoted by Ñ(dt, dz) := N(dt, dz) − ν(dz)dt and M̃(dt, dz) := M(dt, dz) − ν(dz)dt. Here, the intensity measure satisfies ∫_{R_0} (1 ∧ |z|²) ν(dz) < ∞. Remark that the filtration {F_t}_{t∈[0,T]} is the same as the one generated by the Lévy process Z.

The main idea of León et al. in [50] is to represent random variables on (Ω, F, P) via iterated integrals, from which a Malliavin calculus can be defined as in the Gaussian setting of Nualart [57]. To simplify the exposition, we introduce the following unified notation for the Brownian motions and the Poisson random measures:
U_1 = U_2 = [0, T] and U_3 = U_4 = [0, T] × R_0;
dQ_1(·) = dB_·, dQ_2(·) = dW_·, Q_3(·, ∗) = Ñ(·, ∗), Q_4(·, ∗) = M̃(·, ∗);
d⟨Q_1⟩_· = d⟨Q_2⟩_· = d·, and d⟨Q_3⟩_· = d⟨Q_4⟩_· = d· × dν(∗);
and, for the variables t_k ∈ [0, T] and z ∈ R_0, u_k^{j_k} := t_k if j_k = 1, 2, and u_k^{j_k} := (t_k, z) if j_k = 3, 4.

Set S_n = {1, 2, 3, 4}^n. For (j_1, ..., j_n) ∈ S_n, define an expanded simplex of the form
G_{j_1,...,j_n} = {(u_1^{j_1}, ..., u_n^{j_n}) ∈ ∏_{i=1}^n U_{j_i} : 0 < t_1 < ··· < t_n < T}.
We next define the iterated integral
J_n^{(j_1,...,j_n)}(g_{j_1,...,j_n}) = ∫_{G_{j_1,...,j_n}} g_{j_1,...,j_n}(u_1^{j_1}, ..., u_n^{j_n}) Q_{j_1}(du_1^{j_1}) ··· Q_{j_n}(du_n^{j_n}),
where g_{j_1,...,j_n} is a deterministic function in L²(G_{j_1,...,j_n}) = L²(G_{j_1,...,j_n}, ∏_{i=1}^n d⟨Q_{j_i}⟩).

Theorem 1.1.1. For every random variable F ∈ L²(Ω, F_T, P), there exists a unique sequence of deterministic functions {g_{j_1,...,j_n}}, (j_1, ..., j_n) ∈ S_n, with g_{j_1,...,j_n} ∈ L²(G_{j_1,...,j_n}), such that
F = E[F] + Σ_{n=1}^∞ Σ_{(j_1,...,j_n)∈S_n} J_n^{(j_1,...,j_n)}(g_{j_1,...,j_n}),
and we have the isometry
‖F‖²_{L²(P)} = (E[F])² + Σ_{n=1}^∞ Σ_{(j_1,...,j_n)∈S_n} ‖g_{j_1,...,j_n}‖²_{L²(G_{j_1,...,j_n})}.

Using this chaotic representation property, the directional derivatives can be defined with respect to the Brownian motions and the Poisson random measures. For this, denote
G^k_{j_1,...,j_n}(t) = {(u_1^{j_1}, ..., u_{k−1}^{j_{k−1}}, û_k^{j_k}, u_{k+1}^{j_{k+1}}, ..., u_n^{j_n}) ∈ G_{j_1,...,j_{k−1},j_{k+1},...,j_n} : 0 < t_1 < ··· < t_{k−1} < t < t_{k+1} < ··· < t_n < T},
where û means that the element u is omitted.

Definition 1.1.4. Let g_{j_1,...,j_n} ∈ L²(G_{j_1,...,j_n}) and ℓ ∈ {1, 2, 3, 4}. Then
D^{(ℓ)}_{u^ℓ} J_n^{(j_1,...,j_n)}(g_{j_1,...,j_n}) = Σ_{i=1}^n 1_{{j_i = ℓ}} J_{n−1}^{(j_1,...,j_{i−1},j_{i+1},...,j_n)}( g_{j_1,...,j_n}(·, u^ℓ, ·) 1_{G^i_{j_1,...,j_n}}(t) )
is called the derivative of J_n^{(j_1,...,j_n)}(g_{j_1,...,j_n}) in the ℓ-th direction.

Definition 1.1.5. Let D^{(ℓ)} be the space of the random variables in L²(Ω) that are differentiable in the ℓ-th direction, that is,
D^{(ℓ)} = { F ∈ L²(Ω), F = E[F] + Σ_{n≥1} Σ_{(j_1,...,j_n)∈S_n} J_n^{(j_1,...,j_n)}(g_{j_1,...,j_n}) : Σ_{n≥1} Σ_{(j_1,...,j_n)∈S_n} Σ_{i=1}^n 1_{{j_i = ℓ}} ∫_{U_{j_i}} ‖g_{j_1,...,j_n}(·, u^ℓ, ·)‖²_{L²(G^i_{j_1,...,j_n})} d⟨Q_ℓ⟩(u^ℓ) < ∞ }.

Definition 1.1.6. Let F ∈ D^{(ℓ)}. Then the derivative in the ℓ-th direction is defined as
D^{(ℓ)}_{u^ℓ} F = Σ_{n≥1} Σ_{(j_1,...,j_n)∈S_n} Σ_{i=1}^n 1_{{j_i = ℓ}} J_{n−1}^{(j_1,...,ĵ_i,...,j_n)}( g_{j_1,...,j_n}(·, u^ℓ, ·) 1_{G^i_{j_1,...,j_n}}(t) ).

[...]

5.3 Proof of Theorem 5.1.1

..., together with Lemmas 5.2.10, 5.2.6 and (5.10), and applying Jensen's and Hölder's inequalities with conjugate exponents q_1, q_2, the sum over k of the conditional second moments of the terms under consideration is bounded by a quantity of the form C_0 Δ_n^q (1/n) Σ_{k=0}^{n−1} (1 + |X_{t_k}|^q), multiplied by a factor controlled by the jump-number decomposition (the events J_{0,k} and {J_{j,k}, a_1, ..., a_j}).
By hypothesis (A8), this converges to zero in P_{θ_0,σ_0}-probability as n → ∞. Thus, the result follows.

Lemma 5.3.8. Under conditions (A1)-(A4) and (A6)-(A8), as n → ∞, the weighted sum over k of the differences between the quantity built from (H_{11} + H_{12} + H_{13})² + 2σ(σ_0, X_{t_k})(B_{t_{k+1}} − B_{t_k})(H_{11} + H_{12} + H_{13}), truncated on the complement of the small-jump event, and the conditional expectation, given Y^{θ_n,σ(ℓ)}_{t_{k+1}} = X_{t_{k+1}}, of the analogous quantity built from H_8^{θ_n,σ(ℓ)} + H_9^{θ_n,σ(ℓ)} + H_{10}^{θ_n,σ(ℓ)} and 2σ(σ(ℓ), Y^{θ_n,σ(ℓ)}_{t_k})(W_{t_{k+1}} − W_{t_k}), integrated in ℓ over [0, 1], converges to zero in P_{θ_0,σ_0}-probability.

Proof. Since each truncation indicator 1_{{· > nΔ_n}} equals 1 − 1_{{· ≤ nΔ_n}}, it suffices to show the corresponding convergence without the truncation indicators. For this, we split the term inside the conditional expectation as
(H_8^{θ_n,σ(ℓ)} + H_9^{θ_n,σ(ℓ)} + H_{10}^{θ_n,σ(ℓ)})² + 2σ(σ(ℓ), Y^{θ_n,σ(ℓ)}_{t_k})(W_{t_{k+1}} − W_{t_k})(H_8^{θ_n,σ(ℓ)} + H_9^{θ_n,σ(ℓ)} + H_{10}^{θ_n,σ(ℓ)}) = K^{θ_n,σ(ℓ)} + F^{θ_n,σ(ℓ)},
where K^{θ_n,σ(ℓ)} collects the contributions of the drift integral ∫_{t_k}^{t_{k+1}} b(θ_n, Y_s^{θ_n,σ(ℓ)}) ds, of the stochastic integrals ∫_{t_k}^{t_{k+1}} σ(σ(ℓ), Y_s^{θ_n,σ(ℓ)}) dW_s and σ(σ(ℓ), Y_{t_k}^{θ_n,σ(ℓ)})(W_{t_{k+1}} − W_{t_k}), and of their cross terms, while
F^{θ_n,σ(ℓ)} := (∫_{t_k}^{t_{k+1}} ∫_I z M̃(ds, dz))² + 2 ∫_{t_k}^{t_{k+1}} ∫_I z M̃(ds, dz) · ∫_{t_k}^{t_{k+1}} σ(σ(ℓ), Y_s^{θ_n,σ(ℓ)}) dW_s.
Similarly, (H_{11} + H_{12} + H_{13})² + 2σ(σ_0, X_{t_k})(B_{t_{k+1}} − B_{t_k})(H_{11} + H_{12} + H_{13}) = K^{θ_0,σ_0} + F^{θ_0,σ_0}, where K^{θ_0,σ_0} and F^{θ_0,σ_0} are defined in the same way from b(θ_0, X^{θ_0,σ_0}), σ(σ_0, X^{θ_0,σ_0}), B and Ñ. Therefore, it suffices to show that, as n → ∞, the weighted sum over k of K^{θ_0,σ_0} − E^{θ_n,σ(ℓ)}_{X_{t_k}}[K^{θ_n,σ(ℓ)} | Y^{θ_n,σ(ℓ)}_{t_{k+1}} = X_{t_{k+1}}], integrated in ℓ, converges to zero in P_{θ_0,σ_0}-probability (5.44), and that the analogous statement holds with K replaced by F (5.45).

We first show (5.44), by proving that conditions (i) and (ii) of Lemma 1.4.1 hold under the measure P_{θ_0,σ_0}. We start showing (i). Applying Lemma 5.2.10 to the conditional expectation, we get that
E[ K^{θ_0,σ_0} − E^{θ_n,σ(ℓ)}_{X_{t_k}}[K^{θ_n,σ(ℓ)} | Y^{θ_n,σ(ℓ)}_{t_{k+1}} = X_{t_{k+1}}] | F_{t_k} ] = S_1 + S_2,
where S_1 gathers the contribution of the jump-number decomposition (the events J_{0,k} and {J_{j,k}, a_1, ..., a_j}) and S_2 = E[K^{θ_0,σ_0} | X_{t_k}] − E[K^{θ_n,σ(ℓ)} | Y^{θ_n,σ(ℓ)}_{t_k} = X_{t_k}]. Applying Lemma 5.2.9 and Jensen's inequality, we conclude that the weighted sum of the S_1 terms is bounded by C_0 Δ_n^q (1/n) Σ_{k=0}^{n−1}(1 + |X_{t_k}|^q) times a factor controlled by the jump-number decomposition; by hypothesis (A8), this converges to zero in P_{θ_0,σ_0}-probability as n → ∞. On the other hand, computing E[K^{θ_0,σ_0} | X_{t_k}] from the dynamics of X^{θ_0,σ_0}, we obtain that S_2 = S_{2,1} + 2 S_{2,2}, where S_{2,1} compares, integrated in s over [t_k, t_{k+1}], the conditional expectations of the products σ(σ_0, X_s^{θ_0,σ_0}) σ(σ_0, X_{t_k}) and σ(σ(ℓ), Y_s^{θ_n,σ(ℓ)}) σ(σ(ℓ), Y_{t_k}^{θ_n,σ(ℓ)}), and S_{2,2} those of b(θ_0, X_s^{θ_0,σ_0})(X_s^{θ_0,σ_0} − X_{t_k}) and b(θ_n, Y_s^{θ_n,σ(ℓ)})(Y_s^{θ_n,σ(ℓ)} − Y_{t_k}^{θ_n,σ(ℓ)}).

By the mean value theorem, there exists r ∈ (0, 1) such that S_{2,1} = S_{2,1,1} − S_{2,1,2}, where S_{2,1,1} is the same difference with both terms evaluated at the parameter σ_0, and S_{2,1,2} carries an extra factor of order v/√n coming from the intermediate point σ_0 + vr/√n. Using hypotheses (A1), (A3)(e) and Lemma 5.2.3, the weighted sum of the S_{2,1,2} terms is bounded by a constant multiple of a power of Δ_n times (1/n) Σ_{k=0}^{n−1}(1 + |X_{t_k}|^q), for some constants C, q > 0. Next, using Lemma 5.2.10, S_{2,1,1} is decomposed according to the number of jumps in (t_k, s], where in this case we denote {J_{j,k}, a_1, ..., a_j} = {N_s − N_{t_k} = j} ∩ {the marks on [t_k, s] are {a_1, ..., a_j}}. Applying Lemmas 5.2.9, 5.2.3 and (A1), we conclude that, for some constants C_0, q > 0, the weighted sum of the S_{2,1,1} terms is bounded by C_0 Δ_n^q (1/n) Σ_{k=0}^{n−1}(1 + |X_{t_k}|^q) times a factor controlled by the jump-number decomposition, which, by hypothesis (A8), converges to zero in P_{θ_0,σ_0}-probability as n → ∞.

We next treat S_{2,2}. By the mean value theorem, there exists r ∈ (0, 1) such that S_{2,2} = S_{2,2,1} − S_{2,2,2}, where S_{2,2,1} is the analogous difference with both terms evaluated at θ_0, and S_{2,2,2} involves an intermediate parameter θ(r) between θ_0 and θ_n together with a factor of order u/√(nΔ_n). Proceeding as for the term S_{2,1,1}, the weighted sum of the S_{2,2,1} terms converges to zero in P_{θ_0,σ_0}-probability as n → ∞. We next add and subtract the term b(θ(r), Y^{θ_n,σ(ℓ)}_{t_k}) and use equation (5.3) to write S_{2,2,2} = S_{2,2,2,1} + S_{2,2,2,2}, where S_{2,2,2,1} contains the increment b(θ(r), Y_s^{θ_n,σ(ℓ)}) − b(θ(r), Y_{t_k}^{θ_n,σ(ℓ)}) and S_{2,2,2,2} contains the drift part of the increment Y_s^{θ_n,σ(ℓ)} − Y_{t_k}^{θ_n,σ(ℓ)}. Therefore, using (A3)(e) and Lemma 5.2.3, the weighted sum of the S_{2,2,2} terms is bounded by a constant multiple of a power of Δ_n times (1/n) Σ_{k=0}^{n−1}(1 + |X_{t_k}|^q), which shows that the weighted sum of the S_2 terms converges to zero in P_{θ_0,σ_0}-probability as n → ∞. This finishes the proof of Lemma 1.4.1 (i).

Next, applying Jensen's and Hölder's inequalities with q_1, q_2 conjugate, with q_2 close to 1, together with Lemmas 5.2.10 and 5.2.6, the sum of the conditional second moments required for Lemma 1.4.1 (ii) is bounded by C_0 Δ_n^q (1/n) Σ_{k=0}^{n−1}(1 + |X_{t_k}|^q) times a factor controlled by the jump-number decomposition. By hypothesis (A8), this converges to zero in P_{θ_0,σ_0}-probability as n → ∞, which finishes the proof of (5.44).

Now it remains to treat (5.45). Using equation (5.1), it suffices to show that, as n → ∞, the weighted sums built from the compensated jump integrals ∫_{t_k}^{t_{k+1}} ∫_I z Ñ(ds, dz) and their conditional counterparts ∫_{t_k}^{t_{k+1}} ∫_I z M̃(ds, dz), given in (5.46), and the corresponding sums involving the increments X_{t_{k+1}} − X_{t_k}, given in (5.47), converge to zero in P_{θ_0,σ_0}-probability. First, we treat (5.46) by showing that conditions (i) and (ii) of Lemma 1.4.1 hold under the measure P_{θ_0,σ_0}. We start showing (i). Recall that the events A_{k,r}, Ā_{k,r} and A^c_{k,r} are introduced just before Lemma 5.2.15. Writing 1_{A^c_{k,r}} = 1 − 1_{A_{k,r}} and splitting the conditional expectation accordingly into the terms M_{1,2}^{θ_n,σ(ℓ)} and M_{2,2}^{θ_n,σ(ℓ)}, Lemma 5.2.15 and hypotheses (A2), (A3)(b) imply that, for any β ∈ (α, 1/2), the weighted sum is bounded by a deterministic sequence tending to zero times (1/n) Σ_{k=0}^{n−1}(1 + |X_{t_k}|^q), for some constants C, C_0, q > 0. This shows Lemma 1.4.1 (i). Next, Jensen's inequality, the same splitting into the terms M_{1,4}^{θ_n,σ(ℓ)} and M_{2,4}^{θ_n,σ(ℓ)}, Lemma 5.2.15 and hypotheses (A2), (A3)(b) yield the analogous bound for the conditional second moments, for any β ∈ (α, 1/2), for some constants C, C_0, q > 0. This finishes the proof of (5.46). Finally, using the Cauchy-Schwarz inequality and Lemma 5.2.3 (i), and proceeding as for (5.46), we conclude (5.47). Thus, the desired result follows.

5.3.3 Main contributions: LAN property

Proof. Since each truncation indicator 1_{{· > nΔ_n}} equals 1 − 1_{{· ≤ nΔ_n}}, we write ξ_{k,n} = ξ_{k,n,1} − ξ_{k,n,2} − ξ_{k,n,3}, where ξ_{k,n,1} is the leading term of order u/√(nΔ_n), built from ∂_θ b(θ(ℓ), X_{t_k}), the Brownian increment σ(σ_0, X_{t_k})(B_{t_{k+1}} − B_{t_k}) and the drift correction (b(θ_0, X_{t_k}) − b(θ(ℓ), X_{t_k}))Δ_n, normalised by σ(σ_0, X_{t_k}) and integrated in ℓ over [0, 1], while ξ_{k,n,2} and ξ_{k,n,3} carry the truncation indicator 1_{{· ≤ nΔ_n}} and the conditional expectation given Y^{θ(ℓ),σ_0}_{t_{k+1}} = X_{t_{k+1}}. Similarly, we write ζ_{k,n} = ζ_{k,n,1} − ζ_{k,n,2} − ζ_{k,n,3}, where ζ_{k,n,1} is the leading term of order v/√n, built from ∂_σ σ(σ(ℓ), X_{t_k}), the squared Brownian increment σ²(σ_0, X_{t_k})(B_{t_{k+1}} − B_{t_k})² and its compensator σ²(σ(ℓ), X_{t_k})Δ_n, integrated in ℓ over [0, 1], while ζ_{k,n,2} and ζ_{k,n,3} carry the truncation indicator and the conditional expectation given Y^{θ_n,σ(ℓ)}_{t_{k+1}} = X_{t_{k+1}}.

Proceeding as for the terms Z^1_{k,n} and Z^2_{k,n}, we get that, as n → ∞, Σ_{k=0}^{n−1}(ξ_{k,n,2} + ξ_{k,n,3}) → 0 in P_{θ_0,σ_0}-probability, and proceeding as for the terms Q^1_{k,n} and Q^2_{k,n}, Σ_{k=0}^{n−1}(ζ_{k,n,2} + ζ_{k,n,3}) → 0 in P_{θ_0,σ_0}-probability. Next, applying Lemma 1.4.3 to ξ_{k,n,1} + ζ_{k,n,1}, it suffices to show that, as n → ∞, the conditional means Σ_k E[ξ_{k,n,1} | F_{t_k}] and Σ_k E[ζ_{k,n,1} | F_{t_k}] converge in P_{θ_0,σ_0}-probability to the deterministic limits involving u² Γ_b(θ_0, σ_0) and v² Γ_σ(θ_0, σ_0) stated in (5.48) and (5.49); that the corresponding conditional variances converge to the limits stated in (5.50) and (5.51); that the conditional covariances in (5.52) are asymptotically negligible; and that the conditional fourth moments in (5.53) and (5.54) tend to zero. Here
Γ_b(θ_0, σ_0) = ∫_R (∂_θ b(θ_0, x) / σ(σ_0, x))² π_{θ_0,σ_0}(dx) and Γ_σ(θ_0, σ_0) = ∫_R (∂_σ σ(σ_0, x) / σ(σ_0, x))² π_{θ_0,σ_0}(dx).

Proof of (5.48). Using E[B_{t_{k+1}} − B_{t_k} | F_{t_k}] = 0 and the mean value theorem, Σ_k E[ξ_{k,n,1} | F_{t_k}] equals −(u²/n) Σ_k (∂_θ b(θ_0, X_{t_k}) / σ(σ_0, X_{t_k}))² minus remainder terms T_1 and T_2 involving intermediate points of the form θ_0 + ur/√(nΔ_n), r ∈ (0, 1). Using hypotheses (A2) and (A3)(b), (c), the conditional expectations of |T_{k,n}| are bounded by C|u|^{2+ε}(√(nΔ_n))^{−ε}(1/n) Σ_k (1 + |X_{t_k}|^q)-type quantities, so that, by Lemma 1.4.2, T_1 → 0 in P_{θ_0,σ_0}-probability as n → ∞, and so does T_2 by the same argument. On the other hand, applying Lemma 5.2.16, (1/n) Σ_k (∂_θ b(θ_0, X_{t_k}) / σ(σ_0, X_{t_k}))² → Γ_b(θ_0, σ_0) in P_{θ_0,σ_0}-probability (5.55), which gives (5.48).

Proof of (5.50). First, from the previous computations, Σ_k (E[ξ_{k,n,1} | F_{t_k}])² ≤ (Cu⁴/n²) Σ_k (1 + |X_{t_k}|^q) for some constants C, q > 0. Next, using properties of the moments of the Brownian motion, Σ_k E[ξ²_{k,n,1} | F_{t_k}] = (u²/n) Σ_k (∂_θ b(θ_0, X_{t_k}) / σ(σ_0, X_{t_k}))² + T_3 + T_4 + T_5, and, as for the term T_1, hypotheses (A2) and (A3)(b), (c) imply that T_3, T_4, T_5 converge to zero in P_{θ_0,σ_0}-probability as n → ∞. Using again (5.55), we conclude (5.50).

Proof of (5.53). A basic computation yields Σ_k E[ξ⁴_{k,n,1} | F_{t_k}] ≤ (Cu⁴/n²) Σ_k (1 + |X_{t_k}|^q) for some constants C, q > 0.

Proof of (5.49). Again, using properties of the moments of the Brownian motion and the mean value theorem, Σ_k E[ζ_{k,n,1} | F_{t_k}] equals a multiple of (v²/n) Σ_k (∂_σ σ(σ_0, X_{t_k}) / σ(σ_0, X_{t_k}))² minus remainder terms T_6, T_7, T_8 involving intermediate points of the form σ_0 + vr/√n. As for the term T_1, using (A2) and (A3)(b), (d) together with Lemma 1.4.2, T_6, T_7, T_8 converge to zero in P_{θ_0,σ_0}-probability as n → ∞. Again applying Lemma 5.2.16, (1/n) Σ_k (∂_σ σ(σ_0, X_{t_k}) / σ(σ_0, X_{t_k}))² → Γ_σ(θ_0, σ_0) in P_{θ_0,σ_0}-probability (5.56), which gives (5.49).

Proof of (5.51). First, from the previous computations, Σ_k (E[ζ_{k,n,1} | F_{t_k}])² ≤ (Cv⁴/n²) Σ_k (1 + |X_{t_k}|^q) for some constants C, q > 0. Next, using the facts that E[(B_{t_{k+1}} − B_{t_k})² | F_{t_k}] = Δ_n and E[(B_{t_{k+1}} − B_{t_k})⁴ | F_{t_k}] = 3Δ_n², we can write Σ_k E[ζ²_{k,n,1} | F_{t_k}] = (2v²/n) Σ_k (∂_σ σ(σ_0, X_{t_k}) / σ(σ_0, X_{t_k}))² + (v²/n) Σ_k S_{k,n}, where, for some constants C, q > 0, (v²/n) Σ_k E[|S_{k,n}| | F_{t_k}] is bounded by C times a factor tending to zero times (1/n) Σ_k (1 + |X_{t_k}|^q), which, together with Lemma 1.4.2 and (5.56), concludes (5.51).

Proof of (5.52). Using properties of the moments of the Brownian motion, Σ_k | E[ξ_{k,n,1} ζ_{k,n,1} | F_{t_k}] − E[ξ_{k,n,1} | F_{t_k}] E[ζ_{k,n,1} | F_{t_k}] | is bounded by C times a factor tending to zero times (1/n) Σ_k (1 + |X_{t_k}|^q), for some constants C, q > 0.

Proof of (5.54). A basic computation yields Σ_k E[ζ⁴_{k,n,1} | F_{t_k}] ≤ (Cv⁴/n²) Σ_k (1 + |X_{t_k}|^q) for some constants C, q > 0. The proof of Theorem 5.1.1 is now completed.
Bibliographie

[1] Aït-Sahalia, Y. and Jacod, J., Volatility estimators for discretely sampled Lévy processes. Ann. Statist., 35(1):355–392, 2007.
[2] Aït-Sahalia, Y. and Jacod, J., Fisher's information for discretely sampled Lévy processes. Econometrica, 76(4):727–761, 2008.
[3] Applebaum, D., Lévy processes and stochastic calculus, volume 116 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition, 2009.
[4] Azencott, R., Densité des diffusions en temps petit : développements asymptotiques. In Seminar on probability, XVIII, volume 1059 of Lecture Notes in Math., pages 402–498. Springer, Berlin, 1984.
[5] Barndorff-Nielsen, O. E. and Sørensen, M., A review of some aspects of asymptotic likelihood theory for stochastic processes. Int. Statist. Review, 62:133–165, 1994.
[6] Basawa, I. V. and Prakasa Rao, B. L. S., Statistical inference for stochastic processes. Probability and Mathematical Statistics. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], London-New York, 1980.
[7] Basawa, I. V. and Scott, D. J., Asymptotic Optimal Inference for Non-ergodic Models. Lecture Notes in Statistics. Springer-Verlag, New York, 1983.
[8] Bertoin, J., Lévy processes. Cambridge University Press, 1998.
[9] Bichteler, K., Gravereaux, J. B. and Jacod, J., Malliavin calculus for processes with jumps, Stochastics Monographs. Gordon and Breach Science Publishers, New York, 1987.
[10] Clément, E., Delattre, S. and Gloter, A., Asymptotic lower bounds in estimating jumps. Bernoulli, 20(3):1059–1096, 2014.
[11] Clément, E. and Gloter, A., Local asymptotic mixed normality property for discretely observed stochastic differential equations driven by stable Lévy processes. 2013. Preprint.
[12] Cont, R. and Tankov, P., Financial modelling with jump processes. Chapman & Hall/CRC, Boca Raton, FL, 2004.
[13] Corcuera, J. M. and Kohatsu-Higa, A., Statistical inference and Malliavin calculus. In Seminar on Stochastic Analysis, Random Fields and Applications VI, volume 63 of Progr. Probab., pages 59–82. Birkhäuser/Springer Basel AG, Basel, 2011.
[14] Crimaldi, I. and Pratelli, L., Convergence results for multivariate martingales. Stochastic Process. Appl., 115(4):571–577, 2005.
[15] Dacunha-Castelle, D. and Florens-Zmirou, D., Estimation of the coefficients of a diffusion from discrete observations. Stochastics, 19(4):263–284, 1986.
[16] Dietz, H. M., A non-Markovian relative of the Ornstein-Uhlenbeck process and some of its local statistical properties. Scand. J. Statist., 19(4):363–379, 1992.
[17] Dohnal, G., On estimating the diffusion coefficient. J. Appl. Probab., 24(1):105–114, 1987.
[18] Feigin, P. D., Stable convergence of semimartingales. Stochastic Process. Appl., 19(1):125–134, 1985.
[19] Florens-Zmirou, D., Approximate discrete-time schemes for statistics of diffusion processes. Statistics, 20(4):547–557, 1989.
[20] Fournier, N. and Printems, J., Absolute continuity for some one-dimensional processes. Bernoulli, 16(2):343–360, 2010.
[21] Genon-Catalot, V. and Jacod, J., On the estimation of the diffusion coefficient for multi-dimensional diffusion processes. Ann. Inst. H. Poincaré Probab. Statist., 29(1):119–151, 1993.
[22] Genon-Catalot, V. and Jacod, J., Estimation of the diffusion coefficient for diffusion processes: random sampling. Scand. J. Statist., 21(3):193–221, 1994.
[23] Gloter, A. and Gobet, E., LAMN property for hidden processes: the case of integrated diffusions. Ann. Inst. Henri Poincaré Probab. Stat., 44(1):104–128, 2008.
[24] Gobet, E., Local asymptotic mixed normality property for elliptic diffusion: a Malliavin calculus approach. Bernoulli, 7(6):899–912, 2001.
[25] Gobet, E., LAN property for ergodic diffusions with discrete observations. Ann. Inst. H. Poincaré Probab. Statist., 38(5):711–737, 2002.
[26] Hájek, J., A characterization of limiting distributions of regular estimates. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 14:323–330, 1969/1970.
[27] Hájek, J., Local asymptotic minimax and admissibility in estimation. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (Univ. California, Berkeley, Calif., 1970/1971), Vol. I: Theory of statistics, pages 175–194. Univ. California Press, Berkeley, Calif., 1972.
[28] Ibragimov, I. A. and Has'minskii, R. Z., Statistical estimation, volume 16 of Applications of Mathematics. Springer-Verlag, New York-Berlin, 1981. Asymptotic theory, translated from the Russian by Samuel Kotz.
[29] Jacod, J., Multivariate point processes: predictable projection, Radon-Nikodým derivatives, representation of martingales. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 31:235–253, 1974/75.
[30] Jacod, J., Inference for stochastic processes. In Handbook of Financial Econometrics: Applications, edited by Yacine Aït-Sahalia. North-Holland, 2009.
[31] Jacod, J., Statistics and high-frequency data. In Statistical methods for stochastic differential equations, volume 124 of Monogr. Statist. Appl. Probab., pages 191–310. CRC Press, Boca Raton, FL, 2012.
[32] Jacod, J. and Mémin, J., Caractéristiques locales et conditions de continuité absolue pour les semi-martingales. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 35(1):1–37, 1976.
[33] Jacod, J. and Shiryaev, A. N., Limit theorems for stochastic processes, volume 288 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, second edition, 2003.
[34] Jeganathan, P., On the asymptotic theory of estimation when the limit of the log-likelihood ratios is mixed normal. Sankhyā Ser. A, 44(2):173–212, 1982.
[35] Jeganathan, P., Some asymptotic properties of risk functions when the limit of the experiment is mixed normal. Sankhyā Ser. A, 45(1):66–87, 1983.
[36] Kabanov, Ju. M., Lipcer, R. Š. and Širjaev, A. N., Absolute continuity and singularity of locally absolutely continuous probability distributions. I. Mat. Sb. (N.S.), 107(149):364–415, 1978.
[37] Kabanov, Ju. M., Lipcer, R. Š. and Širjaev, A. N., Absolute continuity and singularity of locally absolutely continuous probability distributions. II. Mat. Sb. (N.S.), 108(150):32–61, 1979.
[38] Kawai, R., Local asymptotic normality property for Ornstein-Uhlenbeck processes with jumps under discrete sampling. J. Theoret. Probab., 26(4):932–967, 2013.
[39] Kawai, R. and Masuda, H., Local asymptotic normality for normal inverse Gaussian Lévy processes with high-frequency sampling. ESAIM Probab. Stat., 17:13–32, 2013.
[40] Kessler, M., Estimation of an ergodic diffusion from discrete observations. Scand. J. Statist., 24(2):211–229, 1997.
[41] Kohatsu-Higa, A., Nualart, E. and Tran, N. K., LAN property for a simple Lévy process. Accepted for publication in C. R. Math. Acad. Sci. Paris, Ser. I, 2014.
[42] Küchler, U. and Sørensen, M., A note on limit theorems for multivariate martingales. Bernoulli, 5(3):483–493, 1999.
[43] Kulik, A. M., Exponential ergodicity of the solutions to SDEs with a jump noise. Stochastic Process. Appl., 119(2):602–632, 2009.
[44] Kutoyants, Y. A., Parameter estimation for stochastic processes, Research and Exposition in Mathematics. Heldermann Verlag, Berlin, 1984. Translated from the Russian and edited by B. L. S. Prakasa Rao.
[45] Kutoyants, Y. A., Identification of dynamical systems with small noise. Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1994.
[46] Kutoyants, Y. A., Statistical inference for spatial Poisson processes. Lecture Notes in Statistics. Springer-Verlag, New York, 1998.
[47] Kutoyants, Y. A., Statistical inference for ergodic diffusion processes. Springer Series in Statistics. Springer-Verlag London, Ltd., London, 2004.
[48] Le Cam, L., Locally asymptotically normal families of distributions. Certain approximations to families of distributions and their use in the theory of estimation and testing hypotheses. Univ. California Publ. Statist., 3:37–98, 1960.
[49] Le Cam, L. and Yang, G. L., Asymptotics in statistics. Some basic concepts. Springer Series in Statistics. Springer-Verlag, New York, 1990.
[50] León, J. A., Solé, J. L., Utzet, F. and Vives, J., On Lévy processes, Malliavin calculus and market models with jumps. Finance Stoch., 6(2):197–225, 2002.
[51] Letta, G. and Pratelli, L., Convergence stable vers un noyau gaussien. Rend. Accad. Naz. Sci. XL Mem. Mat. Appl. (5), 20:205–213, 1996.
[52] Luschgy, H., Local asymptotic mixed normality for semimartingale experiments. Probab. Theory Related Fields, 92(2):151–176, 1992.
[53] Masuda, H., Ergodicity and exponential β-mixing bounds for multidimensional diffusions with jumps. Stochastic Process. Appl., 117(1):35–56, 2007.
[54] Masuda, H., On stability of diffusions with compound-Poisson jumps. Bull. Inform. Cybernet., 40:61–74, 2008.
[55] Masuda, H., Convergence of Gaussian quasi-likelihood random fields for ergodic Lévy driven SDE observed at high frequency. Ann. Statist., 41(3):1593–1641, 2013.
[56] Meyn, S. P. and Tweedie, R. L., Stability of Markovian processes III: Foster-Lyapunov criteria for continuous-time processes. Adv. in Appl. Probab., 25(3):518–548, 1993.
[57] Nualart, D., The Malliavin calculus and related topics. Probability and its Applications (New York). Springer-Verlag, Berlin, second edition, 2006.
[58] Ogihara, T., Local asymptotic mixed normality property for nonsynchronously observed diffusion processes. 2014. Preprint at arxiv.org/abs/1310.5304.
[59] Ogihara, T. and Yoshida, N., Quasi-likelihood analysis for the stochastic differential equation with jumps. Stat. Inference Stoch. Process., 14(3):189–229, 2011.
[60] Ogihara, T. and Yoshida, N., Quasi-likelihood analysis for stochastic regression models with nonsynchronous observations. 2012. Preprint at arxiv.org/abs/1212.4911.
[61] Petrou, E., Malliavin calculus in Lévy spaces and applications to finance. Electron. J. Probab., 13:852–879, 2008.
[62] Picard, J., On the existence of smooth densities for jump processes. Probab. Theory Related Fields, 105(4):481–511, 1996.
[63] Prakasa Rao, B. L. S., Semimartingales and their statistical inference, volume 83 of Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton, FL, 1999.
[64] Qiao, H., Exponential ergodicity for SDEs with jumps and non-Lipschitz coefficients. J. Theoret. Probab., 27(1):137–152, 2014.
[65] Sato, K., Lévy processes and infinitely divisible distributions, volume 68 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1999. Translated from the 1990 Japanese original, revised by the author.
[66] Segall, A. and Kailath, T., Radon-Nikodým derivatives with respect to measures induced by discontinuous independent-increment processes. Ann. Probability, 3(3):449–464, 1975.
[67] Shimizu, Y., M-estimation for discretely observed ergodic diffusion processes with infinitely many jumps. Stat. Inference Stoch. Process., 9(2):179–225, 2006.
[68] Shimizu, Y., Local asymptotic mixed normality for discretely observed non-recurrent Ornstein-Uhlenbeck processes. Ann. Inst. Statist. Math., 64(1):193–211, 2012.
[69] Shimizu, Y. and Yoshida, N., Estimation of parameters for diffusion processes with jumps from discrete observations. Stat. Inference Stoch. Process., 9(3):227–277, 2006.
[70] Solé, J. L., Utzet, F. and Vives, J., Canonical Lévy process and Malliavin calculus. Stochastic Process. Appl., 117(2):165–187, 2007.
[71] Sørensen, M., Likelihood methods for diffusions with jumps. In Statistical inference in stochastic processes, Probab. Pure Appl., pages 67–105. Dekker, New York, 1991.
[72] Uchida, M., Contrast-based information criterion for ergodic diffusion processes from discrete observations. Ann. Inst. Statist. Math., 62(1):161–187, 2010.
[73] van der Vaart, A. W., Asymptotic statistics, volume 3 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 1998.
[74] Willmot, G. E. and Lin, X. D., Lundberg bounds on the tails of compound distributions. J. Appl. Probab., 31(3):743–756, 1994.
[75] Woerner, J. H. C., Local asymptotic normality for the scale parameter of stable processes. Statist. Probab. Lett., 63(1):61–65, 2003.
[76] Yoshida, N., Estimation for diffusion processes from discrete observation. J. Multivariate Anal., 41(2):220–242, 1992.

[...]

... Rao [6] for stochastic processes, Basawa and Scott [7] for non-ergodic models, Kutoyants [44, 46, 47] for diffusion processes, Sørensen [71] for diffusions with jumps, Barndorff-Nielsen and Sørensen [5] for stochastic processes, and Prakasa Rao [63] for semimartingales.

1.2 Motivation and model setting

In practice, the observations are discrete rather than continuous. The case of discrete-time observations ...

... Chapitre 2 — LAMN property for continuous observations of diffusion processes with jumps. 2.3 Girsanov's theorem. This section is devoted to recalling Girsanov's theorem for the diffusion process with jumps (2.1), which is needed in the proof of Theorem 2.2.4. Recall that in [71], Sørensen deals with a more general diffusion process with jumps where the dimension of the space on which the jumps of the Poisson ...

... 4.2.8. Several examples of ergodic diffusion processes with jumps are given in [53], [54], and [67]. Moreover, results on ergodicity and exponential ergodicity for diffusion processes with jumps have been established by Masuda in [53, 54]. In addition, Kulik in [43] provides a set of sufficient conditions for the exponential ergodicity of diffusion processes with jumps without Gaussian part and gives some ...

... deal with some uniformly elliptic diffusion processes with jumps and study the LAN property from discrete observations of their solution processes. For our objective, we think it is essential to first understand the proof of this property in the continuously observed case. As a result, excluding this introductory chapter, this thesis consists of four self-contained chapters each of which deals with a ...

... have been studied. Precisely, the LAN property is established for some Lévy processes whose transition density can be expressed in an explicit form, for instance, stable processes and normal inverse Gaussian Lévy processes (see [75, 39]). In addition, Aït-Sahalia and Jacod in [1] established the LAN property for a class of Lévy processes. Recently, Kawai in [38] deals with the particular case of the ergodic ...

... this chapter is to understand and present the methodology for this simple case, which will next be used to prove the LAN property in the non-linear cases where the density function cannot be explicitly written.

1.3.3 LAN property for a jump-diffusion process: drift parameter

In Chapter 4 we address the validity of the LAN property for a jump-diffusion process X^θ = (X^θ_t)_{t≥0}, solution to dX^θ_t = b(θ, ...

... ∫_0^T |σ^{−1}(X_t)(b(θ_0, X_t) − b(θ, X_t))|² dt ... ∫_0^T ∫_{R^d_0} (Ψ(θ_0, X_{t−}, y)/Ψ(θ, X_{t−}, y) − 1) μ_θ(X_{t−}, dy) dt ...

2.4 LAN property for ergodic diffusion processes with jumps

In this section, we seek sufficient conditions in order for the LAN property to hold when the diffusion process with jumps X^θ of (2.1) is ergodic, as a consequence of Theorem 2.2.4. Let X^θ = (X_t)_{t≥0} be the solution to equation (2.1), ...

... the case of the simple Lévy process, or by applying a discrete-time ergodic theorem (Lemmas 4.2.9, 5.2.16) in the non-linear cases. As a result, the ergodicity condition is needed in Chapters 4-5.

Chapitre 2 — LAMN property for continuous observations of diffusion processes with jumps

In this chapter we consider a diffusion process with jumps whose drift and jump coefficient depend on an unknown parameter. We ...

... process with jumps (2.2) is a semimartingale. Therefore, Luschgy's theorem applies and one can derive sufficient conditions on the coefficients in order to have the LAMN property. The aim of this chapter is to present a proof of the LAMN property for the solution X^θ of (2.1) by following the proof of Luschgy but applying the central limit theorem for multivariate ...

... continuous diffusion processes, and by Aït-Sahalia and Jacod [1, 2], Shimizu and Yoshida [69], Shimizu [67], Ogihara and Yoshida [59], Masuda [55], Kawai [38], Kawai and Masuda [39], Clément, Delattre and Gloter [10, 11] for jump-diffusion processes. In the other direction, the statistical inference for stochastic processes with continuous-time observations has been widely developed during the last forty
this chapter is to understand and present the methodology for this simple case, which will be next used to prove the LAN property in the non-linear cases where the density function cannot be explicitly written 1.3.3 LAN property for a jump- diffusion process : drift parameter In Chapter 4 we address the validity of the LAN property for a jump- diffusion process X θ = solution to (Xtθ )t≥0 dXtθ = b(θ,... , y) Rd0 |σ −1 (Xt ) (b(θ0 , Xt ) − b(θ, Xt )) |2 dt 0 Ψ(θ0 , Xt− , y) − 1 µθ (Xt− , dy)dt Ψ(θ, Xt− , y) Rd0 0 T LAN property for ergodic diffusion processes with jumps In this section, we seek sufficient conditions in order for the LAN property to hold when the diffusion process with jumps X θ (2.1) is ergodic, as a consequence of Theorem 2.2.4 Let X θ = (Xt )t≥0 be the solution to equation (2.1),... the case of simple Lévy process or by applying a discrete time ergodic theorem (Lemmas 4.2.9, 5.2.16) in the non-linear cases As a result, the ergodicity condition is needed in Chapters 4-5 Chapitre 2 LAMN property for continuous observations of diffusion processes with jumps In this chapter we consider a diffusion process with jumps whose drift and jump coefficient depend on an unknown parameter We... process with jumps (2.2) is 2.2 LAMN property for jump- diffusion processes 21 a semimartingale Therefore, Luschgy’s theorem applies and one can derive sufficient conditions on the coefficients in order to have the LAMN property The aim of this chapter is to present a proof of the LAMN property for the solution X θ of (2.1) by following the proof of Luschgy but applying the Central Limit theorem for multivariate... continuous diffusion processes, and by Aït-Sahalia and Jacod [1, 2], Shimizu and Yoshida [69], Shimizu [67], Ogihara and Yoshida [59], Masuda [55], Kawai [38], Kawai and Masuda [39], Clément, Delattre and Gloter [10, 11] for jump- diffusion processes In the other direction, the statistical inference for stochastic processes with continuous-time observations has been widely developed during the last forty