Limit theorems for p-variations of stable Lévy processes

– Diplomarbeit –

Humboldt-Universität zu Berlin
Mathematisch-Naturwissenschaftliche Fakultät II
Institut für Mathematik

submitted by Claudia Hein, born 11 February 1982 in Hoyerswerda
Supervisor: Prof. Dr. Peter Imkeller
Berlin, October 2007

Acknowledgements

I would like to thank my supervisor Prof. Peter Imkeller for the excellent mentoring and for providing many useful ideas throughout the entire thesis. I would also like to thank Dr. Ilya Pavlyukevich for inspiring me with his many meaningful suggestions and for taking the time and effort to support me. I am very grateful for the love and patience of my family.

Contents

1 Introduction
2 Stochastic processes
  2.1 Lévy processes
    2.1.1 Definition and characterisation
    2.1.2 Examples and stable processes
    2.1.3 Properties of stable processes
  2.2 Convergence of processes
    2.2.1 The Skorokhod topology
    2.2.2 Criteria for convergence
3 p-variation
  3.1 p-variation
    3.1.1 Definition and examples
    3.1.2 Finiteness in case of stable processes
  3.2 Limit behaviour of the Brownian motion
4 The results – p-variation of stable processes
  4.1 Central limit theorem for random variables with infinite variance
  4.2 p-variation for stable processes
  4.3 Adding processes
5 Proofs for section 4.2
  5.1 Proof of theorem 4.6 – finite dimensional distributions
  5.2 Proof of theorem 4.6 – tightness
6 Proofs for section 4.3

1 Introduction

Modelling the climate has been of great interest lately, and paleoclimatic data is very helpful for understanding its dynamics. In particular, time series from the Greenland ice are the subject of studies like [3]. These studies also state that a diffusion driven by a Brownian motion is not an appropriate model for the data: the temperature, which can be obtained from the analysis of the calcium signal in the ice, shows quite abrupt changes, so it is natural to assume that the underlying stochastic process has jumps. One suggestion is to use α-stable Lévy processes instead. One assumes that the data can be modelled as a stochastic process of the form
\[
X_t^\varepsilon = x_0 - \int_0^t U(X_{s-})\,ds + \varepsilon L_t,
\]
where $L$ is a stable Lévy process and $\varepsilon$ is small. It is not clear what $U$ looks like, although one can conjecture it to be a double-well potential: the temperature in the time series mostly remains in the neighbourhood of one of two states, and the time it takes to change from the warm state to the cold one or vice versa is very short. This kind of diffusion has been studied in [13], [14] and [12], so properties like the exit times from the wells are well known by now.

One remaining problem is to calibrate the model. The aim of this work is to develop a method to extract the characteristics of the underlying stochastic process, which in this model is equivalent to detecting the parameter α. Though the function $U$ is often assumed to be a double-well potential, there is no evidence for this, so the assumptions on $U$ should be as weak as possible.

Our approach is based on [6]. The idea is to analyse one property of the diffusion that is determined mainly by the underlying driving process. Comparing the Lévy process and a Lebesgue integral, one noticeable difference is the smoothness of the paths. As the p-variation can be regarded as a 'measure' of smoothness, this is the property we will study here. More precisely, we analyse the limit behaviour of
\[
V_p^n(X)_t = \sum_{i=1}^{[nt]} \big|X_{i/n} - X_{(i-1)/n}\big|^p
\]
for a stochastic process $X$ and positive $t$. It is well known that Lebesgue integrals have finite variation, so the limit is zero for every $p$ greater than one.
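The statistic $V_p^n$ is straightforward to compute from discretely observed data. The following sketch is an illustration of the definition only (it is not part of the thesis; the function name, grid size and sample paths are our own choices): it evaluates $V_p^n$ on a regular grid and contrasts a Lipschitz path, whose p-variation vanishes for $p > 1$, with a Brownian path, whose 2-variation approximates $t$.

```python
import numpy as np

def p_variation(path, p):
    """V_p^n(X)_t along the whole sample: cumulative sum of |X_{i/n} - X_{(i-1)/n}|^p."""
    return np.cumsum(np.abs(np.diff(path)) ** p)

rng = np.random.default_rng(0)
n, t = 10_000, 1.0
grid = np.linspace(0.0, t, n + 1)

smooth = np.sin(2.0 * np.pi * grid)                       # Lipschitz path: V_p^n -> 0 for p > 1
brownian = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(t / n), n))))

for p in (1.0, 2.0, 3.0):
    print(p, p_variation(smooth, p)[-1], p_variation(brownian, p)[-1])
```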
We will see that this is not true for stable processes, where the limit is always positive because of the jumps of the process. The question is whether, in the sum of these two processes, the limit is also dominated by the Lévy process.

The first results in this work provide an understanding of the p-variation of α-stable Lévy processes themselves. With the help of a central limit theorem we can see that the limit of $V_p^n(L)$ for a stable process $L$ is again a stable process, in the sense of weak convergence in the Skorokhod topology. The stability index is determined by the stability index of $L$ and by $p$. Having understood the behaviour of the p-variation of stable processes, we then concentrate on the conditions that other stochastic processes have to satisfy so as not to interfere with the limit of the stable process. One sufficient condition is that the limit of $V_p^n$ be zero for some positive $p$. This is exactly what we were looking for, since Lebesgue integrals satisfy this condition. Now that we have a limit theorem for the diffusion that depends only on the underlying stochastic process, it is possible to develop statistical tests to identify this process. As $U$ only influences the integral part of the diffusion, there is no need to make any assumption on this function for these tests.

The structure of this work is as follows. Chapter two gives a short introduction to the theory of stochastic processes in general and Lévy processes in particular; this is followed by an explanation of the convergence of stochastic processes, with some criteria for weak convergence in the Skorokhod topology. The subject of chapter three is the p-variation: after the definition and some basic examples we discuss the condition for finiteness in the case of α-stable processes, and afterwards we give some results on the limit behaviour of the Brownian motion for a better classification of the later results. Finally, in chapter four we develop the results for stable processes. After studying the limit behaviour of stable processes themselves we begin to add other processes. Note that all conditions on these processes concern their smoothness; there are no conditions on independence from the stable process. Hence these theorems are applicable to our diffusion, where the two terms of course depend heavily on each other. For better readability we omit the proofs in this chapter; they are the content of chapters five and six, together with an introduction to a more general central limit theorem.

[...]

5.2 Proof of theorem 4.6 – tightness

So the inequality is proven for $\theta < \theta_0/2$. If $\theta_0/2 \le \theta < \theta_0$ it is even easier, because for $n \ge n_0$ we can directly find $n'$ with the same properties as above such that
\[
P\Big(\Big|\sum_{i=1}^{[n\theta]}\big(|\Delta_n L_i|^p - n^{-p/\alpha}\,\mathbb{E}|L_1|^p\big)\Big| \ge \tfrac{\varepsilon}{3}\Big)
= P\Big(\Big|\sum_{i=1}^{[n'\theta_0]}\big(|\Delta_n L_i|^p - n^{-p/\alpha}\,\mathbb{E}|L_1|^p\big)\Big| \ge \tfrac{\varepsilon}{3}\Big)
\le P\Big(\Big|\sum_{i=1}^{[n'\theta_0]}\big(|\Delta_{n'} L_i|^p - (n')^{-p/\alpha}\,\mathbb{E}|L_1|^p\big)\Big| \ge \tfrac{\varepsilon}{3}\Big)
\le \frac{\delta}{3} < \delta.
\]
Altogether we have demonstrated that for $\delta > 0$ and $\varepsilon > 0$ there exist $\theta_0 > 0$ and $n_0 \in \mathbb{N}$ such that
\[
P\Big(\Big|\sum_{i=1}^{[n\theta]}\big(|\Delta_n L_i|^p - n^{-p/\alpha}\,\mathbb{E}|L_1|^p\big)\Big| \ge \tfrac{\varepsilon}{3}\Big) \le \delta, \qquad n \ge n_0,\ \theta < \theta_0.
\]
This implies that the same inequality holds if the integer $[n\theta]$ is replaced by any other integer $k$, $1 \le k \le [n\theta]$, as we can always find a $\theta' < \theta$ such that $k = [n\theta']$. Overall we have demonstrated that
\[
\lim_{\theta \downarrow 0}\,\limsup_{n\to\infty}\ \sup_{\substack{S,T \in \mathcal{S}_N^n \\ S \le T \le S+\theta}} P\big(|X_T^n - X_S^n| \ge \varepsilon\big) \le \delta.
\]
So condition 2 of theorem 2.6 is satisfied, as this inequality holds for every $\delta > 0$:
\[
\lim_{\theta \downarrow 0}\,\limsup_{n\to\infty}\ \sup_{\substack{S,T \in \mathcal{S}_N^n \\ S \le T \le S+\theta}} P\big(|X_T^n - X_S^n| \ge \varepsilon\big) = 0.
\]
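The argument above repeatedly uses the self-similarity of the α-stable process, which makes the increment $\Delta_n L_i = L_{i/n} - L_{(i-1)/n}$ equal in law to $n^{-1/\alpha} L_1$, so that $\mathbb{E}|\Delta_n L_i|^p = n^{-p/\alpha}\,\mathbb{E}|L_1|^p$ for $p < \alpha$. A quick numerical sanity check of this scaling (our own illustration, assuming SciPy's levy_stable parameterisation with $\beta = 0$; not part of the thesis):

```python
import numpy as np
from scipy.stats import levy_stable

alpha, p, n = 1.5, 1.0, 200            # p < alpha, so E|L_1|^p is finite
rng = np.random.default_rng(1)

L1 = levy_stable.rvs(alpha, 0.0, size=200_000, random_state=rng)        # samples of L_1
incr = levy_stable.rvs(alpha, 0.0, scale=n ** (-1.0 / alpha),
                       size=200_000, random_state=rng)                  # samples of Delta_n L_i

print(np.mean(np.abs(incr) ** p))                     # Monte Carlo estimate of E|Delta_n L_i|^p
print(n ** (-p / alpha) * np.mean(np.abs(L1) ** p))   # n^{-p/alpha} E|L_1|^p, should agree
```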
The first condition of theorem 2.6 follows by the same arguments, because for an $\alpha/p$-stable random variable $Z$ and $\varepsilon > 0$
\[
\lim_{K\to\infty} P(|Z| > K) = \lim_{\theta \downarrow 0} P\big(\theta^{p/\alpha}|Z| > \varepsilon\big) = 0.
\]
Again, for $\delta > 0$ and $N \in \mathbb{N}$ we can find $K \in \mathbb{R}_+$ and $n_0 \in \mathbb{N}$ such that for $n \ge n_0$
\[
P\Big(\sup_{1 \le k \le Nn}\Big|\sum_{i=1}^{k}\big(|\Delta_n L_i|^p - n^{-p/\alpha}\,\mathbb{E}|L_1|^p\big)\Big| > K\Big) \le \delta.
\]

Finally we have to show tightness for $p = \alpha$. We prove the second condition of theorem 2.6 first. Again we have the inequality
\[
\sup_{\substack{S,T \in \mathcal{S}_N^n \\ S \le T \le S+\theta}} P\Big(\Big|\sum_{i=nS+1}^{nT}\big(|\Delta_n L_i|^\alpha - \mathbb{E}\sin(n^{-1}|L_1|^\alpha)\big)\Big| \ge \varepsilon\Big)
\le \max_{1 \le k \le [n\theta]} P\Big(\Big|\sum_{i=1}^{k}\big(|\Delta_n L_i|^\alpha - \mathbb{E}\sin(n^{-1}|L_1|^\alpha)\big)\Big| \ge \tfrac{\varepsilon}{3}\Big).
\]
So all we have to show is that for every $\varepsilon > 0$
\[
\lim_{\theta \downarrow 0}\,\limsup_{n\to\infty}\ \max_{1 \le k \le [n\theta]} P(|X_{nk}| \ge \varepsilon) = 0
\quad\text{with}\quad
X_{nk} := \sum_{i=1}^{k}\big(|\Delta_n L_i|^\alpha - \mathbb{E}\sin(n^{-1}|L_1|^\alpha)\big), \qquad k, n \in \mathbb{N}.
\]
Lemma 5.2 and the following remark provide an upper bound for the tail of $X_{nk}$, given by
\[
P(|X_{nk}| \ge \varepsilon) \le \sup_{\lambda \in [0,2/\varepsilon]} \big|1 - \mathbb{E}\,e^{i\lambda X_{nk}}\big|.
\]
$X_{nk}$ is a sum of i.i.d. random variables, so its characteristic function can be written as a product of characteristic functions. Hence, using $|\Delta_n L_i|^\alpha \overset{d}{=} n^{-1}|L_1|^\alpha$, for $\lambda \ge 0$
\[
\mathbb{E}\,e^{i\lambda X_{nk}}
= \Big(\mathbb{E}\,e^{i\lambda\left(n^{-1}|L_1|^\alpha - \mathbb{E}\sin(n^{-1}|L_1|^\alpha)\right)}\Big)^{k}
= \exp\Big[k \ln \mathbb{E}\,e^{i\lambda\left(n^{-1}|L_1|^\alpha - \mathbb{E}\sin(n^{-1}|L_1|^\alpha)\right)}\Big]
= \exp\Big[k \ln \frac{\mathbb{E}\,e^{i\lambda n^{-1}|L_1|^\alpha}}{e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}\Big]
= \exp\Big[k \ln\Big(1 + \frac{\mathbb{E}\,e^{i\lambda n^{-1}|L_1|^\alpha} - e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}{e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}\Big)\Big].
\]
We know that
\[
\ln(1+x) = \sum_{i=1}^{\infty} \frac{(-1)^{i+1} x^i}{i} =: x + p(x), \qquad |x| < 1.
\]
For $|x| \le 1/2$ there is a constant $c_1$ such that $|p(x)| \le c_1 |x|^2$. From lemma 5.4 we know that
\[
\Big|\frac{\mathbb{E}\,e^{i\lambda n^{-1}|L_1|^\alpha} - e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}{e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}\Big|
= \Big|\mathbb{E}\,e^{i\lambda n^{-1}|L_1|^\alpha} - e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}\Big| \le C_{\alpha,n}(\varepsilon,\theta),
\]
with $\lim_{n\to\infty} C_{\alpha,n}(\varepsilon,\theta) = 0$. So one can easily find $n_0(\alpha,\varepsilon,\theta)$ such that this quotient is bounded by $1/2$ in absolute value for $n \ge n_0$. We also know that
\[
e^{x} = \sum_{i=0}^{\infty} \frac{x^i}{i!} =: 1 + x + q(x), \qquad x \in \mathbb{C}.
\]
Again we can find a constant $c_2$ such that $|q(x)| \le c_2 |x|^2$ for $|x| \le 1$. For the sake of simplicity we define
\[
\psi_{n,\lambda} := \frac{\mathbb{E}\,e^{i\lambda n^{-1}|L_1|^\alpha} - e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}{e^{i\lambda\,\mathbb{E}\sin(n^{-1}|L_1|^\alpha)}}.
\]
Putting everything together we have
\[
P(|X_{nk}| \ge \varepsilon)
\le \sup_{\lambda \in [0,2/\varepsilon]} \big|1 - \mathbb{E}\,e^{i\lambda X_{nk}}\big|
= \sup_{\lambda \in [0,2/\varepsilon]} \big|1 - \exp[k\ln(1+\psi_{n,\lambda})]\big|
= \sup_{\lambda \in [0,2/\varepsilon]} \big|k\psi_{n,\lambda} + k\,p(\psi_{n,\lambda}) + q\big(k\psi_{n,\lambda} + k\,p(\psi_{n,\lambda})\big)\big|
\le \sup_{\lambda \in [0,2/\varepsilon]} \Big(k|\psi_{n,\lambda}| + k|p(\psi_{n,\lambda})| + \big|q\big(k\psi_{n,\lambda} + k\,p(\psi_{n,\lambda})\big)\big|\Big).
\]
For $n \ge \max(n_0, N_\alpha(\varepsilon,\theta))$, with $N_\alpha(\varepsilon,\theta)$ as defined in lemma 5.4, we already know that
\[
k|\psi_{n,\lambda}| + k|p(\psi_{n,\lambda})|
\le \theta(n+1)|\psi_{n,\lambda}| + c_1\theta(n+1)|\psi_{n,\lambda}|^2
\le \theta(n+1)C_{\alpha,n}(\varepsilon,\theta) + c_1\theta(n+1)C_{\alpha,n}(\varepsilon,\theta)^2.
\]
In lemma 5.4 we have shown that this term converges to zero if we first let $n \to \infty$ and then $\theta \to 0$; in particular it becomes smaller than one. So for $\theta < \theta_0$ and $n \ge \max(n_0, N_\alpha(\varepsilon,\theta_0))$
\[
\big|q\big(k\psi_{n,\lambda} + k\,p(\psi_{n,\lambda})\big)\big| \le c_2\big|k\psi_{n,\lambda} + k\,p(\psi_{n,\lambda})\big|^2,
\]
which is bounded by a term independent of $\lambda$ and $k$ and converges to zero as well. Thus we have demonstrated that for $\varepsilon > 0$
\[
\lim_{\theta \to 0}\,\limsup_{n\to\infty}\ \max_{1 \le k \le [n\theta]} P(|X_{nk}| \ge \varepsilon) = 0,
\]
so that the second condition of theorem 2.6 is shown. To prove the first condition we proceed in exactly the same way as for the second condition, taking advantage of the fact that for fixed $N \in \mathbb{N}$
\[
\lim_{K\to\infty}\lim_{n\to\infty} C_{\alpha,n}(K,N)\,N n = 0
\]
holds due to lemma 5.4.

6 Proofs for section 4.3

To prove theorem 4.7 we need the following inequality.

Lemma 6.1. Let $a, b, c \in \mathbb{R}_+$ with $0 \le a, b \le c$. Then for $p \ge 1$
\[
|a^p - b^p| \le c^{p-1}\,p\,|a - b|.
\]
Proof. For $x \ge 0$ define the function $f(x) := x^p$. Then $f'(x) \le 1$ for every $x \in [0, p^{-1/(p-1)}]$, so for $x, y \in [0, p^{-1/(p-1)}]$
\[
|x^p - y^p| = |f(x) - f(y)| \le |x - y|.
\]
If we now take $a, b \le c$ we can use this inequality and obtain
\[
|a^p - b^p|
= c^p\,p^{p/(p-1)}\,\Big|\Big(\frac{a}{c\,p^{1/(p-1)}}\Big)^p - \Big(\frac{b}{c\,p^{1/(p-1)}}\Big)^p\Big|
\le c^p\,p^{p/(p-1)}\,\frac{1}{c\,p^{1/(p-1)}}\,|a - b|
= c^{p-1}\,p\,|a - b|. \qquad\Box
\]
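Lemma 6.1 is elementary, and a brute-force numerical check is easy to run. The following snippet (ours, purely illustrative; the sampled ranges are arbitrary) draws random triples $0 \le a, b \le c$ and exponents $p \ge 1$ and verifies the bound:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100_000):
    c = rng.uniform(0.1, 10.0)
    a, b = rng.uniform(0.0, c, size=2)
    p = rng.uniform(1.0, 5.0)
    # Lemma 6.1: |a^p - b^p| <= p * c^(p-1) * |a - b|  (small tolerance for rounding)
    assert abs(a ** p - b ** p) <= p * c ** (p - 1) * abs(a - b) + 1e-12
print("inequality held for all sampled triples")
```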
Proof of theorem 4.7. Again we will apply theorems 4.6 and 2.7. In order to do so we have to deal with the cases in which the increments of the stable process $L$ become large. Fix $N \le T$ and define the sets
\[
J_c^n(\omega) := \{\, i \in [0, nN] : |\Delta_n L_i| > c \,\},
\qquad
A_c^n(j) := \{\, \omega \in \Omega : |J_c^n(\omega)| = j \,\},
\]
with some $c \in \mathbb{R}_+$. This means that $A_c^n(j)$ is the set on which the number of 'large' increments equals $j$.

We now pick $\varepsilon > 0$ and split the sum into summands with large increments and bounded ones:
\[
P\Big(\sup_{0 \le t \le N} \big|V_p^n(L+Y)_t - V_p^n(L)_t\big| > \varepsilon\Big)
\le P\Big(\sum_{i=1}^{nN} \big||\Delta_n(L+Y)_i|^p - |\Delta_n L_i|^p\big| > \varepsilon\Big)
\]
\[
= \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i=1}^{nN} \big||\Delta_n(L+Y)_i|^p - |\Delta_n L_i|^p\big| > \varepsilon\Big\} \cap A_c^n(j)\Big)
\]
\[
= \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i\in J_c^n} \big||\Delta_n(L+Y)_i|^p - |\Delta_n L_i|^p\big| + \sum_{i\notin J_c^n} \big||\Delta_n(L+Y)_i|^p - |\Delta_n L_i|^p\big| > \varepsilon\Big\} \cap A_c^n(j)\Big)
\]
\[
\le \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i\in J_c^n} \big||\Delta_n(L+Y)_i|^p - |\Delta_n L_i|^p\big| > \tfrac{\varepsilon}{2}\Big\} \cap A_c^n(j)\Big)
+ \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i\notin J_c^n} \big||\Delta_n(L+Y)_i|^p - |\Delta_n L_i|^p\big| > \tfrac{\varepsilon}{2}\Big\} \cap A_c^n(j)\Big).
\]
Now let $D^{(1)}(n, c_0)$ be the first sum in this last term and let $D^{(2)}(n, c_0)$ be the second sum. For an arbitrary fixed $\delta > 0$ we can find a $c_0 > 0$ and an $n_0 \in \mathbb{N}$ such that
\[
D^{(1)}(n, c_0) + D^{(2)}(n, c_0) \le \delta \qquad \forall\, n \ge n_0,
\]
as we will prove. Indeed, let $K := K(\delta/4)$ be a positive number such that
\[
P\big(|Y_s(\omega) - Y_t(\omega)| \le K|s-t| \ \ \forall\, s, t \in [0,T]\big) \ge 1 - \tfrac{\delta}{4}.
\]
We call the set where $Y$ satisfies this Lipschitz condition $B$. Now we can split $D^{(2)}(n,c)$ again, by $D^{(2)}(n,c) = D^{(2,1)}(n,c) + D^{(2,2)}(n,c)$, where in the first part we intersect with $B$ and in the second one with its complement. We know that
\[
D^{(2,2)}(n,c) \le \sum_{j=0}^{nN} P\big(A_c^n(j) \cap B^c\big) = P(B^c) \le \tfrac{\delta}{4}.
\]
In the remainder of the proof we can therefore assume that $Y$ is Lipschitz with constant $K$ and that the increments of $L$ are bounded by $c$. So let $\omega \in B$ and $c > 0$. Applying lemma 6.1 and the Lipschitz continuity of $Y(\omega)$ (note that then $|\Delta_n Y_i| \le K/n$, so both $|\Delta_n L_i|$ and $|\Delta_n(L+Y)_i|$ are bounded by $c + K/n$ for $i \notin J_c^n(\omega)$), we can conclude that
\[
\sum_{i\notin J_c^n(\omega)} \big||\Delta_n(L+Y)_i(\omega)|^p - |\Delta_n L_i|^p\big|
\le \sum_{i\notin J_c^n(\omega)} p\Big(c + \frac{K}{n}\Big)^{p-1} \big||\Delta_n(L+Y)_i(\omega)| - |\Delta_n L_i|\big|
\le \sum_{i\notin J_c^n(\omega)} p\Big(c + \frac{K}{n}\Big)^{p-1} |\Delta_n Y_i|
\le \sum_{i\notin J_c^n(\omega)} p\Big(c + \frac{K}{n}\Big)^{p-1} \frac{K}{n}
\le p\Big(c + \frac{K}{n}\Big)^{p-1} K N.
\]
So we obtain
\[
D^{(2,1)}(n,c)
= \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i\notin J_c^n} \big||\Delta_n L_i + \Delta_n Y_i|^p - |\Delta_n L_i|^p\big| > \tfrac{\varepsilon}{2}\Big\} \cap B \cap A_c^n(j)\Big)
\le \sum_{j=0}^{nN} P\Big(\Big\{p\Big(c + \frac{K}{n}\Big)^{p-1} K N > \tfrac{\varepsilon}{2}\Big\} \cap B \cap A_c^n(j)\Big)
\le P\Big(p\Big(c + \frac{K}{n}\Big)^{p-1} K N > \tfrac{\varepsilon}{2}\Big) = 0
\]
for $n \ge n'$ and $c = c'$, where
\[
c' := \frac{1}{2}\Big(\frac{\varepsilon}{2pKN}\Big)^{1/(p-1)}
\qquad\text{and}\qquad
n' := 2K\Big(\frac{2pKN}{\varepsilon}\Big)^{1/(p-1)},
\]
so that $c' + K/n \le (\varepsilon/(2pKN))^{1/(p-1)}$ for $n \ge n'$.

It remains to analyse $D^{(1)}(n, c')$. We use the same technique as for $D^{(2)}$, splitting $D^{(1)} = D^{(1,1)} + D^{(1,2)}$, where $D^{(1,1)}$ denotes the part intersected with $B^c$ and $D^{(1,2)}$ the part intersected with $B$; as before, $D^{(1,1)}(n, c') \le P(B^c) \le \delta/4$. So again we take an $\omega \in B$ to handle the remaining term. We use the fact that for $0 \le p \le 2$ and $|x| \le 1/2$ the inequality
\[
(1 + |x|)^p - 1 \le 3|x|
\]
holds, and that for $\omega \in B$ and $i \in J_{c'}^n(\omega)$, defined as above with fixed $c'$,
\[
\frac{|\Delta_n Y_i(\omega)|}{|\Delta_n L_i(\omega)|} \le \frac{K}{n c'} \le \frac{1}{2} \qquad\text{for $n$ large enough.}
\]
Again with $\omega \in B$ we can conclude that
\[
\sum_{i\in J_{c'}^n(\omega)} \big||\Delta_n(L+Y)_i(\omega)|^p - |\Delta_n L_i(\omega)|^p\big|
\le \sum_{i\in J_{c'}^n(\omega)} |\Delta_n L_i(\omega)|^p \Big(\Big(1 + \Big|\frac{\Delta_n Y_i(\omega)}{\Delta_n L_i(\omega)}\Big|\Big)^p - 1\Big)
\le \sum_{i\in J_{c'}^n(\omega)} |\Delta_n L_i(\omega)|^p \, 3\Big|\frac{\Delta_n Y_i(\omega)}{\Delta_n L_i(\omega)}\Big|
\le \sum_{i\in J_{c'}^n(\omega)} |\Delta_n L_i(\omega)|^p \, \frac{3K}{n c'}.
\]
So we start dealing with $D^{(1,2)}$ by applying this inequality:
\[
D^{(1,2)} \le \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i\in J_{c'}^n} |\Delta_n L_i|^p \frac{3K}{n c'} > \tfrac{\varepsilon}{2}\Big\} \cap A_{c'}^n(j)\Big)
= \sum_{j=0}^{nN} P\Big(\Big\{\sum_{i\in J_{c'}^n} |\Delta_n L_i|^p > \frac{\varepsilon n c'}{6K}\Big\} \cap A_{c'}^n(j)\Big).
\]
Since this sum depends only on $L$, which has independent and stationary increments, for $j \in \{1, \dots, [nt]\}$ and $\omega \in A_{c'}^n(j)$ we can assume that $|\Delta_n L_i| > c'$ for $1 \le i \le j$ and $|\Delta_n L_i| \le c'$ for $j+1 \le i \le [nt]$, and multiply the probability by the binomial coefficient. For simplicity of notation define
\[
\tilde{A}_{c'}^n(j) := \{\, \omega \in \Omega : |\Delta_n L_i| > c',\ 1 \le i \le j;\ |\Delta_n L_i| \le c',\ j+1 \le i \le nN \,\},
\]
the set where the first $j$ increments are large and the others are small.
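The decomposition into $J_c^n$ and $A_c^n(j)$ simply separates the few increments carrying a big jump of $L$ from the many small ones. The following snippet (ours; it uses SciPy's levy_stable and arbitrary parameter values, and is not part of the proof) makes the split concrete by thresholding simulated increments:

```python
import numpy as np
from scipy.stats import levy_stable

alpha, p, n, N, c = 1.2, 1.5, 1_000, 1.0, 0.5
rng = np.random.default_rng(3)

incr = levy_stable.rvs(alpha, 0.0, scale=n ** (-1.0 / alpha),
                       size=int(n * N), random_state=rng)    # Delta_n L_i, i = 1, ..., nN
large = np.abs(incr) > c                                      # indices belonging to J_c^n

print("|J_c^n| =", int(large.sum()))                          # number of 'large' increments
print("share of V_p^n carried by them:",
      np.sum(np.abs(incr[large]) ** p) / np.sum(np.abs(incr) ** p))
```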
Again, because of the stationarity of the increments, we can now write
\[
D^{(1,2)}
\le \sum_{j=0}^{nN} \binom{nN}{j} P\Big(\Big\{\sum_{i=1}^{j} |\Delta_n L_i|^p > \frac{\varepsilon n c'}{6K}\Big\} \cap \tilde{A}_{c'}^n(j)\Big)
\le \sum_{j=0}^{nN} \binom{nN}{j}\, j\, P\Big(\Big\{|\Delta_n L_1|^p > \frac{\varepsilon n c'}{6jK}\Big\} \cap \tilde{A}_{c'}^n(j)\Big)
\le \sum_{j=0}^{nN} \binom{nN}{j}\, j\, p_n^{\,j-1} (1 - p_n)^{nN-j}\, P\Big(|\Delta_n L_1|^p > \frac{\varepsilon n c'}{6jK}\Big),
\]
with $p_n := P(|\Delta_n L_1| > c') = P(|L_1| > c' n^{1/\alpha})$. We know that $\alpha > 1$, and so we can use the series expansion given on page 115 of [17], i.e. for $\alpha > 1$
\[
P(|L_1| > x) \sim (\pi\alpha)^{-1} \sum_{n=1}^{\infty} (-1)^{n+1} \frac{\Gamma(\alpha n + 1)}{n!\,n} \sin(n\pi\rho)\, x^{-\alpha n}
\qquad\text{as } x \to \infty,
\]
with a constant $\rho$ that depends on $L_1$. So we can deduce that there exists a constant $\tilde{c} > 0$ (which may depend on α) such that
\[
P(|L_1| > x) = \tilde{c}\, x^{-\alpha} + O\big(x^{-2\alpha}\big) \qquad\text{for } x \to \infty.
\]
So for $j \le n$, i.e. $n/j \ge 1$, it follows asymptotically for $n \to \infty$ that
\[
P\Big(|L_1| > \Big(\frac{\varepsilon n c'}{6jK}\Big)^{1/p} n^{1/\alpha}\Big)
= \frac{\tilde{c}}{n}\Big(\frac{\varepsilon n c'}{6jK}\Big)^{-\alpha/p}\big(1 + o(1)\big)
\le \frac{2\tilde{c}}{n}\Big(\frac{\varepsilon n c'}{6jK}\Big)^{-\alpha/p}
\]
for $n$ large enough, and moreover
\[
p_n = \frac{\tilde{c}}{(c')^\alpha\, n}\Big(1 + O\big(\tfrac{1}{n}\big)\Big).
\]
Putting everything together (and merging all the constants into $C$) we see that
\[
D^{(1,2)}(n, c')
\le \frac{C}{p_n\, n^{1+\alpha/p}} \sum_{j=0}^{nN} \binom{nN}{j}\, j^{1+\alpha/p}\, p_n^{\,j} (1-p_n)^{nN-j}
\le \frac{C}{p_n\, n^{1+\alpha/p}} \sum_{j=0}^{nN} \binom{nN}{j}\, j^{3}\, p_n^{\,j} (1-p_n)^{nN-j}.
\]
We can calculate this expression directly because it corresponds to the third moment of the binomial distribution. Therefore
\[
\sum_{j=0}^{nN} \binom{nN}{j}\, j^{3}\, p_n^{\,j} (1-p_n)^{nN-j}
= (nN-2)(nN-1)nN\, p_n^3 + 3(nN-1)nN\, p_n^2 + nN\, p_n.
\]
So we can finally show the convergence:
\[
D^{(1,2)}(n, c')
\le \frac{C}{p_n\, n^{1+\alpha/p}}\Big((nN-2)(nN-1)nN\, p_n^3 + 3(nN-1)nN\, p_n^2 + nN\, p_n\Big)
= C n^{-\alpha/p} N \Big((nN-1)(nN-2)\, p_n^2 + 3(nN-1)\, p_n + 1\Big)
\le \frac{\delta}{4}
\]
for $n \ge n_1$ with $n_1$ large enough. We have seen above that $p_n \sim \tilde{c}/n$, so the term in brackets converges to some finite number and the factor in front converges to $0$. Because $\delta > 0$ was arbitrary, we have shown that for every $\varepsilon > 0$ and $N \le T$
\[
\lim_{n\to\infty} P\Big(\sup_{0 \le t \le N} \big|V_p^n(L+Y)_t - V_p^n(L)_t\big| > \varepsilon\Big) = 0,
\]
and so we can apply theorem 2.7. $\Box$

Proof of theorem 4.8. To apply theorem 2.7 we have to show that $\sup_{t \le N} |V_p^n(L+Y)_t - V_p^n(L)_t|$ converges to $0$ in probability for every $N \in \mathbb{N}$. So let us first suppose that $p = m + q$ with $m \in \mathbb{N}$ and $q \in [0,1)$. Then we have
\[
\sup_{t \le N} \big|V_p^n(L+Y)_t - V_p^n(L)_t\big|
= \sup_{t \le N} \Big|\sum_{i=1}^{[nt]} \big(|\Delta_n(L+Y)_i|^{m+q} - |\Delta_n L_i|^{m+q}\big)\Big|
\]
\[
\le \sup_{t \le N} \sum_{i=1}^{[nt]} \Big(|\Delta_n(L+Y)_i|^{q} \sum_{k=0}^{m} \binom{m}{k} |\Delta_n L_i|^{k} |\Delta_n Y_i|^{m-k} - |\Delta_n L_i|^{m+q}\Big)
\]
\[
\le \sum_{k=0}^{m-1} \binom{m}{k} \sum_{i=1}^{nN} |\Delta_n(L+Y)_i|^{q} |\Delta_n L_i|^{k} |\Delta_n Y_i|^{m-k}
+ \sum_{i=1}^{nN} \big(|\Delta_n(L+Y)_i|^{q} |\Delta_n L_i|^{m} - |\Delta_n L_i|^{m+q}\big)
\]
\[
\le \sum_{k=0}^{m-1} \binom{m}{k} \sum_{i=1}^{nN} |\Delta_n L_i|^{k+q} |\Delta_n Y_i|^{m-k}
+ \sum_{k=0}^{m-1} \binom{m}{k} \sum_{i=1}^{nN} |\Delta_n L_i|^{k} |\Delta_n Y_i|^{m-k+q}
+ \sum_{i=1}^{nN} |\Delta_n L_i|^{m} \big(|\Delta_n(L+Y)_i|^{q} - |\Delta_n L_i|^{q}\big)
\]
\[
\le \sum_{k=0}^{m-1} \binom{m}{k} \sum_{i=1}^{nN} |\Delta_n L_i|^{k+q} |\Delta_n Y_i|^{m-k}
+ \sum_{k=0}^{m-1} \binom{m}{k} \sum_{i=1}^{nN} |\Delta_n L_i|^{k} |\Delta_n Y_i|^{m-k+q}
+ \sum_{i=1}^{nN} |\Delta_n L_i|^{m} |\Delta_n Y_i|^{q}\, \mathbf{1}_{\{q>0\}}.
\]
So we obtain a finite sum of terms of the form
\[
\sum_{i=1}^{[nt]} |\Delta_n L_i|^{a} |\Delta_n Y_i|^{p-a}, \qquad a \in (0, p).
\]
Applying the Hölder inequality we get
\[
\sum_{i=1}^{[nt]} |\Delta_n L_i|^{a} |\Delta_n Y_i|^{p-a}
\le \Big(\sum_{i=1}^{[nt]} |\Delta_n L_i|^{p}\Big)^{a/p} \Big(\sum_{i=1}^{[nt]} |\Delta_n Y_i|^{p}\Big)^{(p-a)/p}
\;\overset{P}{\longrightarrow}\; 0,
\]
because the first factor is a.s. bounded by a finite random variable ($p > \alpha$) and the second converges to $0$ because of the assumption on $Y$. The number of summands of this form depends only on $p$. So altogether one can conclude that for $N \in \mathbb{N}$ and $\varepsilon > 0$
\[
\lim_{n\to\infty} P\Big(\sup_{t \le N} \big|V_p^n(L+Y)_t - V_p^n(L)_t\big| > \varepsilon\Big) = 0.
\]
So the assumptions of theorem 2.7 are satisfied, and together with theorem 4.6 we get the desired convergence. $\Box$
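Both proofs say that adding a sufficiently smooth process $Y$ does not move the p-variation of $L$. A small Monte Carlo experiment (our own sketch; the parameter choices and the use of SciPy's levy_stable are assumptions, not part of the thesis) illustrates the conclusion for a Lipschitz perturbation:

```python
import numpy as np
from scipy.stats import levy_stable

alpha, p = 1.5, 1.8                      # p > alpha, as in the setting of theorem 4.8
rng = np.random.default_rng(4)

for n in (100, 1_000, 10_000):
    grid = np.arange(1, n + 1) / n
    dL = levy_stable.rvs(alpha, 0.0, scale=n ** (-1.0 / alpha),
                         size=n, random_state=rng)            # increments of L
    dY = np.diff(np.sin(2.0 * np.pi * grid), prepend=0.0)     # increments of a Lipschitz Y
    diff = np.abs(np.cumsum(np.abs(dL + dY) ** p) - np.cumsum(np.abs(dL) ** p))
    print(n, float(diff.max()))          # sup_t |V_p^n(L+Y)_t - V_p^n(L)_t|, shrinks with n
```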
Bibliography

[1] D. Applebaum, Lévy processes and stochastic calculus, Cambridge University Press, 2004.
[2] P. Billingsley, Convergence of probability measures, John Wiley & Sons, 1968.
[3] P. D. Ditlevsen, Observation of α-stable noise induced millennial climate changes from an ice-core record, Geophysical Research Letters 26 (1999), 1441–1444.
[4] W. Feller, An introduction to probability theory and its applications, volume II, John Wiley & Sons, 1971.
[5] J. Jacod, A. N. Shiryaev, Limit theorems for stochastic processes, 2nd ed., Springer, 2002.
[6] J. M. Corcuera, D. Nualart, J. H. C. Woerner, A functional central limit theorem for the realized power variation of integrated stable processes, Stochastic Analysis and Applications 25 (2007), 169–186.
[7] O. Kallenberg, Foundations of modern probability, 2nd ed., Springer, 2002.
[8] M. Loève, Probability theory I, 4th ed., Springer, 1977.
[9] I. Monroe, On the γ-variation of processes with stationary independent increments, The Annals of Mathematical Statistics 43 (1972), 1213–1220.
[10] O. E. Barndorff-Nielsen, N. Shephard, Realized power variation and stochastic volatility, Bernoulli (2003), 243–265 and 1109–1111.
[11] O. E. Barndorff-Nielsen, N. Shephard, Power variation and time change, Theory of Probability and Its Applications 50 (2006), 1–15.
[12] P. Imkeller, I. Pavlyukevich, Metastable behaviour of small noise Lévy-driven diffusions, ESAIM: Probability and Statistics (to appear).
[13] P. Imkeller, I. Pavlyukevich, First exit times of SDEs driven by stable Lévy processes, Stochastic Processes and their Applications 116 (4) (2006), 611–642.
[14] P. Imkeller, I. Pavlyukevich, Lévy flights: transitions and meta-stability, Journal of Physics A: Mathematical and General 39 (2006), L237–L246.
[15] R. M. Blumenthal, R. K. Getoor, Sample functions of stochastic processes with stationary independent increments, Journal of Mathematics and Mechanics 10 (1961), 493–516.
[16] K. Sato, Lévy processes and infinitely divisible distributions, Cambridge University Press, 1999.
[17] V. V. Uchaikin, V. M. Zolotarev, Chance and stability: Stable distributions and their applications, VSP International Science Publishers, 1999.

Declaration of independent work

I declare that I have written this thesis independently and have used no sources or aids other than those indicated.

Berlin, October 2007

Declaration of consent

I hereby agree that a copy of my Diplom thesis remains in the library of the Institut für Mathematik.

Berlin, October 2007

[...] this process has no jumps. The Brownian motion and the Brownian motion with drift are the only Lévy processes without jumps. The characteristic triple is $(A, \gamma, 0)$ with $\gamma = 0$ if there is no drift.

Example 2.2 (Poisson process). Another process of special interest is the Poisson process. This pure jump process only takes values in $\mathbb{N}_0$ because of its jump height of 1. This process can be generalised by replacing [...]

[...] this value can be interpreted as a measure for the regularity of the sample paths of stochastic processes.

Example 3.1 (Poisson process). The Poisson process is a pure jump process with a finite number of jumps in every finite interval. The jumps have height 1, so the p-variation equals the number of jumps in the observed interval for every $p > 0$. The compound Poisson process has jumps with random heights, [...]

[...] very smooth, so the p-variation is infinite for $p < 2$ and finite for $p \ge 2$.

3.1.2 Finiteness in case of stable processes

We will now concentrate on the behaviour of the p-variation of stable processes. So let $L = (L_t)_{t \ge 0}$ be an α-stable Lévy process with $\alpha \in (0, 2)$. We will see that $\gamma(L)$ only depends on α. At first we introduce another characteristic value of Lévy processes, the Blumenthal–Getoor [...]
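Example 3.1 above is easy to verify numerically: on a grid fine enough that no two jumps fall into the same cell, every increment of a Poisson path is 0 or 1, so the p-variation equals the number of jumps for every $p$. A small check (our own illustration, with arbitrary rate and horizon):

```python
import numpy as np

rng = np.random.default_rng(5)
T, lam, n = 1.0, 7.0, 100_000
jump_times = np.cumsum(rng.exponential(1.0 / lam, size=100))
jump_times = jump_times[jump_times <= T]

grid = np.linspace(0.0, T, n + 1)
poisson_path = np.searchsorted(jump_times, grid, side="right")   # N_t evaluated on the grid

for p in (0.5, 1.0, 3.0):
    print(p, int(np.sum(np.abs(np.diff(poisson_path)) ** p)), "jumps:", len(jump_times))
```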
[...] the p-variation of the Brownian motion. We want to develop conditions for another process that, if added, does not influence the limit behaviour of the approximated p-variation. At first we use the law of large numbers (for example [8], p. 251) to get an impression of the behaviour of $V_p^n$.

Theorem 3.4. Let $W$ be a Brownian motion, $Y$ another stochastic process and $p > 0$. If $Y$ satisfies for $t \ge 0$
\[
n^{-1+p/2}\, V_p^n(Y)_t \overset{P}{\longrightarrow} 0, \qquad n \to \infty,
\]
then
\[
n^{-1+p/2}\, V_p^n(W+Y)_t \overset{P}{\longrightarrow} t\, \mathbb{E}|W_1|^p, \qquad n \to \infty.
\]
Proof. We will first show the convergence for $V_p^n(W)_t$ and then show that the difference $V_p^n(W+Y)_t - V_p^n(W)_t$ converges to zero.

1. So let us consider $V_p^n(W)_t$. This convergence is a simple application of the weak law of large numbers. By the scaling property of the Brownian motion and its independent [...] convergence in probability if the limit is a Dirac measure.

2. To prove the convergence in total we will first assume that $p \le 1$. We can now apply the triangle inequality to show that
\[
\big|n^{-1+p/2} V_p^n(W+Y)_t - t\,\mathbb{E}|W_1|^p\big|
\le n^{-1+p/2}\big|V_p^n(W+Y)_t - V_p^n(W)_t\big| + \big|n^{-1+p/2} V_p^n(W)_t - t\,\mathbb{E}|W_1|^p\big|
\]
\[
\le n^{-1+p/2} \sum_{i=1}^{[nt]} \big||\Delta_n(W+Y)_i|^p - |\Delta_n W_i|^p\big| + \big|n^{-1+p/2} V_p^n(W)_t - t\,\mathbb{E}|W_1|^p\big|
\]
\[
\le n^{-1+p/2} \sum_{i=1}^{[nt]} |\Delta_n Y_i|^p + \big|n^{-1+p/2} V_p^n(W)_t - t\,\mathbb{E}|W_1|^p\big|
= n^{-1+p/2} V_p^n(Y)_t + \big|n^{-1+p/2} V_p^n(W)_t - t\,\mathbb{E}|W_1|^p\big|.
\]
The first part converges to zero in probability by assumption, and the convergence of the second part has been demonstrated in the first part of this proof. So altogether the difference converges to zero. If $p > 1$, we use Minkowski's inequality to get a similar result:
\[
\big|\big(n^{-1+p/2} V_p^n(W+Y)_t\big)^{1/p} - \big(t\,\mathbb{E}|W_1|^p\big)^{1/p}\big|
\le \big|\big(n^{-1+p/2} V_p^n(W+Y)_t\big)^{1/p} - \big(n^{-1+p/2} V_p^n(W)_t\big)^{1/p}\big|
+ \big|\big(n^{-1+p/2} V_p^n(W)_t\big)^{1/p} - \big(t\,\mathbb{E}|W_1|^p\big)^{1/p}\big|
\]
\[
\le \big(n^{-1+p/2} V_p^n(Y)_t\big)^{1/p} + \big|\big(n^{-1+p/2} V_p^n(W)_t\big)^{1/p} - \big(t\,\mathbb{E}|W_1|^p\big)^{1/p}\big|.
\]
Now the same string of arguments as in the first case holds. Additionally we use the continuity and bijectivity of the function $x \mapsto x^p$ on the interval $[0, \infty)$ to get the desired convergence. $\Box$

3.2 Limit behaviour of the Brownian motion

This result [...]
\[
\big(\operatorname{var}(|W_1|^p)\big)^{-1/2}\Big(n^{p/2-1/2}\, V_p^n(W)_t - [nt]\, n^{-1/2}\, \mathbb{E}(|W_1|^p)\Big)
= \sum_{i=1}^{[nt]} \frac{n^{p/2}|\Delta_n W_i|^p - \mathbb{E}(|W_1|^p)}{\big(n \operatorname{var}(|W_1|^p)\big)^{1/2}}
\overset{d}{=} \sum_{i=1}^{[nt]} \frac{|\Delta_1 W_i|^p - \mathbb{E}(|W_1|^p)}{\big(n \operatorname{var}(|W_1|^p)\big)^{1/2}},
\]
to which we can apply theorem 2.5. We know that $(nt - [nt])\, n^{-1/2}\, \mathbb{E}(|W_1|^p) \to 0$ for $n \to \infty$, so the desired result follows. In the next chapter we will see that for α-stable processes with $\alpha < 2$ we can add another process with certain properties [...]

[...] behaviour of the p-variation of a stable process. As in the previous chapter we will see the convergence in distribution and the convergence in probability of $V_p^n$. The idea for these studies comes from [6]. The focus of this paper is the development of limit theorems for the p-variation of integrals with respect to a stable process. We will shortly summarise them and simplify the results by stating the theorems [...]

[...] with the help of the scaling property of stable processes we have
\[
P\Big(\Big|\sum_{i=1}^{[n\theta_0]-[n\theta]}\big(|\Delta_n L_i|^p - n^{-p/\alpha}\,\mathbb{E}|L_1|^p\big)\Big| \ge \tfrac{\varepsilon}{3}\Big) = \;[...]
\]

[...] property:
\[
P\big(|L_1|^p > x\big) = P\big(|L_1| > x^{1/p}\big) \sim c\, x^{-\alpha/p}.
\]
So the tail of $|L_1|^p$ varies regularly with exponent $-\alpha/p$ and we can apply theorem 4.3. For the proof of theorem 4.6 we need to handle the supremum [...]
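For $Y = 0$, the limit appearing in theorem 3.4 above can be checked numerically: for a standard normal $W_1$ one has $\mathbb{E}|W_1|^p = 2^{p/2}\,\Gamma\!\big(\tfrac{p+1}{2}\big)/\sqrt{\pi}$. The sketch below (our own illustration, not from the thesis; parameter values are arbitrary) compares $n^{-1+p/2} V_p^n(W)_t$ with $t\,\mathbb{E}|W_1|^p$:

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(6)
t, p = 1.0, 1.5
moment = 2.0 ** (p / 2.0) * gamma((p + 1.0) / 2.0) / np.sqrt(np.pi)   # E|W_1|^p

for n in (100, 10_000, 1_000_000):
    dW = rng.normal(0.0, np.sqrt(1.0 / n), size=int(n * t))           # Brownian increments
    print(n, n ** (p / 2.0 - 1.0) * np.sum(np.abs(dW) ** p), "limit:", t * moment)
```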
