
HANOI PEDAGOGICAL UNIVERSITY
DEPARTMENT OF MATHEMATICS

NGUYEN THI THU HA

SKOROKHOD EMBEDDING

BACHELOR THESIS
Major: Applied Mathematics
Supervisor: Assoc. Prof. NGO HOANG LONG

Hanoi, May 2019

Acknowledgement

I would like to express my gratitude to the teachers of the Department of Mathematics, Hanoi Pedagogical University 2, in particular the teachers of the applied mathematics group and the lecturers involved, who have imparted valuable knowledge to me and made it possible for me to complete the course and this thesis. In particular, I would like to express my deep respect and gratitude to Assoc. Prof. Ngo Hoang Long, who directly guided me and helped me complete this thesis. I also want to thank Dr. Nguyen Duy Tan for his valuable advice and assistance in the course of my degree. Since time, capacity, and conditions are limited, the thesis cannot avoid errors, and I look forward to receiving valuable comments from teachers and friends.

Hanoi, May 6th, 2018
Student
Nguyen Thi Thu Ha

Assurance

I assure that the data and the results of this thesis are true and not identical to those of other topics. I also assure that all the help for this thesis has been acknowledged and that the results presented in the thesis have been identified clearly.

Hanoi, May 6th, 2018
Student
Nguyen Thi Thu Ha

Contents

Preface
1. Stochastic processes
   1.1 Conditional expectation
   1.2 Discrete-time martingales
       1.2.1 Stopping times
       1.2.2 Martingales
       1.2.3 Optional stopping
       1.2.4 Doob's inequalities
       1.2.5 Martingale convergence theorem
   1.3 Continuous-time martingales
       1.3.1 Stochastic processes
       1.3.2 Brownian motion
       1.3.3 Continuous-time martingales
       1.3.4 Markov properties of Brownian motion
       1.3.5 Continuous semimartingales
2. Stochastic analysis
   2.1 Stochastic integral
       2.1.1 Stochastic integral in M^2([a, b])
       2.1.2 Stochastic integral with stopping times as limits
       2.1.3 Stochastic integral in L^2([a, b])
       2.1.4 The Itô integral in m dimensions
   2.2 Itô's formula
       2.2.1 The one-dimensional case
       2.2.2 The case of m dimensions (m ≥ 2)
   2.3 Some applications of Itô's formula
3. Skorokhod embedding
   3.1 Preliminaries
   3.2 Construction of the embedding
   3.3 Some applications of Skorokhod embedding
Conclusion
References

Preface

Stochastic analysis is a field of mathematics with many applications, especially in financial mathematics, banking, and insurance. Its basic notions, such as filtrations, martingales, and stochastic integrals, appear naturally in real-world models. The Skorokhod embedding is one of the fine results of stochastic analysis: not only is it a tool for studying many theoretical problems, it is also a tool for solving applied problems. The Skorokhod embedding problem was stated and solved by the Russian mathematician Anatoliy Skorokhod in 1961. It states that if Y is a random variable with mean zero and finite variance and W is a Brownian motion, then there exists a stopping time T such that W_T has the same law as Y. With the desire to learn more about this result, I chose "Skorokhod embedding" as the title of my graduation thesis.

Let us describe the content of this thesis. In Chapter 1, we begin with the definitions and basic properties of some important concepts of stochastic processes. In Chapter 2, we consider the stochastic integral in several spaces; next, we introduce the Itô integral in m dimensions and Itô's formula, and afterwards we use Itô's formula to prove some important inequalities for stochastic integrals. In Chapter 3, we first review the objects used in the construction of the Skorokhod embedding, then present the construction itself, and finally present some applications of the Skorokhod embedding.

Chapter 1. Stochastic processes

1.1 Conditional expectation

Definition 1.1.1. Let (Ω, G, P) be a probability space and X a random variable with E|X| < ∞. Let F be a sub-σ-algebra of G. Then there exists a random variable Y such that
(i) Y is F-measurable;
(ii) E|Y| < ∞;
(iii) for every A ∈ F we have E[Y; A] = E[X; A], that is, ∫_A X dP = ∫_A Y dP.
Such a Y is called the conditional expectation of X given F and is denoted by E[X|F].

Remark 1.1.2. Y exists and is unique up to almost sure equality.

Proposition 1.1.3. If X is F-measurable, then E[X|F] = X.

Proposition 1.1.4. If X is independent of F, then E[X|F] = EX.

Proposition 1.1.5. E[E[X|F]] = EX.

Proposition 1.1.6. (1) If X ≥ Y are both integrable, then E[X|F] ≥ E[Y|F] a.s. (almost surely). (2) If X and Y are integrable and a ∈ R, then E[aX + Y|F] = aE[X|F] + E[Y|F].

Proposition 1.1.7. If g is convex and X and g(X) are integrable, then E[g(X)|F] ≥ g(E[X|F]) a.s.

Proposition 1.1.8. If X and XY are integrable and Y is measurable with respect to F, then E[XY|F] = Y E[X|F].

Proposition 1.1.9. If ε ⊂ F ⊂ G are σ-algebras, then E[E[X|F]|ε] = E[X|ε] = E[E[X|ε]|F].

Proposition 1.1.10. If c is a constant, then E[c|F] = c.

Proposition 1.1.11. If F = {∅, Ω}, then E[X|F] = EX.

Proposition 1.1.12. If H is a sub-σ-algebra of G that is independent of F and of X, then E[X|σ(F, H)] = E[X|F] a.s.

Example 1.1.13. Let (Ω, G, P) be a probability space and {A_1, A_2, ..., A_n} a finite collection of pairwise disjoint sets whose union is Ω, with P(A_i) > 0 for all i, and let F be the σ-algebra generated by the A_i's. Then

E[X|F] = Σ_{i=1}^n ( (1 / P(A_i)) ∫_{A_i} X dP ) I_{A_i}.
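For illustration, here is a minimal Python sketch of Example 1.1.13 on a discretized sample space with equally likely outcomes; the partition, the values of X, and the uniform weights are arbitrary choices made for this example. E[X|F] is the random variable that is constant on each cell A_i, equal to the average of X over that cell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized sample space: N equally likely outcomes.
N = 12
X = rng.normal(size=N)                                    # a random variable X(omega)
labels = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2])  # cells A_0, A_1, A_2

# E[X|F] is constant on each cell A_i, equal to (1/P(A_i)) * int_{A_i} X dP,
# which for equally likely outcomes is just the average of X over the cell.
cond_exp = np.empty(N)
for i in np.unique(labels):
    cell = labels == i
    cond_exp[cell] = X[cell].mean()

# Sanity checks: E[E[X|F]] = EX (Proposition 1.1.5) and E[Y; A_i] = E[X; A_i].
assert np.isclose(cond_exp.mean(), X.mean())
for i in np.unique(labels):
    cell = labels == i
    assert np.isclose(cond_exp[cell].sum(), X[cell].sum())
print(cond_exp)
```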
1.2 Discrete-time martingales

1.2.1 Stopping times

Definition 1.2.1. Let (Ω, F, P) be a probability space. A family {F_n : n ≥ 0} is called a filtration if it is an increasing family of sub-σ-algebras of F, that is, F_0 ⊆ F_1 ⊆ F_2 ⊆ ··· ⊆ F. We define F_∞ := σ(∪_n F_n) ⊂ F.

Definition 1.2.2. Let (F_n) be a filtration. A random mapping T from Ω to {0, 1, 2, ...} is called a stopping time if for each n,

{T ≤ n} = {ω : T(ω) ≤ n} ∈ F_n.

Proposition 1.2.3.
(1) Fixed times n are stopping times.
(2) If T_1 and T_2 are stopping times, then so are T_1 ∧ T_2 and T_1 ∨ T_2.
(3) If T_n is an increasing sequence of stopping times, then so is T = sup_n T_n.
(4) If T_n is a decreasing sequence of stopping times, then so is T = inf_n T_n.
(5) If T is a stopping time, then so is T + n.

We define F_T = {A : A ∩ {T ≤ n} ∈ F_n for all n}.

Example 1.2.4. If T ≡ k ∈ N, then T is clearly a stopping time.

1.2.2 Martingales

Definition 1.2.5. A process M = (M_n : n ≥ 0) is called adapted to the filtration (F_n) if for each n, M_n is F_n-measurable.

Definition 1.2.6. A process M is called a martingale (relative to ((F_n), P)) if
(i) M is adapted to (F_n);
(ii) E|M_n| < ∞ for all n;
(iii) E[M_n|F_{n−1}] = M_{n−1} a.s. for all n ≥ 1.
A supermartingale (relative to ((F_n), P)) is defined similarly, except that (iii) is replaced by E[M_n|F_{n−1}] ≤ M_{n−1} a.s. (n ≥ 1), and a submartingale is defined with (iii) replaced by E[M_n|F_{n−1}] ≥ M_{n−1} a.s. (n ≥ 1).

Example 1.2.7. Let (X_i) be a sequence of independent random variables with mean zero, let F_i = σ{X_1, X_2, ..., X_i} and S_n = Σ_{i=1}^n X_i. Then M_n = S_n is a martingale, because
(i) for all n ≥ 0, M_n is clearly F_n-measurable;
(ii) each X_i is integrable, so E|M_n| ≤ Σ_{i=1}^n E|X_i| < ∞;
(iii) since M_n − M_{n−1} = X_n is independent of F_{n−1} and has mean zero,

E[M_n|F_{n−1}] = M_{n−1} + E[M_n − M_{n−1}|F_{n−1}] = M_{n−1} + E[M_n − M_{n−1}] = M_{n−1}.
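Example 1.2.7 can be checked numerically. The following Python sketch (with Rademacher increments and a fixed realized past, both arbitrary choices for this illustration) estimates E[M_n | F_{n−1}] by averaging M_n over many independent draws of the next increment; the result should reproduce M_{n−1}.

```python
import numpy as np

rng = np.random.default_rng(1)

# One realized past: n - 1 = 5 steps of mean-zero (here Rademacher) increments.
past = rng.choice([-1.0, 1.0], size=5)
M_prev = past.sum()                      # M_{n-1} = S_{n-1}

# Monte Carlo over the next increment X_n, independent of the past.
num_samples = 200_000
X_next = rng.choice([-1.0, 1.0], size=num_samples)
M_next = M_prev + X_next                 # samples of M_n given F_{n-1}

print("M_{n-1}           =", M_prev)
print("E[M_n | F_{n-1}] ≈", M_next.mean())   # should match M_{n-1}
```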
Example 1.2.8. Let (X_i) be a sequence of independent random variables with mean zero and variance one, let F_i = σ{X_1, X_2, ..., X_i}, let S_n be as in the previous example, and put M_n = S_n^2 − n. Then M_n is a martingale. Indeed, for all n ≥ 0, M_n is clearly F_n-measurable. We have |S_n^2 − n| ≤ S_n^2 + n, so

E|M_n| = E|S_n^2 − n| ≤ E[S_n^2] + n = E[(Σ_{i=1}^n X_i)^2] + n = 2n < ∞,

and, since X_n is independent of F_{n−1} with EX_n = 0 and EX_n^2 = 1,

E[M_n|F_{n−1}] = S_{n−1}^2 + 2S_{n−1}E[X_n] + E[X_n^2] − n = S_{n−1}^2 − (n − 1) = M_{n−1}.

2.2.2 The case of m dimensions (m ≥ 2)

In differential form, the m-dimensional Itô formula reads

dV(x(t), t) = [ V_t(x(t), t) + Σ_{i=1}^d (∂V(x(t), t)/∂x_i) f_i(t) + (1/2) Σ_{i,j=1}^d Σ_{k=1}^m g_{ik}(t) g_{jk}(t) ∂^2 V(x(t), t)/(∂x_i ∂x_j) ] dt + Σ_{i=1}^d Σ_{k=1}^m (∂V(x(t), t)/∂x_i) g_{ik}(t) dB_k(t).

Remark 2.2.7. Itô's formula in m dimensions can be rewritten as follows:

dV(x(t), t) = V_t(x(t), t) dt + V_x(x(t), t) dx(t) + (1/2) dx^T(t) V_xx(x(t), t) dx(t).

Example 2.2.8. Let V(x, t) = x_1 x_2. We obtain Itô's formula for a product, namely

d(x_1(t) x_2(t)) = x_1(t) dx_2(t) + x_2(t) dx_1(t) + dx_1(t) dx_2(t) = x_1(t) dx_2(t) + x_2(t) dx_1(t) + Σ_{k=1}^m g_{1k}(t) g_{2k}(t) dt.

2.3 Some applications of Itô's formula

We can establish some important inequalities for stochastic integrals from Itô's formula, as follows.

Theorem 2.3.1. Suppose that p ≥ 2 and that g ∈ M^2([0, T]; R^{d×m}) satisfies

E ∫_0^T |g(s)|^p ds < ∞.

Then

E | ∫_0^T g(s) dB(s) |^p ≤ (p(p − 1)/2)^{p/2} T^{(p−2)/2} E ∫_0^T |g(s)|^p ds.

The sign "=" occurs if and only if p = 2.

Proof.
Case 1: p = 2. By Theorem 2.1.15 we have this inequality, with equality.
Case 2: p > 2. For 0 ≤ t ≤ T, consider x(t) = ∫_0^t g(s) dB(s). By Itô's formula and Theorem 2.1.15, we have

E|x(t)|^p = (p/2) E ∫_0^t ( |x(s)|^{p−2} |g(s)|^2 + (p − 2) |x(s)|^{p−4} |x^T(s) g(s)|^2 ) ds ≤ (p(p − 1)/2) E ∫_0^t |x(s)|^{p−2} |g(s)|^2 ds.   (**)

By Hölder's inequality, we have

E|x(t)|^p ≤ (p(p − 1)/2) ( E ∫_0^t |x(s)|^p ds )^{(p−2)/p} ( E ∫_0^t |g(s)|^p ds )^{2/p}.

From (**) we see that E|x(t)|^p is non-decreasing in t. Hence

E|x(t)|^p ≤ (p(p − 1)/2) ( t E|x(t)|^p )^{(p−2)/p} ( E ∫_0^t |g(s)|^p ds )^{2/p},

and solving this inequality for E|x(t)|^p gives

E|x(t)|^p ≤ (p(p − 1)/2)^{p/2} t^{(p−2)/2} E ∫_0^t |g(s)|^p ds.

Replacing t by T completes the proof.

Theorem 2.3.2 (Burkholder–Davis–Gundy inequality). Suppose g ∈ L^2(R_+; R^{d×m}). For t ≥ 0, we put

x(t) = ∫_0^t g(s) dB(s), A(t) = ∫_0^t |g(s)|^2 ds.

Then for each p > 0 there exist positive constants c_p, C_p (depending only on p) such that

c_p E|A(t)|^{p/2} ≤ E sup_{0≤s≤t} |x(s)|^p ≤ C_p E|A(t)|^{p/2}

for all t ≥ 0, where
c_p = (p/2)^p and C_p = (32/p)^{p/2} if 0 < p < 2;
c_p = 1 and C_p = 4 if p = 2;
c_p = (2p)^{−p/2} and C_p = ( p^{p+1} / (2(p − 1)^{p−1}) )^{p/2} if p > 2.

Proof. See [6], pages 77–80.

Chapter 3. Skorokhod embedding

In this chapter we consider the following problem: let Y be a random variable with mean zero and finite variance and let W be a standard Brownian motion; we construct a stopping time T such that W_T has the same law as Y. This problem was first studied by Skorokhod in 1961 and his result was published in [4]. Since then the problem has been modified, generalized, and narrowed a great number of times, and it has stimulated research in probability theory for nearly 60 years now.

3.1 Preliminaries

Definition 3.1.1. A function f : R → R is a Lipschitz function if there exists a constant k such that

|f(y) − f(x)| ≤ k|y − x| for all x, y ∈ R.

We need the following classical result on the Picard iteration method for ordinary differential equations.

Theorem 3.1.2. Let f : [0, ∞) × R → R be a bounded function and k a positive constant such that

|f(t, x) − f(t, y)| ≤ k|x − y| for all x, y ∈ R, t ≥ 0.

Let y_0 ∈ R, define the function y^0 by y^0(t) = y_0 for all t ≥ 0, and define the functions y^i inductively by

y^{i+1}(t) = y_0 + ∫_0^t f(s, y^i(s)) ds, t ≥ 0.

Then the functions y^i converge uniformly on bounded intervals to a function y that satisfies

y(t) = y_0 + ∫_0^t f(s, y(s)) ds.   (3.1)

For any s such that f(s, y(s)) is continuous at s, the solution y satisfies

dy/ds = f(s, y(s)).   (3.2)

The solution to (3.1) is unique.
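As an illustration of Theorem 3.1.2, here is a minimal Python sketch of Picard iteration for y(t) = y_0 + ∫_0^t f(s, y(s)) ds; the right-hand side f(t, y) = cos(y) is an arbitrary choice of a bounded Lipschitz function, and the integrals are approximated on a grid by the trapezoidal rule.

```python
import numpy as np

def picard(f, y0, t_max=2.0, n_grid=2001, n_iter=30):
    """Approximate the fixed point y(t) = y0 + int_0^t f(s, y(s)) ds."""
    t = np.linspace(0.0, t_max, n_grid)
    dt = t[1] - t[0]
    y = np.full_like(t, y0)              # y^0 is the constant function y0
    for _ in range(n_iter):
        integrand = f(t, y)
        # Cumulative trapezoidal rule: int_0^{t_j} f(s, y^i(s)) ds.
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt))
        )
        y = y0 + integral                # y^{i+1}
    return t, y

t, y = picard(lambda t, y: np.cos(y), y0=0.0)
# For f(t, y) = cos(y), y0 = 0 the exact solution is y(t) = 2*arctan(tanh(t/2)).
exact = 2.0 * np.arctan(np.tanh(t / 2.0))
print("max error:", np.abs(y - exact).max())
```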
If the random variable Y equals 0 a.s., we can let the stopping time T equal 0 a.s.; then B_T = 0 = Y if B is a Brownian motion. In the remainder of this section we suppose that EY = 0 and EY^2 < ∞, but that Y is not identically zero. We define

p_s(y) = (1/√(2πs)) e^{−y^2/(2s)},

which is the density of an N(0, s) distribution. We denote by p_s'(x) the derivative of p_s with respect to x.

Lemma 3.1.3. Let B be a Brownian motion and g : R → R a function such that E[g(B_1)^2] < ∞. For 0 < s < 1, let

a(s, x) = −∫ p_{1−s}'(z − x) g(z) dz and b(s, x) = ∫ p_{1−s}(z − x) g(z) dz.

Then we have

g(B_1) = Eg(B_1) + ∫_0^1 a(s, B_s) dB_s a.s.   (3.3)

and

E[g(B_1)|F_s] = b(s, B_s) a.s.   (3.4)

Proof. Firstly we prove (3.3), starting with the case g(x) = e^{iux}. Set X_t = iuB_t + u^2 t/2 and apply Itô's formula with the function f(x) = e^x; we have

e^{iuB_t + u^2 t/2} = 1 + ∫_0^t e^{X_s} d(iuB_s + u^2 s/2) + (1/2) ∫_0^t (−u^2) e^{X_s} ds = 1 + iu ∫_0^t e^{iuB_s + u^2 s/2} dB_s.

Hence

e^{iuB_1} = e^{−u^2/2} + ∫_0^1 iu e^{iuB_s} e^{u^2(s−1)/2} dB_s.

We want to show that, for g(x) = e^{iux},

a(s, x) = −∫ p_{1−s}'(z − x) e^{iuz} dz = iu e^{iux} e^{u^2(s−1)/2}.

Using integration by parts, we have

−∫ p_{1−s}'(z − x) e^{iuz} dz = iu ∫ p_{1−s}(z − x) e^{iuz} dz = iu (1/√(2π(1 − s))) ∫ e^{−(z−x)^2/(2(1−s))} e^{iuz} dz = iu e^{iux} e^{−u^2(1−s)/2},

which is iu times the characteristic function of a normal random variable with mean x and variance 1 − s. So we have

e^{iuB_1} = E e^{iuB_1} − ∫_0^1 ∫ p_{1−s}'(z − B_s) e^{iuz} dz dB_s.   (3.5)

Now assume that g is in the Schwartz class. Replace u by −u in (3.5), multiply by (2π)^{−1} ĝ(u), where ĝ is the Fourier transform of g, and integrate over u ∈ R; using the Fubini theorem and the inversion formula for Fourier transforms, we obtain

g(B_1) = Eg(B_1) − ∫_0^1 ∫ p_{1−s}'(z − B_s) g(z) dz dB_s.

This implies (3.3) when g is in the Schwartz class. A limit argument gives (3.3) for all g.

To prove (3.4), we again start with the case g(x) = e^{iux}. We have

E[e^{iuB_1}|F_s] = e^{iuB_s} E[e^{iu(B_1 − B_s)}|F_s] = e^{iuB_s} E e^{iu(B_1 − B_s)} = e^{iuB_s} e^{u^2(s−1)/2}.

On the other hand, the definition of b(s, x) shows that when g(x) = e^{iux}, b(s, x) is the characteristic function of a normal random variable with mean x and variance 1 − s, so

b(s, x) = e^{iux} e^{u^2(s−1)/2}.

Replacing x by B_s proves (3.4) in the case g(x) = e^{iux}. We extend this to general g in the same way as in the proof of (3.3).

Remark 3.1.4. We want to find a reasonable function g such that g(B_1) is equal in law to Y, where again B is a Brownian motion. Let F_Y(x) = P(Y ≤ x) be the distribution function of Y and Φ(x) = P(B_1 ≤ x). Then

P(Φ(B_1) ≤ x) = P(B_1 ≤ Φ^{−1}(x)) = Φ(Φ^{−1}(x)) = x for x ∈ [0, 1],

so Φ(B_1) is a uniform random variable on [0, 1]. We define

g(x) = F_Y^{−1}(Φ(x)).   (3.6)

We use the right-continuous version of F_Y^{−1} if it is not continuous. Then

P(g(B_1) ≤ x) = P(Φ(B_1) ≤ F_Y(x)) = F_Y(x),

so Y is equal in law to g(B_1). Note that g is an increasing function.
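Remark 3.1.4 can be checked empirically. In the Python sketch below, Y is taken (arbitrarily, for illustration) to be a centered exponential, Y = E − 1 with E ~ Exp(1), whose quantile function is explicit; with g(x) = F_Y^{−1}(Φ(x)), the variable g(B_1) should then have the law of Y.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Target law: Y = E - 1 with E ~ Exp(1), so EY = 0 and Var Y = 1.
def F_Y_inv(u):
    return -np.log1p(-u) - 1.0          # quantile function of Exp(1), shifted

def g(x):
    return F_Y_inv(norm.cdf(x))         # g = F_Y^{-1} o Phi, as in (3.6)

B1 = rng.normal(size=500_000)           # samples of B_1 ~ N(0, 1)
samples = g(B1)                         # should be distributed like Y

print("mean       ≈", samples.mean())   # ≈ 0
print("variance   ≈", samples.var())    # ≈ 1
print("P(Y ≤ 0.5) ≈", (samples <= 0.5).mean(),
      "exact:", 1.0 - np.exp(-1.5))     # F_Y(0.5) = 1 - e^{-(0.5 + 1)}
```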
Proposition 3.1.5. Let g be defined by (3.6) and let a, b be defined as in Lemma 3.1.3.
(1) For each L > 0 and s_0 < 1, a is continuously differentiable on [0, s_0] × [−L, L]; also, for each L > 0 and s_0 < 1, a is bounded below by a positive constant on [0, s_0] × [−L, L].
(2) For each L > 0 and s_0 < 1, b is continuously differentiable on [0, s_0] × [−L, L].
(3) For each s ∈ [0, s_0], the function x → b(s, x) is strictly increasing.
(4) For each fixed s, let B(s, x) be the inverse of b(s, x) in the x variable (so that B(s, b(s, x)) = x and b(s, B(s, x)) = x). For each L > 0 and s_0 < 1, B is continuously differentiable on [0, s_0] × [−L, L].

Proof. We observe that for every r > 0,

E e^{r|B_1|} ≤ E e^{rB_1} + E e^{−rB_1} < ∞.

Since |z|^m ≤ m! e^{|z|} when m is a non-negative integer, by the Cauchy–Schwarz inequality and the fact that EY^2 < ∞,

∫ |z|^m e^{r|z|} e^{−z^2/2} |g(z)| dz ≤ m! ∫ e^{(r+1)|z|} e^{−z^2/2} |g(z)| dz = √(2π) m! E[ e^{(r+1)|B_1|} |g(B_1)| ] ≤ √(2π) m! ( E e^{2(r+1)|B_1|} )^{1/2} ( E|g(B_1)|^2 )^{1/2} = √(2π) m! ( E e^{2(r+1)|B_1|} )^{1/2} ( EY^2 )^{1/2} < ∞.   (*)

We turn to (1). For s ≤ s_0 and |x| ≤ L we have

|p_{1−s}'(z − x)| ≤ c |z − x| (1 − s)^{−3/2} e^{−(z−x)^2/(2(1−s))} ≤ c |z − x| e^{−x^2/(2(1−s))} e^{zx/(1−s)} e^{−z^2/(2(1−s))} ≤ c (|z| + L) e^{|z|L/(1−s_0)} e^{−z^2/2} ≤ c |z| e^{c|z|} e^{−z^2/2} + c e^{c|z|} e^{−z^2/2},

where c denotes a constant whose value may change from line to line. Hence

|a(s, x)| ≤ ∫ c |z| e^{c|z|} e^{−z^2/2} |g(z)| dz + ∫ c e^{c|z|} e^{−z^2/2} |g(z)| dz,

which is finite by (*). This gives an upper bound for a. By the mean value theorem,

|p_{1−s}'(z − x) − p_{1−s}'(z − (x + h))| ≤ c |h| (1 + |z|^2 + L^2) e^{−(z−x)^2/(2(1−s))}

if s ≤ s_0, |x| ≤ L, and |h| ≤ 1, so

| (p_{1−s}'(z − x) − p_{1−s}'(z − (x + h))) / h | ≤ c (1 + |z|^2) e^{c|z|} e^{−z^2/2}.

By (*) we can therefore use dominated convergence to conclude that

(∂a/∂x)(s, x) = ∫ p_{1−s}''(z − x) g(z) dz

and that (∂a/∂x)(s, x) is bounded above on [0, s_0] × [−L, L]. By a similar argument, (∂a/∂s)(s, x) is also bounded above on [0, s_0] × [−L, L]. The same argument shows that the second partial derivatives of a are bounded, and hence that the first partial derivatives are continuous. Using integration by parts,

a(s, x) = ∫ p_{1−s}(z − x) dg(z),

where the integral is a Lebesgue–Stieltjes integral and g is an increasing function. As we assume that Y is not identically zero, g is not identically constant, which implies that a is bounded below by a positive constant for s ≤ s_0 and |x| ≤ L.

The proof of (2) is quite similar. To prove (3), as above we use a dominated convergence argument to show that

(∂b/∂x)(s, x) = a(s, x).

Since a(s, x) > 0 for each x and each s ≤ s_0, we conclude that x → b(s, x) is strictly increasing. The estimates for B follow from the implicit function theorem applied to f(s, x, y) = 0, where f(s, x, y) = b(s, x) − y.

3.2 Construction of the embedding

Theorem 3.2.1. Let X be a random variable with mean zero and EX^2 < ∞. There exists a Brownian motion C and a stopping time T with respect to the minimal augmented filtration of C such that C_T is equal in law to X. Moreover, ET = EX^2.

Proof. The case where X is identically zero is trivial, for we can take T = 0; so we suppose that X is not identically zero. Let W_t be a Brownian motion and let (F_t) be its minimal augmented filtration. We define the function g by (3.6), define a and b for s < 1 as in Lemma 3.1.3, and set a(s, x) = 1 and b(s, x) = x for s ≥ 1. We let

M_t = ∫_0^t a(s, W_s) dW_s, and thus ⟨M⟩_t = ∫_0^t a(s, W_s)^2 ds.

Note that ⟨M⟩_t → ∞ a.s. as t → ∞. As EX = 0 we have Eg(W_1) = 0, so M_1 = g(W_1) by (3.3). Let

τ_t = inf{s : ⟨M⟩_s ≥ t},

the inverse of ⟨M⟩. Now if we set C_t = M_{τ_t}, then C is a Brownian motion. Let (G_t) be the minimal augmented filtration generated by C, and let T = ⟨M⟩_1. Then

C_T = C_{⟨M⟩_1} = M_{τ_{⟨M⟩_1}} = M_1 = g(W_1),

so C_T has the same law as X. For the integrability of T we have

ET = E⟨M⟩_1 = EM_1^2 = E[g(W_1)^2] = EX^2 = Var X < ∞.

It remains to show that T is a stopping time with respect to (G_t). Because T = lim_{s↑1} ⟨M⟩_s, it suffices to show that ⟨M⟩_s is a stopping time with respect to (G_t) for each s < 1. Fix K. We want to show that

{τ_t ≤ s, sup_{q≤t} |C_q| ≤ K} ∈ G_t, s < 1;

letting K → ∞ then shows that {⟨M⟩_s ≥ t} = {τ_t ≤ s} ∈ G_t for s < 1. Because τ is the inverse of ⟨M⟩, we have

dτ_t/dt = 1 / (d⟨M⟩_{τ_t}/dτ_t) = 1 / a(τ_t, W_{τ_t})^2,

with τ_0 = 0 a.s. With B(s, x) the inverse of b(s, x) in the x variable,

M_s = E[M_1|F_s] = E[g(W_1)|F_s] = b(s, W_s),

so W_s = B(s, M_s) for s < 1. Hence W_{τ_t} = B(τ_t, M_{τ_t}) = B(τ_t, C_t) on the event {τ_t ≤ s} if s < 1. Therefore τ_t solves the equation

dτ_t/dt = 1 / a(τ_t, B(τ_t, C_t))^2, τ_0 = 0,

or

τ_t = ∫_0^t du / a(τ_u, B(τ_u, C_u))^2.

Now fix s, t, and s_0 ∈ (s, 1). Let S_K = inf{t : |C_t| ≥ K} and C_t^K = C_{t∧S_K}. We define

Φ(q, r) = 1 / a(r, B(r, C_q^K(ω)))^2 if r ≤ s_0;

note that Φ depends on ω. We define Φ(q, r) = 1 for r ≥ 1, and for r ∈ (s_0, 1) we define Φ(q, r) by linear interpolation. Then Φ is continuous and bounded, and there exists k > 0 such that

|Φ(q, r) − Φ(q, r')| ≤ k |r − r'|, r, r' ∈ R, q ∈ [0, ∞).

On the event above, τ solves the equation

τ_t = ∫_0^t Φ(u, τ_u) du.

We solve the equation

y(t) = ∫_0^t Φ(u, y(u)) du

by the Picard iteration of Theorem 3.1.2: the function y^0(t) is identically zero, and y^1(t) = ∫_0^t Φ(u, y^0(u)) du depends on ω and is G_t-measurable. By induction, each y^i(t) is G_t-measurable; hence y(t) is G_t-measurable. Because |C_q^K(ω)| ≤ K for all q and y(t) ≤ s, we have τ_t = y(t) since τ_t ≤ s, which proves the required measurability.
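To give a feel for the construction in Theorem 3.2.1, the following numerical sketch simulates W on [0, 1], evaluates a(s, W_s) = −∫ p_{1−s}'(z − W_s) g(z) dz by quadrature on a truncated grid, and estimates T = ⟨M⟩_1 = ∫_0^1 a(s, W_s)^2 ds by a Riemann sum; by the theorem, the mean of T should be close to EX^2 = 1. The target law (the centered exponential used earlier), the grid bounds, and the step counts are all arbitrary choices for this illustration, and the quadrature is only approximate near s = 1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def g(x):
    # g = F_Y^{-1} o Phi for the illustrative target X = Exp(1) - 1.
    return -np.log1p(-norm.cdf(x)) - 1.0

z = np.linspace(-8.0, 8.0, 4001)        # quadrature grid for the z-integral
dz = z[1] - z[0]
gz = g(z)

def a(s, x):
    """a(s, x) = -int p'_{1-s}(z - x) g(z) dz, with p_v the N(0, v) density."""
    v = 1.0 - s
    p_prime = -(z - x) / v * norm.pdf(z - x, scale=np.sqrt(v))
    return -np.sum(p_prime * gz) * dz

def sample_T(n_steps=400):
    """One draw of T = <M>_1 = int_0^1 a(s, W_s)^2 ds along a Brownian path."""
    dt = 1.0 / n_steps
    W, total = 0.0, 0.0
    for i in range(n_steps):
        total += a(i * dt, W) ** 2 * dt
        W += np.sqrt(dt) * rng.normal()
    return total

Ts = [sample_T() for _ in range(200)]
print("E[T] ≈", np.mean(Ts), "(theory: E[T] = E[X^2] = 1)")
```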
Corollary 3.2.2. Let M be a Brownian motion and let (F_t) be the minimal augmented filtration of M. Let X be a random variable with mean zero and finite variance. Then there exists a stopping time V with respect to (F_t) such that M_V has the same law as X.

Proof. We define Φ(q, r) = 1 / a(r, B(r, M_q(ω)))^2 and solve the equation

dτ_t/dt = Φ(t, τ_t), τ_0 = 0,

by Picard iteration. Then τ_t satisfies {τ_t ≤ s} ∈ F_t for every t as long as s < 1 (by the proof of Theorem 3.2.1). Let A be the inverse of τ and V = lim_{s↑1} A_s. Then V is a stopping time.

3.3 Some applications of Skorokhod embedding

One application of the Skorokhod embedding is to show that we can find a Brownian motion that stays relatively close to a given random walk. Assume that X_1, X_2, ... is an i.i.d. sequence of real-valued random variables with mean zero and variance one. Given a Brownian motion B_t, we can find a stopping time T_1 such that B_{T_1} has the same distribution as X_1. We then use the strong Markov property at time T_1 and find a stopping time T_2 for the Brownian motion B_{T_1+t} − B_{T_1} so that B_{T_1+T_2} − B_{T_1} has the same distribution as X_2 and is independent of F_{T_1}. Continuing in this way, the T_i are i.i.d. and, by Theorem 3.2.1, ET_i = EX_i^2 = 1. Let U_k = Σ_{i=1}^k T_i. Then for each n, S_n = Σ_{i=1}^n X_i has the same distribution as B_{U_n}.

Theorem 3.3.1. sup_{i≤n} |B_{U_i} − B_i| / √n tends to 0 in probability as n → ∞.

Proof. We want to show that for each ε > 0,

limsup_{n→∞} P( sup_{k≤n} |B_{U_k} − B_k| > ε√n ) ≤ ε.

Since Brownian paths are continuous, we can find δ ∈ (0, 1) small such that

P( sup_{s,t≤2; |t−s|≤δ} |B_t − B_s| > ε ) < ε/2.

By Brownian scaling, then,

P( sup_{s,t≤2n; |t−s|≤δn} |B_t − B_s| > ε√n ) < ε/2.

By the strong law of large numbers we have U_n/n → ET_1 = 1 a.s., and consequently

max_{k≤n} |U_k − k| / n → 0 a.s.

Hence

P( max_{k≤n} |B_{U_k} − B_k| > ε√n ) ≤ P( max_{k≤n} |U_k − k| > δn ) + P( sup_{s,t≤2n; |t−s|≤δn} |B_t − B_s| > ε√n ) ≤ P( max_{k≤n} |U_k − k| / n > δ ) + ε/2,

which is less than ε when n is taken sufficiently large. This concludes the proof.
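Theorem 3.3.1 can be illustrated by simulation. For Rademacher steps X_i = ±1, the Skorokhod stopping time may be taken as the first exit time of (−1, 1), a classical special case of Theorem 3.2.1; the sketch below embeds such a random walk into one discretized Brownian path and reports sup_{k≤n} |B_{U_k} − B_k| / √n, which should shrink as n grows. The step size dt is an arbitrary choice, and the discrete exit detection slightly overshoots the level ±1.

```python
import numpy as np

rng = np.random.default_rng(4)

def embedded_discrepancy(n, dt=0.01):
    """Embed a ±1 random walk in Brownian motion via exit times of (-1, 1)
    and return sup_{k<=n} |B_{U_k} - B_k| / sqrt(n)."""
    # Simulate one Brownian path long enough to collect n exit times.
    steps_needed = int(4 * n / dt)          # E[exit time] = 1, so 4n is ample
    dB = np.sqrt(dt) * rng.normal(size=steps_needed)
    B = np.concatenate(([0.0], np.cumsum(dB)))

    U_idx = []                              # grid indices of U_1 <= ... <= U_n
    start = 0
    for _ in range(n):
        level = B[start]
        j = start + 1
        while j < len(B) and abs(B[j] - level) < 1.0:
            j += 1
        if j >= len(B):                     # path too short (rare); give up
            return None
        U_idx.append(j)
        start = j

    B_U = B[np.array(U_idx)]                # B_{U_k}, a ±1 walk in law
    B_int = B[np.round(np.arange(1, n + 1) / dt).astype(int)]  # B_k at integer k
    return np.abs(B_U - B_int).max() / np.sqrt(n)

for n in [100, 400, 1600]:
    vals = [v for v in (embedded_discrepancy(n) for _ in range(20)) if v is not None]
    print(f"n = {n:5d}: mean sup|B_U - B_k| / sqrt(n) ≈ {np.mean(vals):.3f}")
```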
Conclusion

In this thesis, we have presented systematically the following results:
(1) the definitions and basic properties of some important concepts of stochastic processes;
(2) the stochastic integral in several spaces, together with Itô's formula;
(3) the Skorokhod embedding and some of its applications.

Bibliography

[1] R. Bass, Stochastic Processes, Cambridge University Press (2011).
[2] M. Beiglböck, A. Cox, M. Huesmann, Optimal transport and Skorokhod embedding, Inventiones Mathematicae 208(2), 327–400 (2017).
[3] J. Obłój, The Skorokhod embedding problem and its offspring, Probability Surveys 1, 321–392 (2004).
[4] A. V. Skorokhod, Studies in the Theory of Random Processes, translated from the Russian by Scripta Technica, Inc., Addison-Wesley, Reading, Mass. (1965), viii+199 pp.
[5] D. Williams, Probability with Martingales, Cambridge University Press (1991).
[6] N. H. Long, Lecture notes on stochastic analysis and stochastic differential equations.
