Stochastic Differential Equations, Sixth Edition: Solution of Exercise Problems

Yan Zeng

July 16, 2006

This is a solution manual for the SDE book by Øksendal, *Stochastic Differential Equations*, Sixth Edition. It is complementary to the book's own solutions, and can be downloaded at www.math.fsu.edu/~zeng. If you have any comments or find any typos/errors, please email me at yz44@cornell.edu.

This version omits the problems from the chapters on applications, namely Chapters 6, 10, 11 and 12. I hope I will find time at some point to work out these problems.

2.8 b)

Proof. On one hand, $E[e^{iuB_t}] = \sum_{k=0}^\infty \frac{i^k E[B_t^k]}{k!}u^k$; on the other hand, $E[e^{iuB_t}] = e^{-\frac{1}{2}tu^2} = \sum_{k=0}^\infty \frac{(-\frac{t}{2})^k}{k!}u^{2k}$. Comparing the coefficients of $u^{2k}$ gives
$$E[B_t^{2k}] = \frac{(2k)!\,t^k}{k!\cdot 2^k}.$$

2.8 d)

Proof.
$$E^x[|B_t-B_s|^4] = \sum_{i=1}^n E^x[(B_t^{(i)}-B_s^{(i)})^4] + \sum_{i\ne j} E^x[(B_t^{(i)}-B_s^{(i)})^2(B_t^{(j)}-B_s^{(j)})^2] = n\cdot\frac{4!}{2!\cdot 2^2}(t-s)^2 + n(n-1)(t-s)^2 = n(n+2)(t-s)^2.$$

2.11

Proof. Prove that the increments are independent and stationary, with Gaussian distribution. Note that for Gaussian random variables, uncorrelatedness = independence.

2.15

Proof. Since $B_t - B_s \perp \mathcal{F}_s := \sigma(B_u : u \le s)$, also $U(B_t - B_s) \perp \mathcal{F}_s$. Note $U(B_t - B_s) \sim N(0, t-s)$.

3.2

Proof. WLOG, we assume $t = 1$. Using $b^3 - a^3 = (b-a)^3 + 3a^2(b-a) + 3a(b-a)^2$,
$$B_1^3 = \sum_{j=1}^n\big(B_{j/n}^3 - B_{(j-1)/n}^3\big) = \sum_{j=1}^n(B_{j/n}-B_{(j-1)/n})^3 + 3\sum_{j=1}^n B_{(j-1)/n}^2(B_{j/n}-B_{(j-1)/n}) + 3\sum_{j=1}^n B_{(j-1)/n}(B_{j/n}-B_{(j-1)/n})^2 =: I + II + III.$$
By Problem EP1-1 (convergence of the quadratic variation) and the continuity of Brownian motion,
$$I \le \Big[\sum_{j=1}^n(B_{j/n}-B_{(j-1)/n})^2\Big]\max_{1\le j\le n}|B_{j/n}-B_{(j-1)/n}| \to 0 \quad a.s.$$
To argue $II \to 3\int_0^1 B_t^2\,dB_t$ as $n\to\infty$, it suffices to show $E\big[\int_0^1 |B_{(j-1)/n}^2 1_{\{(j-1)/n \le s < j/n\}} - B_s^2|^2\,ds\big] \to 0$, which follows from the continuity and integrability of Brownian paths. Similarly, $III \to 3\int_0^1 B_s\,ds$. So $\int_0^1 B_s^2\,dB_s = \frac{1}{3}B_1^3 - \int_0^1 B_s\,ds$.

Claim (the exercise number is lost here): $M_t := e^{\sigma B_t - \frac{\sigma^2}{2}t}$ is a martingale.

Proof. If $t > s$, then
$$E[M_t\,|\,\mathcal{F}_s] = E\big[e^{\sigma(B_t-B_s)-\frac{\sigma^2}{2}(t-s)}\,\big|\,\mathcal{F}_s\big]M_s = E\big[e^{\sigma B_{t-s}}\big]e^{-\frac{\sigma^2}{2}(t-s)}M_s = M_s.$$
The second equality is due to the fact that $B_t - B_s$ is independent of $\mathcal{F}_s$.

4.4

Proof. For part a), set $g(t,x) = e^x$ and use Theorem 4.12. Part b) comes from the fundamental property of the Itô integral: the Itô integral preserves the martingale property for integrands in $\mathcal{V}$.

Comments: The power of Itô's formula is that it produces martingales, which vanish under expectation.

4.5

Proof. $B_t^k = k\int_0^t B_s^{k-1}dB_s + \frac{k(k-1)}{2}\int_0^t B_s^{k-2}ds$. Therefore,
$$\beta_k(t) = \frac{k(k-1)}{2}\int_0^t \beta_{k-2}(s)\,ds.$$
This gives $E[B_t^4] = 3t^2$ and $E[B_t^6] = 15t^3$. For part b), prove by induction.

4.6 (b)

Proof. Apply Theorem 4.12 with $g(t,x) = e^x$ and $X_t = ct + \sum_{j=1}^n \alpha_j B_j(t)$. Note $\sum_{j=1}^n \alpha_j B_j(t)$ is a BM, up to a constant coefficient.

4.7 (a)

Proof. $v \equiv I_{n\times n}$.

(b)

Proof. By the integration-by-parts formula (Exercise 4.3),
$$X_t^2 = X_0^2 + 2\int_0^t X_s\,dX_s + \int_0^t|v_s|^2ds = X_0^2 + 2\int_0^t X_sv_s\,dB_s + \int_0^t|v_s|^2ds.$$
So $M_t = X_0^2 + 2\int_0^t X_sv_s\,dB_s$. Let $C$ be a bound for $|v|$; then
$$E\Big[\int_0^t|X_sv_s|^2ds\Big] \le C^2\int_0^t E[|X_s|^2]ds = C^2\int_0^t E\Big[\int_0^s|v_u|^2du\Big]ds \le \frac{C^4t^2}{2} < \infty.$$
So $M_t$ is a martingale.

4.12

Proof. Let $Y_t = \int_0^t u(s,\omega)\,ds$. Then $Y$ is a continuous $\{\mathcal{F}_t\}$-martingale with finite variation. On one hand,
$$\langle Y\rangle_t = \lim_{\Delta t_k\to 0}\sum_{t_k\le t}|Y_{t_{k+1}}-Y_{t_k}|^2 \le \lim_{\Delta t_k\to 0}(\text{total variation of } Y \text{ on } [0,t])\cdot\max_{t_k}|Y_{t_{k+1}}-Y_{t_k}| = 0.$$
On the other hand, the integration-by-parts formula yields $Y_t^2 = 2\int_0^t Y_s\,dY_s + \langle Y\rangle_t$, so $Y_t^2$ is a local martingale. If $(T_n)_n$ is a localizing sequence of stopping times, by Fatou's lemma,
$$E[Y_t^2] \le \varliminf_n E[Y_{t\wedge T_n}^2] = E[Y_0^2] = 0.$$
So $Y \equiv 0$. Taking the derivative, we conclude $u = 0$.

4.16 (a)

Proof. Use Jensen's inequality for conditional expectations.

(b)

Proof. (i) $B_T^2 = T + 2\int_0^T B_s\,dB_s$. So $M_t = T + 2\int_0^t B_s\,dB_s$.

(ii) $B_T^3 = 3\int_0^T B_s^2\,dB_s + 3\int_0^T B_s\,ds = 3\int_0^T B_s^2\,dB_s + 3\big(TB_T - \int_0^T s\,dB_s\big)$. So
$$M_t = 3\int_0^t B_s^2\,dB_s + 3TB_t - 3\int_0^t s\,dB_s = \int_0^t 3\big(B_s^2 + (T-s)\big)\,dB_s.$$

(iii) $M_t = E[\exp(\sigma B_T)|\mathcal{F}_t] = E[\exp(\sigma B_T - \frac{1}{2}\sigma^2T)|\mathcal{F}_t]\,e^{\frac{1}{2}\sigma^2T} = Z_t e^{\frac{1}{2}\sigma^2T}$, where $Z_t = \exp(\sigma B_t - \frac{1}{2}\sigma^2t)$. Since $Z$ solves the SDE $dZ_t = \sigma Z_t\,dB_t$, we have
$$M_t = \Big(1 + \int_0^t \sigma Z_s\,dB_s\Big)e^{\frac{1}{2}\sigma^2T} = e^{\frac{1}{2}\sigma^2T} + \int_0^t \sigma\exp\big(\sigma B_s + \tfrac{\sigma^2}{2}(T-s)\big)\,dB_s.$$
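As a sanity check, the moment formula of 2.8 b) and the decomposition of 3.2 can be verified by simulation. The following is a minimal sketch (not part of the original exercises) in Python with numpy; the sample sizes, seed and time grid are arbitrary choices, and the Itô integral is approximated by the left-endpoint rule.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
t, n_paths, n_steps = 1.0, 20_000, 500
dt = t / n_steps

# Brownian increments and paths on a uniform grid of [0, t].
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)                                  # B[:, j] approximates B_{(j+1) dt}
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])    # left endpoints B_{j dt}

# Exercise 2.8 b): E[B_t^{2k}] = (2k)! t^k / (k! 2^k).
for k in (1, 2, 3):
    mc = np.mean(B[:, -1] ** (2 * k))
    exact = factorial(2 * k) * t**k / (factorial(k) * 2**k)
    print(f"E[B_t^{2*k}]: MC {mc:8.4f} vs exact {exact:8.4f}")

# Exercise 3.2: B_1^3 = 3 * int_0^1 B_s^2 dB_s + 3 * int_0^1 B_s ds.
# Ito integral with LEFT endpoints, ds-integral as a Riemann sum.
rhs = 3 * np.sum(B_left**2 * dB, axis=1) + 3 * np.sum(B_left, axis=1) * dt
print("mean |B_1^3 - RHS|:", np.mean(np.abs(B[:, -1] ** 3 - rhs)))  # -> 0 as n_steps grows
```

The residual in the last line is exactly $I + 3\sum_j B_{(j-1)/n}\big((\Delta B_j)^2 - \Delta t\big)$, which vanishes in $L^2$ as the mesh shrinks, matching the proof of 3.2.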
5.1 (ii)

Proof. Set $f(t,x) = x/(1+t)$; then by Itô's formula,
$$dX_t = df(t,B_t) = -\frac{B_t}{(1+t)^2}dt + \frac{dB_t}{1+t} = -\frac{X_t}{1+t}dt + \frac{1}{1+t}dB_t.$$

(iii)

Proof. By Itô's formula, $dX_t = \cos B_t\,dB_t - \frac{1}{2}\sin B_t\,dt$. Let $\tau = \inf\{s > 0 : B_s \notin [-\frac{\pi}{2},\frac{\pi}{2}]\}$. Then
$$X_{t\wedge\tau} = \int_0^{t\wedge\tau}\cos B_s\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds = \int_0^t \sqrt{1-\sin^2 B_s}\,1_{\{s\le\tau\}}\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds = \int_0^{t\wedge\tau}\sqrt{1-X_s^2}\,dB_s - \frac{1}{2}\int_0^{t\wedge\tau}X_s\,ds.$$
So for $t < \tau$, $X_t = \int_0^t\sqrt{1-X_s^2}\,dB_s - \frac{1}{2}\int_0^t X_s\,ds$.

(iv)

Proof. $dX_t^1 = dt$ is obvious. Set $f(t,x) = e^tx$; then
$$dX_t^2 = df(t,B_t) = e^tB_t\,dt + e^t\,dB_t = X_t^2\,dt + e^t\,dB_t.$$

5.3

Proof. Apply Itô's formula to $e^{-rt}X_t$.

5.5 (a)

Proof. $d(e^{-\mu t}X_t) = -\mu e^{-\mu t}X_t\,dt + e^{-\mu t}dX_t = \sigma e^{-\mu t}dB_t$. So
$$X_t = e^{\mu t}X_0 + \sigma\int_0^t e^{\mu(t-s)}\,dB_s.$$

(b)

Proof. $E[X_t] = e^{\mu t}E[X_0]$, and
$$X_t^2 = e^{2\mu t}X_0^2 + \sigma^2e^{2\mu t}\Big(\int_0^t e^{-\mu s}dB_s\Big)^2 + 2\sigma e^{2\mu t}X_0\int_0^t e^{-\mu s}dB_s.$$
Since $\int_0^t e^{-\mu s}dB_s$ is a martingale vanishing at time 0,
$$E[X_t^2] = e^{2\mu t}E[X_0^2] + \sigma^2e^{2\mu t}\int_0^t e^{-2\mu s}ds = e^{2\mu t}E[X_0^2] + \sigma^2\frac{e^{2\mu t}-1}{2\mu}.$$
So $Var[X_t] = E[X_t^2] - (E[X_t])^2 = e^{2\mu t}Var[X_0] + \sigma^2\frac{e^{2\mu t}-1}{2\mu}$.

5.6

Proof. We find the integrating factor $F_t$ as follows. Suppose $F_t$ satisfies the SDE $dF_t = \theta_t dt + \gamma_t dB_t$. Then
$$d(F_tY_t) = F_t dY_t + Y_t dF_t + dY_t\,dF_t = F_t(r\,dt + \alpha Y_t dB_t) + Y_t(\theta_t dt + \gamma_t dB_t) + \alpha\gamma_tY_t dt = (rF_t + \theta_tY_t + \alpha\gamma_tY_t)dt + (\alpha F_tY_t + \gamma_tY_t)dB_t. \quad (1)$$
Solving the system $\theta_t + \alpha\gamma_t = 0$, $\alpha F_t + \gamma_t = 0$, we get $\gamma_t = -\alpha F_t$ and $\theta_t = \alpha^2F_t$. So $dF_t = \alpha^2F_t dt - \alpha F_t dB_t$. To find $F_t$, set $Z_t = e^{-\alpha^2t}F_t$; then
$$dZ_t = -\alpha^2e^{-\alpha^2t}F_t dt + e^{-\alpha^2t}dF_t = e^{-\alpha^2t}(-\alpha)F_t dB_t = -\alpha Z_t dB_t.$$
Hence $Z_t = Z_0\exp(-\alpha B_t - \alpha^2t/2)$, so
$$F_t = e^{\alpha^2t}F_0e^{-\alpha B_t - \frac{\alpha^2t}{2}} = F_0e^{-\alpha B_t + \frac{\alpha^2t}{2}}.$$
Choose $F_0 = 1$ and plug it back into equation (1): $d(F_tY_t) = rF_t\,dt$. So
$$Y_t = F_t^{-1}\Big(Y_0 + r\int_0^t F_s\,ds\Big) = Y_0e^{\alpha B_t - \frac{\alpha^2t}{2}} + r\int_0^t e^{\alpha(B_t-B_s) - \frac{\alpha^2}{2}(t-s)}\,ds.$$

5.7 (a)

Proof. $d(e^tX_t) = e^t(X_t dt + dX_t) = e^t(m\,dt + \sigma dB_t)$. So
$$X_t = e^{-t}X_0 + m(1 - e^{-t}) + \sigma e^{-t}\int_0^t e^s\,dB_s.$$

(b)

Proof. $E[X_t] = e^{-t}E[X_0] + m(1 - e^{-t})$, and
$$E[X_t^2] = E[(e^{-t}X_0 + m(1-e^{-t}))^2] + \sigma^2e^{-2t}\int_0^t e^{2s}ds = e^{-2t}E[X_0^2] + 2m(1-e^{-t})e^{-t}E[X_0] + m^2(1-e^{-t})^2 + \frac{\sigma^2}{2}(1-e^{-2t}).$$
Hence $Var[X_t] = E[X_t^2] - (E[X_t])^2 = e^{-2t}Var[X_0] + \frac{1}{2}\sigma^2(1-e^{-2t})$.

5.9

Proof. Let $b(t,x) = \log(1+x^2)$ and $\sigma(t,x) = 1_{\{x>0\}}x$. Then $|b(t,x)| + |\sigma(t,x)| \le \log(1+x^2) + |x|$. Note $\log(1+x^2)/|x|$ is continuous on $\mathbb{R}\setminus\{0\}$ and has limit $0$ as $x\to 0$ and as $x\to\infty$, so it is bounded on $\mathbb{R}$. Therefore there exists a constant $C$ such that $|b(t,x)| + |\sigma(t,x)| \le C(1+|x|)$. Also, by the mean value theorem,
$$|b(t,x)-b(t,y)| + |\sigma(t,x)-\sigma(t,y)| \le \frac{2|\xi|}{1+\xi^2}|x-y| + |1_{\{x>0\}}x - 1_{\{y>0\}}y|$$
for some $\xi$ between $x$ and $y$. Since $\frac{2|\xi|}{1+\xi^2} \le 1$ and $|1_{\{x>0\}}x - 1_{\{y>0\}}y| \le |x-y|$, we get
$$|b(t,x)-b(t,y)| + |\sigma(t,x)-\sigma(t,y)| \le 2|x-y|.$$
The conditions of Theorem 5.2.1 are satisfied, and we have existence and uniqueness of a strong solution.

5.10

Proof. $X_t = Z + \int_0^t b(s,X_s)ds + \int_0^t \sigma(s,X_s)dB_s$. Since Jensen's inequality implies $(a_1+\cdots+a_n)^p \le n^{p-1}(a_1^p+\cdots+a_n^p)$ ($p\ge 1$, $a_1,\dots,a_n\ge 0$), we have, for $t \le T$ (enlarging $C$ if necessary to absorb the factor coming from Cauchy–Schwarz applied to the $ds$-integral),
$$E[|X_t|^2] \le 3\Big(E[|Z|^2] + C^2E\Big[\int_0^t(1+|X_s|)^2ds\Big] + C^2E\Big[\int_0^t(1+|X_s|)^2ds\Big]\Big) \le 3\Big(E[|Z|^2] + 4C^2E\Big[\int_0^t(1+|X_s|^2)ds\Big]\Big) \le 3E[|Z|^2] + 12C^2T + 12C^2\int_0^t E[|X_s|^2]ds = K_1 + K_2\int_0^t E[|X_s|^2]ds,$$
where $K_1 = 3E[|Z|^2] + 12C^2T$ and $K_2 = 12C^2$. By Gronwall's inequality, $E[|X_t|^2] \le K_1e^{K_2t}$.
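The closed-form moments in 5.7 b) lend themselves to a quick numerical check. Below is a minimal Euler–Maruyama sketch (Python/numpy; parameter values, seed and step size are arbitrary choices, not part of the exercise) comparing the sample mean and variance of $dX_t = (m - X_t)dt + \sigma dB_t$ at time $T$ with the formulas above.

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma, x0 = 2.0, 0.5, 0.0           # dX = (m - X) dt + sigma dB, X_0 = x0 deterministic
T, n_steps, n_paths = 3.0, 1_000, 50_000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    # Euler-Maruyama step for the mean-reverting SDE of Exercise 5.7
    X += (m - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

mean_exact = np.exp(-T) * x0 + m * (1 - np.exp(-T))     # part b), E[X_T]
var_exact = 0.5 * sigma**2 * (1 - np.exp(-2 * T))       # part b), with Var[X_0] = 0
print(f"mean: MC {X.mean():.4f} vs exact {mean_exact:.4f}")
print(f"var : MC {X.var():.4f} vs exact {var_exact:.4f}")
```

The agreement is up to $O(dt)$ discretization bias plus Monte Carlo noise.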
5.11

Proof. First, we check by the integration-by-parts formula that $Y_t = a(1-t) + bt + (1-t)\int_0^t\frac{dB_s}{1-s}$ solves the SDE: indeed,
$$dY_t = \Big(-a + b - \int_0^t\frac{dB_s}{1-s}\Big)dt + dB_t = \frac{b-Y_t}{1-t}dt + dB_t.$$
Set $X_t = (1-t)\int_0^t\frac{dB_s}{1-s}$; then $X_t$ is centered Gaussian, with variance
$$E[X_t^2] = (1-t)^2\int_0^t\frac{ds}{(1-s)^2} = (1-t) - (1-t)^2.$$
So $X_t$ converges in $L^2$ to 0 as $t\to 1$. Since $X_t$ is continuous a.s. for $t\in[0,1)$, we conclude that 0 is the unique a.s. limit of $X_t$ as $t\to 1$.

5.14 (i)

Proof.
$$dZ_t = d\big(u(B_1(t),B_2(t)) + iv(B_1(t),B_2(t))\big) = \nabla u\cdot(dB_1(t),dB_2(t)) + \tfrac{1}{2}\Delta u\,dt + i\big(\nabla v\cdot(dB_1(t),dB_2(t)) + \tfrac{1}{2}\Delta v\,dt\big) = (\nabla u + i\nabla v)\cdot(dB_1(t),dB_2(t)),$$
since $u$ and $v$ are harmonic. By the Cauchy–Riemann equations,
$$dZ_t = \frac{\partial u}{\partial x}(B(t))dB_1(t) - \frac{\partial v}{\partial x}(B(t))dB_2(t) + i\Big(\frac{\partial v}{\partial x}(B(t))dB_1(t) + \frac{\partial u}{\partial x}(B(t))dB_2(t)\Big) = \Big(\frac{\partial u}{\partial x}(B(t)) + i\frac{\partial v}{\partial x}(B(t))\Big)\big(dB_1(t) + i\,dB_2(t)\big) = F'(B(t))\,dB(t).$$

(ii)

Proof. By the result of (i), $de^{\alpha B(t)} = \alpha e^{\alpha B(t)}dB(t)$ (note $dB(t)\cdot dB(t) = 0$ for complex Brownian motion). So $Z_t = Z_0e^{\alpha B(t)}$ solves the complex SDE $dZ_t = \alpha Z_t\,dB(t)$.

5.15

Proof. The deterministic analog of this SDE is a Bernoulli equation $\frac{dy_t}{dt} = rKy_t - ry_t^2$. The correct substitution is to multiply both sides by $-y_t^{-2}$ and set $z_t = y_t^{-1}$; then we have a linear equation $\frac{dz_t}{dt} = -rKz_t + r$. Similarly, we multiply both sides of the SDE by $-X_t^{-2}$ and set $Z_t = X_t^{-1}$. Then
$$-\frac{dX_t}{X_t^2} = -\frac{rK\,dt}{X_t} + r\,dt - \beta\frac{dB_t}{X_t},$$
and
$$dZ_t = -\frac{dX_t}{X_t^2} + \frac{dX_t\cdot dX_t}{X_t^3} = -rKZ_t dt + r\,dt - \beta Z_t dB_t + \beta^2Z_t dt = r\,dt - (rK - \beta^2)Z_t dt - \beta Z_t dB_t.$$
Define $Y_t = e^{(rK-\beta^2)t}Z_t$; then
$$dY_t = e^{(rK-\beta^2)t}\big(dZ_t + (rK-\beta^2)Z_t dt\big) = e^{(rK-\beta^2)t}(r\,dt - \beta Z_t dB_t) = re^{(rK-\beta^2)t}dt - \beta Y_t dB_t.$$
Now we imitate the solution of Exercise 5.6. Consider an integrating factor $N_t$ such that $dN_t = \theta_t dt + \gamma_t dB_t$ and
$$d(Y_tN_t) = N_t dY_t + Y_t dN_t + dN_t\cdot dY_t = N_tre^{(rK-\beta^2)t}dt - \beta N_tY_t dB_t + Y_t\theta_t dt + Y_t\gamma_t dB_t - \beta\gamma_tY_t dt.$$
Solving the system $\theta_t = \beta\gamma_t$, $\gamma_t = \beta N_t$, we get $dN_t = \beta^2N_t dt + \beta N_t dB_t$, so $N_t = N_0e^{\beta B_t + \frac{\beta^2}{2}t}$. Choose $N_0 = 1$; then
$$d(Y_tN_t) = N_tre^{(rK-\beta^2)t}dt = re^{(rK-\frac{\beta^2}{2})t + \beta B_t}dt,$$
and $N_tY_t = Y_0 + r\int_0^t e^{(rK-\frac{\beta^2}{2})s + \beta B_s}ds$ with $Y_0 = Z_0 = X_0^{-1} = x^{-1}$. So
$$X_t = Z_t^{-1} = e^{(rK-\beta^2)t}Y_t^{-1} = \frac{e^{(rK-\frac{\beta^2}{2})t + \beta B_t}}{x^{-1} + r\int_0^t e^{(rK-\frac{\beta^2}{2})s + \beta B_s}ds}.$$

5.15 (another solution)

Proof. We can also use the method of Exercise 5.16. Here $f(t,x) = rKx - rx^2$ and $c(t)\equiv\beta$, so $F_t = e^{-\beta B_t + \frac{\beta^2}{2}t}$ and $Y_t = F_tX_t$ satisfies
$$dY_t = F_t\big(rKF_t^{-1}Y_t - rF_t^{-2}Y_t^2\big)dt.$$
Dividing both sides by $-Y_t^2$, we have $dY_t^{-1} = -Y_t^{-2}dY_t = (-rKY_t^{-1} + rF_t^{-1})dt$, and
$$d(e^{rKt}Y_t^{-1}) = e^{rKt}(rKY_t^{-1}dt + dY_t^{-1}) = e^{rKt}rF_t^{-1}dt.$$
Hence $e^{rKt}Y_t^{-1} = Y_0^{-1} + r\int_0^t e^{rKs}e^{\beta B_s - \frac{\beta^2}{2}s}ds$, and
$$X_t = F_t^{-1}Y_t = e^{\beta B_t - \frac{\beta^2}{2}t}\cdot\frac{e^{rKt}}{Y_0^{-1} + r\int_0^t e^{(rK-\frac{\beta^2}{2})s + \beta B_s}ds} = \frac{e^{(rK-\frac{\beta^2}{2})t + \beta B_t}}{x^{-1} + r\int_0^t e^{(rK-\frac{\beta^2}{2})s + \beta B_s}ds}.$$

5.16 (a) and (b)

Proof. Suppose $F_t$ is a process satisfying the SDE $dF_t = \theta_t dt + \gamma_t dB_t$. Then
$$d(F_tX_t) = F_t\big(f(t,X_t)dt + c(t)X_t dB_t\big) + X_t\theta_t dt + X_t\gamma_t dB_t + c(t)\gamma_tX_t dt = \big(F_tf(t,X_t) + c(t)\gamma_tX_t + X_t\theta_t\big)dt + \big(c(t)F_tX_t + \gamma_tX_t\big)dB_t.$$
Solving the system $c(t)\gamma_t + \theta_t = 0$, $c(t)F_t + \gamma_t = 0$, we have $\gamma_t = -c(t)F_t$ and $\theta_t = c^2(t)F_t$. So $dF_t = c^2(t)F_t dt - c(t)F_t dB_t$, hence
$$F_t = F_0\exp\Big(\frac{1}{2}\int_0^t c^2(s)ds - \int_0^t c(s)dB_s\Big).$$
Choosing $F_0 = 1$, we get the desired integrating factor, and $d(F_tX_t) = F_tf(t,X_t)dt$.

8.2

Proof. By Kolmogorov's backward equation (Theorem 8.1.1), it suffices to solve the SDE $dX_t = \alpha X_t dt + \beta X_t dB_t$. This is geometric Brownian motion: $X_t = X_0e^{(\alpha-\frac{\beta^2}{2})t + \beta B_t}$. Then
$$u(t,x) = E^x[f(X_t)] = \int_{-\infty}^{\infty} f\big(xe^{(\alpha-\frac{\beta^2}{2})t + \beta y}\big)\frac{e^{-\frac{y^2}{2t}}}{\sqrt{2\pi t}}\,dy.$$

8.3

Proof. By (8.6.34) and Dynkin's formula, we have
$$E^x[f(X_t)] = \int_{\mathbb{R}^n}f(y)p_t(x,y)dy = f(x) + E^x\Big[\int_0^t Af(X_s)ds\Big] = f(x) + \int_0^t P_sAf(x)ds = f(x) + \int_0^t\int_{\mathbb{R}^n}p_s(x,y)A_yf(y)\,dy\,ds.$$
Differentiating w.r.t. $t$, we have
$$\int_{\mathbb{R}^n}f(y)\frac{\partial p_t(x,y)}{\partial t}dy = \int_{\mathbb{R}^n}p_t(x,y)A_yf(y)dy = \int_{\mathbb{R}^n}A_y^*p_t(x,y)f(y)dy,$$
where the second equality comes from integration by parts. Since $f$ is arbitrary, we must have $A_y^*p_t(x,y) = \frac{\partial p_t(x,y)}{\partial t}$.

8.4

Proof. The expected total length of time that $B_\cdot$ stays in $F$ is
$$T = E\Big[\int_0^\infty 1_F(B_t)dt\Big] = \int_0^\infty\int_F\frac{e^{-\frac{|x|^2}{2t}}}{(2\pi t)^{n/2}}\,dx\,dt.$$
(Sufficiency) If $m(F) = 0$, then $\int_F(2\pi t)^{-n/2}e^{-|x|^2/2t}dx = 0$ for every $t > 0$, hence $T = 0$. (Necessity) If $T = 0$, then for a.e. $t$, $\int_F e^{-\frac{|x|^2}{2t}}dx = 0$. For such a $t > 0$, since $e^{-\frac{|x|^2}{2t}} > 0$ everywhere in $\mathbb{R}^n$, we must have $m(F) = 0$.
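The representation in 8.2 can be sanity-checked numerically: simulate the SDE itself by Euler–Maruyama (making no use of the closed form) and compare with the Gaussian-integral formula. The following is a minimal Python/numpy sketch; the test function $f$, parameters, seed and grids are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(8)
alpha, beta, t, x = 0.5, 0.8, 1.0, 2.0
f = lambda z: np.maximum(z - 2.0, 0.0)      # test function; any f with growth control works
n_paths, n_steps = 200_000, 400
dt = t / n_steps

# Euler-Maruyama for dX = alpha X dt + beta X dB, X_0 = x (no closed form used here).
X = np.full(n_paths, x)
for _ in range(n_steps):
    X += alpha * X * dt + beta * X * np.sqrt(dt) * rng.normal(size=n_paths)
mc = f(X).mean()

# Closed-form representation u(t,x) from 8.2, by simple quadrature over y.
y = np.linspace(-8 * np.sqrt(t), 8 * np.sqrt(t), 160_001)
vals = f(x * np.exp((alpha - beta**2 / 2) * t + beta * y)) * np.exp(-y**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
closed = np.sum(vals) * (y[1] - y[0])
print(f"u(t,x): Euler-Maruyama MC {mc:.4f} vs closed form {closed:.4f}")
```

The two numbers should agree up to Monte Carlo noise and the $O(dt)$ weak error of the Euler scheme.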
8.5

Proof. Apply the Feynman–Kac formula:
$$u(t,x) = E^x\big[e^{\int_0^t\rho\,ds}f(B_t)\big] = e^{\rho t}(2\pi t)^{-\frac{n}{2}}\int_{\mathbb{R}^n}e^{-\frac{|x-y|^2}{2t}}f(y)\,dy.$$

8.6

Proof. The major difficulty is to justify the use of the Feynman–Kac formula when $(x-K)^+ \notin C_0^2$. For the conditions under which the Feynman–Kac formula can indeed be applied to $(x-K)^+$, cf. the book of Karatzas & Shreve, page 366.

8.7

Proof. Let $\alpha_t = \inf\{s > 0 : \beta_s > t\}$; then $X_{\alpha_t}$ is a Brownian motion. Since $\beta_\cdot$ is continuous and $\lim_{t\to\infty}\beta_t = \infty$ a.s., by the law of the iterated logarithm for Brownian motion we have
$$\limsup_{t\to\infty}\frac{X_{\alpha_{\beta_t}}}{\sqrt{2\beta_t\log\log\beta_t}} = 1,\quad a.s.$$
Assuming $\alpha_{\beta_t} = t$ (true when, for example, $\beta_\cdot$ is strictly increasing), we are done.

8.8

Proof. Since $dN_t = (u(t) - E[u(t)|\mathcal{G}_t])dt + dB_t = dZ_t - E[u(t)|\mathcal{G}_t]dt$, we have $\mathcal{N}_t := \sigma(N_s : s\le t) \subset \mathcal{G}_t$. So $E\big[u(t) - E[u(t)|\mathcal{G}_t]\,\big|\,\mathcal{N}_t\big] = 0$. By Corollary 8.4.5, $N$ is a Brownian motion.

8.9

Proof. By Theorem 8.5.7 (time change for Itô integrals), $\int_0^{\alpha_t}e^s\,dB_s = \tilde{B}_t$, where $\tilde{B}_t$ is a Brownian motion and the time change $\alpha_t$ is determined by $\int_0^{\alpha_t}e^{2s}ds = t$, i.e. $e^{2\alpha_t} = 1 + 2t$.

8.10

Proof. By Itô's formula, $dX_t = 2B_t\,dB_t + dt$. By Theorem 8.4.3 and $4B_t^2 = 4|X_t|$, we are done.

8.11 a)

Proof. Let $Z_t = \exp\{-B_t - \frac{t}{2}\}$; it is easy to see that $Z$ is a martingale. Define $Q_T$ by $dQ_T = Z_TdP$; then $Q_T$ is a probability measure on $\mathcal{F}_T$ and $Q_T \sim P$. By Girsanov's theorem (Theorem 8.6.6), $(Y_t)_{t\le T}$ is a Brownian motion under $Q_T$. Since $Z$ is a martingale, $dQ_T|_{\mathcal{F}_t} = Z_tdP = dQ_t|_{\mathcal{F}_t}$ for any $t\le T$. This allows us to define a measure $Q$ on $\mathcal{F}_\infty$ by setting $Q|_{\mathcal{F}_T} = Q_T$ for all $T > 0$.

b)

Proof. By the law of the iterated logarithm, if $\hat{B}$ is a Brownian motion, then
$$\limsup_{t\to\infty}\frac{\hat{B}_t}{\sqrt{2t\log\log t}} = 1\ a.s.\quad\text{and}\quad\liminf_{t\to\infty}\frac{\hat{B}_t}{\sqrt{2t\log\log t}} = -1\ a.s.$$
So under $P$,
$$\limsup_{t\to\infty}Y_t = \limsup_{t\to\infty}\Big(\frac{B_t}{\sqrt{2t\log\log t}} + \frac{t}{\sqrt{2t\log\log t}}\Big)\sqrt{2t\log\log t} = \infty,\quad a.s.,$$
and similarly $\liminf_{t\to\infty}Y_t = \infty$ a.s. Hence $P(\lim_{t\to\infty}Y_t = \infty) = 1$. Under $Q$, $Y$ is a Brownian motion, and the law of the iterated logarithm implies that $\lim_{t\to\infty}Y_t$ does not exist, so $Q(\lim_{t\to\infty}Y_t = \infty) = 0$. This is not a contradiction, since Girsanov's theorem only requires $Q \sim P$ on $\mathcal{F}_T$ for each $T > 0$, but not necessarily on $\mathcal{F}_\infty$.

8.12

Proof. Write $dY_t = \beta\,dt + \theta\,dB_t$. We solve the equation $\theta u = \beta$ and get $u = (-3, 1)^T$. Put
$$M_t = \exp\Big\{-\int_0^t u\cdot dB_s - \frac{1}{2}\int_0^t|u|^2ds\Big\} = \exp\{3B_1(t) - B_2(t) - 5t\}$$
and $dQ = M_TdP$ on $\mathcal{F}_T$. Then by Theorem 8.6.6, $dY_t = \theta\,d\tilde{B}_t$, with $\tilde{B}_t = (-3t + B_1(t),\, t + B_2(t))$ a Brownian motion w.r.t. $Q$.
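The change of measure in 8.11 a) is easy to see in simulation: reweighting $P$-samples by $Z_T$ must reproduce Brownian statistics for $Y_T = T + B_T$. Below is a minimal Python/numpy sketch (test function, horizon and seed are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 2.0, 1_000_000
g = lambda y: np.maximum(y - 1.0, 0.0)   # any integrable test function

BT = rng.normal(0.0, np.sqrt(T), n)      # B_T sampled under P
YT = T + BT                              # Y_T = T + B_T
ZT = np.exp(-BT - T / 2)                 # Z_T = dQ/dP on F_T, as in 8.11 a)

lhs = (ZT * g(YT)).mean()                # E_Q[g(Y_T)], computed by reweighting P-samples
rhs = g(BT).mean()                       # E[g(B_T)] for a standard Brownian motion
print(f"E_Q[g(Y_T)]: {lhs:.4f}   E[g(B_T)]: {rhs:.4f}")   # agree: Y is a Q-Brownian motion
```

This is exactly the identity $E_Q[g(Y_T)] = E_P[Z_T\,g(Y_T)] = E[g(B_T)]$, since under $Q$ the drifted process $Y$ is a standard Brownian motion.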
Proof. $\{X_t^x \ge M\} \in \mathcal{F}_t$, so it suffices to show $Q(X_t^x \ge M) > 0$ for any probability measure $Q$ which is equivalent to $P$ on $\mathcal{F}_t$. By Girsanov's theorem, we can find such a $Q$ under which $X_t$ is a Brownian motion. So $Q(X_t^x \ge M) > 0$, which implies $P(X_t^x \ge M) > 0$.

b)

Proof. Use the law of the iterated logarithm; the proof is similar to that of Exercise 8.11 b).

8.15 a)

Proof. We define a probability measure $Q$ by $dQ|_{\mathcal{F}_t} = M_tdP|_{\mathcal{F}_t}$, where
$$M_t = \exp\Big\{\int_0^t\alpha(B_s)dB_s - \frac{1}{2}\int_0^t\alpha^2(B_s)ds\Big\}.$$
Then by Girsanov's theorem, $\hat{B}_t := B_t - \int_0^t\alpha(B_s)ds$ is a Brownian motion under $Q$. So under $Q$, $B_t$ satisfies the SDE $dB_t = \alpha(B_t)dt + d\hat{B}_t$. By Theorem 8.1.4, the solution can be represented as
$$u(t,x) = E_Q^x[f(B_t)] = E^x\Big[\exp\Big(\int_0^t\alpha(B_s)dB_s - \frac{1}{2}\int_0^t\alpha^2(B_s)ds\Big)f(B_t)\Big].$$

Remark: To see the advantage of this approach, note that the given PDE is like Kolmogorov's backward equation, so directly applying Theorem 8.1.1 gives the solution $E^x[f(X_t)]$, where $X$ solves the SDE $dX_t = \alpha(X_t)dt + dB_t$. However, the formula $E^x[f(X_t)]$ is not sufficiently explicit if $\alpha$ is non-trivial and the expression for $X$ is hard to obtain. Resorting to Girsanov's theorem makes the formula more explicit.

b)

Proof. With $\alpha = \nabla\gamma$, Itô's formula gives $\gamma(B_t) - \gamma(B_0) = \int_0^t\nabla\gamma(B_s)\cdot dB_s + \frac{1}{2}\int_0^t\Delta\gamma(B_s)ds$, so
$$e^{\int_0^t\alpha(B_s)dB_s - \frac{1}{2}\int_0^t\alpha^2(B_s)ds} = e^{\gamma(B_t)-\gamma(B_0) - \int_0^t\frac{1}{2}(|\nabla\gamma|^2 + \Delta\gamma)(B_s)ds}.$$
Hence
$$u(t,x) = e^{-\gamma(x)}E^x\Big[e^{\gamma(B_t)}f(B_t)e^{-\int_0^t\frac{1}{2}(|\nabla\gamma|^2+\Delta\gamma)(B_s)ds}\Big].$$

c)

Proof. By the Feynman–Kac formula and part b),
$$v(t,x) = E^x\Big[e^{\gamma(B_t)}f(B_t)e^{-\int_0^t\frac{1}{2}(|\nabla\gamma|^2+\Delta\gamma)(B_s)ds}\Big] = e^{\gamma(x)}u(t,x).$$

8.16 a)

Proof. Let $L_t = -\int_0^t\sum_{i=1}^n\frac{\partial h}{\partial x_i}(X_s)dB_s^i$. Then $L$ is a square-integrable martingale, and $\langle L\rangle_T = \int_0^T|\nabla h(X_s)|^2ds$ is bounded, since $h \in C_0^1(\mathbb{R}^n)$. By Novikov's condition, $M_t = \exp\{L_t - \frac{1}{2}\langle L\rangle_t\}$ is a martingale. We define $\bar{P}$ on $\mathcal{F}_T$ by $d\bar{P} = M_TdP$. Then $X$, which satisfies $dX_t = \nabla h(X_t)dt + dB_t$, is a Brownian motion under $\bar{P}$, and
$$E^x[f(X_t)] = \bar{E}^x[M_t^{-1}f(X_t)] = \bar{E}^x\Big[e^{\int_0^t\sum_i\frac{\partial h}{\partial x_i}(X_s)dX_s^i - \frac{1}{2}\int_0^t|\nabla h(X_s)|^2ds}f(X_t)\Big] = E^x\Big[e^{\int_0^t\sum_i\frac{\partial h}{\partial x_i}(B_s)dB_s^i - \frac{1}{2}\int_0^t|\nabla h(B_s)|^2ds}f(B_t)\Big].$$
Applying Itô's formula to $Z_t = h(B_t)$, we get
$$h(B_t) - h(B_0) = \int_0^t\sum_{i=1}^n\frac{\partial h}{\partial x_i}(B_s)dB_s^i + \frac{1}{2}\int_0^t\sum_{i=1}^n\frac{\partial^2h}{\partial x_i^2}(B_s)ds.$$
So
$$E^x[f(X_t)] = E^x\Big[e^{h(B_t)-h(B_0)}e^{-\int_0^tV(B_s)ds}f(B_t)\Big],\qquad V = \frac{1}{2}\big(|\nabla h|^2 + \Delta h\big).$$

b)

Proof. If $Y$ is the process obtained by killing $B_t$ at rate $V$, then it has transition operator
$$T_t^Y(g,x) = E^x\big[e^{-\int_0^tV(B_s)ds}g(B_t)\big].$$
So the equality in part a) can be written as $T_t^X(f,x) = e^{-h(x)}T_t^Y(fe^h,x)$.

8.17

Proof.
$$dY(t) = \begin{pmatrix} dY_1(t) \\ dY_2(t) \end{pmatrix} = \begin{pmatrix} \beta_1(t) \\ \beta_2(t) \end{pmatrix}dt + \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 2 \end{pmatrix}\begin{pmatrix} dB_1(t) \\ dB_2(t) \\ dB_3(t) \end{pmatrix}.$$
So equation (8.6.17) has the form
$$\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 2 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} = \begin{pmatrix} \beta_1(t) \\ \beta_2(t) \end{pmatrix}.$$
The general solution is $u_3 = \beta_1 - \beta_2$ and $u_1 = -2u_2 + \beta_1 - 3(\beta_1 - \beta_2) = -2u_2 - 2\beta_1 + 3\beta_2$, with $u_2$ free. Defining $Q$ by (8.6.19), there are infinitely many equivalent martingale measures $Q$, as $u_2$ varies.

9.2 (i)

Proof. The book's solution is detailed enough. We only comment that for any bounded or positive $g \in \mathcal{B}(\mathbb{R}_+\times\mathbb{R})$, $E^{s,x}[g(X_t)] = E[g(s+t,B_t^x)]$, where the left-hand side is the expectation under the measure induced by $X_t^{s,x}$ on $\mathbb{R}^2$, while the right-hand side is the expectation under the original given probability measure $P$.

Remark: The adding-one-dimension trick in the solution is quite typical and useful. Often in applications, the SDE of interest is not homogeneous and its coefficients are functions of both $X$ and $t$. However, to obtain the (strong) Markov property, it is necessary that the SDE be homogeneous. If we augment the original SDE with an additional equation $d\bar{X}_t = dt$ (or $d\bar{X}_t = -dt$), then the system is an $(n+1)$-dimensional SDE driven by an $m$-dimensional BM. The solution $Y_t^{s,x} = (\bar{X}_t, X_t)$ ($\bar{X}_0 = s$ and $X_0 = x$) can be identified with a probability measure $P^{s,x} = Y^{s,x}(P)$, where $Y^{s,x}(P)$ means the distribution of $Y^{s,x}$. With this perspective, we have $E^{s,x}[g(X_t)] = E[g(t+s,B_t^x)]$.

Abstractly speaking, the (strong) Markov property of an SDE solution can be formulated precisely as follows. Suppose we have a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},P)$ on which an $m$-dimensional continuous semimartingale $Z$ is defined. Then we can consider an $n$-dimensional SDE driven by $Z$: $dX_t = f(t,X_t)dZ_t$. If $X^x$ is a solution with $X_0 = x$, the distribution $X^x(P)$ of $X^x$, denoted by $P^x$, induces a probability measure on $C(\mathbb{R}_+,\mathbb{R}^n)$. The (strong) Markov property then means that the coordinate process defined on $C(\mathbb{R}_+,\mathbb{R}^n)$ is a (strong) Markov process under the family of measures $(P^x)_{x\in\mathbb{R}^n}$. Usually we need the SDE $dX_t = f(t,X_t)dZ_t$ to be homogeneous, i.e. $f(t,x) = f(x)$, and the driving process $Z$ to be itself a Markov process. When $Z$ is a BM, we emphasize that it is a standard BM (cf. [8], Chapter IX, Definition 1.2).

9.5 a)

Proof. If $\frac{1}{2}\Delta u = -\lambda u$ in $D$, then by the integration-by-parts formula,
$$-\lambda\langle u,u\rangle = -\lambda\int_D u^2(x)dx = \frac{1}{2}\int_D u(x)\Delta u(x)dx = -\frac{1}{2}\int_D\nabla u(x)\cdot\nabla u(x)dx \le 0.$$
So $\lambda \ge 0$. Because $u$ is not identically zero (and $\lambda = 0$ would force $\nabla u \equiv 0$, hence $u \equiv 0$ since $u$ vanishes on $\partial D$), we must have $\lambda > 0$.
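The $h$-transform identity of 8.16 a) can be tested numerically in one dimension. The following is a sketch only (Python/numpy; the choice $h = \sin$ is an assumption made for illustration — it is not compactly supported, but it is bounded with bounded derivatives, which is enough for Novikov's condition): it compares direct simulation of $dX = h'(X)dt + dB$ with the killed-Brownian-motion representation.

```python
import numpy as np

rng = np.random.default_rng(4)
t, n_steps, n_paths, x0 = 1.0, 400, 100_000, 0.3
dt = t / n_steps
h, dh = np.sin, np.cos                           # stand-in for h (assumption: h = sin)
V = lambda z: 0.5 * (np.cos(z)**2 - np.sin(z))   # V = (|h'|^2 + h'')/2 for h = sin
f = np.cos                                       # test function

# Left-hand side: E^x[f(X_t)] with dX = h'(X) dt + dB, by Euler-Maruyama.
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += dh(X) * dt + np.sqrt(dt) * rng.normal(size=n_paths)
lhs = f(X).mean()

# Right-hand side: E^x[ e^{h(B_t)-h(x)} e^{-int_0^t V(B_s) ds} f(B_t) ] for plain BM.
B = np.full(n_paths, x0)
intV = np.zeros(n_paths)
for _ in range(n_steps):
    intV += V(B) * dt                            # left-point rule for the time integral
    B += np.sqrt(dt) * rng.normal(size=n_paths)
rhs = (np.exp(h(B) - h(x0) - intV) * f(B)).mean()
print(f"E[f(X_t)]: {lhs:.4f}   h-transform form: {rhs:.4f}")
```

Both estimators target the same quantity, so they should agree up to Monte Carlo noise and $O(dt)$ discretization bias.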
(x)dx = 1 u(x) · u(x)dx ≤ So λ ≥ Because u is not identically zero, we must have D u(x)∆u(x)dx = − D λ > b) Proof We follow the hint Let u be a solution of (9.3.31) with λ = ρ Applying Dynkin’s formula to the process dYt = (dt, dBt ) and the function f (t, x) = eρt u(x), we get τ ∧n E (t,x) [f (Yτ ∧n )] = f (t, x) + E (t,x) Lf (Ys )ds Since Lf (t, x) = ρeρt u(x) + 21 eρt ∆u(x) = 0, we have E (t,x) [eρτ ∧n u(Bτ ∧n )] = eρt u(x) Let t = and n ↑ ∞, we are done Note ∀ξ ∈ bF∞ , E (t,x) [ξ] = E x [ξ] (cf (7.1.7)) c) Proof This is straightforward from b) 9.6 Proof Suppose f ∈ C02 (Rn ) and let g(t, x) = e−αt f (x) If τ satisfies the condition E x [τ ] < ∞, then by Dynkin’s formula applied to Y and y, we have τ E (t,x) [e−ατ f (Xτ )] = e−αt f (x) + E (t,x) ( That is, ∂ + A)g(s, Xs )ds ∂s τ E x [e−ατ f (Xτ )] = e−ατ f (x) + E x [ e−αs (−α + A)f (Xs )ds] Let t = 0, we get τ E x [e−ατ f (Xτ )] = f (x) + E x [ e−αs (A − α)f (Xs )ds] If α > 0, then for any stopping time τ , we have τ ∧n E x [e−ατ ∧n f (Xτ ∧n )] = f (x) + E x [ e−αs (A − α)f (Xs )ds] Let n ↑ ∞ and apply dominated convergence theorem, we are done 9.7 a) 23 Proof Without loss of generality, assume y = First, we consider the case x = Following the hint and note ln |x| is harmonic in R2 \{0}, we have E x [f (Bτ )] = f (x), since E x [τ ] = 21 E x [|Bτ |2 ] < ∞ If we define τρ = inf{t > : |Bt | ≤ ρ} and τR = inf{t > : |Bt | ≥ R}, then P x (τρ < τR ) ln ρ + P x (τρ > τR ) ln R = ln |x|, P x (τρ < τR ) + P x (τρ > τR ) = ln R−ln |x| ln R−ln ρ Hence ln R−ln |x| limR→∞ limρ→0 ln R−ln ρ = So P x (τρ < τR ) = P x (τ0 < ∞) = limR→∞ P x (τρ < τR ) = limR→∞ limρ→0 P x (τρ < τR ) = For the case x = 0, we have P (∃ t > 0, Bt = 0) = P (∃ > 0, τ0 ◦ θ < ∞) = P (∪ >0, ∈Q+ {τ0 ◦ θ P (τ0 ◦ θ < ∞) < ∞}) = lim = lim E [P B (τ0 < ∞)] →0 →0 z2 = lim = e− √ P z (τ0 < ∞)dz 2π →0 b) ˜t = Proof B −1 0 Bt and −1 0 ˜ is also a Brownian motion is orthogonal, so B c) Proof P (τD = 0) = lim →0 P (τD ≤ ) ≥ lim (1) P (∃ t ∈ (0, ], Bt = P (∃ t ∈ (0, ], = (2) Bt (1) →0 P (∃ t ∈ (0, ], Bt (2) ≥ 0, Bt (1) = 0) + P (∃ t ∈ (0, ], Bt = 0) + P (∃ t ∈ (0, ], (1) (2) ≥ 0, Bt (1) Bt = 0, (2) Bt (2) = 0) Part a) implies (2) ≤ 0, Bt = 0) = 0) (1) (2) And part b) implies P (∃ t ∈ (0, ], Bt ≥ 0, Bt = 0) = P (∃ t ∈ (0, ], Bt ≤ 0, Bt = 0) So (1) (2) P (∃ t ∈ (0, ], Bt ≥ 0, Bt = 0) = 21 Hence P (τD = 0) ≥ 12 By Blumenthal’s 0-1 law, P (τD = 0) = 1, i.e is a regular boundary point d) (2) Proof P (τD = 0) ≤ P (∃ t > 0, Bt = 0) ≤ P (∃ t > 0, Bt boundary point (3) = Bt = 0) = So is an irregular 9.9 a) Proof Assume g has a local maximum at x ∈ G Let U ⊂⊂ G be an open set that contains x, then g(x) = E x [g(XτU )] and g(x) ≥ g(XτU ) on {τU < ∞} When X is non-degenerate, P x (τU < ∞) = So we must have g(x) = g(XτU ) a.s This implies g is locally a constant Since G is connected, g is identically a constant 9.10 24 Proof Consider the diffusion process Y that satisfies dt dXt dYt = dt αXt dt + βXt dBt = dt + dBt αXt βXt = Let τ = inf{t > : Yt ∈ (0, T ) × (0, ∞)}, then by Theorem 9.3.3, τ K(Xs )e−ρs ds] = E (t,x) [e−ρτ φ(Xτ )] + E (t,x) [ f (t, x) T −t K(Xsx )e−ρ(s+t) ds], = E[e−ρ(T −t) φ(XTx −t )] + E[ where Xtx = xe(α− β2 )t+βBt Then it’s easy to calculate T −t f (t, x) = e−ρ(T −t) E[φ(XTx −t )] + e−ρ(s+t) E[K(Xsx )]ds 9.11 a) Proof First assume F is closed Let {φn }n≥1 be a sequence of bounded continuous functions defined on ∂D such that φn → 1F boundedly This is possible due to Tietze extension theorem Let hn (x) = E x [φn (Bτ )] ¯ and ∆hn (x) = in D So by Poisson formula, for z = reiθ ∈ D, Then 
9.10

Proof. Consider the diffusion process $Y$ that satisfies
$$dY_t = \begin{pmatrix} dt \\ dX_t \end{pmatrix} = \begin{pmatrix} 1 \\ \alpha X_t \end{pmatrix}dt + \begin{pmatrix} 0 \\ \beta X_t \end{pmatrix}dB_t.$$
Let $\tau = \inf\{t > 0 : Y_t \notin (0,T)\times(0,\infty)\}$. Then by Theorem 9.3.3,
$$f(t,x) = E^{(t,x)}[e^{-\rho\tau}\phi(X_\tau)] + E^{(t,x)}\Big[\int_0^\tau K(X_s)e^{-\rho s}ds\Big] = E\big[e^{-\rho(T-t)}\phi(X_{T-t}^x)\big] + E\Big[\int_0^{T-t}K(X_s^x)e^{-\rho(s+t)}ds\Big],$$
where $X_t^x = xe^{(\alpha-\frac{\beta^2}{2})t + \beta B_t}$. Then it is easy to calculate
$$f(t,x) = e^{-\rho(T-t)}E[\phi(X_{T-t}^x)] + \int_0^{T-t}e^{-\rho(s+t)}E[K(X_s^x)]ds.$$

9.11 a)

Proof. First assume $F$ is closed. Let $\{\phi_n\}_{n\ge1}$ be a sequence of bounded continuous functions defined on $\partial D$ such that $\phi_n \to 1_F$ boundedly; this is possible by the Tietze extension theorem. Let $h_n(x) = E^x[\phi_n(B_\tau)]$. Then by Theorem 9.2.14, $h_n \in C(\bar{D})$ and $\Delta h_n = 0$ in $D$. So by the Poisson formula, for $z = re^{i\theta} \in D$,
$$h_n(z) = \frac{1}{2\pi}\int_0^{2\pi}P_r(t-\theta)h_n(e^{it})dt.$$
Let $n\to\infty$: $h_n(z) \to E^z[1_F(B_\tau)] = P^z(B_\tau\in F)$ by the bounded convergence theorem, and the right-hand side converges to $\frac{1}{2\pi}\int_0^{2\pi}P_r(t-\theta)1_F(e^{it})dt$ by the dominated convergence theorem. Hence
$$P^z(B_\tau\in F) = \frac{1}{2\pi}\int_0^{2\pi}P_r(t-\theta)1_F(e^{it})dt.$$
Then by the $\pi$-$\lambda$ theorem and the fact that the Borel $\sigma$-field is generated by closed sets, we conclude that the formula holds for any Borel subset $F$ of $\partial D$.

b)

Proof. Let $B$ be a BM starting at 0. By Example 8.5.9, $\phi(B_t)$ is, after a change of time scale $\alpha(t)$ and under the original probability measure $P$, a BM in the plane. For all $F\in\mathcal{B}(\mathbb{R})$,
$$P(B\text{ exits }D\text{ from }\psi(F)) = P(\phi(B)\text{ exits the upper half plane from }F) = P(\phi(B)_{\alpha(t)}\text{ exits the upper half plane from }F) = \text{probability that a BM starting at }i\text{ exits from }F = \mu(F).$$
So by part a), $\mu(F) = \frac{1}{2\pi}\int_0^{2\pi}1_{\psi(F)}(e^{it})dt = \frac{1}{2\pi}\int_0^{2\pi}1_F(\phi(e^{it}))dt$. This implies
$$\int_{\mathbb{R}}f(\xi)d\mu(\xi) = \frac{1}{2\pi}\int_0^{2\pi}f(\phi(e^{it}))dt = \frac{1}{2\pi i}\oint_{\partial D}\frac{f(\phi(z))}{z}dz.$$

c)

Proof. By the change-of-variable formula,
$$\int_{\mathbb{R}}f(\xi)d\mu(\xi) = \frac{1}{\pi}\int_{\partial H}\frac{f(\omega)}{|\omega - i|^2}d\omega = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{f(x)}{x^2+1}dx.$$

d)

Proof. Let $g(z) = u + vz$; then $g$ is a conformal mapping that maps $i$ to $u+vi$ and keeps the upper half plane invariant. Using the harmonic measure on the $x$-axis of a BM starting from $i$, and arguing as in parts a)–c), we get the harmonic measure on the $x$-axis of a BM starting from $u+iv$.

9.12

Proof. We consider the diffusion $dY_t = \begin{pmatrix} dX_t \\ q(X_t)dt \end{pmatrix}$; the generator of $Y$ is $A\phi(y_1,y_2) = L_{y_1}\phi(y) + q(y_1)\frac{\partial}{\partial y_2}\phi(y)$ for any $\phi\in C_0^2(\mathbb{R}^n\times\mathbb{R})$, where $L$ is the generator of $X$. Choose a sequence $(U_n)_{n\ge1}$ of open sets with $U_n\subset\subset D$ and $U_n\uparrow D$, and define $\tau_n = \inf\{t > 0 : Y_t \notin U_n\times(-n,n)\}$. Then for a bounded solution $h$, Dynkin's formula applied to $h(y_1)e^{-y_2}$ (more precisely, to a $C_0^2$-function which coincides with $h(y_1)e^{-y_2}$ on $U_n\times(-n,n)$) yields
$$E^y\big[h(Y_{\tau_n\wedge n}^{(1)})e^{-Y_{\tau_n\wedge n}^{(2)}}\big] = h(y_1)e^{-y_2} - E^y\Big[\int_0^{\tau_n\wedge n}g(Y_s^{(1)})e^{-Y_s^{(2)}}ds\Big],$$
since $A(h(y_1)e^{-y_2}) = -g(y_1)e^{-y_2}$. Let $y_2 = 0$; we have
$$h(y_1) = E^{(y_1,0)}\big[h(Y_{\tau_n\wedge n}^{(1)})e^{-Y_{\tau_n\wedge n}^{(2)}}\big] + E^{(y_1,0)}\Big[\int_0^{\tau_n\wedge n}g(Y_s^{(1)})e^{-Y_s^{(2)}}ds\Big].$$
Note $Y_t^{(2)} = y_2 + \int_0^t q(X_s)ds \ge y_2$. Letting $n\to\infty$, by the dominated convergence theorem,
$$h(y_1) = E^{(y_1,0)}\big[h(Y_{\tau_D}^{(1)})e^{-Y_{\tau_D}^{(2)}}\big] + E^{(y_1,0)}\Big[\int_0^{\tau_D}g(Y_s^{(1)})e^{-Y_s^{(2)}}ds\Big] = E\big[e^{-\int_0^{\tau_D}q(X_s)ds}\phi(X_{\tau_D}^{y_1})\big] + E\Big[\int_0^{\tau_D}g(X_s^{y_1})e^{-\int_0^s q(X_u^{y_1})du}ds\Big].$$
Hence
$$h(x) = E^x\big[e^{-\int_0^{\tau_D}q(X_s)ds}\phi(X_{\tau_D})\big] + E^x\Big[\int_0^{\tau_D}g(X_s)e^{-\int_0^s q(X_u)du}ds\Big].$$

Remark: An important application of this result is the case $g = 0$, $\phi = 1$ and $q$ a constant: the Laplace transform of the first exit time, $E^x[e^{-q\tau_D}]$, is the solution of $Ah(x) - qh(x) = 0$ on $D$, $\lim_{x\to y}h(x) = 1$ for $y\in\partial D$. In the one-dimensional case, the ODE can be solved by separation of variables and gives an explicit formula for $E^x[e^{-q\tau_D}]$. For details, see Exercise 9.15 and Durrett [3], page 170.

9.13 a)

Proof. $w(x)$ solves the ODE
$$\mu w'(x) + \frac{\sigma^2}{2}w''(x) = -g(x),\quad a < x < b;\qquad w(x) = \phi(x),\quad x \in \{a,b\}.$$
The first equation gives $w''(x) + \frac{2\mu}{\sigma^2}w'(x) = -\frac{2g(x)}{\sigma^2}$. Multiplying both sides by $e^{\frac{2\mu}{\sigma^2}x}$, we get
$$\Big(e^{\frac{2\mu}{\sigma^2}x}w'(x)\Big)' = -e^{\frac{2\mu}{\sigma^2}x}\frac{2g(x)}{\sigma^2}.$$
So $w'(x) = C_1e^{-\frac{2\mu}{\sigma^2}x} - e^{-\frac{2\mu}{\sigma^2}x}\int_a^x e^{\frac{2\mu}{\sigma^2}\xi}\frac{2g(\xi)}{\sigma^2}d\xi$, hence
$$w(x) = C_2 - \frac{\sigma^2}{2\mu}C_1e^{-\frac{2\mu}{\sigma^2}x} - \int_a^x e^{-\frac{2\mu}{\sigma^2}y}\int_a^y e^{\frac{2\mu}{\sigma^2}\xi}\frac{2g(\xi)}{\sigma^2}d\xi\,dy.$$
The boundary conditions give
$$\phi(a) = C_2 - \frac{\sigma^2}{2\mu}C_1e^{-\frac{2\mu}{\sigma^2}a},\qquad \phi(b) = C_2 - \frac{\sigma^2}{2\mu}C_1e^{-\frac{2\mu}{\sigma^2}b} - \int_a^b e^{-\frac{2\mu}{\sigma^2}y}\int_a^y e^{\frac{2\mu}{\sigma^2}\xi}\frac{2g(\xi)}{\sigma^2}d\xi\,dy.$$
Let $\frac{2\mu}{\sigma^2} = \theta$ and solve the above equations; we have
$$C_1 = \frac{\theta[\phi(b)-\phi(a)] + \frac{\theta^2}{\mu}\int_a^b\int_a^y e^{\theta(\xi-y)}g(\xi)d\xi\,dy}{e^{-\theta a} - e^{-\theta b}},\qquad C_2 = \phi(a) + \frac{C_1}{\theta}e^{-\theta a}.$$

b)

Proof. $\int_a^b g(y)G(x,dy) = E^x[\int_0^{\tau_D}g(X_t)dt] = w(x)$ in part a), with $\phi\equiv 0$. In this case,
$$C_1 = \frac{\theta^2}{\mu(e^{-\theta a}-e^{-\theta b})}\int_a^b\int_a^y e^{\theta(\xi-y)}g(\xi)d\xi\,dy = \frac{\theta^2}{\mu(e^{-\theta a}-e^{-\theta b})}\int_a^b e^{\theta\xi}g(\xi)\frac{e^{-\theta\xi}-e^{-\theta b}}{\theta}d\xi = \frac{\theta}{\mu(e^{-\theta a}-e^{-\theta b})}\int_a^b g(\xi)\big(1-e^{\theta(\xi-b)}\big)d\xi,$$
and
$$C_2 = \int_a^b g(\xi)\frac{e^{-\theta a}\big(1-e^{\theta(\xi-b)}\big)}{\mu(e^{-\theta a}-e^{-\theta b})}d\xi. \quad (2)$$
So
$$\int_a^b g(y)G(x,dy) = C_2 - \frac{C_1}{\theta}e^{-\theta x} - \int_a^x e^{-\theta y}\int_a^y e^{\theta\xi}\frac{2g(\xi)}{\sigma^2}d\xi\,dy = \frac{C_1}{\theta}(e^{-\theta a}-e^{-\theta x}) - \frac{1}{\mu}\int_a^x g(\xi)\big(1-e^{\theta(\xi-x)}\big)d\xi,$$
which identifies the density of $G(x,dy)$:
$$G(x,d\xi) = \Big[\frac{(e^{-\theta a}-e^{-\theta x})\big(1-e^{\theta(\xi-b)}\big)}{\mu(e^{-\theta a}-e^{-\theta b})} - \frac{1-e^{\theta(\xi-x)}}{\mu}1_{\{a<\xi<x\}}\Big]d\xi.$$
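The Green-function density just derived can be cross-checked by simulation. The following is a minimal sketch (Python/numpy; parameters, seed and grids are arbitrary choices, and the Euler exit detection carries a small $O(\sqrt{dt})$ bias): it estimates $E^x[\int_0^\tau g(X_t)dt]$ for $dX = \mu\,dt + \sigma\,dB$ on $(a,b)$ and compares with quadrature against $G(x,d\xi)$. With $g \equiv 1$ this is just $E^x[\tau]$.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, a, b, x0 = 1.0, 1.0, 0.0, 1.0, 0.5
g = lambda y: np.ones_like(y)            # g = 1 recovers w(x) = E^x[tau]; any g >= 0 works
theta = 2 * mu / sigma**2
dt, n_paths = 1e-4, 20_000

# Monte Carlo: accumulate int_0^tau g(X_t) dt along Euler paths until exit from (a, b).
X = np.full(n_paths, x0)
acc = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for _ in range(1_000_000):               # safety cap
    if not alive.any():
        break
    acc[alive] += g(X[alive]) * dt
    X[alive] += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
    alive &= (X > a) & (X < b)
mc = acc.mean()

# Quadrature of the Green-function density from 9.13 b) on a midpoint grid.
n_grid = 200_000
dx = (b - a) / n_grid
xi = a + (np.arange(n_grid) + 0.5) * dx
dens = (np.exp(-theta * a) - np.exp(-theta * x0)) * (1 - np.exp(theta * (xi - b))) \
       / (mu * (np.exp(-theta * a) - np.exp(-theta * b))) \
       - (1 - np.exp(theta * (xi - x0))) / mu * (xi < x0)
exact = np.sum(g(xi) * dens) * dx
print(f"E^x[int_0^tau g(X)dt]: MC {mc:.4f} vs Green formula {exact:.4f}")
```

For these parameters both values come out near $0.231$, which also matches solving $\mu w' + \frac{\sigma^2}{2}w'' = -1$, $w(a) = w(b) = 0$ directly.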
− eθ(ξ−b) )dξ, −θa µ(e − e−θb ) and b C2 = g(ξ) a e−θa (1 − eθ(ξ−b) )dξ − e−θb ) µ(e−θa 27 (2) So b g(y)G(x, dy) a C2 − C1 e−θx − θ = C1 (e−θa − e−θx ) − θ b = a g(ξ) a b a a θ 1{a : Xt ∈ K}) for all compacts K ⊂ D and all x ∈ D, then −Rα (α ≥ 0) is the inverse of characteristic operator A on Cc2 (D): (A − α)(Rα f ) = Rα (A − α)f = −f, ∀f ∈ Cc2 (D) Note when D = Rn , we get back to the resolvent equation in B Application of diffusions to obtaining formulas The following is a table of computation tricks used to obtain formulas: BM w/o drift general diffusion, esp BM with drift Distribution of first passage time reflection principle Girsanovs theorme Exit probability P (τa < τb ), P (τb < τa ) BM as a martingale Dynkins formula / boundary value problems Expectation of exit time Wt2 − t is a martingale Dynkins formula / boundary value problems Laplace transform of first passage time exponential martingale Girsanovs theorem Laplace transform of first exit time exponential martingale FK formula for boundary value problems 33 ... L Gong Introduction to stochastic differential equations Second edition Peking University Press, Beijing, 1995 [6] S W He, J G Wang and J A Yan Semimartingale theory and stochastic calculus Science... stochastic calculus Science Press, Beijing; CRC Press, Boca Raton, 1992 [7] B Øksendal Stochastic differential equations: An introduction with applications Sixth edition SpringerVerlag, Berlin, 2003... E x [u(BτD )] = u(y)µxD (dy) = ∂D u(y)σ(dy) ∂D c) Proof See, for example, Evans: Partial Differential Equations, page 26 7.8 a) Proof {τ1 ∧ τ2 ≤ t} = {τ1 ≤ t} ∪ {τ2 ≤ t} ∈ Nt And since {τi ≥