Stochastic Control, Part 16: Mean-variance hedging under partial information

Besides, the optimal filtered wealth process $\widehat X^{x,\pi^*}_t = x+\int_0^t\pi^*_u\,d\widehat S_u$ is a solution of the linear equation

$$\widehat X^*_t = x-\int_0^t\frac{\rho^2_u\psi_u(2)+\widehat\lambda_uY_u(2)}{1-\rho^2_u+\rho^2_uY_u(2)}\,\widehat X^*_u\,d\widehat S_u+\int_0^t\frac{\psi_u(1)\rho^2_u+\widehat\lambda_uY_u(1)-\widetilde h_u}{1-\rho^2_u+\rho^2_uY_u(2)}\,d\widehat S_u.\tag{4.7}$$

Proof. Similarly to the case of complete information, one can show that the optimal strategy exists and that $V^H(t,x)$ is a square trinomial of the form (4.3) (see, e.g., (Mania & Tevzadze, 2003)). More precisely, the space of stochastic integrals $J^2_{t,T}(G)=\{\int_t^T\pi_u\,dS_u:\pi\in\Pi(G)\}$ is closed by Proposition 2.1, since $\langle M\rangle$ is $G$-predictable. Hence there exists an optimal strategy $\pi^*(t,x)\in\Pi(G)$ and $U^H(t,x)=E\big[|H-x-\int_t^T\pi^*_u(t,x)\,dS_u|^2\,\big|\,G_t\big]$. Since $\int_t^T\pi^*_u(t,x)\,dS_u$ coincides with the orthogonal projection of $H-x\in L^2$ onto the closed subspace of stochastic integrals, the optimal strategy is linear with respect to $x$, i.e. $\pi^*_u(t,x)=\pi^0_u(t)+x\pi^1_u(t)$. This implies that the value function $U^H(t,x)$ is a square trinomial. It follows from equality (3.14) that $V^H(t,x)$ is also a square trinomial and admits the representation (4.3).

Let us show that $V_t(0)$, $V_t(1)$, and $V_t(2)$ satisfy the system (4.4)–(4.6). It is evident that

$$V_t(0)=V^H(t,0)=\operatorname*{ess\,inf}_{\pi\in\Pi(G)}E\Big[\Big(\int_t^T\pi_u\,d\widehat S_u-\widehat H_T\Big)^2+\int_t^T\big[\pi^2_u(1-\rho^2_u)+2\pi_u\widetilde h_u\big]\,d\langle M\rangle_u\,\Big|\,G_t\Big]\tag{4.8}$$

and

$$V_t(2)=V^0(t,1)=\operatorname*{ess\,inf}_{\pi\in\Pi(G)}E\Big[\Big(1+\int_t^T\pi_u\,d\widehat S_u\Big)^2+\int_t^T\pi^2_u(1-\rho^2_u)\,d\langle M\rangle_u\,\Big|\,G_t\Big].\tag{4.9}$$

Therefore, it follows from the optimality principle (taking $\pi=0$) that $V_t(0)$ and $V_t(2)$ are RCLL $G$-submartingales and

$$V_t(2)\le E\big(V_T(2)\,|\,G_t\big)\le1,\qquad V_t(0)\le E\big(E^2(H|G_T)\,|\,G_t\big)\le E\big(H^2\,|\,G_t\big).$$

Since

$$V_t(1)=\tfrac12\big(V_t(0)+V_t(2)-V^H(t,1)\big),\tag{4.10}$$

the process $V_t(1)$ is also a special semimartingale, and since $V_t(0)-2V_t(1)x+V_t(2)x^2=V^H(t,x)\ge0$ for all $x\in\mathbb R$, we have $V^2_t(1)\le V_t(0)V_t(2)$; hence $V^2_t(1)\le E(H^2|G_t)$. Expressions (4.8), (4.9), and (3.13) imply that $V_T(0)=E^2(H|G_T)$, $V_T(2)=1$, and $V^H(T,x)=\big(x-E(H|G_T)\big)^2$. Therefore from (4.10) we have $V_T(1)=E(H|G_T)$, and $V(0)$, $V(1)$, $V(2)$ satisfy the boundary conditions.
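As a loose finite-dimensional analogue of the projection argument above (not part of the chapter's argument): if the stochastic integrals are replaced by a finite family of payoff vectors, mean-variance hedging becomes ordinary least squares, the optimal strategy is affine in the initial capital $x$, and the value is a square trinomial in $x$. A minimal sketch with synthetic data (all names and numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios, n_steps = 5000, 4

# Toy discrete market: columns of A stand in for the increments of S.
A = rng.normal(size=(n_scenarios, n_steps))
H = rng.normal(size=n_scenarios) + A @ rng.normal(size=n_steps)  # toy claim

def optimal_strategy(x):
    # Least-squares projection of H - x onto the span of the columns of A,
    # i.e. minimize E[(H - x - A @ pi)^2] over pi, as in the proof above.
    pi, *_ = np.linalg.lstsq(A, H - x, rcond=None)
    return pi

pi0 = optimal_strategy(0.0)
pi1 = optimal_strategy(1.0) - pi0      # slope in x

# Affinity check: pi*(x) = pi0 + x * pi1 for any x.
x = 3.7
assert np.allclose(optimal_strategy(x), pi0 + x * pi1, atol=1e-8)

# The value function is then a quadratic (square trinomial) in x.
value = lambda x: np.mean((H - x - A @ optimal_strategy(x)) ** 2)
print(value(0.0), value(1.0), value(3.7))
```

Here `pi1` plays the role of $\pi^1(t)$: it depends only on the "market" columns, not on the claim.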
Thus the coefficients $V_t(i)$, $i=0,1,2$, are special semimartingales, and they admit the decomposition

$$V_t(i)=V_0(i)+A_t(i)+\int_0^t\varphi_s(i)\,d\widehat M_s+m_t(i),\qquad i=0,1,2,\tag{4.11}$$

where $m(0)$, $m(1)$, $m(2)$ are $G$-local martingales strongly orthogonal to $\widehat M$, and $A(0)$, $A(1)$, $A(2)$ are $G$-predictable processes of finite variation. There exists an increasing continuous $G$-predictable process $K$ such that

$$\langle M\rangle_t=\int_0^t\nu_u\,dK_u,\qquad A_t(i)=\int_0^ta_u(i)\,dK_u,\quad i=0,1,2,$$

where $\nu$ and $a(i)$, $i=0,1,2$, are $G$-predictable processes.

Let $\widehat X^{x,\pi}_{s,t}\equiv x+\int_s^t\pi_u\,d\widehat S_u$ and

$$Y^{x,\pi}_{s,t}\equiv V^H\big(t,\widehat X^{x,\pi}_{s,t}\big)+\int_s^t\big[\pi^2_u(1-\rho^2_u)+2\pi_u\widetilde h_u\big]\,d\langle M\rangle_u.$$

Then, by using (4.3), (4.11), and the Itô formula, for any $t\ge s$ we have

$$\big(\widehat X^{x,\pi}_{s,t}\big)^2=x^2+\int_s^t\big[2\pi_u\widehat\lambda_u\widehat X^{x,\pi}_{s,u}+\pi^2_u\rho^2_u\big]\,d\langle M\rangle_u+2\int_s^t\pi_u\widehat X^{x,\pi}_{s,u}\,d\widehat M_u\tag{4.12}$$

and

$$\begin{aligned}Y^{x,\pi}_{s,t}-V^H(s,x)=\ &\int_s^t\Big[\big(\widehat X^{x,\pi}_{s,u}\big)^2a_u(2)-2\widehat X^{x,\pi}_{s,u}a_u(1)+a_u(0)\Big]\,dK_u\\&+\int_s^t\Big[\pi^2_u\big(1-\rho^2_u+\rho^2_uV_{u-}(2)\big)+2\pi_u\widehat X^{x,\pi}_{s,u}\big(\widehat\lambda_uV_{u-}(2)+\varphi_u(2)\rho^2_u\big)\\&\qquad\ -2\pi_u\big(V_{u-}(1)\widehat\lambda_u+\varphi_u(1)\rho^2_u-\widetilde h_u\big)\Big]\nu_u\,dK_u+m_t-m_s,\end{aligned}\tag{4.13}$$

where $m$ is a local martingale. Let

$$G(\pi,x)=G(\omega,u,\pi,x)=\pi^2\big(1-\rho^2_u+\rho^2_uV_{u-}(2)\big)+2\pi x\big(\widehat\lambda_uV_{u-}(2)+\varphi_u(2)\rho^2_u\big)-2\pi\big(V_{u-}(1)\widehat\lambda_u+\varphi_u(1)\rho^2_u-\widetilde h_u\big).$$

It follows from the optimality principle that for each $\pi\in\Pi(G)$ the process

$$\int_s^t\Big[\big(\widehat X^{x,\pi}_{s,u}\big)^2a_u(2)-2\widehat X^{x,\pi}_{s,u}a_u(1)+a_u(0)\Big]\,dK_u+\int_s^tG\big(\pi_u,\widehat X^{x,\pi}_{s,u}\big)\nu_u\,dK_u\tag{4.14}$$

is increasing in $t$ on $s\le t\le T$ for any $s$, and for the optimal strategy $\pi^*$ we have the equality

$$\int_s^t\Big[\big(\widehat X^{x,\pi^*}_{s,u}\big)^2a_u(2)-2\widehat X^{x,\pi^*}_{s,u}a_u(1)+a_u(0)\Big]\,dK_u=-\int_s^tG\big(\pi^*_u,\widehat X^{x,\pi^*}_{s,u}\big)\nu_u\,dK_u.\tag{4.15}$$

Since $\nu_u\,dK_u=d\langle M\rangle_u$ is continuous, without loss of generality one can assume that the process $K$ is continuous (see (Mania & Tevzadze, 2003) for details). Therefore, by taking in (4.14) $\tau_s(\varepsilon)=\inf\{t\ge s:K_t-K_s\ge\varepsilon\}$ instead of $t$, we have that for any $\varepsilon>0$ and $s\ge0$

$$\frac1\varepsilon\int_s^{\tau_s(\varepsilon)}\Big[\big(\widehat X^{x,\pi}_{s,u}\big)^2a_u(2)-2\widehat X^{x,\pi}_{s,u}a_u(1)+a_u(0)\Big]\,dK_u\ge-\frac1\varepsilon\int_s^{\tau_s(\varepsilon)}G\big(\pi_u,\widehat X^{x,\pi}_{s,u}\big)\nu_u\,dK_u.\tag{4.16}$$

By passing to the limit in (4.16) as $\varepsilon\to0$, from Proposition B of (Mania & Tevzadze, 2003) we obtain

$$x^2a_u(2)-2xa_u(1)+a_u(0)\ge-G(\pi_u,x)\nu_u,\qquad\mu^K\text{-a.e.},$$
for all $\pi\in\Pi(G)$. Similarly, from (4.15) we have that, $\mu^K$-a.e.,

$$x^2a_u(2)-2xa_u(1)+a_u(0)=-G(\pi^*_u,x)\nu_u,$$

and hence

$$x^2a_u(2)-2xa_u(1)+a_u(0)=-\nu_u\operatorname*{ess\,inf}_{\pi\in\Pi(G)}G(\pi_u,x).\tag{4.17}$$

The infimum in (4.17) is attained for the strategy

$$\hat\pi_t=\frac{V_t(1)\widehat\lambda_t+\varphi_t(1)\rho^2_t-\widetilde h_t-x\big(V_t(2)\widehat\lambda_t+\varphi_t(2)\rho^2_t\big)}{1-\rho^2_t+\rho^2_tV_t(2)}.\tag{4.18}$$

From here we can conclude that

$$\operatorname*{ess\,inf}_{\pi\in\Pi(G)}G(\pi_t,x)\ge G(\hat\pi_t,x)=-\frac{\Big(V_t(1)\widehat\lambda_t+\varphi_t(1)\rho^2_t-\widetilde h_t-x\big(V_t(2)\widehat\lambda_t+\varphi_t(2)\rho^2_t\big)\Big)^2}{1-\rho^2_t+\rho^2_tV_t(2)}.\tag{4.19}$$

Let $\pi^n_t=I_{[0,\tau_n[}(t)\hat\pi_t$, where $\tau_n=\inf\{t:|V_t(1)|\ge n\}$. It follows from Lemmas A.2, 3.1, and A.3 that $\pi^n\in\Pi(G)$ for every $n\ge1$, and hence $\operatorname*{ess\,inf}_{\pi\in\Pi(G)}G(\pi_t,x)\le G(\pi^n_t,x)$ for all $n\ge1$. Therefore

$$\operatorname*{ess\,inf}_{\pi\in\Pi(G)}G(\pi_t,x)\le\lim_{n\to\infty}G(\pi^n_t,x)=G(\hat\pi_t,x).\tag{4.20}$$
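Since $G(\cdot,x)$ is just a quadratic in $\pi$ for fixed $x$, the closed form (4.18)–(4.19) can be sanity-checked numerically. A minimal sketch with arbitrary illustrative values for $V_t(1)$, $V_t(2)$, $\varphi_t(1)$, $\varphi_t(2)$, $\widehat\lambda_t$, $\widetilde h_t$, $\rho_t$, $x$ (none of them come from the chapter):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical values at a fixed time t, for the check only
V1, V2 = 0.4, 0.8          # V_t(1), V_t(2)
phi1, phi2 = 0.1, -0.2     # phi_t(1), phi_t(2)
lam, h_tilde, rho, x = 0.5, 0.05, 0.7, 1.3

def G(pi):
    # G(pi, x) as defined before (4.14)
    return (pi**2 * (1 - rho**2 + rho**2 * V2)
            + 2 * pi * x * (lam * V2 + phi2 * rho**2)
            - 2 * pi * (V1 * lam + phi1 * rho**2 - h_tilde))

# Closed-form minimizer (4.18) and minimum value (4.19)
num = V1 * lam + phi1 * rho**2 - h_tilde - x * (V2 * lam + phi2 * rho**2)
den = 1 - rho**2 + rho**2 * V2
pi_hat, G_min = num / den, -num**2 / den

res = minimize_scalar(G)                     # numerical minimization of the quadratic
assert np.isclose(res.x, pi_hat, atol=1e-6)  # agrees with (4.18)
assert np.isclose(G(pi_hat), G_min)          # agrees with (4.19)
print(pi_hat, G_min)
```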
Thus (4.17), (4.19), and (4.20) imply that

$$x^2a_t(2)-2xa_t(1)+a_t(0)=\nu_t\,\frac{\Big(V_t(1)\widehat\lambda_t+\varphi_t(1)\rho^2_t-\widetilde h_t-x\big(V_t(2)\widehat\lambda_t+\varphi_t(2)\rho^2_t\big)\Big)^2}{1-\rho^2_t+\rho^2_tV_t(2)},\qquad\mu^K\text{-a.e.},\tag{4.21}$$

and by equating the coefficients of the square trinomials in (4.21) (and integrating with respect to $dK$) we obtain

$$A_t(2)=\int_0^t\frac{\big(\varphi_s(2)\rho^2_s+\widehat\lambda_sV_s(2)\big)^2}{1-\rho^2_s+\rho^2_sV_s(2)}\,d\langle M\rangle_s,\tag{4.22}$$

$$A_t(1)=\int_0^t\frac{\big(\varphi_s(2)\rho^2_s+\widehat\lambda_sV_s(2)\big)\big(\varphi_s(1)\rho^2_s+\widehat\lambda_sV_s(1)-\widetilde h_s\big)}{1-\rho^2_s+\rho^2_sV_s(2)}\,d\langle M\rangle_s,\tag{4.23}$$

$$A_t(0)=\int_0^t\frac{\big(\varphi_s(1)\rho^2_s+\widehat\lambda_sV_s(1)-\widetilde h_s\big)^2}{1-\rho^2_s+\rho^2_sV_s(2)}\,d\langle M\rangle_s,\tag{4.24}$$

which, together with (4.11), implies that the triples $(V(i),\varphi(i),m(i))$, $i=0,1,2$, satisfy the system (4.4)–(4.6). Note that $A(0)$ and $A(2)$ are integrable increasing processes, and relations (4.22) and (4.24) imply that the strategy $\hat\pi$ defined by (4.18) belongs to the class $\Pi(G)$.

Let us show now that if the strategy $\pi^*\in\Pi(G)$ is optimal, then the corresponding filtered wealth process $\widehat X^{\pi^*}_t=x+\int_0^t\pi^*_u\,d\widehat S_u$ is a solution of (4.7). By the optimality principle the process

$$Y^{\pi^*}_t=V^H\big(t,\widehat X^{\pi^*}_t\big)+\int_0^t\big[(\pi^*_u)^2(1-\rho^2_u)+2\pi^*_u\widetilde h_u\big]\,d\langle M\rangle_u$$

is a martingale. By using the Itô formula we have

$$Y^{\pi^*}_t=\int_0^t\big(\widehat X^{\pi^*}_u\big)^2\,dA_u(2)-2\int_0^t\widehat X^{\pi^*}_u\,dA_u(1)+A_t(0)+\int_0^tG\big(\pi^*_u,\widehat X^{\pi^*}_u\big)\,d\langle M\rangle_u+N_t,$$

where $N$ is a martingale. Therefore, by applying equalities (4.22), (4.23), and (4.24), we obtain

$$Y^{\pi^*}_t=\int_0^t\bigg(\pi^*_u-\frac{V_u(1)\widehat\lambda_u+\varphi_u(1)\rho^2_u-\widetilde h_u}{1-\rho^2_u+\rho^2_uV_u(2)}+\widehat X^{\pi^*}_u\,\frac{V_u(2)\widehat\lambda_u+\varphi_u(2)\rho^2_u}{1-\rho^2_u+\rho^2_uV_u(2)}\bigg)^2\big(1-\rho^2_u+\rho^2_uV_u(2)\big)\,d\langle M\rangle_u+N_t,$$

which implies that, $\mu^{\langle M\rangle}$-a.e.,

$$\pi^*_u=\frac{V_u(1)\widehat\lambda_u+\varphi_u(1)\rho^2_u-\widetilde h_u}{1-\rho^2_u+\rho^2_uV_u(2)}-\widehat X^{\pi^*}_u\,\frac{V_u(2)\widehat\lambda_u+\varphi_u(2)\rho^2_u}{1-\rho^2_u+\rho^2_uV_u(2)}.$$

By integrating both parts of this equality with respect to $d\widehat S$ (and then adding $x$ to both parts), we obtain that $\widehat X^{\pi^*}$ satisfies (4.7). □

We shall prove the uniqueness of the solution of the system (4.4)–(4.6) under the following condition (D$^*$), which is stronger than condition (D). Assume that

$$(\mathrm{D}^*)\qquad\int_0^T\frac{\widehat\lambda^2_u}{\rho^2_u}\,d\langle M\rangle_u\le C.$$

Since $\rho^2\le1$ (Lemma A.1), it follows from (D$^*$) that the mean-variance tradeoff of $S$ is bounded, i.e. $\int_0^T\widehat\lambda^2_u\,d\langle M\rangle_u\le C$, which implies (see, e.g., (Kazamaki, 1994)) that the minimal martingale measure for $S$ exists and satisfies the reverse Hölder condition $R_2(P)$. So, condition (D$^*$) implies condition (D). Besides, it follows from condition (D$^*$) that the minimal martingale measure $\widehat Q^{\min}$ for $\widehat S$,

$$d\widehat Q^{\min}=\mathcal E_T\Big(-\frac{\widehat\lambda}{\rho^2}\cdot\widehat M\Big),$$
also exists and satisfies the reverse Hölder condition. Indeed, condition (D$^*$) implies that $\mathcal E_t\big(-2\frac{\widehat\lambda}{\rho^2}\cdot\widehat M\big)$ is a $G$-martingale, and hence

$$E\Big[\mathcal E^2_{tT}\Big(-\frac{\widehat\lambda}{\rho^2}\cdot\widehat M\Big)\,\Big|\,G_t\Big]=E\Big[\mathcal E_{tT}\Big(-2\frac{\widehat\lambda}{\rho^2}\cdot\widehat M\Big)\,e^{\int_t^T\frac{\widehat\lambda^2_u}{\rho^2_u}\,d\langle M\rangle_u}\,\Big|\,G_t\Big]\le e^C.$$
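The equality used in this bound is the pathwise identity $\mathcal E^2_t(N)=\mathcal E_t(2N)\,e^{\langle N\rangle_t}$ for a continuous local martingale $N$, which follows directly from $\mathcal E_t(N)=e^{N_t-\frac12\langle N\rangle_t}$. A quick simulation check of this identity, for a toy $N=\int\theta\,dw$ with a made-up deterministic $\theta$ (not taken from the chapter), is sketched below.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 2000, 500, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
theta = 0.8 + 0.3 * np.sin(2 * np.pi * t[:-1])        # made-up integrand

dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
N = np.cumsum(theta * dw, axis=1)                      # N_t = int_0^t theta dw
qv = np.cumsum(theta**2 * dt) * np.ones((n_paths, 1))  # <N>_t (deterministic here)

E_N = np.exp(N - 0.5 * qv)       # stochastic exponential of N
E_2N = np.exp(2 * N - 2 * qv)    # stochastic exponential of 2N

# Pathwise identity E(N)^2 = E(2N) * exp(<N>)
assert np.allclose(E_N**2, E_2N * np.exp(qv), rtol=1e-10)
print("identity holds pathwise; E[E_T(N)^2] ~", (E_N[:, -1]**2).mean())
```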
Recall that a process $Z$ belongs to the class $D$ if the family of random variables $Z_\tau I_{(\tau\le T)}$, over all stopping times $\tau$, is uniformly integrable.

Theorem 4.2. Let conditions (A), (B), (C), and (D$^*$) be satisfied. If a triple $(Y(0),Y(1),Y(2))$, where $Y(0)\in D$, $Y^2(1)\in D$, and $c\le Y(2)\le C$ for some constants $0<c<C$, is a solution of the system (4.4)–(4.6), then such a solution is unique and coincides with the triple $(V(0),V(1),V(2))$.

Proof. Let $Y(2)$ be a bounded strictly positive solution of (4.4), and let $\int_0^t\psi_u(2)\,d\widehat M_u+L_t(2)$ be the martingale part of $Y(2)$. Since $Y(2)$ solves (4.4), it follows from the Itô formula that for any $\pi\in\Pi(G)$ the process

$$Y^\pi_t=Y_t(2)\Big(1+\int_s^t\pi_u\,d\widehat S_u\Big)^2+\int_s^t\pi^2_u(1-\rho^2_u)\,d\langle M\rangle_u,\qquad t\ge s,\tag{4.25}$$

is a local submartingale. Since $\pi\in\Pi(G)$, from Lemma A.1 and the Doob inequality we have

$$E\sup_{t\le T}\Big(1+\int_0^t\pi_u\,d\widehat S_u\Big)^2\le\mathrm{const}\,\Big(1+E\int_0^T\pi^2_u\rho^2_u\,d\langle M\rangle_u+E\Big(\int_0^T|\pi_u\widehat\lambda_u|\,d\langle M\rangle_u\Big)^2\Big)<\infty.\tag{4.26}$$

Therefore, bearing in mind that $Y(2)$ is bounded and $\pi\in\Pi(G)$, we obtain $E\big(\sup_{s\le u\le T}Y^\pi_u\big)^2<\infty$, which implies that $Y^\pi\in D$. Thus $Y^\pi$ is a submartingale (being a local submartingale of class $D$), and by the boundary condition $Y_T(2)=1$ we obtain

$$Y_s(2)\le E\Big[\Big(1+\int_s^T\pi_u\,d\widehat S_u\Big)^2+\int_s^T\pi^2_u(1-\rho^2_u)\,d\langle M\rangle_u\,\Big|\,G_s\Big]$$

for all $\pi\in\Pi(G)$, and hence

$$Y_t(2)\le V_t(2).\tag{4.27}$$

Let

$$\tilde\pi_t=-\frac{\widehat\lambda_tY_t(2)+\psi_t(2)\rho^2_t}{1-\rho^2_t+\rho^2_tY_t(2)}\,\mathcal E_t\Big(-\frac{\widehat\lambda Y(2)+\psi(2)\rho^2}{1-\rho^2+\rho^2Y(2)}\cdot\widehat S\Big).$$

Since $1+\int_0^t\tilde\pi_u\,d\widehat S_u=\mathcal E_t\big(-\frac{\widehat\lambda Y(2)+\psi(2)\rho^2}{1-\rho^2+\rho^2Y(2)}\cdot\widehat S\big)$, it follows from (4.4) and the Itô formula that the process $Y^{\tilde\pi}$ defined by (4.25) is a positive local martingale and hence a supermartingale. Therefore

$$Y_s(2)\ge E\Big[\Big(1+\int_s^T\tilde\pi_u\,d\widehat S_u\Big)^2+\int_s^T\tilde\pi^2_u(1-\rho^2_u)\,d\langle M\rangle_u\,\Big|\,G_s\Big].\tag{4.28}$$

Let us show that $\tilde\pi$ belongs to the class $\Pi(G)$. From (4.28) and (4.27) we have, for every $s\in[0,T]$,

$$E\Big[\Big(1+\int_s^T\tilde\pi_u\,d\widehat S_u\Big)^2+\int_s^T\tilde\pi^2_u(1-\rho^2_u)\,d\langle M\rangle_u\,\Big|\,G_s\Big]\le Y_s(2)\le V_s(2)\le1\tag{4.29}$$

and hence

$$E\Big(1+\int_0^T\tilde\pi_u\,d\widehat S_u\Big)^2\le1,\tag{4.30}$$

$$E\int_0^T\tilde\pi^2_u(1-\rho^2_u)\,d\langle M\rangle_u\le1.\tag{4.31}$$

By (D$^*$) the minimal martingale measure $\widehat Q^{\min}$ for $\widehat S$ satisfies the reverse Hölder condition, and hence all conditions of Proposition 2.1 are satisfied. Therefore the norm

$$E\Big(\int_0^T\tilde\pi^2_s\rho^2_s\,d\langle M\rangle_s\Big)+E\Big(\int_0^T|\tilde\pi_s\widehat\lambda_s|\,d\langle M\rangle_s\Big)^2$$

is estimated by $E\big(1+\int_0^T\tilde\pi_u\,d\widehat S_u\big)^2$, and hence

$$E\int_0^T\tilde\pi^2_u\rho^2_u\,d\langle M\rangle_u<\infty,\qquad E\Big(\int_0^T|\tilde\pi_s\widehat\lambda_s|\,d\langle M\rangle_s\Big)^2<\infty.$$

It follows from (4.31) and the latter inequality that $\tilde\pi\in\Pi(G)$, and from (4.28) we obtain $Y_t(2)\ge V_t(2)$, which together with (4.27) gives the equality $Y_t(2)=V_t(2)$. Thus $V(2)$ is the unique bounded strictly positive solution of (4.4). Besides,

$$\int_0^t\psi_u(2)\,d\widehat M_u=\int_0^t\varphi_u(2)\,d\widehat M_u,\qquad L_t(2)=m_t(2)\tag{4.32}$$

for all $t$, $P$-a.s.

Let $Y(1)$ be a solution of (4.5) such that $Y^2(1)\in D$. By the Itô formula the process

$$R_t=Y_t(1)\,\mathcal E_t\Big(-\frac{\varphi(2)\rho^2+\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)+\int_0^t\mathcal E_u\Big(-\frac{\varphi(2)\rho^2+\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)\,\frac{\big(\varphi_u(2)\rho^2_u+\widehat\lambda_uV_u(2)\big)\widetilde h_u}{1-\rho^2_u+\rho^2_uV_u(2)}\,d\langle M\rangle_u\tag{4.33}$$

is a local martingale. Let us show that $R_t$ is a martingale.
As was already shown, the strategy

$$\tilde\pi_u=-\frac{\psi_u(2)\rho^2_u+\widehat\lambda_uY_u(2)}{1-\rho^2_u+\rho^2_uY_u(2)}\,\mathcal E_u\Big(-\frac{\psi(2)\rho^2+\widehat\lambda Y(2)}{1-\rho^2+\rho^2Y(2)}\cdot\widehat S\Big)$$

belongs to the class $\Pi(G)$. Therefore (see (4.26)),

$$E\sup_{t\le T}\mathcal E^2_t\Big(-\frac{\psi(2)\rho^2+\widehat\lambda Y(2)}{1-\rho^2+\rho^2Y(2)}\cdot\widehat S\Big)=E\sup_{t\le T}\Big(1+\int_0^t\tilde\pi_u\,d\widehat S_u\Big)^2<\infty,\tag{4.34}$$

and hence

$$Y_t(1)\,\mathcal E_t\Big(-\frac{\varphi(2)\rho^2+\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)\in D.$$

On the other hand, the second term of (4.33) is a process of integrable variation, since $\tilde\pi\in\Pi(G)$ and $\widetilde h\in\Pi(G)$ (see Lemma A.2) imply that

$$E\int_0^T\bigg|\mathcal E_u\Big(-\frac{\varphi(2)\rho^2+\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)\,\frac{\big(\varphi_u(2)\rho^2_u+\widehat\lambda_uV_u(2)\big)\widetilde h_u}{1-\rho^2_u+\rho^2_uV_u(2)}\bigg|\,d\langle M\rangle_u=E\int_0^T|\tilde\pi_u\widetilde h_u|\,d\langle M\rangle_u\le E^{1/2}\int_0^T\tilde\pi^2_u\,d\langle M\rangle_u\;E^{1/2}\int_0^T\widetilde h^2_u\,d\langle M\rangle_u<\infty.$$

Therefore, the process $R_t$ belongs to the class $D$, and hence it is a true martingale.
By using the martingale property and the boundary condition we obtain

$$Y_t(1)=E\bigg[\widehat H_T\,\mathcal E_{tT}\Big(-\frac{\varphi(2)\rho^2+\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)+\int_t^T\mathcal E_{tu}\Big(-\frac{\varphi(2)\rho^2+\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)\,\frac{\big(\varphi_u(2)\rho^2_u+\widehat\lambda_uV_u(2)\big)\widetilde h_u}{1-\rho^2_u+\rho^2_uV_u(2)}\,d\langle M\rangle_u\,\bigg|\,G_t\bigg].\tag{4.35}$$

Thus, any solution of (4.5) is expressed explicitly in terms of $(V(2),\varphi(2))$ in the form (4.35). Hence the solution of (4.5) is unique, and it coincides with $V_t(1)$. It is evident that the solution of (4.6) is also unique. □

Remark 4.1. In the case $F^S\subseteq G$ we have $\rho_t=1$, $\widetilde h_t=0$, and $\widehat S_t=S_t$, and (4.7) takes the form

$$\widehat X^*_t=x-\int_0^t\frac{\psi_u(2)+\widehat\lambda_uY_u(2)}{Y_u(2)}\,\widehat X^*_u\,dS_u+\int_0^t\frac{\psi_u(1)+\widehat\lambda_uY_u(1)}{Y_u(2)}\,dS_u.$$

Corollary 4.1. In addition to conditions (A)–(C), assume that $\rho$ is a constant and the mean-variance tradeoff $\langle\widehat\lambda\cdot M\rangle_T$ is deterministic. Then the solution of (4.4) is the triple $(Y(2),\psi(2),L(2))$ with $\psi(2)=0$, $L(2)=0$, and

$$Y_t(2)=V_t(2)=\nu\big(\rho,\,1-\rho^2+\langle\widehat\lambda\cdot M\rangle_T-\langle\widehat\lambda\cdot M\rangle_t\big),\tag{4.36}$$

where $\nu(\rho,\alpha)$ is the root of the equation

$$\frac{1-\rho^2}{x}-\rho^2\ln x=\alpha.\tag{4.37}$$

Besides,

$$Y_t(1)=E\bigg[H\,\mathcal E_{tT}\Big(-\frac{\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)+\int_t^T\mathcal E_{tu}\Big(-\frac{\widehat\lambda V(2)}{1-\rho^2+\rho^2V(2)}\cdot\widehat S\Big)\,\frac{\widehat\lambda_uV_u(2)\widetilde h_u}{1-\rho^2+\rho^2V_u(2)}\,d\langle M\rangle_u\,\bigg|\,G_t\bigg]\tag{4.38}$$

uniquely solves (4.5), and the optimal filtered wealth process satisfies the linear equation

$$\widehat X^*_t=x-\int_0^t\frac{\widehat\lambda_uV_u(2)}{1-\rho^2+\rho^2V_u(2)}\,\widehat X^*_u\,d\widehat S_u+\int_0^t\frac{\varphi_u(1)\rho^2+\widehat\lambda_uV_u(1)-\widetilde h_u}{1-\rho^2+\rho^2V_u(2)}\,d\widehat S_u.\tag{4.39}$$

Proof. The function $f(x)=\frac{1-\rho^2}{x}-\rho^2\ln x$ is differentiable and strictly decreasing on $]0,\infty[$ and takes all values from $]-\infty,+\infty[$. So (4.37) admits a unique solution for every $\alpha$. Besides, the inverse function $\alpha(x)$ is differentiable. Therefore $Y_t(2)$ is a process of finite variation, and it is adapted since $\langle\widehat\lambda\cdot M\rangle_T$ is deterministic. By the definition of $Y_t(2)$ we have, for all $t\in[0,T]$,

$$\frac{1-\rho^2}{Y_t(2)}-\rho^2\ln Y_t(2)=1-\rho^2+\langle\widehat\lambda\cdot M\rangle_T-\langle\widehat\lambda\cdot M\rangle_t.$$

It is evident that for $\alpha=1-\rho^2$ the solution of (4.37) is equal to 1, and it follows from (4.36) that $Y(2)$ satisfies the boundary condition $Y_T(2)=1$. Therefore

$$\frac{1-\rho^2}{Y_t(2)}-\rho^2\ln Y_t(2)-(1-\rho^2)=-(1-\rho^2)\int_t^Td\,\frac1{Y_u(2)}+\rho^2\int_t^Td\,\ln Y_u(2)=\int_t^T\Big(\frac{1-\rho^2}{Y^2_u(2)}+\frac{\rho^2}{Y_u(2)}\Big)\,dY_u(2)$$

and

$$\int_t^T\frac{1-\rho^2+\rho^2Y_u(2)}{Y^2_u(2)}\,dY_u(2)=\langle\widehat\lambda\cdot M\rangle_T-\langle\widehat\lambda\cdot M\rangle_t$$

for all $t\in[0,T]$. Hence

$$\int_0^t\frac{1-\rho^2+\rho^2Y_u(2)}{Y^2_u(2)}\,dY_u(2)=\langle\widehat\lambda\cdot M\rangle_t,$$

and, by integrating $Y^2_u(2)/\big(1-\rho^2+\rho^2Y_u(2)\big)$ against both sides of this equality, we obtain that $Y(2)$ satisfies

$$Y_t(2)=Y_0(2)+\int_0^t\frac{Y^2_u(2)\widehat\lambda^2_u}{1-\rho^2+\rho^2Y_u(2)}\,d\langle M\rangle_u,\tag{4.40}$$

which implies that the triple $(Y(2),\psi(2)=0,L(2)=0)$ satisfies (4.4), and $Y(2)=V(2)$ by Theorem 4.2. Equations (4.38) and (4.39) follow from (4.35) and (4.7), respectively, by taking $\varphi(2)=0$. □
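Equation (4.39) (like (4.7) in the general case) is a linear SDE driven by $d\widehat S$, so once the coefficient processes are available it can be discretised directly. Below is a minimal Euler sketch; the coefficient and $\widehat S$-increment arrays are purely hypothetical placeholders, not computed from the model.

```python
import numpy as np

def simulate_filtered_wealth(x0, a, b, dS_hat):
    """Euler scheme for the linear equation
         dX*_t = (-a_t * X*_t + b_t) dS_hat_t,   X*_0 = x0,
    which is the form of (4.7)/(4.39) with, e.g. in (4.39),
         a_t = lambda_hat_t V_t(2) / (1 - rho^2 + rho^2 V_t(2)),
         b_t = (phi_t(1) rho^2 + lambda_hat_t V_t(1) - h_tilde_t) / (1 - rho^2 + rho^2 V_t(2)).
    `a`, `b`, `dS_hat` are arrays of per-step coefficients/increments."""
    X = np.empty(len(dS_hat) + 1)
    X[0] = x0
    for k, dS in enumerate(dS_hat):
        X[k + 1] = X[k] + (-a[k] * X[k] + b[k]) * dS
    return X

# Hypothetical inputs, for illustration only
rng = np.random.default_rng(2)
n = 250
a = np.full(n, 0.3)                      # placeholder coefficient path
b = np.full(n, 0.1)                      # placeholder coefficient path
dS_hat = rng.normal(0.0, 0.02, size=n)   # placeholder increments of S_hat
print(simulate_filtered_wealth(1.0, a, b, dS_hat)[-1])
```

The candidate strategy at step $k$ is then $\pi^*_k=-a_kX_k+b_k$, matching the integrand of (4.7).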
Remark 4.2. In the case $F^S\subseteq G$ we have $\widehat M=M$ and $\rho=1$. Therefore (4.40) is linear and

$$Y_t(2)=e^{\langle\widehat\lambda\cdot M\rangle_t-\langle\widehat\lambda\cdot M\rangle_T}.$$
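Equation (4.37) has no closed-form root for general $\rho$, but its left-hand side is strictly monotone in $x$, so $\nu(\rho,\alpha)$ is easy to compute numerically. The sketch below uses a made-up deterministic tradeoff (not from the chapter), builds $V_t(2)$ via (4.36), and checks that for $\rho$ close to $1$ it approaches the exponential formula of Remark 4.2.

```python
import numpy as np
from scipy.optimize import brentq

def nu(rho, alpha):
    # Unique root of (1 - rho^2)/x - rho^2*ln(x) = alpha;
    # the left-hand side is strictly decreasing on (0, inf).
    g = lambda x: (1 - rho**2) / x - rho**2 * np.log(x) - alpha
    lo, hi = 1e-12, 1.0
    while g(hi) > 0:        # expand the bracket until g changes sign
        hi *= 2.0
    return brentq(g, lo, hi, xtol=1e-14)

T, n = 1.0, 200
t = np.linspace(0.0, T, n + 1)
theta2 = 0.25 * (1.0 + t)   # made-up deterministic density of the tradeoff
tradeoff = np.cumsum(np.append(0.0, 0.5 * (theta2[1:] + theta2[:-1]) * np.diff(t)))

def V2(rho):
    alpha = 1 - rho**2 + tradeoff[-1] - tradeoff    # argument of nu in (4.36)
    return np.array([nu(rho, a) for a in alpha])

# Terminal condition V_T(2) = 1, and the rho -> 1 limit of Remark 4.2
assert abs(V2(0.7)[-1] - 1.0) < 1e-10
exp_formula = np.exp(tradeoff - tradeoff[-1])
assert np.max(np.abs(V2(0.999) - exp_formula)) < 1e-2
print(V2(0.7)[:5])
```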
In the case $A=G$ of complete information, $Y_t(2)=e^{\langle\lambda\cdot N\rangle_t-\langle\lambda\cdot N\rangle_T}$.

5. Diffusion Market Model

Example 1. Let us consider the financial market model

$$d\widetilde S_t=\widetilde S_t\mu_t(\eta)\,dt+\widetilde S_t\sigma_t(\eta)\,dw^0_t,\qquad d\eta_t=a_t(\eta)\,dt+b_t(\eta)\,dw_t,$$

subject to initial conditions. Here $w^0$ and $w$ are correlated Brownian motions with $E\,dw^0_t\,dw_t=\rho\,dt$, $\rho\in(-1,1)$. Let us write $w_t=\rho w^0_t+\sqrt{1-\rho^2}\,w^1_t$, where $w^0$ and $w^1$ are independent Brownian motions. It is evident that $w^\perp=-\sqrt{1-\rho^2}\,w^0+\rho w^1$ is a Brownian motion independent of $w$, and one can express the Brownian motions $w^0$ and $w^1$ in terms of $w$ and $w^\perp$ as

$$w^0_t=\rho w_t-\sqrt{1-\rho^2}\,w^\perp_t,\qquad w^1_t=\sqrt{1-\rho^2}\,w_t+\rho w^\perp_t.\tag{5.1}$$

Suppose that $b^2>0$, $\sigma^2>0$, and the coefficients $\mu$, $\sigma$, $a$, and $b$ are such that $F^{S,\eta}_t=F^{w^0,w}_t$ and $F^\eta_t=F^w_t$. We assume that an agent would like to hedge a contingent claim $H$ (which can be a function of $S_T$ and $\eta_T$) using only observations based on the process $\eta$. So the stochastic basis will be $(\Omega,F,F_t,P)$, where $F_t$ is the natural filtration of $(w^0,w)$, and the flow of observable events is $G_t=F^w_t$.
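The rotation (5.1) between $(w^0,w^1)$ and $(w,w^\perp)$ can be checked by direct simulation; the sketch below (seed and step count are arbitrary) verifies empirically that $w$ has instantaneous correlation $\rho$ with $w^0$, that $w^\perp$ is uncorrelated with $w$, and that the inversion formulas (5.1) recover $w^0$ and $w^1$ exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n_steps, dt = 0.6, 200_000, 1e-3

# Independent Brownian increments
dw0 = rng.normal(0.0, np.sqrt(dt), n_steps)
dw1 = rng.normal(0.0, np.sqrt(dt), n_steps)

# Correlated driver of eta and its orthogonal complement
dw = rho * dw0 + np.sqrt(1 - rho**2) * dw1
dw_perp = -np.sqrt(1 - rho**2) * dw0 + rho * dw1

# Empirical instantaneous correlations (should be ~rho and ~0)
corr = lambda a, b: np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
print(corr(dw0, dw), corr(dw, dw_perp))

# Inversion formulas (5.1) hold increment by increment
assert np.allclose(dw0, rho * dw - np.sqrt(1 - rho**2) * dw_perp)
assert np.allclose(dw1, np.sqrt(1 - rho**2) * dw + rho * dw_perp)
```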
Also denote $dS_t=\mu_t\,dt+\sigma_t\,dw^0_t$, so that $d\widetilde S_t=\widetilde S_t\,dS_t$ and $S$ is the return of the stock. Let $\widetilde\pi_t$ be the number of shares of the stock at time $t$. Then $\pi_t=\widetilde\pi_t\widetilde S_t$ represents the amount of money invested in the stock at time $t\in[0,T]$. We consider the mean-variance hedging problem

$$\text{to minimize}\quad E\Big(x+\int_0^T\widetilde\pi_t\,d\widetilde S_t-H\Big)^2\quad\text{over all }\widetilde\pi\text{ for which }\widetilde\pi\widetilde S\in\Pi(G),\tag{5.2}$$

which is equivalent to studying the mean-variance hedging problem

$$\text{to minimize}\quad E\Big(x+\int_0^T\pi_t\,dS_t-H\Big)^2\quad\text{over all }\pi\in\Pi(G).$$

Remark 5.1. Since $S$ is not $G$-adapted, $\widetilde\pi_t$ and $\widetilde\pi_t\widetilde S_t$ cannot be simultaneously $G$-predictable, and the problem to minimize $E\big(x+\int_0^T\widetilde\pi_t\,d\widetilde S_t-H\big)^2$ over all $\widetilde\pi\in\Pi(G)$ is not equivalent to problem (5.2). In this setting condition (A) is not satisfied, and it needs separate consideration.

By comparing with (1.1) we get that in this case

$$M_t=\int_0^t\sigma_s\,dw^0_s,\qquad\langle M\rangle_t=\int_0^t\sigma^2_s\,ds,\qquad\lambda_t=\frac{\mu_t}{\sigma^2_t}.$$

It is evident that $w$ is a Brownian motion also with respect to the filtration $F^{w^0,w^1}$, and condition (B) is satisfied. Therefore, by Proposition 2.2,

$$\widehat M_t=\rho\int_0^t\sigma_s\,dw_s.$$

By the integral representation theorem the GKW decompositions (3.2) and (3.3) take the following forms:

$$H_t=c_H+\int_0^th_s\sigma_s\,dw^0_s+\int_0^th^1_s\,dw^1_s,\qquad c_H=EH,\tag{5.3}$$

$$H_t=c_H+\rho\int_0^th^G_s\sigma_s\,dw_s+\int_0^th^\perp_s\,dw^\perp_s.\tag{5.4}$$

By substituting the expressions (5.1) for $w^0$ and $w^1$ into (5.3) and equating the integrands of (5.3) and (5.4), we obtain

$$h_t=\rho^2h^G_t-\frac{\sqrt{1-\rho^2}\,h^\perp_t}{\sigma_t},\qquad\text{and hence}\qquad\widehat h_t=\rho^2\widehat h^G_t-\frac{\sqrt{1-\rho^2}\,\widehat h^\perp_t}{\sigma_t}.$$

Therefore, by the definition of $\widetilde h$,

$$\widetilde h_t=\rho^2\widehat h^G_t-\widehat h_t=\frac{\sqrt{1-\rho^2}\,\widehat h^\perp_t}{\sigma_t}.\tag{5.5}$$

By using the notation

$$Z_s(0)=\rho\sigma_s\varphi_s(0),\qquad Z_s(1)=\rho\sigma_s\varphi_s(1),\qquad Z_s(2)=\rho\sigma_s\varphi_s(2),\qquad\theta_s=\frac{\mu_s}{\sigma_s},$$

we obtain the following corollary of Theorem 4.1.

Corollary 5.1. Let $H$ be a square integrable $F_T$-measurable random variable. Then the processes $V_t(0)$, $V_t(1)$, and $V_t(2)$ from (4.3) satisfy the following system of backward equations:

$$V_t(2)=V_0(2)+\int_0^t\frac{\big(\rho Z_s(2)+\theta_sV_s(2)\big)^2}{1-\rho^2+\rho^2V_s(2)}\,ds+\int_0^tZ_s(2)\,dw_s,\qquad V_T(2)=1,\tag{5.6}$$

$$V_t(1)=V_0(1)+\int_0^t\frac{\big(\rho Z_s(2)+\theta_sV_s(2)\big)\big(\rho Z_s(1)+\theta_sV_s(1)-\sqrt{1-\rho^2}\,\widehat h^\perp_s\big)}{1-\rho^2+\rho^2V_s(2)}\,ds+\int_0^tZ_s(1)\,dw_s,\qquad V_T(1)=E(H|G_T),\tag{5.7}$$

$$V_t(0)=V_0(0)+\int_0^t\frac{\big(\rho Z_s(1)+\theta_sV_s(1)-\sqrt{1-\rho^2}\,\widehat h^\perp_s\big)^2}{1-\rho^2+\rho^2V_s(2)}\,ds+\int_0^tZ_s(0)\,dw_s,\qquad V_T(0)=E^2(H|G_T).\tag{5.8}$$
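When $\theta$ is deterministic, the tradeoff $\langle\widehat\lambda\cdot M\rangle_T=\int_0^T\theta^2_s\,ds$ is deterministic, so by Corollary 4.1 one expects $Z(2)=0$ and $V_t(2)$ to solve the terminal-value ODE $dV_t(2)=\theta^2_tV^2_t(2)/\big(1-\rho^2+\rho^2V_t(2)\big)\,dt$, $V_T(2)=1$, which is the drift of (5.6) with $Z(2)=0$ (equivalently, the right-hand side of (4.40)). The sketch below, with a made-up deterministic $\theta$, integrates this ODE backward and compares it with the root representation (4.36)–(4.37).

```python
import numpy as np
from scipy.integrate import solve_ivp, quad
from scipy.optimize import brentq

rho, T = 0.6, 1.0
theta = lambda t: 0.5 + 0.2 * t               # made-up deterministic theta

def drift(t, v):
    # Drift of (5.6) with Z(2) = 0
    return theta(t)**2 * v**2 / (1 - rho**2 + rho**2 * v)

# Integrate backward from the terminal condition V_T(2) = 1
sol = solve_ivp(drift, [T, 0.0], [1.0], dense_output=True, rtol=1e-10, atol=1e-12)

def nu(alpha):
    # Root of (1 - rho^2)/x - rho^2*ln(x) = alpha, as in (4.37)
    g = lambda x: (1 - rho**2) / x - rho**2 * np.log(x) - alpha
    return brentq(g, 1e-12, 1.0)

# Compare with the root representation (4.36)-(4.37)
for t in np.linspace(0.0, T, 6):
    remaining, _ = quad(lambda s: theta(s)**2, t, T)
    v_root = nu(1 - rho**2 + remaining)
    v_ode = sol.sol(t)[0]
    assert abs(v_root - v_ode) < 1e-6
    print(round(t, 2), v_ode)
```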
6. Acknowledgments

This work was supported by Georgian National Science Foundation grant STO09-471-3-104.
