Let $\{W_t,\ t\in[0,1]\}$ be a standard Brownian motion defined on the canonical probability space $(\Omega,\mathcal{F},P)$. Consider the stochastic differential equation
\[
\begin{cases}
X_t = X_0 - \int_0^t f(X_s)\,ds + W_t,\\[2pt]
X_0 = g(X_1 - X_0),
\end{cases}
\tag{4.21}
\]
where $f,g:\mathbb{R}\to\mathbb{R}$ are two continuous functions.
Observe that the periodic condition $X_0 = X_1$ is not included in this formulation. In order to handle this and other interesting cases, one should consider more general boundary conditions of the form
\[
X_0 = g(e^{-\lambda}X_1 - X_0),
\]
with $\lambda\in\mathbb{R}$. The periodic case would correspond to $\lambda\neq 0$ and $g(x) = (e^{-\lambda}-1)^{-1}x$. In order to simplify the exposition we will assume henceforth that $\lambda = 0$.
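The periodic correspondence can be checked numerically: with this choice of $g$, the scalar boundary equation $X_0 = g(e^{-\lambda}X_1 - X_0)$ has $X_0 = X_1$ as its unique root. A minimal sketch (the nonzero $\lambda$, the value of $X_1$, and the bisection setup are arbitrary choices for illustration, not from the text):

```python
import math

lam = 0.8                                  # any nonzero lambda (illustrative)
c = math.exp(-lam)
g = lambda x: x / (c - 1.0)                # g(x) = (e^{-lam} - 1)^{-1} x

X1 = 2.3                                   # arbitrary terminal value
# solve the boundary equation X0 = g(e^{-lam} X1 - X0) for X0 by bisection
h = lambda x: x - g(c * X1 - x)            # h is affine with a single root
lo, hi = X1 - 10.0, X1 + 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid

# the unique root is X0 = X1, i.e. the periodic condition
assert math.isclose(0.5 * (lo + hi), X1, abs_tol=1e-9)
```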
When $f\equiv 0$, the solution of (4.21) is
\[
Y_t = W_t + g(W_1). \tag{4.22}
\]
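A quick numerical sanity check of (4.22): on a sampled Brownian path, $Y$ satisfies the boundary condition of (4.21) exactly, since $Y_1 - Y_0 = W_1$ and $Y_0 = g(W_1)$. The choice $g = \sin$ and the grid size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
dt = 1.0 / n
# simulate a standard Brownian path W on [0, 1]
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

g = np.sin                 # an arbitrary continuous g, for illustration
Y = W + g(W[-1])           # Y_t = W_t + g(W_1), eq. (4.22)

# Y solves (4.21) with f == 0: the boundary condition Y_0 = g(Y_1 - Y_0)
# holds because Y_1 - Y_0 = W_1 and Y_0 = g(W_1)
assert np.isclose(Y[0], g(Y[-1] - Y[0]))
```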
Denote by $\Sigma$ the set of continuous functions $x:[0,1]\to\mathbb{R}$ such that $x_0 = g(x_1 - x_0)$, and consider the process $Y = \{Y_t,\ t\in[0,1]\}$ given by (4.22). The mapping $\omega\mapsto Y(\omega)$ is a bijection from $\Omega$ onto $\Sigma$. Define the transformation $T:\Omega\to\Omega$ by
\[
T(\omega)_t = \omega_t + \int_0^t f(Y_s(\omega))\,ds. \tag{4.23}
\]
Lemma 4.2.1 The transformation $T$ is a bijection of $\Omega$ if and only if Eq. (4.21) has a unique solution for each $\omega\in\Omega$; in this case the solution is given by $X = Y(T^{-1}(\omega))$.
Proof: If $T(\eta) = \omega$, then the function $X_t = Y_t(\eta)$ solves Eq. (4.21) for $W_t = \omega_t$. Indeed,
\[
X_t = X_0 + \eta_t = X_0 + \omega_t - \int_0^t f(Y_s(\eta))\,ds = X_0 + W_t - \int_0^t f(X_s)\,ds.
\]
Conversely, given a solution $X$ to Eq. (4.21), we have $T(Y^{-1}(X)) = W$. Indeed, if we set $Y^{-1}(X) = \eta$, then
\[
T(\eta)_t = \eta_t + \int_0^t f(Y_s(\eta))\,ds = \eta_t + \int_0^t f(X_s)\,ds = \eta_t + W_t - X_t + X_0 = W_t.
\]
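Lemma 4.2.1 suggests a constructive scheme: invert $T$ by Picard iteration and recover $X = Y(T^{-1}(\omega))$. The following sketch does this on a discretized path; the contractive choices of $f$ and $g$, the grid, and the iteration count are assumptions of the illustration, not of the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
dt = 1.0 / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

f = lambda x: 0.5 * np.tanh(x)    # illustrative drift, Lipschitz constant 1/2
g = lambda x: 0.5 * np.cos(x)     # illustrative boundary function, |g'| <= 1/2

def Y(eta):
    # Y_t(eta) = eta_t + g(eta_1), eq. (4.22)
    return eta + g(eta[-1])

# invert T by Picard iteration on eta_t = W_t - int_0^t f(Y_s(eta)) ds;
# with these choices the map is a sup-norm contraction
eta = W.copy()
for _ in range(100):
    integral = np.concatenate([[0.0], np.cumsum(f(Y(eta))[:-1] * dt)])
    eta = W - integral

X = Y(eta)   # X = Y(T^{-1}(omega)), the solution of (4.21)

# verify X_t = X_0 - int_0^t f(X_s) ds + W_t on the grid
rhs = X[0] - np.concatenate([[0.0], np.cumsum(f(X)[:-1] * dt)]) + W
assert np.max(np.abs(X - rhs)) < 1e-8
# verify the boundary condition X_0 = g(X_1 - X_0)
assert abs(X[0] - g(X[-1] - X[0])) < 1e-8
```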
There are sufficient conditions for $T$ to be a bijection (see Exercise 4.2.10). Henceforth we will impose the following assumptions:

(H.1) There exists a unique solution to Eq. (4.21) for each $\omega\in\Omega$.

(H.2) $f$ and $g$ are of class $C^1$.
Now we turn to the discussion of the Markov field property. First notice that the process $Y$ is a Markov random field (see Exercise 4.2.3). Suppose that $Q$ is a probability on $\Omega$ such that $P = Q\circ T^{-1}$. Then $\{T(\omega)_t,\ 0\le t\le 1\}$ will be a Wiener process under $Q$, and, consequently, the law of the process $X$ under the probability $P$ coincides with the law of $Y$ under $Q$. In this way we translate the problem of the Markov property of $X$ into the problem of the Markov property of the process $Y$ under a new probability $Q$. This problem can be handled, provided $Q$ is absolutely continuous with respect to the Wiener measure $P$ and we can compute an explicit expression for its Radon-Nikodym derivative. To do this we will make use of Theorem 4.1.2, applied to the process
\[
u_t = f(Y_t). \tag{4.24}
\]
Notice that $T$ is bijective by assumption (H.1) and that $u$ is $H$-continuously differentiable by (H.2). Moreover,
\[
D_s u_t = f'(Y_t)\bigl[g'(W_1) + \mathbf{1}_{\{s\le t\}}\bigr]. \tag{4.25}
\]
The Carleman-Fredholm determinant of the kernel (4.25) is computed in the next lemma.
Lemma 4.2.2 Set $\alpha_t = f'(Y_t)$. Then
\[
\det{}_2(I+Du) = \Bigl[1 + g'(W_1)\Bigl(1 - e^{-\int_0^1 \alpha_t\,dt}\Bigr)\Bigr]\, e^{-g'(W_1)\int_0^1 \alpha_t\,dt}.
\]
Proof: From (A.12) applied to the kernel $Du$, we obtain
\[
\det{}_2(I+Du) = 1 + \sum_{n=2}^{\infty}\frac{\gamma_n}{n!}, \tag{4.26}
\]
where
\[
\begin{aligned}
\gamma_n &= \int_{[0,1]^n} \det\bigl(\mathbf{1}_{\{i\neq j\}}\,D_{t_i}u_{t_j}\bigr)\,dt_1\cdots dt_n
= n! \int_{\{t_1<t_2<\cdots<t_n\}} \det\bigl(\mathbf{1}_{\{i\neq j\}}\,D_{t_i}u_{t_j}\bigr)\,dt_1\cdots dt_n \\
&= \Bigl(\int_0^1 \alpha_t\,dt\Bigr)^{n} \det B_n,
\end{aligned}
\]
and the matrix $B_n$ is given by
\[
B_n = \begin{pmatrix}
0 & g'(W_1)+1 & g'(W_1)+1 & \cdots & g'(W_1)+1 \\
g'(W_1) & 0 & g'(W_1)+1 & \cdots & g'(W_1)+1 \\
g'(W_1) & g'(W_1) & 0 & \cdots & g'(W_1)+1 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
g'(W_1) & g'(W_1) & g'(W_1) & \cdots & 0
\end{pmatrix}.
\]
Simple computations show that for all $n\ge 1$
\[
\det B_n = (-1)^n g'(W_1)^n\bigl(g'(W_1)+1\bigr) + (-1)^{n+1} g'(W_1)\bigl(g'(W_1)+1\bigr)^n.
\]
Hence,
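The closed form for $\det B_n$ can be verified numerically for small $n$ (the test value standing in for $g'(W_1)$ is arbitrary):

```python
import numpy as np

def B(n, a):
    """Matrix B_n: zero diagonal, a+1 above the diagonal, a below it."""
    M = np.zeros((n, n))
    M[np.triu_indices(n, k=1)] = a + 1.0
    M[np.tril_indices(n, k=-1)] = a
    return M

def detB_closed(n, a):
    # (-1)^n a^n (a+1) + (-1)^(n+1) a (a+1)^n
    return (-1.0)**n * a**n * (a + 1.0) + (-1.0)**(n + 1) * a * (a + 1.0)**n

a = 0.7  # stands in for g'(W_1); arbitrary test value
for n in range(1, 8):
    assert np.isclose(np.linalg.det(B(n, a)), detB_closed(n, a))
```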
\[
\begin{aligned}
\det{}_2(I+Du) &= 1 + \sum_{n=1}^{\infty}\frac{1}{n!}\Bigl(\int_0^1 \alpha_t\,dt\Bigr)^{n}
\Bigl[(-1)^n g'(W_1)^n\bigl(g'(W_1)+1\bigr) + (-1)^{n+1} g'(W_1)\bigl(g'(W_1)+1\bigr)^n\Bigr] \\
&= \bigl(g'(W_1)+1\bigr)\,e^{-g'(W_1)\int_0^1 \alpha_t\,dt} - g'(W_1)\,e^{-(g'(W_1)+1)\int_0^1 \alpha_t\,dt} \\
&= \Bigl[1 + g'(W_1)\Bigl(1 - e^{-\int_0^1 \alpha_t\,dt}\Bigr)\Bigr]\, e^{-g'(W_1)\int_0^1 \alpha_t\,dt}.
\end{aligned}
\]
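The resummation above is elementary to check numerically: the partial sums of the series converge to the stated closed form (the values standing in for $g'(W_1)$ and $\int_0^1 \alpha_t\,dt$ are arbitrary):

```python
import math

a, I = 0.7, 1.3   # stand-ins for g'(W_1) and int_0^1 alpha_t dt
series = 1.0 + sum(
    I**n / math.factorial(n)
    * ((-1)**n * a**n * (a + 1) + (-1)**(n + 1) * a * (a + 1)**n)
    for n in range(1, 40)
)
# closed form: [1 + a (1 - e^{-I})] e^{-a I}
closed = (1 + a * (1 - math.exp(-I))) * math.exp(-a * I)
assert math.isclose(series, closed)
```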
Therefore, the following condition implies that $\det_2(I+Du)\neq 0$ a.s.:

(H.3) $1 + g'(y)\bigl(1 - e^{-f'(x+g(y))}\bigr) \neq 0$, for almost all $x,y\in\mathbb{R}$.
Suppose that the functions $f$ and $g$ satisfy conditions (H.1) through (H.3). Then the process $u$ given by (4.24) satisfies the conditions of Theorem 4.1.2, and we obtain
\[
\eta(u) = \frac{dQ}{dP} = \Bigl[1 + g'(W_1)\Bigl(1 - \exp\Bigl(-\int_0^1 f'(Y_t)\,dt\Bigr)\Bigr)\Bigr]
\exp\Bigl(-g'(W_1)\int_0^1 f'(Y_t)\,dt - \int_0^1 f(Y_t)\,dW_t - \frac12\int_0^1 f(Y_t)^2\,dt\Bigr). \tag{4.27}
\]
We will denote by $\Phi$ the term
\[
\Phi = 1 + g'(W_1)\Bigl(1 - \exp\Bigl(-\int_0^1 f'(Y_t)\,dt\Bigr)\Bigr),
\]
and let $L$ be the exponential factor in (4.27). Using the relationship between the Skorohod and Stratonovich integrals, we can write
\[
\int_0^1 f(Y_t)\,dW_t = \int_0^1 f(Y_t)\circ dW_t - \frac12\int_0^1 f'(Y_t)\,dt - g'(W_1)\int_0^1 f'(Y_t)\,dt.
\]
Consequently, the term $L$ can be written as
\[
L = \exp\Bigl(-\int_0^1 f(Y_t)\circ dW_t + \frac12\int_0^1 f'(Y_t)\,dt - \frac12\int_0^1 f(Y_t)^2\,dt\Bigr). \tag{4.28}
\]
In this form we get
\[
\eta(u) = |\Phi|\,L.
\]
The main result about the Markov field property of the process $X$ is the following:

Theorem 4.2.1 Suppose that the functions $f$ and $g$ are of class $C^2$ and $f'$ has linear growth. Suppose furthermore that the equation
\[
\begin{cases}
X_t = X_0 - \int_0^t f(X_s)\,ds + W_t,\\[2pt]
X_0 = g(X_1 - X_0),
\end{cases}
\tag{4.29}
\]
has a unique solution for each $W\in\Omega$ and that (H.3) holds. Then the process $X$ verifies the Markov field property if and only if one of the following conditions holds:

(a) $f(x) = ax + b$, for some constants $a,b\in\mathbb{R}$,

(b) $g' = 0$,

(c) $g' = -1$.
Remarks:

1. If condition (b) or (c) is satisfied, we have an initial or final fixed value. In this case, assuming only that $f$ is Lipschitz, it is well known that there is a unique solution that is a Markov process (not only a Markov random field).

2. Suppose that (a) holds, and assume that the implicit equation $x = g((e^{-a}-1)x + y)$ has a unique continuous solution $x = \varphi(y)$. Then Eq. (4.29) admits a unique solution that is a Markov random field (see Exercise 4.2.6).
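In the linear case of Remark 2 (here with $f(x) = ax$, $b = 0$), an Euler discretization makes the solution affine in $X_0$, so the boundary condition reduces to a scalar fixed-point equation, the discrete analogue of $x = g((e^{-a}-1)x + y)$. A sketch under illustrative choices of $a$ and a contractive $g$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
dt = 1.0 / n
dW = rng.normal(0.0, np.sqrt(dt), n)

a = 1.0
g = lambda x: 0.5 * np.cos(x)   # illustrative boundary function, |g'| <= 1/2

# Euler scheme for dX = -aX dt + dW is affine in X_0: X_k = c_k X_0 + d_k,
# with c_n the discrete analogue of e^{-a}
c, d = 1.0, 0.0
for k in range(n):
    c, d = c * (1 - a * dt), d * (1 - a * dt) + dW[k]

# boundary condition X_0 = g(X_1 - X_0) becomes the scalar fixed-point
# equation x = g((c - 1) x + d); since |g'| * |c - 1| < 1, Picard
# iteration converges to the unique solution
x = 0.0
for _ in range(200):
    x = g((c - 1.0) * x + d)

X0 = x
# rebuild the path from X0 and verify the boundary condition
X = np.empty(n + 1)
X[0] = X0
for k in range(n):
    X[k + 1] = X[k] * (1 - a * dt) + dW[k]
assert np.isclose(X[-1], c * X0 + d)
assert np.isclose(X0, g(X[-1] - X0))
```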
Proof: Taking into account the above remarks, it suffices to show that if $X$ is a Markov random field, then one of the above conditions is satisfied. Let $Q$ be the probability measure on $C_0([0,1])$ given by (4.27). The law of the process $X$ under $P$ is the same as the law of $Y$ under $Q$. Therefore, $Y$ is a Markov field under $Q$.

For any $t\in(0,1)$, we define the $\sigma$-algebras
\[
\begin{aligned}
\mathcal{F}_t^i &= \sigma\{Y_u,\ 0\le u\le t\} = \sigma\{W_u,\ 0\le u\le t,\ g(W_1)\},\\
\mathcal{F}_t^e &= \sigma\{Y_u,\ t\le u\le 1,\ Y_0\} = \sigma\{W_u,\ t\le u\le 1\},
\end{aligned}
\]
and $\mathcal{F}_t^0 = \sigma\{Y_0, Y_t\} = \sigma\{W_t, g(W_1)\}$.
The random variable $L$ defined in (4.28) can be written as $L = L_t^i L_t^e$, where
\[
L_t^i = \exp\Bigl(-\int_0^t f(Y_s)\circ dW_s + \frac12\int_0^t f'(Y_s)\,ds - \frac12\int_0^t f(Y_s)^2\,ds\Bigr)
\]
and
\[
L_t^e = \exp\Bigl(-\int_t^1 f(Y_s)\circ dW_s + \frac12\int_t^1 f'(Y_s)\,ds - \frac12\int_t^1 f(Y_s)^2\,ds\Bigr).
\]
Notice that $L_t^i$ is $\mathcal{F}_t^i$-measurable and $L_t^e$ is $\mathcal{F}_t^e$-measurable. For any nonnegative, $\mathcal{F}_t^i$-measurable random variable $\xi$, we define (see Exercise 4.2.11)
\[
\hat\xi = E_Q(\xi \mid \mathcal{F}_t^e) = \frac{E(\xi\,\eta(u)\mid\mathcal{F}_t^e)}{E(\eta(u)\mid\mathcal{F}_t^e)} = \frac{E(\xi\,|\Phi|\,L_t^i\mid\mathcal{F}_t^e)}{E(|\Phi|\,L_t^i\mid\mathcal{F}_t^e)}.
\]
The denominator in the above expression is finite a.s. because $\eta(u)$ is integrable with respect to $P$. The fact that $Y$ is a Markov field under $Q$ implies that the $\sigma$-fields $\mathcal{F}_t^i$ and $\mathcal{F}_t^e$ are conditionally independent given $\mathcal{F}_t^0$. As a consequence, $\hat\xi$ is $\mathcal{F}_t^0$-measurable. Choosing $\xi = (L_t^i)^{-1}$ and $\xi = \chi(L_t^i)^{-1}$, where $\chi$ is a nonnegative, bounded, and $\mathcal{F}_t^i$-measurable random variable, we obtain that
\[
\frac{E(|\Phi|\mid\mathcal{F}_t^e)}{E(|\Phi|\,L_t^i\mid\mathcal{F}_t^e)} \quad\text{and}\quad \frac{E(\chi|\Phi|\mid\mathcal{F}_t^e)}{E(|\Phi|\,L_t^i\mid\mathcal{F}_t^e)}
\]
are $\mathcal{F}_t^0$-measurable. Consequently,
\[
G_\chi = \frac{E(\chi|\Phi|\mid\mathcal{F}_t^e)}{E(|\Phi|\mid\mathcal{F}_t^e)}
\]
is also $\mathcal{F}_t^0$-measurable.
The next step will be to translate this measurability property into an analytical condition using Lemma 1.3.3. First notice that if $\chi$ is a smooth random variable that is bounded and has a bounded derivative, then $G_\chi$ belongs to $\mathbb{D}^{1,2}_{\mathrm{loc}}$ because $f'$ has linear growth. Applying Lemma 1.3.3 to the random variable $G_\chi$ and to the $\sigma$-field $\sigma\{W_t, W_1\}$ yields
\[
\frac{d}{ds}D_s[G_\chi] = 0
\]
a.e. on $[0,1]$. Notice that $\frac{d}{ds}D_s\chi = 0$ a.e. on $[t,1]$, because $\chi$ is $\mathcal{F}_t^i$-measurable (again by Lemma 1.3.3). Therefore, for almost all $s\in[t,1]$, we get
\[
E\Bigl[\chi\,\frac{d}{ds}D_s|\Phi| \,\Big|\, \mathcal{F}_t^e\Bigr]\, E\bigl[|\Phi| \,\big|\, \mathcal{F}_t^e\bigr]
= E\bigl[\chi|\Phi| \,\big|\, \mathcal{F}_t^e\bigr]\, E\Bigl[\frac{d}{ds}D_s|\Phi| \,\Big|\, \mathcal{F}_t^e\Bigr].
\]
The above equality also holds if $\chi$ is $\mathcal{F}_t^e$-measurable. So, by a monotone class argument, it holds for any bounded and nonnegative random variable $\chi$, and we get that
\[
\frac{1}{\Phi}\,\frac{d}{ds}D_s\Phi = \frac{-g'(W_1)\,Z\,f''(W_s+g(W_1))}{1 + g'(W_1)(1-Z)}
\]
is $\mathcal{F}_t^e$-measurable for almost all $s\in[t,1]$ (actually for all $s\in[t,1]$, by continuity), where
\[
Z = \exp\Bigl(-\int_0^1 f'(W_r + g(W_1))\,dr\Bigr).
\]
Suppose now that condition (a) does not hold, that is, there exists a point $y\in\mathbb{R}$ such that $f''(y)\neq 0$. By continuity we have $f''(x)\neq 0$ for all $x$ in some interval $(y-\varepsilon, y+\varepsilon)$. Given $t<s<1$, define
\[
A_s = \{W_s + g(W_1) \in (y-\varepsilon, y+\varepsilon)\}.
\]
Then $P(A_s)>0$, and
\[
\mathbf{1}_{A_s}\,\frac{g'(W_1)\,Z}{1+g'(W_1)(1-Z)}
\]
is $\mathcal{F}_t^e$-measurable. Again applying Lemma 1.3.3, we obtain that
\[
\frac{d}{dr}D_r\Bigl[\frac{g'(W_1)\,Z}{1+g'(W_1)(1-Z)}\Bigr] = 0
\]
for almost all $r\in[0,t]$ and $\omega\in A_s$. This implies
\[
g'(W_1)\bigl(1+g'(W_1)\bigr)\,f''(W_r+g(W_1)) = 0
\]
a.e. on $[0,t]\times A_s$. Now, if
\[
B = A_s \cap \{W_r + g(W_1)\in(y-\varepsilon, y+\varepsilon)\},
\]
we have that $P(B)>0$ and
\[
g'(W_1)\bigl(1+g'(W_1)\bigr) = 0
\]
a.s. on $B$. Then, if (b) and (c) do not hold, we can find an interval $I$ such that if $W_1\in I$ then $g'(W_1)(1+g'(W_1))\neq 0$. The set $B\cap\{W_1\in I\}$ has nonzero probability, and this implies a contradiction.
Consider the stochastic differential equation (4.21) in dimension $d>1$. One can ask under which conditions the solution is a Markov random field. This problem is more difficult, and a complete solution is not available. First we want to remark that, unlike in the one-dimensional case, the solution can be a Markov process even though $f$ is nonlinear. In fact, suppose that the boundary conditions are of the form
\[
X_0^{i_k} = a_k,\quad 1\le k\le l, \qquad X_1^{j_k} = b_k,\quad 1\le k\le d-l,
\]
where $\{i_1,\dots,i_l\}\cup\{j_1,\dots,j_{d-l}\}$ is a partition of $\{1,\dots,d\}$. Assume in addition that $f$ is triangular, that is, $f^k(x)$ is a function of $x^1,\dots,x^k$ for all $k$. In this case, if for each $k$, $f^k$ satisfies a Lipschitz and linear growth condition in the variable $x^k$, one can show that there exists a unique solution of the equation $dX_t + f(X_t)\,dt = dW_t$ with the above boundary conditions, and the solution is a Markov process. The Markov field property for triangular functions $f$ and triangular boundary conditions has been studied by Ferrante [97]. Other results in the general case obtained by a change of probability argument are the following:
(1) In dimension one, and assuming a linear boundary condition of the type $F_0X_0 + F_1X_1 = h_0$, Donati-Martin (cf. [80]) has obtained the existence and uniqueness of a solution for the equation
\[
dX_t = \sigma(X_t)\circ dW_t + b(X_t)\,dt
\]
when the coefficients $b$ and $\sigma$ are of class $C^4$ with bounded derivatives, and $F_0F_1\neq 0$. On the other hand, if $\sigma$ is linear ($\sigma(x)=\alpha x$), $h_0=0$, and $b$ is of class $C^2$, then one can show that the solution $X$ is a Markov random field only if the drift is of the form $b(x) = Ax + Bx\log|x|$, where $|B|<1$. See also [5] for a discussion of this example using the approach developed in Section 4.2.3.
(2) In the $d$-dimensional case one can show the following result, which is similar to Theorem 4.2.1 (cf. Ferrante and Nualart [98]):

Theorem 4.2.2 Suppose $f$ is infinitely differentiable, $g$ is of class $C^2$, and $\det\bigl(I - \varphi(1)g'(W_1) + g'(W_1)\bigr) \neq 0$ a.s., where $\varphi(t)$ is the solution of the linear equation $d\varphi(t) = f'(Y_t)\varphi(t)\,dt$, $\varphi(0) = I$. We also assume that the equation
\[
\begin{cases}
X_t = X_0 - \int_0^t f(X_s)\,ds + W_t,\\[2pt]
X_0 = g(X_1 - X_0),
\end{cases}
\]
has a unique solution for each $W\in C_0([0,1];\mathbb{R}^d)$, and that the following condition holds:

(H.4) $\mathrm{span}\bigl\{\partial_{i_1}\cdots\partial_{i_m}f'(x);\ i_1,\dots,i_m\in\{1,\dots,d\},\ m\ge 1\bigr\} = \mathbb{R}^{d\times d}$, for all $x\in\mathbb{R}^d$.

Then we have that $g'(x)$ is zero or $-I_d$; that is, the boundary condition is of the form $X_0 = a$ or $X_1 = b$.
(3) It is also possible to have a dichotomy similar to the one-dimensional case in higher dimensions (see Exercise 4.2.12).