3. Upper bound for the error on the free boundaries
In this section we establish our main result: the following Theorem.
Theorem 3.1. Let $\varepsilon\in[0,+\infty)^d$ be such that $\|\varepsilon\|=1$. There exists a constant $C_{T,\varepsilon}>0$ such that
$$0\ \le\ s_n(T,\varepsilon)-s(T,\varepsilon)\ \le\ C_{T,\varepsilon}\sqrt{h},$$
when $h=T/n$ is small enough.
This result is quite easy to prove in the case $d=1$ because we are able to control $\frac{\partial^2 P}{\partial x^2}$ thanks to the variational inequality satisfied by $P$. However, for $d>1$, we cannot get this control. To prove the result for $\varepsilon\in(0,+\infty)^d$ such that $\|\varepsilon\|=1$, we will use a maximum principle. If there exists $i\in\{1,\dots,d\}$ such that $\varepsilon_i=0$, then we come back to the case with $d-1$ assets.
3.1. The case $d=1$
Throughout this section, we assume that $d=1$ and that $\alpha=1$. We will establish Theorem 3.1 in this particular case.
We recall the definition of the critical price for an American option with pay-off $f$. For $t\in(0,T]$, we set
$$s(t)=\inf\{x\in\mathbb{R}_+ : P(t,x)>f(x)\}.$$
As the value function of the corresponding Bermudean option, $P_n$, is convex, it is possible to define a critical price for the Bermudean option. For $k\in\{1,\dots,n\}$, we set
$$s_n(kh)=\inf\{x\in\mathbb{R}_+ : P_n(kh,x)>f(x)\}.$$
As $P_n\le P$ for all $n\in\mathbb{N}^*$ and $\lim_{n\to+\infty}P_n(T,\cdot)=P(T,\cdot)$, it is easy to see that $s_n\ge s$ for all $n\in\mathbb{N}^*$ and $\lim_{n\to+\infty}s_n(T)=s(T)$. We recall that there exists a constant $C_1>0$ such that
$$\sup_{x\in[0,+\infty)}[P-P_n](T,x)\ \le\ C_1 h.$$
We deduce from this an upper bound for the difference between $s$ and $s_n$. On the open set $(s(T),s_n(T))$, we have:
\begin{align*}
\frac{\sigma^2\xi^2}{2}\,\frac{\partial^2 P}{\partial x^2}(T,\xi) &= rP(T,\xi)-(r-\delta)\xi\,\frac{\partial P}{\partial x}(T,\xi)+\frac{\partial P}{\partial t}(T,\xi)\\
&\ge rP(T,\xi)-(r-\delta)\xi\,\frac{\partial P}{\partial x}(T,\xi)\\
&\ge r(K-\xi)-(\delta-r)^+\xi\\
&\ge rK-\max\{\delta,r\}\,s_n(T),
\end{align*}
and hence there exists a constant $C>0$ such that $\frac{\partial^2 P}{\partial x^2}(T,\xi)\ge C$.
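Since the bounds used below keep track of the exact constants, it is convenient to record the following intermediate estimate; it is not spelled out here, but it is consistent with the displayed inequalities that follow. It uses $P(T,\xi)\ge f(\xi)=K-\xi$, the identity $\frac{\partial P}{\partial x}=\frac{\partial(P-f)}{\partial x}-1$ on $(0,K)$, the fact that $\frac{\partial(P-f)}{\partial x}\ge 0$ on $[0,K]$, the convexity of $P(T,\cdot)$ and $\xi\le s_n(T)$:
\begin{align*}
\frac{\sigma^2 s_n(T)^2}{2}\,\frac{\partial^2 P}{\partial x^2}(T,\xi)
&\ge \frac{\sigma^2 \xi^2}{2}\,\frac{\partial^2 P}{\partial x^2}(T,\xi)
\ \ge\ rP(T,\xi)-(r-\delta)\xi\,\frac{\partial P}{\partial x}(T,\xi)\\
&\ge r(K-\xi)-(r-\delta)\xi\Big(\frac{\partial (P-f)}{\partial x}(T,\xi)-1\Big)
\ =\ rK-\delta\xi-(r-\delta)\xi\,\frac{\partial (P-f)}{\partial x}(T,\xi)\\
&\ge \big(rK-\delta s_n(T)\big)-(r-\delta)^+ s_n(T)\,\frac{\partial (P-f)}{\partial x}(T,\xi).
\end{align*}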
As we know that $s(T)<K$, we may assume that $n$ is large enough to have $s_n(T)<K$, and we integrate this inequality between $s(T)$ and $x\in(s(T),s_n(T))$. We get:
\begin{align*}
\frac{\sigma^2 s_n(T)^2}{2}\Big(\frac{\partial P}{\partial x}(T,x)+1\Big) &\ge \big(rK-\delta s_n(T)\big)\big(x-s(T)\big)-(r-\delta)^+ s_n(T)\big(P(T,x)-f(x)\big)\\
&\ge \big(rK-\delta s_n(T)\big)\big(x-s(T)\big)-(r-\delta)^+ s_n(T)\big(P(T,s_n(T))-f(s_n(T))\big),
\end{align*}
because the function $x\mapsto P(T,x)-f(x)$ is nondecreasing on $[0,K]$. Integrating a second time between $s(T)$ and $s_n(T)$, we obtain:
\begin{align*}
\frac{\sigma^2 s_n(T)^2}{2}\big(P(T,s_n(T))-f(s_n(T))\big) \ge{}& \frac{1}{2}\big(rK-\delta s_n(T)\big)\big(s_n(T)-s(T)\big)^2\\
&-(r-\delta)^+ s_n(T)\big(P(T,s_n(T))-f(s_n(T))\big)\big(s_n(T)-s(T)\big).
\end{align*}
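For completeness, here is a sketch of how the two integrations are carried out, assuming the smooth-fit property $\frac{\partial P}{\partial x}(T,s(T))=-1$, the equality $P(T,s(T))=f(s(T))=K-s(T)$, and $s_n(T)<K$:
$$\int_{s(T)}^{x}\frac{\partial^2 P}{\partial x^2}(T,\xi)\,d\xi = \frac{\partial P}{\partial x}(T,x)+1,
\qquad
\int_{s(T)}^{x}\frac{\partial (P-f)}{\partial x}(T,\xi)\,d\xi = P(T,x)-f(x),$$
since $(P-f)(T,s(T))=0$; combined with the intermediate estimate recorded above, this gives the first display. Then
\begin{align*}
\int_{s(T)}^{s_n(T)}\Big(\frac{\partial P}{\partial x}(T,x)+1\Big)dx &= P(T,s_n(T))-P(T,s(T))+s_n(T)-s(T)=P(T,s_n(T))-f(s_n(T)),\\
\int_{s(T)}^{s_n(T)}\big(x-s(T)\big)\,dx &= \tfrac{1}{2}\big(s_n(T)-s(T)\big)^2,
\end{align*}
which gives the second one.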
As $\lim_{n\to+\infty}s_n(T)=s(T)$, for $\eta>0$ and $n$ large enough, we have
$$(1+\eta)\,\sigma^2 s_n(T)^2\big(P(T,s_n(T))-f(s_n(T))\big)\ \ge\ \big(rK-\delta s_n(T)\big)\big(s_n(T)-s(T)\big)^2.$$
It follows that
\begin{align*}
\big(s_n(T)-s(T)\big)^2 &\le \frac{(1+\eta)\,\sigma^2 s_n(T)^2}{rK-\delta s_n(T)}\,\big(P(T,s_n(T))-f(s_n(T))\big)\\
&\le \frac{(1+\eta)\,\sigma^2 s(T)^2}{rK-\delta s(T)}\,C_1 h+o(h),
\end{align*}
since $P_n(T,s_n(T))=f(s_n(T))$, so that $P(T,s_n(T))-f(s_n(T))=[P-P_n](T,s_n(T))\le C_1 h$.
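The factor $(1+\eta)$ comes from absorbing the cross term of the inequality obtained after the second integration: multiplying it by $2$ gives
$$\Big(\sigma^2 s_n(T)^2+2(r-\delta)^+ s_n(T)\big(s_n(T)-s(T)\big)\Big)\big(P(T,s_n(T))-f(s_n(T))\big)\ \ge\ \big(rK-\delta s_n(T)\big)\big(s_n(T)-s(T)\big)^2,$$
and, since $s_n(T)-s(T)\to 0$ while $s_n(T)\to s(T)>0$, the first factor is at most $(1+\eta)\,\sigma^2 s_n(T)^2$ for $n$ large enough.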
To conclude, we have
$$\limsup_{n\to+\infty}\frac{s_n(T)-s(T)}{s(T)\sqrt{h}}\ \le\ \sigma\sqrt{\frac{C_1(1+\eta)}{rK-\delta s(T)}},$$
and letting $\eta$ go to $0$, we get the result. ♦
3.2. The case $d>1$
To prove Theorem 3.1, we need some information on the regularity of the function $x\mapsto s_n(T,x)$. The following lemma provides this regularity result.
Lemma 3.2. Let $\varepsilon\in(0,+\infty)^d$ be such that $\|\varepsilon\|=1$. For $\eta>0$, we set
$$V^\varepsilon_\eta=\Big\{x\in(0,+\infty)^d : \|x-s(T,\varepsilon)\varepsilon\|<\eta\Big\}.$$
Let $\eta>0$ be such that the set $V^\varepsilon_\eta$ is included in a compact subset of $(0,+\infty)^d$. There exists a constant $s^\varepsilon_\eta>0$ such that, for $n\in\mathbb{N}$ large enough, we have
$$\sup_{x\in V^\varepsilon_\eta}\frac{\big|s_n\big(T,\frac{x}{\|x\|}\big)-s_n(T,\varepsilon)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}\ \le\ s^\varepsilon_\eta.$$
Proof of Lemma 3.2. Let $n\in\mathbb{N}^*$. As $0$ belongs to $E_T\cap E^n_T$ and as these two sets are convex subsets of $\mathbb{R}^d$, we can define the functions $g$ and $g_n$ such that, for all $x\in(0,+\infty)^d$,
$$g(x)=\inf\{\mu>0 : x/\mu\in E_T\}\qquad\text{and}\qquad g_n(x)=\inf\{\mu>0 : x/\mu\in E^n_T\}.$$
They are convex and homogeneous functions (see [4], Lemma I.2, p. 5). On the other hand, for $x\in(0,+\infty)^d$, it is easy to see that
$$g(x)=\frac{\|x\|}{s\big(T,\frac{x}{\|x\|}\big)}\qquad\text{and}\qquad g_n(x)=\frac{\|x\|}{s_n\big(T,\frac{x}{\|x\|}\big)}.$$
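A quick way to check these identities (a sketch, under the implicit assumption that the exercise region $E_T$ meets the ray through $x/\|x\|$ exactly in the segment of radius $s\big(T,\frac{x}{\|x\|}\big)$, and similarly for $E^n_T$):
$$\frac{x}{\mu}\in E_T\ \Longleftrightarrow\ \frac{\|x\|}{\mu}\le s\Big(T,\frac{x}{\|x\|}\Big)\ \Longleftrightarrow\ \mu\ge\frac{\|x\|}{s\big(T,\frac{x}{\|x\|}\big)},\qquad\text{hence}\qquad g(x)=\frac{\|x\|}{s\big(T,\frac{x}{\|x\|}\big)},$$
and the same argument applies to $g_n$.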
For $x\in V^\varepsilon_\eta$, we have
$$\frac{\big|s_n\big(T,\frac{x}{\|x\|}\big)-s_n(T,\varepsilon)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}=\frac{1}{g_n(x)\,g_n(\varepsilon)}\,\frac{\big|g_n(\|x\|\varepsilon)-g_n(x)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}.$$
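This equality (as reconstructed here) follows from the previous identities: since $\|\varepsilon\|=1$ and $g_n$ is positively homogeneous, we have $s_n\big(T,\frac{x}{\|x\|}\big)=\frac{\|x\|}{g_n(x)}$, $s_n(T,\varepsilon)=\frac{1}{g_n(\varepsilon)}$ and $\|x\|\,g_n(\varepsilon)=g_n(\|x\|\varepsilon)$, so that
$$\Big|s_n\Big(T,\frac{x}{\|x\|}\Big)-s_n(T,\varepsilon)\Big|=\Big|\frac{\|x\|}{g_n(x)}-\frac{1}{g_n(\varepsilon)}\Big|=\frac{\big|\,\|x\|\,g_n(\varepsilon)-g_n(x)\big|}{g_n(x)\,g_n(\varepsilon)}=\frac{\big|g_n(\|x\|\varepsilon)-g_n(x)\big|}{g_n(x)\,g_n(\varepsilon)}.$$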
It follows from the definition of $s_n$ that $s_1\ge s_n\ge s$, so $g_1\le g_n\le g$. Since $g_1$ and $g$ are continuous with positive values on $(0,+\infty)^d$, there exist two constants $C_1,C_2>0$ such that $C_1<g_n<C_2$ on a compact subset of $(0,+\infty)^d$ containing $V^\varepsilon_\eta$. Hence, there exists a constant $C>0$ such that
$$\sup_{x\in V^\varepsilon_\eta}\frac{\big|s_n\big(T,\frac{x}{\|x\|}\big)-s_n(T,\varepsilon)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}\ \le\ C\,\sup_{x\in V^\varepsilon_\eta}\frac{\big|g_n(\|x\|\varepsilon)-g_n(x)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}.$$
Writing $g_n(x)=g_n(s(T,\varepsilon)\varepsilon)+\big(g_n(x)-g_n(s(T,\varepsilon)\varepsilon)\big)$ and using the homogeneity of $g$ and $g_n$, so that $g_n(\|x\|\varepsilon)-g_n(s(T,\varepsilon)\varepsilon)=\big(\|x\|-s(T,\varepsilon)\big)g_n(\varepsilon)$, we have
$$\sup_{x\in V^\varepsilon_\eta}\frac{\big|s_n\big(T,\frac{x}{\|x\|}\big)-s_n(T,\varepsilon)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}\ \le\ C\,g_n(\varepsilon)\sup_{x\in V^\varepsilon_\eta}\frac{\big|\,\|x\|-s(T,\varepsilon)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}+C\,\sup_{x\in V^\varepsilon_\eta}\frac{\big|g_n(s(T,\varepsilon)\varepsilon)-g_n(x)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}.$$
As $(g_n(\varepsilon))_{n\in\mathbb{N}^*}$ is bounded, there exists $\tilde{C}>0$ such that
$$C\,g_n(\varepsilon)\sup_{x\in V^\varepsilon_\eta}\frac{\big|\,\|x\|-s(T,\varepsilon)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}\ \le\ \tilde{C},$$
since $\big|\,\|x\|-s(T,\varepsilon)\big|=\big|\,\|x\|-\|s(T,\varepsilon)\varepsilon\|\,\big|\le\|x-s(T,\varepsilon)\varepsilon\|$.
Moreover, $g_n$ is convex; denoting by $\delta V^\varepsilon_\eta$ the boundary of $V^\varepsilon_\eta$, we obtain
\begin{align*}
\sup_{x\in V^\varepsilon_\eta}\frac{\big|g_n(s(T,\varepsilon)\varepsilon)-g_n(x)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}
&\le \sup_{x\in\delta V^\varepsilon_\eta}\frac{\big|g_n(s(T,\varepsilon)\varepsilon)-g_n(x)\big|}{\|x-s(T,\varepsilon)\varepsilon\|}\\
&\le \frac{2}{\eta}\sup_{x\in\delta V^\varepsilon_\eta}|g_n(x)|\\
&\le \frac{2C_2}{\eta}.
\end{align*}
♦
Moreover, we recall a parabolic maximum principle on which the proof of Theorem 3.1 relies. It appears in Friedman A. (1975).
Let $D$ be a bounded domain of $(0,T)\times\mathbb{R}^d$. We define the parabolic boundary of $D$ by $\delta_p D=\delta D\setminus\{(t,x)\in\delta D : t=T\}$, where $\delta D$ is the boundary of $D$, and we introduce the operator $\tilde{M}$ such that $\tilde{M}h=Mh-rh$.
Let $u$ be a function defined on $[0,T]\times\mathbb{R}^d$, continuous on $\bar{D}$ and such that $u\in C^{1,2}(D)$, $\tilde{M}u\ge 0$ on $D$ and $u\le 0$ on $\delta_p D$.
Then we have $u\le 0$ on $D$.
Now we are able to prove Theorem 3.1.
Proof of Theorem 3.1. Assume that there exists a constant $b>0$ such that $s(T,\varepsilon)<s_n(T,\varepsilon)-b\sqrt{h}$. We will show that this leads to a contradiction, by proving that it implies the existence of $\lambda\in[s(T,\varepsilon),s_n(T,\varepsilon)-b\sqrt{h}]$ such that $0\ge[P-f](T,\lambda\varepsilon)$.
For that purpose, we will apply the maximum principle on the following domain:
$$D=\Big\{(t,x)\in(0,T)\times(0,+\infty)^d : \|x-s(T,\varepsilon)\varepsilon\|<\eta\sqrt{h}\ \text{ and }\ s\Big(t,\frac{x}{\|x\|}\Big)<\|x\|<s_n\Big(T,\frac{x}{\|x\|}\Big)\Big\},$$
where $\eta>0$ is a constant which will be determined later.
Since, for all $x\in(0,+\infty)^d$, the function $t\mapsto s\big(t,\frac{x}{\|x\|}\big)$ is non-increasing, and since $s\big(T,\frac{x}{\|x\|}\big)<s_n\big(T,\frac{x}{\|x\|}\big)$, we can assert that $D$ is a bounded domain in $(0,T)\times(0,+\infty)^d$.
For $t\in(0,T)$, we set $\bar{t}=h\min\{k\in\{0,\dots,n\} : t\le kh\}$. First we notice that $P_n(\bar{t},x)=f(x)$ on $\bar{D}$, because for all $t\in(0,T)$ we have $s_n\big(T,\frac{x}{\|x\|}\big)\le s_n\big(\bar{t},\frac{x}{\|x\|}\big)$. Hence, it follows from the estimate of the error on the value functions and the fact that $P$ is non-decreasing with respect to time that we have
$$[P-f](t,x)\ \le\ P(\bar{t},x)-P_n(\bar{t},x)\ \le\ C_d h\qquad\text{on } D.$$
Notice that on $\delta_p D$, we have
$$[P-f](t,x)\ \le\ \begin{cases} 0 & \text{if } \|x\|=s\big(t,\frac{x}{\|x\|}\big) \text{ or } t=0,\\ C_d h & \text{if } \|x\|=s_n\big(T,\frac{x}{\|x\|}\big) \text{ or } \|x-s(T,\varepsilon)\varepsilon\|=\eta\sqrt{h}. \end{cases}$$
Hence we will introduce a function which kills the positive part of $P-f$ on $\delta_p D$. On $(0,T)\times(0,+\infty)^d$, we define the function $\beta(t,x)=\beta_1(x)+\beta_2(x)$, with
$$\beta_1(x)=\frac{a}{\sqrt{h}}\Big(\|x\|-s_n(T,\varepsilon)+b\sqrt{h}\Big)_+^3\qquad\text{and}\qquad \beta_2(x)=\frac{c}{\sqrt{h}}\Big(\|x-s(T,\varepsilon)\varepsilon\|-\frac{\eta}{2}\sqrt{h}\Big)_+^3,$$
where $a$, $b$, and $c$ are positive constants which will be determined later, and $(\cdot)_+$ denotes the positive part (so that $\beta$ is nonnegative and of class $C^{1,2}$).
Now we want to prove that $P-f-\beta\le 0$ on $D$, so we just have to check that the function $P-f-\beta$ satisfies the assumptions of the maximum principle. Indeed, $\beta$ is a $C^{1,2}$ function on $D$, and $D$ is included in a compact subset of $(0,T)\times(0,+\infty)^d$, so we can apply the maximum principle on $D$ to the function $P-f-\beta$.
First step: We prove that $P-f-\beta\le 0$ on $\delta_p D$.
Let $(t,x)\in\delta_p D$. We have four cases to study, corresponding to the four parts of $\delta_p D$.
• First case: Assume that $\|x\|=s\big(t,\frac{x}{\|x\|}\big)$. In this case, we have
$$[P-f-\beta](t,x)=-\beta(t,x)\le 0.$$
• Second case: Assume that $\|x-s(T,\varepsilon)\varepsilon\|=\eta\sqrt{h}$. From the estimate of the error on the value functions (see [1]), we have
\begin{align*}
[P-f-\beta](t,x) &\le C_d h-\beta_2(x)\\
&\le \Big(C_d-\frac{c\eta^3}{8}\Big)h.
\end{align*}
Hence, $[P-f-\beta](t,x)\le 0$ if we choose $c$ and $\eta$ such that $C_d-\frac{c\eta^3}{8}<0$.
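For the record, the value of $\beta_2$ used in the second line is a direct computation from its definition, since on this part of the boundary $\|x-s(T,\varepsilon)\varepsilon\|=\eta\sqrt{h}$:
$$\beta_2(x)=\frac{c}{\sqrt{h}}\Big(\eta\sqrt{h}-\frac{\eta}{2}\sqrt{h}\Big)_+^3=\frac{c}{\sqrt{h}}\cdot\frac{\eta^3}{8}\,h^{3/2}=\frac{c\eta^3}{8}\,h.$$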
• Third case: Assume that $\|x\|=s_n\big(T,\frac{x}{\|x\|}\big)$. In this case, we have
\begin{align*}
[P-f-\beta](t,x) &\le C_d h-\beta_1(x)\\
&\le C_d h-\frac{a}{\sqrt{h}}\Big(s_n\Big(T,\frac{x}{\|x\|}\Big)-s_n(T,\varepsilon)+b\sqrt{h}\Big)_+^3.
\end{align*}
Since there exists $R>0$ such that $D\subset(0,T)\times V^\varepsilon_R$ and $V^\varepsilon_R$ is included in a compact subset of $(0,+\infty)^d$, we can apply Lemma 3.2 to prove that there exists $s_L>0$ such that
$$[P-f-\beta](t,x)\ \le\ \big(C_d-a(b-s_L\eta)^3\big)h.$$
We conclude by asserting that $[P-f-\beta](t,x)\le 0$ as soon as we choose $a$, $b$, and $\eta$ such that $C_d-a(b-s_L\eta)^3<0$.
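A sketch of the estimate behind this bound: on $\bar{D}$ we have $\|x-s(T,\varepsilon)\varepsilon\|\le\eta\sqrt{h}$, so Lemma 3.2 (applied on $V^\varepsilon_R$, with $s_L$ the corresponding constant) gives, provided $b>s_L\eta$ (which the condition $C_d<a(b-s_L\eta)^3$ forces),
$$s_n\Big(T,\frac{x}{\|x\|}\Big)-s_n(T,\varepsilon)+b\sqrt{h}\ \ge\ b\sqrt{h}-s_L\,\|x-s(T,\varepsilon)\varepsilon\|\ \ge\ (b-s_L\eta)\sqrt{h},$$
so that $\beta_1(x)\ge\frac{a}{\sqrt{h}}\big((b-s_L\eta)\sqrt{h}\big)^3=a(b-s_L\eta)^3\,h$.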
• Fourth case: We assume that $t=0$. Since $P(0,\cdot)=f$, we have $[P-f-\beta](0,x)=-\beta(0,x)\le 0$.
In conclusion, $P-f-\beta\le 0$ on $\delta_p D$ if the two following conditions are satisfied:
$$C_d<\frac{c\eta^3}{8}\qquad\text{and}\qquad C_d<a(b-s_L\eta)^3.$$
Second step: We prove that $\tilde{M}[P-f-\beta]\ge 0$ on $D$.
We begin by evaluating $\tilde{M}\beta(t,x)$ when $h$ goes to $0$. Computing the derivatives of $\beta$ on $D$ and using Lemma 3.2, we get the following upper bound as $h$ goes to $0$:
$$\tilde{M}\beta(t,x)\ \le\ 3aM s(T,\varepsilon)^2(b+s_L\eta)+3c\eta M s(T,\varepsilon)^2+o(1),$$
where the $o(1)$ does not depend on $x$. As $D$ is included in the continuation region of the American option, we have
$$\tilde{M}P(t,x)=0\qquad\text{and}\qquad \tilde{M}f(t,x)=-rK+\langle\alpha\delta,x\rangle\qquad\text{on } D.$$
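As a quick check of the formula for $\tilde{M}f$ (a sketch, assuming, as earlier in the paper, that $f(x)=(K-\langle\alpha,x\rangle)^+$ and that the first-order part of $M$ is $\sum_{i}(r-\delta_i)x_i\partial_{x_i}$): on $D$ we have $\langle\alpha,x\rangle<K$, so $f$ is affine there and its second-order derivatives vanish, whence
$$\tilde{M}f(x)=\sum_{i=1}^d(r-\delta_i)x_i\,\partial_{x_i}f(x)-rf(x)=-\sum_{i=1}^d(r-\delta_i)\alpha_i x_i-r\big(K-\langle\alpha,x\rangle\big)=-rK+\langle\alpha\delta,x\rangle,$$
with $\langle\alpha\delta,x\rangle=\sum_{i=1}^d\alpha_i\delta_i x_i$.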
We obtain:
\begin{align*}
\tilde{M}[P-f-\beta](t,x) &= rK-\langle\alpha\delta,x\rangle-\tilde{M}\beta(t,x)\\
&\ge rK-\langle\alpha\delta,x\rangle-3aM s(T,\varepsilon)^2(b+s_L\eta)-3c\eta M s(T,\varepsilon)^2+o(1)\\
&\ge rK-s(T,\varepsilon)\langle\alpha\delta,\varepsilon\rangle-3aM s(T,\varepsilon)^2(b+s_L\eta)-3c\eta M s(T,\varepsilon)^2+o(1).
\end{align*}
We have given some conditions on the constants $a$, $b$, $c$, and $\eta$ such that, if they are satisfied, the assumptions of the maximum principle are too. Indeed, we have shown that, for $h$ small enough, $P-f-\beta\le 0$ on $\delta_p D$ and $\tilde{M}[P-f-\beta](t,x)\ge 0$ on $D$ if the constants $a$, $b$, $c$, and $\eta$ satisfy
$$C_d<\frac{c\eta^3}{8},\qquad C_d<a(b-s_L\eta)^3\qquad\text{and}\qquad 3aM s(T,\varepsilon)^2(b+s_L\eta)+3c\eta M s(T,\varepsilon)^2<rK-s(T,\varepsilon)\langle\alpha\delta,\varepsilon\rangle.$$
It is quite easy to find constants $a$, $b$, $c$, and $\eta$ satisfying these conditions; with such constants, we can apply the maximum principle on $D$ and prove that $P-f-\beta\le 0$ on $D$. However, if we assume that $s(T,\varepsilon)<s_n(T,\varepsilon)-b\sqrt{h}$, then there exists $\lambda\in\big(s(T,\varepsilon),s_n(T,\varepsilon)-b\sqrt{h}\big)$ such that $(T,\lambda\varepsilon)$ belongs to $\bar{D}$ (choosing, for instance, $\lambda<s(T,\varepsilon)+\frac{\eta}{2}\sqrt{h}$, so that $\beta(\lambda\varepsilon)=0$). This leads to a contradiction, because the continuity of the function $P-f-\beta$ would imply that
$$0\ \ge\ [P-f-\beta](T,\lambda\varepsilon)=[P-f](T,\lambda\varepsilon)>0,$$
the last inequality holding since $\lambda>s(T,\varepsilon)$. In conclusion, we have proved that $s(T,\varepsilon)\ge s_n(T,\varepsilon)-b\sqrt{h}$.