Numerical treatment of the free boundary problem

Excerpt from Figueiredo I., Rodrigues J., Santos L. (eds.), Free Boundary Problems: Theory and Applications (pp. 107–111).

The most common method of handling the early exercise condition (which leads to the free boundary problem) in numerical finance is simply to advance the discrete solution over a timestep ignoring the restriction, and then to project it onto the set of constraints (see, for example, [11]). In the case of a single factor (the American vanilla put pricing problem, for instance), the algebraic linear complementarity problem is commonly solved using a projected iteration method (PSOR) that captures the unknown exercise boundary at each time step (see Wilmott [26]). In [10]

a multigrid method is suggested to accelerate the convergence of the basic relaxation method, and in [24] Uzawa's algorithm is used to better capture the free boundary. Moreover, in [15] an implicit penalty method for pricing American options is proposed; the authors show that, when a variable timestep is used, quadratic convergence is achieved. The drawbacks of projected relaxation methods are that their rates of convergence depend on the choice of the relaxation parameter, that they deteriorate when the meshes are refined and, moreover, that they do not take into account the domain decomposition given by the free boundary. In this section we describe two algorithms whose regularization does not introduce any further source of error, as penalty methods do: the Bermúdez-Moreno algorithm and the Augmented Lagrangian Active Set method.

3.1. The Bermúdez-Moreno iterative algorithm (BM)

This method has been introduced in [3] for solving elliptic variational inequalities.

It consists of approximating the solution of the variational inequality by a sequence of solutions of variational equalities. While this method has been extensively applied to solve free boundary problems in computational mechanics, its application to price financial derivatives has been proposed only recently [8].

In order to apply the duality method proposed in [3], we introduce a new Lagrange multiplier, Q, in terms of a parameter ω > 0, by

\[
Q := P - \omega V. \qquad (3.1)
\]

Then, condition (1.9) can be equivalently formulated as
\[
Q(x,t) \in G_\omega(V(x,t)) \quad \text{a.e. in } \Omega \times (0,T),
\]

where G_ω := G − ωI, I is the identity function and G denotes the following multivalued maximal monotone operator (see [9]):

\[
G(Y) =
\begin{cases}
\emptyset & \text{if } Y < \Lambda,\\
(-\infty, 0] & \text{if } Y = \Lambda,\\
\{0\} & \text{if } \Lambda < Y.
\end{cases}
\qquad (3.2)
\]

We recall that, if B is a maximal monotone operator in a Hilbert space, then its resolvent is the single-valued contraction J_λ = (I + λB)^{-1} and its Yosida regularization is the Lipschitz-continuous mapping B_λ = (1/λ)(I − J_λ), where λ is any positive real number (see, for instance, [9]).

The following equivalence is straightforward:

\[
p \in B(v) \;\Longleftrightarrow\; p = B_\lambda(v + \lambda p). \qquad (3.3)
\]

In fact, a similar equivalence holds for the operator B_ω := B − ωI, for any λ such that ωλ < 1.
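As a concrete illustration of the resolvent and Yosida regularization, and of equivalence (3.3), the following sketch works both maps out for the simple maximal monotone operator B = ∂|·| on the real line. This B is only an illustrative stand-in (it is not the operator G of (3.2)): its resolvent is the well-known soft-thresholding map and its Yosida regularization clamps v/λ to [−1, 1].

```python
import numpy as np

def J(v, lam):
    """Resolvent (I + lam*B)^{-1} of B = subdifferential of |.|: soft-threshold."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def B_lam(v, lam):
    """Yosida regularization B_lam = (I - J_lam)/lam: clamp of v/lam to [-1, 1]."""
    return (v - J(v, lam)) / lam

v = np.linspace(-3.0, 3.0, 7)
# B_lam is the Lipschitz-continuous clamp described above
assert np.allclose(B_lam(v, 0.5), np.clip(v / 0.5, -1.0, 1.0))

# Instance of equivalence (3.3): p = 1 belongs to B(2) = {1}, and indeed
# p = B_lam(v + lam*p) for v = 2, lam = 0.5.
assert abs(B_lam(2.0 + 0.5 * 1.0, 0.5) - 1.0) < 1e-12
```

The clamp form makes the Lipschitz constant 1/λ of the Yosida regularization visible, which is the property exploited by the duality method.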

In the particular case of B = G given by (3.2), the Yosida regularization of G_ω is

\[
G_\omega^\lambda(Y) =
\begin{cases}
\dfrac{Y - \Lambda}{\lambda} & \text{if } Y < (1-\omega\lambda)\Lambda,\\[2mm]
-\dfrac{\omega}{1-\omega\lambda}\, Y & \text{if } Y \ge (1-\omega\lambda)\Lambda,
\end{cases}
\qquad (3.4)
\]

and equivalence (3.3) becomes

\[
Q = G_\omega^\lambda(V + \lambda Q). \qquad (3.5)
\]
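A quick numerical check of formulas (3.4)–(3.5): the sketch below evaluates the piecewise Yosida regularization and verifies that admissible pairs (V, P), namely V > Λ with P = 0 (inactive) or V = Λ with P ≤ 0 (active), satisfy the fixed-point relation (3.5) for Q = P − ωV. The values of Λ, ω and λ are arbitrary test choices with ωλ < 1.

```python
import numpy as np

Lam, om, lam = 2.0, 0.5, 1.0          # arbitrary test values, om*lam < 1

def G_om_lam(Y):
    """Piecewise formula (3.4) for the Yosida regularization of G_omega."""
    Y = np.asarray(Y, dtype=float)
    return np.where(Y < (1.0 - om * lam) * Lam,
                    (Y - Lam) / lam,
                    -om * Y / (1.0 - om * lam))

# Inactive case: V > Lam, P = 0, so Q = -om*V
V, P = 3.0, 0.0
Q = P - om * V
assert np.isclose(G_om_lam(V + lam * Q), Q)

# Active case: V = Lam, P <= 0, so Q = P - om*Lam
V, P = Lam, -1.0
Q = P - om * V
assert np.isclose(G_om_lam(V + lam * Q), Q)
```

Both branches of (3.4) agree at the breakpoint Y = (1 − ωλ)Λ, where they take the common value −ωΛ, so the regularization is continuous as the theory requires.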

The above developments lead us to consider the following algorithm:

1. Initialization: Q^0 is arbitrarily given.

2. Iteration m: Q^m is known.

(a) Compute V^{m+1} by solving

\[
\begin{cases}
\dfrac{\partial V^{m+1}}{\partial \tau} - \mathrm{Div}(A \nabla V^{m+1}) + v \cdot \nabla V^{m+1} + (r+\omega)V^{m+1} + Q^m = 0 & \text{in } \Omega \times (0,T),\\[2mm]
V^{m+1}(x_1, x_2, 0) = \Lambda(x_1, x_2) & \text{in } \Omega,\\[2mm]
\dfrac{\partial V^{m+1}}{\partial x_1}(x_1, x_2, \tau) = g(x_1, x_2, \tau) & \text{on } \Gamma_{1,+} \times (0,T).
\end{cases}
\qquad (3.6)
\]

(b) Update the Lagrange multiplier Q by

\[
Q^{m+1} = \mu\, G_\omega^\lambda\big[V^{m+1} + \lambda Q^m\big] + (1-\mu)\, Q^m \quad \text{in } \Omega \times (0,T), \qquad (3.7)
\]
where μ is a relaxation parameter, μ ∈ (0, 1].
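The loop above can be sketched on a small stationary discrete obstacle problem M V + P = b, with P ≤ 0, V ≥ 0 and P·V = 0 componentwise (obstacle Λ = 0). A 1-D finite-difference matrix stands in for the discretized operator of (3.6); the mesh size, the load and the value of ω are illustrative choices of ours, with λ fixed by the relation ωλ = 1/2 and μ = 1.

```python
import numpy as np

# Illustrative discrete obstacle problem (stand-in for the problem in (3.6))
n = 31
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
M = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = 500.0 * np.sin(2.0 * np.pi * x)   # load changing sign -> partial contact

om = 200.0                            # tuning parameter (choice of ours)
lam = 1.0 / (2.0 * om)                # omega*lambda = 1/2
Mw = M + om * np.eye(n)               # regularized matrix, cf. (r+omega) in (3.6)

def G_om_lam(Y):
    """Yosida regularization (3.4) with obstacle Lam = 0."""
    return np.where(Y < 0.0, Y / lam, -om * Y / (1.0 - om * lam))

Q = np.zeros(n)                       # step 1: Q0 arbitrary
for m in range(20000):
    V = np.linalg.solve(Mw, b - Q)    # step 2(a): linear solve, cf. (3.6)
    Q_new = G_om_lam(V + lam * Q)     # step 2(b) with mu = 1, cf. (3.7)
    if np.max(np.abs(Q_new - Q)) < 1e-10:
        Q = Q_new
        break
    Q = Q_new

P = Q + om * V                        # recover the multiplier from (3.1)
assert V.min() > -1e-5 and P.max() < 1e-5       # feasibility
assert np.max(np.abs(P * V)) < 1e-3             # complementarity
```

At the fixed point the pair (V, P) solves the complementarity problem exactly, which is the point of Remark 3.1; the number of iterations needed, however, is visibly sensitive to the choice of ω.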

Remark 3.1. We emphasize that, since (3.5) is completely equivalent to (3.1), this algorithm does not introduce any further source of error as penalty methods do.

Convergence results in [3] can be easily adapted to our degenerate case. In particular, one can show convergence as long as ω and λ are chosen such that ωλ = 1/2; unfortunately, the speed of convergence depends on these parameters.

3.2. Augmented Lagrangian Active Set method (ALAS)

The ALAS algorithm proposed in [17] is applied here to the fully discretized (in time and space) mixed formulation (1.8)–(1.9). In this method the basic iteration of the active set consists of two steps. In the first one, the domain is decomposed into active and inactive parts (depending on whether the constraint "acts" or not); then, a reduced linear system associated with the inactive part is solved. We use the algorithm for unilateral problems, which is based on the augmented Lagrangian formulation. Some a priori known properties of our particular problem are taken into account in order to improve the performance of this method.

Numerically Solving Amerasian Options Pricing Problems

First, for any decomposition N = I ∪ J, where N := {1, 2, ..., N_dof}, let us denote by [M_h]_{II} the principal minor of the matrix M_h and by [M_h]_{IJ} the codiagonal block indexed by I and J. Thus, for each mesh time t_n, the ALAS algorithm computes not only V_h^n and P_h^n but also a decomposition N = J^n ∪ I^n such that

\[
\begin{cases}
M_h V_h^n + P_h^n = b_h^{n-1},\\
[P_h^n]_j + \beta\,[V_h^n - \Lambda]_j \le 0 \quad \forall j \in J^n,\\
[P_h^n]_i = 0 \quad \forall i \in I^n,
\end{cases}
\qquad (3.8)
\]
for any positive constant β. In the above, I^n and J^n are, respectively, the inactive and the active sets at time t_n. More precisely, the iterative algorithm builds sequences

{V_{h,m}^n}_m, {P_{h,m}^n}_m, {I_m^n}_m and {J_m^n}_m, converging to V_h^n, P_h^n, I^n and J^n, by means of the following steps:

1. Initialize V_{h,0}^n and P_{h,0}^n ≤ 0. Choose β > 0. Set m = 0.

2. Compute
\[
Q_{h,m}^n = \min\big\{0,\; P_{h,m}^n + \beta\,(V_{h,m}^n - \Lambda)\big\},\qquad
J_m^n = \big\{ j \in N : [Q_{h,m}^n]_j < 0 \big\},\qquad
I_m^n = \big\{ i \in N : [Q_{h,m}^n]_i = 0 \big\}.
\]

3. If m ≥ 1 and J_m^n = J_{m-1}^n, then convergence is achieved. Stop.

4. Let V and P be the solution of the linear system
\[
M_h V + P = b^{n-1}, \qquad P = 0 \text{ on } I_m^n \text{ and } V = \Lambda \text{ on } J_m^n. \qquad (3.9)
\]
Set V_{h,m+1}^n = V, P_{h,m+1}^n = min{0, P}, m = m + 1 and go to step 2.

It is important to notice that, instead of solving the full linear system in (3.9), for I = I_m^n and J = J_m^n the following reduced system on the inactive set is solved:

\[
\begin{cases}
[M_h]_{II}\,[V]_I = \big[b^{n-1}\big]_I - [M_h]_{IJ}\,[\Lambda]_J,\\
[V]_J = [\Lambda]_J,\\
P = b^{n-1} - M_h V.
\end{cases}
\qquad (3.10)
\]
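Steps 1–4, with step 4 realized through the reduced system (3.10), can be sketched as follows on a discrete unilateral problem M_h V + P = b (P ≤ 0, V ≥ Λ, componentwise complementarity) with M_h a Stieltjes matrix, here a 1-D Laplacian. The load, sizes and the value of β are illustrative choices of ours.

```python
import numpy as np

n = 40
Mh = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # Stieltjes matrix
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
b = 0.05 * np.sin(2.0 * np.pi * x)    # load changing sign -> nontrivial contact
Lam = np.zeros(n)
beta = 1.0                            # only influences the first iteration

V = Lam.copy()                        # initialization proposed in [17]
P = b - Mh @ V
J_prev = None
for m in range(2 * n):                # finite termination for Stieltjes Mh
    Q = np.minimum(0.0, P + beta * (V - Lam))          # step 2
    J = np.flatnonzero(Q < 0.0)       # active set
    I = np.flatnonzero(Q == 0.0)      # inactive set
    if J_prev is not None and np.array_equal(J, J_prev):
        break                         # step 3: the active set has settled
    J_prev = J
    # step 4, solved through the reduced system (3.10) on the inactive set
    V = Lam.copy()
    if I.size:
        rhs = b[I] - Mh[np.ix_(I, J)] @ Lam[J]
        V[I] = np.linalg.solve(Mh[np.ix_(I, I)], rhs)
    P = np.minimum(0.0, b - Mh @ V)   # P = b - Mh V, then projected

# at convergence feasibility and complementarity hold
assert np.all(V >= Lam - 1e-10) and np.all(P <= 0.0)
assert np.max(np.abs(P * (V - Lam))) < 1e-10
```

Note how the stopping test compares active sets rather than residuals: once J_m^n repeats, the partition (and hence the solution of the reduced linear system) cannot change anymore.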

Remark 3.2. In a unilateral obstacle problem, the parameter β only influences the first iteration.

In [17], the authors proved convergence of the algorithm in a finite number of steps for a Stieltjes matrix (i.e., a real symmetric positive definite matrix with nonpositive off-diagonal entries [23]) and a suitable initialization. They also proved that I_m ⊂ I_{m+1}. Nevertheless, a Stieltjes matrix can only be obtained for linear elements, never for "our" quadratic elements, because we have some positive off-diagonal entries coming from the stiffness matrix (we actually use a lumped mass matrix). However, we have obtained good results by using the ALAS algorithm with quadratic finite elements and the following particular additional features:

- The algorithm is initialized as proposed in [17]:
\[
V_{h,0}^n = \Lambda \quad \text{and} \quad P_{h,0}^n = b^n - M_h V_{h,0}^n.
\]
- We compute the set
\[
I^n := \big\{ i \in N : x^i = (x_1^i, x_2^i) \text{ is a mesh node with } x_2^i < K,\; x_1^i > (1 + r(t_n - T))\, x_2^i \big\},
\]
and impose that I^n ⊂ I_m^n for every m (using Propositions 1.3 and 1.4).
- We do not assume monotonicity with respect to m for the sets {I_m^n}.

Figure 1. Spatial domain of solution for the Amerasian call options pricing problem, separating the active from the inactive set. Two sets of FE nodes with the same x_2 coordinate (Block "r" and Block "s") are represented, and the nodes inside the active set are filled.

Special care has to be taken for an efficient solution of the linear system when using the ALAS algorithm. Meshes with edges parallel to the axes and with a suitable node numbering have already been used in the BM algorithm. The fact that in the ALAS algorithm only an incomplete linear system is solved requires a deeper study. More precisely, by ordering the nodes from right to left and from bottom to top, we are led to a matrix with N_{x_2} blocks of dimension N_{x_1}. In other words, each set of nodes with the same x_2 coordinate gives rise to a block in the matrix. Thus, for each block either all of the nodes are inside the inactive set (the case of Block "r" in Figure 1) or only the first n(x_2) nodes (with n(x_2) ≤ N_{x_1}) belong to the inactive set (the case of Block "s" in Figure 1). The main point is that, also for the ALAS algorithm, we perform the factorization of the (complete) matrix only once, outside the time loop and the iterative algorithm loop, and, at each iteration, we solve N_{x_2} systems of variable dimension (less than or equal to N_{x_1}).
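One linear-algebra fact behind this "factorize once, solve reduced systems of variable dimension" strategy can be sketched as follows, under the assumption of a symmetric positive definite block and an ordering in which the inactive nodes come first (the paper's matrix also carries a convection term, so this SPD sketch is an illustration, not the authors' exact setting): the Cholesky factor of any leading principal minor of a matrix is simply the leading corner of the full Cholesky factor, so one factorization serves every reduced solve.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Illustrative SPD block (a small 1-D Laplacian); sizes are made up.
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = cholesky(A, lower=True)           # factorized ONCE, outside all loops

def solve_leading(k, rhs):
    """Solve [A]_II y = rhs for I = {0,...,k-1}, reusing the full factor:
    since L is lower triangular, A[:k,:k] = L[:k,:k] @ L[:k,:k].T."""
    Lk = L[:k, :k]
    return solve_triangular(Lk.T, solve_triangular(Lk, rhs, lower=True))

k = 5                                 # dimension of the inactive part
rhs = np.arange(1.0, k + 1.0)
y = solve_leading(k, rhs)
assert np.allclose(A[:k, :k] @ y, rhs)
```

Each ALAS iteration can then solve its N_{x_2} variable-dimension subsystems with only triangular back-substitutions, at a cost far below refactorizing the reduced matrices.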

A "general" comparison between the two iterative algorithms is not practical because the performance of this second algorithm is very problem-dependent. For example, the larger the active set, the more efficient the second algorithm is. Nevertheless, we can establish the following a priori comments related to the comparison of the two algorithms when applied to our particular problem. They will be completed when showing the numerical results in the next section:

- Linear systems in the ALAS algorithm are smaller than those in the BM algorithm.
- The ALAS algorithm uses some a priori known data about the inactive set.
- The BM algorithm is strongly dependent on the parameter ω, whereas the parameter β appearing in the ALAS algorithm only influences the first iteration.
- The ALAS algorithm can be interpreted as a semi-smooth Newton method [16], and thus it exhibits a super-linear convergence rate.
