Long-step homogeneous interior point algorithm for the P* -nonlinear complementarity problems




Yugoslav Journal of Operations Research 12 (2002), Number 1, 17-48

LONG-STEP HOMOGENEOUS INTERIOR-POINT ALGORITHM FOR THE P*-NONLINEAR COMPLEMENTARITY PROBLEMS*

Goran LEŠAJA
Department of Mathematics and Computer Science, Georgia Southern University, Statesboro, USA

Abstract: A P*-Nonlinear Complementarity Problem, as a generalization of the P*-Linear Complementarity Problem, is considered. We show that the long-step version of the homogeneous self-dual interior-point algorithm can be used to solve such a problem. The algorithm achieves linear global convergence and quadratic local convergence under the following assumptions: the function satisfies a modified scaled Lipschitz condition, the problem has a strictly complementary solution, and a certain submatrix of the Jacobian is nonsingular on some compact set.

Keywords: P*-nonlinear complementarity problem, homogeneous interior-point algorithm, wide neighborhood of the central path, polynomial complexity, quadratic convergence.

* Some results contained in this paper were first published in the author's Ph.D. thesis. Further research on this topic was supported in part by a Georgia Southern Faculty Research Subcommittee Faculty Research Grant. AMS subject classification: 90C05, 65K05.

1. INTRODUCTION

The nonlinear complementarity problem (NCP), as described in the next section, is a framework which can be applied to many important mathematical programming problems. The Karush-Kuhn-Tucker (KKT) system for convex optimization problems is a monotone NCP. Also, the variational inequality problem can be formulated as a mixed NCP (see Ferris and Pang [6]).

The linear complementarity problem (LCP), a special case of NCP, has been studied extensively. For a comprehensive treatment of LCP see the monograph of Cottle et al. [4].

The interior-point methods, originally developed for the linear programming problem (LP), have been successfully extended to LCP, NCP, and semidefinite programming problems (SDP). The number of papers dealing with LP and LCP is extensive. Many topics, like the existence of the central path, global and local convergence, and implementation issues, have been studied in depth.

Fewer papers are devoted to NCP. Among the earliest are the important works of Dikin [5], McLinden [19], and Nesterov and Nemirovskii [24]. In a series of papers Kojima et al. [14, 15, 13, 16, 17, 11] studied different classes of NCP in which the function is a P0-function, a uniform P-function, or a monotone function. They analyzed the central paths of these problems and proposed continuation, or interior-point, methods to solve them. No polynomial global and/or local convergence results were given.

A number of other interior-point algorithms for monotone NCP have been developed, among them Potra and Ye [30], Andersen and Ye [1], Güler [7], Nesterov [23], Monteiro et al. [21], Sun and Zhao [31], Tseng [33, 32], Wright and Ralph [35]. Polynomial global convergence for many of these algorithms has been proven when the function is monotone and satisfies a certain smoothness condition. The most general one is the self-concordance condition of Nesterov and Nemirovskii [24]. Other conditions include the relative Lipschitz condition of Jarre [9] and the scaled Lipschitz condition of Potra and Ye [30].

In the linear case, that is for LCP, the above-mentioned smoothness conditions are unnecessary to prove polynomial global and local convergence of the various interior-point methods. Moreover, the convergence results have been proven for more general classes of functions than monotone functions. Among others is the P*-LCP introduced by Kojima et al. [12]; see also Miao [20], Ji et al. [10], Potra and Sheng [28], Anitescu et al. [3, 2].

In this paper we study the P*-NCP, which generalizes the monotone NCP in a similar way in which the P*-LCP generalizes the monotone LCP. This class was introduced independently by the author [18] and Jansen et al. [8].
There are few papers that study the class of P*-NCP. Recently Peng et al. [26] analyzed an interior-point method for P*-NCP using the self-regular proximities that they initially introduced for LP and LCP. In Jansen et al. [8] the definition of P*-functions is indirect: it is based on the P* property of the Jacobian matrix, while our definition deals directly with the function. We also provide a proof of the equivalence of the two definitions (Lemma 2.1). A similar approach is adopted by Peng et al. [26].

The second objective of the paper is to prove linear global and quadratic local convergence of the interior-point method for the P*-NCP. We use a long-step version of the homogeneous, self-dual, interior-point algorithm of [1]. In [1] polynomial global convergence of the short-step version of the algorithm was analyzed, but no local convergence result was established. Based on the analysis in [31] and [37], we prove that the iteration sequence converges to a strictly complementary solution with R-order at least 2, while the primal-dual gap converges to zero with R-order and Q-order at least 2, under the following assumptions, described later in the text: the existence of a strictly complementary solution (ESCS), the modified scaled Lipschitz condition of Potra and Ye (SLC), and the nonsingularity of a Jacobian submatrix (NJS). This set of assumptions is weaker than the one in [31]; we show that one of the assumptions of [31] is a consequence of the scaled Lipschitz condition (Lemma 5.6).

One more comment is in order. Since most of the smoothness conditions were introduced for monotone functions, we have chosen to modify the scaled Lipschitz condition of Potra and Ye [30] to be able to handle P*-functions. For the same purpose, a different modification of the scaled Lipschitz condition was introduced in [8] (Condition 3.2), and its relation to some known conditions was discussed there. On the other hand, Peng et al. [26] used a generalization of Jarre's relative Lipschitz condition.

The paper is organized as follows. In Section 2 we formulate the P*-NCP. In Section 3 we discuss a homogeneous model for the P*-NCP and introduce a long-step infeasible interior-point algorithm for this model. Global convergence is analyzed in Section 4. We end the paper with the analysis of local convergence in Section 5.

2. PROBLEM

We consider a nonlinear complementarity problem (NCP) of the form

(NCP)  s = f(x),  (x, s) ≥ 0,  xᵀs = 0,

where x, s ∈ Rⁿ and f : R₊ⁿ → Rⁿ is a C¹ function. Denote the feasible set of NCP by

F = {(x, s) ∈ R₊²ⁿ : s = f(x)},

and its solution set by

F* = {(x*, s*) ∈ F : (x*)ᵀs* = 0}.

For any given ε > 0 we define the set of ε-approximate solutions of NCP as

F_ε = {(x, s) ∈ R₊²ⁿ : xᵀs < ε, ||s − f(x)|| < ε}.

If f is a linear function f(x) = Mx + q, where M ∈ Rⁿˣⁿ and q ∈ Rⁿ, then the problem reduces to LCP. The LCP has been studied for many different classes of matrices M (see [4, 12]). We list some:

• Skew-symmetric matrices (SS):
  (∀x ∈ Rⁿ)(xᵀMx = 0). (2.1)

• Positive semidefinite matrices (PSD):
  (∀x ∈ Rⁿ)(xᵀMx ≥ 0). (2.2)

• P-matrices: matrices with all principal minors positive, or equivalently
  (∀x ∈ Rⁿ, x ≠ 0)(∃i ∈ I)(x_i(Mx)_i > 0). (2.3)

• P₀-matrices: matrices with all principal minors nonnegative, or equivalently
  (∀x ∈ Rⁿ, x ≠ 0)(∃i ∈ I)(x_i ≠ 0 and x_i(Mx)_i ≥ 0). (2.4)

• Sufficient matrices (SU): matrices which are both column and row sufficient.
  − Column sufficient matrices (CSU):
    (∀x ∈ Rⁿ)((∀i ∈ I)(x_i(Mx)_i ≤ 0) ⇒ (∀i ∈ I)(x_i(Mx)_i = 0)). (2.5)
  − Row sufficient matrices (RSU): M is row sufficient if Mᵀ is column sufficient.

• P*(κ)-matrices: matrices such that
  (1 + 4κ) ∑_{i∈T⁺(x)} x_i(Mx)_i + ∑_{i∈T⁻(x)} x_i(Mx)_i ≥ 0, ∀x ∈ Rⁿ,
  where T⁺(x) = {i : x_i(Mx)_i > 0} and T⁻(x) = {i : x_i(Mx)_i < 0}, or equivalently
  xᵀMx ≥ −4κ ∑_{i∈T⁺(x)} x_i(Mx)_i, ∀x ∈ Rⁿ, (2.6)
  and
  P* = ∪_{κ≥0} P*(κ). (2.7)

The relationship between the above classes is as follows:

SS ⊂ PSD ⊂ P* = SU ⊂ CSU ⊂ P₀,  P ⊂ P*,  P ∩ SS = ∅. (2.8)

Some of these relations are obvious, like PSD = P*(0) ⊂ P* or P ⊂ P*, while others require a proof, which can be found in [12, 4, 34].

The above classes can be generalized to nonlinear functions as follows:

• Monotone functions,
  (∀x¹, x² ∈ Rⁿ)((x¹ − x²)ᵀ(f(x¹) − f(x²)) ≥ 0), (2.9)
  are a generalization of positive semidefinite matrices (PSD).

• P-functions,
  (∀x¹, x² ∈ Rⁿ, x¹ ≠ x²)(∃i ∈ I)((x¹_i − x²_i)(f_i(x¹) − f_i(x²)) > 0), (2.10)
  are a generalization of P-matrices. A special case of a P-function is a uniform P-function with parameter γ > 0:
  (∀x¹, x² ∈ Rⁿ, x¹ ≠ x²)(∃i ∈ I)((x¹_i − x²_i)(f_i(x¹) − f_i(x²)) ≥ γ||x¹ − x²||²). (2.11)

• P₀-functions,
  (∀x¹, x² ∈ Rⁿ, x¹ ≠ x²)(∃i ∈ I)(x¹_i − x²_i ≠ 0 and (x¹_i − x²_i)(f_i(x¹) − f_i(x²)) ≥ 0), (2.12)
  are a generalization of P₀-matrices.

Below we give a definition of P*(κ)-functions generalizing the definition of P*(κ)-matrices.

• P*(κ)-functions. A function f belongs to the class of P*(κ)-functions if for each x¹, x² ∈ Rⁿ the following inequality holds:
  (x² − x¹)ᵀ(f(x²) − f(x¹)) ≥ −4κ ∑_{i∈T_f⁺} (x²_i − x¹_i)(f_i(x²) − f_i(x¹)),
  where
  T_f⁺ = {i ∈ {1, …, n} : (x²_i − x¹_i)(f_i(x²) − f_i(x¹)) > 0},
  and κ ≥ 0 is a constant.

• P*-functions. A function f is a P*-function if there exists a κ ≥ 0 such that f is a P*(κ)-function. This is equivalent to P* = ∪_{κ≥0} P*(κ).

The classes of P*(κ)-functions and P*-functions were introduced independently in Jansen et al. [8] and first in the author's Ph.D. thesis [18]. Note that the class of monotone functions, considered in most papers about NCP, is included as the special case κ = 0, i.e., as the P*(0) case.

Throughout the paper we assume that the function f is a P*-function.
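As a quick numerical illustration of the P*(κ)-matrix condition (2.6), the following sketch evaluates the inequality on randomly sampled vectors. The 2x2 matrix M below is a hypothetical example, not taken from the paper, and random sampling can only refute the property, never certify it.

```python
import random

def p_star_margin(M, x, kappa):
    """Left-hand side of (2.6) rearranged: x^T M x + 4*kappa * sum_{i in T+} x_i (Mx)_i.
    The P*(kappa) property requires this to be nonnegative for every x."""
    n = len(M)
    Mx = [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
    xtMx = sum(x[i] * Mx[i] for i in range(n))
    plus = sum(x[i] * Mx[i] for i in range(n) if x[i] * Mx[i] > 0)  # sum over T+(x)
    return xtMx + 4.0 * kappa * plus

def looks_like_p_star(M, kappa, trials=2000, seed=0):
    """Sampling test: returns False as soon as a violating x is found."""
    rng = random.Random(seed)
    n = len(M)
    for _ in range(trials):
        x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        if p_star_margin(M, x, kappa) < -1e-12:
            return False
    return True

# Hypothetical test matrix: x^T M x = -x1*x2 can be negative, so M is not PSD = P*(0),
# but a short calculation shows M is P*(1/4).
M = [[0.0, 1.0], [-2.0, 0.0]]
print(looks_like_p_star(M, 0.0))    # a violation is found: not P*(0)
print(looks_like_p_star(M, 0.25))   # no violation found at kappa = 1/4
```

The example mirrors the inclusion chain (2.8): enlarging κ enlarges the class, and κ measures how far a matrix may be from positive semidefiniteness.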
The following lemma establishes a relationship between the P*(κ)-property of the function f and of its Jacobian matrix ∇f.

Lemma 2.1. The function f is a P*(κ)-function iff ∇f is a P*(κ)-matrix.

Proof: Suppose first that f is a P*(κ)-function, i.e.,

(x² − x¹)ᵀ(f(x²) − f(x¹)) ≥ −4κ ∑_{i∈T⁺} (x²_i − x¹_i)(f_i(x²) − f_i(x¹)).

Since f is a C¹ function, the following equations hold:

f(x + h) − f(x) = ∇f(x)h + o(||h||),
f_i(x + h) − f_i(x) = ∑_{j=1}^n (∇f(x))_{ij} h_j + o(||h||).

If we denote h = x² − x¹ and use the above equations, then the left-hand side of the above inequality becomes

(x² − x¹)ᵀ(f(x²) − f(x¹)) = hᵀ(f(x + h) − f(x)) = hᵀ∇f(x)h + o(||h||²),

while the right-hand side can be written as

−4κ ∑_{i∈T⁺} (x²_i − x¹_i)(f_i(x²) − f_i(x¹)) = −4κ ∑_{i∈T⁺} h_i (f_i(x + h) − f_i(x))
= −4κ ∑_{i∈T⁺} h_i ∑_{j=1}^n (∇f(x))_{ij} h_j + o(||h||²)
= −4κ ∑_{i∈T⁺} h_i (∇f(x)h)_i + o(||h||²).

We get

hᵀ∇f(x)h ≥ −4κ ∑_{i∈T⁺} h_i (∇f(x)h)_i + o(||h||²).

Given u, take h = εu. The above inequality transforms to

ε² uᵀ∇f(x)u ≥ −ε² 4κ ∑_{i∈T⁺} u_i (∇f(x)u)_i + o(ε²).

Dividing the above inequality by ε² and taking the limit as ε → 0 we have

uᵀ∇f(x)u ≥ −4κ ∑_{i∈T⁺} u_i (∇f(x)u)_i.

Hence ∇f(x) is a P*(κ)-matrix.

To prove the other implication, suppose that ∇f(x) is a P*(κ)-matrix, i.e., the above inequality holds. Using the mean value theorem for the function f we have

hᵀ(f(x + h) − f(x)) = hᵀ ∫₀¹ ∇f(x + th)h dt = ∫₀¹ hᵀ∇f(x + th)h dt
≥ ∫₀¹ ( −4κ ∑_{i∈T⁺} h_i (∇f(x + th)h)_i ) dt
= −4κ ∑_{i∈T⁺} h_i ∫₀¹ (∇f(x + th)h)_i dt
= −4κ ∑_{i∈T⁺} h_i (f_i(x + h) − f_i(x)).

Hence f is a P*(κ)-function. ♦

In [22] it was shown that the existence of a strictly complementary solution is necessary and sufficient to prove quadratic local convergence of an interior-point algorithm for the monotone LCP (see also [37]). This implies that we need to make the same assumption for the P*-NCP.

Existence of a strictly complementary solution (ESCS). NCP has a strictly complementary solution, i.e., there exists a point (x*, s*) ∈ F* such that x* + s* > 0.

Unfortunately, even in the case of the monotone NCP the above assumptions are not sufficient to prove linear global and quadratic local convergence of the interior-point algorithm; thus additional assumptions are necessary. Therefore, additional assumptions are necessary for the P*-NCP as well. They will be introduced as they are needed later in the text.

3. ALGORITHM

In the development of interior-point methods we can indicate two main approaches. The first is the application of the interior-point method to the original problem. In this case it is sometimes hard to deal with issues such as finding a feasible starting point, detecting infeasibility or, more generally, determining nonexistence of a solution (it is known that a monotone NCP may be feasible but still may not have a solution, which is not the case for the monotone LCP). Numerous procedures have been developed to overcome this difficulty (the "big M" method, phase I - phase II methods, etc.), but none of them is completely satisfactory.

It has been shown that a successful way to handle the problem is to build an augmented homogeneous self-dual model, which is always feasible, and then apply the interior-point method to that model. The "price" to pay is not that high (the dimension of the problem increases only by one), while on the other side the benefits are numerous and important (the analysis is simplified, the size of the initial point or solutions is irrelevant due to the homogeneity, detection of infeasibility is handled in a natural way, etc.).
This second approach originated in [38], and was successfully extended to LCP in [36], monotone NCP in [1], and SDP in [29]. Motivated by the above discussion, in this paper we consider the augmented homogeneous self-dual model of [1] to accompany the original NCP:

(HNCP)  s = τ f(x/τ),  σ = −xᵀf(x/τ),  xᵀs + τσ = 0,  (x, τ, s, σ) ≥ 0.

Lemma 3.1. HNCP is feasible, and every feasible point is a solution point.

The solutions of HNCP are related to the solutions of the original NCP as follows.

Lemma 3.2.
(i) If (x*, τ*, s*, σ*) is a solution of HNCP and τ* > 0, then (x*/τ*, s*/τ*) is a solution of NCP.
(ii) If (x*, s*) is a solution of NCP, then (x*, 1, s*, 0) is a solution of HNCP.

An immediate consequence of the above lemma is the existence of a strictly complementary solution of HNCP with τ* > 0, since in the previous section we assumed the existence of a strictly complementary solution of NCP.

Using the first two equations in HNCP we can define an augmented transformation

ψ(x, τ) = ( τ f(x/τ), −xᵀf(x/τ) ) : R₊₊ⁿ⁺¹ → Rⁿ⁺¹. (3.1)

The augmented transformation has several important properties stated in the following lemma.

Lemma 3.3.
(i) ψ is a C¹ homogeneous function of degree 1 satisfying

(x, τ)ᵀ ψ(x, τ) = 0. (3.2)

(ii) The Jacobian matrix ∇ψ(x, τ) of the augmented transformation (3.1) is given by

∇ψ(x, τ) = [ ∇f(x/τ)                        f(x/τ) − ∇f(x/τ)(x/τ) ]
           [ −f(x/τ)ᵀ − (x/τ)ᵀ∇f(x/τ)      (x/τ)ᵀ∇f(x/τ)(x/τ)   ], (3.3)

and the following equality holds:

(x, τ)ᵀ ∇ψ(x, τ) = −ψ(x, τ)ᵀ. (3.4)

The proofs of Lemmas 3.1-3.3 can be found in [1]. Now we prove that if the augmented transformation ψ is a P*(κ)-function, then f is a P*(κ)-function too.

Lemma 3.4. If ψ is a P*(κ)-function, then f is also a P*(κ)-function.

Proof: Using Lemma 2.1 we conclude that ∇ψ is a P*(κ)-matrix. From (3.3) and the fact that every principal submatrix of a P*(κ)-matrix is also a P*(κ)-matrix (see [12]), it follows that ∇f is a P*(κ)-matrix. Using Lemma 2.1 again, we conclude that f is a P*(κ)-function. ♦

It would be very desirable if the reverse implication were true, as is the case for monotone NCP. Unfortunately, that is not generally the case even for P*(κ)-LCPs, as shown by Peng et al. [25]. Thus, in what follows we will assume that ψ is a P*(κ)-function.

Note that not all of the nice properties of the homogeneous model for monotone NCP could be preserved for the P*(κ)-NCP. However, the homogeneous model still has merit, primarily because of its feasibility. In addition, the analysis that we provide in this paper holds if an interior-point method is used on the original problem rather than on the augmented homogeneous model.

The objective is to find an ε-approximate solution of HNCP. We will do so by using a long-step primal-dual infeasible-interior-point algorithm. To simplify the notation in the remainder of this paper, we let

x := (x, τ),  s := (s, σ). (3.5)

A long-step algorithm produces iterates (x^k, s^k) ∈ R₊₊²⁽ⁿ⁺¹⁾ belonging to

N∞⁻(β) = { (x, s) > 0 : Xs ≥ βμe, μ = xᵀs/(n+1) },  0 < β < 1,

which is the widest neighborhood of the central path

C(t) = { (x, s) > 0 : Xs = te, s − ψ(x) = t r⁰ },  0 < t ≤ 1,

where (x⁰, s⁰) > 0 is an initial point on the central path, r denotes the residual of a point (x, s),

r = s − ψ(x), (3.6)

so that r⁰ = s⁰ − ψ(x⁰), and X denotes the diagonal matrix corresponding to the vector x. If β = 0, then N∞⁻(β) is the entire nonnegative orthant, and if β = 1, then N∞⁻(β) shrinks to the central path C.

Now we state the algorithm.

Algorithm 3.5.

I (Initialization) Let ε > 0 be a given tolerance, and let β, η, γ ∈ (0,1) be given constants. Suppose a starting point (x⁰, s⁰) ∈ N∞⁻(β) is available. Calculate μ₀ = (x⁰)ᵀs⁰/(n+1) and set k = 0.

S (Step) Given (x^k, s^k) ∈ N∞⁻(β), solve the system

∇ψ(x^k)Δx − Δs = η r^k, (3.7)
S^k Δx + X^k Δs = γμ_k e − X^k s^k. (3.8)

Let

x(θ) = x^k + θΔx,  s(θ) = ψ(x(θ)) + (1 − ηθ) r^k, (3.9)

and perform a line search to determine the maximal stepsize 0 < θ_k < 1 such that

(x(θ_k), s(θ_k)) ∈ N∞⁻(β) (3.10)

and μ(θ_k) minimizes μ(θ). Set

x^{k+1} = x(θ_k),  s^{k+1} = s(θ_k). (3.11)

T (Termination) If

(x^{k+1}, s^{k+1}) ∈ Ψ_ε = { (x, s) ≥ 0 : xᵀs ≤ ε, ||s − ψ(x)|| ≤ ε }, (3.12)

then stop; otherwise set k := k + 1 and go to (S).

In the next two sections we will prove that there exist values of the parameters for which the algorithm has polynomial global convergence and quadratic local convergence, provided that some additional assumptions, stated later in the text, are satisfied.

Now we give some basic properties of the direction (Δx, Δs) and of the update (x(θ), s(θ)) calculated in Algorithm 3.5.

Lemma 3.6. Let (Δx, Δs) be a solution of the system (3.7)-(3.8). Then

(Δx)ᵀΔs = (Δx)ᵀ∇ψ(x^k)Δx + η(1 − η − γ)(n + 1)μ_k.

The proof of the above lemma can be found in [1].

The update (3.9) for s(θ) is obtained by approximating the residual r = s − ψ(x) with its first-order Taylor polynomial,

s(θ) − ψ(x(θ)) ≈ s^k − ψ(x^k) + θ(Δs − ∇ψ(x^k)Δx), (3.13)

or, by virtue of (3.7),

s(θ) ≈ ψ(x(θ)) + r^k − θη r^k.

Thus we set s(θ) := ψ(x(θ)) + (1 − θη) r^k, as stated in (3.9). Using (3.13) we have

X(θ)s(θ) = X(θ)( s^k + θΔs + ψ(x(θ)) − ψ(x^k) − θ∇ψ(x^k)Δx )
= (X^k + θΔX)(s^k + θΔs) + X(θ)( ψ(x(θ)) − ψ(x^k) − θ∇ψ(x^k)Δx )
= X^k s^k + θ(S^kΔx + X^kΔs) + θ²ΔXΔs + (X^k + θΔX)( ψ(x(θ)) − ψ(x^k) − θ∇ψ(x^k)Δx ).

If we denote the second-order term in the above expression by

h(θ) = θ²ΔXΔs + (X^k + θΔX)( ψ(x(θ)) − ψ(x^k) − θ∇ψ(x^k)Δx ), (3.14)

then, by (3.8),

X(θ)s(θ) = (1 − θ)X^k s^k + θγμ_k e + h(θ).
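To make the structure of one iteration concrete, here is a minimal numerical sketch in the spirit of the step (3.7)-(3.10), applied directly to the original NCP rather than to HNCP (the paper notes that its analysis also covers this setting). The monotone affine map f, its Jacobian, the parameter values, and the starting point are all illustrative assumptions, not data from the paper.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting; adequate for tiny dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            m = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= m * M[c][j]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        y[r] = (M[r][n] - sum(M[r][j] * y[j] for j in range(r + 1, n))) / M[r][r]
    return y

# Hypothetical monotone (hence P*(0)) map with Jacobian [[1,0],[1,1]]; its NCP has
# the strictly complementary solution x* = (1, 0), s* = (0, 1).
def f(x):
    return [x[0] - 1.0, x[1] + x[0]]

def jac(x):
    return [[1.0, 0.0], [1.0, 1.0]]

def step(x, s, beta=0.25, eta=0.5, gamma=0.5):
    """One damped Newton step patterned on (3.7)-(3.10), with psi replaced by f."""
    n = len(x)
    mu = sum(xi * si for xi, si in zip(x, s)) / n
    fx = f(x)
    r = [s[i] - fx[i] for i in range(n)]                     # residual, cf. (3.6)
    # Assemble the 2n x 2n system  [Df  -I; S  X] (dx, ds) = (eta*r, gamma*mu*e - X s).
    J = jac(x)
    A = [J[i] + [-1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    A += [[s[i] if j == i else 0.0 for j in range(n)] +
          [x[i] if j == i else 0.0 for j in range(n)] for i in range(n)]
    rhs = [eta * r[i] for i in range(n)] + [gamma * mu - x[i] * s[i] for i in range(n)]
    d = solve(A, rhs)
    dx, ds = d[:n], d[n:]
    # Backtrack until the update (3.9) stays in the wide neighborhood N_inf^-(beta).
    theta = 1.0
    while theta > 1e-10:
        xn = [x[i] + theta * dx[i] for i in range(n)]
        sn = [f(xn)[i] + (1.0 - eta * theta) * r[i] for i in range(n)]
        mun = sum(p * q for p, q in zip(xn, sn)) / n
        if all(v > 0.0 for v in xn + sn) and all(xn[i] * sn[i] >= beta * mun for i in range(n)):
            return xn, sn
        theta *= 0.5
    return x, s

x, s = [2.0, 1.0], [1.5, 3.5]     # interior point with residual r = (0.5, 0.5)
for _ in range(30):
    x, s = step(x, s)
print(sum(p * q for p, q in zip(x, s)))   # complementarity gap, driven toward 0
```

Because f here is affine, the second-order term h(θ) of (3.14) reduces to θ²ΔXΔs, and the line search typically accepts large steps, which is exactly the appeal of the wide neighborhood.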
4. GLOBAL CONVERGENCE

The analysis of this section yields the following polynomial complexity bound.

Theorem. Algorithm 3.5 obtains an ε-approximate solution (3.12) of HNCP in at most

O( (n + 1) vψ(α) δ (1 + 4κ)³ max{ ln((x⁰)ᵀs⁰/ε), ln(||r⁰||/ε) } )

iterations, where vψ(α) is defined by (4.8) and δ = δ₁δ₂, with δ₁, δ₂ defined by (4.34).

Proof: Substituting (4.32) and (4.33) into (4.7) we obtain

||h(θ)||∞ ≤ vψ(α) θ² δ (1 + 4κ)³ (n + 1) μ_k, (4.36)

where δ = δ₁δ₂. Comparing (4.6) and (4.36) we derive

θ̂ = (1 − β)γ / ( vψ(α) δ (1 + 4κ)³ (n + 1) ), (4.37)

provided that

θ̂ ||(X^k)⁻¹Δx||∞ ≤ α (4.38)

holds. To assure (4.38) we use (4.32) and the fact that (x^k, s^k) ∈ N∞⁻(β):

||(X^k)⁻¹Δx||∞ ≤ ||(X^k)⁻¹Δx|| (4.39)
= ||(X^k S^k)^(−1/2) (D^k)⁻¹Δx|| (4.40)
≤ ||(D^k)⁻¹Δx|| / √(βμ_k) (4.41)
≤ δ₁ (1 + 4κ)^(3/2) √((n + 1)/β). (4.42)

Substituting (4.42) into (4.38) we obtain

( (1 − β)γ / ( vψ(α) δ (1 + 4κ)³ (n + 1) ) ) δ₁ (1 + 4κ)^(3/2) √((n + 1)/β) ≤ α.

Since vψ > 1, δ₁ > 1, δ₂ > 1, β < 1 and γ ≤ β, the above inequality is implied by √β ≤ α. ♦

5. LOCAL CONVERGENCE

A positive sequence {a_k} is said to converge to zero with Q-order at least t > 1 if there exists a constant c ≥ 0 such that

a_{k+1} ≤ c a_k^t, ∀k. (5.1)

The above sequence converges to zero with Q-order exactly t̄ if

t̄ = sup{ t : {a_k} converges with Q-order at least t }, (5.2)

or, equivalently, iff

t̄ = liminf_{k→∞} ( log a_{k+1} / log a_k ). (5.3)

A positive sequence {a_k} is said to converge to zero with R-order at least t > 1 if there exist a constant c ≥ 0 and a constant b ∈ (0,1) such that

a_k ≤ c b^(t^k), ∀k. (5.4)

The key part in proving the local convergence result is relating the components of the iteration sequence (x^k, s^k) generated by Algorithm 3.5 to the primal-dual gap (x^k)ᵀs^k. We have the following lemma.

Lemma 5.1. Let (x*, s*) be a strictly complementary solution of HNCP, and let (x^k, s^k) be the k-th iterate of Algorithm 3.5. Then

(x*)ᵀs^k + (s*)ᵀx^k ≤ ϕ (x^k)ᵀs^k, (5.5)

where ϕ is defined by (5.6).

Proof: Using (4.18), (4.19), (4.21), the positivity of the initial point (x⁰, s⁰) > 0, the P* property of ψ, and the facts s* = ψ(x*), (x*)ᵀs* = 0, (x*, s*) ≥ 0, we derive

(x*)ᵀs^k + (s*)ᵀx^k = −(x^k − x*)ᵀ(s^k − s*) + (x^k)ᵀs^k
= −(x^k − x*)ᵀ(ψ(x^k) − ψ(x*)) − (r^k)ᵀ(x^k − x*) + (x^k)ᵀs^k
≤ 4κ ∑_{i∈Tψ⁺} (x^k_i − x*_i)(ψ_i(x^k) − ψ_i(x*)) − (r^k)ᵀx^k + (r^k)ᵀx* + (x^k)ᵀs^k.

Substituting ψ_i(x^k) − ψ_i(x*) = s^k_i − r^k_i − s*_i, writing r^k = Θ_k r⁰, and bounding the resulting terms one by one with the help of Lemma 4.5, we arrive at

(x*)ᵀs^k + (s*)ᵀx^k ≤ ( ||(X⁰)⁻¹x*||∞ + 4κ( 1 + ||X*r⁰||∞/μ₀ + 2(1 + 4κ)||(S⁰)⁻¹r⁰||∞ ) ) (x^k)ᵀs^k.

If we denote

ζ = ||(X⁰)⁻¹x*||∞,  ρ = ||X*r⁰||∞/μ₀,  ν = ||(S⁰)⁻¹r⁰||∞,  ϕ = ζ + 4κ(1 + ρ + 2(1 + 4κ)ν), (5.6)

then we obtain (5.5). ♦

It has been shown that for LP a unique partition {B, N} of the set {1, …, n} exists such that
(i) there exists a solution (x*, s*) with

x*_B > 0,  s*_N > 0, (5.7)

(ii) for each solution (x, s),

x_N = 0,  s_B = 0. (5.8)

The result has been generalized to LCP under the assumption that a strictly complementary solution exists (even in the P* case). Potra and Ye [30] showed that the same is true for NCP.
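The convergence notions (5.1)-(5.4) can be illustrated on the model sequence a_{k+1} = a_k², which converges to zero with Q-order, and hence R-order, at least 2:

```python
import math

# Model sequence a_{k+1} = a_k^2, starting from a_0 = 1/2.
a = [0.5]
for _ in range(6):
    a.append(a[-1] ** 2)

# Q-order estimate in the sense of (5.3): log a_{k+1} / log a_k -> 2.
ratios = [math.log(a[k + 1]) / math.log(a[k]) for k in range(len(a) - 1)]
print(ratios)   # each ratio is (numerically) 2

# R-order at least 2 in the sense of (5.4), with c = 1 and b = 1/2:
assert all(a[k] <= 0.5 ** (2 ** k) for k in range(len(a)))
```

Here a_k = (1/2)^(2^k) exactly, so both definitions are met with t = 2; the algorithmic results below establish exactly this behavior for the gap μ_k.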
Suppose that NCP has a strictly complementary solution and let {B_f, N_f} be the above-mentioned partition. Then, by virtue of Lemma 3.2 (i),

B = B_f ∪ {index of τ*}, (5.9)
N = N_f ∪ {index of σ*}, (5.10)

is a partition for HNCP. Now we are ready to prove the following important lemma.

Lemma 5.2. Suppose that HNCP has a strictly complementary solution (x*, s*). Let (x^k, s^k) be the k-th iterate of Algorithm 3.5. There exist three positive constants

ξ = (n + 1)ϕ / min{ min_{i∈B} x*_i, min_{i∈N} s*_i }, (5.11)
φ = β/ξ, (5.12)
ϑ = 2(1 + 4κ)(x⁰)ᵀs⁰ / min{ min_{i∈B} x⁰_i, min_{i∈N} s⁰_i }, (5.13)

such that

φ ≤ x^k_i ≤ ϑ,  s^k_i ≤ ξμ_k,  ∀i ∈ B, (5.14)
φ ≤ s^k_i ≤ ϑ,  x^k_i ≤ ξμ_k,  ∀i ∈ N. (5.15)

Proof: Using Lemma 5.1 and the partition {B, N} we obtain

∑_{i∈B} x*_i s^k_i + ∑_{i∈N} s*_i x^k_i ≤ ϕ (x^k)ᵀs^k.

Since (x^k, s^k) ∈ N∞⁻(β), from the above inequality we deduce, for each i ∈ B,

s^k_i ≤ ϕ (x^k)ᵀs^k / x*_i ≤ ξμ_k

and

x^k_i = (x^k_i s^k_i)/s^k_i ≥ βμ_k/(ξμ_k) = β/ξ = φ.

Also, an immediate consequence of Lemma 4.5 is

x^k_i ≤ 2(1 + 4κ)(x⁰)ᵀs⁰ / s⁰_i ≤ ϑ, ∀i ∈ {1, …, n + 1}.

Thus (5.14) is proved. Similarly we prove (5.15). ♦

An immediate consequence of the above lemma is the following corollary.

Corollary 5.3. Any accumulation point (x*, s*) of the sequence generated by Algorithm 3.5 is a strictly complementary solution of HNCP.

The above corollary, together with (5.9), (5.10), assures that a strictly complementary solution of HNCP will be of the type described in Lemma 3.2 (ii), thus enabling us to find a strictly complementary solution of NCP.

To prove the local convergence result we modify Algorithm 3.5 in such a way that for sufficiently large k, say k ≥ K, we set γ = 0, i.e., the centering part of the direction is omitted and only an affine-scaling direction is calculated. Hence the algorithm becomes an affine-scaling algorithm or, in other words, a damped Newton method. The existence of the threshold value K will be established later in the text. For now, without loss of generality, we can assume K = 0. In addition, instead of keeping a fixed neighborhood of the central path, we enlarge it at each iteration. Let

β₀ = β,  β_{k+1} = β_k − π_k, ∀k, (5.16)

where

∑_{k=0}^∞ π_k < ∞,  π_k > 0, ∀k. (5.17)

A particular choice of π_k is as in [31]:

π_k = β/2^(k+2). (5.18)

Thus β/2 < … < β_{k+1} < β_k < … < β₀ = β, and

N∞⁻(β) ⊆ N∞⁻(β_k) ⊆ N∞⁻(β_{k+1}) ⊆ N∞⁻(β/2). (5.19)

With the above modifications, Algorithm 3.5 reduces to the following affine-scaling algorithm.

Algorithm 5.4.

I (Initialization) Let ε > 0 be a given tolerance, and let β ∈ (0,1) be a given constant. Set β₀ = β. Suppose a starting point (x⁰, s⁰) ∈ N∞⁻(β₀) is available. Calculate μ₀ = (x⁰)ᵀs⁰/(n+1) and set k = 0.

S (Step) Given (x^k, s^k) ∈ N∞⁻(β_k), solve the system

∇ψ(x^k)Δx − Δs = r^k, (5.20)
S^kΔx + X^kΔs = −X^k s^k. (5.21)

Let

x(θ) = x^k + θΔx,  s(θ) = ψ(x(θ)) + (1 − θ)r^k, (5.22)

and perform a line search to determine the maximal stepsize 0 < θ_k < 1 such that

(x(θ_k), s(θ_k)) ∈ N∞⁻(β_{k+1}), (5.23)

and μ(θ_k) minimizes μ(θ). Set

x^{k+1} = x(θ_k),  s^{k+1} = s(θ_k), (5.24)

and

β_{k+1} = β_k − β/2^(k+2). (5.25)

T (Termination) If

(x^{k+1}, s^{k+1}) ∈ Ψ_ε = { (x, s) ≥ 0 : xᵀs ≤ ε, ||s − ψ(x)|| ≤ ε }, (5.26)

then stop; otherwise set k := k + 1 and go to (S).

A similar modification was employed in [37] on the predictor-corrector algorithm for the monotone LCP, in [30] on the potential reduction algorithm for monotone NCP, and in [31] on the path-following algorithm for monotone NCP. In the linear case, i.e., for LCP, the above modifications, together with the existence of a strictly complementary solution, were necessary and sufficient to prove local convergence. In the nonlinear case a certain additional assumption on the nonsingularity of a Jacobian submatrix is necessary. We adopt the corresponding assumption from [31].
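The expanding-neighborhood schedule (5.16)-(5.17) is easy to check numerically. The sketch below assumes the summable choice π_k = β/2^(k+2), which is consistent with the limiting neighborhood N∞⁻(β/2) in (5.19):

```python
beta = 0.25
betas = [beta]
for k in range(40):
    pi_k = beta / 2 ** (k + 2)   # assumed summable sequence for (5.18)
    betas.append(betas[-1] - pi_k)

# beta_k decreases strictly, so N(beta_k) widens monotonically,
# but the sum of pi_k is beta/2, so beta_k never drops below beta/2.
print(betas[0], betas[-1])       # 0.25 and a value just above 0.125
```

In other words, the line search is given slightly more room at every iteration, which is what eventually allows near-unit Newton steps, while the iterates remain safely inside the fixed neighborhood N∞⁻(β/2).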
Nonsingularity of the Jacobian submatrix (NJS). Let the Jacobian matrix ∇ψ be partitioned as follows:

∇ψ(x) = [ ∇ψ_BB(x)  ∇ψ_BN(x) ]
        [ ∇ψ_NB(x)  ∇ψ_NN(x) ], (5.27)

where {B, N} is the partition for HNCP described by (5.7)-(5.10). We assume that the matrix ∇ψ_BB is nonsingular on the following compact set:

Γ = { x ≥ 0 : x_B ≥ φ e_B, x ≤ ϑ e }, (5.28)

where φ and ϑ are defined in Lemma 5.2. ♦

So far we have made the following assumptions:
− the function ψ is a P*-function,
− the function ψ satisfies the scaled Lipschitz condition (SLC),
− the existence of a strictly complementary solution (ESCS),
− the nonsingularity of the Jacobian submatrix (NJS),
and we assume they hold throughout this section.

Since in this section γ = 0, i.e., η = 1, equations (4.3) and (4.4) reduce to

μ_{k+1} = (1 − θ_k)μ_k,  r^{k+1} = (1 − θ_k)r^k. (5.29)

If we are able to prove 1 − θ_k = O(μ_k), the local convergence result will follow. In order to do so we need to revisit the analysis performed for the global convergence and adjust it according to the modifications and assumptions made above.

Note first that the lemmas proved so far in this section remain valid for Algorithm 5.4. Next we show that the direction calculated in the algorithm is bounded from above by μ_k.

Lemma 5.5. Let (Δx, Δs) be a solution of the system (5.20)-(5.21). Then

||Δx|| ≤ c₀μ_k,  ||Δs|| ≤ c₀μ_k, (5.30)

where c₀ is a constant independent of k.

Proof: First we show that

||(Δx)_N|| ≤ c₀′μ_k,  ||(Δs)_B|| ≤ c₀′μ_k, (5.31)

for some constant c₀′ independent of k. We have

||(Δx)_N|| = ||D^k_N (D^k_N)⁻¹(Δx)_N|| ≤ ||D^k_N|| ||(D^k_N)⁻¹(Δx)_N|| ≤ ||D^k_N|| δ₁(1 + 4κ)^(3/2) √((n + 1)μ_k).

The last inequality above is due to (4.32). Next we need to estimate ||D^k_N||. Using (5.15) we obtain

||D^k_N|| = max_{i∈N} √(x^k_i/s^k_i) ≤ √(ξμ_k/φ).

Hence

||(Δx)_N|| ≤ ( √(ξ/φ) δ₁(1 + 4κ)^(3/2) √(n + 1) ) μ_k.

Similarly, by virtue of (4.33) and (5.14), we have

||(Δs)_B|| ≤ ( √(ξ/φ) δ₂(1 + 4κ)^(3/2) √(n + 1) ) μ_k.

Since by (4.34) δ₂ ≥ δ₁, we can set

c₀′ = √(ξ/φ) δ₂(1 + 4κ)^(3/2) √(n + 1),

and (5.31) is proved. We still need to prove

||(Δx)_B|| ≤ c₀μ_k,  ||(Δs)_N|| ≤ c₀μ_k, (5.32)

for some constant c₀ independent of k. Using (5.27), equation (5.20) can be partitioned into the system

∇ψ_BB(x^k)(Δx)_B + ∇ψ_BN(x^k)(Δx)_N − (Δs)_B = r^k_B, (5.33)
∇ψ_NB(x^k)(Δx)_B + ∇ψ_NN(x^k)(Δx)_N − (Δs)_N = r^k_N. (5.34)

Hence

||(Δx)_B|| ≤ ||(∇ψ_BB(x^k))⁻¹|| ( ||(Δs)_B|| + ||∇ψ_BN(x^k)|| ||(Δx)_N|| + ||r^k_B|| ), (5.35)
||(Δs)_N|| ≤ ||∇ψ_NB(x^k)|| ||(Δx)_B|| + ||∇ψ_NN(x^k)|| ||(Δx)_N|| + ||r^k_N||. (5.36)

From Lemma 5.2 it follows that the iterates (x^k, s^k) of Algorithm 5.4 belong to the set Γ defined by (5.28). By the (NJS) assumption, ∇ψ_BB(x^k) is nonsingular on Γ. Thus, since Γ is compact, all the matrices above are uniformly bounded. Also, from (5.29) we have

r^k = r⁰ μ_k/μ₀, i.e., ||r^k|| = ||r⁰|| μ_k/μ₀. (5.37)

Using the uniform boundedness of the matrices and substituting (5.31) and (5.37) into (5.35) and (5.36), we obtain (5.32), completing the proof of the lemma. ♦

Lemma 5.6. There exists a constant c₁, independent of k, such that

||h(θ)||∞ ≤ c₁μ_k², (5.38)

where h(θ) is defined by (3.14).

Proof: From the definition (3.14) of h(θ) we obtain, for each i ∈ {1, …, n + 1}, the following inequality:

|h_i(θ)| ≤ θ² |(Δx)_i(Δs)_i| + |x^k_i + θ(Δx)_i| |ψ_i(x^k + θΔx) − ψ_i(x^k) − θ∇ψ_i(x^k)Δx|. (5.39)

Recall that ψ satisfies the scaled Lipschitz condition (SLC), i.e., if

θ ||(X^k)⁻¹Δx||∞ ≤ α, (5.40)

then

||X^k ( ψ(x^k + θΔx) − ψ(x^k) − θ∇ψ(x^k)Δx )||∞ ≤ v(α) θ² |Δxᵀ∇ψ(x^k)Δx|. (5.41)

Substituting (5.40) and (5.41) into (5.39) we obtain

|h_i(θ)| ≤ θ² ||Δx|| ||Δs|| + (1 + α)v(α)θ² |Δxᵀ∇ψ(x^k)Δx|.

From Lemma 5.2 it follows that (x^k, s^k) ∈ Γ, where Γ is the compact set defined by (5.28). Therefore ∇ψ is uniformly bounded on Γ, i.e., there exists a constant M such that
Thus there exists a constant $M$ such that

$$|h_i(\theta)| \le \theta^2\,\|\Delta x\|\,\|\Delta s\| + (1+\alpha)\,v(\alpha)\,\theta^2 M\,\|\Delta x\|^2. \tag{5.42}$$

Using (5.30) and the fact that $\theta < 1$, from (5.42) we derive

$$|h_i(\theta)| \le c_0^2\,(1 + M(1+\alpha)v(\alpha))\,\mu_k^2, \tag{5.43}$$

provided that (5.40) holds. To ensure (5.40) we take the index $k$ sufficiently large; i.e., let $k$ be the first index, say $K_1$, such that

$$c_0\,\mu_k \le \alpha\phi, \tag{5.44}$$

where $\phi$ is defined in Lemma 5.2. Using (5.44), (5.30), the fact that $\theta < 1$, and Lemma 5.2, we have for $k \ge K_1$

$$\theta\,|(\Delta x)_i| \le |(\Delta x)_i| \le \|\Delta x\| \le c_0\,\mu_k \le \alpha\phi \le \alpha x_i^k, \qquad \forall i \in B.$$

Thus, (5.40) holds for sufficiently large $k$. Hence we have proved (5.43), but only for $i \in B$. We still need to prove (5.43) for $i \in N$. Since $\nabla\psi$ is uniformly bounded on $\Gamma$, we have

$$|\psi_i(x^k+\theta\Delta x) - \psi_i(x^k) - \theta\nabla\psi_i(x^k)\Delta x| = \left|\theta\int_0^1 \nabla\psi_i(x^k + t\theta\Delta x)\Delta x\,dt - \theta\nabla\psi_i(x^k)\Delta x\right| \le M\theta\,\|\Delta x\| \le M c_0\,\mu_k. \tag{5.45}$$

Also, using (5.15), (5.30), and the fact that $\theta < 1$, we obtain

$$|x_i^k + \theta(\Delta x)_i| \le (\xi + c_0)\,\mu_k. \tag{5.46}$$

Substituting (5.45) and (5.46) into (5.39) we get

$$|h_i(\theta)| \le (c_0^2 + c_0 M(\xi + c_0))\,\mu_k^2, \qquad \forall i \in N. \tag{5.47}$$

From (5.43) and (5.47) we derive (5.38). ♦

Lemma 5.7. Let $(x^k, s^k)$ be the $k$-th iterate of Algorithm 5.4. Define

$$\hat\theta_k = 1 - c_1\,\frac{\mu_k}{\pi_k}, \tag{5.48}$$

where $c_1$ is defined in Lemma 5.6 and $\pi_k$ is defined by (5.18). Then $(x(\hat\theta_k), s(\hat\theta_k)) \in \mathcal{N}_\infty^-(\beta_{k+1})$.

Proof: From (4.5), (5.16), (5.29) and (5.38) we obtain

$$X(\theta)s(\theta) - \beta_{k+1}\mu(\theta)e = (1-\theta)X^k s^k + h(\theta) - \beta_{k+1}(1-\theta)\mu_k e \ge (1-\theta)\beta_k\mu_k e - (\beta_k - \pi_k)(1-\theta)\mu_k e + h(\theta) \ge \pi_k(1-\theta)\mu_k e - c_1\mu_k^2 e. \tag{5.49}$$

If we take $\theta = \hat\theta_k$ as in (5.48), then $\pi_k(1-\hat\theta_k)\mu_k = c_1\mu_k^2$, so the right-hand side of (5.49) is nonnegative, which proves the lemma. ♦

Note that an immediate consequence of the above lemma is $\hat\theta_k \to 1$, which means that Algorithm 5.4 approaches the pure Newton method.

Now we have all the ingredients to prove the following local convergence result.

Theorem 5.8. Let $\{(x^k, s^k)\}$ be the sequence generated by Algorithm 5.4. Then
(i) $\mu_k \to 0$ with Q-order 2 and R-order at least 2,
(ii) $(x^k, s^k) \to (x^*, s^*)$ with R-order at least 2.

Proof: (i) Using the rule for selecting the stepsize in Algorithm 5.4 and Lemma 5.7, we have

$$\mu_{k+1} = \mu(\theta_k) \le \mu(\hat\theta_k) = (1-\hat\theta_k)\,\mu_k = c_1\,\frac{\mu_k}{\pi_k}\,\mu_k \le \frac{c_1}{\beta}\,\mu_k^2. \tag{5.50}$$

Let $k = K_2$ be such that

$$c_1\,\mu_{K_2} < \beta. \tag{5.51}$$

Now, using (5.44) and (5.51), we can define

$$K = \max\{K_1, K_2\}, \tag{5.52}$$

and, resetting the iteration count at $K$, set

$$\mu_0 = \mu_K, \qquad b = \frac{c_1}{\beta}\,\mu_0 < 1. \tag{5.53}$$

Hence, from (5.50) we have

$$\mu_k \le b^{2^k}, \qquad k \ge K. \tag{5.54}$$

Next, observe that from (5.50) we obtain

$$\log\mu_{k+1} \le 2\log\mu_k + (k+1)\log 2 + \log(c_1/\beta), \tag{5.55}$$

and from (5.54) we obtain

$$\log\mu_k \le 2^k\log b < 0, \tag{5.56}$$

i.e.

$$|\log\mu_k| \ge 2^k\,|\log b|. \tag{5.57}$$

Thus, using (5.57) we derive

$$\lim_{k\to\infty}\left|\frac{(k+1)\log 2 + \log(c_1/\beta)}{\log\mu_k}\right| \le \lim_{k\to\infty}\frac{|(k+1)\log 2 + \log(c_1/\beta)|}{2^k\,|\log b|} = 0. \tag{5.58}$$

Hence, taking into account (5.57) and (5.58), we obtain from (5.55)

$$\liminf_{k\to\infty}\frac{\log\mu_{k+1}}{\log\mu_k} \ge 2. \tag{5.59}$$

Using definitions (5.3) and (5.4), we conclude from (5.54) and (5.59) that $\mu_k \to 0$ with Q-order 2 and R-order at least 2.

(ii) First we show that $\{(x^k, s^k)\}$ is a Cauchy sequence. Take any $m > k \ge K$. Then

$$\|x^m - x^k\| \le \sum_{i=0}^{m-k-1}\|x^{k+i+1} - x^{k+i}\| \le \sum_{i=0}^{\infty}\|x^{k+i+1} - x^{k+i}\|. \tag{5.60}$$

Using (5.30) and (5.54) we have

$$\|x^{k+i+1} - x^{k+i}\| = \theta_{k+i}\,\|\Delta x\| \le c_0\,\mu_{k+i} \le c_0\,b^{2^{k+i}}. \tag{5.61}$$

Substituting (5.61) into (5.60), and using $2^{k+i} \ge 2^k + i$, we obtain

$$\|x^m - x^k\| \le c_0\,b^{2^k}\sum_{i=0}^{\infty} b^{\,i} = \frac{c_0}{1-b}\,b^{2^k}. \tag{5.62}$$

We have a similar estimate for $s^k$, proving that $\{(x^k, s^k)\}$ is a Cauchy sequence. Hence the sequence must be convergent, and by (5.9), (5.10) and Corollary 5.3 it converges to a strictly complementary solution $(x^*, s^*)$ of HNCP, from which a strictly complementary solution of NCP can be derived using Lemma 3.2. If we let $m \to \infty$, then from (5.62) we obtain

$$\|x^* - x^k\| \le \frac{c_0}{1-b}\,b^{2^k}, \tag{5.63}$$

and similarly for $s^k$. Thus, $(x^k, s^k) \to (x^*, s^*)$ with R-order at least 2. ♦
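The Q-quadratic decrease of $\mu_k$ established in part (i) can be illustrated numerically. In the sketch below, the constants $c_1$, $\beta$, $\mu_0$ are made-up illustrative values chosen so that $c_1\mu_0 < \beta$, mirroring condition (5.51); they are not values from the analysis. Iterating the worst case of (5.50) drives the log-ratio from (5.59) toward 2.

```python
import math

# Worst case of (5.50): mu_{k+1} = (c1/beta) * mu_k**2.
# c1, beta, mu0 are made-up illustrative constants with c1*mu0 < beta,
# mirroring (5.51); they are NOT values taken from the paper's analysis.
c1, beta, mu0 = 2.0, 0.5, 0.1

mus = [mu0]
for _ in range(5):
    mus.append((c1 / beta) * mus[-1] ** 2)   # error is squared at each step

# Q-order indicator from (5.59): log(mu_{k+1}) / log(mu_k) climbs toward 2.
ratios = [math.log(mus[k + 1]) / math.log(mus[k]) for k in range(5)]
print(mus)
print(ratios)
```

The first ratios sit well below 2 because the factor $c_1/\beta$ still dominates; as $\mu_k \to 0$ its influence vanishes, which is exactly the content of (5.58).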
We have proved that if $k \ge K$, where $K$ is the threshold value defined by (5.52), then it is no longer necessary to calculate the centering part of the direction in Algorithm 3.5, because the algorithm produces iterates which are not only centered but also converge to a strictly complementary solution R-quadratically. The threshold value $K$ is a theoretical one, because some of the constants used in its calculation may not be known in advance. In practice, as discussed in [37, 31], various heuristic procedures can be developed to determine when to switch from Algorithm 3.5 to Algorithm 5.4. Thus, a practical implementation would be a hybrid algorithm which starts with Algorithm 3.5 and then uses a heuristic "switch time check" procedure to switch to Algorithm 5.4 when suitable.

REFERENCES

[1] Andersen, E., and Ye, Y., "On a homogeneous algorithm for the monotone complementarity problem", Mathematical Programming, 84 (1999) 375-399.
[2] Anitescu, M., Lesaja, G., and Potra, F.A., "Equivalence between different formulations of the linear complementarity problem", Optimization Methods and Software, (1997) 265-290.
[3] Anitescu, M., Lesaja, G., and Potra, F.A., "An infeasible interior-point predictor-corrector algorithm for the P*-geometric LCP", Applied Mathematics and Optimization, 36 (1997) 203-228.
[4] Cottle, R.W., Pang, J.-S., and Stone, R.E., The Linear Complementarity Problem, Academic Press, Boston, MA, 1992.
[5] Dikin, I.I., "Iterative solution of problems of linear and quadratic programming", Soviet Mathematics Doklady, (1967) 674-675.
[6] Ferris, M.C., and Pang, J.-S. (eds.), Complementarity and Variational Problems: State of the Art, SIAM Publications, Philadelphia, Pennsylvania, 1997.
[7] Güler, O., "Existence of interior points and interior paths in nonlinear monotone complementarity problems", Mathematics of Operations Research, 18 (1993) 148-162.
[8] Jansen, B., Roos, K., Terlaky, T., and Yoshise, A., "Polynomiality of primal-dual affine scaling algorithms for nonlinear complementarity problems", Mathematical Programming, 78 (1997) 315-345.
[9] Jarre, F., "Interior-point methods via self-concordance or relative Lipschitz condition", Optimization Methods and Software, (1995) 75-104.
[10] Ji, J., Potra, F.A., and Sheng, R., "A predictor-corrector method for solving the P*-matrix LCP from infeasible starting points", Optimization Methods and Software, (1995) 109-126.
[11] Kojima, M., Megiddo, N., and Mizuno, S., "A general framework of continuation methods for complementarity problems", Mathematics of Operations Research, 18 (4) (1993) 945-963.
[12] Kojima, M., Megiddo, N., Noma, T., and Yoshise, A., "A unified approach to interior point algorithms for linear complementarity problems", Lecture Notes in Computer Science, 538, 1991.
[13] Kojima, M., Megiddo, N., and Noma, T., "Homotopy continuation method for nonlinear complementarity problems", Mathematics of Operations Research, 16 (1991) 754-774.
[14] Kojima, M., Mizuno, S., and Noma, T., "A new continuation method for complementarity problems with uniform P-function", Mathematical Programming, 43 (1989) 107-113.
[15] Kojima, M., Mizuno, S., and Noma, T., "Limiting behaviour of trajectories generated by a continuation method for monotone complementarity problems", Mathematics of Operations Research, 15 (1990) 662-675.
[16] Kojima, M., Mizuno, S., and Yoshise, A., "A convex property of monotone complementarity problems", Research Reports on Information Sciences B-267, Department of Information Sciences, Tokyo Institute of Technology, Tokyo, Japan, March 1993.
[17] Kojima, M., Noma, T., and Yoshise, A., "Global convergence in infeasible-interior-point algorithms", Mathematical Programming, 65 (1994) 43-72.
[18] Lesaja, G., "Interior point methods for P*-complementarity problems", PhD Thesis, University of Iowa, Iowa City, IA, USA, 1996.
[19] McLinden, L., "The analogue of Moreau's proximation theorem, with applications to the nonlinear complementarity problem", Pacific Journal of Mathematics, 88 (1980) 101-161.
[20] Miao, J., "A quadratically convergent O((1+κ)√n L)-iteration algorithm for the P*(κ)-matrix linear complementarity problem", Research Report RRR 93, RUTCOR - Rutgers Center for Operations Research, Rutgers University, New Brunswick, NJ, USA, 1993.
[21] Monteiro, R.D.C., Pang, J.-S., and Wang, T., "A positive algorithm for the nonlinear complementarity problem", SIAM Journal on Optimization, (1995) 129-148.
[22] Monteiro, R.D.C., and Wright, S.J., "Local convergence of interior-point algorithms for degenerate monotone LCP", Computational Optimization and Applications, (1994) 131-155.
[23] Nesterov, Y.E., "Long-step strategies in interior point potential-reduction methods", Working paper, Department SES COMIN, Geneva University, 1993 (to appear in Mathematical Programming).
[24] Nesterov, Y.E., and Nemirovsky, A.S., Interior Point Polynomial Methods in Convex Programming: Theory and Algorithms, SIAM Publications, Philadelphia, USA, 1994.
[25] Peng, J., Roos, C., and Terlaky, T., "New complexity analysis of primal-dual Newton methods for P*(κ) linear complementarity problems", in: H. Frenk, C. Roos, T. Terlaky, and S. Zhang (eds.), High Performance Optimization, Kluwer Academic Publishers, Boston, USA, 1999, 245-265.
[26] Peng, J., Roos, C., Terlaky, T., and Yoshise, A., "Self-regular proximities and new search directions for nonlinear P*(κ) complementarity problems", Preprint, Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada, December 2000.
[27] Potra, F.A., "On Q-order and R-order of convergence", Journal of Optimization Theory and Applications, 63 (1989) 415-431.
[28] Potra, F.A., and Sheng, R., "A large-step infeasible interior point method for the P*-matrix LCP", SIAM Journal on Optimization, (1997) 318-335.
[29] Potra, F.A., and Sheng, R., "Homogeneous interior-point algorithms for semidefinite programming", Optimization Methods and Software, (1998) 161-184.
[30] Potra, F.A., and Ye, Y., "Interior point methods for nonlinear complementarity problems", Journal of Optimization Theory and Applications, 88 (1996) 617-647.
[31] Sun, J., and Zhao, G., "A quadratically convergent polynomial long-step algorithm for a class of nonlinear complementarity problems", Working paper, National University of Singapore, Republic of Singapore, December 1995.
[32] Tseng, P., "Analysis of an infeasible interior path-following method for complementarity problems", Report, Department of Mathematics, University of Washington, Seattle, Washington, USA, September 1997.
[33] Tseng, P., "An infeasible path-following method for monotone complementarity problems", SIAM Journal on Optimization, (1997) 386-402.
[34] Väliaho, H., "P*-matrices are just sufficient", Linear Algebra and Its Applications, 239 (1996) 103-108.
[35] Wright, S., and Ralph, D., "Superlinear convergence of an interior-point method for monotone variational inequalities", Mathematics of Operations Research, 21 (1996) 815-838.
[36] Ye, Y., "On homogeneous and self-dual algorithm for LCP", Mathematical Programming, 76 (1997) 211-222.
[37] Ye, Y., and Anstreicher, K., "On quadratic and O(√n L) convergence of a predictor-corrector algorithm for LCP", Mathematical Programming, 62 (3) (1993) 537-551.
[38] Ye, Y., Todd, M., and Mizuno, S., "An O(√n L)-iteration homogeneous and self-dual linear programming algorithm", Mathematics of Operations Research, 19 (1994) 53-67.
