TOP
DOI 10.1007/s11750-014-0319-y
INVITED PAPER

Farkas' lemma: three decades of generalizations for mathematical optimization

N. Dinh · V. Jeyakumar

© Sociedad de Estadística e Investigación Operativa 2014

Abstract In this paper we present a survey of generalizations of the celebrated Farkas' lemma, starting from systems of linear inequalities and moving to a broad variety of non-linear systems. We focus on the generalizations which are targeted towards applications in continuous optimization. We also briefly describe the main applications of generalized Farkas' lemmas to continuous optimization problems.

Keywords Generalized Farkas' lemma · Optimality · Duality · Mathematical optimization

Mathematics Subject Classification (2000) 90C60, 90C56, 90C26

This invited paper is discussed in the comments available at doi:10.1007/s11750-014-0315-2, doi:10.1007/s11750-014-0316-1, doi:10.1007/s11750-014-0317-0 and doi:10.1007/s11750-014-0318-z. Research was partially supported by grants from the Australian Research Council and from NAFOSTED, Vietnam.

N. Dinh
Department of Mathematics, International University, Vietnam National University-Ho Chi Minh City, Ho Chi Minh City, Vietnam
e-mail: ndinh02@yahoo.fr

V. Jeyakumar
Department of Applied Mathematics, University of New South Wales, Sydney 2052, Australia
e-mail: v.jeyakumar@unsw.edu.au

Introduction

The celebrated Farkas' lemma (Farkas 1902) states that, given any vectors a_1, a_2, ..., a_m and c in R^n, the linear inequality c^T x ≥ 0 is a consequence of the linear system a_i^T x ≥ 0, i = 1, 2, ..., m, if and only if there exist multipliers λ_i ≥ 0 such that c = Σ_{i=1}^m λ_i a_i. Farkas' lemma can also be expressed as a theorem of the alternative: exactly one of the following alternatives is true:
(i) ∃x ∈ R^n such that a_i^T x ≥ 0, i = 1, 2, ..., m, and c^T x < 0;
(ii) ∃λ_i ≥ 0, i = 1, 2, ..., m, such that c = Σ_{i=1}^m λ_i a_i.

Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). Mathematical optimization, together with sophisticated computer models, is nowadays a widely used technique for addressing major challenges of modern problems.

A large variety of proofs of the lemma can be found in the literature (see Craven 1978; Mangasarian 1969; Prekopa 1980). The proof that relies on separation theorems has led to numerous extensions over the last 30 years. These extensions cover a wide range of systems, including conic-linear and conic-sublinear systems, convex inequality systems, conic-convex systems, and classes of non-convex systems such as difference of convex (DC) systems, composite convex systems and quadratic systems. The most recent extensions cover inequality systems involving uncertain vectors, matrices, and polynomials.

Applications of Farkas' lemma, especially in continuous optimization, range from classical non-linear programming and non-smooth optimization to modern areas of optimization such as conic programming (Boyd and Vandenberghe 2004; Ben-Tal and Nemirovski 2000) and robust optimization (Ben-Tal and El Ghaoui 2009; Bertsimas et al 2011). However, applications of Farkas' lemma and its generalizations are not limited to mathematical optimization: the lemma has also been used in many other fields, such as mathematical economics and finance (see, e.g., Elliott and Kopp 2005; Franklin 1983). In this survey, we only describe certain main generalizations of Farkas' lemma and their applications to problems in various areas of continuous optimization, including conic optimization and robust optimization. An earlier brief survey of generalizations of Farkas' lemma can be found in Jeyakumar (2008).
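The theorem of the alternative stated above lends itself to a quick numerical check. The following sketch is not part of the original survey; it assumes NumPy and SciPy are available and decides which alternative holds by solving two small linear programs with scipy.optimize.linprog. The function name farkas_alternative and the box used to keep the second LP bounded are illustrative choices only.

```python
# Minimal numerical illustration of the classical Farkas alternative (sketch).
import numpy as np
from scipy.optimize import linprog

def farkas_alternative(A, c):
    """Rows of A are a_1,...,a_m.  Returns ('ii', lam) if c = A^T lam for some
    lam >= 0, otherwise ('i', x) with A x >= 0 and c^T x < 0."""
    m, n = A.shape
    # Feasibility LP for alternative (ii): find lam >= 0 with A^T lam = c.
    res = linprog(np.zeros(m), A_eq=A.T, b_eq=c,
                  bounds=[(0, None)] * m, method="highs")
    if res.status == 0:
        return "ii", res.x
    # Otherwise alternative (i) holds; a certificate is found by minimizing
    # c^T x over {x : A x >= 0} intersected with a box (to keep the LP bounded).
    res = linprog(c, A_ub=-A, b_ub=np.zeros(m),
                  bounds=[(-1, 1)] * n, method="highs")
    return "i", res.x

if __name__ == "__main__":
    A = np.array([[1.0, 0.0], [0.0, 1.0]])   # the system x1 >= 0, x2 >= 0
    print(farkas_alternative(A, np.array([1.0, 2.0])))   # expected: ('ii', ...)
    print(farkas_alternative(A, np.array([-1.0, 1.0])))  # expected: ('i', ...)
```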
Linear systems

The Farkas' lemma for a finite system of linear inequalities has been generalized to systems involving arbitrary convex cones and continuous linear mappings between spaces of arbitrary dimensions. In this case the lemma holds under a crucial closed cone condition. The main version of such an extension is given as follows:

Theorem 2.1 (Craven (1978) Conical linear systems) Let X and Y be Banach spaces, let A : X → Y be a continuous linear operator, and let S ⊂ Y be a closed convex cone with dual cone S+. Suppose that A∗(S+) is weak∗-closed, where A∗ is the adjoint operator of A. Then, for any continuous linear functional c on X, the following equivalence holds:

[A(x) ∈ S ⇒ c(x) ≥ 0] ⇔ c ∈ A∗(S+).   (1)

The closed cone condition holds when S is a polyhedral cone in some finite dimensional space. For simple examples of non-polyhedral convex cones in finite dimensions where the closure condition does not hold, see Ben-Israel (1969) and Craven (1978). However, the following asymptotic version of Farkas' lemma holds without a closure condition:

[A(x) ∈ S ⇒ c(x) ≥ 0] ⇔ c ∈ cl(A∗(S+)),   (2)

where cl(A∗(S+)) is the closure of A∗(S+) in the appropriate topology. These extensions resulted in the development of asymptotic and non-asymptotic first-order necessary optimality conditions for infinite dimensional smooth constrained optimization problems involving convex cones, and in duality theory for infinite dimensional linear programming problems (see e.g. Goberna et al 1981). Smooth optimization refers to the optimization of a differentiable function. A non-asymptotic form of an extension of Farkas' lemma that is different from the one in (1) is given in Lasserre (1997) without the usual closure condition. An approach to the study of semi-infinite programming, based on a generalized Farkas' lemma for infinite linear inequalities, is given in Goberna et al (1981).

Uncertain linear systems

The data of many real-world optimization problems are often uncertain, and so the study of optimization problems in the face of data uncertainty is becoming increasingly important in mathematical optimization. Robust optimization has emerged as a powerful approach for treating optimization problems under data uncertainty. Such a study requires a robust form of Farkas' lemma for uncertain linear systems. A robust form of Farkas' lemma was given in Jeyakumar and Li (2011).

Theorem 2.2 (Jeyakumar and Li (2011) Robust Farkas' lemma) Let X, Y be locally convex Hausdorff spaces and let L(X; Y) be the space of all continuous linear mappings from X to Y. Let U ⊆ L(X; Y) and V ⊆ X∗ be closed convex uncertainty sets, where X∗ is the continuous dual space of X. Let S be a closed convex cone in Y. If co ∪_{A∈U} A∗(S+) is weak∗-closed, then the following statements are equivalent:
(i) [Ax ∈ −S for all A ∈ U] ⇒ [c(x) ≥ 0 for all c ∈ V];
(ii) −V ⊆ co ∪_{A∈U} A∗(S+).

As a special case, we obtain the following robust Farkas' lemma involving interval uncertainty sets. In this case, the closed convex cone condition of the preceding theorem is automatically satisfied.

Theorem 2.3 (Jeyakumar and Li (2009) Robust Farkas' lemma with interval uncertainty) Let a̲_i, ā_i ∈ R^n with a̲_i ≤ ā_i, for i = 0, 1, ..., m. Then the following statements are equivalent:
(i) [a_i^T x ≤ 0 for all a_i ∈ [a̲_i, ā_i], i = 1, ..., m] ⇒ [a_0^T x ≥ 0 for all a_0 ∈ [a̲_0, ā_0]];
(ii) for every a_0 ∈ [a̲_0, ā_0] there exists λ ∈ R^m_+ such that a_0 + Σ_{i=1}^m λ_i ā_i ≥ 0 and a_0 + Σ_{i=1}^m λ_i a̲_i ≤ 0.
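Condition (ii) of Theorem 2.3 can be tested numerically on small instances. The sketch below is an illustration only (not from the survey); it assumes NumPy/SciPy and relies on the observation that the set of vectors a_0 admitting a multiplier λ is convex (it is the projection of a convex set), so it suffices to verify (ii) at the vertices of the box [a̲_0, ā_0], one small feasibility LP per vertex.

```python
# Hypothetical check of condition (ii) in Theorem 2.3 for a small instance.
import itertools
import numpy as np
from scipy.optimize import linprog

def robust_interval_farkas(a_lo, a_hi):
    """a_lo[i], a_hi[i] are the interval endpoints of a_i, i = 0,...,m (row 0
    is a_0).  Returns True if every vertex a0 of [a_lo[0], a_hi[0]] admits
    lam >= 0 with a0 + U^T lam >= 0 and a0 + L^T lam <= 0 componentwise."""
    n = a_lo.shape[1]
    m = a_lo.shape[0] - 1
    U = a_hi[1:]   # upper endpoints of a_1,...,a_m  (m x n)
    L = a_lo[1:]   # lower endpoints                  (m x n)
    for vertex in itertools.product(*[(a_lo[0, j], a_hi[0, j]) for j in range(n)]):
        a0 = np.array(vertex)
        # Feasibility LP in lam:  -U^T lam <= a0  and  L^T lam <= -a0,  lam >= 0.
        A_ub = np.vstack([-U.T, L.T])
        b_ub = np.concatenate([a0, -a0])
        res = linprog(np.zeros(m), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * m, method="highs")
        if res.status != 0:
            return False
    return True

if __name__ == "__main__":
    # One constraint vector a_1 in [(1,1),(2,2)]; objective a_0 in [(-2,-2),(-1,-1)].
    a_lo = np.array([[-2.0, -2.0], [1.0, 1.0]])
    a_hi = np.array([[-1.0, -1.0], [2.0, 2.0]])
    print(robust_interval_farkas(a_lo, a_hi))  # expected: True
```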
A generalization of the robust form of Farkas' lemma for uncertain infinite linear systems was given in Goberna et al (2013). Consider an infinite uncertain linear system of the form

⟨a_t, x⟩ ≥ b_t,  ∀t ∈ T,

where T is an arbitrary index set, c, a_t ∈ R^n, and b_t ∈ R, t ∈ T. The linear system in the face of data (or parameter) uncertainty can be captured by the parameterized system

⟨a_t(v_t), x⟩ ≥ b_t(w_t),  ∀t ∈ T,

where a_t : V_t → R^n, b_t : W_t → R, V_t ⊂ R^{q_1}, W_t ⊂ R^{q_2}, q_1, q_2 ∈ N. The uncertain set-valued mapping U : T ⇉ R^q, q = q_1 + q_2, is defined by U_t := V_t × W_t for all t ∈ T. An arbitrary element of U_t, or a variable ranging over U_t, is represented by u_t := (v_t, w_t) ∈ V_t × W_t. Note that u ∈ U means that u is a selection of U, i.e., that u : T → R^q and u_t ∈ U_t for all t ∈ T (u can also be represented as (u_t)_{t∈T}). A simple case of the robust semi-infinite Farkas' lemma is given below using the robust moment cone M, which is given by

M := ∪_{u=(v_t,w_t)_{t∈T} ∈ U} co cone{(a_t(v_t), b_t(w_t)), t ∈ T; (0_n, −1)}.

We use R^(T) to denote the space of real tuples λ = (λ_t)_{t∈T} with only finitely many λ_t ≠ 0, and we let R^(T)_+ denote the non-negative cone in R^(T), that is,

R^(T)_+ := {(λ_t)_{t∈T} ∈ R^(T) : λ_t ≥ 0 for each t ∈ T}.

Theorem 2.4 (Goberna et al (2013) Robust Farkas' lemma – semi-infinite case) Let T be an arbitrary index set, a_t : V_t → R^n, b_t : W_t → R, V_t ⊂ R^{q_1}, W_t ⊂ R^{q_2}, q_1, q_2 ∈ N. Let c ∈ R^n and r ∈ R. Suppose that the robust moment cone M is closed and convex. Then the following statements are equivalent:
(i) ⟨a_t(v_t), x⟩ ≥ b_t(w_t), ∀(t, u_t) ∈ gph U ⇒ ⟨c, x⟩ ≥ r;
(ii) there exist (λ_t)_{t∈T} ∈ R^(T)_+ and (v_t, w_t) ∈ U_t, t ∈ T, such that −c + Σ_{t∈T} λ_t a_t(v_t) = 0_n and Σ_{t∈T} λ_t b_t(w_t) ≥ r.

Sublinear systems

The success of linear programming duality and the practical nature of the Lagrange multiplier conditions for smooth optimization have led to extensions of Farkas' lemma to systems involving non-linear functions. Convex analysis allowed one to obtain extensions in terms of subdifferentials, replacing the linear systems by sublinear (convex and positively homogeneous) systems (Glover 1982; Zălinescu 1978). A simple form of such an extension is as follows:

Theorem 3.1 (Glover (1982) Conical sublinear systems) Let f : X → R be a real-valued continuous sublinear function and let g : X → Y be sublinear with respect to the cone S, with vg lower semi-continuous for each v ∈ S+. If the convex cone ∪_{λ∈S+} ∂(λg)(0) is weak∗-closed, then the following statements are equivalent:

−g(x) ∈ S ⇒ f(x) ≥ 0,   (3)

0 ∈ ∂f(0) + ∪_{λ∈S+} ∂(λg)(0),   (4)

where ∂h(0) denotes the subdifferential of the function h at 0.

In the absence of the closed cone condition, the certificate (4) in Theorem 3.1 can be replaced by the asymptotic condition 0 ∈ ∂f(0) + cl[∪_{λ∈S+} ∂(λg)(0)]. This extension was used to obtain optimality conditions for convex optimization problems and for quasi-differentiable problems in the sense of Pshenichnyi (1971). A review of results of Farkas type for systems involving sublinear functions is given in Gwinner (1987a,b).
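To see how Theorem 3.1 contains the classical lemma, consider the following linear specialization; this routine verification is included here only as an illustration and is not part of the original survey.

```latex
% Linear specialization of Theorem 3.1 (illustrative).
Take $X=\mathbb{R}^n$, $Y=\mathbb{R}^m$, $S=\mathbb{R}^m_+$, $f(x)=c^{T}x$ and $g(x)=-Ax$,
so that $-g(x)\in S$ means $Ax\ge 0$. For $\lambda\in S^{+}=\mathbb{R}^m_+$ the function
$(\lambda g)(x)=-\lambda^{T}Ax$ is linear, hence
$\partial(\lambda g)(0)=\{-A^{T}\lambda\}$ and $\partial f(0)=\{c\}$. Condition (4) then reads
\[
0\in\{c\}+\{-A^{T}\lambda:\lambda\ge 0\}
\iff c=A^{T}\lambda \ \text{for some }\lambda\ge 0,
\]
which is exactly the multiplier condition of the classical Farkas lemma. The cone
$\bigcup_{\lambda\in S^{+}}\partial(\lambda g)(0)=-A^{T}\mathbb{R}^m_+$ is polyhedral,
hence closed, so the closure condition of Theorem 3.1 holds automatically here.
```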
A generalization of Farkas' lemma of the form in Theorem 3.1 holds without the closed cone condition for separable-sublinear systems, as given in the following theorem. Recall that a function f : R^n → R is said to be separable-sublinear whenever f(x) = Σ_{j=1}^n f_j(x_j), x = (x_1, ..., x_n), and each f_j(·) is a sublinear function on R.

Theorem 3.2 (Jeyakumar and Li (2009) Separable-sublinear Farkas lemma) Let f : R^n → R be sublinear and let f_i : R^n → R, i = 1, 2, ..., m, be separable-sublinear. Then the following statements are equivalent:
(i) x ∈ R^n, f_1(x) ≤ 0, ..., f_m(x) ≤ 0 ⇒ f(x) ≥ 0;
(ii) (∃λ_i ≥ 0)(∀x ∈ R^n) f(x) + Σ_{i=1}^m λ_i f_i(x) ≥ 0.

Difference of sublinear systems

Difference of sublinear (DSL) functions, which arise frequently in non-smooth optimization, provide useful approximations for many classes of non-convex non-smooth functions. This has led to the investigation of results of Farkas type for systems involving DSL functions. A mapping g : X → Y is said to be difference sublinear (DSL) with respect to S if, for each v ∈ S+, there are (weak∗) compact convex sets, here denoted ∂(vg)(0) and ∂̄(vg)(0), such that, for each x ∈ X, (vg)(x) = max_{u∈∂(vg)(0)} u(x) − max_{w∈∂̄(vg)(0)} w(x), where X and Y are Banach spaces. If Y = R and S = R_+ then this definition coincides with the usual notion of a difference sublinear real-valued function. Thus a mapping g is DSL iff vg is a DSL function for each v ∈ S+. The sets ∂(vg)(0) and ∂̄(vg)(0) are the sub- and superdifferential of vg, respectively. For a DSL mapping g : X → Y we shall often require a selection from the class of sets {∂̄(vg)(0) : v ∈ S+}: this is a set, denoted (w_v), in which we select a single element w_v ∈ ∂̄(vg)(0) for each v ∈ S+. Let B := cl cone co[∪_{v∈S+}(∂(vg)(0) − w_v)]. An extension of Farkas' lemma for DSL systems was given in Glover et al (1994) and Jeyakumar and Glover (1995), as follows:

Theorem 3.3 (Glover et al (1994) Conical DSL systems) Let X and Y be Banach spaces, let g : X → Y be difference sublinear (DSL) with respect to a closed convex cone S, and let f : X → R be continuous and sublinear. Then the following statements are equivalent:
(i) −g(x) ∈ S ⇒ f(x) ≥ 0;
(ii) for each selection (w_v) with w_v ∈ ∂̄(vg)(0), v ∈ S+, one has 0 ∈ ∂f(0) + B.

A unified approach to generalizing Farkas' lemma for sublinear systems, which uses multivalued functions, is given in Borwein (1983) and Jeyakumar (1987, 1990). Given that the optimality of a constrained global optimization problem can be viewed as the solvability of appropriate inequality systems, it is easy to see that an extension of Farkas' lemma again provides a mechanism for characterizing global optimality for a range of non-linear optimization problems. The ε-subdifferential analysis here allowed one to obtain a new version of Farkas' lemma replacing the linear inequality c(x) ≥ 0 by a reverse convex inequality h(x) ≤ 0, where h is a convex function with h(0) = 0. This extension for systems involving DSL functions was given as follows:

Theorem 3.4 (Jeyakumar and Glover (1993) DSL-convex systems) Let X and Y be Banach spaces, let g : X → Y be difference sublinear (DSL) with respect to a closed convex cone S, and let h : X → R be a continuous convex function with h(0) = 0. Then the following statements are equivalent:
(i) −g(x) ∈ S ⇒ h(x) ≤ 0;
(ii) for each selection (w_v) with w_v ∈ ∂̄(vg)(0), v ∈ S+, and for each ε ≥ 0, ∂_ε h(0) ⊆ cl cone co[∪_{v∈S+}(∂(vg)(0) + w_v)].

Such an extension has led to the development of conditions which characterize optimal solutions of various classes of global optimization problems, such as convex maximization problems and fractional programming problems (see Jeyakumar and Glover 1993, 1995). However, simple examples show that the asymptotic forms of the above results of Farkas type do not hold if we replace the DSL (or sublinear) system by a convex system.
Ha (1979) established a version of Farkas' lemma for convex systems in terms of epigraphs of conjugate functions. A general form of such a result for systems involving convex functions is stated in the following theorem.

Theorem 3.5 (Jeyakumar et al (1996) Epigraphical forms of Farkas' lemma) Let f, h and g_i, i ∈ I, be lower semi-continuous convex functions and let I be an arbitrary index set. Suppose that the system g_i(x) ≤ 0, i ∈ I, has a solution. Then the following statements are equivalent:
(i) (∀i ∈ I) g_i(x) ≤ 0 ⇒ h(x) − f(x) ≤ 0;
(ii) epi h∗ ⊆ cl[epi f∗ + cl cone co(∪_{i∈I} epi g_i∗)],
where f∗, h∗ and g_i∗ are the conjugate functions of f, h and g_i, respectively.

An epigraphical form of Farkas' lemma involving H-convex functions, with application to global non-linear optimization, is given in Rubinov et al (1995).

Convex systems

A version of Farkas' lemma for cone-convex systems in infinite dimensional spaces appeared in the seventies in Holmes (1975, Corollary 2, 14F) under the Slater condition. It was then extended in Gwinner (1987a) with applications to duality theory in homogeneous programming. In the following we present generalizations of Farkas' lemma for cone-convex systems. To state these extensions, let us introduce some notations and definitions. Let X and Y be locally convex Hausdorff spaces, C ⊂ X a non-empty closed convex set, and K ⊂ Y a closed convex cone which generates a partial order ≤_K on Y. We add to Y a greatest element with respect to ≤_K, denoted by ∞_K; i.e., in the enlarged space Y• = Y ∪ {∞_K} we have y ≤_K ∞_K for every y ∈ Y•. We also adopt the following conventions with respect to the operations in Y•: y + ∞_K = ∞_K + y = ∞_K for all y ∈ Y•, and α∞_K = ∞_K if α ≥ 0. The mapping g : X → Y• is said to be K-convex whenever x_1, x_2 ∈ X, μ ∈ [0, 1] ⇒ g((1−μ)x_1 + μx_2) ≤_K (1−μ)g(x_1) + μg(x_2), where "≤_K" is the binary relation extended to Y•. Similarly, g is said to be K-epi-closed whenever the set epi_K g := {(x, y) ∈ X × Y : y ∈ g(x) + K} is closed in the product space. Recall that Γ(X) denotes the set of all proper, lower semicontinuous and convex functions on X. Moreover, a set U ⊂ X∗ × R is said to be closed (w.r.t. the weak∗ topology) regarding a set V ⊂ X∗ × R if cl∗U ∩ V = U ∩ V (see Bot 2010). For f ∈ Γ(X) with dom f ∩ C ∩ g^{-1}(−K) ≠ ∅, consider the set

∪_{y∗∈K+} epi(f + y∗ ◦ g + i_C)∗,   (5)

where i_C is the indicator function of C, i.e., i_C(x) = 0 if x ∈ C and i_C(x) = +∞ if x ∉ C.

Cone-convex systems

The following generalized version of Farkas' lemma and its stable form for cone-convex systems were given in Dinh et al (2013c) and Dinh et al (2013d, Corollaries 4, 5). The results can be derived from Bot et al (2009, Theorems 5, 6) as well.

Theorem 4.1 (Dinh et al (2013c) Cone-convex systems) Assume that g : X → Y• is K-convex and K-epi-closed, and that f ∈ Γ(X). Assume that the set in (5) is closed regarding the set {0_{X∗}} × R. Then for any β ∈ R, the following statements are equivalent:
(i) x ∈ C, g(x) ∈ −K ⇒ f(x) ≥ β;
(ii) ∃y∗ ∈ K+ such that f + y∗ ◦ g ≥ β on C.

An extension of Farkas' lemma to cone-convex systems, called stable Farkas' lemma, was first given in Jeyakumar and Lee (2008a). A recent extension of the stable Farkas' lemma is given below.

Theorem 4.2 (Dinh et al (2013c) Stable Farkas' lemma for cone-convex systems) Under the same assumptions as in Theorem 4.1, if the set in (5) is weak∗-closed in X∗ × R, then for all x∗ ∈ X∗ and all β ∈ R the following statements are equivalent:
(i) x ∈ C, g(x) ∈ −K ⇒ f(x) − ⟨x∗, x⟩ ≥ β;
(ii) ∃y∗ ∈ K+ such that f − x∗ + y∗ ◦ g ≥ β on C.
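The closedness conditions in Theorems 3.5, 4.1 and 4.2 are phrased in terms of epigraphs of conjugate functions. The following elementary one-dimensional computation, added here only as an illustration (it is not taken from the survey), shows what these objects look like in the simplest case.

```latex
% Illustrative computation of epigraphs of conjugates.
Let $X=\mathbb{R}$, $f(x)=|x|$ and $C=[0,+\infty)$. Then
\[
f^{*}(u)=\sup_{x\in\mathbb{R}}\{ux-|x|\}=
\begin{cases} 0, & |u|\le 1,\\ +\infty, & |u|>1,\end{cases}
\qquad
i_{C}^{*}(u)=\sup_{x\ge 0} ux=
\begin{cases} 0, & u\le 0,\\ +\infty, & u>0,\end{cases}
\]
so that
\[
\operatorname{epi} f^{*}=[-1,1]\times[0,+\infty),\qquad
\operatorname{epi} i_{C}^{*}=(-\infty,0]\times[0,+\infty),\qquad
\operatorname{epi} f^{*}+\operatorname{epi} i_{C}^{*}=(-\infty,1]\times[0,+\infty).
\]
The last set is closed; closedness conditions of the kind appearing above ask precisely
that unions and sums of such epigraphs be closed (or closed regarding a given set).
```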
The closedness conditions given in the previous theorems (such as the weak∗-closedness in X∗ × R of the set in (5) required in the second one) are much weaker than other generalized interior-type conditions (see, e.g., Bot 2010; Dinh et al 2009; Jeyakumar et al 2004). They give complete characterizations of Farkas' lemma and its stable form given in the mentioned theorems (see Jeyakumar et al 2008b; Dinh et al 2013c). For related closed cone conditions, see Bot (2010), Bot et al (2004, 2009), Dinh et al (2009, 2007, 2010d,e,f), Jeyakumar et al (2006, 2008b), Burachik and Jeyakumar (2005a,b), Jeyakumar (2008), and Li et al (2009).

The generalized Farkas' lemma for cone-convex systems led to a scheme for establishing optimality conditions and Lagrange strong duality for a class of convex programming problems with a cone-convex constraint and a geometrical set constraint. Applications of these versions of Farkas' lemma to semi-definite systems, semi-infinite systems, and infinite dimensional linear systems yield optimality conditions and (stable) duality for semi-definite problems, semi-infinite problems, and infinite dimensional linear problems, respectively, as shown in Bot et al (2009), Jeyakumar et al (2004, 2005, 2006), Dinh et al (2006, 2007, 2013c,d), Bot (2010), Jeyakumar and Lee (2008a,c). For an application to multi-objective optimization problems, for identifying efficient/weakly efficient solutions or approximate solutions, see, e.g., Lee et al (2012).

The following generalization of Farkas' lemma for convex systems using conjugate functions can be deduced from a theorem in Bot et al (2009). A related form in finite dimensional spaces under interior-type constraint qualifications was given in Bot et al (2006).

Theorem 4.3 (Bot et al (2009) Cone-convex systems and conjugate functions) Assume that the function p ↦ η(p) := inf_{y∗∈K+}(f + y∗ ◦ g + i_C)∗(p) is lower semi-continuous, and that there is (x̄∗, ȳ∗) ∈ X∗ × K+ such that f∗(x̄∗) + (ȳ∗ ◦ g + i_C)∗(−x̄∗) ≤ inf_{y∗∈K+}(f + y∗ ◦ g + i_C)∗(0_{X∗}). Then the following statements are equivalent:
(i) x ∈ C, g(x) ∈ −K ⇒ f(x) ≥ β;
(ii) (∃y∗ ∈ K+)(∃x∗ ∈ X∗) −f∗(x∗) − (y∗ ◦ g + i_C)∗(−x∗) ≥ β.

This extension leads to strong duality results for general cone-convex problems and their dual problems, called Fenchel–Lagrange duals in Bot (2010) and Bot et al (2009).

Convex infinite systems

Generalizations of Farkas' lemma for convex semi-infinite/infinite systems, i.e., systems defined by infinitely many convex inequalities, have been given in parallel to the extensions to conic convex systems (see for instance Dinh et al 2006, 2007; Fang et al 2010; Goberna et al 1981; Bot et al 2004; Jeyakumar et al 2003). Let C be a non-empty convex subset of a locally convex Hausdorff space X and let f, f_t : X → R ∪ {+∞}, for all t ∈ T, be proper convex functions. Here T is a possibly infinite index set. Assume that A := {x ∈ C : f_t(x) ≤ 0, ∀t ∈ T} ≠ ∅. A generalization of Farkas' lemma for infinite convex systems was established recently by Fang et al (2010).

Theorem 4.4 (Fang et al (2010) Convex infinite systems) Let α ∈ R. Assume that epi(f + i_A)∗ ∩ ({0_{X∗}} × R) = [∪_{λ∈R^(T)_+} epi(f + i_C + Σ_{t∈T} λ_t f_t)∗] ∩ ({0_{X∗}} × R). Then the following statements are equivalent:
(i) x ∈ C, f_t(x) ≤ 0, ∀t ∈ T ⇒ f(x) ≥ α;
(ii) (∃(λ_t)_t ∈ R^(T)_+)(∀x ∈ C) f(x) + Σ_{t∈T} λ_t f_t(x) ≥ α.

A stable form of this type of Farkas' lemma was also given in Fang et al (2010).
These results extend the earlier versions given in Dinh et al (2006, 2007), Goberna et al (1981), Bot et al (2004), and Jeyakumar et al (2003). This generalization was used to establish stability results for convex infinite problems with perturbations in the right-hand side (Dinh et al 2007) and for parametric DC programs with infinitely many convex constraints (Dinh et al 2010d). Moreover, it was also used as a tool for studying the stability of feasible sets and optimal solution sets for classes of convex infinite programming problems in Dinh et al (2010a, 2012).

Convex uncertain systems

A robust form of Farkas' lemma for convex inequality systems under parameter uncertainty was given in Jeyakumar and Li (2010) and is stated as follows.

Theorem 4.5 (Jeyakumar and Li (2010) Robust convex Farkas' lemma) Let f : R^n → R be a convex function and let g_i : R^n × R^q → R, i = 1, ..., m, be continuous functions such that, for each v_i ∈ R^q, g_i(·, v_i) is a convex function. Let V_i ⊆ R^q be compact, and let F := {x : g_i(x, v_i) ≤ 0, ∀v_i ∈ V_i, i = 1, ..., m} ≠ ∅. Then the following two statements are equivalent:
(i) g_i(x, v_i) ≤ 0, ∀v_i ∈ V_i, i = 1, ..., m ⇒ f(x) ≥ 0;
(ii) (0, 0) ∈ epi f∗ + cl co ∪_{v_i∈V_i, λ_i≥0} epi(Σ_{i=1}^m λ_i g_i(·, v_i))∗.

This generalization has paved the way for the development of robust duality theory (Jeyakumar and Li 2010) for convex programming problems under data uncertainty within the framework of robust optimization.

Convex polynomial systems

Note that a real polynomial f on R^m is a sum of squares if there exist real polynomials f_j, j = 1, ..., r, such that f(x) = Σ_{j=1}^r f_j²(x) for all x ∈ R^m. The set consisting of all sum of squares real polynomials is denoted by Σ²[x]. Moreover, the set consisting of all sum of squares real polynomials with degree at most d is denoted by Σ²_d[x]. A real polynomial f on R^n is SOS-convex if and only if σ(x, y) := f(y) − f(x) − ∇f(x)^T(y − x) is a sum of squares polynomial. For details see Ahmadi and Parrilo (2012), and Helton and Nie (2010). The class of SOS-convex polynomials includes the class of separable convex polynomials and the class of convex quadratic functions. Clearly, a SOS-convex polynomial is convex; however, the converse is not true. An interesting feature of SOS-convex polynomials is that whether a polynomial is SOS-convex or not can be checked by solving a related semi-definite programming problem, while checking the convexity of a polynomial is in general a very hard problem. As we see in the following theorem, SOS-convex polynomial systems enjoy a generalization of Farkas' lemma without qualifications (Jeyakumar and Vicente-Perez 2013).

Theorem 4.6 (Jeyakumar and Vicente-Perez (2013) Farkas' lemma for SOS-convex polynomial systems) Let f, g_j : R^n → R be SOS-convex polynomials with degree at most d, for j = 1, 2, ..., p. Suppose that A = {x ∈ R^n : g_j(x) ≤ 0, j = 1, 2, ..., p} ≠ ∅. Then the following two statements are equivalent:
(i) g_j(x) ≤ 0, j = 1, 2, ..., p ⇒ f(x) ≥ 0;
(ii) (∀ε > 0) (∃λ ∈ R^p_+, σ_0 ∈ Σ²_d[x]) f + Σ_{j=1}^p λ_j g_j + ε = σ_0.

A further generalization to convex polynomial systems was given as follows. Let Δ_p be the simplex in R^p, that is, Δ_p := {δ ∈ R^p_+ : Σ_{j=1}^p δ_j = 1}.

Theorem 4.7 (Jeyakumar and Li (2013) Farkas' lemma for convex polynomial systems) Let p_j and g_i be convex polynomials for each j = 1, 2, ..., p and i = 1, 2, ..., m, with A := {x ∈ R^n : g_i(x) ≤ 0, i = 1, 2, ..., m} ≠ ∅. Then the following statements are equivalent:
(i) g_i(x) ≤ 0, i = 1, 2, ..., m ⇒ max_{1≤j≤p} p_j(x) ≥ 0;
(ii) (∀ε > 0) (∃δ̄ ∈ Δ_p, λ̄ ∈ R^m_+)(∀x ∈ R^n) Σ_{j=1}^p δ̄_j p_j(x) + Σ_{i=1}^m λ̄_i g_i(x) + ε > 0.
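As noted above, SOS and SOS-convexity certificates reduce to semi-definite programming. The following sketch is an illustration only (not from the survey); it assumes the cvxpy package with an SDP-capable solver and uses a hand-coded Gram-matrix formulation for one fixed polynomial and one fixed monomial basis. It checks whether p(x, y) = 2x⁴ + 2x³y − x²y² + 5y⁴ is a sum of squares; the same mechanism applied to σ(x, y) = f(y) − f(x) − ∇f(x)ᵀ(y − x) would test SOS-convexity of f.

```python
# Illustrative SOS feasibility check via SDP (sketch; assumes cvxpy + SCS).
# We look for a PSD Gram matrix Q with  p(x,y) = z^T Q z,  z = (x^2, y^2, x*y).
import cvxpy as cp

Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                        # Gram matrix must be positive semi-definite
    Q[0, 0] == 2,                  # coefficient of x^4
    Q[1, 1] == 5,                  # coefficient of y^4
    2 * Q[0, 2] == 2,              # coefficient of x^3 y
    2 * Q[1, 2] == 0,              # coefficient of x y^3
    2 * Q[0, 1] + Q[2, 2] == -1,   # coefficient of x^2 y^2
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # expected to report 'optimal', i.e. p is a sum of squares
if prob.status == "optimal":
    # Any factorisation Q = V^T V of the returned PSD matrix yields an explicit
    # SOS decomposition p = sum of squares of the entries of V z.
    print(Q.value)
```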
Sublinear-convex systems

Again, let X and Y be locally convex Hausdorff spaces and let C ⊂ X be closed and convex. The function S : Y → R ∪ {+∞} is (extended) sublinear if S(y_1 + y_2) ≤ S(y_1) + S(y_2) and S(αy) = αS(y) for all y, y_1, y_2 ∈ Y and α > 0, and S(0_Y) = 0. Such a sublinear function allows us to introduce in Y• the following partial order:

y_1 ≤_S y_2 if y_1 ≤_K y_2, where K := {y ∈ Y : S(−y) ≤ 0}.   (6)

Recall that a mapping h : X → Y• is called (extended) S-convex (Simons 2007) if x_1, x_2 ∈ X, μ ∈ [0, 1] ⇒ h((1−μ)x_1 + μx_2) ≤_S (1−μ)h(x_1) + μh(x_2), where "≤_S" is the binary relation defined in (6). Now let f ∈ Γ(X), ψ ∈ Γ(R), and assume the existence of x̄ ∈ C ∩ dom f and ᾱ ∈ dom ψ such that (S ◦ g)(x̄) ≤ ᾱ. Let further

D := {(0_{X∗}, γ, r) : (γ, r) ∈ epi ψ∗},
E := ∪_{y∗∈Y∗, μ≥0, y∗≤μS} {(u∗, −μ, r) : (u∗, r) ∈ epi(f + y∗ ◦ g + i_C)∗}.

Assume that C ∩ dom f ∩ dom(S ◦ g) ≠ ∅.

Theorem 4.8 (Dinh et al (2013c) Sublinear-convex systems) Let C ⊂ X be a closed convex set, let S : Y → R ∪ {+∞} be a lsc sublinear function, and let g : X → Y• be an S-convex mapping such that the set

{(x, y, λ) ∈ X × Y × R : S(g(x) − y) ≤ λ}   (7)

is closed in the product space X × Y × R. Let β ∈ R. If D + E is weak∗-closed in the product space X∗ × R × R, then the following statements are equivalent:
(i) x ∈ C, α ∈ R, (S ◦ g)(x) ≤ α ⇒ f(x) + ψ(α) ≥ β;
(ii) there exist μ ≥ 0 and y∗ ∈ Y∗ such that μ ∈ dom ψ∗, y∗ ≤ μS on Y, and f + y∗ ◦ g ≥ ψ∗(μ) + β on C.

It is well-known that some early generalizations of Farkas' lemma are equivalent to the Hahn–Banach theorem (Holmes 1975), and that the celebrated Hahn–Banach theorem fails in the case where the sublinear function (that dominates the linear functional) takes infinite values (i.e., +∞), as shown in Simons (2007, Remark 2.3). In Dinh et al (2013c), the previous version of Farkas' lemma for S-convex systems is proved to be equivalent to an extension of the Hahn–Banach theorem, known as the "Hahn–Banach–Lagrange" theorem. It also leads to extensions of the Mazur–Orlicz theorem and of the sandwich theorems to the case of extended sublinear functions. We give one of them to illustrate the significance of generalized Farkas' lemmas in fundamental mathematics. For more details, see Dinh et al (2013c).

Theorem 4.9 (Dinh et al (2013c) Generalized Hahn–Banach theorem) Let X be a locally convex Hausdorff space, S : X → R ∪ {+∞} be a lsc sublinear function, M be a closed subspace of X such that M ∩ dom S ≠ ∅, and ℓ be a continuous linear function on M. Set G := K+ + (M⊥ × {0}) with K+ := {(x∗, −μ) : μ ≥ 0 and x∗ ≤ μS on X}. Assume that G is weak∗-closed in the product space X∗ × R. If ℓ ≤ S|_M, then there exists y∗ ∈ X∗ such that ℓ = (y∗)|_M and y∗ ≤ S.

Systems with composite functions

Let X, Z be locally convex Hausdorff spaces with the topological dual X∗ equipped with the weak∗-topology, and let Z be partially ordered by a non-empty convex cone K ⊂ Z. Let f : X → R ∪ {+∞} be proper and convex, let g : Z• → R ∪ {+∞} be a proper, convex and K-increasing function with the convention that g(∞_K) = +∞, and let h : X → Z• be a proper, K-convex mapping such that h(dom f ∩ dom h) ∩ dom g ≠ ∅. The following generalized Farkas' lemma for systems involving composite convex functions follows from a duality result in Bot (2010, page 39).

Theorem 5.1 (Composite convex systems I) Assume that f and g are lower semi-continuous (lsc), that λ ◦ h is lsc for all λ ∈ K+, and that epi f∗ + ∪_{λ∈dom g∗}[(0, g∗(λ)) + epi(λ ◦ h)∗] is closed in X∗ × R. Then the following statements are equivalent: (i) ∀x ∈ X, f(x) + (g ◦ h)(x) ≥
β, (ii) (∃x ∗ ∈ X ∗ ) (∃λ ∈ K + ) −g ∗ (λ) − f ∗ (x ∗ ) − (λ ◦ g)∗ (−x ∗ ) ≥ β Other versions of generalized Farkas’ lemma for such a system with a set constraint can be found in Dinh et al (2013d) while a more general version in finite dimensional was proposed by Bot et al (2006, 2007) To state this generalization, let C be a non-empty convex subset of Rn Consider the functions f : Rk → R ∪ {+∞}, F : Rn → Rk , F = (F1 , F2 , , Fk ) and G : Rn → Rm , G = (G , G , , G m ) such that f is proper, Rk+ -increasing and convex, while F1 , , Fk and G , , G m are convex Assume that F −1 (dom f ) ∩ C = ∅ Consider the following constraint qualification: ⎧ ⎨ F(x ) ∈ ri(dom f ) − intRk+ , (C Q) ∃x ∈ ri(C) such that G i (x ) ≤ 0, i ∈ L ⎩ G i (x ) < 0, i ∈ N , where L = {i ∈ {1, 2, , m} : G i is an affine function} and N = {1, 2, , m}\L The following version of Farkas’ lemma was established in Bot et al (2006) Theorem 5.2 (Bot et al (2006) Composite convex systems II) Assume that (CQ) holds Then the following statements are equivalent: (i) ∀x ∈ C, G(x) ∈ −Rm + ⇒ f (F(x)) ≥ 0, n (ii) There exist p ∈ R , β ∈ Rk+ , α ∈ Rm + such that f ∗ (β) + (β T F)∗ ( p) + (α T G + i C )∗ (− p) ≤ 0, 123 The Farkas lemma The previous version of generalized Farkas’ lemma for composite convex systems were used to establish strong duality results for composite convex problems of the form inf x∈X [ f (x) + g ◦ h(x)] or of another related form inf x∈C,G(x)∈−Rm+ f (F(x)) in Bot (2010) and Bot et al (2006) As before, let X, Z be locally convex Hausdorff spaces with its topological dual X ∗ equipped with the weak∗ -topology, f, g, h : X → R ∪ {+∞} be extended real-valued functions Let H : dom H ⊂ X → Z be a mapping defined on a non-void subset dom H of X , and let k : Z → R∪{+∞} be an extended real-valued function Assume that f + g + k ◦ H is proper Consider the inequality (I) f (x) + g(x) + (k ◦ H )(x) ≥ h(x), ∀x ∈ X In the case of composite functions, by a generalized Farkas’ lemma we mean that a pair of (I) and its dual characterization Set A := epi f ∗ + epig ∗ + epi λ ◦ H − k ∗ (λ) ∗ λ∈dom k ∗ In a pure algebraic setting (i.e., without any assumption on the convexity and any topological assumptions on spaces, functions involved), the following version of Farkas’ lemma was proved in Dinh et al (2013d) Theorem 5.3 (Dinh et al (2013d) Composite non-convex systems) Assume that epi( f + g + k ◦ H )∗ = A Then for all h ∈ (X ), the following assertions are equivalent: (i) ∀x ∈ X, f (x) + g(x) + (k ◦ H )(x) ≥ h(x), (ii) (∀w ∈ dom h ∗ )(∃v ∈ dom f ∗ )(∃v1 ∈ dom g ∗ )(∃λ ∈ dom k ∗ ) f ∗ (v) + g ∗ (v1 ) + (λ ◦ H )∗ (w − v − v1 ) + k ∗ (λ) ≤ h ∗ (w) Applications of special cases of this generalization to optimization problems can be found in Bot (2010), Bot et al (2007), Dinh and Mo (2012), Long et al (2010) while in Dinh et al (2013d), taking advantages of the composite structure, many classes of optimization problems involving composite functions are examined The real-world models of these classes of problems can be found in An and Tao (2005) In some special cases, Theorem 5.3 yields generalized Farkas’ lemma for systems involving DC functions as shown in the next Section Non-convex systems Following the notation of the previous section, let f, g ∈ (X ), k ∈ (Z ), and λ ◦ H ∈ (X ) for all λ ∈ dom k ∗ Then epi( f + g + k ◦ H )∗ = A if and only if A is weak∗ -closed If, in addition, K is a closed convex cone in Z , C ⊂ X is closed and 123 N Dinh, V Jeyakumar convex, H is K -convex, and if we set g = i C , k = i −K , then the system (I) 
in the previous section is equivalent to: x ∈ C, H (x) ∈ −K ⇒ f (x) − h(x) ≥ Difference of convex systems The following version of the Farkas’ lemma for systems involving DC functions (i.e., functions that can be expressed as difference of two convex functions) is given in Dinh et al (2013d) Theorem 6.1 (Dinh et al (2008, 2013d) Systems involving DC functions I) Assume that the set epi f ∗ + epig ∗ + epi λ ◦ H − k ∗ (λ) ∗ λ∈K + is weak∗ -closed in X ∗ × R Then the following statements are equivalent: (i) x ∈ C, H (x) ∈ −K ⇒ f (x) − h(x) ≥ 0, (ii) (∀w ∈ dom h ∗ )(∃v ∈ dom f ∗ )(∃v1 ∈ dom i C∗ )(∃λ ∈ K + ) f ∗ (v) + i C∗ (v1 ) + (λ ◦ H )∗ (x ∗ − v − v1 ) ≤ h ∗ (w) Theorem 6.1 gives the so-called Toland–Fenchel-Lagrange duality and optimality conditions for DC problems under a cone-convex constraint and a geometric set constraint Dinh et al (2008, 2010e): inf H (x)∈−S,x∈C [ f (x) − h(x)] Applications of the Theorem 6.1 to equilibrium problems with DC cost functions were given in Dinh et al (2010f) and Sun and Li (2013) Versions of Farkas’ lemma that are similar to Theorem 6.1 were proposed earlier by Goberna et al (2008), and Jeyakumar et al (1996), where the cone-convex system is replaced by an infinite system of convex (and DC) inequalities These generalizations are given below Theorem 6.2 (Jeyakumar et al (1996) Systems involving DC functions II) Let X be a locally convex Hausdorff vector space, I be an arbitrary index set, and let pi , f, g ∈ (X ), for all i ∈ I The following statements are equivalent: (i) (∀i ∈ I ) pi (x) ≤ ⇒ g(x) − f (x) ≤ 0, ∗ (ii) epi g ∗ ⊂ cl[epi f ∗ + cl cone co i∈I epi pi ] Theorem 6.3 (Jeyakumar et al (1996) Difference of convex systems) Let X be a locally convex Hausdorff vector space, I be an arbitrary index set, and let fi , gi , f, g ∈ (X ), for all i ∈ I Let further C := {x ∈ X : f i (x) − gi (x) ≤ 0, ∀i ∈ I } = ∅ and D(gi ) be the set of all x ∈ X such that ∂gi (x) = ∅ Assume that C ⊂ D(gi ) for all i ∈ I Then the following statements are equivalent: 123 The Farkas lemma (i) (∀i ∈ I ) f i (x) − gi (x) ≤ ⇒ g(x) − f (x) ≤ 0, (ii) For every h = (h i )i∈I ∈ i∈I gph(gi∗ ), epig ∗ ⊂ cl(epi f ∗ ) + cone co epi f i∗ − h i i∈I These results were applied to get optimality conditions for approximate solutions of classes of global optimization problems such as DC problems involving infinitely many convex/DC inequalities (Jeyakumar et al 1996) Other versions of generalized Farkas’ lemmas for systems of finitely many DC functions of the form, gi (x) − h i (x) ≤ 0, i = 1, 2, , m, x ∈ C ⇒ g(x) − h(x) ≥ 0, can be found in Bot et al (2007) with application to DC problems under DC constraints A generalization of Farkas’ lemma to systems involving DC fractional functions with applications to fractional programming problems with DC constraints was given in Wang and Cheng (2011) Positively homogeneous systems Let X ⊂ Rn be a non-empty convex cone and be a cone of positively homogeneous functions on X Consider a semi-infinite system of the type σ := {ϕt (x) ≤ bt , t ∈ T }, with {ϕt , t ∈ T } ⊂ , {bt , t ∈ T } ⊂ R and arbitrary (non-empty) index set T Let Fσ := {x ∈ X : ϕt (x) ≤ bt , ∀t ∈ T } The following extended versions of the Farkas’ lemma for positively homogeneous semi-infinite systems and as a special case, for min-type inequalities (Glover et al 1996) were proved by Lopez and Martinez-Legaz (2005) Theorem 6.4 (Lopez and Martinez-Legaz (2005) Positively homogeneous systems) Let be a cone of positively homogeneous functions on X Then for any ϕt , ϕ ∈ and bt , b ∈ R, t ∈ T , 
the following assertions are equivalent:
(i) x ∈ X, ϕ_t(x) ≤ b_t, ∀t ∈ T ⇒ ϕ(x) ≤ b;
(ii) i∗_{F_σ}(ϕ) ≤ b.

Theorem 6.5 (Lopez and Martinez-Legaz (2005) Systems with min-type inequalities) If the system σ := {ϕ_t(x) ≤ b_t, t ∈ T}, with ϕ_t(x) = min_{i∈I} a_{ti} x_i, a_t ∈ R^n_{++}, b_t ∈ R_{++}, t ∈ T, and x ∈ R^n_{++}, is consistent, then for any a ∈ R^n_{++} and b ∈ R_{++} the following assertions are equivalent:
(i) x ∈ R^n, ϕ_t(x) ≤ b_t, ∀t ∈ T ⇒ ϕ(x) ≤ b, with ϕ(x) = min_{i∈I} a_i x_i;
(ii) b ≥ inf_{t∈T}(max_{i∈I}(a_i/a_{ti})) b_t, where I := {1, 2, ..., n}.

Extensions of the previous versions of the generalized Farkas' lemma for positively homogeneous systems and systems with min-type inequalities to infinite dimensional spaces were given recently in Doagooei (2012).

Non-convex quadratic systems

Recently, a generalization of Farkas' lemma for non-convex quadratic systems, known as the generalized S-lemma, was given in Jeyakumar et al (2009); it is stated in the following theorem. For a survey of the S-lemma, see Pólik and Terlaky (2007).

Theorem 6.6 (Jeyakumar et al (2009) Non-convex quadratic systems) Let S_0 be a subspace of R^n and let a_0 ∈ R^n. Let f, g : R^n → R be defined by f(x) = ½ x^T A_f x + b_f^T x + c_f and g(x) = ½ x^T A_g x + b_g^T x + c_g, where A_f, A_g are n × n symmetric matrices, b_f, b_g ∈ R^n, and c_f, c_g ∈ R. Suppose that there exists x_0 ∈ a_0 + S_0 such that g(x_0) < 0. Then the following statements are equivalent:
(i) g(x) ≤ 0, x ∈ a_0 + S_0 ⇒ f(x) ≥ 0;
(ii) (∃λ ≥ 0) (∀x ∈ a_0 + S_0) f(x) + λg(x) ≥ 0.

This generalization was used to establish complete characterizations of global optimality for non-convex quadratic optimization problems with equality constraints in Jeyakumar et al (2009).

Sequential forms of generalized Farkas' lemma

In this section we present sequential forms of generalized Farkas' lemma without any qualification conditions (see, e.g., Dinh et al 2005, 2010b,c; Jeyakumar et al 2003, 2006).

Convex systems

Let X, U be locally convex Hausdorff topological vector spaces whose topological duals are X∗ and U∗, respectively. The only topology we consider on X∗ and on U∗ is the weak∗-topology, and the topology considered on the product space X∗ × U∗ is the product of the weak∗-topologies on X∗ and U∗. A similar convention applies to product spaces involving dual spaces. A general version of the sequential Farkas' lemma for convex systems, expressed as a characterization of a general convex functional inequality, was given recently in Dinh et al (2010b).

Theorem 7.1 (Dinh et al (2010b) Sequential Farkas' lemma for convex systems I) Let F ∈ Γ(U × X) with {x ∈ X : F(0, x) < +∞} ≠ ∅. For any h ∈ Γ(X), the following statements are equivalent:
(i) F(0, x) ≥ h(x) for all x ∈ X;
(ii) for every x∗ ∈ dom h∗, there exists a net (u_i∗, x_i∗, ε_i)_{i∈I} ⊂ U∗ × X∗ × R such that F∗(u_i∗, x_i∗) ≤ h∗(x∗) + ε_i for all i ∈ I, and (x_i∗, ε_i) → (x∗, 0+).

Consider a special case where f, g ∈ Γ(X) and k ∈ Γ(Z), with Z another locally convex Hausdorff space. Let H : X → Z be a mapping such that z∗ ◦ H ∈ Γ(X) for all z∗ ∈ dom k∗ (which yields k ◦ H ∈ Γ(X) when dom k ∩ H(X) ≠ ∅). Then one comes up with an asymptotic characterization of the system (I) of the section on systems with composite functions, with U := X × X × Z. Moreover, when Z is partially ordered by a closed convex cone K ⊂ Z, C is a closed convex subset of X with dom f ∩ C ∩ H^{-1}(−K) ≠ ∅, and g := i_C, k := i_{−K}, the mentioned system becomes H(x) ∈ −K, x ∈ C ⇒ f(x) − h(x) ≥ 0, and one gets a version of the sequential Farkas' lemma for systems involving DC functions. In turn, this
yields sequential optimality conditions for DC programs with coneconstraints and DC programs with semi-definite constraints (Dinh et al 2010b) Theorem 7.2 (Dinh et al (2010b) Sequential Farkas’ lemma for convex systems II) Assume that f, C, H, and S satisfy the above conditions Then the following statements are equivalent (i) H (x) ∈ −K , x ∈ C ⇒ f (x) ≥ 0, (ii) there exists a net (z i∗ )i∈I ⊂ K + such that f (x) + lim inf (z i∗ ◦ H )(x) ≥ 0, ∀x ∈ C i∈I Earlier versions of Theorem 7.2 were proved in Dinh et al (2005), Jeyakumar et al (2003, 2006) The sequential Farkas’ lemmas are the key tools in the study of optimization problems for which constraint qualification conditions are not satisfied As applications, sequential Lagrange multiplier rules have been established in Dinh et al (2005), Jeyakumar et al (2003, 2006) for convex programs and in Dempe et al (2013a) for convex programs under equilibrium constraints Related sequential Lagrange multiplier rules for convex programs were also given in Bot et al (2008), Thibault (1997) A sequential Farkas’ lemma for linear systems in infinite dimensional spaces is also given in Dinh et al (2010b) with applications to infinite linear problems Assume that X, Z , K are as above Theorem 7.3 (Dinh et al (2010b) Sequential Farkas’ lemma for Linear systems) Let A : X → Z be a linear mapping such that for all z ∗ ∈ S + we have A∗ z ∗ ∈ X ∗ , where A∗ is the adjoint operator of A, and let b ∈ Z be such that the linear system Ax ≤ b (i.e., Ax − b ∈ −K ) is consistent Then, for any x ∗ ∈ X ∗ , r ∈ R, the following statements are equivalent (i) x ∈ X and Ax ≤ b ⇒ x ∗ , x ≤ r, (ii) there exists a net (z i∗ , εi )i∈I ⊂ K + × R such that z i∗ , b ≤ r + εi , ∀i ∈ I, and (A∗ z i∗ , εi ) → (x ∗ , 0+ ) Non-convex case and approximate Hahn–Banach theorem Let X, U be as in the previous subsection The following result extends Theorem 7.1 to the cases where no convexity nor lower semi-continuity is assumed Surprisingly, the result is equivalent to a version of approximate Hahn–Banach theorem 123 N Dinh, V Jeyakumar Theorem 7.4 (Dinh et al (2010c) Non-convex Sequential Farkas’ lemma I) Let F : U × X → R ∪ {+∞} such that F(0, ·) is proper and dom F ∗ = ∅ Assume that F ∗∗ (0, ·) = (F(0, ·))∗∗ Then for any h ∈ (X ), the following statements are equivalent: (i) ∀x ∈ X , F(0, x) ≥ h(x), (ii) ∀x ∗ ∈ dom h ∗ there exists a net (u i∗ , xi∗ , εi )i∈I ⊂ U ∗ × X ∗ × R such that F ∗ (u i∗ , xi∗ ) ≤ h ∗ (x ∗ ) + εi , ∀i ∈ I, and lim(xi∗ , εi ) = (x ∗ , 0+ ) i∈I Theorem 7.4 can be applied to get sequential optimality conditions for some classes of equilibrium problems and an extension of the Hiriart–Urruty–Phelps formula (Hiriart-Urruty and Phelps 1993) and subdifferential sum formula (Dinh et al 2010c) In a more concrete circumstance where H : dom H ⊂ X → Z and g : Z → R∪{+∞}, the following sequential Farkas’ lemma is formulated in Dinh et al (2010c) Theorem 7.5 (Dinh et al (2010c) Sequential Farkas’ lemma for non-convex systems II) Consider f : X → R ∪ {+∞}, C ⊂ X , H : dom H ⊂ X → Z , and S a cone in Z Assume that (dom f ) ∩ C ∩ H −1 (−S) = ∅, ∃(z 0∗ , x0∗ , η0 ) ∈ S + × X ∗ × R such that f (x) + (z 0∗ ◦ H )(x) ≥ x0∗ , x − η0 , ∀x ∈ C, and that ( f +i C +i −S ◦ H )∗∗ = supz ∗ ∈S + ( f +i C + z ∗ ◦ H )∗∗ Then for any h ∈ (X ), the following statements are equivalent: (α) (β) x ∈ C, H (x) ∈ −S ⇒ f (x) − h(x) ≥ 0, ⎧ ∗ ∀x ∈ dom h ∗ , there exists a net ⎪ ⎪ ⎨ ∗ ∗ (z i , xi , εi )i∈I ⊂ S + × X ∗ × R ( f + i C + z i∗ ◦ H )∗ (xi∗ ) ≤ h ∗ (x ∗ ) + εi , ∀i ∈ I, ⎪ ⎪ ⎩ such that and limi∈I 
(xi∗ , εi ) = (x ∗ , 0+ ) This last sequential Farkas’ lemma yields sequential optimality conditions and sequential duality results for classes of non-convex optimization problems (Bot et al 2008; Dinh et al 2010c) To conclude this section, we would like to introduce a concrete application of one of the previous sequential versions of the Farkas’ lemma for non-convex systems It was shown that Theorem 7.4 is equivalent to a version of approximate Hahn–Banach theorem for positively homogeneous functions Dinh et al (2013b).1 This, together with Theorem 4.9, shows an important link between generalized Farkas’ lemmas and extensions of Hahn–Banach theorem In Dinh et al (2013b), it was proved that Theorem 7.4 implies approximate Hahn–Banach theorem but in fact, the two theorems are equivalent 123 The Farkas lemma To see this, let X be a locally convex Hausdorff space, p : X → R ∪ {+∞} be a positively homogenous function, and V be a closed linear subspace of X A continuous linear function on V , : V → R, is said to be p-dominated if (x) ≤ p(x) for all x ∈ V For a continuous linear function on V , we say that can approximatively be p-extended to X if is the σ (V ∗ , V )-limit of a net ( i )i∈I of p-dominated linear functions on V , every one of which can be p-extended to X Theorem 7.6 (Dinh et al (2013b) Approximate Hahn–Banach theorem for positively homogeneous functions) Under the above notions, the two following sentences are equivalent: (i) any linear, continuous and p-dominated function be p-extended to X , (ii) p ∗∗ |V = ( p|V )∗∗ : V → R can approximately References Ahmadi AA, Parrilo PA (2012) A convex polynomial that is not sos-convex Math Program Ser A 135(1–2):275–292 An LTH, Tao PD (2005) The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems Ann Oper Res 133:23–46 Ben-Israel A (1969) Linear inequalities and inequalities on finite dimensional real or complex vector spaces: a unified theory J Math Anal Appl 27:367–389 Ben-Tal A, El Ghaoui L, Nemirovski A (2009) Robust optimization Princeton Series in Applied Mathematics Ben-Tal A, Nemirovski A (2000) Lectures on modern convex optimization: analysis Algorithms and engineering applications, SIAM-MPS, Philadelphia Bertsimas D, Brown D, Caramanis C (2011) Theory and applications of robust optimization SIAM Rev 53:464–501 Borwein JM (1983), Adjoint process duality Math Oper Res 8:403–437 Bot RI (2010) Conjugate duality in convex optimization Springer, Berlin Bot RI, Wanka G (2004) Farkas-type results with conjugate functions SIAM J Optim 15:540–554 Bot RI, Hodrea IB, Wanka G (2006) Farkas-type results for inequality systems with composed convex functions via conjugate duality J Math Anal Appl 322:316–328 Bot RI, Grad SM, Wanka G (2007) New constraint qualification and conjugate duality for composed convex optimization problems J Optim Theory Appl 135:241–255 Bot RI, Hodrea IB, Wanka G (2007) Some new Farkas-type results for inequality systems with DC functions J Glob Optim 39:595–608 Bot RI, Csetnek ER, Wamke G (2008) Sequential optimality conditions in convex programming via perturbation approach J Convex Anal 15(1):149–164 Bot RI, Grad SM, Wanka G (2009) New regularity conditions for Lagrange and Fenchel–Lagrange duality in infnite dimensional spaces Math Inequal Appl 12(1):171–189 Boyd SP, Vandenberghe L (2004) Convex optimization Cambridge University Press, Cambridge Burachik S, Jeyakumar V (2005) A dual condition for the convex subdifferential sum formula with 
applications J Convex Anal 12:279–290 Burachik S, Jeyakumar V (2005) A new geometric condition for Fenchel’s duality in infinite dimensional spaces Math Program Ser B 104:229–233 Craven BD (1978) Mathematical programming and control theory Chapman and Hall, London Dempe S, Dinh N, Dutta J (2013a) Optimality conditions for a simple MPEC problem (submitted) Dinh N, Jeyakumar V, Lee GM (2005) Sequential Lagrangian conditions for convex programs with applications to semidefinite programming J Optim Theory Appl 125:85–112 Dinh N, Goberna MA, Lopez MA (2006) From linear to convex systems: consistency, Farkas lemma and applications J Convex Anal 13:279–290 123 N Dinh, V Jeyakumar Dinh N, Goberna MA, Lopez MA, Son TQ (2007) New Farkas type constraint qualifications in convex infinite programming ESAIM Control Optim Calculus Var 13(3):580–597 Dinh N, Vallet G, Nghia TTA (2008) Farkas-type resuts and duality for DC programs with convex constraints J Convex Anal 15:235–262 Dinh N, Mordukhovich B, Nghia TTA (2009) Qualification and optimality conditions for DC programs with infinite constraints Acta Math Vietnamica 34(1):123–153 Dinh N, Strodiot J-J, Nguyen VH (2010) Duality and optimality conditions for generalized equilibrium problems involving DC functions J Glob Optim 48:183–208 Dinh N, Goberna MA, Lopez MA (2012) On the stability of the optimal value and the optimal set in optimization problems J Convex Anal 19(4):927–953 Dinh N, Mo TH (2012) Qualification conditions and Farkas-type results for systems involving composite functions Vietnam J Math 40(4):407–437 Dinh N, Ernst E, Lopez MA, Volle M (2013b) An approximate Hahn–Banach theorem for positively homogeneous functions Optimization (to appear) doi:10.1080/02331934.2013.864290 http://www tandfonline.com/doi/abs/10.1080/02331934.2013.864290 Dinh N, Goberna MA, Lopez MA (2010a) On the stability of the feasible set in mathematical programming SIAM J Optim 20(5):2254–2280 Dinh N, Goberna MA, Lopez MA, Mo TH (2013c) From Farkas to Hahn–Banach theorem SIAM J Optim (to appear) Dinh N, Goberna MA, Lopez MA, Volle M (2010b) Convex inequalities without constraint qualification nor closedness condition, and their applications in optimization Set Valued Anal 18:423–445 Dinh N, Lopez MA, Volle M (2010) Functional inequalities in the absence of convexity and lower semicontinuity with applications to optimization SIAM J Optim 20(5):2540–2559 Dinh N, Mordukhovich B, Nghia TTA (2010) Subdifferentials of value functions and optimality conditions for some classes of DC and bilevel infinite and semi-infinite programs Math Program Ser B 123(1):101– 138 Dinh N, Nghia TTA, Vallet G (2010) A closedness condition and its applications to DC programs with convex constraints Optimization 59(4):541–560 Dinh N, Vallet G, and Volle M (2013d) Functional inequalities and theorems of the alternative involving composite functions with applications J Glob Optim (to appear) doi:10.1007/s10898-013-0100-z Doagooei AR (2012) Farkas-type theorems for positively homogeneous systems in ordered topological vector spaces Nonlinear Anal 75:5541–5548 Elliott RJ, Kopp PE (2005) Mathematics of financial markets Springer, Berlin Fang DH, Li C, Ng KF (2010) Constraint qualifications for extebded Farkas’s lemmas and Lagrangian dualities in convex infinite programming SIAM J Optim 20(3):1311–1332 Farkas J (1902) Theorie der einfachen Ungleichungen Journal fur ă die Reine und Angewandte Mathematik 124:1–27 Franklin J (1983) Mathematical methods of economics Am Math Monthly 90:229–244 Glover BM 
(1982) A generalized Farkas lemma with applications to quasidifferentiable programming Zeitschrift für Oper Res 26:125–141 Glover BM, Jeyakumar V, Oettli W (1994) Farkas lemma for difference sublinear systems and quasidifferentiable programming Math Program 63:333–349 Glover BM, Ishizuka Y, Jeyakumar V, Tuan HD (1996) Complete characterization of global optimality for problems involving the piontwise minimum of sublinear functions SIAM J Optim 6:362–372 Goberna MA, Lopez MA, Pastor J (1981) Farkas–Minkowski systems in semi-infinite programming Appl Math Optim 7:295–308 Goberna MA, Jeyakumar V, Lopez MA (2008) Necessary and sufficient constraint qualifications for solvability of systems of infinite convex inequalities Nonlinear Anal Theory Methods Appl 68(5):1184–1194 Goberna MA, Jeyakumar V, Li G, Lopez MA (2013) Robust linear semi-infinite programming duality Math Program Ser B 139:185–203 Gwinner J (1987a) Results of Farkas type Numer Funct Anal Optim 9:471–520 Gwinner J (1987b) Corrigendum and addendum to ‘Results of Farkas type’ Numer Funct Anal Optim 10:415–418 Ha Ch-W (1979) On systems of convex inequalities J Math Anal Appl 68:25–34 Helton JW, Nie J (2010) Semidefinite representation of convex sets Math Program Ser A 122(1):21–64 Hiriart-Urruty J-B, Phelps RR (1993) Subdifferential calculus using epsilon-subdifferentials J Funct Anal 118:154–166 123 The Farkas lemma Holmes RB (1975) Geometrical functional analysis and its applications Springer, New York Jeyakumar V (2008) Farkas lemma: generalizations Encyclopedia of optimization Kluwer Academic Publishers, The Netherlands, pp 87–91 Jeyakumar V (1987) A general Farkas lemma and characterization of optimality for a nonsmooth program involving convex processes J Optim Theory Appl 55:449–461 Jeyakumar V (1990) Duality and infinite dimensional optimization Nonlinear Anal Theory Methods Appl 15:1111–1122 Jeyakumar V, Glover BM (1993) A new version of Farkas’ lemma and global convex maximization Appl Math Lett 6(5):39–43 Jeyakumar V, Glover BM (1995) Nonlinear extensions of Farkas’ lemma with applications to global optimization and least squares Math Oper Res 20:818–837 Jeyakumar V, Rubinov AM, Glover BM, Ishizuka Y (1996) Inequality systems and global optimization J Math Anal Appl 202:900–919 Jeyakumar V, Lee GM, Dinh N (2003) New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs SIAM J Optim 14:34–547 Jeyakumar V, Wu ZY, Lee GM, Dinh N (2006) Liberating the subgradient optimality conditions from constraint qualifications J Glob Optim 36:127–137 Jeyakumar V (2008) Constraint qualifications characterizing Lagrangian duality in convex optimization J Optim Theory Appl 136:31–41 Jeyakumar V, Lee GM (2008) Complete characterizations of stable Farkas’ lemma and cone-convex programming duality Math Program Ser A 114:335–347 Jeyakumar V, Kum S, Lee GM (2008) Necessary and sufficient conditions for Farkas’ lemma for cone systems and second-order cone programming duality J Convex Anal 15(1):63–71 Jeyakumar V, Lee GM (2008) Complete characterizations of stable Farkas’ lemma and cone-convex programming duality Math Program Ser A 114:335–347 Jeyakumar V, Lee GM, Li GY (2009) Alternative theorems for quadratic inequality systems and global quadratic optimization SIAM J Optim 20:983–1001 Jeyakumar V, Li GY (2009) Farkas’ Iemma for separable sublinear inequalities without qualifications Optim Lett 3:537–545 Jeyakumar V, Li GY (2010) Strong duality in robust convex programming: 
complete characterizations SIAM J Optim 20(6):3384–3407 Jeyakumar V, Li G (2011) Robust Farkas lemma for uncertain linear systems with applications Positivity 159(2):331–342 Jeyakumar V, Dinh N, Lee GM (2004) A new closed cone constraint qualification for convex optimization Applied Mathematics Research Report AMR04/8, School of Mathematics, University of New South Wales, Sydney (unpublished paper) Jeyakumar V, Li G (2010) Characterizing robust set containments and solutions of uncertain linear programs without qualifications Oper Res Lett 38:188–194 Jeyakumar V, Li G (2013) A new class of alternative theorems for SOS-convex inequalities and robust optimization Appl Anal doi:10.1080/00036811.2013.859251 Jeyakumar V, Song W, Dinh N, Lee GM (2005) Stable strong duality in convex optimization Applied Mathematics Research Report AMR05/22, School of Mathematics, University of New South Wales, Sydney (unpublished paper) Jeyakumar V, Vicente-Perez J (2013) Dual semidefinite programs without duality gaps for a class of convex minimax programs J Optim Theory Appl doi:10.1007/s10957-013-0496-0 Lasserre JB (1997) A Farkas lemma without a standard closure condition SIAM J Control Optim 35:265–272 Lee GM, Kim GS, Dinh N (2012) Optimality conditions for approximate solutions of convex semi-infinite vector optimization problems In: Ansari QH, Yao J-C (eds) Recent developments in vector optimization Springer, Berlin, pp 275–295 Li C, Fang DH, Lopez G, Lopez MA (2009) Stable and total Fenchel duality for convex optimization problems in locally convex spaces SIAM J Optim 20(2):1032–1051 Long X-J, Huang N-J, O'Regan D (2010) Farkas-type results for general composed convex optimization problems with inequality constraints Math Inequal Appl 13(1):135–143 Lopez MA, Martinez-Legaz J-E (2005) Farkas theorems for positively homogeneous semi-infinite systems Optimization 54(4&5):421–431 Mangasarian OL (1969) Nonlinear programming McGraw-Hill, New York Pólik I, Terlaky T (2007) A survey of the S-lemma SIAM Rev 49:371–418 Prekopa A (1980) On the development of optimization theory Am Math Monthly 87:527–542 Pshenichnyi BN (1971) Necessary conditions for an extremum Marcel-Dekker, New York Rubinov AM, Glover BM, Jeyakumar V (1995) A general approach to dual characterizations of solvability of inequality systems with applications J Convex Anal 2(2):309–344 Simons S (2007) From Hahn–Banach to monotonicity Springer, Berlin Simons S (2007) The Hahn–Banach–Lagrange theorem Optimization 56:149–169 Sun XK, Li SJ (2013) Duality and Farkas-type results for extended Ky Fan inequalities with DC functions Optim Lett 7:499–510 Thibault L (1997) Sequential convex subdifferential calculus and sequential Lagrange multipliers SIAM J Control Optim 35(4):1434–1444 Wang H-J, Cheng C-Z (2011) Duality and Farkas-type results for DC fractional programming with DC constraints Math Comput Model 53:1026–1034 Zălinescu C (1978) A generalization of the Farkas lemma and applications to convex programming J Math Anal Appl 66:651–678