Fixed-Point Algorithms for Inverse Problems in Science and Engineering

For further volumes: http://www.springer.com/series/7393

Springer Optimization and Its Applications, Volume 49

Managing Editor: Panos M. Pardalos (University of Florida)
Editor–Combinatorial Optimization: Ding-Zhu Du (University of Texas at Dallas)
Advisory Board: J. Birge (University of Chicago), C.A. Floudas (Princeton University), F. Giannessi (University of Pisa), H.D. Sherali (Virginia Polytechnic and State University), T. Terlaky (McMaster University), Y. Ye (Stanford University)

Aims and Scope: Optimization has been expanding in all directions at an astonishing rate during the last few decades. New algorithmic and theoretical techniques have been developed, the diffusion into other disciplines has proceeded at a rapid pace, and our knowledge of all aspects of the field has grown even more profound. At the same time, one of the most striking trends in optimization is the constantly increasing emphasis on the interdisciplinary nature of the field. Optimization has been a basic tool in all areas of applied mathematics, engineering, medicine, economics and other sciences. The series Springer Optimization and Its Applications publishes undergraduate and graduate textbooks, monographs and state-of-the-art expository works that focus on algorithms for solving optimization problems and also study applications involving such problems. Some of the topics covered include nonlinear optimization (convex and nonconvex), network flow problems, stochastic optimization, optimal control, discrete optimization, multi-objective programming, description of software packages, approximation techniques and heuristic approaches.

Heinz H. Bauschke, Regina S. Burachik, Patrick L. Combettes, Veit Elser, D. Russell Luke, and Henry Wolkowicz (Editors): Fixed-Point Algorithms for Inverse Problems in Science and Engineering

Editors:
Heinz H. Bauschke, Department of Mathematics and Statistics, University of British Columbia, Okanagan Campus, Kelowna, British Columbia, Canada, heinz.bauschke@ubc.ca
Regina S. Burachik, School of Mathematics & Statistics, Division of Information Technology, Engineering & the Environment, University of South Australia, Mawson Lakes Campus, Mawson Lakes Blvd 5095, Mawson Lakes, South Australia, regina.burachik@unisa.edu.au
Patrick L. Combettes, Université Pierre et Marie Curie, Laboratoire Jacques-Louis Lions, 4, Place Jussieu, 75005 Paris, France, plc@math.jussieu.fr
Veit Elser, Laboratory of Atomic and Solid State Physics, Cornell University, Clark Hall, 14853–2501 Ithaca, New York, USA, ve10@cornell.edu
D. Russell Luke, Institut für Numerische und Angewandte Mathematik, Universität Göttingen, Lotzestr. 16-18, 37073 Göttingen, Germany, r.luke@math.uni-goettingen.de
Henry Wolkowicz, Department of Combinatorics & Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, Canada, hwolkowicz@uwaterloo.ca

ISSN 1931-6828; ISBN 978-1-4419-9568-1; e-ISBN 978-1-4419-9569-8; DOI 10.1007/978-1-4419-9569-8
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011928237

© Springer Science+Business Media, LLC 2011. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval,
electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com).

Preface

This book brings together 18 carefully refereed research and review papers in the broad areas of optimization and functional analysis, with a particular emphasis on topics related to fixed-point algorithms. The volume is a compendium of topics presented at the Interdisciplinary Workshop on Fixed-Point Algorithms for Inverse Problems in Science and Engineering, held at the Banff International Research Station for Mathematical Innovation and Discovery (BIRS), on November 1–6, 2009. Forty experts from around the world were invited. Participants came from Australia, Austria, Brazil, Bulgaria, Canada, France, Germany, Israel, Japan, New Zealand, Poland, Spain, and the United States. Most papers in this volume grew out of talks delivered at this workshop, although some contributions are from experts who were unable to attend. We believe that the reader will find this to be a valuable state-of-the-art account on emerging directions related to first-order fixed-point algorithms.

The editors thank BIRS and their sponsors – Natural Sciences and Engineering Research Council of Canada (NSERC), US National Science Foundation (NSF), Alberta Science Research Station (ASRA), and Mexico's National Council for Science and Technology (CONACYT) – for their financial support in hosting the workshop, and Wynne Fong, Brent Kearney, and Brenda Williams for their help in the preparation and realization of the workshop. We are grateful to Dr. Mason Macklem for his valuable help in the preparation of this volume. Finally, we thank the dedicated referees who contributed significantly to the quality of this volume through their instructive and insightful reviews.

Kelowna (Canada), Adelaide (Australia), Paris (France), Ithaca (U.S.A.),
Göttingen (Germany), and Waterloo (Canada), December 2010
Heinz H. Bauschke, Regina S. Burachik, Patrick L. Combettes, Veit Elser, D. Russell Luke, and Henry Wolkowicz

Contents

1. Chebyshev Sets, Klee Sets, and Chebyshev Centers with Respect to Bregman Distances: Recent Results and Open Problems (Heinz H. Bauschke, Mason S. Macklem, and Xianfu Wang)
2. Self-Dual Smooth Approximations of Convex Functions via the Proximal Average (Heinz H. Bauschke, Sarah M. Moffat, and Xianfu Wang)
3. A Linearly Convergent Algorithm for Solving a Class of Nonconvex/Affine Feasibility Problems (Amir Beck and Marc Teboulle)
4. The Newton Bracketing Method for Convex Minimization: Convergence Analysis (Adi Ben-Israel and Yuri Levin)
5. Entropic Regularization of the ℓ0 Function (Jonathan M. Borwein and D. Russell Luke)
6. The Douglas–Rachford Algorithm in the Absence of Convexity (Jonathan M. Borwein and Brailey Sims)
7. A Comparison of Some Recent Regularity Conditions for Fenchel Duality (Radu Ioan Boţ and Ernö Robert Csetnek)
8. Non-Local Functionals for Imaging (Jérôme Boulanger, Peter Elbau, Carsten Pontow, and Otmar Scherzer)
9. Opial-Type Theorems and the Common Fixed Point Problem (Andrzej Cegielski and Yair Censor)
10. Proximal Splitting Methods in Signal Processing (Patrick L. Combettes and Jean-Christophe Pesquet)
11. Arbitrarily Slow Convergence of Sequences of Linear Operators: A Survey (Frank Deutsch and Hein Hundal)
12. Graph-Matrix Calculus for Computational Convex Analysis (Bryan Gardiner and Yves Lucet)
13. Identifying Active Manifolds in Regularization Problems (W.L. Hare)
14. Approximation Methods for Nonexpansive Type Mappings in Hadamard Manifolds (Genaro López and Victoria Martín-Márquez)
15. Existence and Approximation of Fixed Points of Bregman Firmly Nonexpansive Mappings in Reflexive Banach Spaces (Simeon Reich and Shoham Sabach)
16. Regularization Procedures for Monotone Operators: Recent Advances (J.P. Revalski)
17. Minimizing the Moreau Envelope of Nonsmooth Convex Functions over the Fixed Point Set of Certain Quasi-Nonexpansive Mappings (Isao Yamada, Masahiro Yukawa, and Masao Yamagishi)
18. The Brézis–Browder Theorem Revisited and Properties of Fitzpatrick Functions of Order n (Liangjin Yao)

Contributors

Heinz H. Bauschke, Department of Mathematics, Irving K. Barber School, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada, heinz.bauschke@ubc.ca
Amir Beck, Department of Industrial Engineering, Technion, Israel Institute of Technology, Haifa 32000, Israel, becka@ie.technion.ac.il
Adi Ben-Israel, RUTCOR – Rutgers Center for Operations Research, Rutgers University, 640 Bartholomew Road, Piscataway, NJ 08854-8003, USA, adi.benisrael@gmail.com
Jonathan M. Borwein, CARMA, School of Mathematical and Physical Sciences, University of Newcastle, NSW 2308, Australia, jonathan.borwein@newcastle.edu.au
Radu Ioan Boţ, Faculty of Mathematics, Chemnitz University of Technology, 09107 Chemnitz, Germany, radu.bot@mathematik.tu-chemnitz.de
Jérôme Boulanger, Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenbergerstraße 69, 4040 Linz, Austria, jerome.boulanger@ricam.oeaw.ac.at
Andrzej Cegielski, Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Góra, ul. Szafrana 4a, 65-514 Zielona Góra, Poland, a.cegielski@wmie.uz.zgora.pl
Yair Censor, Department of Mathematics, University of Haifa, Mt. Carmel, Haifa 31905, Israel, yair@math.haifa.ac.il
Patrick L. Combettes, UPMC Université Paris 06, Laboratoire Jacques-Louis Lions UMR CNRS 7598, 75005 Paris, France, plc@math.jussieu.fr
Ernö Robert Csetnek, Faculty of Mathematics, Chemnitz University of Technology, 09107 Chemnitz, Germany, robert.csetnek@mathematik.tu-chemnitz.de
Frank Deutsch, Department of Mathematics, Pennsylvania State University, University Park, PA 16802, USA, deutsch@math.psu.edu
Chapter 18
The Brézis–Browder Theorem Revisited and Properties of Fitzpatrick Functions of Order n

Liangjin Yao

Abstract In this paper, we study maximal monotonicity of linear relations (set-valued operators with linear graphs) on reflexive Banach spaces. We provide a new and simpler proof of a result due to Brézis–Browder which states that a monotone linear relation with closed graph is maximal monotone if and only if its adjoint is monotone. We also study Fitzpatrick functions and give an explicit formula for Fitzpatrick functions of order n for monotone symmetric linear relations.

Keywords Adjoint · Convex function · Convex set · Fenchel conjugate · Fitzpatrick function · Linear relation · Maximal monotone operator · Multifunction · Monotone operator · Set-valued operator · Symmetric operator

AMS 2010 Subject Classification: 47A06, 47H05

L. Yao, Department of Mathematics, Irving K. Barber School, University of British Columbia, Kelowna, B.C. V1V 1V7, Canada, e-mail: ljinyao@interchange.ubc.ca

H.H. Bauschke et al. (eds.), Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer Optimization and Its Applications 49, DOI 10.1007/978-1-4419-9569-8_18, © Springer Science+Business Media, LLC 2011

18.1 Introduction

Monotone operators play important roles in convex analysis and optimization [12, 15, 22, 24–26, 32, 33]. In 1978, Brézis–Browder gave some characterizations of a monotone operator with closed linear graph [14, Theorem 2] in reflexive Banach spaces. The Brézis–Browder Theorem states that a monotone linear relation with closed graph is maximal monotone if and only if its adjoint is monotone if and only if its adjoint is maximal monotone, which demonstrates the connection between the monotonicity of a linear relation and that of its adjoint. In this paper, we give a new and simpler proof of the hard part of the Brézis–Browder Theorem (Theorem 18.5): a monotone linear relation with closed graph is maximal monotone if its adjoint is monotone. The proof relies on a recent characterization of maximal monotonicity due to Simons and Zălinescu. Our proof does not require any renorming.

We suppose throughout this note that X is a real reflexive Banach space with norm ‖·‖, that X* is its continuous dual space with norm ‖·‖* and dual product ⟨·,·⟩. We now introduce some notation. Let A : X ⇒ X* be a set-valued operator or multifunction whose graph is defined by gra A := {(x, x*) ∈ X × X* | x* ∈ Ax}. The inverse operator of A, A⁻¹ : X* ⇒ X, is given by gra A⁻¹ := {(x*, x) ∈ X* × X | x* ∈ Ax}; the domain of A is dom A := {x ∈ X | Ax ≠ ∅}. The Fitzpatrick function of A (see [19]) is given by

  F_A : (x, x*) ↦ sup_{(a,a*) ∈ gra A} ( ⟨x, a*⟩ + ⟨a, x*⟩ − ⟨a, a*⟩ ).   (18.1)

For every n ∈ {2, 3, ...}, the Fitzpatrick function of A of order n (see [1, Definition 2.2 and Proposition 2.3]) is defined by

  F_{A,n}(x, x*) := ⟨x, x*⟩ + sup_{((a_1,a_1*),...,(a_{n−1},a_{n−1}*)) ⊆ gra A} ( Σ_{i=1}^{n−2} ⟨a_{i+1} − a_i, a_i*⟩ + ⟨x − a_{n−1}, a_{n−1}*⟩ + ⟨a_1 − x, x*⟩ ).

Clearly, F_{A,2} = F_A. We set F_{A,∞} = sup_{n ∈ {2,3,...}} F_{A,n}. If Z is a real reflexive Banach space with dual Z* and S ⊆ Z is a set, we write S^⊥ := {z* ∈ Z* | ⟨z*, s⟩ = 0, ∀s ∈ S}. Then the adjoint of A, denoted by A*, is defined by gra A* := {(x, x*) ∈ X × X* | (x*, −x) ∈ (gra A)^⊥}. Note that A is said to be a linear relation if gra A is a linear subspace of X × X*. (See [18] for further information on linear relations.)
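As a quick illustration of (18.1), consider the simplest case: let X be a Hilbert space, identified with X*, and take A = Id. Then

  F_Id(x, x*) = sup_{a ∈ X} ( ⟨x, a⟩ + ⟨a, x*⟩ − ‖a‖² ) = ¼‖x + x*‖²,

the supremum being attained at a = (x + x*)/2. In particular, F_Id(x, x*) ≥ ⟨x, x*⟩ with equality exactly when x = x*, i.e., on gra Id; this is the general pattern for Fitzpatrick functions of maximal monotone operators.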
Recall that A is said to be monotone if ⟨x − y, x* − y*⟩ ≥ 0 for all (x, x*), (y, y*) ∈ gra A, and that A is maximal monotone if A is monotone and A has no proper monotone extension (in the sense of graph inclusions). We say (x, x*) ∈ X × X* is monotonically related to gra A if ⟨x − y, x* − y*⟩ ≥ 0 for every (y, y*) ∈ gra A. Recently, linear relations have become an interesting object, comprehensively studied in Monotone Operator Theory: see [1–3, 5–10, 23, 29–31].

We can now precisely describe the Brézis–Browder Theorem. Let A be a monotone linear relation with closed graph. Then

  A is maximal monotone ⇔ A* is maximal monotone ⇔ A* is monotone.

The original proof of the Brézis–Browder Theorem is based on an application of Zorn's Lemma, constructing a series of finite-dimensional subspaces, and is complicated. Our goal in this paper is to give a simpler proof of the Brézis–Browder Theorem and to derive more properties of Fitzpatrick functions of order n.

The paper is organized as follows. The first main result (Theorem 18.5) is proved in Sect. 18.2, providing a new and simpler proof of the Brézis–Browder Theorem. In Sect. 18.3, some explicit formulas for Fitzpatrick functions are given. Recently, Fitzpatrick functions of order n [1] have turned out to be a useful tool in the study of n-cyclic monotonicity (see [1, 3, 4, 13]). Theorem 18.14 gives an explicit formula for Fitzpatrick functions of order n associated with symmetric linear relations, which generalizes and simplifies [1, Example 4.4] and [3, Example 6.4].

Our notation is standard. The notation A : X → X* means that A is a single-valued mapping (with full domain) from X to X*. Given a subset C of X, C̄ is the norm closure of C. The indicator function ι_C : X → ]−∞, +∞] of C is defined by

  ι_C : x ↦ 0, if x ∈ C; +∞, otherwise.   (18.2)

Let x ∈ X and C* ⊆ X*. We write ⟨x, C*⟩ := {⟨x, c*⟩ | c* ∈ C*}. If ⟨x, C*⟩ = {a} for some constant a ∈ R, then we write ⟨x, C*⟩ = a for convenience. For a function f : X → ]−∞, +∞], dom f = {x ∈ X | f(x) < +∞}, and f* : X* → [−∞, +∞] : x* ↦ sup_{x ∈ X} (⟨x, x*⟩ − f(x)) is the Fenchel conjugate of f. Recall that f is said to be proper if dom f ≠ ∅. If f is convex, ∂f : X ⇒ X* : x ↦ {x* ∈ X* | (∀y ∈ X) ⟨y − x, x*⟩ + f(x) ≤ f(y)} is the subdifferential operator of f. We denote by J the duality map, i.e., the subdifferential of the function ½‖·‖²; by [22, Example 2.26],

  Jx := {x* ∈ X* | ⟨x*, x⟩ = ‖x*‖* ‖x‖, with ‖x*‖* = ‖x‖}.

18.2 A New Proof of the Brézis–Browder Theorem

Fact 18.1 (Simons) (See [26, Lemma 19.7 and Sect. 22].) Let A : X ⇒ X* be a monotone operator such that gra A is convex with gra A ≠ ∅. Then the function

  g : X × X* → ]−∞, +∞] : (x, x*) ↦ ⟨x, x*⟩ + ι_{gra A}(x, x*)   (18.3)

is proper and convex.

Fact 18.2 (Simons–Zălinescu) (See [27, Theorem 1.2] or [25, Theorem 10.6].) Let A : X ⇒ X* be monotone. Then A is maximal monotone if and only if gra A + gra(−J) = X × X*.

Remark 18.3 When J and J⁻¹ are single valued, Fact 18.2 yields Rockafellar's characterization of maximal monotonicity of A. See [27, Theorem 1.3] and [26, Theorem 29.5 and Remark 29.7].
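To make Fact 18.2 concrete, consider a Hilbert space X identified with its dual, so that J = Id. The condition gra A + gra(−J) = X × X* then says that every (x, x*) can be written as (z + v, z* − v) with (z, z*) ∈ gra A, which is easily seen to be equivalent to x + x* ∈ ran(Id + A). In the Hilbert setting, Fact 18.2 is therefore Minty's classical criterion: a monotone operator A is maximal monotone if and only if ran(Id + A) = X.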
Now we state the Brézis–Browder Theorem.

Theorem 18.4 (Brézis–Browder) (See [14, Theorem 2].) Let A : X ⇒ X* be a monotone linear relation with closed graph. Then the following statements are equivalent. (The hard part is to show (iii)⇒(i).)

(i) A is maximal monotone.
(ii) A* is maximal monotone.
(iii) A* is monotone.

Proof. (i)⇒(iii): Suppose to the contrary that A* is not monotone. Then there exists (x₀, x₀*) ∈ gra A* such that ⟨x₀, x₀*⟩ < 0. Now we have, for every (y, y*) ∈ gra A,

  ⟨−x₀ − y, x₀* − y*⟩ = ⟨−x₀, x₀*⟩ + ⟨y, y*⟩ + ⟨x₀, y*⟩ + ⟨−y, x₀*⟩ = −⟨x₀, x₀*⟩ + ⟨y, y*⟩ > 0.   (18.4)

Thus, (−x₀, x₀*) is monotonically related to gra A. By maximal monotonicity of A, (−x₀, x₀*) ∈ gra A. Then ⟨−x₀ − (−x₀), x₀* − x₀*⟩ = 0, which contradicts (18.4). Hence, A* is monotone.

(iii)⇒(i): See Theorem 18.5 below.

(i)⇔(ii): Apply (iii)⇔(i) directly, using A** = A (since gra A is closed). □

In Theorem 18.5, we provide a new and simpler proof of the hard part (iii)⇒(i) in Theorem 18.4. The proof was inspired by that of [33, Theorem 32.L].

Theorem 18.5 Let A : X ⇒ X* be a monotone linear relation with closed graph. Suppose A* is monotone. Then A is maximal monotone.

Proof. By Fact 18.2, it suffices to show that X × X* ⊆ gra A + gra(−J). To this end, let (x, x*) ∈ X × X* and define g : X × X* → ]−∞, +∞] by

  g(y, y*) := ½‖y‖² + ½‖y*‖*² + ⟨y*, y⟩ + ι_{gra A}(y − x, y* − x*).

Since gra A is closed, g is lower semicontinuous on X × X*. Note that (y, y*) ↦ ⟨y*, y⟩ + ι_{gra A}(y − x, y* − x*) = ⟨y*, y⟩ + ι_{gra A + (x,x*)}(y, y*). By Fact 18.1, g is convex and coercive. According to [32, Theorem 2.5.1(ii)], g has minimizers. Suppose that (z, z*) is a minimizer of g. Then (z − x, z* − x*) ∈ gra A, that is,

  (x, x*) ∈ gra A + (z, z*).   (18.5)

On the other hand, since (z, z*) is a minimizer of g, (0, 0) ∈ ∂g(z, z*). By a result of Rockafellar (see [17, Theorem 2.9.8] and [32, Theorem 3.2.4(ii)]), there exist (z₀*, z₀) ∈ ∂(ι_{gra A}(· − x, · − x*))(z, z*) = ∂ι_{gra A}(z − x, z* − x*) = (gra A)^⊥ and (v, v*) ∈ X × X* with v* ∈ Jz, z* ∈ Jv such that

  (0, 0) = (z*, z) + (v*, v) + (z₀*, z₀).

Then (−(z + v), z* + v*) ∈ gra A*. Since A* is monotone,

  ⟨z* + v*, z + v⟩ = ⟨z*, z⟩ + ⟨z*, v⟩ + ⟨v*, z⟩ + ⟨v*, v⟩ ≤ 0.   (18.6)

Note that ⟨z*, v⟩ = ‖z*‖*² = ‖v‖² and ⟨v*, z⟩ = ‖v*‖*² = ‖z‖², since z* ∈ Jv and v* ∈ Jz. Hence, by (18.6),

  ( ½‖z‖² + ½‖z*‖*² + ⟨z*, z⟩ ) + ( ½‖v‖² + ½‖v*‖*² + ⟨v*, v⟩ ) ≤ 0.

Both summands are nonnegative, so each vanishes; in particular ‖z‖ = ‖z*‖* and ⟨z*, z⟩ = −‖z‖‖z*‖*, that is, z* ∈ −Jz. By (18.5), (x, x*) ∈ gra A + gra(−J). Thus, X × X* ⊆ gra A + gra(−J). Hence, A is maximal monotone. □

Remark 18.6 Haraux provides a very simple proof of Theorem 18.5 in Hilbert spaces in [20, Theorem 10], but the proof could not be adapted to reflexive Banach spaces.
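A finite-dimensional sanity check (a standard example, included only for illustration): let X = R² and let A be the rotation by π/2, i.e., A(x₁, x₂) = (−x₂, x₁). For a single-valued linear operator with full domain, the relation adjoint defined above reduces to the usual transpose, so A* = Aᵀ = −A. Since ⟨Az, z⟩ = 0 for every z, both A and A* are monotone, and both are maximal monotone (being monotone, continuous, and everywhere defined); thus all three statements in Theorem 18.4 hold simultaneously, as they must.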
18.3 Fitzpatrick Functions and Fitzpatrick Functions of Order n

Now we introduce some properties of monotone linear relations.

Fact 18.7 (See [6].) Assume that A : X ⇒ X* is a monotone linear relation. Then the following hold.

(i) The function dom A → R : y ↦ ⟨y, Ay⟩ is convex.
(ii) dom A ⊆ (A0)^⊥. For every x ∈ (A0)^⊥, the function dom A → R : y ↦ ⟨x, Ay⟩ is linear.

Proof. (i): See [6, Proposition 2.3]. (ii): See [6, Proposition 2.2(i)(iii)]. □

Definition 18.8 Suppose A : X ⇒ X* is a linear relation. We say A is symmetric if gra A ⊆ gra A*. By the definition of A*, we then have that ⟨x, Ay⟩ is single valued and ⟨x, Ay⟩ = ⟨y, Ax⟩ for all x, y ∈ dom A.

For a monotone linear relation A : X ⇒ X* (where A is not necessarily symmetric), it will be convenient to define (as in, e.g., [3])

  q_A : x ↦ ½⟨x, Ax⟩, if x ∈ dom A; +∞, otherwise.   (18.7)

By Fact 18.7(i), q_A is well defined, at most single-valued, and convex. According to the definition of q_A, dom q_A = dom A. Moreover, since (0, 0) ∈ gra A and A is monotone, we have q_A ≥ 0.

The following result generalizes a result of Phelps–Simons (see [23, Theorem 5.1]) from symmetric monotone linear operators to symmetric monotone linear relations. We write f̄ for the lower semicontinuous hull of f.

Proposition 18.9 Let A : X ⇒ X* be a monotone symmetric linear relation. Then the following hold.

(i) q̄_A is convex, and q̄_A + ι_{dom A} = q_A.
(ii) gra A ⊆ gra ∂q̄_A. If A is maximal monotone, then A = ∂q̄_A.

Proof. Let x ∈ dom A.

(i): Since A is monotone, q_A is convex. Let y ∈ dom A. Since A is monotone and symmetric, by Fact 18.7(ii),

  0 ≤ ½⟨Ax − Ay, x − y⟩ = ½⟨Ay, y⟩ + ½⟨Ax, x⟩ − ⟨Ax, y⟩,   (18.8)

so q_A(y) ≥ ⟨Ax, y⟩ − q_A(x). Taking the lower semicontinuous hull in y, we deduce that q̄_A(y) ≥ ⟨Ax, y⟩ − q_A(x). For y = x, this gives q̄_A(x) ≥ q_A(x). On the other hand, q̄_A(x) ≤ q_A(x). Altogether, q̄_A(x) = q_A(x). Thus, (i) holds.

(ii): Let y ∈ dom A. By (18.8) and (i),

  q̄_A(y) ≥ q_A(x) + ⟨Ax, y − x⟩ = q̄_A(x) + ⟨Ax, y − x⟩.   (18.9)

Since dom q̄_A is contained in the closure of dom q_A = dom A, (18.9) yields q̄_A(z) ≥ q̄_A(x) + ⟨Ax, z − x⟩ for all z ∈ dom q̄_A. Hence Ax ⊆ ∂q̄_A(x). If A is maximal monotone, A = ∂q̄_A. Thus (ii) holds. □

Definition 18.10 (Fitzpatrick family) Let A : X ⇒ X* be a maximal monotone operator. The associated Fitzpatrick family ℱ_A consists of all functions F : X × X* → ]−∞, +∞] that are lower semicontinuous and convex, and that satisfy F ≥ ⟨·, ·⟩ and F = ⟨·, ·⟩ on gra A.

Following [21], for F : X × X* → ]−∞, +∞] it will be convenient to set F^⊺ : X* × X → ]−∞, +∞] : (x*, x) ↦ F(x, x*), and similarly for a function defined on X* × X.
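As an illustration of Definition 18.10 (an aside, not used in the sequel), let X be a Hilbert space and A = Id. Two members of ℱ_Id are easily exhibited: the function (x, x*) ↦ ¼‖x + x*‖², which is the Fitzpatrick function F_Id (cf. the computation following (18.1)), and the function (x, x*) ↦ ⟨x, x*⟩ + ι_{gra Id}(x, x*); both are convex, lower semicontinuous, dominate ⟨·, ·⟩, and agree with ⟨·, ·⟩ precisely on gra Id. A short computation shows that the latter function equals F_Id^{*⊺}, so by Fact 18.11 below these two functions are in fact the smallest and the largest members of ℱ_Id.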
Fact 18.11 (Fitzpatrick) (See [19, Theorem 3.10] or [16, Corollary 4.1].) Let A : X ⇒ X* be a maximal monotone operator. Then for every (x, x*) ∈ X × X*,

  F_A(x, x*) = min{ F(x, x*) | F ∈ ℱ_A }  and  F_A^{*⊺}(x, x*) = max{ F(x, x*) | F ∈ ℱ_A }.   (18.10)

Proposition 18.12 Let A : X ⇒ X* be a maximal monotone and symmetric linear relation. Then

  F_A(x, x*) = ½ q̄_A(x) + ½⟨x, x*⟩ + ½ q_A*(x*),  ∀(x, x*) ∈ X × X*.

Proof. Define k : X × X* → ]−∞, +∞] by

  k(z, z*) := ½ q̄_A(z) + ½⟨z, z*⟩ + ½ q_A*(z*).

Claim 1: F_A = k on dom A × X*.
Let (x, x*) ∈ X × X* and suppose that x ∈ dom A. Then

  F_A(x, x*) = sup_{(y,y*) ∈ gra A} ( ⟨x, y*⟩ + ⟨y, x*⟩ − ⟨y, y*⟩ )
  = sup_{y ∈ dom A} ( ⟨x, Ay⟩ + ⟨y, x*⟩ − 2 q_A(y) )
  = q_A(x) + sup_{y ∈ dom A} ( ⟨Ax, y⟩ + ⟨y, x*⟩ − q_A(x) − 2 q_A(y) )
  = ½ q_A(x) + ½ sup_{y ∈ dom A} ( ⟨Ax, 2y⟩ + ⟨2y, x*⟩ − q_A(x) − 4 q_A(y) )
  = ½ q_A(x) + ½ sup_{z ∈ dom A} ( ⟨Ax, z⟩ + ⟨z, x*⟩ − q_A(x) − q_A(z) )
  = ½ q_A(x) + ½ sup_{z ∈ dom A} ( ⟨z, x*⟩ − q_A(z − x) )
  = ½ q_A(x) + ½⟨x, x*⟩ + ½ sup_{z ∈ dom A} ( ⟨z − x, x*⟩ − q_A(z − x) )
  = ½ q_A(x) + ½⟨x, x*⟩ + ½ q_A*(x*)
  = k(x, x*)  (by Proposition 18.9(i)).

Claim 2: k is convex, proper, and lower semicontinuous on X × X*.
Since F_A is convex, Claim 1 shows that ½ q_A + ½⟨·, ·⟩ + ½ q_A* is convex on dom A × X*. Now we show that k is convex. Let {(a, a*), (b, b*)} ⊆ dom k and t ∈ ]0, 1[. Then {a, b} ⊆ dom q̄_A, so there exist sequences (a_n), (b_n) in dom A with a_n → a, b_n → b and q_A(a_n) → q̄_A(a), q_A(b_n) → q̄_A(b). Since ½ q_A + ½⟨·, ·⟩ + ½ q_A* is convex on dom A × X*, we have

  (½ q_A + ½⟨·, ·⟩ + ½ q_A*)(t a_n + (1−t) b_n, t a* + (1−t) b*)
   ≤ t (½ q_A + ½⟨·, ·⟩ + ½ q_A*)(a_n, a*) + (1−t)(½ q_A + ½⟨·, ·⟩ + ½ q_A*)(b_n, b*).   (18.11)

Taking lim inf on both sides of (18.11), we see that k(t a + (1−t) b, t a* + (1−t) b*) ≤ t k(a, a*) + (1−t) k(b, b*). Hence k is convex on X × X*. Thus, k is convex, proper, and lower semicontinuous.

Claim 3: F_A = k on X × X*.
To this end, we first observe that

  dom ∂k* = gra A⁻¹.   (18.12)

Indeed,

  (w*, w) ∈ dom ∂k*
  ⇔ (w*, w) ∈ dom ∂(2k)*
  ⇔ (a, a*) ∈ ∂(2k)*(w*, w) for some (a, a*) ∈ X × X*
  ⇔ (w*, w) ∈ ∂(2k)(a, a*) for some (a, a*) ∈ X × X*
  ⇔ (w* − a*, w − a) ∈ ∂(q̄_A ⊕ q_A*)(a, a*) for some (a, a*) ∈ X × X*
  ⇔ w* − a* ∈ ∂q̄_A(a), w − a ∈ ∂q_A*(a*) for some (a, a*) ∈ X × X*
  ⇔ w* − a* ∈ ∂q̄_A(a), a* ∈ ∂q̄_A(w − a) for some (a, a*) ∈ X × X*   (18.13)
  ⇔ w* − a* ∈ Aa, a* ∈ A(w − a) for some (a, a*) ∈ X × X*   (18.14)
  ⇔ (w, w*) ∈ gra A ⇔ (w*, w) ∈ gra A⁻¹,

where (18.13) follows from [32, Theorem 3.2.4(vi)(ii)] and (18.14) from Proposition 18.9(ii).

Next, we observe that

  k*(z*, z) = ⟨z, z*⟩,  ∀(z, z*) ∈ gra A.   (18.15)

Indeed, k ≥ ⟨·, ·⟩, and k(z, z*) = ⟨z, z*⟩ ⇔ q̄_A(z) + q_A*(z*) = ⟨z, z*⟩ ⇔ z* ∈ ∂q̄_A(z) = Az by Proposition 18.9(ii); hence Fact 18.11 implies that F_A ≤ k ≤ F_A^{*⊺}, and therefore F_A^⊺ ≤ k* ≤ F_A*. Then, by Fact 18.11 again, (18.15) holds.

Now, using (18.15), (18.12), and a result by Borwein (see [11, Theorem 1] or [32, Theorem 3.1.4(i)]), we have

  k = k** = (k* + ι_{dom ∂k*})* = (⟨·, ·⟩ + ι_{gra A⁻¹})* = F_A. □
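To see Proposition 18.12 at work in finite dimensions (an illustrative special case), let X = Rᵐ and let A be a symmetric positive semidefinite m × m matrix, which is a maximal monotone and symmetric linear operator. Then q_A(x) = q̄_A(x) = ½ xᵀAx, and a direct computation gives q_A*(x*) = ½ x*ᵀA†x* if x* ∈ ran A and q_A*(x*) = +∞ otherwise, where A† denotes the Moore–Penrose pseudoinverse. Proposition 18.12 therefore yields

  F_A(x, x*) = ¼ xᵀAx + ½⟨x, x*⟩ + ¼ x*ᵀA†x*  if x* ∈ ran A,  and  F_A(x, x*) = +∞ otherwise.

For A = Id this reduces to F_Id(x, x*) = ¼‖x + x*‖², in agreement with the computation following (18.1).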
Fact 18.13 (Recursion) (See [4, Proposition 2.13].) Let A : X ⇒ X* be monotone, and let n ∈ {2, 3, ...}. Then

  F_{A,n+1}(x, x*) = sup_{(a,a*) ∈ gra A} ( F_{A,n}(a, x*) + ⟨x − a, a*⟩ ),  ∀(x, x*) ∈ X × X*.

Theorem 18.14 Let A : X ⇒ X* be a maximal monotone and symmetric linear relation, let n ∈ {2, 3, ...}, and let (x, x*) ∈ X × X*. Then

  F_{A,n}(x, x*) = (n−1)/n · q̄_A(x) + (n−1)/n · q_A*(x*) + (1/n)·⟨x, x*⟩;   (18.16)

consequently, F_{A,n}(x, x*) = 2(n−1)/n · F_A(x, x*) + (2−n)/n · ⟨x, x*⟩. Moreover,

  F_{A,∞} = q̄_A ⊕ q_A* = 2F_A − ⟨·, ·⟩.   (18.17)

Proof. Let (x, x*) ∈ X × X*. The proof of (18.16) is by induction on n. If n = 2, the result follows from Proposition 18.12. Now assume that (18.16) holds for some n ≥ 2. Using Fact 18.13, we see that

  F_{A,n+1}(x, x*)
  = sup_{(a,a*) ∈ gra A} ( F_{A,n}(a, x*) + ⟨x − a, a*⟩ )
  = sup_{(a,a*) ∈ gra A} ( (n−1)/n · q̄_A(a) + (n−1)/n · q_A*(x*) + (1/n)⟨a, x*⟩ + ⟨x − a, a*⟩ )
  = (n−1)/n · q_A*(x*) + sup_{(a,a*) ∈ gra A} ( (n−1)/(2n)·⟨a, a*⟩ + (1/n)⟨a, x*⟩ + ⟨x, a*⟩ − ⟨a, a*⟩ )   (18.18)
  = (n−1)/n · q_A*(x*) + sup_{(a,a*) ∈ gra A} ( (1/n)⟨a, x*⟩ + ⟨x, a*⟩ − (n+1)/(2n)·⟨a, a*⟩ )
  = (n−1)/n · q_A*(x*) + 2n/(n+1) · sup_{(b,b*) ∈ gra A} ( (1/n)⟨b, x*⟩ + ⟨x, b*⟩ − ⟨b, b*⟩ )
  = (n−1)/n · q_A*(x*) + 2n/(n+1) · F_A(x, x*/n)
  = (n−1)/n · q_A*(x*) + n/(n+1) · q̄_A(x) + 1/(n+1) · ⟨x, x*⟩ + 1/(n(n+1)) · q_A*(x*)   (18.19)
  = n/(n+1) · q̄_A(x) + n/(n+1) · q_A*(x*) + 1/(n+1) · ⟨x, x*⟩,   (18.20)

which is (18.16) for n + 1. Here (18.18) follows from Proposition 18.9(i) (for (a, a*) ∈ gra A one has q̄_A(a) = q_A(a) = ½⟨a, a*⟩), the fifth equality uses the substitution (b, b*) = (n+1)/(2n) · (a, a*) ∈ gra A, and (18.19) follows from Proposition 18.12 together with q_A*(x*/n) = q_A*(x*)/n², since q_A is positively homogeneous of degree 2.

Thus, by Proposition 18.12, F_{A,n}(x, x*) = 2(n−1)/n · F_A(x, x*) + (2−n)/n · ⟨x, x*⟩. By (18.16), dom F_{A,n} = dom(q̄_A ⊕ q_A*). Now suppose that (x, x*) ∈ dom F_{A,n}. Since

  q̄_A(x) + q_A*(x*) − F_{A,n}(x, x*) = (1/n)( q̄_A(x) + q_A*(x*) − ⟨x, x*⟩ ) ≥ 0,

we have F_{A,n}(x, x*) → (q̄_A ⊕ q_A*)(x, x*) as n → ∞; moreover, q̄_A ⊕ q_A* = 2F_A − ⟨·, ·⟩ by Proposition 18.12. Thus, (18.17) holds. □

Remark 18.15 Theorem 18.14 generalizes and simplifies [1, Example 4.4] and [3, Example 6.4]. See Corollary 18.17.

Remark 18.16 Identity (18.16) does not hold for nonsymmetric linear relations. See [3, Example 2.8] for an example in which A is a skew linear operator and (18.16) fails.

Corollary 18.17 Let A : X → X* be a maximal monotone and symmetric linear operator, let n ∈ {2, 3, ...}, and let (x, x*) ∈ X × X*. Then

  F_{A,n}(x, x*) = (n−1)/n · q_A(x) + (n−1)/n · q_A*(x*) + (1/n)·⟨x, x*⟩,   (18.21)

and F_{A,∞} = q_A ⊕ q_A*.   (18.22)

If X is a Hilbert space, then

  F_{Id,n}(x, x*) = (n−1)/(2n)·‖x‖² + (n−1)/(2n)·‖x*‖² + (1/n)·⟨x, x*⟩,   (18.23)

and F_{Id,∞} = ½‖·‖² ⊕ ½‖·‖².   (18.24)
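Formula (18.23) can also be checked numerically against the recursion of Fact 18.13. The short script below is only an illustrative sketch: it assumes the simplest case A = Id on X = R (so gra Id = {(a, a) : a ∈ R}), replaces every supremum by a maximum over a finite grid, and uses an arbitrary test point, so the agreement it reports is approximate up to the grid resolution.

```python
import numpy as np

# Sanity check (illustration only) of (18.23) against the recursion of
# Fact 18.13 for A = Id on X = R.  Grid bounds and test point are ad hoc.
grid = np.linspace(-20.0, 20.0, 2001)   # finite grid standing in for R

def closed_form(n, x, xs):
    """Right-hand side of (18.23) for A = Id on R."""
    return (n - 1) / (2.0 * n) * x ** 2 + (n - 1) / (2.0 * n) * xs ** 2 + x * xs / n

x, xs = 0.7, -1.3                        # arbitrary test point (x, x*)

# F_prev[i] ~ F_{Id,2}(grid[i], xs) = F_Id(grid[i], xs), directly from (18.1).
F_prev = np.array([np.max(p * grid + grid * xs - grid * grid) for p in grid])
val = np.max(x * grid + grid * xs - grid * grid)    # F_{Id,2}(x, xs)

for n in range(2, 8):
    print(f"n = {n}:  recursion ~ {val:.6f},  formula (18.23) = {closed_form(n, x, xs):.6f}")
    # One step of Fact 18.13, first at the test point, then on the whole grid:
    # F_{Id,n+1}(p, xs) = sup_a ( F_{Id,n}(a, xs) + (p - a) * a ).
    val = np.max(F_prev + (x - grid) * grid)
    F_prev = (F_prev[None, :] + (grid[:, None] - grid[None, :]) * grid[None, :]).max(axis=1)
```

For each n, the two printed columns should agree up to the grid resolution, which is exactly what (18.23) predicts.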
Definition 18.18 Let F₁, F₂ : X × X* → ]−∞, +∞]. The partial inf-convolution F₁ ✷₂ F₂ is the function defined on X × X* by

  F₁ ✷₂ F₂ : (x, x*) ↦ inf_{y* ∈ X*} ( F₁(x, x* − y*) + F₂(x, y*) ).

Theorem 18.19 (nth order Fitzpatrick function of the sum) Let A, B : X ⇒ X* be maximal monotone and symmetric linear relations, and let n ∈ {2, 3, ...}. Suppose that dom A − dom B is closed. Then F_{A+B,n} = F_{A,n} ✷₂ F_{B,n}. Moreover, F_{A+B,∞} = F_{A,∞} ✷₂ F_{B,∞}.

Proof. By [28, Theorem 5.5] or [30], A + B is maximal monotone. Hence, A + B is a maximal monotone and symmetric linear relation. Let (x, x*) ∈ X × X*. Then, by Theorem 18.14,

  (F_{A,n} ✷₂ F_{B,n})(x, x*)
  = inf_{y* ∈ X*} ( 2(n−1)/n · F_A(x, y*) + (2−n)/n · ⟨x, y*⟩ + 2(n−1)/n · F_B(x, x* − y*) + (2−n)/n · ⟨x, x* − y*⟩ )
  = (2−n)/n · ⟨x, x*⟩ + 2(n−1)/n · inf_{y* ∈ X*} ( F_A(x, y*) + F_B(x, x* − y*) )
  = (2−n)/n · ⟨x, x*⟩ + 2(n−1)/n · (F_A ✷₂ F_B)(x, x*)
  = (2−n)/n · ⟨x, x*⟩ + 2(n−1)/n · F_{A+B}(x, x*)   (by [6, Theorem 5.10])
  = F_{A+B,n}(x, x*)   (by Theorem 18.14).

Similarly, using (18.17), we have F_{A+B,∞} = F_{A,∞} ✷₂ F_{B,∞}. □
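A simple consistency check of Theorem 18.19, again in the Hilbert-space setting of the earlier illustrations and with n = 2: take A = B = Id, so A + B = 2Id and dom A − dom B = X is closed. On the one hand, q_{2Id}(x) = ‖x‖² and q_{2Id}*(x*) = ¼‖x*‖², so Proposition 18.12 gives F_{2Id}(x, x*) = ½‖x‖² + ½⟨x, x*⟩ + ⅛‖x*‖². On the other hand,

  (F_Id ✷₂ F_Id)(x, x*) = inf_{y*} ( ¼‖x + x* − y*‖² + ¼‖x + y*‖² ),

and the infimum is attained at y* = x*/2, giving ½‖x + x*/2‖² = ½‖x‖² + ½⟨x, x*⟩ + ⅛‖x*‖², as predicted.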
Remark 18.20 Theorem 18.19 generalizes [3, Theorem 5.4].

Acknowledgements The author thanks Dr. Heinz Bauschke and Dr. Xianfu Wang for valuable discussions. The author also thanks the two anonymous referees for their careful reading and their pertinent comments.

References

1. Bartz, S., Bauschke, H.H., Borwein, J.M., Reich, S., Wang, X.: Fitzpatrick functions, cyclic monotonicity and Rockafellar's antiderivative. Nonlinear Anal. 66, 1198–1223 (2007)
2. Bauschke, H.H., Borwein, J.M.: Maximal monotonicity of dense type, local maximal monotonicity, and monotonicity of the conjugate are all the same for continuous linear operators. Pacific J. Math. 189, 1–20 (1999)
3. Bauschke, H.H., Borwein, J.M., Wang, X.: Fitzpatrick functions and continuous linear monotone operators. SIAM J. Optim. 18, 789–809 (2007)
4. Bauschke, H.H., Lucet, Y., Wang, X.: Primal-dual symmetric antiderivatives for cyclically monotone operators. SIAM J. Control Optim. 46, 2031–2051 (2007)
5. Bauschke, H.H., Wang, X., Yao, L.: An answer to S. Simons' question on the maximal monotonicity of the sum of a maximal monotone linear operator and a normal cone operator. Set-Valued Var. Anal. 17, 195–201 (2009)
6. Bauschke, H.H., Wang, X., Yao, L.: Monotone linear relations: maximality and Fitzpatrick functions. J. Convex Anal. 16, 673–686 (2009)
7. Bauschke, H.H., Wang, X., Yao, L.: Autoconjugate representers for linear monotone operators. Math. Program. Ser. B 123, 5–24 (2010)
8. Bauschke, H.H., Wang, X., Yao, L.: Examples of discontinuous maximal monotone linear operators and the solution to a recent problem posed by B.F. Svaiter. J. Math. Anal. Appl. 370, 224–241 (2010)
9. Bauschke, H.H., Wang, X., Yao, L.: On Borwein–Wiersma decompositions of monotone linear relations. SIAM J. Optim. 20, 2636–2652 (2010)
10. Bauschke, H.H., Wang, X., Yao, L.: On the maximal monotonicity of the sum of a maximal monotone linear relation and the subdifferential operator of a sublinear function. To appear in Proceedings of the Haifa Workshop on Optimization Theory and Related Topics, Contemp. Math., Amer. Math. Soc., Providence, RI (2010). http://arxiv.org/abs/1001.0257v1
11. Borwein, J.M.: A note on ε-subgradients and maximal monotonicity. Pacific J. Math. 103, 307–314 (1982)
12. Borwein, J.M., Vanderwerff, J.D.: Convex Functions. Cambridge University Press (2010)
13. Boţ, R.I., Csetnek, E.R.: On extension results for n-cyclically monotone operators in reflexive Banach spaces. J. Math. Anal. Appl. 367, 693–698 (2010)
14. Brézis, H., Browder, F.E.: Linear maximal monotone operators and singular nonlinear integral equations of Hammerstein type. In: Nonlinear Analysis (collection of papers in honor of Erich H. Rothe), Academic Press, 31–42 (1978)
15. Burachik, R.S., Iusem, A.N.: Set-Valued Mappings and Enlargements of Monotone Operators. Springer (2008)
16. Burachik, R.S., Svaiter, B.F.: Maximal monotone operators, convex functions and a special family of enlargements. Set-Valued Anal. 10, 297–316 (2002)
17. Clarke, F.H.: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)
18. Cross, R.: Multivalued Linear Operators. Marcel Dekker (1998)
19. Fitzpatrick, S.: Representing monotone operators by convex functions. In: Workshop/Miniconference on Functional Analysis and Optimization (Canberra 1988), Proceedings of the Centre for Mathematical Analysis 20, 59–65. Australian National University, Canberra, Australia (1988)
20. Haraux, A.: Nonlinear Evolution Equations – Global Behavior of Solutions. Springer, Berlin (1981)
21. Penot, J.-P.: The relevance of convex analysis for the study of monotonicity. Nonlinear Anal. 58, 855–871 (2004)
22. Phelps, R.R.: Convex Functions, Monotone Operators and Differentiability, 2nd edn. Springer (1993)
23. Phelps, R.R., Simons, S.: Unbounded linear monotone operators on nonreflexive Banach spaces. J. Convex Anal. 5, 303–328 (1998)
24. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer (2004)
25. Simons, S.: Minimax and Monotonicity. Springer (1998)
26. Simons, S.: From Hahn-Banach to Monotonicity. Springer (2008)
27. Simons, S., Zălinescu, C.: A new proof for Rockafellar's characterization of maximal monotone operators. Proc. Amer. Math. Soc. 132, 2969–2972 (2004)
28. Simons, S., Zălinescu, C.: Fenchel duality, Fitzpatrick functions and maximal monotonicity. J. Nonlinear Convex Anal. 6, 1–22 (2005)
29. Svaiter, B.F.: Non-enlargeable operators and self-cancelling operators. J. Convex Anal. 17, 309–320 (2010)
30. Voisei, M.D.: The sum theorem for linear maximal monotone operators. Math. Sci. Res. J. 10, 83–85 (2006)
31. Voisei, M.D., Zălinescu, C.: Linear monotone subspaces of locally convex spaces. Set-Valued Var. Anal. 18, 29–55 (2010)
32. Zălinescu, C.: Convex Analysis in General Vector Spaces. World Scientific Publishing (2002)
33. Zeidler, E.: Nonlinear Functional Analysis and its Applications, Vol. II/B: Nonlinear Monotone Operators. Springer, Berlin (1990)
