Self-Regularity

Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms

Jiming Peng
Cornelis Roos
Tamás Terlaky
Copyright Date: 2002
Pages: 208
Stable URL: http://www.jstor.org/stable/j.ctt7sf0f
    Book Description:

    Research on interior-point methods (IPMs) has dominated the field of mathematical programming for the last two decades. Two contrasting approaches in the analysis and implementation of IPMs are the so-called small-update and large-update methods, although, until now, there has been a notorious gap between the theory and practical performance of these two strategies. This book comes close to bridging that gap, presenting a new framework for the theory of primal-dual IPMs based on the notion of the self-regularity of a function.

    The authors deal with linear optimization, nonlinear complementarity problems, semidefinite optimization, and second-order conic optimization problems. The framework also covers large classes of linear complementarity problems and convex optimization. The algorithm considered can be interpreted as a path-following method or a potential reduction method. Starting from a primal-dual strictly feasible point, the algorithm chooses a search direction defined by some Newton-type system derived from the self-regular proximity. The iterate is then updated, with the iterates staying in a certain neighborhood of the central path until an approximate solution to the problem is found. By extensively exploring some intriguing properties of self-regular functions, the authors establish that the complexity of large-update IPMs can come arbitrarily close to the best known iteration bounds of IPMs.

    Researchers and postgraduate students in all areas of linear and nonlinear optimization will find this book an important and invaluable aid to their work.

    eISBN: 978-1-4008-2513-4
    Subjects: Mathematics

Table of Contents

  1. Front Matter (pp. i-iv)
  2. Table of Contents (pp. v-vi)
  3. Preface (pp. vii-viii)
  4. Acknowledgments (pp. ix-x)
    Jiming Peng, Cornelis Roos and Tamás Terlaky
  5. Notation (pp. xi-xiv)
  6. List of Abbreviations (pp. xv-xvi)
  7. Chapter 1 Introduction and Preliminaries (pp. 1-26)

    There is no doubt that the major breakthroughs in the field of mathematical programming have always been inaugurated in linear optimization. Linear optimization, hereafter LO, deals with a simple mathematical model that exhibits a wonderful combination of two contrasting aspects: it can be considered as both a continuous and a combinatorial problem. Its continuous aspect lies in finding a global minimizer of a continuous linear function over a convex polyhedral constraint set, and its combinatorial character in seeking optimality over the set of vertices of a polyhedron. The Simplex algorithm [17], invented by Dantzig in the mid-1940s, explicitly...

  8. Chapter 2 Self-Regular Functions and Their Properties (pp. 27-46)

    As already mentioned in Section 1.3.2, proximity measures or potential functions play an important role in both the theoretical study and the practical implementation of IPMs. From the outset, the pioneering work of Karmarkar [52] employed a potential function to keep the iterative sequence in the interior of the feasible set. The sharp observation by Gill et al. [28] made it clear that Karmarkar’s potential function is essentially related to the classical logarithmic barrier function [23]. This led to a rebirth of some classical algorithms for nonlinear programming.

    It was the discovery of the central path (and the related...

  9. Chapter 3 Primal-Dual Algorithms for Linear Optimization Based on Self-Regular Proximities (pp. 47-66)

    We start with the definition of self-regular functions on $\Re^n_{++}$. For simplicity, if no confusion occurs, we sometimes abuse some of the notation, such as the function $\psi(\cdot)$.

    Definition 3.1.1 A function $\psi(x):\ \Re^n_{++} \to \Re^n_+$ defined by

    $\psi(x) = (\psi(x_1),\ \ldots,\ \psi(x_n))^{\mathrm{T}}$, (3.1)

    is said to be self-regular if the kernel function $\psi(t):\ \Re_{++} \to \Re_+$ is self-regular.

    Note that from (3.1), we can define the power $x^r$ for any real number $r$ when $x > 0$, as well as $\Psi'(x) \in \Re^n$ and $\Psi''(x) \in \Re^n$.

    To maintain consistency of notation in this context, we denote by $\Psi(x):\ \Re^n_{++} \to \Re_+$ the function

    $\Psi(x) := \sum_{i=1}^n \psi(x_i)$. (3.2)

    From definitions (3.1) and (3.2), one can easily see that self-regular functions in $\Re^n_{++}$ also enjoy...
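    The separable proximity (3.2) is easy to evaluate numerically. As a minimal sketch (not code from the book), the function below uses the classical kernel $\psi(t) = (t^2-1)/2 - \ln t$, one member of the self-regular family treated in the text; the function and variable names are illustrative assumptions.

```python
import math

def psi(t: float) -> float:
    """Sample kernel psi(t) = (t^2 - 1)/2 - ln t, a member of the
    self-regular family (chosen here only for illustration)."""
    if t <= 0:
        raise ValueError("kernel is defined only on the positive reals")
    return 0.5 * (t * t - 1.0) - math.log(t)

def Psi(x):
    """Separable proximity (3.2): Psi(x) = sum_i psi(x_i)."""
    return sum(psi(xi) for xi in x)

# psi attains its minimum value 0 at t = 1, so Psi vanishes exactly
# at the all-ones vector, the target point on the central path.
print(Psi([1.0, 1.0, 1.0]))   # 0.0
print(Psi([0.5, 2.0]))        # positive away from the all-ones vector
```

    Any other self-regular kernel could be substituted for `psi` without changing the rest of the sketch, mirroring how the book's analysis is parameterized by the kernel.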

  10. Chapter 4 Interior-Point Methods for Complementarity Problems Based on Self-Regular Proximities (pp. 67-98)

    The classical complementarity problem (CP) is to find an $x$ in $\Re^n$ such that

    $x \ge 0,\quad f(x) \ge 0,\quad x^{\mathrm{T}} f(x) = 0$,

    where $f(x) = (f_1(x),\ \ldots,\ f_n(x))^{\mathrm{T}}$ is a mapping from $\Re^n$ into itself. To be more specific, we also call it a linear complementarity problem (LCP) if the involved mapping $f(x)$ is affine, that is, $f(x) = Mx + c$ for some $M \in \Re^{n \times n}$, $c \in \Re^n$; otherwise we call it a nonlinear complementarity problem (NCP) when $f(x)$ is nonlinear.

    In view of the nonnegativity requirements on $x$ and $f(x)$ at the solution point of an NCP, since $x^{\mathrm{T}} f(x) = \sum_{i=1}^n x_i f_i(x)$, a standard CP can be equivalently stated as

    $x \ge 0,\quad f(x) \ge 0,\quad x f(x) = 0$,

    where...
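    The three conditions above can be checked directly for a candidate point. The following sketch (hypothetical helper names, not code from the book) does this for an LCP with $f(x) = Mx + c$, verifying nonnegativity of $x$ and $f(x)$ and the componentwise products $x_i f_i(x)$:

```python
def lcp_residuals(M, c, x):
    """For the LCP f(x) = Mx + c, return f(x) and the componentwise
    products x_i * f_i(x); a solution needs x >= 0, f(x) >= 0, and
    every product equal to zero."""
    n = len(x)
    fx = [sum(M[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
    comp = [x[i] * fx[i] for i in range(n)]
    return fx, comp

def is_lcp_solution(M, c, x, tol=1e-9):
    fx, comp = lcp_residuals(M, c, x)
    return (all(xi >= -tol for xi in x)
            and all(fi >= -tol for fi in fx)
            and all(abs(p) <= tol for p in comp))

# Tiny illustrative data (an assumption): M = I, c = (-1, 1).
# x = (1, 0) gives f(x) = (0, 1), so complementarity holds.
M = [[1.0, 0.0], [0.0, 1.0]]
c = [-1.0, 1.0]
print(is_lcp_solution(M, c, [1.0, 0.0]))  # True
print(is_lcp_solution(M, c, [2.0, 0.0]))  # False: x_1 * f_1(x) = 2
```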

  11. Chapter 5 Primal-Dual Interior-Point Methods for Semidefinite Optimization Based on Self-Regular Proximities (pp. 99-124)

    The first paper dealing with SDO problems dates back to the early 1960s [11]. For many years thereafter, the whole topic of SDO stayed quiet except for a few isolated results scattered in the literature. The situation changed dramatically around the beginning of the 1990s, when SDO started to emerge as one of the fastest-developing areas of mathematical programming.

    Several reasons can be given why SDO attracted so little interest for such a long time. One of the main reasons was the lack of robust and efficient algorithms for solving SDO problems before the 1990s. The thrust of SDO research...

  12. Chapter 6 Primal-Dual Interior-Point Methods for Second-Order Conic Optimization Based on Self-Regular Proximities (pp. 125-158)

    Mathematically, a typical second-order cone can be defined by

    $K = \left\{ (x_1,\ x_2,\ \ldots,\ x_n) \in \Re^n :\ x_1^2 - \sum_{i=2}^n x_i^2 \ge 0,\ x_1 \ge 0 \right\}$.

    This cone is often referred to as the Lorentz cone in physics, and we also use its descriptive nickname: the ice-cream cone.¹ Second-order conic optimization (SOCO) is the problem of minimizing a linear objective function over the intersection of an affine set and the direct product of several second-order cones. Hence, from a purely mathematical viewpoint, the constraint functions defining the second-order cone are nothing more than specific quadratic functions.

    In light of the above-mentioned relation, SOCO is naturally recognized as a generalization of LO. Several important...
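    Membership in the cone $K$ displayed above reduces to two scalar tests, $x_1 \ge 0$ and $x_1^2 \ge \sum_{i \ge 2} x_i^2$. A minimal sketch of that check (illustrative only, not code from the book):

```python
def in_second_order_cone(x, tol=1e-12):
    """Lorentz (ice-cream) cone membership: x_1 >= 0 and
    x_1^2 - (x_2^2 + ... + x_n^2) >= 0, i.e. x_1 bounds the
    Euclidean norm of the remaining coordinates."""
    if x[0] < 0:
        return False
    return x[0] * x[0] - sum(xi * xi for xi in x[1:]) >= -tol

print(in_second_order_cone([5.0, 3.0, 4.0]))   # True: 5^2 = 3^2 + 4^2
print(in_second_order_cone([1.0, 1.0, 1.0]))   # False: 1 < sqrt(2)
print(in_second_order_cone([-1.0, 0.0, 0.0]))  # False: x_1 < 0
```

    Points with $x_1^2 = \sum_{i \ge 2} x_i^2$, such as the first example, lie on the boundary of the cone; this is exactly the quadratic constraint the text describes.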

  13. Chapter 7 Initialization: Embedding Models for Linear Optimization, Complementarity Problems, Semidefinite Optimization and Second-Order Conic Optimization (pp. 159-168)

    Like the algorithms proposed in the present work, many IPMs start with a strictly feasible initial point. However, obtaining such a point is usually as difficult as solving the underlying problem itself. On the other hand, it is desirable for a complete algorithm not only to identify the optimal solution when the problem is solvable, but also to detect infeasibility when the problem is either primal or dual infeasible. A robust and efficient way of handling these issues is to apply IPMs to an elegant augmented model: the so-called self-dual embedding model.

    Ye, Todd and Mizuno [128] introduced the homogeneous...

  14. Chapter 8 Conclusions (pp. 169-174)

    Starting from Karmarkar’s remarkable paper [52], the study of IPMs has flourished in many areas of mathematical programming and greatly changed the state of the art in numerous areas of optimization theory and applications. As we mentioned in Chapter 1, Karmarkar’s original scheme is closely associated with the classical logarithmic barrier approach to nonlinear problems. This vital observation later led to the development of the outstanding self-concordance theory for classes of optimization problems [83]. Since the middle of the 1990s, research on IPMs has shifted its focus from the theory of complexity to the application side of conic linear...

  15. References (pp. 175-182)
  16. Index (pp. 183-185)
