Results 1-10 of 139
Interior-Point Methods
, 2000
Abstract

Cited by 598 (15 self)
The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming, monotone linear complementarity, and convex programming over sets that can be characterized by self-concordant barrier functions.
DETERMINANT MAXIMIZATION WITH LINEAR MATRIX INEQUALITY CONSTRAINTS
Abstract

Cited by 223 (18 self)
The problem of maximizing the determinant of a matrix subject to linear matrix inequalities arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory. It can also be considered as a generalization of the semidefinite programming problem. We give an overview of the applications of the determinant maximization problem, pointing out simple cases where specialized algorithms or analytical solutions are known. We then describe an interior-point method, with a simplified analysis of the worst-case complexity and numerical results that indicate that the method is very efficient, both in theory and in practice. Compared to existing specialized algorithms (where they are available), the interior-point method will generally be slower; the advantage is that it handles a much wider variety of problems.
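One of the simple cases with an analytical solution that the abstract alludes to can be made concrete: when the matrix is restricted to be diagonal with a fixed trace budget, the determinant is maximized at the equal allocation, by the AM-GM inequality. A minimal sketch (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def maxdet_diag(budget, n):
    """Maximize det(diag(x)) subject to sum(x) = budget, x > 0.

    By the AM-GM inequality, a product of positive numbers with a
    fixed sum is maximized when all factors are equal: x_i = budget / n.
    """
    return np.full(n, budget / n)

x_star = maxdet_diag(3.0, 3)           # equal allocation [1., 1., 1.]
x_pert = np.array([1.2, 0.9, 0.9])     # same trace, strictly smaller determinant
```

Any other feasible allocation with the same trace, such as `x_pert`, yields a smaller determinant, which is why the general problem only becomes hard once genuine matrix inequality constraints couple the entries.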
INTERIOR PATH FOLLOWING PRIMAL-DUAL ALGORITHMS. PART I: LINEAR PROGRAMMING
, 1989
Abstract

Cited by 198 (11 self)
We describe a primal-dual interior point algorithm for linear programming problems which requires a total of O(√n L) iterations, where L is the input size. Each iteration updates a penalty parameter and finds the Newton direction associated with the Karush-Kuhn-Tucker system of equations which characterizes a solution of the logarithmic barrier function problem. The algorithm is based on the path-following idea.
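The iteration the abstract describes can be sketched for a small dense problem with a strictly feasible starting point. This is a generic short-step variant for illustration, not the authors' exact method, and the toy problem data below are assumptions:

```python
import numpy as np

def pd_path_follow(A, b, c, x, y, s, iters=40, sigma=0.1):
    """Primal-dual path following for min c@x s.t. A@x = b, x >= 0.

    Each pass updates the penalty parameter mu and takes a damped
    Newton step on the KKT system of the logarithmic barrier problem.
    """
    n = len(c)
    for _ in range(iters):
        mu = x @ s / n                          # duality measure
        r_p = b - A @ x                         # primal residual
        r_d = c - A.T @ y - s                   # dual residual
        r_c = sigma * mu * np.ones(n) - x * s   # centering residual
        d = x / s
        M = A @ (d[:, None] * A.T)              # normal-equations matrix
        dy = np.linalg.solve(M, r_p + A @ (d * (r_d - r_c / x)))
        ds = r_d - A.T @ dy
        dx = (r_c - x * ds) / s
        alpha = 1.0                             # fraction-to-boundary step length
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# illustrative toy LP: min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 1
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
x, y, s = pd_path_follow(A, b, c, np.full(3, 1 / 3), np.zeros(1), c.copy())
```

Starting from the strictly feasible point x = (1/3, 1/3, 1/3), y = 0, s = c, the iterates drive the duality measure toward zero and x converges to the vertex (1, 0, 0).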
A unified approach to interior-point algorithms for linear complementarity problems.
 Lecture Notes in Computer Science,
, 1991
A Primal-Dual Interior Point Method Whose Running Time Depends Only on the Constraint Matrix
, 1995
Abstract

Cited by 56 (8 self)
We propose a primal-dual "layered-step" interior point (LIP) algorithm for linear programming with data given by real numbers. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns an exact optimum after a finite number of steps; in particular, after O(n^3.5 c(A)) iterations, where c(A) is a function of the ...
A Cutting Plane Method from Analytic Centers for Stochastic Programming
 Mathematical Programming
, 1994
Abstract

Cited by 47 (17 self)
The stochastic linear programming problem with recourse has a dual block-angular structure. It can thus be handled by Benders decomposition or by Kelley's method of cutting planes; equivalently, the dual problem has a primal block-angular structure and can be handled by Dantzig-Wolfe decomposition; the two approaches are in fact identical by duality. Here we shall investigate the use of the method of cutting planes from analytic centers applied to similar formulations. The only significant difference from the aforementioned methods is that new cutting planes (or columns, by duality) will be generated not from the optimum of the linear programming relaxation, but from the analytic center of the localization set.
1 Introduction. The study of optimization problems in the presence of uncertainty still taxes the limits of methodology and software. One of the most approachable settings is that of two-stage planning under uncertainty, in which a first-stage decision has to be taken bef...
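The analytic center the abstract refers to is the minimizer of the logarithmic barrier of the current localization set; for a polyhedron {x : A x <= b} it can be computed by a damped Newton iteration. A minimal sketch under that assumption (function name and example data are illustrative):

```python
import numpy as np

def analytic_center(A, b, x0, iters=20):
    """Find the analytic center of {x : A @ x <= b} by damped Newton steps.

    The analytic center minimizes the logarithmic barrier
    -sum(log(b - A @ x)) over the interior; x0 must be strictly feasible.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        slack = b - A @ x
        grad = A.T @ (1.0 / slack)                     # barrier gradient
        hess = A.T @ ((1.0 / slack**2)[:, None] * A)   # barrier Hessian
        step = np.linalg.solve(hess, grad)
        t = 1.0
        while np.any(b - A @ (x - t * step) <= 0):     # damp to stay interior
            t *= 0.5
        x = x - t * step
    return x

# illustrative localization set: the unit box [0, 1]^2, whose center is (0.5, 0.5)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
center = analytic_center(A, b, np.array([0.3, 0.7]))
```

In the cutting-plane method, each new constraint added to A and b shifts this center, and the next cut is generated there rather than at a vertex of the relaxation.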
HOMOTOPY CONTINUATION METHODS FOR NONLINEAR COMPLEMENTARITY PROBLEMS
, 1991
Abstract

Cited by 43 (3 self)
A complementarity problem with a continuous mapping f from R^n into itself can be written as the system of equations F(x, y) = 0 and (x, y) >= 0. Here F is the mapping from R^{2n} into itself defined by F(x, y) = (x1 y1, x2 y2, ..., xn yn, y - f(x)). Under the assumption that the mapping f is a P0-function, we study various aspects of homotopy continuation methods that trace a trajectory consisting of solutions of the family of systems of equations F(x, y) = t(a, b) and (x, y) > 0 as the parameter t > 0 decreases to 0. Here (a, b) denotes a 2n-dimensional constant positive vector. We establish the existence of a trajectory which leads to a solution of the problem, and then present a numerical method for tracing the trajectory. We also discuss the global and local convergence of the method.
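The trajectory the abstract describes can be traced numerically in the simplest scalar case n = 1: solve (x*y, y - f(x)) = t*(a, b) by Newton's method while shrinking t toward 0. The mapping f, the choice (a, b) = (1, 0), and the schedule for t below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def trace_ncp(f, fprime, x, y, t=1.0, t_min=1e-10, shrink=0.1):
    """Trace solutions of (x*y, y - f(x)) = t*(a, b) with (a, b) = (1, 0)
    as t decreases to 0, for a scalar complementarity problem (n = 1)."""
    while t > t_min:
        for _ in range(10):  # Newton's method on the 2x2 system at fixed t
            F = np.array([x * y - t, y - f(x)])
            J = np.array([[y, x], [-fprime(x), 1.0]])
            dx, dy = np.linalg.solve(J, -F)
            x, y = x + dx, y + dy
        t *= shrink
    return x, y

# illustrative mapping f(x) = x - 1: the complementarity solution is x = 1, y = 0
x, y = trace_ncp(lambda v: v - 1.0, lambda v: 1.0, x=2.0, y=0.5)
```

Each reduction of t warm-starts Newton's method from the previous point on the trajectory, which is what makes following the path cheap compared with solving the t = 0 system directly.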
The many facets of linear programming
 Math. Program., 91(3, Ser. B):417–436, 2002. ISMP 2000, Part 1