Results 1–10 of 192
Second-Order Cone Programming
 Mathematical Programming
, 2001
Cited by 247 (11 self)
In this paper we survey the second-order cone programming problem (SOCP). First we present several applications of the problem in various areas of engineering and robust optimization problems. We also give examples of optimization problems that can be cast as SOCPs. Next we review an algebraic structure that is connected to SOCP. This algebra is a special case of a Euclidean Jordan algebra. After presenting duality theory, complementary slackness conditions, and definitions and algebraic characterizations of primal and dual nondegeneracy and strict complementarity, we review the logarithmic barrier function for the SOCP problem and survey the path-following interior-point algorithms for it. Next we examine numerically stable methods for implementing the interior-point methods and study ways that sparsity in the input data can be exploited. Finally we give some current and future research directions in SOCP.
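The second-order (Lorentz) cone underlying SOCP has a simple closed-form Euclidean projection, which is handy when experimenting with the problems surveyed above. A minimal NumPy sketch of that well-known formula (the function name `proj_soc` is mine, not from the paper):

```python
import numpy as np

def proj_soc(t, x):
    """Euclidean projection of (t, x) onto the second-order cone
    K = {(t, x) : ||x||_2 <= t}, using the standard closed form."""
    nx = np.linalg.norm(x)
    if nx <= t:          # already inside the cone
        return t, x.copy()
    if nx <= -t:         # inside the polar cone: project to the origin
        return 0.0, np.zeros_like(x)
    alpha = 0.5 * (t + nx)          # boundary case: average t and ||x||
    return alpha, alpha * x / nx    # rescale x to have norm alpha

# (1, [3, 4]) has ||x|| = 5 > t = 1, so it projects onto the cone boundary.
t_p, x_p = proj_soc(1.0, np.array([3.0, 4.0]))
```

The projected point satisfies `||x_p|| = t_p`, i.e. it lies exactly on the boundary of the cone.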
Primal-Dual Interior-Point Methods for Self-Scaled Cones
 SIAM Journal on Optimization
, 1995
Cited by 205 (12 self)
In this paper we continue the development of a theoretical foundation for efficient primal-dual interior-point algorithms for convex programming problems expressed in conic form, when the cone and its associated barrier are self-scaled (see [9]). The class of problems under consideration includes linear programming, semidefinite programming and quadratically constrained quadratic programming problems. For such problems we introduce a new definition of affine-scaling and centering directions. We present efficiency estimates for several symmetric primal-dual methods that can loosely be classified as path-following methods. Because of the special properties of these cones and barriers, two of our algorithms can take steps that go typically a large fraction of the way to the boundary of the feasible region, rather than being confined to a ball of unit radius in the local norm defined by the Hessian of the barrier.
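The directions discussed above generalize the familiar primal-dual Newton step for linear programming, the simplest self-scaled case. As an illustrative sketch (mine, not the paper's machinery), one can assemble and solve the full Newton system for the LP central-path conditions Ax = b, Aᵀy + s = c, xᵢsᵢ = μ; production codes would instead reduce to normal equations and use sparse factorizations:

```python
import numpy as np

def pd_newton_step(A, b, c, x, y, s, mu):
    """One primal-dual Newton step toward the LP central-path point
    A x = b, A^T y + s = c, x_i s_i = mu (dense, illustrative only)."""
    m, n = A.shape
    rp = b - A @ x                 # primal residual
    rd = c - A.T @ y - s           # dual residual
    rc = mu * np.ones(n) - x * s   # centering residual
    # Assemble the full (2n + m)-dimensional system in (dx, dy, ds).
    K = np.zeros((2 * n + m, 2 * n + m))
    K[:m, :n] = A                        # A dx = rp
    K[m:m + n, n:n + m] = A.T            # A^T dy + ds = rd
    K[m:m + n, n + m:] = np.eye(n)
    K[m + n:, :n] = np.diag(s)           # S dx + X ds = rc
    K[m + n:, n + m:] = np.diag(x)
    d = np.linalg.solve(K, np.concatenate([rp, rd, rc]))
    return d[:n], d[n:n + m], d[n + m:]

# Tiny LP instance: min c^T x  s.t.  x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0]); c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5]); y = np.array([0.0]); s = c - A.T @ y
dx, dy, ds = pd_newton_step(A, b, c, x, y, s, mu=0.25)
```

By construction the step satisfies each linearized equation exactly, which is easy to verify numerically.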
LOQO: An interior point code for quadratic programming
, 1994
Cited by 195 (10 self)
This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that the industry standard MPS format can be nicely formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
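Quasidefiniteness matters because a symmetric quasidefinite matrix (negative definite upper-left block, positive definite lower-right block) admits an LDLᵀ factorization under any symmetric ordering without pivoting, a result due to Vanderbei. A toy NumPy sketch with a reduced-KKT-shaped example; the naive unpivoted factorization below is for illustration, not numerical practice:

```python
import numpy as np

def ldl_no_pivot(K):
    """Naive symmetric LDL^T factorization without pivoting.
    For a quasidefinite K, no zero pivot is ever encountered."""
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    M = K.astype(float).copy()
    for j in range(n):
        d[j] = M[j, j]
        L[j + 1:, j] = M[j + 1:, j] / d[j]
        # Rank-one update of the trailing submatrix.
        M[j + 1:, j + 1:] -= np.outer(L[j + 1:, j], L[j + 1:, j]) * d[j]
    return L, d

# Quasidefinite example shaped like a reduced KKT system: [[-E, A^T], [A, F]].
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
K = np.block([[-np.eye(3), A.T], [A, np.eye(2)]])
L, d = ldl_no_pivot(K)
```

The pivot signs reveal the inertia: exactly three negative pivots (the size of the negative definite block) and two positive ones.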
A unified approach to interior point algorithms for linear complementarity problems
 Lecture Notes in Computer Science 538, Springer-Verlag
, 1991
Primal-Dual Path-Following Algorithms for Semidefinite Programming
 SIAM Journal on Optimization
, 1996
Cited by 166 (12 self)
This paper deals with a class of primal-dual interior-point algorithms for semidefinite programming (SDP) which was recently introduced by Kojima, Shindoh and Hara [11]. These authors proposed a family of primal-dual search directions that generalizes the one used in algorithms for linear programming based on the scaling matrix X^{1/2} S^{-1/2}. They study three primal-dual algorithms based on this family of search directions: a short-step path-following method, a feasible potential-reduction method and an infeasible potential-reduction method. However, they were not able to provide an algorithm which generalizes the long-step path-following algorithm introduced by Kojima, Mizuno and Yoshise [10]. In this paper, we characterize two search directions within their family as being (unique) solutions of systems of linear equations in symmetric variables. Based on this characterization, we present: 1) a simplified polynomial convergence proof for one of their short-step path-following ...
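One reason scalings such as X^{1/2} S^{-1/2} appear in SDP algorithms is that the product XS of two symmetric positive definite matrices is generally not symmetric, whereas a symmetric scaling yields a symmetric matrix with the same eigenvalues. A small NumPy illustration of this point (dense matrices, eigendecomposition-based square root; a sketch, not the paper's construction):

```python
import numpy as np

def sqrtm_spd(X):
    """Symmetric square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(w)) @ V.T

# Two random SPD matrices playing the roles of X and S.
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)); X = B @ B.T + 3 * np.eye(3)
C = rng.standard_normal((3, 3)); S = C @ C.T + 3 * np.eye(3)

R = sqrtm_spd(X)
M = R @ S @ R   # X^{1/2} S X^{1/2}: symmetric, and similar to X S
```

M is symmetric while XS typically is not, yet both have identical (real, positive) eigenvalues, since M = X^{-1/2}(XS)X^{1/2} is a similarity transform of XS.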
A Primal-Dual Potential Reduction Method for Problems Involving Matrix Inequalities
, 1995
Cited by 102 (20 self)
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration.
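The truncated conjugate-gradient idea above can be sketched with the textbook CG iteration: stopping early still returns a usable approximate solution. This is only the plain method, without the preconditioning or structure exploitation the paper relies on:

```python
import numpy as np

def cg(A, b, tol=1e-8, maxiter=None):
    """Textbook conjugate-gradient iteration for SPD A.
    Truncating (small maxiter) yields an approximate solution."""
    n = b.size
    x = np.zeros(n)
    r = b.copy()        # residual b - A x
    p = r.copy()        # search direction
    rs = r @ r
    for _ in range(maxiter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Well-conditioned SPD test system.
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x = cg(A, b)
```

In exact arithmetic CG terminates in at most n iterations; here the residual is driven well below the tolerance.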
Primal-dual interior methods for nonconvex nonlinear programming
 SIAM Journal on Optimization
, 1998
Cited by 80 (8 self)
This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.
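The role of an inertia-controlling factorization is to reveal the inertia of the (possibly indefinite) primal-dual matrix and to supply directions of negative curvature. A toy check via eigenvalues (mine, for illustration; real codes use a symmetric indefinite factorization instead of an eigendecomposition):

```python
import numpy as np

def inertia(H, tol=1e-10):
    """Numbers of positive, negative, and (near-)zero eigenvalues of
    a symmetric matrix H -- the quantity an inertia-controlling
    factorization tracks."""
    w = np.linalg.eigvalsh(H)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

# An indefinite 2x2 Hessian: one positive, one negative eigenvalue.
H = np.array([[2.0, 0.0], [0.0, -1.0]])

# An eigenvector for the negative eigenvalue gives negative curvature:
d = np.array([0.0, 1.0])
curvature = d @ H @ d   # d^T H d < 0
```

When the inertia differs from what a minimizer requires, the negative-curvature direction `d` can be used by the merit-function line search described above.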
On implementing a primal-dual interior-point method for conic quadratic optimization
 Mathematical Programming Ser. B
, 2000
Cited by 75 (6 self)
Conic quadratic optimization is the problem of minimizing a linear function subject to the intersection of an affine set and the product of quadratic cones. The problem is a convex optimization problem and has numerous applications in engineering, economics, and other areas of science. Indeed, linear and convex quadratic optimization are special cases. Conic quadratic optimization problems can in theory be solved efficiently using interior-point methods. In particular, it has been shown by Nesterov and Todd that primal-dual interior-point methods developed for linear optimization can be generalized to the conic quadratic case while maintaining their efficiency. Therefore, based on the work of Nesterov and Todd, we discuss an implementation of a primal-dual interior-point method for the solution of large-scale sparse conic quadratic optimization problems. The main features of the implementation are that it is based on a homogeneous and self-dual model, handles the rotated quadratic cone directly, and employs a Mehrotra-type predictor-corrector ...
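The rotated quadratic cone that the implementation handles directly is, in one common convention, {x : 2·x₁x₂ ≥ ‖x₃,…,xₙ‖², x₁ ≥ 0, x₂ ≥ 0}. A small membership test under that assumed convention (function name is mine):

```python
import numpy as np

def in_rotated_cone(x, tol=1e-12):
    """Membership test for the rotated quadratic cone
    {x : 2 x[0] x[1] >= ||x[2:]||^2, x[0] >= 0, x[1] >= 0},
    with a small tolerance for boundary points."""
    return bool(x[0] >= -tol and x[1] >= -tol
                and 2.0 * x[0] * x[1] >= np.dot(x[2:], x[2:]) - tol)

inside = in_rotated_cone(np.array([1.0, 2.0, 1.0, 1.0]))   # 2*1*2 = 4 >= 2
outside = in_rotated_cone(np.array([1.0, 1.0, 2.0]))       # 2*1*1 = 2 <  4
```

Handling this cone natively avoids the linear transformation that would otherwise map it to the standard quadratic cone, which is one of the implementation points the abstract highlights.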