Results 1-10 of 84
On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming
, 2004
"... We present a primaldual interiorpoint algorithm with a filter linesearch method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration ph ..."
Abstract

Cited by 284 (6 self)
We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
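The filter acceptance test at the heart of such a method can be sketched in a few lines of Python. This is a hypothetical simplification for illustration only: IPOPT's actual rule adds switching and sufficient-decrease conditions on top of this margin-based dominance check, and the parameter `gamma` here is an assumed name.

```python
# Minimal sketch of a filter acceptance test and filter augmentation
# (illustrative simplification, not IPOPT's actual rule).

def acceptable(theta_trial, f_trial, filter_entries, gamma=1e-5):
    """A trial point with constraint violation theta_trial and objective
    f_trial is acceptable if no filter entry dominates it by the margin."""
    for theta_k, f_k in filter_entries:
        if theta_trial >= (1 - gamma) * theta_k and f_trial >= f_k - gamma * theta_k:
            return False  # dominated by (theta_k, f_k)
    return True

def augment(filter_entries, theta, f):
    """Add (theta, f) to the filter, dropping entries it dominates."""
    kept = [(t, v) for (t, v) in filter_entries if not (theta <= t and f <= v)]
    kept.append((theta, f))
    return kept

flt = [(1.0, 10.0), (0.5, 12.0)]
print(acceptable(0.3, 11.0, flt))  # True: smaller violation than every entry
print(acceptable(1.2, 11.0, flt))  # False: dominated by (1.0, 10.0)
```

A trial point thus only needs to improve either feasibility or the objective relative to every stored pair, which is what lets filter methods avoid a penalty parameter.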
Benchmarking derivative-free optimization algorithms
"... We propose data profiles as a tool for analyzing the performance of derivativefree optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performa ..."
Abstract

Cited by 71 (6 self)
We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested, even for noisy and piecewise-smooth problems.
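The convergence test and a single data-profile value can be sketched as follows, using the test f(x) <= f_L + tau * (f(x0) - f_L), where f_L is the best value found by any solver; the function and variable names here are illustrative, not taken from any released code.

```python
# Sketch of the convergence test and one data-profile value.

def solved_after(history, f0, fL, tau):
    """1-based index of the first value in `history` passing the
    convergence test f <= fL + tau*(f0 - fL), or None if never."""
    target = fL + tau * (f0 - fL)
    for k, fval in enumerate(history, start=1):
        if fval <= target:
            return k
    return None

def data_profile_point(eval_counts, dims, alpha):
    """Fraction of problems solved within alpha * (n_p + 1) function
    evaluations; unsolved problems are marked with None in eval_counts."""
    solved = sum(1 for t, n in zip(eval_counts, dims)
                 if t is not None and t <= alpha * (n + 1))
    return solved / len(eval_counts)

hist = [10.0, 5.0, 2.0, 1.5]
print(solved_after(hist, f0=10.0, fL=1.0, tau=0.1))   # -> 4 (target is 1.9)
print(data_profile_point([4, None, 2], [3, 3, 1], alpha=1.0))  # 2 of 3 solved
```

Scaling the budget by n_p + 1, the cost of a simplex gradient estimate, is what lets data profiles compare problems of different dimensions on one axis.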
NLEVP: A Collection of Nonlinear Eigenvalue Problems
, 2010
"... We present a collection of 46 nonlinear eigenvalue problems in the form of a MATLAB toolbox. The collection contains problems from models of reallife applications as well as ones constructed specifically to have particular properties. A classification is given of polynomial eigenvalue problems acco ..."
Abstract

Cited by 49 (12 self)
We present a collection of 46 nonlinear eigenvalue problems in the form of a MATLAB toolbox. The collection contains problems from models of real-life applications as well as ones constructed specifically to have particular properties. A classification is given of polynomial eigenvalue problems according to their structural properties. Identifiers based on these and other properties can be used to extract particular types of problems from the collection. A brief description of each problem is given. NLEVP serves both to illustrate the tremendous variety of applications of nonlinear eigenvalue problems and to provide representative problems for testing, tuning, and benchmarking of algorithms and codes.
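Polynomial eigenvalue problems like those catalogued in NLEVP are commonly solved by linearization. The sketch below shows the standard first companion form for a quadratic problem; this is generic NumPy/SciPy code, not part of the MATLAB toolbox itself.

```python
# Sketch: solving a quadratic eigenvalue problem (lam^2*M + lam*C + K)x = 0
# by companion linearization to a generalized eigenproblem A z = lam B z.
import numpy as np
from scipy.linalg import eig

def quadeig(M, C, K):
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])   # first companion form
    B = np.block([[I, Z], [Z, M]])
    evals, evecs = eig(A, B)
    return evals, evecs[:n, :]          # top block holds the eigenvectors

# Scalar check: lam^2 - 3*lam + 2 = 0 has roots 1 and 2.
M, C, K = np.array([[1.0]]), np.array([[-3.0]]), np.array([[2.0]])
lam, _ = quadeig(M, C, K)
print(np.sort(lam.real))  # -> [1. 2.]
```

The choice of linearization matters in practice: structure-preserving linearizations exist for the symmetric, palindromic, and other classes the collection identifies.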
A Primal-Dual Interior-Point Method for Nonlinear Programming with Strong Global and Local Convergence Properties
 SIAM Journal on Optimization
, 2002
"... An exactpenaltyfunctionbased schemeinspired from an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 6780)is proposed for extending to general smooth constrained optimization problems any given feasible interiorpoint method for inequality constrained problems. It is s ..."
Abstract

Cited by 37 (5 self)
An exact-penalty-function-based scheme, inspired by an old idea due to Mayne and Polak (Math. Prog., vol. 11, 1976, pp. 67-80), is proposed for extending to general smooth constrained optimization problems any given feasible interior-point method for inequality-constrained problems. It is shown that the primal-dual interior-point framework allows for a simpler penalty parameter update rule than that discussed and analyzed by the originators of the scheme in the context of first-order methods of feasible directions. Strong global and local convergence results are proved under mild assumptions. In particular, (i) the proposed algorithm does not suffer a common pitfall recently pointed out by Wächter and Biegler; and (ii) the positive definiteness assumption on the Hessian estimate, made in the original version of the algorithm, is relaxed, allowing for the use of exact Hessian information, resulting in local quadratic convergence. Promising numerical results are reported.
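The Mayne-Polak idea the scheme builds on can be illustrated with a toy example (this is an assumed form for illustration, not the paper's algorithm): an equality constraint h(x) = 0 is relaxed to h(x) <= 0 and the objective is penalized by -c*h(x), so that for a sufficiently large penalty parameter c the relaxed minimizer satisfies the original equality.

```python
# Toy illustration of the exact-penalty relaxation (names illustrative).
from scipy.optimize import minimize_scalar

f = lambda x: x**2           # objective
h = lambda x: x - 1.0        # equality constraint h(x) = 0
c = 4.0                      # penalty parameter, assumed large enough
phi = lambda x: f(x) - c * h(x)

# Minimize the penalty over the relaxed feasible set {x : h(x) <= 0}.
res = minimize_scalar(phi, bounds=(-10.0, 1.0), method='bounded')
print(round(res.x, 3))  # -> 1.0, so h(res.x) ~ 0 as expected
```

Since h <= 0 on the relaxed set, the term -c*h(x) is nonnegative and pushes the minimizer onto h = 0, which is why the relaxed problem can be handed to any feasible interior-point method for inequality constraints.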
MA57: A New Code for the Solution of Sparse Symmetric Definite and Indefinite Systems
, 2002
"... We introduce a new code for the direct solution of sparse symmetric linear equations that solves indefinite systems with 2 × 2 pivoting for stability. This code, called MA57, is in HSL 2002 and supersedes the well used HSL code MA27. We describe the user interface in some detail and emphasize some o ..."
Abstract

Cited by 22 (1 self)
We introduce a new code for the direct solution of sparse symmetric linear equations that solves indefinite systems with 2 × 2 pivoting for stability. This code, called MA57, is in HSL 2002 and supersedes the well-used HSL code MA27. We describe the user interface in some detail and emphasize some of the novel features of MA57. These include restart facilities, matrix modification, partial solution for matrix factors, solution of multiple right-hand sides, and iterative refinement and error analysis. There are additional facilities within a Fortran 90 implementation that include the ability to identify and change pivots. Several of these facilities have been developed particularly to support optimization applications and the performance of the code on problems arising therefrom will be presented.
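MA57 itself is an HSL Fortran code, but the dense analogue of the symmetric-indefinite factorization it computes can be sketched with SciPy's LDL^T routine, including the 2 × 2 pivoting the abstract mentions:

```python
# Dense sketch of what a symmetric-indefinite direct solver like MA57
# computes: an LDL^T factorization with 1x1/2x2 pivoting, followed by
# triangular and block-diagonal solves. This uses SciPy, not MA57/HSL.
import numpy as np
from scipy.linalg import ldl, solve_triangular

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])     # indefinite; a 2x2 pivot is required
b = np.array([2.0, 3.0])

L, D, perm = ldl(A)            # A = L @ D @ L.T, with L[perm] lower triangular
Lp = L[perm]
y = solve_triangular(Lp, b[perm], lower=True)
z = np.linalg.solve(D, y)      # D is block diagonal (1x1 and 2x2 blocks)
x = np.empty_like(b)
x[perm] = solve_triangular(Lp.T, z, lower=False)
print(x)  # solves A @ x = b -> [3. 2.]
```

The zero diagonal entry of this matrix makes plain 1 × 1 pivoting break down, which is exactly the situation 2 × 2 pivots handle stably.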
ORBIT: Optimization by radial basis function interpolation in trust-regions
 SIAM Journal on Scientific Computing
, 2008
"... Abstract. We present a new derivativefree algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trustregion framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear ..."
Abstract

Cited by 20 (4 self)
Abstract. We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating radial basis function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by ensuring that a subset of the interpolation points is sufficiently poised for linear interpolation. The RBF property of conditional positive definiteness yields a natural method for adding additional points. We present numerical results on test problems to motivate the use of ORBIT when only a relatively small number of expensive function evaluations are available. Results on two very different application problems, calibration of a watershed model and optimization of a PDE-based bioremediation plan, are also very encouraging and support ORBIT's effectiveness on black-box functions for which no special mathematical structure is known or available.
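The interpolation model at the core of such a method can be sketched generically: a cubic RBF plus a linear polynomial tail, fitted by solving the standard augmented linear system. This is the textbook RBF construction under those assumptions, not ORBIT's code.

```python
# Sketch of a cubic RBF interpolation model with a linear tail:
#   s(x) = sum_i lam_i * ||x - x_i||^3 + c0 + c^T x.
import numpy as np

def rbf_fit(X, f):
    """Solve the augmented system [[Phi, P], [P.T, 0]] [lam; c] = [f; 0]."""
    m, n = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2) ** 3
    P = np.hstack([np.ones((m, 1)), X])           # constant + linear basis
    A = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    rhs = np.concatenate([f, np.zeros(n + 1)])
    sol = np.linalg.solve(A, rhs)
    return sol[:m], sol[m:]

def rbf_eval(x, X, lam, c):
    r = np.linalg.norm(X - x, axis=1)
    return lam @ r**3 + c[0] + c[1:] @ x

# Interpolation check on a simple 1-D data set.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
f = np.array([1.0, 0.0, 1.0, 4.0])
lam, c = rbf_fit(X, f)
print(round(rbf_eval(np.array([1.0]), X, lam, c), 6))  # -> 0.0 (interpolates)
```

The side conditions P.T @ lam = 0 in the augmented system are what conditional positive definiteness requires for the cubic kernel, and they guarantee the system is nonsingular once the points are poised for linear interpolation.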
Approximate factorization constraint preconditioners for saddle-point matrices
 SIAM J. Sci. Comput
"... Abstract. We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Res ..."
Abstract

Cited by 20 (2 self)
Abstract. We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
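The eigenvalue clustering that makes constraint preconditioners attractive can be checked numerically: for K = [[H, A^T], [A, 0]] preconditioned by P = [[G, A^T], [A, 0]] with the same constraint blocks, at least 2m eigenvalues of P^{-1}K equal 1, where m is the number of constraints. The demonstration below is a generic NumPy sketch, not code from the paper.

```python
# Numerical check of the unit eigenvalues of a constraint-preconditioned
# saddle-point matrix (generic demonstration with random data).
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
H = rng.standard_normal((n, n)); H = H @ H.T + np.eye(n)   # SPD Hessian block
G = np.diag(np.diag(H))                                     # cheap approximation of H
A = rng.standard_normal((m, n))                             # full-rank constraint block

def saddle(Hblk):
    return np.block([[Hblk, A.T], [A, np.zeros((m, m))]])

K, P = saddle(H), saddle(G)
evals = np.linalg.eigvals(np.linalg.solve(P, K))
ones = np.sum(np.abs(evals - 1.0) < 1e-5)   # loose tol: the eigenvalue is defective
print(ones >= 2 * m)  # -> True
```

The remaining n - m eigenvalues depend on how well G approximates H on the constraint null space, which is what the paper's approximate factorizations aim to control cheaply.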
Implementing generating set search methods for linearly constrained minimization
 Department of Computer Science, College of William and Mary
, 2005
"... Abstract. We discuss an implementation of a derivativefree generating set search method for linearly constrained minimization with no assumption of nondegeneracy placed on the constraints. The convergence guarantees for generating set search methods require that the set of search directions possess ..."
Abstract

Cited by 17 (5 self)
Abstract. We discuss an implementation of a derivative-free generating set search method for linearly constrained minimization with no assumption of nondegeneracy placed on the constraints. The convergence guarantees for generating set search methods require that the set of search directions possesses certain geometrical properties that allow it to approximate the feasible region near the current iterate. In the hard case, the calculation of the search directions corresponds to finding the extreme rays of a cone with a degenerate vertex at the origin, a difficult problem. We discuss here how state-of-the-art computational geometry methods make it tractable to solve this problem in connection with generating set search. We also discuss a number of other practical issues of implementation, such as the careful treatment of equality constraints and the desirability of augmenting the set of search directions beyond the theoretically minimal set. We illustrate the behavior of the implementation on several problems from the CUTEr test suite. We have found it to be successful on problems with several hundred variables and linear constraints.
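A toy generating-set-search iteration illustrates why the choice of directions matters near linear constraints: with only the coordinate directions the search stalls on a constraint face, so the sketch below also hard-codes the directions tangent to the face. This is a simplification; the paper's contribution is computing such conforming directions automatically for general, possibly degenerate, constraint geometry.

```python
# Toy generating-set-search step for the feasible set {x : A x <= b}.
import numpy as np

def gss_step(f, x, step, dirs, A, b):
    """Try each direction at the current step length; contract on failure."""
    fx = f(x)
    for d in dirs:
        trial = x + step * d
        if np.all(A @ trial <= b + 1e-12) and f(trial) < fx:
            return trial, step           # successful poll step
    return x, step / 2.0                 # no improvement: halve the step

# Minimize ||x - (2, 2)||^2 subject to x1 + x2 <= 2.
f = lambda x: np.sum((x - 2.0) ** 2)
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
# Coordinate directions alone stall on the constraint face, so the set
# also includes the directions tangent to x1 + x2 = 2.
dirs = np.array([[1, 0], [0, 1], [-1, 0], [0, -1], [1, -1], [-1, 1]], float)
x, step = np.zeros(2), 1.0
for _ in range(100):
    x, step = gss_step(f, x, step, dirs, A, b)
print(x)  # -> [1. 1.], the constrained minimizer
```

Dropping the last two directions leaves the iterate stuck at (2, 0), a vivid small-scale version of the geometry problem the abstract describes.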
Iterative methods for finding a trust-region step
, 2007
"... Abstract. We consider the problem of finding an approximate minimizer of a general quadratic function subject to a twonorm constraint. The SteihaugToint method minimizes the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constr ..."
Abstract

Cited by 14 (3 self)
Abstract. We consider the problem of finding an approximate minimizer of a general quadratic function subject to a two-norm constraint. The Steihaug-Toint method minimizes the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the constraint boundary. The benefit of this approach is that an approximate solution may be obtained with minimal work and storage. However, the method does not allow the accuracy of a constrained solution to be specified. We propose an extension of the Steihaug-Toint method that allows a solution to be calculated to any prescribed accuracy. If the Steihaug-Toint point lies on the boundary, the constrained problem is solved on a sequence of evolving low-dimensional subspaces. Each subspace includes an accelerator direction obtained from a regularized Newton method applied to the constrained problem. A crucial property of this direction is that it can be computed by applying the conjugate-gradient method to a positive-definite system in both the primal and dual variables of the constrained problem. The method includes a parameter that allows the user to take advantage of the trade-off between the overall number of function evaluations and matrix-vector products associated with the underlying trust-region method. At one extreme, a low-accuracy solution is obtained that is comparable to the Steihaug-Toint point. At the other extreme, a high-accuracy solution can be specified that minimizes the overall number of function evaluations at the expense of more matrix-vector products. Key words. Large-scale unconstrained optimization, trust-region methods, conjugate-gradient method, Lanczos tridiagonalization process AMS subject classifications. 49J20, 49J15, 49M37, 49D37, 65F05, 65K05, 90C30
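The Steihaug-Toint method the abstract extends is the textbook truncated-CG algorithm for the trust-region subproblem, sketched here in a simplified form (tolerances and names are illustrative):

```python
# Textbook Steihaug-Toint truncated CG for min g^T p + 0.5 p^T H p, ||p|| <= delta.
import numpy as np

def steihaug(H, g, delta, tol=1e-10, max_iter=100):
    p = np.zeros(g.size)
    r, d = -g.copy(), -g.copy()
    for _ in range(max_iter):
        Hd = H @ d
        curv = d @ Hd
        if curv <= 0:                        # negative curvature: run to boundary
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / curv
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:  # step crosses boundary: stop there
            return _to_boundary(p, d, delta)
        r_next = r - alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next                    # interior (Newton-like) solution
        beta = (r_next @ r_next) / (r @ r)
        p, r = p_next, r_next
        d = r + beta * d
    return p

def _to_boundary(p, d, delta):
    """Positive tau with ||p + tau*d|| = delta."""
    a, b, c = d @ d, 2 * p @ d, p @ p - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p + tau * d

H = np.diag([1.0, 2.0]); g = np.array([-1.0, -4.0])
print(np.round(steihaug(H, g, delta=10.0), 6))  # -> [1. 2.], the interior Newton step
```

The extension proposed in the paper picks up exactly where this sketch stops: once `_to_boundary` is hit, the constrained solution is refined on evolving subspaces instead of being accepted as-is.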