Results 1–10 of 95
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
2002
Cited by 582 (23 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss
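For orientation, the subproblem at the heart of such methods can be written schematically (generic SQP notation of ours, not SNOPT's specific formulation): at an iterate $x_k$, one solves

```latex
\min_{p} \; \nabla f(x_k)^{T} p + \tfrac{1}{2}\, p^{T} H_k\, p
\quad \text{s.t.} \quad c(x_k) + J(x_k)\, p \;\ge\; 0 ,
```

where $H_k$ approximates the Hessian of the Lagrangian and $J(x_k)$ is the (sparse) constraint Jacobian whose availability the abstract assumes.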
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
Computational Optimization and Applications, 2002
Cited by 119 (8 self)
In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb's strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to simultaneously have all of the following properties while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
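The ℓ1 penalty relaxation described above takes, schematically (our notation, not the paper's exact formulation), the form

```latex
\min_{x,\;\xi} \; f(x) + \rho \sum_{i} \xi_i
\quad \text{s.t.} \quad c_i(x) + \xi_i \ge 0, \qquad \xi_i \ge 0 ,
```

so every constraint can be violated at a price of $\rho\,\xi_i$, which both regularizes the problem and bounds the Lagrange multipliers by $\rho$.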
CUTEr (and SifDec), a constrained and unconstrained testing environment, revisited
ACM Transactions on Mathematical Software, 2001
Cited by 86 (8 self)
The initial release of CUTE, a widely used testing environment for optimization software, was described in [2]. The latest version, now known as CUTEr, is presented. New features include reorganisation of the environment to allow simultaneous multi-platform installation, new tools for, and interfaces to, optimization packages, and a considerably simplified and entirely automated installation procedure for UNIX systems. The SIF decoder, which used to be a part of CUTE, has become a separate tool, easily callable by various packages. It features simple extensions to the SIF test-problem format and the generation of files suited to automatic differentiation packages.
On Solving Mathematical Programs with Complementarity Constraints as Nonlinear Programs
2002
Cited by 43 (2 self)
We investigate the possibility of solving mathematical programs with complementarity constraints (MPCCs) using classical algorithms and procedures from nonlinear programming. Although MPCCs do not satisfy a constraint qualification, we establish sufficient conditions for their Lagrange multiplier set to be nonempty in two different formulations. MPCCs that have nonempty Lagrange multiplier sets and that satisfy the quadratic growth condition can be approached by the elastic mode with a bounded penalty parameter. This transforms the MPCC into a nonlinear program with additional variables that has an isolated stationary point and local minimum at the solution of the original problem, which in turn makes it approachable by a sequential quadratic programming algorithm. The robustness of the elastic mode when applied to MPCCs is demonstrated by several numerical examples. Complementarity constraints can be used to model numerous applications in economics and mechanics [18, 25]....
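The elastic mode referred to above can be sketched as follows (a schematic form in our notation): a complementarity constraint $0 \le G(x) \perp H(x) \ge 0$ is relaxed through an elastic variable $\zeta$ that is charged to the objective,

```latex
\min_{x,\;\zeta} \; f(x) + c\,\zeta
\quad \text{s.t.} \quad G(x) \ge 0,\;\; H(x) \ge 0,\;\;
G(x)^{T} H(x) \le \zeta,\;\; \zeta \ge 0 ,
```

which turns the MPCC into an ordinary nonlinear program to which standard SQP solvers can be applied.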
Optimality measures for performance profiles
Preprint ANL/MCS-P1155-0504, Mathematics and Computer Science Division, Argonne National Laboratory, 2004
Cited by 27 (0 self)
We examine the influence of optimality measures on the benchmarking process, and show that scaling requirements lead to a convergence test for nonlinearly constrained solvers that uses a mixture of absolute and relative error measures. We show that this convergence test is well behaved at any point where the constraints satisfy the Mangasarian-Fromovitz constraint qualification, and that it also avoids the explicit use of a complementarity measure. Our computational experiments explore the impact of this convergence test on the benchmarking process with performance profiles.
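The performance profiles mentioned above (in the sense of Dolan and Moré) report, for each solver, the fraction of problems it solves within a factor τ of the best solver. A minimal pure-Python sketch of that computation (function name and data layout are illustrative, not from the paper):

```python
def performance_profile(times, tau):
    """times: dict solver -> list of runtimes, one entry per problem
    (use float('inf') for failures). Returns, for each solver, the
    fraction of problems whose performance ratio
    t_{p,s} / min_s t_{p,s} is at most tau."""
    solvers = list(times)
    nprob = len(times[solvers[0]])
    # Best time achieved on each problem by any solver.
    best = [min(times[s][p] for s in solvers) for p in range(nprob)]
    profile = {}
    for s in solvers:
        hits = sum(1 for p in range(nprob)
                   if times[s][p] / best[p] <= tau)
        profile[s] = hits / nprob
    return profile
```

Sweeping τ from 1 upward traces out each solver's profile curve: at τ = 1 it gives the fraction of problems on which a solver is fastest, and for large τ it approaches the solver's overall success rate.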
A globally convergent linearly constrained Lagrangian method for nonlinear optimization
SIAM J. Optim., 2002
Cited by 25 (4 self)
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form “minimize an augmented Lagrangian function subject to linearized constraints.” Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
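The quoted subproblem can be written schematically (our notation, not the paper's exact statement): at an iterate $x_k$ with multiplier estimate $y_k$ and penalty $\rho$,

```latex
\min_{x} \; f(x) - y_k^{T} c(x) + \tfrac{\rho}{2}\, \| c(x) \|_2^2
\quad \text{s.t.} \quad c(x_k) + J(x_k)\,(x - x_k) = 0 ,
```

i.e., the augmented Lagrangian is minimized over the linearization of the constraints $c(x) = 0$ at $x_k$.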
Benchmarking optimization software with COPS 3.0
Mathematics and Computer Science Division, Argonne National Laboratory, 2004
A Data and Task Parallel Image Processing Environment
Parallel Computing, 2001
Cited by 19 (1 self)
The paper presents a data and task parallel environment for parallelizing low-level image processing applications on distributed memory systems. Image processing operators are parallelized by data decomposition using algorithmic skeletons. At the application level we use task decomposition, based on the Image Application Task Graph.
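The data-decomposition idea above typically amounts to splitting an image into contiguous row blocks, one per worker. A minimal sketch (function name is illustrative, not from the paper):

```python
def row_blocks(height, workers):
    """Return (start, stop) row ranges that split `height` image rows
    across `workers` processes as evenly as possible; the first
    `height % workers` blocks get one extra row."""
    base, extra = divmod(height, workers)
    blocks, start = [], 0
    for w in range(workers):
        stop = start + base + (1 if w < extra else 0)
        blocks.append((start, stop))
        start = stop
    return blocks
```

Each worker then applies a low-level operator (filter, threshold, etc.) to its own block, with halo exchange added for neighborhood operators.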
Degenerate Nonlinear Programming with a Quadratic Growth Condition
Preprint ANL/MCS-P761-0699, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
Cited by 19 (5 self)
We show that the quadratic growth condition and the Mangasarian-Fromovitz constraint qualification imply that local minima of nonlinear programs are isolated stationary points. As a result, when started sufficiently close to such points, an L1 exact penalty sequential quadratic programming algorithm will induce at least R-linear convergence of the iterates to such a local minimum. We construct an example of a degenerate nonlinear program with a unique local minimum satisfying the quadratic growth condition and the Mangasarian-Fromovitz constraint qualification but for which no positive semidefinite augmented Lagrangian exists. We present numerical results obtained using several nonlinear programming packages on this example, and discuss its implications for some algorithms. Recently, there has been renewed interest in analyzing and modifying sequential quadratic programming (SQP) algorithms for constrained nonlinear optimization for cases where the traditional regularity cond...
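In its standard form (our notation), the quadratic growth condition at a local minimizer $x^{*}$ states that the objective grows at least quadratically with the distance from $x^{*}$:

```latex
f(x) \;\ge\; f(x^{*}) + \sigma\, \| x - x^{*} \|^{2}
\quad \text{for some } \sigma > 0 \text{ and all feasible } x \text{ near } x^{*} .
```

This is weaker than a second-order sufficient condition with strict complementarity, which is what makes it useful for the degenerate problems studied above.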
CPR: Mixed Task and Data Parallel Scheduling for Distributed Systems
In Proceedings of the 15th International Parallel and Distributed Processing Symposium, 2001
Cited by 19 (6 self)
It is well-known that mixing task and data parallelism to solve large computational applications often yields better speedups compared to applying either pure task parallelism or pure data parallelism. Typically, the applications are modeled in terms of a dependence graph of coarse-grain data-parallel tasks, called a data-parallel task graph. In this paper we present a new compile-time heuristic, named Critical Path Reduction (CPR), for scheduling data-parallel task graphs. Experimental results based on graphs derived from real problems, as well as synthetic graphs, show that CPR achieves higher speedup compared to other well-known existing scheduling algorithms, at the expense of somewhat higher cost. These results are also confirmed by performance measurements of two real applications (complex matrix multiplication and Strassen matrix multiplication) running on a cluster of workstations.
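The quantity that critical-path reduction targets is the longest cost-weighted path through the task graph, since it lower-bounds the schedule length. A toy sketch of computing it for a DAG (names and the graph encoding are illustrative, not the paper's):

```python
from functools import lru_cache

def critical_path_length(costs, succs):
    """costs: dict task -> execution cost; succs: dict task -> list of
    successor tasks (tasks absent from succs have none). Returns the
    largest total cost along any path through the DAG."""
    @lru_cache(maxsize=None)
    def longest_from(t):
        # Cost of t plus the heaviest chain among its successors.
        return costs[t] + max((longest_from(s) for s in succs.get(t, [])),
                              default=0)
    return max(longest_from(t) for t in costs)
```

A scheduler like CPR tries to shrink this value, e.g. by allotting more processors to data-parallel tasks that lie on the critical path.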