Results 1–10 of 80
Augmented Lagrangian methods under the Constant Positive Linear Dependence constraint qualification
"... ..."
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
2007
"... A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global c ..."
Cited by 39 (1 self)

Abstract:
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
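To make the scheme concrete, the outer loop described above can be sketched as follows. This is a minimal illustration for equality constraints h(x) = 0, assuming a hypothetical global_solve(phi, x, tol) helper that stands in for the αBB-type εk-global subproblem solver over the simple constraints; it is not the authors' implementation.

    import numpy as np

    def al_global_sketch(f, h, x0, lam0, global_solve,
                         rho=10.0, eps=1e-6, max_outer=50):
        # Augmented Lagrangian outer loop whose subproblems are solved to a
        # decreasing global-optimality tolerance eps_k -> eps (hedged sketch).
        x, lam = np.asarray(x0, float), np.asarray(lam0, float)
        eps_k = 1.0
        for k in range(max_outer):
            def L_rho(z, lam=lam, rho=rho):
                hz = h(z)
                return f(z) + lam @ hz + 0.5 * rho * hz @ hz
            x = global_solve(L_rho, x, tol=eps_k)   # eps_k-global minimization
            lam = lam + rho * h(x)                  # first-order multiplier update
            if np.linalg.norm(h(x)) > 1e-4:         # insufficient feasibility
                rho *= 10.0                         # progress: increase penalty
            eps_k = max(eps, 0.5 * eps_k)           # tighten tolerance: eps_k -> eps
        return x, lam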
Minimizing the object dimensions in circle and sphere packing problems
2006
"... Given a fixed set of identical or differentsized circular items, the problem we deal withconsists on finding the smallest object within which the items can be packed. Circular, triangular, squared, rectangular and also strip objects are considered. Moreover, 2D and3D problems are treated. Twiced ..."
Cited by 16 (1 self)

Abstract:
Given a fixed set of identical or different-sized circular items, the problem we deal with consists in finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular, and strip objects are considered. Moreover, 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach.
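For the circular-container case, a model of the twice-differentiable kind the abstract describes can be sketched as follows (one standard formulation, not necessarily the paper's exact one), with c_i the center and r_i the radius of item i, and R the container radius:

    \begin{aligned}
    \min_{R,\,c_1,\dots,c_n}\quad & R \\
    \text{s.t.}\quad & \|c_i - c_j\|^2 \ge (r_i + r_j)^2, \quad 1 \le i < j \le n \quad \text{(no overlap)} \\
    & \|c_i\|^2 \le (R - r_i)^2, \quad i = 1,\dots,n \quad \text{(containment)}
    \end{aligned}

All constraint functions are polynomial in the variables, hence twice continuously differentiable; the quadratic number of pairwise constraints is what motivates the complexity-reduction strategy mentioned above.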
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
2012
"... We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the secondorder sufficient optimality condition. In particular, no constraint qualifications of any kind ..."
Cited by 15 (5 self)

Abstract:
We establish local convergence and the rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove a primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all concerned the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules for controlling the penalty parameters ensures their boundedness.
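For reference, the classical algorithm analyzed here is, for equality constraints h(x) = 0 (the paper also treats inequalities and inexact subproblem solves):

    L_\rho(x,\lambda) = f(x) + \lambda^{\top} h(x) + \tfrac{\rho}{2}\,\|h(x)\|^2,
    \qquad
    x^{k+1} \approx \operatorname*{arg\,min}_x \, L_{\rho_k}(x,\lambda^k),
    \qquad
    \lambda^{k+1} = \lambda^k + \rho_k\, h(x^{k+1}).

The Q-linear rate refers to the primal-dual sequence (x^k, λ^k) for fixed, sufficiently large ρ_k; letting ρ_k → ∞ yields the superlinear rate.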
Improving ultimate convergence of an Augmented Lagrangian method
2007
"... Optimization methods that employ the classical PowellHestenesRockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of InteriorPoint Newtonian algorithms, which are asymptoticall ..."
Cited by 14 (0 self)

Abstract:
Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its “pure” counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page.
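The second hybrid admits a compact description: once the Augmented Lagrangian phase has produced a primal-dual pair (and, implicitly, identified the relevant KKT system), fast local convergence is sought via Newton steps on that system. A minimal sketch for equality constraints, with hypothetical callback names; not the paper's implementation:

    import numpy as np

    def kkt_newton_step(grad_f, h, jac_h, hess_lag, x, lam):
        # One Newton step on the KKT system of  min f(x) s.t. h(x) = 0:
        #   [ H   J^T ] [ dx   ]     [ grad f(x) + J^T lam ]
        #   [ J   0   ] [ dlam ] = - [ h(x)                ]
        # where H is the Hessian of the Lagrangian and J = h'(x).
        H, J = hess_lag(x, lam), jac_h(x)
        n, m = H.shape[0], J.shape[0]
        K = np.block([[H, J.T], [J, np.zeros((m, m))]])
        rhs = -np.concatenate([grad_f(x) + J.T @ lam, h(x)])
        step = np.linalg.solve(K, rhs)
        return x + step[:n], lam + step[n:]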
Structured minimal-memory inexact quasi-Newton method and secant preconditioners for Augmented Lagrangian Optimization
2006
"... Augmented Lagrangian methods for largescale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, activeset boxconstraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may ..."
Cited by 13 (2 self)

Abstract:
Augmented Lagrangian methods for large-scale optimization usually require efficient algorithms for minimization with box constraints. On the other hand, active-set box-constraint methods employ unconstrained optimization algorithms for minimization inside the faces of the box. Several approaches may be employed for computing internal search directions in the large-scale case. In this paper a minimal-memory quasi-Newton approach with secant preconditioners is proposed, taking into account the structure of Augmented Lagrangians that come from the popular Powell-Hestenes-Rockafellar scheme. A combined algorithm, which uses the quasi-Newton formula or a truncated-Newton procedure depending on the presence of active constraints in the penalty-Lagrangian function, is also suggested. Numerical experiments using the CUTE collection are presented.
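The structure referred to in the title is visible in the equality-constrained PHR case. With L_ρ(x, λ) = f(x) + λᵀh(x) + (ρ/2)‖h(x)‖², the Hessian of the Augmented Lagrangian splits as

    \nabla^2 L_\rho(x,\lambda)
    = \nabla^2 f(x)
    + \sum_i \big(\lambda_i + \rho\, h_i(x)\big)\, \nabla^2 h_i(x)
    + \rho\, J(x)^{\top} J(x),
    \qquad J(x) = h'(x),

a Lagrangian term plus a scaled Gauss-Newton term; structured secant formulas and preconditioners can approximate these terms separately. This is a standard identity, stated here for orientation, not a description of the paper's specific update.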
Exploring local modifications for constrained meshes
Computer Graphics Forum (Proceedings of Eurographics 2013), 2013
"... Figure 1: Local modifications of a constrained mesh. In this example a glass structure composed of planar quads is locally deformed by exploring a subspace encoding local planar modifications of its central zone. Mesh editing under constraints is a challenging task with numerous applications in geom ..."
Cited by 12 (2 self)

Abstract:
[Figure 1: Local modifications of a constrained mesh. In this example a glass structure composed of planar quads is locally deformed by exploring a subspace encoding local planar modifications of its central zone.]
Mesh editing under constraints is a challenging task with numerous applications in geometric modeling, industrial design, and architectural form finding. Recent methods support constraint-based exploration of meshes with fixed connectivity, but commonly lack local control. Because constraints are often globally coupled, a local modification by the user can have global effects on the surface, making iterative design exploration and refinement difficult. Simply fixing a local region of interest a priori is problematic, as it is not clear in advance which parts of the mesh need to be modified to obtain an aesthetically pleasing solution that satisfies all constraints. We propose a novel framework for exploring local modifications of constrained meshes. Our solution consists of three steps. First, a user specifies target positions for one or more vertices. Our algorithm computes a sparse set of displacement vectors that satisfies the constraints and yields a smooth deformation. Then we build a linear subspace to allow real-time exploration of local variations that satisfy the constraints approximately. Finally, after interactive exploration, the result is optimized to fully satisfy the set of constraints. We evaluate our framework on meshes where each face is constrained to be planar.
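To illustrate the kind of constraint involved in the planar-quad evaluation, one common residual is the volume of the tetrahedron spanned by a quad's four vertices, which vanishes exactly when they are coplanar. A minimal sketch of this standard formulation, not necessarily the paper's:

    import numpy as np

    def quad_planarity_residual(v1, v2, v3, v4):
        # Scalar triple product (v2-v1) . ((v3-v1) x (v4-v1)):
        # zero exactly when the four vertices are coplanar.
        return np.dot(v2 - v1, np.cross(v3 - v1, v4 - v1))

    # A planar unit quad has residual 0.
    q = [np.array(p, float) for p in [(0,0,0), (1,0,0), (1,1,0), (0,1,0)]]
    print(quad_planarity_residual(*q))   # -> 0.0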
On sequential optimality conditions for smooth constrained optimization
2009
"... Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between differen ..."
Cited by 10 (3 self)

Abstract:
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent; implications between the different conditions are established and counterexamples are given. Algorithmic consequences are discussed.
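For orientation, the Approximate-KKT (AKKT) condition, in its usual form for min f(x) s.t. h(x) = 0, g(x) ≤ 0, requires sequences x^k → x* and multipliers (λ^k, μ^k) with μ^k ≥ 0 such that

    \Big\| \nabla f(x^k) + \sum_i \lambda_i^k\, \nabla h_i(x^k) + \sum_j \mu_j^k\, \nabla g_j(x^k) \Big\| \to 0,
    \qquad
    \mu_j^k = 0 \ \text{ whenever } \ g_j(x^*) < 0.

Every local minimizer satisfies AKKT with no constraint qualification at all, which is what makes such conditions suitable as stopping criteria. This is the standard statement; see the paper for the exact variants compared.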
Low Order-Value Optimization and Applications
2005
"... Given r real functions F1(x),..., Fr(x) and an integer p between 1 and r, the Low OrderValue Optimization problem (LOVO) consists of minimizing the sum of the functions that take the p smaller values. If (y1,..., yr) is a vector of data and T (x, ti) is the predicted value of the observation i with ..."
Cited by 9 (5 self)

Abstract:
Given r real functions F1(x), ..., Fr(x) and an integer p between 1 and r, the Low Order-Value Optimization (LOVO) problem consists of minimizing the sum of the p smallest of these function values. If (y1, ..., yr) is a vector of data and T(x, ti) is the predicted value of observation i under the parameters x ∈ ℝ^n, it is natural to define Fi(x) = (T(x, ti) − yi)² (the quadratic error at observation i under the parameters x). When p = r this LOVO problem coincides with the classical nonlinear least-squares problem. However, the interesting situation is when p is smaller than r. In that case, the solution of LOVO allows one to discard the influence of an estimated number of outliers. Thus, the LOVO problem is an interesting tool for robust estimation of parameters of nonlinear models. When p ≪ r the LOVO problem may be used to find hidden structures in data sets. One of the most successful applications is the Protein Alignment problem. Fully documented algorithms for this application are available at www.ime.unicamp.br/~martinez/lovoalign. In this paper optimality conditions are discussed, algorithms for solving the LOVO problem are introduced, and convergence theorems are proved. Finally, numerical experiments are presented.
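The LOVO objective is simple to state in code. A minimal sketch of the least-squares instance above, with illustrative names and data:

    import numpy as np

    def lovo_objective(F_values, p):
        # Sum of the p smallest of the values F1(x), ..., Fr(x).
        return np.sort(F_values)[:p].sum()

    # Least-squares instance: model T(x, t) = x[0]*t + x[1], data (t_i, y_i).
    def lovo_least_squares(x, t, y, p):
        residuals_sq = (x[0] * t + x[1] - y) ** 2   # Fi(x) per observation
        return lovo_objective(residuals_sq, p)

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.0, 3.0, 5.0, 7.0, 50.0])        # last point is an outlier
    # With p = 4 < r = 5 the outlier's error is discarded:
    print(lovo_least_squares(np.array([2.0, 1.0]), t, y, p=4))   # -> 0.0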