Results 1 – 8 of 8
Global minimization using an Augmented Lagrangian method with variable lower-level constraints
, 2007
"... A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εkglobal minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global c ..."
Abstract

Cited by 39 (1 self)
 Add to MetaCart
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration k the method requires the εk-global minimization of the Augmented Lagrangian with simple constraints, where εk → ε. Global convergence to an ε-global minimizer of the original problem is proved. The subproblems are solved using the αBB method. Numerical experiments are presented.
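The outer loop described in this abstract is simple enough to sketch. Below is a minimal Python illustration for equality-constrained problems min f(x) s.t. h(x) = 0 over a box; the subproblem solver global_box_min is a hypothetical placeholder standing in for the αBB-based εk-global minimizer used in the paper, and the multiplier and penalty updates are the classical ones, not necessarily those of the paper.

    import numpy as np

    def augmented_lagrangian(f, h, global_box_min, x0, eps=1e-6,
                             rho=10.0, max_outer=50):
        """Sketch of the outer loop: min f(x) s.t. h(x) = 0, x in a box."""
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(h(x)))              # multiplier estimates
        eps_k = 1.0                            # subproblem tolerance, eps_k -> eps
        for k in range(max_outer):
            def L(y):                          # augmented Lagrangian at (lam, rho)
                hy = h(y)
                return f(y) + lam @ hy + 0.5 * rho * (hy @ hy)
            x = global_box_min(L, eps_k)       # assumed eps_k-global box solver (alphaBB in the paper)
            hx = h(x)
            if np.linalg.norm(hx) <= eps and eps_k <= eps:
                return x                       # approximately feasible, eps-global
            lam = lam + rho * hx               # first-order multiplier update
            rho *= 10.0                        # classical penalty increase
            eps_k = max(eps, 0.5 * eps_k)      # drive eps_k down toward eps
        return x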
Augmented Lagrangians with possible infeasibility and finite termination for global nonlinear programming
, 2012
"... In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In th ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above is improved in several crucial aspects. On the one hand, feasibility of the problem is not required: possible infeasibility is detected in finite time by the new algorithms, and optimal infeasibility results are proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision are provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity is also given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results relate to practical computations are given.
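The adaptive-tolerance idea lends itself to a one-function sketch. The formula below, which ties the next subproblem tolerance to current infeasibility and complementarity, is an illustrative assumption, not the paper's actual rule; h_x and g_x are the equality and inequality constraint values at the current point, and mu the inequality multipliers.

    import numpy as np

    def adaptive_tolerance(h_x, g_x, mu, eps, theta=0.1):
        """Illustrative rule (assumed form): solve the next subproblem only as
        accurately as current feasibility and complementarity warrant."""
        infeas = np.linalg.norm(h_x, np.inf)                  # |h_i(x)| violation
        viol = np.linalg.norm(np.maximum(g_x, 0.0), np.inf)   # g_i(x) <= 0 violation
        compl = np.linalg.norm(mu * g_x, np.inf)              # complementarity mu_i g_i(x)
        return max(eps, theta * max(infeas, viol, compl))     # never below final eps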
Global Nonlinear Programming with possible infeasibility and finite termination
, 2012
"... In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In th ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above is improved in several crucial aspects. On the one hand, feasibility of the problem is not required: possible infeasibility is detected in finite time by the new algorithms, and optimal infeasibility results are proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision are provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity is also given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results relate to practical computations are given.
Spectral Projected Gradient methods: Review and Perspectives
, 2012
"... Over the last two decades, it has been observed that using the gradient vector as a search direction in largescale optimization may lead to efficient algorithms. The effectiveness relies on choosing the step lengths according to novel ideas that are related to the spectrum of the underlying local H ..."
Abstract
 Add to MetaCart
Over the last two decades, it has been observed that using the gradient vector as a search direction in large-scale optimization may lead to efficient algorithms. The effectiveness relies on choosing the step lengths according to novel ideas that are related to the spectrum of the underlying local Hessian rather than to the standard decrease in the objective function. A review of these so-called spectral projected gradient methods for convex constrained optimization is presented. To illustrate the performance of these low-cost schemes, an optimization problem on the set of positive definite matrices is described.
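For concreteness, here is a minimal sketch of a spectral projected gradient iteration, assuming a projection operator proj onto the convex feasible set is available. The Barzilai-Borwein step and the nonmonotone backtracking below are simplified relative to the safeguarded versions covered in the review.

    import numpy as np

    def spg(f, grad, proj, x, tol=1e-6, max_iter=1000, M=10):
        """Sketch: min f(x) over a convex set C given its projection `proj`."""
        lam = 1.0                                  # spectral step length
        fvals = [f(x)]
        g = grad(x)
        for _ in range(max_iter):
            d = proj(x - lam * g) - x              # projected gradient direction
            if np.linalg.norm(d) <= tol:
                return x                           # stationarity measure is small
            alpha, fmax = 1.0, max(fvals[-M:])     # nonmonotone reference value
            while f(x + alpha * d) > fmax + 1e-4 * alpha * (g @ d):
                alpha *= 0.5                       # backtrack
            s = alpha * d
            x, g_old = x + s, g
            g = grad(x)
            y = g - g_old
            sty = s @ y
            lam = (s @ s) / sty if sty > 0 else 1.0  # Barzilai-Borwein step
            fvals.append(f(x))
        return x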
ACTIVE SET ALGORITHM FOR LARGE-SCALE CONTINUOUS KNAPSACK PROBLEMS WITH APPLICATION TO TOPOLOGY OPTIMIZATION PROBLEMS
, 2009
"... Abstract. The structure of many realworld optimization problems includes minimization of a nonlinear (or quadratic) functional subject to bound and singly linear constraints (in the form of either equality or bilateral inequality) which are commonly called as continuous knapsack problems. Since the ..."
Abstract
 Add to MetaCart
(Show Context)
The structure of many real-world optimization problems includes the minimization of a nonlinear (or quadratic) functional subject to bound and singly linear constraints (in the form of either an equality or a bilateral inequality), which are commonly called continuous knapsack problems. Since there are efficient methods to solve large-scale bound-constrained nonlinear programs, it is desirable to adapt these methods to solve knapsack problems while preserving their efficiency and convergence theories. The goal of this paper is to introduce a general framework to extend a box-constrained optimization solver to solve knapsack problems. This framework includes two main ingredients, both O(n) methods in terms of computational cost and required memory: the projection onto the knapsack constraints and the null-space manipulation of the related linear constraint. The main focus of this work is the extension of the Hager-Zhang active set algorithm (SIAM J. Optim. 2006, pp. 526–557). The main reasons for this choice were its promising efficiency in practice and its excellent convergence theory (e.g., a superlinear local convergence rate without the strict complementarity assumption). Moreover, this method uses no explicit second-order information and solves no linear systems in the course of optimization, which makes it an ideal choice for large-scale problems. The application of the Birgin and Martínez active set algorithm (Comput. Optim. Appl. 2002, pp. 101–125) to knapsack problems is also briefly discussed. The viability of the presented algorithm is supported by numerical results for topology optimization problems.
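The projection mentioned in this abstract reduces to one-dimensional root finding in the multiplier of the linear constraint. The bisection version below is a simplified stand-in for the paper's O(n) methods (each evaluation of phi is O(n), but bisection adds a logarithmic factor); feasibility of the knapsack set is assumed.

    import numpy as np

    def knapsack_projection(z, a, b, l, u, tol=1e-10):
        """Project z onto {x : l <= x <= u, a^T x = b} via bisection on the
        multiplier t; phi(t) = a^T clip(z - t*a, l, u) - b is monotone in t."""
        def phi(t):
            return a @ np.clip(z - t * a, l, u) - b
        lo, hi = -1.0, 1.0
        while phi(lo) < 0.0:                       # bracket the root from below
            lo *= 2.0
        while phi(hi) > 0.0:                       # bracket the root from above
            hi *= 2.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if phi(mid) > 0.0:
                lo = mid                           # phi is nonincreasing in t
            else:
                hi = mid
        return np.clip(z - 0.5 * (lo + hi) * a, l, u)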
Optimality properties of an Augmented Lagrangian method on infeasible problems
, 2014
"... Sometimes, the feasible set of an optimization problem that one aims to solve using a Nonlinear Programming algorithm is empty. In this case, two characteristics of the algorithm are desirable. On the one hand, the algorithm should converge to a minimizer of some infeasibility measure. On the other ..."
Abstract
 Add to MetaCart
Sometimes, the feasible set of an optimization problem that one aims to solve using a Nonlinear Programming algorithm is empty. In this case, two characteristics of the algorithm are desirable. On the one hand, the algorithm should converge to a minimizer of some infeasibility measure. On the other hand, one may wish to find a point with minimal infeasibility for which some optimality condition, with respect to the objective function, holds. Ideally, the algorithm should converge to a minimizer of the objective function subject to minimal infeasibility. In this paper the behavior of an Augmented Lagrangian algorithm with respect to those properties will be studied.
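For concreteness, the infeasibility measure minimized in such situations is usually the squared constraint violation; the particular form below is a standard choice stated as an assumption, since the abstract does not specify it.

    import numpy as np

    def infeasibility(h_x, g_x):
        """Standard measure 0.5*(||h(x)||^2 + ||max(g(x),0)||^2) for
        constraints h(x) = 0 and g(x) <= 0 (assumed form)."""
        return 0.5 * (np.sum(h_x ** 2) + np.sum(np.maximum(g_x, 0.0) ** 2))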
On efficiency of nonmonotone Armijo-type line searches
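No abstract is indexed for this entry. For context, the sketch below shows the max-type nonmonotone Armijo test of Grippo, Lampariello and Lucidi, the classical variant that such studies typically start from; parameter names and defaults are illustrative assumptions, not taken from the paper.

    def nonmonotone_armijo(f, x, d, g_dot_d, f_hist, M=10,
                           sigma=1e-4, beta=0.5, alpha0=1.0):
        """Backtrack until f(x + alpha*d) <= max(last M f-values)
        + sigma*alpha*g'd; d must be a descent direction (g_dot_d < 0)."""
        f_ref = max(f_hist[-M:])                   # nonmonotone reference value
        alpha = alpha0
        while f(x + alpha * d) > f_ref + sigma * alpha * g_dot_d:
            alpha *= beta                          # reduce the step
        return alpha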
On spectral properties of steepest descent methods
, 2012
Advance Access publication on May 6, 2013