Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming
, 1999
"... . A new InexactRestoration method for Nonlinear Programming is introduced. The iteration of the main algorithm has two phases. In Phase 1, feasibility is explicitly improved and in Phase 2 optimality is improved on a tangent approximation of the constraints. Trust regions are used for reducing the ..."
Abstract

Cited by 27 (5 self)
A new Inexact-Restoration method for Nonlinear Programming is introduced. The iteration of the main algorithm has two phases. In Phase 1, feasibility is explicitly improved, and in Phase 2, optimality is improved on a tangent approximation of the constraints. Trust regions are used for reducing the step when the trial point is not good enough. The trust region is not centered at the current point, as in many Nonlinear Programming algorithms, but at the intermediate "more feasible" point. Therefore, in this semifeasible approach, the more feasible intermediate point is considered to be essentially better than the current point. This is the first method in which intermediate-point-centered trust regions are combined with the decrease of the Lagrangian in the tangent approximation to the constraints. The merit function used in this paper is also new: it consists of a convex combination of the Lagrangian and the (non-squared) norm of the constraints. The Euclidean norm is used for simplicity, but other norms for measuring infeasibility are admissible. Global convergence theorems are proved, a theoretically justified algorithm for the first phase is introduced, and some numerical insight is given. Key Words: Nonlinear Programming, trust regions, GRG methods, SGRA methods, restoration methods, global convergence.
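As a rough illustration only, the merit function described in the abstract can be sketched in a few lines; the toy objective, the equality constraint, and the weight `theta` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def merit(x, lam, theta, f, h):
    """Convex combination of the Lagrangian and the (non-squared)
    Euclidean norm of the constraints, as described in the abstract:
        Phi(x, lam) = theta * L(x, lam) + (1 - theta) * ||h(x)||.
    """
    lagrangian = f(x) + lam @ h(x)
    return theta * lagrangian + (1.0 - theta) * np.linalg.norm(h(x))

# Invented toy problem: minimize x0^2 + x1^2 s.t. x0 + x1 - 1 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])
x = np.array([0.5, 0.5])      # feasible point: infeasibility term vanishes
lam = np.array([-1.0])
print(merit(x, lam, 0.5, f, h))  # 0.5 * L(x, lam) = 0.5 * 0.5 = 0.25
```

At a feasible point the merit value reduces to the weighted Lagrangian, which is the property that makes the combination usable for comparing the intermediate "more feasible" point with trial points.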
Nonlinear-programming reformulation of the Order-value optimization problem
 Mathematical Methods of Operations Research 61
, 2005
"... Ordervalue optimization (OVO) is a generalization of the minimax problem motivated by decisionmaking problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium c ..."
Abstract

Cited by 9 (6 self)
Order-value optimization (OVO) is a generalization of the minimax problem motivated by decision-making problems under uncertainty and by robust estimation. New optimality conditions for this nonsmooth optimization problem are derived. An equivalent mathematical programming problem with equilibrium constraints is deduced. The relation between OVO and this nonlinear-programming reformulation is studied. Particular attention is given to the relation between local minimizers and stationary points of both problems.
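The order-value function at the heart of OVO is easy to state concretely. The sketch below (with made-up residual functions, not from the paper) evaluates the p-th smallest of m function values; p = m recovers the minimax objective that OVO generalizes, while p < m discards the largest residuals, as in robust estimation.

```python
def order_value(x, fs, p):
    """p-th smallest of the values f_1(x), ..., f_m(x) (1-based p).
    The OVO problem minimizes this nonsmooth function over x."""
    values = sorted(f(x) for f in fs)
    return values[p - 1]

# Invented residuals; the third one plays the role of an outlier.
fs = [lambda x: (x - 1.0) ** 2,
      lambda x: (x + 1.0) ** 2,
      lambda x: x ** 2 + 10.0]
# With p = 2 the largest residual is ignored at each x.
print(order_value(0.0, fs, 2))  # values are [1.0, 1.0, 10.0] -> 1.0
```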
Inexact Restoration methods for nonlinear programming: advances and perspectives
, 2004
"... Inexact Restoration methods have been introduced in the last few years for solving nonlinear programming problems. These methods are related to classical restoration algorithms but also have some remarkable dierences. They generate a sequence of generally infeasible iterates with intermediate it ..."
Abstract

Cited by 8 (2 self)
Inexact Restoration methods have been introduced in the last few years for solving nonlinear programming problems. These methods are related to classical restoration algorithms but also have some remarkable differences. They generate a sequence of generally infeasible iterates with intermediate iterations that consist of inexactly restored points. The convergence theory allows one to use arbitrary algorithms for performing the restoration. This feature is appealing because it allows one to use the structure of the problem in quite opportunistic ways. Different Inexact Restoration algorithms are available. The most recent ones use the trust-region approach. However, unlike the algorithms based on sequential quadratic programming, the trust regions are centered not at the current point but at the inexactly restored intermediate one. Global convergence has been proved, based on merit functions of augmented Lagrangian type. In this survey we point out some applications and we relate recent advances in the theory.
A new sequential optimality condition for constrained optimization and algorithmic consequences
, 2009
"... ..."
Local Convergence of an Inexact-Restoration Method and Numerical Experiments
, 2007
"... Local convergence of an inexactrestoration method for nonlinear programming is proved. Numerical experiments are performed with the objective of evaluating the behavior of the purely local method against a globally convergent nonlinearprogramming algorithm. ..."
Abstract

Cited by 8 (1 self)
Local convergence of an inexact-restoration method for nonlinear programming is proved. Numerical experiments are performed to evaluate the behavior of the purely local method against a globally convergent nonlinear-programming algorithm.
Partial Spectral Projected Gradient Method with Active-Set Strategy for Linearly Constrained Optimization
, 2009
"... A method for linearly constrained optimization which modifies and generalizes recent boxconstraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted t ..."
Abstract

Cited by 8 (0 self)
A method for linearly constrained optimization which modifies and generalizes recent box-constrained optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are presented and discussed. Software supporting this paper is available through the Tango project.
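For context, a minimal spectral projected gradient step on a box (the building block that the paper relaxes and generalizes) might look as follows. This is a bare-bones sketch: the nonmonotone line search of practical SPG implementations is omitted, and the quadratic test problem is an invented example.

```python
import numpy as np

def spg_box(grad, x, lower, upper, max_iter=100, tol=1e-8):
    """Sketch of spectral projected gradient iterations on the box
    lower <= x <= upper. Illustrates only the projected step with the
    Barzilai-Borwein (spectral) steplength, not the partial/active-set
    strategy of the paper."""
    g = grad(x)
    lam = 1.0                                         # initial steplength
    for _ in range(max_iter):
        x_new = np.clip(x - lam * g, lower, upper)    # projected step
        s = x_new - x
        if np.linalg.norm(s) < tol:
            break
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        lam = (s @ s) / sy if sy > 0 else 1.0         # spectral update
        x, g = x_new, g_new
    return x

# Minimize ||x - c||^2 over the box [0, 1]^2 with c outside the box;
# the solution is the projection of c onto the box.
c = np.array([2.0, -0.5])
x = spg_box(lambda x: 2.0 * (x - c), np.zeros(2), 0.0, 1.0)
print(x)  # [1. 0.]
```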
A Practical Optimality Condition Without Constraint Qualifications for Nonlinear Programming
, 2001
"... A new optimality condition for minimization with general constraints is introduced. Unlike the KKT conditions, this condition is satisfied by local minimizers of nonlinear programming problems, independently of constraint qualifications. The new condition implies, and is strictly stronger than, ..."
Abstract

Cited by 7 (2 self)
A new optimality condition for minimization with general constraints is introduced. Unlike the KKT conditions, this condition is satisfied by local minimizers of nonlinear programming problems, independently of constraint qualifications. The new condition implies, and is strictly stronger than, the Fritz-John optimality conditions. Sufficiency for convex programming is proved.
Two new weak constraint qualifications and applications
"... We present two new constraint qualifications (CQ) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependen ..."
Abstract

Cited by 7 (0 self)
We present two new constraint qualifications (CQ) that are weaker than the recently introduced Relaxed Constant Positive Linear Dependence (RCPLD) constraint qualification. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new constraint qualification, which we call the Constant Rank of the Subspace Component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, like local stability and the validity of an error bound. We also introduce an even weaker CQ, called Constant Positive Generator (CPG), that can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: SQP, augmented Lagrangians, interior point algorithms, and inexact restoration.
Optimization Problems in the Estimation of Parameters of Thin Films and the Elimination of the Influence of the Substrate
, 2001
"... In a recent paper, the authors introduced a method to estimate optical parameters of thin lms using transmission data. The associated model assumes that the lm is deposited on a completely transparent substrate. It has been observed, however, that small absorption of substrates affect in a nonneglig ..."
Abstract

Cited by 5 (2 self)
In a recent paper, the authors introduced a method to estimate optical parameters of thin films using transmission data. The associated model assumes that the film is deposited on a completely transparent substrate. It has been observed, however, that small absorption of substrates affects the transmitted energy in a non-negligible way. The question arises of how reliable the estimation method is for retrieving optical parameters in the presence of substrates of different thicknesses and absorption degrees. In this paper, transmission spectra of thin films deposited on non-transparent substrates are generated and, as a first approximation, the method based on transparent substrates is used to estimate the optical parameters. As expected, the method is good when the absorption of the substrate is very small, but fails when one deals with less transparent substrates. To overcome this drawback, an iterative procedure is introduced that allows one to approximate the transmittance with a transparent substrate, given the transmittance with an absorbent substrate. The updated method turns out to be almost as efficient in the case of absorbent substrates as it was in the case of transparent ones.
Inexact Restoration method for Derivative-Free Optimization with smooth constraints
, 2011
"... A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided in two phase ..."
Abstract

Cited by 3 (0 self)
A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only the constraints, in order to improve feasibility. In the second phase one minimizes a suitable objective function subject to a linear approximation of the constraints. The second phase must be solved using derivative-free methods. An algorithm introduced recently by Kolda, Lewis, and Torczon for linearly constrained derivative-free optimization is employed for this purpose. Under usual assumptions, convergence to stationary points is proved. A computer implementation is described and numerical experiments are presented.
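As a hedged sketch of the derivative-free second phase only: the compass search below is a simple stand-in for the Kolda-Lewis-Torczon generating-set method named in the abstract, and the test function, start point, and single tangent direction are invented for illustration.

```python
import numpy as np

def coordinate_search(f, x, directions, step=0.5, tol=1e-6):
    """Minimal compass (coordinate) search: poll f along +/- each
    direction, accept strict improvements, halve the step on stalls.
    Uses only function values, never derivatives of f."""
    while step > tol:
        best, fbest = x, f(x)
        for d in directions:
            for s in (step, -step):
                trial = x + s * d
                if f(trial) < fbest:
                    best, fbest = trial, f(trial)
        if np.array_equal(best, x):
            step *= 0.5        # no improvement: refine the mesh
        else:
            x = best
    return x

# Minimize (x0-1)^2 + (x1-2)^2 without derivatives, moving only along
# the null-space direction d of the linearized constraint x0 + x1 = 3,
# so every iterate stays on the linear approximation of the constraint.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
d = np.array([1.0, -1.0]) / np.sqrt(2.0)
x = coordinate_search(f, np.array([0.0, 3.0]), [d])
print(np.round(x, 3))  # approaches the constrained minimizer [1. 2.]
```

Restricting the poll directions to a basis of the null space of the constraint Jacobian is what confines this derivative-free search to the tangent set used in the second phase.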