Results 1–9 of 9
On sequential optimality conditions for smooth constrained optimization, 2009
Cited by 11 (3 self)
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Approximate KKT and Approximate Gradient Projection conditions are analyzed in this work. These conditions are not necessarily equivalent. Implications between different conditions and counterexamples will be shown. Algorithmic consequences will be discussed.
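The abstract above motivates approximate KKT conditions as practical stopping tests. As a minimal illustrative sketch (not the paper's formulation), for a problem min f(x) s.t. g(x) ≤ 0 a solver can stop when the stationarity, feasibility, and complementarity residuals all fall below a tolerance; the function names and tolerance below are invented for illustration:

```python
import numpy as np

def kkt_residuals(grad_f, grad_g, g_vals, lam):
    """Residuals of the KKT system for min f(x) s.t. g(x) <= 0.

    grad_f: gradient of f at x, shape (n,)
    grad_g: Jacobian of g at x, shape (m, n)
    g_vals: constraint values g(x), shape (m,)
    lam:    multiplier estimates, shape (m,)
    """
    # Stationarity: || grad f(x) + sum_i lam_i grad g_i(x) ||_inf
    stationarity = np.max(np.abs(grad_f + grad_g.T @ lam))
    # Feasibility: largest constraint violation, zero if feasible
    feasibility = max(0.0, float(np.max(g_vals)))
    # Complementarity: min(lam_i, -g_i(x)) should vanish at a KKT point
    complementarity = np.max(np.abs(np.minimum(lam, -g_vals)))
    return stationarity, feasibility, complementarity

def approx_kkt_stop(grad_f, grad_g, g_vals, lam, tol=1e-6):
    """Approximate-KKT-style stopping test: all residuals below tol."""
    return all(r <= tol for r in kkt_residuals(grad_f, grad_g, g_vals, lam))

# Demo (hypothetical data): min x^2 s.t. 1 - x <= 0 has solution x* = 1
# with multiplier lam* = 2; there grad f = 2, grad g = -1, g(x*) = 0.
ok = approx_kkt_stop(grad_f=np.array([2.0]),
                     grad_g=np.array([[-1.0]]),
                     g_vals=np.array([0.0]),
                     lam=np.array([2.0]))
```

At the exact solution all three residuals vanish, so the test accepts; at iterates near the solution the residuals are small but nonzero, which is the "sequential" character of these conditions.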
Inexact Restoration method for Derivative-Free Optimization with smooth constraints, 2011
Cited by 3 (0 self)
A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only the constraints, in order to improve feasibility. In the second phase one minimizes a suitable objective function subject to a linear approximation of the constraints. The second phase must be solved using derivative-free methods. An algorithm introduced recently by Kolda, Lewis, and Torczon for linearly constrained derivative-free optimization is employed for this purpose. Under usual assumptions, convergence to stationary points is proved. A computer implementation is described and numerical experiments are presented.
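The restoration/minimization split described above can be illustrated on a toy problem. This is only a sketch of the two-phase iteration pattern, not the paper's algorithm (which is derivative-free; this toy uses gradients, and the problem data, projection, and step size are all invented): minimize f(x) = x1 + x2 on the unit circle, restoring feasibility by radial projection and then stepping along the tangent of the constraint.

```python
import numpy as np

# Toy problem (hypothetical): min f(x) = x[0] + x[1]  s.t.  ||x||^2 = 1.
# Phase 1 exploits the constraint's structure (radial projection restores
# feasibility without evaluating f); Phase 2 descends along the tangent
# space of the constraint at the restored point.

def inexact_restoration_toy(x0, alpha=0.2, iters=200):
    x = np.asarray(x0, dtype=float)
    grad_f = np.array([1.0, 1.0])            # gradient of f is constant here
    for _ in range(iters):
        # Phase 1 (restoration): recover feasibility, f is not evaluated.
        y = x / np.linalg.norm(x)
        # Phase 2 (minimization): projected steepest-descent step along
        # the tangent line of the constraint at y.
        tangent = np.array([-y[1], y[0]])
        x = y - alpha * (grad_f @ tangent) * tangent
    return x / np.linalg.norm(x)

x_star = inexact_restoration_toy([1.0, 0.3])
# x_star approaches (-1/sqrt(2), -1/sqrt(2)), the minimizer on the circle
```

Each iterate is first pulled back toward the feasible set, then improved tangentially; in the paper's setting Phase 2 would instead call a derivative-free linearly constrained solver.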
Constrained Derivative-Free Optimization on Thin Domains, 2011
Cited by 1 (1 self)
Many derivative-free methods for constrained problems are not efficient for minimizing functions on “thin” domains. Other algorithms, like those based on Augmented Lagrangians, deal with thin constraints using penalty-like strategies. When the constraints are computationally inexpensive but highly nonlinear, these methods spend many potentially expensive objective function evaluations on the difficulties of improving feasibility. An algorithm that handles this case efficiently is proposed in this paper. The main iteration is split into two steps: restoration and minimization. In the restoration step the aim is to decrease infeasibility without evaluating the objective function. In the minimization step the objective function f is minimized on a relaxed feasible set. A global minimization result is proved and computational experiments showing the advantages of this approach are presented.
Global optimization of robust chance constrained problems, 2007
We propose a stochastic algorithm for the global optimization of chance constrained problems. We assume that the probability measure with which the constraints are evaluated is known only through its moments. The algorithm proceeds in two phases. In the first phase the probability distribution is (coarsely) discretized and the resulting problem is solved to global optimality using a stochastic algorithm. We only assume that the stochastic algorithm exhibits weak* convergence to a probability measure assigning all its mass to the discretized problem. A diffusion process is derived that has this convergence property. In the second phase, the discretization is improved by solving another nonlinear programming problem. It is shown that the algorithm converges to the solution of the original problem. We discuss the numerical performance of the algorithm and its application to process design.
SOME COMPOSITE-STEP CONSTRAINED OPTIMIZATION METHODS INTERPRETED VIA THE PERTURBED SEQUENTIAL QUADRATIC PROGRAMMING FRAMEWORK, 2013
We consider the inexact restoration and the composite-step sequential quadratic programming (SQP) methods, and relate them to the so-called perturbed SQP framework. In particular, iterations of the methods in question are interpreted as certain structured perturbations of the basic SQP iterations. This gives new insight into the local behaviour of those algorithms, as well as improved or different local convergence and rate-of-convergence results. Key words: sequential quadratic programming; inexact restoration; perturbed SQP; composite-step SQP; superlinear convergence.
Inexact Restoration method for minimization problems arising in electronic structure calculations, 2010
An inexact restoration (IR) approach is presented to solve a matrix optimization problem arising in electronic structure calculations. The solution of the problem is the closed-shell density matrix, and the constraints are represented by a Grassmann manifold. One of the mathematical and computational challenges in this area is to develop methods that solve the problem without eigenvalue calculations while preserving the sparsity of iterates and gradients. The inexact restoration approach enjoys local quadratic convergence and global convergence to stationary points and does not use spectral matrix decompositions, so that, in principle, large-scale implementations may preserve sparsity. Numerical experiments show that IR algorithms are competitive with current algorithms for solving closed-shell Hartree-Fock equations and similar mathematical problems, and are thus a promising alternative for problems where eigenvalue calculations are a limiting factor.
Assessing the reliability of general-purpose Inexact Restoration methods, 2014
Inexact Restoration methods have proved effective for solving constrained optimization problems in which some structure of the feasible set induces a natural way of recovering feasibility from arbitrary infeasible points. Sometimes natural ways of dealing with minimization over tangent approximations of the feasible set are also employed. A recent paper [N. Banihashemi and C. Y. Kaya, Inexact Restoration for Euler discretization of box-constrained optimal control problems, Journal of Optimization Theory and Applications 156, pp. 726–760, 2013] suggests that the Inexact Restoration approach can be competitive with well-established nonlinear programming solvers when applied to certain control problems without any problem-oriented procedure for restoring feasibility. This result motivated us to revisit the idea of designing general-purpose Inexact Restoration methods, especially for large-scale problems. In this paper we introduce an affordable algorithm of Inexact Restoration type for solving arbitrary nonlinear programming problems, and we perform the first experiments aimed at assessing its reliability.
A new double trust regions SQP method without a penalty function or a filter
Xiaojing Zhu and Dingguo Pu
Euler Discretization and Inexact Restoration for Optimal Control
A computational technique for unconstrained optimal control problems is presented. First an Euler discretization is carried out to obtain a finite-dimensional approximation of the continuous-time (infinite-dimensional) problem. Then an inexact restoration (IR) method due to Birgin and Martínez is applied to the discretized problem to find an approximate solution. Convergence of the technique to a solution of the continuous-time problem is facilitated by the convergence of the IR method and the convergence of the discrete (approximate) solution as finer subdivisions are taken. It is shown that a special case of the IR method is equivalent to the projected Newton method for equality-constrained quadratic optimization problems. The technique is numerically demonstrated by means of a scalar system and the van der Pol system, and comprehensive comparisons are made with the Newton and projected Newton methods.
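The equivalence mentioned above rests on a standard fact: an equality-constrained quadratic program is solved exactly by one Newton step on its KKT system. A minimal sketch with made-up data (the function name and problem below are invented for illustration, not taken from the paper):

```python
import numpy as np

def solve_eq_qp(Q, A, c, b):
    """Solve min (1/2) x'Qx + c'x  s.t.  Ax = b via one KKT solve.

    Stationarity Qx + c + A'lam = 0 and feasibility Ax = b give the
    linear system [[Q, A'], [A, 0]] [x; lam] = [-c; b].
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # primal solution x, multipliers lam

# Hypothetical example: min x1^2 + 2*x2^2  s.t.  x1 + x2 = 1,
# whose solution is x* = (2/3, 1/3) with multiplier lam* = -4/3.
Q = np.diag([2.0, 4.0])
A = np.array([[1.0, 1.0]])
c = np.zeros(2)
b = np.array([1.0])
x, lam = solve_eq_qp(Q, A, c, b)
```

Because one such solve lands on the exact minimizer from any starting point, Newton-type methods for these subproblems terminate in a single step, which is the sense in which the IR special case and the projected Newton method coincide.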