Results 1-10 of 18
A Second Derivative SQP Method: Local Convergence and Practical Issues
 SIAM Journal on Optimization
Abstract

Cited by 17 (6 self)
results for a second-derivative SQP method for minimizing the exact ℓ1 merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1 penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
Key words. Nonlinear programming, nonlinear inequality constraints, sequential quadratic programming, ℓ1 penalty function, nonsmooth optimization
AMS subject classifications. 49J52, 49M37, 65F22, 65K05, 90C26, 90C30, 90C55
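The exactness property that drives the penalty-parameter updating strategy described above can be illustrated on a tiny problem. The problem, the closed-form solver, and all names below are our own illustrative choices, not the paper's algorithm:

```python
import numpy as np

def l1_merit(f, c, x, nu):
    """Exact l1 merit function: phi(x; nu) = f(x) + nu * ||c(x)||_1."""
    return f(x) + nu * np.sum(np.abs(c(x)))

# Toy equality-constrained problem (hypothetical, not from the paper):
#   minimize 0.5 * ||x||^2   subject to   x1 + x2 = 1.
f = lambda x: 0.5 * float(x @ x)
c = lambda x: np.array([x[0] + x[1] - 1.0])

def argmin_phi(nu):
    """Closed-form minimizer of phi(.; nu) for this toy problem.  By symmetry
    x1 = x2 = s/2 with s minimizing s**2/4 + nu*|s - 1|, so s = 2*nu while
    nu < 1/2 and s = 1 (exact feasibility) once nu >= 1/2 -- the exactness
    threshold that penalty-parameter updating strategies aim to exceed."""
    s = 2.0 * nu if nu < 0.5 else 1.0
    return np.array([s / 2.0, s / 2.0])

# Sweeping nu upward mimics minimizing phi over an increasing sequence of
# penalty parameters: the constraint violation shrinks, then hits zero.
violations = [abs(c(argmin_phi(nu))[0]) for nu in (0.1, 0.3, 0.5, 1.0)]
```

Once the penalty parameter passes the finite threshold (here 1/2), the minimizer of the merit function is exactly feasible, which is why a finite sequence of increases suffices.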
A SECOND DERIVATIVE SQP METHOD: THEORETICAL ISSUES
, 2008
Abstract

Cited by 2 (2 self)
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative SQP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
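One standard device for obtaining a convex QP subproblem from an indefinite exact Hessian is to floor its eigenvalue spectrum. This is only a textbook stand-in for the abstract's construction (the paper's own mechanism differs in detail), and the data below are hypothetical:

```python
import numpy as np

def convexify(H, delta=1e-6):
    """Replace a symmetric indefinite Hessian H with a nearby positive-definite
    B by flooring its eigenvalues at delta.  The unique global minimizer of the
    resulting convex QP  min 0.5*d'Bd + g'd  is then cheap to compute."""
    w, V = np.linalg.eigh(H)
    return (V * np.maximum(w, delta)) @ V.T

# Indefinite toy Hessian and gradient (hypothetical data).
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])

B = convexify(H, delta=1.0)
d = np.linalg.solve(B, -g)   # global minimizer of the convexified QP
# g @ d < 0: the convex subproblem still yields a descent direction.
```

The point of such a modification is exactly the one the abstract makes: the convex subproblem can be solved efficiently and globally, at the cost of using an approximation to the true Hessian.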
A parametric level-set approach to simultaneous object identification and background reconstruction for dual-energy computed tomography
 IEEE Transactions on Image Processing
, 2012
Abstract

Cited by 2 (1 self)
Dual-energy computerized tomography has gained great interest because of its ability to characterize the chemical composition of a material rather than simply providing relative attenuation images as in conventional tomography. The purpose of this paper is to introduce a novel polychromatic dual-energy processing algorithm with an emphasis on detection and characterization of piecewise constant objects embedded in an unknown, cluttered background. Physical properties of the objects, specifically the Compton scattering and photoelectric absorption coefficients, are assumed to be known with some level of uncertainty. Our approach is based on a level-set representation of the characteristic function of the object and encompasses a number of regularization techniques for addressing both the prior information we have concerning the physical properties of the object as well as fundamental, physics-based limitations associated with our ability to jointly recover the Compton scattering and photoelectric absorption properties of the scene. In the absence of an object with appropriate physical properties, our approach returns a null characteristic function and thus can be viewed as simultaneously solving the detection and characterization problems. Unlike the vast majority of methods, which define the level set function nonparametrically (i.e., as a dense set of pixel values), we define our level set parametrically via radial basis functions (RBFs) and employ a Gauss-Newton type algorithm for cost minimization. Numerical results show that the algorithm successfully detects objects of interest, finds their shape and location, and gives an adequate reconstruction of the background.
Index Terms: Computed tomography, dual-energy, polychromatic spectrum, parametric level set, inverse problems, iterative reconstruction
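The parametric level-set idea can be sketched with Gaussian RBFs. The basis family, the threshold constant, and all names below are illustrative assumptions, not the paper's exact parametrization:

```python
import numpy as np

def rbf_level_set(x, centers, widths, alphas, shift=0.1):
    """Parametric level-set function built from Gaussian radial basis functions:
        phi(x) = sum_j alphas[j] * exp(-||x - centers[j]||^2 / widths[j]^2) - shift.
    The object's characteristic function is the set {phi > 0}, so the unknowns
    are a handful of RBF parameters rather than a dense grid of pixel values."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / widths**2) @ alphas - shift

# One RBF bump at the origin: nearby points are "inside" the object,
# far-away points are "outside" (i.e., background only).
centers = np.array([[0.0, 0.0]])
widths = np.array([1.0])
alphas = np.array([1.0])
points = np.array([[0.0, 0.0], [3.0, 0.0]])
inside = rbf_level_set(points, centers, widths, alphas) > 0
```

With all weights driven to zero, phi is negative everywhere and the characteristic function is null, which matches the abstract's description of how the method reports "no object detected".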
Multilevel algorithms for large-scale interior point methods in bound constrained optimization
, 2006
Abstract

Cited by 2 (0 self)
We develop and compare multilevel algorithms for solving bound constrained nonlinear variational problems via interior point methods. Several equivalent formulations of the linear systems arising at each iteration of the interior point method are compared from the point of view of conditioning and iterative solution. Furthermore, we show how a multilevel continuation strategy can be used to obtain good initial guesses (“hot starts”) for each nonlinear iteration. A minimal surface problem is used to illustrate the various approaches.
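A one-dimensional sketch of the interior-point continuation with warm ("hot") starts, omitting the multilevel hierarchy that is the paper's main contribution; the model problem and all names are illustrative:

```python
import math

def barrier_newton(x, mu, iters=50):
    """Newton's method on the 1-D log-barrier subproblem
        minimize (x + 1)^2 / 2 - mu * log(x),   x > 0,
    a toy model of a bound-constrained problem whose solution x* = 0 lies on
    the bound.  (The real setting is a discretized variational problem.)"""
    for _ in range(iters):
        g = (x + 1.0) - mu / x       # gradient of the barrier function
        h = 1.0 + mu / x**2          # second derivative (positive for x > 0)
        x = max(x - g / h, 0.5 * x)  # Newton step, kept strictly feasible
    return x

# Continuation in mu: each barrier subproblem is hot started from the
# previous solution, mirroring the paper's use of good initial guesses.
x = 1.0
for mu in (1.0, 0.1, 0.01, 0.001):
    x = barrier_newton(x, mu)
# x tracks the central path toward the true solution x* = 0.
```

Each subproblem here has the closed-form solution x = (sqrt(1 + 4*mu) - 1)/2, so the quality of the hot start can be checked directly; in the multilevel setting the coarse-grid solve plays the role of the previous mu.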
A SECOND DERIVATIVE SQP METHOD WITH IMPOSED DESCENT
, 2008
Abstract

Cited by 2 (0 self)
Sequential quadratic programming (SQP) methods form a class of highly efficient algorithms for solving nonlinearly constrained optimization problems. Although second derivative information may often be calculated, there is little practical theory that justifies exact-Hessian SQP methods. In particular, the resulting quadratic programming (QP) subproblems are often nonconvex, and thus finding their global solutions may be computationally nonviable. This paper presents a second-derivative Sℓ1QP method based on quadratic subproblems that are either convex, and thus may be solved efficiently, or need not be solved globally. Additionally, an explicit descent constraint is imposed on certain QP subproblems, which “guides” the iterates through areas in which nonconvexity is a concern. Global convergence of the resulting algorithm is established.
A SECOND-DERIVATIVE TRUST-REGION SQP METHOD WITH A “TRUST-REGION-FREE” PREDICTOR STEP
, 2009
Acknowledgements
Abstract
mathematical suggestions. I would also like to acknowledge the support of the Centre of Algebra at the University of Lisbon, and of
A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization
, 2011
Abstract
Penalty and interior-point methods for nonlinear optimization problems have enjoyed great successes for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization due to their Newton-like qualities, both in terms of their scalability and convergence behavior. Each of these two strategies, however, has certain disadvantages that make their use either impractical or inefficient for certain classes of problems. The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques but does not suffer from their disadvantages. Numerous attempts have been made along these lines in recent years, each with varying degrees of success. The novel feature of the algorithm in this paper is that our focus is not only on the formulation of the penalty-interior-point subproblem itself, but on the design of updates for the penalty and interior-point parameters. The updates we propose are designed so that rapid convergence to a solution of the nonlinear optimization problem or an infeasible stationary point is attained. We motivate the convergence properties of our algorithm and illustrate its practical performance on a large set of problems, including sets of problems that exhibit degeneracy or are infeasible.
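A classical penalty-barrier composite conveys the flavor of coupling the two parameter updates. The subproblem and schedule below are textbook stand-ins, not the algorithm proposed in the paper, and the model problem is hypothetical:

```python
import numpy as np

def penalty_barrier_solve(x, rho, mu, iters=100):
    """Newton iterations on the penalty-barrier composite for the toy problem
        minimize (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 = 1,  x2 >= 0:
        phi(x) = (x1-2)^2 + (x2-2)^2
                 + (x1 + x2 - 1)^2 / (2*rho)   # quadratic penalty (equality)
                 - mu * log(x2)                # log barrier (bound)
    The paper's actual merit function and update rules are richer."""
    x = x.copy()
    for _ in range(iters):
        v = x[0] + x[1] - 1.0
        g = np.array([2.0 * (x[0] - 2.0) + v / rho,
                      2.0 * (x[1] - 2.0) + v / rho - mu / x[1]])
        H = np.array([[2.0 + 1.0 / rho, 1.0 / rho],
                      [1.0 / rho, 2.0 + 1.0 / rho + mu / x[1] ** 2]])
        step = np.linalg.solve(H, g)
        t = 1.0
        while x[1] - t * step[1] <= 0.0:  # keep the barrier term defined
            t *= 0.5
        x -= t * step
    return x

# Drive the penalty and barrier parameters to zero jointly, warm starting
# each subproblem; coordinating these two updates is the paper's focus.
x = np.array([2.0, 2.0])
for rho, mu in [(1.0, 1.0), (1e-1, 1e-1), (1e-2, 1e-2), (1e-4, 1e-4)]:
    x = penalty_barrier_solve(x, rho, mu)
# x approaches the true solution (0.5, 0.5).
```

Decreasing rho and mu at mismatched rates makes such schemes stall or diverge, which is one way to see why the parameter updates, not just the subproblem, deserve careful design.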
Adaptive Augmented Lagrangian Methods for Large-Scale Equality Constrained Optimization
, 2012
Abstract
We propose an augmented Lagrangian algorithm for solving large-scale equality constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its attractive local convergence behavior and ability to be implemented matrix-free. This latter strength is particularly important due to interest in employing augmented Lagrangian algorithms for solving large-scale optimization problems. We focus on a trust-region algorithm, but also propose a line-search algorithm that employs the same adaptive penalty parameter updating scheme. We provide theoretical results related to the global convergence behavior of our algorithms and illustrate by a set of numerical experiments that they outperform traditional augmented Lagrangian methods in terms of critical performance measures.
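The adaptive penalty update can be sketched on a one-variable problem. The rule below (increase rho only when the constraint violation fails to contract) is a simplified stand-in for the paper's steering strategy, which also monitors model decreases:

```python
def auglag(rho0=1.0, tau=0.5, gamma=10.0, iters=30):
    """Augmented Lagrangian iteration for the toy problem
        minimize x^2   subject to   x = 1,
    with  L(x, lam; rho) = x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2.
    The inner minimization is available in closed form here, so each outer
    iteration either updates the multiplier (first-order rule) or, when the
    constraint violation did not shrink by the factor tau, steers rho upward."""
    lam, rho, viol_old = 0.0, rho0, float("inf")
    for _ in range(iters):
        x = (rho - lam) / (2.0 + rho)   # exact minimizer of L(., lam; rho)
        viol = abs(x - 1.0)
        if viol > tau * viol_old:
            rho *= gamma                # insufficient progress: raise penalty
        else:
            lam += rho * (x - 1.0)      # first-order multiplier update
        viol_old = viol
    return x, lam, rho

x, lam, rho = auglag()
# The iteration converges to the KKT pair x* = 1, lam* = -2.
```

Because rho is raised only on demand, it stays moderate once the multiplier estimate is good, which is the ill-conditioning-avoidance benefit the adaptive scheme is after.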
Optimization
, 2010
Abstract
Publication details, including instructions for authors and subscription information: