Results 1–10 of 25
A trust region method based on interior point techniques for nonlinear programming
Mathematical Programming, 1996
Cited by 152 (19 self)

Abstract:
Jorge Nocedal. An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second-order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust region method.
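The barrier framework this abstract describes (solving a sequence of barrier problems for a decreasing barrier parameter) can be illustrated with a minimal sketch. This is not the paper's trust-region SQP method; it is a plain log-barrier iteration with a backtracking gradient-descent inner loop, applied to a hypothetical one-dimensional problem chosen for illustration.

```python
import numpy as np

def barrier_solve(f, f_grad, c, c_grad, x0, mus=(1.0, 0.1, 0.01, 0.001)):
    """Minimize f(x) subject to c(x) > 0 via a sequence of log-barrier
    problems  min f(x) - mu*log(c(x))  for decreasing mu."""
    x = float(x0)
    for mu in mus:
        phi = lambda z: f(z) - mu * np.log(c(z))      # barrier objective
        for _ in range(2000):
            g = f_grad(x) - mu * c_grad(x) / c(x)     # barrier gradient
            step = 0.1
            # backtrack: stay strictly feasible and decrease the barrier value
            while c(x - step * g) <= 0 or phi(x - step * g) > phi(x):
                step *= 0.5
                if step < 1e-16:
                    break
            if step >= 1e-16:
                x -= step * g
    return x

# toy problem: min (x - 2)^2  subject to  x <= 1   (i.e. c(x) = 1 - x > 0)
x_star = barrier_solve(f=lambda x: (x - 2.0) ** 2,
                       f_grad=lambda x: 2.0 * (x - 2.0),
                       c=lambda x: 1.0 - x,
                       c_grad=lambda x: -1.0,
                       x0=0.5)
print(x_star)  # approaches the constrained optimum x = 1 as mu -> 0
```

Each barrier subproblem is easy because its minimizer lies strictly inside the feasible set; driving mu to zero pushes the iterates toward the constrained optimum.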
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
SIAM J. Optim., 2000
Cited by 18 (1 self)

Abstract:
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists of computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
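The perturbed optimality conditions this abstract refers to can be made concrete on a toy problem. The sketch below is not the paper's BFGS method; it simply applies Newton's method to the KKT system of min (x-2)² subject to 1-x ≥ 0, with the complementarity condition perturbed to λ(1-x) = µ for a decreasing sequence of µ.

```python
import numpy as np

def perturbed_kkt_path(mus=(1.0, 0.1, 0.01, 1e-4), x=0.0, lam=1.0):
    """Follow the central path of  min (x-2)^2  s.t.  1 - x >= 0:
    solve  2(x-2) + lam = 0  and  lam*(1-x) = mu  by Newton's method,
    warm-starting each solve as mu decreases toward zero."""
    for mu in mus:
        for _ in range(50):
            F = np.array([2.0 * (x - 2.0) + lam,    # stationarity
                          lam * (1.0 - x) - mu])    # perturbed complementarity
            J = np.array([[2.0, 1.0],
                          [-lam, 1.0 - x]])         # Jacobian of F
            dx, dlam = np.linalg.solve(J, -F)
            x, lam = x + dx, lam + dlam
    return x, lam

x_star, lam_star = perturbed_kkt_path()
print(x_star, lam_star)  # x -> 1 and lam -> 2 as mu -> 0
```

For each fixed µ the perturbed system has a strictly interior solution; as µ tends to zero the iterates converge to the constrained optimum x = 1 with multiplier λ = 2.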
On the Solution of Mathematical Programming Problems With Equilibrium Constraints
2001
Cited by 12 (3 self)

Abstract:
Mathematical programming problems with equilibrium constraints (MPEC) are nonlinear programming problems whose constraints have a form analogous to the first-order optimality conditions of constrained optimization. We prove that, under reasonable sufficient conditions, stationary points of the sum of squares of the constraints are feasible points of the MPEC. In the usual formulations of MPEC, all feasible points are nonregular in the sense that they do not satisfy the Mangasarian-Fromovitz constraint qualification of nonlinear programming. Therefore, all feasible points satisfy the classical Fritz John necessary optimality conditions. In principle, this can cause serious difficulties for nonlinear programming algorithms applied to MPEC. However, we show that most feasible points do not satisfy a recently introduced stronger optimality condition for nonlinear programming. This is the reason why, in general, nonlinear programming algorithms are successful when applied to MPEC. Keywords: mathematical programming with equilibrium constraints, optimality conditions, minimization algorithms, reformulation. AMS: 90C33, 90C30.
Volatility calibration with American options
Methods and Applications of Analysis
Cited by 3 (0 self)

Abstract:
In this paper, we present two methods to calibrate the local volatility with American put options. Both calibration methods use a least-squares formulation and a descent algorithm. Pricing is done by solving parabolic variational inequalities, for which solution procedures by active set methods are discussed. The first strategy consists of computing the optimality conditions and the descent direction needed by the optimization loop. This approach has been implemented both at the continuous and discrete levels. It requires a careful analysis of the underlying variational inequalities and of their discrete counterparts. In the numerical example presented here (American options on the FTSE index), the squared volatility is parameterized by a bicubic spline. In the second approach, which works in low dimension, the descent directions are computed with automatic differentiation of computer programs implemented in C++.
Inductor shape optimization for electromagnetic casting, Technical Report no. RR6733
INRIA
A Feasible Directions Method for Nonsmooth Convex Optimization
2009
Cited by 2 (0 self)

Abstract:
We propose a new technique for the minimization of convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered “non-serious”, a cutting plane is added and a new search direction is computed. This procedure is repeated until a “serious” step is obtained. When this happens, the search direction is a feasible descent direction of the equivalent constrained problem. The search directions are computed with FDIPA, the Feasible Directions Interior Point Algorithm. We prove global convergence and solve several test problems very efficiently.
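The cutting-plane model described in this abstract (linear programs built from cuts, refined until an acceptable step is found) can be sketched in its simplest form. The code below is a plain Kelley-style cutting-plane loop, not the paper's FDIPA-based method; the bound box and the test function f(x) = |x| are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def kelley(f, subgrad, x0, lo=-2.0, hi=2.0, tol=1e-6, max_iter=50):
    """Kelley-style cutting-plane minimization of a convex f on [lo, hi].
    Model LP:  min z  s.t.  z >= f(xk) + g_k*(x - xk)  for all cuts so far."""
    cuts = []                            # each cut: (g_k, g_k*xk - f(xk))
    x = x0
    for _ in range(max_iter):
        fx, g = f(x), subgrad(x)
        cuts.append((g, g * x - fx))     # linearization at the current point
        A = [[gi, -1.0] for gi, _ in cuts]   # g_i*x - z <= rhs_i
        b = [rhs for _, rhs in cuts]
        res = linprog(c=[0.0, 1.0], A_ub=A, b_ub=b,
                      bounds=[(lo, hi), (None, None)])  # variables (x, z)
        x_new, z = res.x
        if fx - z <= tol:                # model lower bound meets f: optimal
            return x
        x = x_new
    return x

# nonsmooth test problem: minimize |x| starting from x = 1.5
x_star = kelley(f=abs, subgrad=lambda x: 1.0 if x >= 0 else -1.0, x0=1.5)
print(x_star)  # converges to the minimizer x = 0
```

The LP value z is always a lower bound on the true minimum, so the gap f(x) - z gives a rigorous stopping test; FDIPA replaces the LP solve with an interior-point direction computation.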
A feasible direction interior point algorithm for nonlinear semidefinite programming
2012
Electromagnetic Casting Inverse Problem, in EngOpt 2008
2008
Cited by 1 (0 self)

Abstract:
The aim of this paper is to solve an inverse problem concerning the electromagnetic casting of molten metals. We are interested in locating suitable inductors around a molten metal so that its equilibrium shape is as near as possible to a desired one. We derive an algorithm using Simultaneous Analysis and Design (SAND); this mathematical programming formulation is stated for the inverse problem. The resulting optimization problem is solved with FAIPA, a feasible directions interior point algorithm. Keywords: shape optimization, inverse problem, interior point method. The industrial technique of electromagnetic casting allows contactless heating, shaping and controlling of chemically aggressive hot melts. Applications concern the electromagnetic shaping of aluminum ingots using soft-contact confinement of the liquid metal, the electromagnetic shaping of components of aeronautical engines made of superalloy materials (Ni, Ti, ...), control of structure solidification, etc.
Optimization of superconducting magnetic rail using a feasible direction interior point algorithm
Progress In Electromagnetics Research B, Vol. 55, 2013; also in International Conference on Engineering Optimization, Rio de Janeiro, 2008
Cited by 1 (0 self)

Abstract:
One of the most promising commercial applications of superconducting magnetic levitation is urban trains. The conventional wheel-rail system is replaced by a rail of magnets interacting with superconductors installed in the vehicle. This type of transportation is very advantageous when compared with conventional techniques, like Light Rail Vehicles (LRV), yielding significant reductions in noise, losses and operation costs. Since the load of a magnetically levitated (Maglev) vehicle is distributed along the line and not concentrated at the wheel-rail contact point, the infrastructure costs can be reduced. Moreover, the Maglev vehicle does not need trucks and is therefore lighter. The main cost is the magnetic rail; therefore, any improvement in the shape and configuration of magnets and superconductors has a significant budgetary impact. In this contribution, the optimization of a magnetic rail is presented. The design variables are parameters that describe the geometry of the permanent magnets, the iron magnetic circuit and the superconductor. The main objective is to find the design variables that minimize the amount of magnetic material required for a given levitation force. Geometric constraints are also imposed. A finite element model is employed to compute the levitation force. A formulation for sensitivity analysis is obtained by differentiation of the equilibrium equation. The optimization is carried out with the Feasible Direction Interior Point Algorithm.
Sparse Quasi-Newton Matrices for Large-Scale Nonlinear Optimization
2005
Cited by 1 (0 self)

Abstract:
Quasi-Newton techniques for nonlinear optimization construct a full matrix that approximates the second derivative of the function, in the unconstrained case, or of the Lagrangian, when constraints are considered. Usually, numerical algorithms require positive definite quasi-Newton matrices. Classical techniques work with full quasi-Newton matrices, requiring a very large storage area and a great number of computations. We present a new updating technique to obtain positive definite sparse quasi-Newton matrices. This technique can be included in the Feasible Arc Interior Point Algorithm (FAIPA), in the Sequential Quadratic Programming method (SQP) and in primal-dual optimization algorithms. Several very large constrained test optimization problems, employing the present technique within FAIPA, were solved very efficiently. Keywords: sparse quasi-Newton, large-size optimization, nonlinear constrained optimization. In general, engineering design optimization problems can be represented by the following constrained nonlinear mathematical program: min f(x)
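For background on the positive definite quasi-Newton matrices this abstract discusses, the sketch below shows the classical dense BFGS update, which satisfies the secant condition and preserves positive definiteness whenever the curvature condition yᵀs > 0 holds. The paper's sparse updating technique is different; this is only the standard dense baseline it improves upon.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Dense BFGS update:  B+ = B - (Bs)(Bs)^T/(s^T B s) + y y^T/(y^T s).
    B+ satisfies the secant condition B+ s = y and remains symmetric
    positive definite whenever y^T s > 0."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# quadratic model: gradients differ by y = A s for a symmetric positive
# definite A, so y^T s = s^T A s > 0 and every update keeps B SPD
rng = np.random.default_rng(0)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.eye(2)
for _ in range(5):
    s = rng.standard_normal(2)     # step between iterates (illustrative)
    y = A @ s                      # corresponding change in the gradient
    B = bfgs_update(B, s, y)

print(np.allclose(B @ s, y))       # secant condition holds for the last pair
np.linalg.cholesky(B)              # succeeds only if B is positive definite
```

A dense B costs O(n²) storage; the paper's contribution is an update that keeps a prescribed sparsity pattern while retaining positive definiteness, which is what makes the very large FAIPA test problems tractable.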