Results 1–10 of 31
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
SIAM Review, Vol. 45, No. 3, pp. 385–482, 2003
"... Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked ..."
Abstract

Cited by 224 (15 self)
 Add to MetaCart
(Show Context)
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
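To make the class of methods concrete, here is a minimal coordinate (compass) search sketch in Python. It is a generic illustration of a derivative-free direct search, not the survey's own framework; the objective and starting point are illustrative assumptions.

```python
def compass_search(f, x, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f by polling +/- step along each coordinate; no derivatives used."""
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:             # unsuccessful poll: shrink the step
            step *= 0.5
    return x, fx

# Example: minimize a smooth quadratic without derivatives.
xmin, fmin = compass_search(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [0.0, 0.0])
```

The step-shrinking rule on unsuccessful polls is the mechanism that the convergence analysis mentioned in the abstract hinges on.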
Theoretical and Numerical Constraint-Handling Techniques used with Evolutionary Algorithms: A Survey of the State of the Art
2002
"... This paper provides a comprehensive survey of the most popular constrainthandling techniques currently used with evolutionary algorithms. We review approaches that go from simple variations of a penalty function, to others, more sophisticated, that are biologically inspired on emulations of the imm ..."
Abstract

Cited by 182 (26 self)
 Add to MetaCart
This paper provides a comprehensive survey of the most popular constraint-handling techniques currently used with evolutionary algorithms. We review approaches ranging from simple variations of a penalty function to more sophisticated, biologically inspired ones that emulate the immune system, culture, or ant colonies. Besides briefly describing each of these approaches (or groups of techniques), we offer some criticism regarding their highlights and drawbacks. A small comparative study is also conducted in order to assess the performance of several penalty-based approaches with respect to a dominance-based technique proposed by the author, and with respect to some mathematical programming approaches. Finally, we provide some guidelines on how to select the most appropriate constraint-handling technique for a certain application, and we conclude with some of the most promising paths of future research in this area.
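As a concrete example of the simplest family surveyed, here is a static penalty function driven by a greedy (1+1)-style search. The test problem, penalty weight, and mutation scale are illustrative assumptions, not taken from the paper.

```python
import random

def penalized_fitness(f, constraints, x, rho=1e3):
    """Static penalty: objective plus rho times the sum of squared
    violations of the inequality constraints g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + rho * violation

# Illustrative problem: minimize x0^2 + x1^2 subject to x0 + x1 >= 1,
# i.e. g(x) = 1 - x0 - x1 <= 0; the optimum is (0.5, 0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = [lambda x: 1.0 - x[0] - x[1]]

random.seed(0)
best = [1.0, 1.0]
for _ in range(20_000):
    # (1+1)-style mutation: Gaussian perturbation; keep the child
    # only if its penalized fitness improves.
    child = [xi + random.gauss(0.0, 0.1) for xi in best]
    if penalized_fitness(f, g, child) < penalized_fitness(f, g, best):
        best = child
```

The finite penalty weight biases the solution slightly into the infeasible region, which is one of the drawbacks of static penalties that the survey discusses.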
Interior methods for nonlinear optimization
SIAM Review, 2002
"... Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interiorpoint techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for ..."
Abstract

Cited by 120 (5 self)
 Add to MetaCart
(Show Context)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
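The classical barrier idea the article traces can be sketched in a few lines: replace the constraint with a logarithmic barrier term and follow the minimizers as the barrier parameter shrinks. The one-dimensional problem below (minimize x subject to x >= 1) is an illustrative assumption, not an example from the article.

```python
def barrier_minimizer(mu, x0=2.0, iters=100):
    """Damped Newton on the log-barrier function B(x) = x - mu*log(x - 1),
    the barrier subproblem for: minimize x subject to x >= 1."""
    x = x0
    for _ in range(iters):
        grad = 1.0 - mu / (x - 1.0)       # B'(x)
        hess = mu / (x - 1.0) ** 2        # B''(x) > 0
        step = grad / hess
        while x - step <= 1.0:            # damp the step to stay strictly feasible
            step *= 0.5
        x -= step
    return x

# The barrier minimizer is x = 1 + mu: these points trace the central
# path, approaching the constrained solution x* = 1 as mu -> 0.
path = [barrier_minimizer(mu) for mu in (1.0, 0.1, 0.01)]
```

Following this path of minimizers as mu decreases is exactly the "barrier method" whose revival the article recounts.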
A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds
2002
"... We give a pattern search methodfor nonlinearly constrained optimization that is an adaption of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, andToint [SIAM J. Numer. Anal., 28 (1991), pp. 545–572]. In the pattern search adaptation, we solve the bound constrained subp ..."
Abstract

Cited by 69 (7 self)
 Add to MetaCart
We give a pattern search method for nonlinearly constrained optimization that is an adaptation of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint [SIAM J. Numer. Anal., 28 (1991), pp. 545–572]. In the pattern search adaptation, we solve the bound constrained subproblem approximately using a pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of the subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace it with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. As far as we know, this is the first provably convergent direct search method for general nonlinear programming.
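A toy version of the scheme, with the bound-constrained pattern search replaced by a simple one-dimensional poll, might look like this. The test problem and the doubling penalty schedule are illustrative assumptions, not the authors' algorithm.

```python
def poll_1d(L, x, step=1.0, tol=1e-8):
    """Derivative-free 1-D poll (a stand-in for the bound-constrained
    pattern search): try +/- step, halve the step on unsuccessful polls."""
    fx = L(x)
    while step > tol:
        moved = False
        for trial in (x + step, x - step):
            ft = L(trial)
            if ft < fx:
                x, fx, moved = trial, ft, True
                break
        if not moved:
            step *= 0.5
    return x

def augmented_lagrangian(f, h, x=0.0, lam=0.0, rho=1.0, outer=15):
    """Minimize f(x) subject to h(x) = 0: minimize the augmented
    Lagrangian inexactly, then apply the first-order multiplier update."""
    for _ in range(outer):
        L = lambda y, lam=lam, rho=rho: f(y) + lam * h(y) + 0.5 * rho * h(y) ** 2
        x = poll_1d(L, x)          # inexact, derivative-free subproblem solve
        lam += rho * h(x)          # first-order multiplier update
        rho *= 2.0                 # tighten the penalty
    return x, lam

# Illustrative problem: minimize (x - 2)^2 subject to x - 1 = 0;
# the solution is x* = 1 with multiplier lambda* = 2.
xstar, lamstar = augmented_lagrangian(lambda x: (x - 2.0) ** 2, lambda x: x - 1.0)
```

The final poll size plays the role of the derivative-based stopping test in the original method: it bounds how inexact each subproblem solve is without ever measuring a gradient.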
Interior-point methods for optimization
2008
"... This article describes the current state of the art of interiorpoint methods (IPMs) for convex, conic, and general nonlinear optimization. We discuss the theory, outline the algorithms, and comment on the applicability of this class of methods, which have revolutionized the field over the last twen ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
This article describes the current state of the art of interior-point methods (IPMs) for convex, conic, and general nonlinear optimization. We discuss the theory, outline the algorithms, and comment on the applicability of this class of methods, which have revolutionized the field over the last twenty years.
Inverse Barrier Methods for Linear Programming
Revue RAIRO - Operations Research, 1991
"... In the recent interior point methods for linear programming much attention has been given to the logarithmic barrier method. In this paper we will analyse the class of inverse barrier methods for linear programming, in which the barrier is P x \Gammar i , where r ? 0 is the rank of the barrier. ..."
Abstract

Cited by 6 (1 self)
 Add to MetaCart
(Show Context)
In the recent interior point methods for linear programming, much attention has been given to the logarithmic barrier method. In this paper we analyse the class of inverse barrier methods for linear programming, in which the barrier is Σ_i x_i^{-r}, where r > 0 is the rank of the barrier. There are many similarities with the logarithmic barrier method. The minima of an inverse barrier function for different values of the barrier parameter define a 'central path' dependent on r, called the r-path of the problem. For r → 0 this path coincides with the central path determined by the logarithmic barrier function. We introduce a metric to measure the distance of a feasible point to a point on the path. We prove that in a certain region around a point on the path the Newton process converges quadratically. Moreover, outside this region, taking a step in the Newton direction decreases the barrier function value by at least a constant. We derive upper bounds for the total ...
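Under the common normalization B_r(x) = (x^{-r} - 1)/r, which recovers -log(x) as r → 0 (the paper's exact scaling may differ), the r-path of the toy problem "minimize x subject to x >= 0" has a closed form:

```python
def inverse_barrier(x, r):
    """Scaled inverse barrier (x**(-r) - 1)/r; tends to -log(x) as r -> 0."""
    return (x ** (-r) - 1.0) / r

def r_path_point(mu, r):
    """Minimizer of x + mu*inverse_barrier(x, r) over x > 0, the r-path for
    the toy problem 'minimize x subject to x >= 0'. Setting the derivative
    1 - mu*x**(-r-1) to zero gives x = mu**(1/(1+r))."""
    return mu ** (1.0 / (1.0 + r))

# As r -> 0, the r-path point approaches the log-barrier central-path
# point x = mu for the same problem.
pts = [r_path_point(0.25, r) for r in (1.0, 0.1, 0.001)]
```

This is the one-dimensional version of the abstract's claim that the r-paths converge to the logarithmic central path as r → 0.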
Numerical solution of differential systems with algebraic inequalities arising in robot programming
In Proceedings of the IEEE Conference on Robotics and Automation, 1995
"... ..."
Asymptotic Analysis of Congested Communication Networks
1997
"... : This paper is devoted to the study of a routing problem in telecommunication networks, when the cost function is the average delay. We establish asymptotic expansions for the value function and solutions in the vicinity of a congested nominal problem. The study is strongly related to the one of a ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
This paper is devoted to the study of a routing problem in telecommunication networks when the cost function is the average delay. We establish asymptotic expansions for the value function and solutions in the vicinity of a congested nominal problem. The study is strongly related to that of a partial inverse barrier method for linear programming. Keywords: telecommunication networks, multicommodity flows, asymptotic expansions, linear programming, perturbation analysis, barrier functions, penalty methods.
Inverse barriers and CES-functions in linear programming
1995
"... Recently much attention was paid to polynomial interior point methods, almost exclusively based on the logarithmic barrier function. Some attempts were made to prove polynomiality of other barrier methods (e.g. the inverse barrier method) but without success. Other interior point methods could be de ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
Recently much attention has been paid to polynomial interior point methods, almost exclusively based on the logarithmic barrier function. Some attempts were made to prove polynomiality of other barrier methods (e.g. the inverse barrier method), but without success. Other interior point methods can be defined based on CES-functions (CES is the abbreviation of Constant Elasticity of Substitution). The classical inverse barrier function and the CES-functions have a similar structure. In this paper we compare the path defined by the inverse barrier function and the path defined by CES-functions in the case of linear programming. It will be shown that the two paths are equivalent, although parameterized differently. We also construct a dual of the CES-function problem which is based on the dual CES-function. This result also completes the duality results for linear programs with one CES-type (p-norm) constraint. Key words: linear programming, interior-point methods, inverse barrier, CES-func...
Constrained optimization via multiobjective evolutionary algorithms
In Deb (Eds.), Multiobjective Problem Solving from Nature: From Concepts to Applications, Springer-Verlag, Natural Computing Series, 2008, ISBN: 978-3-540-72963-1
"... Summary. In this chapter, we present a survey of constrainthandling techniques based on evolutionary multiobjective optimization concepts. We present some basic definitions required to make this chapter selfcontained, and then we introduce the way in which a global (singleobjective) nonlinear opt ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
(Show Context)
Summary. In this chapter, we present a survey of constraint-handling techniques based on evolutionary multiobjective optimization concepts. We present some basic definitions required to make this chapter self-contained, and then we introduce the way in which a global (single-objective) nonlinear optimization problem is transformed into an unconstrained multiobjective optimization problem. A taxonomy of methods is also proposed, and each of them is briefly described. Some interesting findings regarding common features of the approaches analyzed are also discussed.
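The transformation described in the summary can be sketched as follows: pair each candidate's objective value with its total constraint violation, then compare candidates by Pareto dominance. The problem and helper names below are illustrative assumptions, not the chapter's notation.

```python
def total_violation(constraints, x):
    """Sum of violations of the inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def dominates(a, b):
    """Pareto dominance on (objective, violation) pairs: a dominates b if it
    is no worse in both components and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

# Illustrative single-objective problem: minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x * x
g = [lambda x: 1.0 - x]
to_pair = lambda x: (f(x), total_violation(g, x))   # the bi-objective view
```

A feasible point with a smaller objective dominates a feasible point with a larger one (e.g. x = 1 versus x = 2), while a feasible point and an infeasible point with a better objective are often mutually non-dominated, which is what lets a population retain both kinds of search information.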