Results 1–10 of 14
GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS, INCLUDING PROBLEMS WITH COMPLEMENTARITY CONSTRAINTS
, 2012
Abstract

Cited by 7 (2 self)
We consider global convergence properties of augmented Lagrangian methods on problems with degenerate constraints, with a special emphasis on mathematical programs with complementarity constraints (MPCC). In the general case, we show convergence to stationary points of the problem under an error bound condition for the feasible set (which is weaker than constraint qualifications), assuming that the iterates have some modest features of approximate local minimizers of the augmented Lagrangian. For MPCC, we first argue that even weak forms of general constraint qualifications that are suitable for convergence of augmented Lagrangian methods, such as the recently proposed relaxed positive linear dependence condition, should not be expected to hold, and thus special analysis is needed. We next obtain a rather complete picture, showing that under the MPCC-linear independence constraint qualification, usual in this context, accumulation points of the iterates are guaranteed to be C-stationary for MPCC (better than weakly stationary), but in general need not be M-stationary (hence, not strongly stationary either). However, strong stationarity is guaranteed if the generated dual sequence is bounded, which we show to be the typical …
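The augmented Lagrangian scheme this and several of the following abstracts build on can be sketched as follows. This is a generic PHR-style iteration of my own for a single equality constraint, not code from the paper; SciPy's default BFGS stands in for whatever inner solver a real method would use, and the penalty-doubling rule is illustrative:

```python
# Minimal sketch (mine, not the paper's algorithm) of a classical
# PHR augmented Lagrangian iteration for  min f(x)  s.t.  h(x) = 0.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, lam=0.0, rho=10.0, iters=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # inner subproblem: minimize the augmented Lagrangian in x
        L = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z) ** 2
        x = minimize(L, x).x
        lam = lam + rho * h(x)          # first-order multiplier update
        if abs(h(x)) > 1e-8:            # still infeasible: increase penalty
            rho *= 2.0
    return x, lam

# example: min x0^2 + x1^2  s.t.  x0 + x1 = 1; solution (0.5, 0.5), lam = -1
f = lambda z: z[0] ** 2 + z[1] ** 2
h = lambda z: z[0] + z[1] - 1.0
x, lam = augmented_lagrangian(f, h, [0.0, 0.0])
```

On this well-behaved example the multiplier estimate settles quickly; the papers above are concerned precisely with what happens when the constraints are degenerate and such convergence is not automatic.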
Augmented Lagrangians with possible infeasibility and finite termination for global nonlinear programming
, 2012
Abstract

Cited by 2 (0 self)
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessary potentially high precisions in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results are related to practical computations will be given.
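The adaptive modification described above can be caricatured in one function (hypothetical names and constants, not the paper's actual rule): the precision demanded of each subproblem is tied to the current infeasibility and complementarity measures, with a floor, so early subproblems are solved loosely and later ones tightly:

```python
# Hypothetical sketch of an adaptive subproblem-tolerance rule in the
# spirit of the abstract; eps_min and scale are illustrative constants.
def subproblem_tolerance(infeasibility, complementarity,
                         eps_min=1e-8, scale=0.1):
    # loose while far from feasibility, tight near it; floor keeps it achievable
    return max(eps_min, scale * max(infeasibility, complementarity))

tol_early = subproblem_tolerance(1.0, 0.5)   # loose tolerance early on
tol_late = subproblem_tolerance(0.0, 0.0)    # tolerance floor once feasible
```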
Global Nonlinear Programming with possible infeasibility and finite termination
, 2012
Abstract

Cited by 1 (0 self)
In a recent paper, Birgin, Floudas and Martínez introduced an augmented Lagrangian method for global optimization. In their approach, augmented Lagrangian subproblems are solved using the αBB method and convergence to global minimizers was obtained assuming feasibility of the original problem. In the present research, the algorithm mentioned above will be improved in several crucial aspects. On the one hand, feasibility of the problem will not be required. Possible infeasibility will be detected in finite time by the new algorithms and optimal infeasibility results will be proved. On the other hand, finite termination results that guarantee optimality and/or feasibility up to any required precision will be provided. An adaptive modification in which subproblem tolerances depend on current feasibility and complementarity will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessary potentially high precisions in the intermediate steps of the method, which improves the overall efficiency. Experiments showing how the new algorithms and results are related to practical computations will be given.
www.scielo.br/cam Active-set strategy in Powell’s method for optimization without derivatives
Abstract
Abstract. In this article we present an algorithm for solving bound constrained optimization problems without derivatives, based on Powell’s method [38] for derivative-free optimization. First we consider the unconstrained optimization problem. At each iteration a quadratic interpolation model of the objective function is constructed around the current iterate, and this model is minimized to obtain a new trial point. The whole process is embedded within a trust-region framework. Our algorithm uses the infinity norm instead of the Euclidean norm, and we solve a box constrained quadratic subproblem using an active-set strategy to explore the faces of the box. Therefore, the algorithm extends naturally to bound constrained optimization. We compare our implementation with NEWUOA and BOBYQA, Powell’s algorithms for unconstrained and bound constrained derivative-free optimization, respectively. Numerical experiments show that, in general, our algorithm requires fewer function evaluations than Powell’s algorithms.
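A drastically simplified one-dimensional sketch (mine, not the authors' implementation) of the interpolation-based trust-region loop the abstract describes: fit a quadratic model through three sample points, minimize the model over the trust region, and accept or shrink based on the actual-versus-predicted reduction:

```python
# Toy 1-D derivative-free trust-region iteration; real methods (NEWUOA,
# BOBYQA) maintain multivariate interpolation sets, which is omitted here.
import numpy as np

def dfo_tr(f, x, delta=1.0, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        pts = np.array([x - delta, x, x + delta])
        a, b, c = np.polyfit(pts, [f(p) for p in pts], 2)  # model q(s)=a s^2+b s+c
        q = lambda s: a * s * s + b * s + c
        cand = [x - delta, x + delta]                      # trust-region boundary
        if a > 0:                                          # interior stationary point
            cand.append(float(np.clip(-b / (2 * a), x - delta, x + delta)))
        trial = min(cand, key=q)                           # model minimizer in the TR
        pred = q(x) - q(trial)                             # predicted reduction
        if pred <= tol:
            break
        if f(x) - f(trial) >= 0.1 * pred:                  # enough actual reduction?
            x = trial                                      # accept the step
        else:
            delta *= 0.5                                   # reject and shrink the TR
    return x

x_min = dfo_tr(lambda t: (t - 3.0) ** 2 + 1.0, x=0.0)      # minimizer at t = 3
```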
Outer Trust-Region method for Constrained Optimization
, 2009
Abstract
Given an algorithm A for solving some mathematical problem based on the iterative solution of simpler subproblems, an Outer Trust-Region (OTR) modification of A is the result of adding a trust-region constraint to each subproblem. The trust-region size is adaptively updated according to the behavior of crucial variables. The new subproblems should not be more complex than the original ones, and the convergence properties of the Outer Trust-Region algorithm should be the same as those of algorithm A. Some reasons for introducing OTR modifications are given in the present work. Convergence results for an OTR version of an augmented Lagrangian method for nonconvex constrained optimization are proved, and numerical experiments are presented.
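The OTR idea can be sketched generically (a hypothetical wrapper of my own; the 0.9/2.0/0.5 update constants are illustrative, not from the paper): take an existing subproblem solver and add a box trust-region constraint ||z − x_k||∞ ≤ Δ, adapting Δ to the step actually taken:

```python
# Hypothetical OTR wrapper: the "existing subproblem" is bound-constrained
# minimization via L-BFGS-B; the box bounds are the added TR constraint.
import numpy as np
from scipy.optimize import minimize

def otr_step(subproblem_obj, x, delta):
    # the added outer trust-region constraint: ||z - x||_inf <= delta
    bounds = [(xi - delta, xi + delta) for xi in x]
    z = minimize(subproblem_obj, x, bounds=bounds, method="L-BFGS-B").x
    step = np.max(np.abs(z - x))
    # adapt the radius: enlarge if the step hit the boundary, else shrink
    delta = 2.0 * delta if step >= 0.9 * delta else max(0.5 * delta, 1e-6)
    return z, delta

# toy demo: quadratic subproblem objective with minimizer (1, -2)
obj = lambda z: (z[0] - 1.0) ** 2 + (z[1] + 2.0) ** 2
x, delta = np.array([5.0, -5.0]), 1.0
for _ in range(25):
    x, delta = otr_step(obj, x, delta)
```

The subproblem keeps its original structure (only box bounds are added), which is the point of the OTR construction.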
Optimization
, 2010
Extensions and applications of . . . Decision Making
, 2012
Abstract
The research presented in this thesis is a collection of applications and extensions of stochastic accumulator models to various areas of decision making and attention in neuroscience. Ch. 1 introduces the major techniques and experimental results that guide us throughout the rest of the thesis. In particular, we introduce and define the leaky competing accumulator, drift diffusion, and Ornstein-Uhlenbeck models. In Ch. 2, we adopt an Ornstein-Uhlenbeck (OU) process to fit a generalized version of the motion dots task in which monkeys are now faced with biased rewards. We demonstrate that monkeys shift their behaviors in a systematic way, and that they do so in a near-optimal manner. We also fit the OU model to neural data and find that the OU model behaves almost like a pure drift diffusion process. This gives further evidence that the DDM is a good model for both the behavior and neural activity related to perceptual choice. In Ch. 3, we construct a multi-area model for a covert search task. We discover …
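As a minimal illustration of the kind of model the thesis builds on (my own Euler-Maruyama sketch, not code from the thesis; parameter names are illustrative), an Ornstein-Uhlenbeck accumulator dX = θ(μ − X)dt + σ dW relaxes toward μ, and recovers a pure drift diffusion process in the limit θ → 0:

```python
# Euler-Maruyama simulation of an Ornstein-Uhlenbeck decision variable.
import numpy as np

def simulate_ou(theta, mu, sigma, x0=0.0, dt=1e-3, steps=5000, seed=0):
    # dX = theta * (mu - X) dt + sigma dW, discretized with step dt
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        dw = rng.normal(scale=np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

# with theta = 2 the path relaxes toward mu = 1 and fluctuates around it
path = simulate_ou(theta=2.0, mu=1.0, sigma=0.1)
```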
DO-TH 12/15 Determining weak phases from B → J/ψP decays
, 2012
Abstract
The decay B → J/ψKS remains the most important source of information for the Bd mixing phase, determined by the CKM angle β in the standard model. When aiming at a precision appropriate for present and coming high luminosity colliders, the corresponding hadronic matrix elements are a major obstacle, as their precise calculation is still not feasible with existing methods. Flavour symmetries offer a possibility to extract them from data, however again with limited precision. In this article, we propose a framework to take subleading contributions in Bu,d,s → J/ψP decays into account, P ∈ {π, K, η8}, using an SU(3) analysis together with the leading corrections to the symmetry limit. This allows for a model-independent extraction of the Bd mixing phase adequate for coming high precision data, and additionally yields information on possible New Physics contributions in these modes. We find the penguin-induced correction to be small, ∆S ≲ 0.01, a limit which can be improved with coming data on CP asymmetries and branching ratios. Finally, the sensitivity to the CKM angle γ from these modes is critically examined, yielding a less optimistic picture than previously envisaged.
Survey
Abstract
Adaptive survey designs to minimize survey mode effects: a case study on the Dutch Labor Force Survey
Invited “Discussion Paper” for TOP: CRITICAL LAGRANGE MULTIPLIERS: WHAT WE CURRENTLY KNOW ABOUT THEM, HOW THEY SPOIL OUR LIFE, AND WHAT WE CAN DO ABOUT IT
, 2014
Abstract
We discuss a certain special subset of Lagrange multipliers, called critical, which usually exist when the multipliers associated to a given solution are not unique. These multipliers appear to be important for a number of reasons, some understood better, some (currently) not fully. What is clear is that Newton and Newton-related methods have an amazingly strong tendency to generate sequences with dual components converging to critical multipliers. This is quite striking because, typically, the set of critical multipliers is “thin” (the set of noncritical ones is relatively open and dense, meaning that its closure is the whole set). Apart from the mathematical curiosity to understand the phenomenon for something as classical as the Newton method, the attraction to critical multipliers is relevant computationally. This is because convergence to such multipliers is the reason for slow convergence of the Newton method in degenerate cases, as convergence to noncritical limits (if it were to happen) would have given the superlinear rate. Moreover, the attraction phenomenon shows up not only for the basic Newton method, but also for other related techniques (for example, quasi-Newton methods and the linearly constrained augmented Lagrangian method). In spite of clear computational …
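The attraction phenomenon the abstract describes can be reproduced on a tiny degenerate problem. The example below is my own, not from the paper: for min x² s.t. x² = 0, every λ is a Lagrange multiplier at x* = 0; the critical one is λ = −1, and Newton's method on the KKT system drives λ to exactly that value while the primal iterates converge only linearly (each Newton step halves x):

```python
# Newton's method on the KKT system of the degenerate problem
#   min x^2  s.t.  x^2 = 0,  i.e.  F(x, lmbda) = (2x(1 + lmbda), x^2) = 0.
import numpy as np

def newton_kkt(x, lmbda, iters=30):
    for _ in range(iters):
        F = np.array([2.0 * x * (1.0 + lmbda), x ** 2])
        J = np.array([[2.0 * (1.0 + lmbda), 2.0 * x],
                      [2.0 * x, 0.0]])
        dx, dl = np.linalg.solve(J, -F)
        x, lmbda = x + dx, lmbda + dl
    return x, lmbda

x, lmbda = newton_kkt(1.0, 0.0)
# the dual iterates converge to the critical multiplier lmbda = -1, and the
# primal error only halves per iteration (linear, not superlinear, rate)
```

A short calculation confirms the observed behavior: the Newton step here satisfies dx = −x/2 and dλ = −(1 + λ)/2, so (λ + 1) halves every iteration along with x.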