Results 1-8 of 8
Improving ultimate convergence of an Augmented Lagrangian method
(2007)
Cited by 14 (0 self)
Abstract:
Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method, being more robust and efficient than its “pure” counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page:
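The Powell-Hestenes-Rockafellar scheme this abstract builds on can be sketched in a few lines. The toy problem, step sizes, and function names below are illustrative assumptions, not the TANGO implementation:

```python
# Minimal sketch of the classical Powell-Hestenes-Rockafellar augmented
# Lagrangian loop for min f(x) s.t. g(x) <= 0, on the toy problem
# f(x) = x^2, g(x) = 1 - x (solution x* = 1, multiplier lam* = 2).
# All names and tolerances are illustrative, not the TANGO code.

def inner_minimize(x, lam, rho, steps=2000, lr=1e-3):
    """Approximately minimize the augmented Lagrangian in x by gradient descent."""
    for _ in range(steps):
        t = lam + rho * (1.0 - x)   # shifted constraint value lam + rho*g(x)
        grad = 2.0 * x              # gradient of f
        if t > 0.0:
            grad -= t               # penalty term's gradient (dg/dx = -1)
        x -= lr * grad
    return x

def phr(x=0.0, lam=0.0, rho=10.0, outer=30):
    for _ in range(outer):
        x = inner_minimize(x, lam, rho)
        lam = max(0.0, lam + rho * (1.0 - x))   # PHR multiplier update
    return x, lam
```

The projected multiplier update `max(0, lam + rho*g(x))` is what distinguishes the PHR treatment of inequalities from a plain quadratic penalty; the hybrid method in the abstract would hand the identified KKT system off to a Newton step instead of iterating this loop to high accuracy.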
Online Learning in the Embedded Manifold of Low-rank Matrices
"... When learning models that are represented in matrix forms, enforcing a lowrank constraint can dramatically improve the memory and run time complexity, while providing a natural regularization of the model. However, naive approaches to minimizing functions over the set of lowrank matrices are eithe ..."
Abstract

Cited by 14 (0 self)
 Add to MetaCart
When learning models that are represented in matrix forms, enforcing a low-rank constraint can dramatically improve the memory and run-time complexity, while providing a natural regularization of the model. However, naive approaches to minimizing functions over the set of low-rank matrices are either prohibitively time consuming (repeated singular value decomposition of the matrix) or numerically unstable (optimizing a factored representation of the low-rank matrix). We build on recent advances in optimization over manifolds, and describe an iterative online learning procedure, consisting of a gradient step, followed by a second-order retraction back to the manifold. While the ideal retraction is costly to compute, and so is the projection operator that approximates it, we describe another retraction that can be computed efficiently. It has run time and memory complexity of O((n+m)k) for a rank-k matrix of dimension m×n, when using an online procedure with rank-one gradients. We use this algorithm, LORETA, to learn a matrix-form similarity measure over pairs of documents represented as high dimensional vectors. LORETA improves the mean average precision over a passive-aggressive approach in a factorized model, and also improves over a full model trained on preselected features using the same memory requirements. We further adapt LORETA to learn positive semidefinite low-rank matrices, providing an online algorithm for low-rank metric learning. LORETA also shows consistent improvement over standard weakly supervised methods in a large (1600 classes and 1 million images, using ImageNet) multi-label image classification task.
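The "gradient step followed by a retraction" pattern the abstract describes can be sketched minimally. Note the retraction below is the naive truncated-SVD projection, i.e. exactly the costly operation LORETA's efficient retraction replaces; the bilinear toy loss and all names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Sketch of online learning on the rank-k manifold: take a Euclidean
# gradient step, then retract back to the manifold. Here the retraction is
# the expensive best rank-k approximation via SVD, used only to illustrate
# the pattern; LORETA's contribution is a cheaper retraction.

def svd_retract(W, k):
    """Best rank-k approximation of W (the costly 'ideal' projection)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def online_step(W, x, y, target, k, lr=0.5):
    """One online update of the bilinear score x^T W y toward `target`."""
    score = x @ W @ y
    grad = (score - target) * np.outer(x, y)   # rank-one gradient
    return svd_retract(W - lr * grad, k)
```

With rank-one gradients, the update to W is itself low rank, which is what makes an O((n+m)k) retraction possible in place of the full SVD shown here.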
IDENTIFYING ACTIVITY
, 901
"... Abstract. Identification of active constraints in constrained optimization is of interest from both practical and theoretical viewpoints, as it holds the promise of reducing an inequalityconstrained problem to an equalityconstrained problem, in a neighborhood of a solution. We study this issue in ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
(Show Context)
Identification of active constraints in constrained optimization is of interest from both practical and theoretical viewpoints, as it holds the promise of reducing an inequality-constrained problem to an equality-constrained problem, in a neighborhood of a solution. We study this issue in the more general setting of composite nonsmooth minimization, in which the objective is a composition of a smooth vector function c with a lower semicontinuous function h, typically nonsmooth but structured. In this setting, the graph of the generalized gradient ∂h can often be decomposed into a union (nondisjoint) of simpler subsets. “Identification” amounts to deciding which subsets of the graph are “active” in the criticality conditions at a given solution. We give conditions under which any convergent sequence of approximate critical points finitely identifies the activity. Prominent among these properties is a condition akin to the Mangasarian-Fromovitz constraint qualification, which ensures boundedness of the set of multiplier vectors that satisfy the optimality conditions at the solution. Key words: constrained optimization, composite optimization, Mangasarian-Fromovitz constraint qualification, active set, identification.
Nonlinear Analysis
"... This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal noncommercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or sel ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the author's institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third-party websites, are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or TeX form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit:
ACTIVE SET IDENTIFICATION FOR LINEARLY CONSTRAINED MINIMIZATION WITHOUT EXPLICIT DERIVATIVES
"... Abstract. We consider active set identification for linearly constrained optimization problems in the absence of explicit information about the derivative of the objective function. We begin by presenting some general results on active set identification that are not tied to any particular algorithm ..."
Abstract
 Add to MetaCart
(Show Context)
We consider active set identification for linearly constrained optimization problems in the absence of explicit information about the derivative of the objective function. We begin by presenting some general results on active set identification that are not tied to any particular algorithm. These general results are sufficiently strong that, given a sequence of iterates converging to a Karush–Kuhn–Tucker point, it is possible to identify binding constraints for which there are nonzero multipliers. We then focus on generating set search methods, a class of derivative-free direct search methods. We discuss why these general results, which are posed in terms of the direction of steepest descent, apply to generating set search, even though these methods do not have explicit recourse to derivatives. Nevertheless, there is a clearly identifiable subsequence of iterations at which we can reliably estimate the set of constraints that are binding at a solution. We discuss how active set estimation can be used to accelerate generating set search methods and illustrate the appreciable improvement that can result using several examples from the CUTEr test suite. We also introduce two algorithmic refinements for generating set search methods. The first expands the subsequence of iterations at which we can make inferences about stationarity. The second is a more flexible step acceptance criterion.
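The kind of binding-constraint estimate the abstract refers to can be illustrated in a few lines. The residual rule and all names below are illustrative assumptions, not the paper's criterion (which ties its tolerance to the step-size control of generating set search):

```python
# Hedged sketch: for linear constraints a_i^T x <= b_i, predict as binding
# those whose residual b_i - a_i^T x falls below a tolerance at the current
# iterate. Rule and names are illustrative, not the paper's criterion.

def estimate_active_set(x, A, b, tol=1e-6):
    """Indices of constraints a_i^T x <= b_i that look binding at x."""
    active = []
    for i, (a_i, b_i) in enumerate(zip(A, b)):
        residual = b_i - sum(aj * xj for aj, xj in zip(a_i, x))
        if residual <= tol:
            active.append(i)
    return active
```

An estimate like this is what lets a direct search method restrict its polling directions to those feasible with respect to the predicted binding constraints, which is the acceleration mechanism the abstract describes.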
On the accurate identification of active set for constrained minimax (Nonlinear Analysis)
Active-set prediction for interior point methods using controlled perturbations
Coralia Cartis and Yiming Yan
Abstract:
We propose the use of controlled perturbations to address the challenging question of optimal active-set prediction for interior point methods. Namely, in the context of linear programming, we consider perturbing the inequality constraints/bounds so as to enlarge the feasible set. We show that if the perturbations are chosen appropriately, the solution of the original problem lies on or close to the central path of the perturbed problem. We also find that a primal-dual path-following algorithm applied to the perturbed problem is able to accurately predict the optimal active set of the original problem when the duality gap for the perturbed problem is not too small; furthermore, depending on problem conditioning, this prediction can happen sooner than predicting the active set for the perturbed problem or for the original one if no perturbations are used. Encouraging preliminary numerical experience is reported when comparing activity prediction for the perturbed and unperturbed problem formulations.
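The two ingredients of the abstract's setup can be sketched minimally: relaxing the bounds to enlarge the feasible set, and predicting activity from an approximate primal-dual point. The slack-versus-dual indicator below is a standard interior-point heuristic used here for illustration; the names and the rule are assumptions, not the authors' controlled-perturbation criterion:

```python
# Hedged sketch of activity prediction from an approximate primal-dual LP
# solution: for inequality a_i^T x <= b_i with slack s_i and dual value y_i,
# predict index i as active when its slack is dominated by its dual. The
# perturbation enlarges the feasible set to A x <= b + lam.

def perturb_bounds(b, lam):
    """Relax each inequality bound by lam > 0, enlarging the feasible set."""
    return [bi + lam for bi in b]

def predict_active(slacks, duals):
    """Simple primal-dual indicator: predict i active iff slack_i < dual_i."""
    return [i for i, (s, y) in enumerate(zip(slacks, duals)) if s < y]
```

The paper's point is about when such a prediction becomes reliable along the path-following iterations of the perturbed problem, not about the indicator itself.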