Results 1–10 of 20
CHARACTERIZATIONS OF ERROR BOUNDS FOR LOWER SEMICONTINUOUS FUNCTIONS ON METRIC SPACES
, 2004
Abstract

Cited by 28 (0 self)
Refining the variational method introduced in Azé et al. [Nonlinear Anal. 49 (2002) 643–670], we give characterizations of the existence of so-called global and local error bounds for lower semicontinuous functions defined on complete metric spaces. We thus provide a systematic and synthetic approach to the subject, emphasizing the special case of convex functions defined on arbitrary Banach spaces (refining the abstract part of Azé and Corvellec [SIAM J. Optim. 12 (2002) 913–927]), and the characterization of the local metric regularity of closed-graph multifunctions between complete metric spaces.
Exact regularization of convex programs
, 2007
Abstract

Cited by 26 (1 self)
The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 266–273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas [Math. Programming, 9 (1975), pp. 87–99] and by Bertsekas, Nedić, and Ozdaglar [Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003]. When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a “weak sharp minimum” is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the ℓ1 regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of ℓ1 regularization in finding sparse solutions.
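The way ℓ1 regularization selects sparse solutions of a degenerate program can be sketched on a toy linear program (an illustrative example chosen here, not one of the paper's benchmarks): minimize x1 + 2·x2 subject to x1 + 2·x2 = 2, x ≥ 0. The objective is constant (= 2) on the whole feasible segment from (2, 0) to (0, 1), so every feasible point is optimal, and adding δ·‖x‖₁ picks out the sparse optimum (0, 1) for any δ > 0 — an exact regularization in the paper's sense.

```python
# Toy degenerate LP:  min x1 + 2*x2  s.t.  x1 + 2*x2 = 2,  x >= 0.
# All feasible points are optimal; the l1 term selects the sparse one.
# (Illustrative sketch, not one of the paper's benchmark problems.)

def l1(x):
    return sum(abs(v) for v in x)

def regularized_obj(x, delta):
    # original objective plus l1 regularization term
    return x[0] + 2 * x[1] + delta * l1(x)

# Parametrize the optimal segment: x(t) = (2*t, 1 - t) for t in [0, 1].
delta = 0.1  # any positive value below the exactness threshold works here
candidates = [(2 * t / 100, 1 - t / 100) for t in range(101)]
best = min(candidates, key=lambda x: regularized_obj(x, delta))

print("regularized solution:", best)                    # the sparse optimum (0.0, 1.0)
print("original objective  :", best[0] + 2 * best[1])   # 2.0, so the solution is exact
```

Because the regularized minimizer still attains the original optimal value 2, the regularization is exact for this δ, matching the threshold behavior described in the abstract.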
Error bounds: Necessary and sufficient conditions
, 2010
Abstract

Cited by 17 (6 self)
The paper presents a general classification scheme of necessary and sufficient criteria for the error bound property, incorporating the existing conditions. Several derivative-like objects, both from the primal as well as from the dual space …
First-order and second-order conditions for error bounds
 SIAM J. Optim
Abstract

Cited by 10 (5 self)
Abstract. For a lower semicontinuous function f on a Banach space X, we study the existence of a positive scalar µ such that the distance function dS associated with the solution set S of f(x) ≤ 0 satisfies dS(x) ≤ µ max{f(x), 0} for each point x in a neighborhood of some point x0 in X with f(x) < ε, for some 0 < ε ≤ +∞. We give several sufficient conditions for this in terms of an abstract subdifferential and the Dini derivatives of f. In a Hilbert space we further present some second-order conditions. We also establish the corresponding results for a system of inequalities, equalities, and an abstract constraint set.
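The error bound inequality dS(x) ≤ µ max{f(x), 0} can be checked numerically on a simple one-dimensional case (an example assumed here for illustration, not taken from the paper): for f(x) = |x| − 1 on the real line, the solution set of f(x) ≤ 0 is S = [−1, 1], so dS(x) = max(|x| − 1, 0) and the bound holds globally with µ = 1.

```python
# Illustrative check of the error bound  d_S(x) <= mu * max{f(x), 0}
# for f(x) = |x| - 1 on the real line (toy example, not from the paper).
# The solution set of f(x) <= 0 is S = [-1, 1], hence
# d_S(x) = max(|x| - 1, 0), and the bound holds globally with mu = 1.

def f(x):
    return abs(x) - 1.0

def dist_S(x):
    # Euclidean distance from x to the interval S = [-1, 1].
    return max(abs(x) - 1.0, 0.0)

mu = 1.0
sample = [-10.0, -1.5, -1.0, -0.25, 0.0, 0.5, 1.0, 3.0]
for x in sample:
    assert dist_S(x) <= mu * max(f(x), 0.0)
print("d_S(x) <= mu * max{f(x), 0} holds at all sample points")
```

Note that outside S the two sides coincide exactly in this example, so µ = 1 is the smallest constant for which the bound holds.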
Metric regularity of epigraphical multivalued mappings and applications to vector optimization
 Mathematical Programming, Serie B, accepted
ERROR BOUNDS FOR DEGENERATE CONE INCLUSION PROBLEMS
Abstract

Cited by 3 (0 self)
Abstract. Error bounds for cone inclusion problems in Banach spaces are established under conditions weaker than Robinson’s constraint qualification. The results allow the cone to be more general than the origin, and therefore also generalize a classical error bound result concerning equality-constrained sets in optimization. Key words. cone inclusion problems, error bounds, Robinson’s constraint qualification, tangent cones.
ERROR BOUNDS FOR VECTOR-VALUED FUNCTIONS: NECESSARY AND SUFFICIENT CONDITIONS
, 2011
Abstract

Cited by 2 (1 self)
In this paper, we attempt to extend the definition and existing local error bound criteria to vector-valued functions or, more generally, to functions taking values in a normed linear space. Some new derivative-like objects (slopes and subdifferentials) are introduced and a general classification scheme of error bound criteria is presented.
Error Bounds for Eigenvalue and Semidefinite Matrix Inequality Systems
Abstract

Cited by 2 (0 self)
Dedicated to Terry Rockafellar in honor of his 70th birthday. Abstract. In this paper we give sufficient conditions for the existence of error bounds for systems expressed in terms of eigenvalue functions (such as in eigenvalue optimization) or positive semidefiniteness (such as in semidefinite programming).
Necessary optimality conditions for optimal control problems with nonsmooth mixed state and control constraints
Abstract
In this paper we study an optimal control problem with nonsmooth mixed state and control constraints. In most of the existing results, the necessary optimality conditions for optimal control problems with mixed state and control constraints are derived under the Mangasarian–Fromovitz condition and under the assumption that the state and control constraint functions are smooth. In this paper we derive necessary optimality conditions for problems with nonsmooth mixed state and control constraints under constraint qualifications based on pseudo-Lipschitz continuity and calmness of certain set-valued maps. The necessary conditions are stratified, in the sense that they are asserted on precisely the domain upon which the hypotheses (and the optimality) are assumed to hold. Moreover, necessary optimality conditions with an Euler inclusion taking an explicit multiplier form are derived for certain cases.