Results 1–10 of 16
Clarke subgradients of stratifiable functions
SIAM J. Optim., 2006
Abstract

Cited by 44 (15 self)
We establish the following result: if the graph of a (nonsmooth) real extended-valued function f: R^n → R ∪ {+∞} is closed and admits a Whitney stratification, then the norm of the gradient of f at x ∈ dom f relative to the stratum containing x bounds from below all norms of Clarke subgradients of f at x. As a consequence, we obtain some Morse–Sard type theorems as well as a nonsmooth Kurdyka–Łojasiewicz inequality for functions definable in an arbitrary o-minimal structure.
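Written out, the quoted lower bound reads as follows (the notation for the stratum and the restricted gradient is assumed here for illustration; it is not fixed in the abstract):

```latex
% If M_x denotes the stratum of the Whitney stratification containing x,
% and \nabla_{M_x} f(x) the gradient of the restriction f|_{M_x}, then
\[
  \bigl\| \nabla_{M_x} f(x) \bigr\| \;\le\; \| v \|
  \qquad \text{for every Clarke subgradient } v \in \partial^{\circ} f(x),
  \quad x \in \operatorname{dom} f .
\]
```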
Semismoothness of the maximum eigenvalue function of a symmetric tensor and its application
, 2011
Abstract

Cited by 6 (5 self)
In this paper, we examine the maximum eigenvalue function of an even-order real symmetric tensor. Using variational analysis techniques, we first show that the maximum eigenvalue function is a continuous and convex function on the symmetric tensor space. In particular, we obtain the convex subdifferential formula for the maximum eigenvalue function. Next, for an mth-order n-dimensional symmetric tensor A, we show that the maximum eigenvalue function is always ρth-order semismooth at A for some rational number ρ > 0. In the special case when the geometric multiplicity is one, we show that ρ can be set as 1/((2m−1)n). A sufficient condition ensuring the strong semismoothness of the maximum eigenvalue function is also provided. As an application, we propose a generalized Newton method to solve the space tensor conic linear programming problem which arises in the medical imaging area. The local convergence rate of this method is established using the semismooth property of the maximum eigenvalue function.
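For the matrix case (m = 2), the convexity claimed above reduces to the classical convexity of λ_max on symmetric matrices. A minimal numerical sketch of the convexity inequality (random data; all names here are assumptions, not the paper's notation):

```python
import numpy as np

# Sketch for the matrix case m = 2: lambda_max is convex on the space of
# symmetric matrices, a special case of the convexity result above.
rng = np.random.default_rng(0)

def random_symmetric(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2

def lam_max(A):
    # largest eigenvalue of a symmetric matrix (eigvalsh returns ascending order)
    return np.linalg.eigvalsh(A)[-1]

A, B = random_symmetric(5), random_symmetric(5)
t = 0.3
lhs = lam_max(t * A + (1 - t) * B)
rhs = t * lam_max(A) + (1 - t) * lam_max(B)
print(lhs <= rhs + 1e-12)  # convexity: lam_max(tA + (1-t)B) <= t*... + (1-t)*...
```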
Projection methods in conic optimization
Abstract

Cited by 5 (0 self)
Projection onto positive semidefinite matrices: Consider the space S^n of symmetric n-by-n matrices, equipped with the norm associated to the usual inner product 〈X, Y〉 = ...
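Under the trace inner product, the projection onto the positive semidefinite cone has a well-known closed form: diagonalize and clip the negative eigenvalues to zero. A minimal sketch of that standard fact (not taken from the cited work):

```python
import numpy as np

def project_psd(X):
    """Projection of a symmetric matrix onto the PSD cone in the Frobenius
    norm induced by <X, Y> = tr(XY): clip negative eigenvalues to zero."""
    w, V = np.linalg.eigh((X + X.T) / 2)       # symmetrize, then diagonalize
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

X = np.array([[1.0, 0.0],
              [0.0, -2.0]])
P = project_psd(X)                             # clips the -2 eigenvalue to 0
print(P)
```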
Continuity of setvalued maps revisited in the light of tame geometry
Abstract

Cited by 3 (1 self)
Continuity of set-valued maps is hereby revisited: after recalling some basic concepts of variational analysis and a short description of the state of the art, we obtain as a by-product two Sard-type results concerning local minima of scalar- and vector-valued functions. Our main result, though, is inscribed in the framework of tame geometry, stating that a closed-valued semialgebraic set-valued map is almost everywhere continuous (in both the topological and the measure-theoretic sense). The result (depending on stratification techniques) holds true in the more general setting of o-minimal (or tame) set-valued maps. Some applications are briefly discussed at the end. Key words: set-valued map, (strict, outer, inner) continuity, Aubin property, semialgebraic, piecewise polyhedral, tame optimization.
Numerical Algorithms for a Class of Matrix Norm Approximation Problems
, 2012
Abstract

Cited by 2 (2 self)
This thesis focuses on designing robust and efficient algorithms for a class of matrix norm approximation (MNA) problems, which seek an affine combination of given matrices having the minimal spectral norm subject to prescribed linear equality and inequality constraints. These problems arise often in numerical algebra, ...
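As a toy illustration only (not the thesis's algorithm, and without the linear constraints), the unconstrained MNA objective can be probed with a generic derivative-free solver; the matrices and solver choice here are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch: minimize the spectral norm of the affine combination
# A0 + x1*A1 + x2*A2 over x, with random data and no constraints.
rng = np.random.default_rng(1)
A0, A1, A2 = (rng.standard_normal((3, 3)) for _ in range(3))

def spec_norm(x):
    return np.linalg.norm(A0 + x[0] * A1 + x[1] * A2, ord=2)

# Nelder-Mead tolerates the nonsmoothness of the spectral norm on a toy problem.
res = minimize(spec_norm, x0=np.zeros(2), method="Nelder-Mead")
print(res.fun <= spec_norm(np.zeros(2)))  # no worse than the zero combination
```

A serious method, as the thesis indicates, would exploit the structure of the spectral norm rather than treat it as a black box.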
Generic identifiability and second-order sufficiency in tame convex optimization
, 2009
Abstract

Cited by 1 (1 self)
We consider linear optimization over a fixed compact convex feasible region that is semialgebraic (or, more generally, “tame”). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is “partly smooth”, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions hold, guaranteeing smooth behavior of the optimal solution under small perturbations to the objective.
Derivatives of the diameter and the area of a connected component of the pseudospectrum
Abstract
This Article is brought to you for free and open access by Wyoming Scholars Repository. It has been accepted for inclusion in Electronic Journal of Linear Algebra by an authorized administrator of Wyoming Scholars Repository. For more information, please contact scholcom@uwyo.edu.
SEMIALGEBRAIC TECHNIQUES IN VARIATIONAL ANALYSIS: PSEUDOSPECTRA, ROBUSTNESS, GENERIC CONTINUITY, AND
, 2009
Abstract
 Add to MetaCart
Variational Analysis is the modern theory of nonsmooth, nonconvex analysis built on the theory of convex and smooth optimization. While the general theory needs to handle pathologies, the functions and sets appearing in applications are typically structured. Semialgebraic functions and sets eliminate much of the pathological behavior, and still form a broad class of constructs appearing in practice, making them an ideal setting for practical variational analysis. Chapter 1 is an introduction to the thesis and Chapter 2 reviews preliminaries. Chapters 3 to 5 describe various semialgebraic techniques in variational analysis. Chapter 3 gives equivalent conditions for the Lipschitz continuity of pseudospectra in the set-valued sense. As corollaries, we give formulas for the Lipschitz constants of the pseudospectra, the pseudospectral abscissa, and the pseudospectral radius. We also study critical points of the resolvent function. Chapter 4 studies robust solutions of an optimization problem using the “robust regularization” of a function, and proves that it is Lipschitz at a point for all small ε > 0 for nice functions, and in particular semialgebraic functions. This result generalizes some of the ideas ...
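For context, the ε-pseudospectrum discussed in Chapter 3 is the set {z ∈ C : σ_min(zI − A) ≤ ε}; a minimal grid-sampling sketch of that definition (the example matrix and grid are assumptions):

```python
import numpy as np

# Grid sketch: z lies in the eps-pseudospectrum of A exactly when the
# smallest singular value of z*I - A is at most eps.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])           # Jordan block: a sensitive spectrum
eps = 0.1
grid = np.linspace(-1.0, 1.0, 41)
hits = 0
for re in grid:
    for im in grid:
        z = re + 1j * im
        smin = np.linalg.svd(z * np.eye(2) - A, compute_uv=False)[-1]
        hits += smin <= eps
print(hits > 0)  # the grid meets the pseudospectrum (0 is an eigenvalue of A)
```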
Gradient dynamical systems, tame ...
, 2009
Abstract
These lectures present an introduction to what is nowadays called Tame Optimization, with emphasis on (nonsmooth) Łojasiewicz gradient inequalities and Sard-type theorems. The former topic will be introduced via the asymptotic analysis of dynamical systems of (sub)gradient type; its consequences for algorithmic analysis (the proximal algorithm, gradient-type methods) will also be discussed. The latter topic will be presented as a natural consequence of the structural assumptions made on the function (o-minimality, stratification). Our secondary aim is to provide essential background and material for further research. During the lectures, some open problems will be mentioned.