Linear precoding in cooperative MIMO cellular networks with limited coordination clusters. IEEE J. Sel. Areas Commun., 2010.
On the closedness of the linear image of a closed convex cone, 1992. doi:10.1287/moor.1060.0242
Exact regularization of convex programs, 2007. Cited by 26 (1 self).

Abstract: The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 266–273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas [Math. Programming, 9 (1975), pp. 87–99] and by Bertsekas, Nedić, and Ozdaglar [Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003]. When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a “weak sharp minimum” is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the ℓ1 regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of ℓ1 regularization in finding sparse solutions.
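The phenomenon the abstract describes can be illustrated with a minimal one-dimensional sketch (a hypothetical toy, not the paper's algorithm or experiments): a convex objective `f` with a whole interval of minimizers, plus a small ℓ1 term that selects one of them. Because the regularized solution stays inside the original solution set for every small `delta > 0`, the regularization is exact in the paper's sense.

```python
# Toy illustration of exact regularization (assumed names, not the paper's code).

def f(x):
    # Flat-bottomed convex function: minimized exactly on the interval [-1, 1].
    return max(0.0, abs(x) - 1.0)

def argmin_on_grid(obj, lo=-3.0, hi=3.0, steps=60001):
    # Brute-force minimizer on a fine grid; adequate for this 1-D sketch.
    best_x, best_v = lo, obj(lo)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = obj(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def regularized_argmin(delta):
    # Minimize f(x) + delta * |x|, the l1-regularized problem.
    return argmin_on_grid(lambda x: f(x) + delta * abs(x))

# For every small delta > 0 the regularized minimizer (x = 0) is also a
# minimizer of the original problem, since f(0) = 0: exact regularization.
for delta in (0.5, 0.1, 0.01):
    x_star = regularized_argmin(delta)
    assert abs(x_star) <= 1.0 + 1e-9   # lies in the original solution set
    assert abs(f(x_star)) < 1e-9
```

The ℓ1 term here plays the same selecting role as in the paper's experiments: among the many minimizers of `f`, it picks out the one of smallest ℓ1 norm.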
Advances in convex optimization: Conic programming. In Proceedings of the International Congress of Mathematicians, 2007. Cited by 23 (0 self).

Abstract: During the last two decades, major developments in convex optimization have focused on conic programming, primarily on linear, conic quadratic, and semidefinite optimization. Conic programming allows one to reveal the rich structure usually possessed by a convex program and to exploit this structure in order to process the program efficiently. In the paper, we overview the major components of the resulting theory (conic duality and primal-dual interior-point polynomial-time algorithms), outline the extremely rich “expressive abilities” of conic quadratic and semidefinite programming, and discuss a number of instructive applications.
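One structural fact that conic duality rests on, for the conic quadratic case, is that the second-order (Lorentz) cone is self-dual: the inner product of any two cone members is nonnegative. A small spot-check of this property (an illustrative sketch with assumed helper names, not from the survey):

```python
import math
import random

def in_soc(v):
    # Second-order (Lorentz) cone: v = (t, x) with t >= ||x||_2.
    t, x = v[0], v[1:]
    return t >= math.sqrt(sum(xi * xi for xi in x)) - 1e-12

def random_soc_point(dim, rng):
    # Sample a point in the cone: pick x, then lift t above ||x||_2.
    x = [rng.uniform(-1.0, 1.0) for _ in range(dim - 1)]
    t = math.sqrt(sum(xi * xi for xi in x)) + rng.uniform(0.0, 1.0)
    return [t] + x

rng = random.Random(0)
# Self-duality implies <u, v> >= 0 for all u, v in the cone; by the
# Cauchy-Schwarz inequality, t_u*t_v + x_u.x_v >= ||x_u||*||x_v|| - ||x_u||*||x_v|| = 0.
for _ in range(1000):
    u = random_soc_point(4, rng)
    v = random_soc_point(4, rng)
    assert sum(a * b for a, b in zip(u, v)) >= -1e-9
```

Self-duality is what makes the primal and dual of a conic quadratic program live over the same cone, one ingredient of the symmetric primal-dual interior-point methods the survey describes.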
A New Condition Measure, Pre-Conditioners, and Relations between Different Measures of Conditioning for Conic Linear Systems, 2001. Cited by 22 (5 self).

Abstract: In recent years, a body of research into “condition numbers” for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be relevant in studying the efficiency of algorithms (including interior-point algorithms) for convex optimization as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FP_d): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, show its implications for problem geometry and algorithm complexity, and demonstrate that its value is independent of the specific data representation of (FP_d). We then prove certain relations among a variety of condition measures for (FP_d), including the new measure and the condition number C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we introduce the notion of a “pre-conditioner” for (FP_d) which results in an equivalent formulation (FP_d̃) of (FP_d) with a better condition number C(d̃). We characterize the best such pre-conditioner and provide an algorithm and complexity analysis for constructing an equivalent data instance d̃ whose condition number C(d̃) is within a known factor of the best possible.
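The pre-conditioning idea, transforming the data into an equivalent instance with a better condition number, can be sketched with the classical spectral condition number of a matrix (a stand-in for the data condition number C(d) of the paper, which is defined differently; all names below are illustrative):

```python
import math

def cond_2x2(a, b, c, d):
    # Spectral condition number of [[a, b], [c, d]] via its singular values:
    # the eigenvalues of A^T A solve a quadratic, so no linear-algebra
    # library is needed for this 2x2 sketch.
    p = a * a + b * b + c * c + d * d            # trace(A^T A)
    q = (a * d - b * c) ** 2                     # det(A^T A)
    disc = math.sqrt(max(p * p - 4.0 * q, 0.0))
    smax = math.sqrt((p + disc) / 2.0)
    smin = math.sqrt((p - disc) / 2.0)
    return smax / smin

# Badly scaled data: condition number 1000.
assert cond_2x2(1.0, 0.0, 0.0, 1000.0) > 999.0
# Row scaling by D = diag(1, 1/1000) acts as a pre-conditioner: the
# equivalent system (DA)x = Db has the same solutions but condition number 1.
assert abs(cond_2x2(1.0, 0.0, 0.0, 1.0) - 1.0) < 1e-12
```

The paper's contribution goes further than this toy: it characterizes the best such transformation for a conic system and bounds how close an efficiently computable pre-conditioner gets to it.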
The taxation of capital returns in overlapping generations economies without financial assets, 2008.
Computing coresets and approximate smallest enclosing hyperspheres in high dimensions. In the Proceedings of ALENEX’03. Cited by 21 (2 self).

Abstract: We study the minimum enclosing ball (MEB) problem for sets of points or balls in high dimensions. Using techniques of second-order cone programming and “coresets”, we have developed (1+ɛ)-approximation algorithms that perform well in practice, especially for very high dimensions, in addition to having provable guarantees. We prove the existence of coresets of size O(1/ɛ), improving the previous bound of O(1/ɛ²), and we study empirically how the coreset size grows with dimension. We show that our algorithm, which is simple to implement, results in fast computation of nearly optimal solutions for point sets in much higher dimension than previously computable using exact techniques.
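The coreset approach the abstract builds on can be sketched with the simple Bădoiu–Clarkson style iteration (an illustrative baseline in the spirit of coreset methods, not the paper's algorithm: this variant only gives the older O(1/ɛ²) coreset bound that the paper improves to O(1/ɛ)). The idea: repeatedly pull the center toward the current farthest point.

```python
import math
import random

def approx_meb(points, eps):
    # Badoiu-Clarkson style (1+eps)-approximate minimum enclosing ball:
    # after ceil(1/eps^2) pulls toward the farthest point, the returned
    # radius is within a (1+eps) factor of optimal.
    center = list(points[0])
    iterations = int(math.ceil(1.0 / (eps * eps)))
    for i in range(1, iterations + 1):
        far = max(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, center)))
        center = [c + (f - c) / (i + 1.0) for c, f in zip(center, far)]
    radius = max(math.sqrt(sum((a - b) ** 2 for a, b in zip(p, center)))
                 for p in points)
    return center, radius

# Demo: 200 random points on the unit sphere in R^10, so the optimal
# radius is at most 1 (center the ball at the origin).
rng = random.Random(1)
pts = []
for _ in range(200):
    v = [rng.gauss(0.0, 1.0) for _ in range(10)]
    n = math.sqrt(sum(x * x for x in v))
    pts.append([x / n for x in v])
center, radius = approx_meb(pts, 0.1)
assert 0.5 < radius <= 1.2   # loose sanity bound around (1+eps)*r_opt
```

Only the points ever selected as "farthest" matter to the final ball; they form the coreset, which is what makes the method practical in very high dimensions.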
Gas models and three difficult objectives, 2006.