Results 1–10 of 146
Robust Portfolio Selection Problems
, 2003
Cited by 160 (8 self)
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce “uncertainty structures” for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures employed to estimate the market parameters. Finally, we demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
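The ellipsoidal flavor of such an uncertainty structure can be illustrated with a minimal sketch (the weights, estimates, and the helper name `worst_case_return` are hypothetical, not from the paper): for mean-return uncertainty {μ : ‖μ − μ̂‖₂ ≤ δ}, the worst-case portfolio return has a closed form, and it is exactly this norm term that makes the robust problem a second-order cone program.

```python
import math
import random

def worst_case_return(w, mu_hat, delta):
    """Worst-case portfolio return over the ellipsoidal uncertainty set
    {mu : ||mu - mu_hat||_2 <= delta}.  By Cauchy-Schwarz the minimum is
    attained at mu = mu_hat - delta * w / ||w||_2, giving the closed form
    mu_hat . w - delta * ||w||_2 -- the second-order-cone term."""
    norm_w = math.sqrt(sum(x * x for x in w))
    return sum(wi * mi for wi, mi in zip(w, mu_hat)) - delta * norm_w

# Illustrative numbers: check the closed form is a valid lower bound
# against random perturbations on the boundary of the uncertainty ball.
random.seed(0)
w = [0.5, 0.3, 0.2]
mu_hat = [0.08, 0.05, 0.03]
delta = 0.02
wc = worst_case_return(w, mu_hat, delta)
for _ in range(10000):
    g = [random.gauss(0, 1) for _ in range(3)]
    norm_g = math.sqrt(sum(x * x for x in g))
    mu = [m + delta * gi / norm_g for m, gi in zip(mu_hat, g)]
    assert sum(wi * mi for wi, mi in zip(w, mu)) >= wc - 1e-12
```

The sketch only verifies the worst-case formula; the paper's actual recipe additionally handles covariance uncertainty and solves the resulting SOCP with a conic solver.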
Zero Forcing Precoding and Generalized Inverses
Cited by 52 (0 self)
We consider the problem of linear zero forcing precoding design, and discuss its relation to the theory of generalized inverses in linear algebra. Special attention is given to a specific generalized inverse known as the pseudoinverse. We begin with the standard design under the assumption of a total power constraint and prove that precoders based on the pseudoinverse are optimal in this setting. Then, we proceed to examine individual per-antenna power constraints. In this case, the pseudoinverse is not necessarily the optimal generalized inverse. In fact, finding the optimal inverse is nontrivial and depends on the specific performance measure. We address two common criteria, fairness and throughput, and show that the optimal matrices may be found using standard convex optimization methods. We demonstrate the improved performance offered by our approach using computer simulations.
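A minimal sketch of the pseudoinverse precoder in the total-power setting (illustrative channel values; `zf_precoder` and the helper names are hypothetical, not the paper's code): for a fat, full-row-rank channel H, the right pseudoinverse P = Hᵀ(HHᵀ)⁻¹ satisfies HP = I, so each user receives its own symbol with zero inter-user interference.

```python
def matmul(A, B):
    # Plain-Python matrix product, enough for this tiny example.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2x2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def zf_precoder(H):
    """Right pseudoinverse P = H^T (H H^T)^{-1} of a fat, full-row-rank
    channel H (here 2 users, up to 3 transmit antennas): H @ P = I,
    i.e. the precoder zero-forces all inter-user interference."""
    Ht = transpose(H)
    return matmul(Ht, inv2x2(matmul(H, Ht)))

H = [[1.0, 0.2, -0.5],
     [0.3, 1.0, 0.4]]
P = zf_precoder(H)
HP = matmul(H, P)
assert all(abs(HP[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
```

Under per-antenna power constraints the paper shows this particular generalized inverse need not be optimal; the sketch only demonstrates the zero-forcing property itself.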
Robust Solutions of Uncertain Quadratic and Conic-Quadratic Problems
 Math. Program.
, 2001
Cited by 52 (8 self)
We consider a conic-quadratic (and in particular a quadratically constrained) optimization problem with uncertain data, known only to reside in some uncertainty set U. The robust counterpart of such a problem usually leads to an NP-hard semidefinite problem; this is the case, for example, when U is given as an intersection of ellipsoids, or as an n-dimensional box. For these cases we build a single, explicit semidefinite program, which approximates the NP-hard robust counterpart, and we derive an estimate on the quality of the approximation, which is independent of the dimensions of the underlying conic-quadratic problem.
Semidefinite relaxation for detection of 16-QAM signaling in MIMO channels
 IEEE Signal Processing Letters
, 2005
Cited by 39 (1 self)
We develop a computationally efficient approximation of the maximum likelihood (ML) detector for 16 quadrature amplitude modulation (16-QAM) in multiple-input multiple-output (MIMO) systems. The detector is based on a convex relaxation of the ML problem. The resulting optimization is a semidefinite program that can be solved in polynomial time with respect to the number of inputs in the system. Simulation results in a random MIMO system show that the proposed algorithm outperforms the conventional decorrelator detector by about 2.5 dB at high signal-to-noise ratios. Index Terms—Maximum likelihood detection, MIMO systems, semidefinite relaxation.
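The problem the semidefinite relaxation approximates can be shown with a brute-force sketch (toy real-valued 2-input system; `ml_detect` and the channel values are illustrative, not the paper's algorithm): ML detection is the search argmin over the symbol alphabet of ‖y − Hs‖², which is only enumerable here because there are 4² = 16 candidates.

```python
import itertools

ALPHABET = [-3, -1, 1, 3]  # per-dimension 16-QAM amplitude levels (real model)

def ml_detect(H, y):
    """Exhaustive maximum-likelihood detection: argmin_s ||y - H s||^2
    over all symbol vectors s.  The SDP relaxation in the paper
    approximates exactly this combinatorial search in polynomial time;
    brute force works here only because 4^2 = 16 candidates exist."""
    best, best_cost = None, float("inf")
    for s in itertools.product(ALPHABET, repeat=len(H[0])):
        r = [sum(hij * sj for hij, sj in zip(row, s)) for row in H]
        cost = sum((yi - ri) ** 2 for yi, ri in zip(y, r))
        if cost < best_cost:
            best, best_cost = list(s), cost
    return best

H = [[1.0, 0.4], [0.3, 1.0]]
x = [3, -1]                                                     # transmitted symbols
y = [sum(hij * xj for hij, xj in zip(row, x)) for row in H]     # noiseless receive
assert ml_detect(H, y) == x                                     # ML recovers the input at zero noise
```

The exponential blow-up of this enumeration in the number of inputs is precisely what motivates the polynomial-time semidefinite relaxation.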
Living on the edge: A geometric theory of phase transitions in convex optimization
, 2013
Cited by 36 (4 self)
Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the ℓ1 minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.
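The statistical dimension admits a simple Monte Carlo sketch (the function name and parameters are illustrative, not from the paper): δ(C) = E‖Π_C(g)‖² for a standard Gaussian g, and for the nonnegative orthant in Rⁿ the projection just clips negative coordinates, so by symmetry δ(C) = n/2.

```python
import random

def stat_dim_orthant(n, trials=20000, seed=1):
    """Monte Carlo estimate of the statistical dimension
    delta(C) = E ||Pi_C(g)||^2 for C = nonnegative orthant in R^n.
    Projection onto the orthant clips negatives to zero, and each
    coordinate contributes E[max(g_i, 0)^2] = 1/2, so delta(C) = n/2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += sum(max(rng.gauss(0, 1), 0.0) ** 2 for _ in range(n))
    return total / trials

est = stat_dim_orthant(8)
assert abs(est - 4.0) < 0.2  # theory predicts n/2 = 4
```

For a d-dimensional subspace the same expectation gives exactly d, which is the sense in which the statistical dimension canonically extends subspace dimension to convex cones.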
Robust Convex Quadratically Constrained Programs
 Mathematical Programming
, 2002
Cited by 36 (2 self)
In this paper we study robust convex quadratically constrained programs, a subset of the class of robust convex programs introduced by Ben-Tal and Nemirovski [4]. Unlike [4], our focus in this paper is to identify uncertainty structures that allow the corresponding robust quadratically constrained programs to be reformulated as second-order cone programs. We propose three classes of uncertainty sets that satisfy this criterion and present examples where these classes of uncertainty sets are natural.
Selected topics in robust convex optimization
 Math. Prog. B, this issue
, 2007
Cited by 35 (2 self)
Robust Optimization (RO) is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-but-bounded” data perturbations. In this paper, we overview several selected topics in this popular area, specifically: (1) recent extensions of the basic concept of the robust counterpart of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links between RO and traditional chance-constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology in Robust Linear Control.
A Primal-Dual Interior-Point Method for Linear Optimization Based on a New Proximity Function
, 2002
Cited by 35 (9 self)
In this paper we present a generic primal-dual interior-point algorithm for linear optimization in which the search direction depends on a univariate kernel function which is also used as a proximity measure in the analysis of the algorithm. We present some powerful tools for the analysis of the algorithm under the assumption that the kernel function satisfies three easy-to-check and mild conditions (i.e., exponential convexity, superconvexity, and monotonicity of the second derivative). The approach is demonstrated by introducing a new kernel function and showing that the corresponding large-update algorithm improves the iteration complexity by a factor of n^{1/4} when compared with the classical method, which is based on the use of the logarithmic barrier function.
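Two of the conditions in this framework can be spot-checked numerically on the classical logarithmic-barrier kernel ψ(t) = (t² − 1)/2 − ln t (a hypothetical sanity check, not the paper's analysis or its new kernel function): exponential convexity means ψ(√(t₁t₂)) ≤ (ψ(t₁) + ψ(t₂))/2, and ψ''(t) = 1 + 1/t² is monotonically decreasing.

```python
import math
import random

def psi(t):
    """Classical log-barrier kernel psi(t) = (t^2 - 1)/2 - ln(t),
    normalized so psi(1) = 0 and psi'(1) = 0."""
    return (t * t - 1.0) / 2.0 - math.log(t)

def psi2(t):
    """Second derivative psi''(t) = 1 + 1/t^2, decreasing on (0, inf)."""
    return 1.0 + 1.0 / (t * t)

random.seed(2)
for _ in range(1000):
    t1, t2 = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    # Exponential convexity: psi(sqrt(t1*t2)) <= (psi(t1) + psi(t2)) / 2.
    assert psi(math.sqrt(t1 * t2)) <= (psi(t1) + psi(t2)) / 2.0 + 1e-12
    # Monotonicity of the second derivative.
    lo, hi = min(t1, t2), max(t1, t2)
    assert psi2(lo) >= psi2(hi)
```

A numeric check like this is of course no substitute for the paper's proofs; it only illustrates what the three kernel-function conditions assert.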
Bounded-error quantum state identification and exponential separations in communication complexity
 In Proc. of the 38th Symposium on Theory of Computing (STOC)
, 2006
Cited by 29 (16 self)
We consider the problem of bounded-error quantum state identification: given either state α0 or state α1, we are required to output ‘0’, ‘1’, or ‘?’ (“don’t know”), such that conditioned on outputting ‘0’ or ‘1’, our guess is correct with high probability. The goal is to maximize the probability of not outputting ‘?’. We prove a direct product theorem: if we are given two such problems, with optimal probabilities a and b, respectively, and the states in the first problem are pure, then the optimal probability for the joint bounded-error state identification problem is O(ab). Our proof is based on semidefinite programming duality. Using this result, we present two exponential separations in the simultaneous message passing model of communication complexity. First, we describe a relation that can be computed with O(log n) classical bits of communication in the presence of shared randomness, but needs Ω(n^{1/3}) communication if the parties don’t share randomness, even if communication is quantum. This shows the optimality of Yao’s recent exponential simulation of shared-randomness protocols by quantum protocols without shared randomness. Combined with an earlier separation in the other direction due to Bar-Yossef et al., this shows that the quantum SMP model is incomparable with the classical shared-randomness SMP model. Second, we describe a relation that can be computed with O(log n) classical bits of communication in the presence of shared entanglement, but needs Ω((n/log n)^{1/3}) communication if the parties share randomness but no entanglement, even if communication is quantum. This is the first example in communication complexity of a situation where entanglement buys you much more than quantum communication.