Results 1-10 of 14
A Sparse Signal Reconstruction Perspective for Source Localization With Sensor Arrays
, 2005
Cited by 223 (6 self)
Abstract:
We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the ℓ1-norm. A number of recent theoretical results on the sparsifying properties of ℓ1 penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Cramér–Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution, improved robustness to noise, limitations in data quantity, and correlation of the sources, as well as not requiring an accurate initialization.
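The ℓ1-penalized sparse reconstruction this abstract describes can be sketched with a generic solver. The snippet below is a minimal stand-in (iterative soft-thresholding for the lasso objective, not the paper's SOC-programming formulation, and with a hypothetical toy dictionary rather than an array manifold) showing how an ℓ1 penalty recovers a sparse coefficient vector from an overcomplete basis:

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding.
    A generic l1 solver standing in for the paper's SOC-programming approach."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))              # toy overcomplete dictionary
x_true = np.zeros(60)
x_true[5], x_true[40] = 1.0, -0.7              # two hypothetical "sources"
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ista(A, y, lam=0.05)                   # sparse estimate concentrated near indices 5 and 40
```

The ℓ1 penalty drives most coefficients exactly to zero, which is what produces the sharp, super-resolving spatial spectra the abstract refers to.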
Monotonicity of primal-dual interior-point algorithms for semidefinite programming problems
, 1998
Cited by 214 (35 self)
Abstract:
We present primal-dual interior-point algorithms with polynomial iteration bounds to find approximate solutions of semidefinite programming problems. Our algorithms achieve the current best iteration bounds and, in every iteration of our algorithms, primal and dual objective values are strictly improved.
Semidefinite optimization
 Acta Numerica
, 2001
Cited by 149 (2 self)
Abstract:
Optimization problems in which the variable is not a vector but a symmetric matrix which is required to be positive semidefinite have been intensely studied in the last ten years. Part of the reason for the interest stems from the applicability of such problems to such diverse areas as designing the strongest column, checking the stability of a differential inclusion, and obtaining tight bounds for hard combinatorial optimization problems. Part also derives from great advances in our ability to solve such problems efficiently in theory and in practice (perhaps "or" would be more appropriate: the most effective computational methods are not always provably efficient in theory, and vice versa). Here we describe this class of optimization problems, give a number of examples demonstrating its significance, outline its duality theory, and discuss algorithms for solving such problems.
Duality in Vector Optimization
 Math. Programming
, 1983
"... the connections between semidefinite ..."
On Lagrangian relaxation of quadratic matrix constraints
 SIAM J. MATRIX ANAL. APPL
, 2000
Cited by 50 (17 self)
Abstract:
Quadratically constrained quadratic programs (QQPs) play an important modeling role for many diverse problems. These problems are in general NP-hard and numerically intractable. Lagrangian relaxations often provide good approximate solutions to these hard problems. Such relaxations are equivalent to semidefinite programming relaxations. For several special cases of QQP, e.g., convex programs and trust region subproblems, the Lagrangian relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even the QQP with two convex constraints but a nonconvex objective. In this paper we consider a certain QQP where the quadratic constraints correspond to the matrix orthogonality condition XX^T = I. For this problem we show that the Lagrangian dual based on relaxing the constraints XX^T = I and the seemingly redundant constraints X^T X = I has a zero duality gap. This result has natural applications to quadratic assignment and graph partitioning problems, as well as the problem of minimizing the weighted sum of the largest eigenvalues of a matrix. We also show that the technique of relaxing quadratic matrix constraints can be used to obtain a strengthened semidefinite relaxation for the max-cut problem.
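One concrete instance of a QQP with the matrix orthogonality constraint X^T X = I is the classical orthogonal Procrustes problem, which happens to be solvable in closed form via an SVD. The sketch below (an illustrative aside, not taken from the paper) shows the flavor of such orthogonality-constrained problems:

```python
import numpy as np

def procrustes(A, B):
    """Solve min_X ||A X - B||_F subject to X^T X = I (orthogonal Procrustes).
    A classical QQP with the matrix orthogonality constraint; the optimum is
    available in closed form from an SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
Q_true, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
B = A @ Q_true                                          # data generated by Q_true
X = procrustes(A, B)                                    # orthogonal X with A X = B
```

The existence of such an exact solution for this special case is consistent with the zero-duality-gap result the abstract states for the orthogonally constrained QQP.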
Cones Of Matrices And Successive Convex Relaxations Of Nonconvex Sets
, 2000
Cited by 49 (19 self)
Abstract:
Let F be a compact subset of the n-dimensional Euclidean space R^n represented by (finitely or infinitely many) quadratic inequalities. We propose two methods, one based on successive semidefinite programming (SDP) relaxations and the other on successive linear programming (LP) relaxations. Each of our methods generates a sequence of compact convex subsets C_k (k = 1, 2, ...) of R^n such that (a) the convex hull of F ⊆ C_{k+1} ⊆ C_k (monotonicity), and (b) ∩_{k=1}^∞ C_k = the convex hull of F (asymptotic convergence). Our methods are extensions of the corresponding Lovász–Schrijver lift-and-project procedures with the use of SDP or LP relaxation applied to general quadratic optimization problems (QOPs) with infinitely many quadratic inequality constraints. Utilizing descriptions of sets based on cones of matrices and their duals, we establish the exact equivalence of the SDP relaxation and the semi-infinite convex QOP relaxation proposed originally by Fujie and Kojima. Using th...
A Study of Search Directions in Primal-Dual Interior-Point Methods for Semidefinite Programming
, 1998
Cited by 35 (1 self)
Abstract:
We discuss several different search directions which can be used in primal-dual interior-point methods for semidefinite programming problems and investigate their theoretical properties, including scale invariance, primal-dual symmetry, and whether they always generate well-defined directions. Among the directions satisfying all but at most two of these desirable properties are the Alizadeh–Haeberly–Overton, Helmberg–Rendl–Vanderbei–Wolkowicz/Kojima–Shindoh–Hara/Monteiro, Nesterov–Todd, Gu, and Toh directions, as well as directions we will call the MTW and Half directions. The first five of these appear to be the best in our limited computational testing also.
A parallel primal-dual interior-point method for semidefinite programs using positive definite matrix completion
 PARALLEL COMPUTING
, 2003
Cited by 19 (12 self)
Abstract:
A parallel computational method SDPARA-C is presented for SDPs (semidefinite programs). It combines two methods, SDPARA and SDPA-C, proposed by the authors, who developed the software package SDPA. SDPARA is a parallel implementation of SDPA featuring parallel computation of the elements of the Schur complement equation system and a parallel Cholesky factorization of its coefficient matrix. SDPARA can effectively solve SDPs with a large number of equality constraints; however, it does not solve SDPs with a large-scale matrix variable with similar effectiveness. SDPA-C is a primal-dual interior-point method using the positive definite matrix completion technique by Fukuda et al., and it performs effectively on SDPs with a large-scale matrix variable, but not on those with a large number of equality constraints. SDPARA-C benefits from the strong performance of each of the two methods. Furthermore, SDPARA-C is designed to attain high scalability in most of the expensive computations involved in the primal-dual interior-point method. Numerical experiments with the three parallel software packages SDPARA-C, SDPARA, and PDSDP by Benson show that SDPARA-C efficiently solves SDPs with a large-scale matrix variable as well as a large number of equality constraints with a small amount of memory.
Interior-point methods for optimization
, 2008
Cited by 18 (0 self)
Abstract:
This article describes the current state of the art of interior-point methods (IPMs) for convex, conic, and general nonlinear optimization. We discuss the theory, outline the algorithms, and comment on the applicability of this class of methods, which have revolutionized the field over the last twenty years.
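The core barrier idea behind IPMs can be sketched in a few lines. The following toy example (a minimal illustration under simplifying assumptions, not any of the algorithms the survey covers) minimizes a small LP by Newton steps on a log-barrier function while the barrier parameter grows:

```python
import numpy as np

def barrier_lp(c, A, b, x0, t0=1.0, mu=10.0, outer=8, inner=50):
    """Log-barrier interior-point sketch for: minimize c^T x s.t. A x <= b.
    Newton-minimize t*c^T x - sum(log(b - A x)), then increase t; after each
    centering the duality gap is about (number of constraints) / t."""
    def phi(x, t):
        s = b - A @ x
        return t * (c @ x) - np.log(s).sum() if np.all(s > 0) else np.inf

    x, t = np.asarray(x0, float), t0
    for _ in range(outer):
        for _ in range(inner):
            s = b - A @ x
            g = t * c + A.T @ (1.0 / s)                    # gradient of phi
            H = A.T @ ((1.0 / s**2)[:, None] * A)          # Hessian of phi
            dx = np.linalg.solve(H, -g)                    # Newton direction
            step = 1.0
            while phi(x + step * dx, t) > phi(x, t) and step > 1e-12:
                step *= 0.5            # backtrack: stay strictly feasible, decrease phi
            x = x + step * dx
        t *= mu                        # tighten the barrier
    return x

# toy LP: minimize x1 + x2 subject to x1 >= 0, x2 >= 0, x1 + x2 <= 1
c = np.array([1.0, 1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
x_opt = barrier_lp(c, A, b, x0=np.array([0.25, 0.25]))     # approaches the optimum (0, 0)
```

The iterates stay strictly inside the feasible region and track the central path toward the vertex solution, which is the defining behavior of this class of methods.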
Fastest mixing Markov chain on graphs with symmetries
 SIAM J. OPTIM
, 2007
Cited by 15 (1 self)
Abstract:
We show how to exploit symmetries of a graph to efficiently compute the fastest mixing Markov chain on the graph (i.e., to find the transition probabilities on the edges that minimize the second-largest eigenvalue modulus of the transition probability matrix). Exploiting symmetry can lead to a significant reduction in both the number of variables and the size of the matrices in the corresponding semidefinite program, thus enabling the numerical solution of large-scale instances that are otherwise computationally infeasible. We obtain analytic or semi-analytic results for particular classes of graphs, such as edge-transitive and distance-transitive graphs. We describe two general approaches for symmetry exploitation, based on orbit theory and block-diagonalization, respectively. We also establish the connection between these two approaches.
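For an edge-transitive graph, symmetry collapses the fastest-mixing-chain problem to a single scalar edge probability, which on a tiny instance can even be found by direct search. The sketch below (illustrative only; the paper's SDP machinery handles far larger symmetric graphs) does this for the 5-cycle:

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus of a symmetric transition matrix."""
    w = np.sort(np.abs(np.linalg.eigvalsh(P)))
    return w[-2]

def cycle_chain(n, p):
    """Symmetric random walk on the n-cycle (n >= 3): probability p on each
    incident edge, 1 - 2p of holding. Edge-transitivity means one scalar p
    parameterizes the whole chain."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i + 1) % n] = p
        P[i, (i - 1) % n] = p
        P[i, i] = 1.0 - 2.0 * p
    return P

# one-variable search over the common edge probability
n = 5
grid = np.linspace(0.0, 0.5, 501)
best_p = min(grid, key=lambda p: slem(cycle_chain(n, p)))  # ~0.4 for the 5-cycle
```

The optimum balances the second-largest and most-negative eigenvalues of the transition matrix, which for the 5-cycle gives p = 0.4; this is the kind of analytic answer the abstract says symmetry makes reachable.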