Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization
2009
Cited by 38 (1 self)
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
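As a concrete instance of the first-order gradient methods and compressed-sensing application mentioned in this abstract, here is a minimal proximal-gradient (ISTA) sketch for ℓ1-regularized least squares; the problem size, regularization weight, and step-size choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, steps=500):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)             # gradient of the least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))           # underdetermined system, as in compressed sensing
x_true = np.zeros(100)
x_true[:5] = 1.0                             # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.linalg.norm(A @ x_hat - b))         # residual of the recovered solution
```

With step size 1/L the iteration decreases the composite objective monotonically, which is the setting in which the linear-convergence error bounds of this line of work apply.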
Further relaxations of the SDP approach to sensor network localization
 SIAM J. Optim
Cited by 33 (0 self)
Recently, a semidefinite programming (SDP) relaxation approach has been proposed to solve the sensor network localization problem. Although it achieves high accuracy in estimating sensors' locations, the speed of the SDP approach is not satisfactory for practical applications. In this paper we propose methods to further relax the SDP relaxation; more precisely, to decompose the single semidefinite matrix cone into a set of small-size semidefinite matrix cones, which we call the smaller SDP (SSDP) approach. We present two such relaxations, or decompositions; although weaker than the SDP relaxation, they prove to be both efficient and accurate in practical computations. The SSDP approach is much faster than the SDP approach as well as other approaches. We also prove several theoretical properties of the new SSDP relaxations.
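The cone-decomposition idea can be illustrated with plain numpy: every principal submatrix of a positive semidefinite (PSD) matrix is PSD, so requiring only small principal submatrices to lie in small semidefinite cones is a valid but weaker condition (necessary, not sufficient) — the sense in which SSDP relaxes SDP. The block choice below is an illustrative assumption, not the paper's construction.

```python
import numpy as np

def psd(M, tol=1e-9):
    # A symmetric matrix is PSD iff its smallest eigenvalue is >= 0.
    return np.linalg.eigvalsh(M).min() >= -tol

def small_cone_feasible(X, blocks):
    # Relaxed test: require only the listed principal submatrices to be PSD.
    # Necessary for X to be PSD, but not sufficient, which is why the
    # decomposed relaxation is weaker than the full SDP constraint.
    return all(psd(X[np.ix_(idx, idx)]) for idx in blocks)

rng = np.random.default_rng(1)
G = rng.standard_normal((5, 5))
X_psd = G @ G.T                              # genuinely PSD matrix
blocks = [[0, 1, 2], [2, 3, 4]]              # overlapping small cones
print(small_cone_feasible(X_psd, blocks))    # True

# A matrix whose small blocks are PSD need not be PSD overall:
X_bad = np.eye(5)
X_bad[0, 4] = X_bad[4, 0] = 2.0              # breaks PSD, invisible to both blocks
print(small_cone_feasible(X_bad, blocks), psd(X_bad))  # True False
```

The gap exhibited by `X_bad` is exactly the relaxation gap one trades for speed when replacing one large cone by several small ones.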
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
 SIAM Journal On Optimization
2008
Cited by 23 (4 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization
(Robust) Edge-Based Semidefinite Programming Relaxation of Sensor Network Localization
 MATH PROGRAM
Cited by 17 (2 self)
Recently Wang, Zheng, Boyd, and Ye (SIAM J Optim 19:655–673, 2008) proposed a further relaxation of the semidefinite programming (SDP) relaxation of the sensor network localization problem, named edge-based SDP (ESDP). In simulation, the ESDP is solved much faster by interior-point methods than the SDP relaxation, and the solutions found are comparable or better in approximation accuracy. We study some key properties of the ESDP relaxation, showing that, when distances are exact, zero individual trace is not only sufficient but also necessary for a sensor to be correctly positioned by an interior solution. We also show via an example that, when distances are inexact, zero individual trace is insufficient for a sensor to be accurately positioned by an interior solution. We then propose a noise-aware robust version of the ESDP relaxation for which small individual trace is necessary and sufficient for a sensor to be accurately positioned by a certain analytic center solution, assuming the noise level is sufficiently small. For this analytic center solution, the position error for each sensor is shown to be on the order of the square root of its trace. Lastly, we propose a log-barrier penalty coordinate gradient descent method to find such an analytic center solution. In simulation, this method is much faster than interior-point methods for solving the ESDP, and the solutions found are comparable in approximation accuracy. Moreover, the method can distribute its computation over the sensors via local communication, making it practical for positioning and tracking in real time.
Euclidean Distance Matrices and Applications
Cited by 14 (0 self)
Over the past decade, Euclidean distance matrices, or EDMs, have been receiving increased attention for two main reasons. The first reason is that the many applications of EDMs, such as molecular conformation in bioinformatics, dimensionality reduction in machine learning and statistics, and especially …
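The basic object surveyed here can be illustrated numerically: a matrix of squared pairwise distances has zero diagonal, is symmetric and nonnegative, and (by Schoenberg's classical criterion) is an EDM exactly when its doubly centered negative half is positive semidefinite. The random points below are an illustrative assumption, not from the survey.

```python
import numpy as np

# Build a Euclidean distance matrix (squared distances) from point positions.
rng = np.random.default_rng(2)
P = rng.standard_normal((6, 2))              # 6 points in the plane
G = P @ P.T                                  # Gram matrix
d = np.diag(G)
D = d[:, None] + d[None, :] - 2 * G          # D_ij = ||p_i - p_j||^2

# Basic EDM properties: symmetric, zero diagonal, nonnegative entries.
assert np.allclose(D, D.T)
assert np.allclose(np.diag(D), 0)
assert (D >= -1e-12).all()

# Schoenberg's criterion: D is an EDM iff -J D J / 2 is PSD,
# where J = I - (1/n) * ones(n, n) is the centering projector.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J                         # centered Gram matrix
print(np.linalg.eigvalsh(B).min() >= -1e-9)  # True
```

The matrix `B` recovered this way is a centered Gram matrix of rank at most the embedding dimension, which is what EDM-based localization and dimensionality-reduction methods exploit.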
A new graph parameter related to bounded rank positive semidefinite matrix completions
 MATHEMATICAL PROGRAMMING
Structured semidefinite representation of some convex sets
2008
Cited by 8 (6 self)
Linear matrix inequalities (LMIs) have had a major impact on control, but formulating a problem as an LMI is an art. Recently, the beginnings of a theory have emerged of which problems are in fact expressible as LMIs. For optimization purposes it can also be useful to have “lifts” that are expressible as LMIs. We show here that this is a much less restrictive condition and give methods for actually constructing lifts and their LMI representations.
Universal Rigidity and Edge Sparsification for Sensor Network Localization
2009
Cited by 8 (1 self)
Owing to their high accuracy and ease of formulation, there has been great interest in recent years in applying convex optimization techniques, particularly semidefinite programming (SDP) relaxation, to tackle the sensor network localization problem. However, a drawback of such techniques is that the resulting convex program is often expensive to solve. In order to speed up computation, various edge sparsification heuristics have been proposed, whose aim is to reduce the number of edges in the input graph. Although these heuristics do reduce the size of the convex program, and hence make it faster to solve, they are often ad hoc in nature and do not preserve the localization properties of the input. As such, one often has to face a tradeoff between solution accuracy and computational effort. In this paper we propose a novel edge sparsification heuristic that provably preserves the localization properties of the original input. At the heart of our heuristic is a graph decomposition procedure, which allows us to identify certain sparse, generically universally rigid subgraphs of the input graph. Our computational results show that the proposed approach can significantly reduce the computational and memory complexities of SDP-based algorithms for solving the sensor network localization problem. Moreover, it compares favorably with existing speedup approaches, both in terms of accuracy and solution time.
Determining the Robot-to-Robot 3D Relative Pose Using Combinations of Range and Bearing Measurements (Part II)
2011
Cited by 8 (1 self)
In this paper, we address the problem of motion-induced 3D robot-to-robot extrinsic calibration based on different combinations of inter-robot measurements (i.e., distance and/or bearing observations) and ego-motion estimates, recorded across multiple time steps. In particular, we focus on solving minimal problems, where the unknown 6-degree-of-freedom transformation between two robots is determined based on the minimum number of measurements necessary for finding a discrete set of solutions. In our previous work [1], we have shown that only 14 base systems need to be solved, and provided closed-form solutions for three of them. This paper considers the remaining systems and provides closed-form solutions to most of them, while for some of the most challenging problems, we introduce efficient symbolic-numerical solution methods. Finally, we evaluate the performance of our proposed …
Distributed Maximum Likelihood Sensor Network Localization
 IEEE Transactions on Signal Processing
2014
Cited by 7 (3 self)
We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class, and we design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM); it converges to the centralized solution, it can run asynchronously, and it is computation-error-resilient. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue the added value of ADMM, especially for large-scale networks. Index Terms—Distributed optimization, convex relaxations, sensor network localization, distributed algorithms, ADMM, distributed localization, sensor networks, maximum likelihood.
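The paper's edge-based ML relaxations are too involved for a short sketch, but the consensus-ADMM pattern the abstract relies on — each node performing a local update and exchanging only a shared consensus variable — can be illustrated on the simplest possible instance, distributed averaging. The scalar quadratic local costs and the penalty parameter `rho=1.0` below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def consensus_admm(a, rho=1.0, steps=100):
    # ADMM for min sum_i 0.5*(x_i - a_i)^2  s.t.  x_i = z for all i.
    # Each node i updates (x_i, u_i) locally; only z is communicated,
    # which is the distributed pattern described in the abstract.
    n = len(a)
    x = np.zeros(n)
    z = 0.0
    u = np.zeros(n)                          # scaled dual variables, one per node
    for _ in range(steps):
        x = (a + rho * (z - u)) / (1 + rho)  # local primal updates
        z = np.mean(x + u)                   # consensus (averaging) step
        u = u + x - z                        # local dual updates
    return z

a = np.array([1.0, 2.0, 3.0, 10.0])
z_star = consensus_admm(a)
print(abs(z_star - a.mean()) < 1e-6)         # True: converges to the average
```

For this toy problem the consensus variable converges geometrically to the centralized solution (the mean of the local data), mirroring the convergence-to-the-centralized-solution claim made for the full edge-based scheme.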