Results 1–10 of 118
Convergent SDP-Relaxations in Polynomial Optimization with Sparsity
 SIAM Journal on Optimization
Cited by 56 (16 self)
Abstract. We consider a polynomial programming problem P on a compact semialgebraic set K ⊂ R^n, described by m polynomial inequalities gj(X) ≥ 0, and with criterion f ∈ R[X]. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [9]. In particular, the SDP relaxation of order r has the following two features: (a) the number of variables is O(κ^{2r}), where κ = max[κ1, κ2], with κ1 (resp. κ2) being the maximum number of variables appearing in the monomials of f (resp. appearing in a single constraint gj(X) ≥ 0); (b) the largest size of the LMIs (Linear Matrix Inequalities) is O(κ^r). This is to be compared with the respective number of variables O(n^{2r}) and LMI size O(n^r) in the original SDP relaxations defined in [11]. Therefore, great computational savings are expected in case of sparsity in the data {gj, f}, i.e. when κ is small, a frequent case in practical applications of interest. The novelty with respect to [9] is that we prove convergence to the global optimum of P when the sparsity pattern satisfies a condition often encountered in large problems of practical interest, known as the running intersection property in graph theory. In such cases, and as a byproduct, we also obtain a new representation result for polynomials positive on a basic closed semialgebraic set: a sparse version of Putinar's Positivstellensatz [16].
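The gap between O(n^{2r}) and O(κ^{2r}) is easy to make concrete: the order-r moment relaxation has one variable per monomial of degree ≤ 2r, i.e. C(n+2r, 2r) of them. The sketch below compares a dense relaxation with a sparse one built from small cliques; the clique count and sizes are made-up illustrative numbers, not from the paper:

```python
from math import comb

def num_moments(n: int, r: int) -> int:
    """Number of monomials of degree <= 2r in n variables,
    i.e. the variable count of the order-r moment relaxation."""
    return comb(n + 2 * r, 2 * r)

# Dense relaxation over all n variables vs. a sparse relaxation that
# only forms moment matrices over small cliques of kappa variables.
n, kappa, r, n_cliques = 20, 3, 2, 18
dense = num_moments(n, r)                    # O(n^{2r}) growth
sparse = n_cliques * num_moments(kappa, r)   # O(kappa^{2r}) per clique
print(dense, sparse)
```

Even at this modest size the dense count dwarfs the sparse one, which is the source of the "great computational savings" the abstract refers to.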
Globally optimal estimates for geometric reconstruction problems
 In ICCV
, 2005
Cited by 53 (15 self)
We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from either local minima or non-optimality, or a combination of both, we pursue the goal of achieving global solutions of the statistically optimal cost function. Our approach is based on a hierarchy of convex relaxations to solve nonconvex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds, and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and, last but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed for attaining the global optimum.
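In its simplest form, the detection of global optimality mentioned above is a matching-bounds test: if a feasible candidate achieves the relaxation's lower bound, it is certified globally optimal. A minimal sketch with hypothetical numbers (the bound values and tolerance are invented for illustration, not taken from the paper):

```python
def certifies_global(lower_bound: float, candidate_cost: float,
                     tol: float = 1e-8) -> bool:
    """A convex relaxation yields a lower bound on the true minimum.
    If a feasible candidate's cost matches that bound (up to tol),
    the candidate is certified as a global minimizer."""
    return candidate_cost - lower_bound <= tol

# Hypothetical bounds from a relaxation hierarchy: they increase
# monotonically, and the gap closes at the third relaxation order.
bounds = [0.91, 0.97, 1.0]       # lower bounds at orders 1..3
candidate_cost = 1.0             # cost of a feasible reconstruction
print([certifies_global(b, candidate_cost) for b in bounds])
```

In the paper's setting stronger rank-based certificates are also available; the bound-matching test is merely the cheapest sufficient condition.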
Exploiting sparsity in SDP relaxation for sensor network localization
 SIAM J. Optim
, 2009
Cited by 36 (9 self)
Abstract. A sensor network localization problem can be formulated as a quadratic optimization problem (QOP). For quadratic optimization problems, the semidefinite programming (SDP) relaxation by Lasserre with relaxation order 1 for general polynomial optimization problems (POPs) is known to be equivalent to the sparse SDP relaxation by Waki et al. with relaxation order 1, except for the size and sparsity of the resulting SDP relaxation problems. We show that the sparse SDP relaxation applied to the QOP is at least as strong as the Biswas-Ye SDP relaxation for the sensor network localization problem. A sparse variant of the Biswas-Ye SDP relaxation, which is equivalent to the original Biswas-Ye SDP relaxation, is also derived. Numerical results are compared with the Biswas-Ye SDP relaxation and the edge-based SDP relaxation by Wang et al. We show that the proposed sparse SDP relaxation is faster than the Biswas-Ye SDP relaxation. In fact, the computational efficiency in solving the resulting SDP problems increases as the number of anchors and/or the radio range grows. The proposed sparse SDP relaxation also provides more accurate solutions than the edge-based SDP relaxation when exact distances are given between sensors and anchors and there are only a small number of anchors. Key words. Sensor network localization problem, polynomial optimization problem, semidefinite relaxation, sparsity
Sum of squares methods for sensor network localization
, 2006
Cited by 30 (3 self)
We formulate the sensor network localization problem as finding the global minimizer of a quartic polynomial. Then sum of squares (SOS) relaxations can be applied to solve it. However, the general SOS relaxations are too expensive to implement for large problems. Exploiting the special features of this polynomial, we propose a new structured SOS relaxation and discuss its various properties. When distances are given exactly, this SOS relaxation often returns true sensor locations. At each step of interior-point methods solving this SOS relaxation, the complexity is O(n^3), where n is the number of sensors. When the distances have small perturbations, we show that the sensor locations given by this SOS relaxation are accurate within a constant factor of the perturbation error under some technical assumptions. The performance of this SOS relaxation is tested on some randomly generated problems.
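The quartic polynomial in question is a sum of squared residuals of the squared-distance equations, and it vanishes (up to rounding) at any consistent layout. A minimal sketch of the objective only; the three-point configuration is a made-up toy instance, and no SOS machinery is invoked:

```python
import math

def localization_objective(positions, edges, dists):
    """Quartic polynomial in the sensor coordinates: sum over edges of
    (||x_i - x_j||^2 - d_ij^2)^2.  Near zero at a consistent layout."""
    total = 0.0
    for (i, j), d in zip(edges, dists):
        sq = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
        total += (sq - d ** 2) ** 2
    return total

# Toy instance: three points with pairwise distances measured exactly.
true_pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
edges = [(0, 1), (0, 2), (1, 2)]
dists = [1.0, 1.0, math.sqrt(2.0)]

exact = localization_objective(true_pos, edges, dists)
perturbed = localization_objective([(0.1, 0.0), (1.0, 0.0), (0.0, 1.0)],
                                   edges, dists)
print(exact, perturbed)
```

Because the global minimum of this polynomial is (essentially) zero and attained at the true positions, minimizing it globally solves the localization problem, which is what the structured SOS relaxation targets.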
Exact Certification of Global Optimality of Approximate Factorizations Via Rationalizing Sums-of-Squares with Floating Point Scalars
, 2008
Cited by 25 (10 self)
We generalize the technique of Peyrl and Parrilo [Proc. SNC 2007] to compute lower bound certificates for several well-known factorization problems in hybrid symbolic-numeric computation. The idea is to transform a numerical sum-of-squares (SOS) representation of a positive polynomial into an exact rational identity. Our algorithms successfully certify accurate rational lower bounds near the irrational global optima for benchmark approximate polynomial greatest common divisors and multivariate polynomial irreducibility radii from the literature, and factor coefficient bounds in the setting of a model problem by Rump (up to n = 14, factor degree = 13). The numeric SOSes produced by the current fixed-precision semidefinite programming (SDP) packages (SeDuMi, SOSTOOLS, YALMIP) are usually too coarse to allow successful projection to exact SOSes via Maple 11's exact linear algebra. Therefore, before projection we refine the SOSes by rank-preserving Newton iteration. For smaller problems the starting SOSes for Newton can be guessed without SDP ("SDP-free SOS"), but for larger inputs we additionally appeal to sparsity techniques in our SDP formulation.
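The numeric-to-exact step can be illustrated on a toy instance: take a (hypothetical) floating-point Gram matrix that an SDP solver might return for p(x) = x² − 2x + 2 over the basis z = [1, x], round its entries to nearby rationals, and verify both the polynomial identity and positive semidefiniteness in exact arithmetic. This sketch uses naive entrywise rounding, not the paper's projection and rank-preserving Newton refinement:

```python
from fractions import Fraction

def rationalize(G_num, max_den=100):
    """Round each floating-point Gram entry to a nearby small rational."""
    return [[Fraction(g).limit_denominator(max_den) for g in row]
            for row in G_num]

# Hypothetical solver output for p(x) = x^2 - 2x + 2,  p = z^T G z,
# z = [1, x]; the exact Gram matrix is [[2, -1], [-1, 1]].
G_num = [[1.9999998, -1.0000001],
         [-1.0000001, 1.0000002]]
G = rationalize(G_num)

# Exact identity check: coefficients of z^T G z must equal those of p.
coeffs = {0: G[0][0], 1: G[0][1] + G[1][0], 2: G[1][1]}
ok_identity = coeffs == {0: Fraction(2), 1: Fraction(-2), 2: Fraction(1)}
# Exact PSD check via leading principal minors, in rational arithmetic.
ok_psd = G[0][0] > 0 and G[0][0] * G[1][1] - G[0][1] * G[1][0] >= 0
print(ok_identity, ok_psd)
```

For real instances the rounded matrix rarely satisfies the identity exactly, which is why the paper projects onto the affine constraint set and refines by Newton iteration first; the toy above only shows what "exact rational identity" means.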
Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion
, 2010
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
 SIAM Journal On Optimization
, 2008
Cited by 23 (4 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials ("small" means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least-squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: Polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial systems, nonlinear differential equations, sensor network localization
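The payoff of the summation structure shows up in the size of the semidefinite blocks: a degree-2d SOS certificate uses a Gram matrix indexed by the C(n+d, d) monomials of degree ≤ d, whereas each small summand in κ variables needs only a C(κ+d, d)-sized block. The numbers below are an illustrative example, not from the paper:

```python
from math import comb

def gram_size(n_vars: int, d: int) -> int:
    """Side length of the Gram matrix for an SOS certificate of a
    degree-2d polynomial in n_vars variables (monomials of degree <= d)."""
    return comb(n_vars + d, d)

# f is a summation of m small polynomials, each involving kappa variables.
n, kappa, d, m = 50, 2, 2, 49
dense_gram = gram_size(n, d)            # one large Gram matrix
sparse_grams = m * gram_size(kappa, d)  # m small Gram matrices instead
print(dense_gram, sparse_grams)
```

Since interior-point cost grows steeply with block size, many small blocks beat one huge block even when their total entry count is comparable.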
Correlative sparsity in primal-dual interior-point methods for LP, SDP and SOCP
, 2006
Cited by 22 (16 self)
Exploiting sparsity has been a key issue in solving large-scale optimization problems. The most time-consuming part of primal-dual interior-point methods for linear programs, second-order cone programs, and semidefinite programs is solving the Schur complement equation at each iteration, usually by the Cholesky factorization. The computational efficiency is greatly affected by the sparsity of the coefficient matrix of the equation, which is determined by the sparsity of the optimization problem (linear program, semidefinite program or second-order cone program). We show that if an optimization problem is correlatively sparse, then the coefficient matrix of the Schur complement equation inherits the sparsity, and a sparse Cholesky factorization applied to the matrix results in no fill-in.
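The no-fill-in behavior can be checked directly on a small example. The sketch below is a generic textbook Cholesky applied to a tridiagonal positive definite matrix (a chordal pattern), not the paper's actual Schur complement matrix; it verifies that the factor stays inside the original sparsity pattern:

```python
import math

def cholesky(A):
    """Plain dense Cholesky A = L L^T for a symmetric positive
    definite matrix A, returned as the lower-triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# Tridiagonal SPD matrix: its pattern is chordal, so factorization
# introduces no nonzeros outside the original band.
n = 6
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
L = cholesky(A)
fill = sum(1 for i in range(n) for j in range(i)
           if A[i][j] == 0.0 and abs(L[i][j]) > 1e-12)
print(fill)
```

For non-chordal patterns a fill-reducing (e.g. symbolic) ordering is computed first; the correlative sparsity the paper studies guarantees a chordal structure for which such an ordering yields zero fill-in.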
A parallel primal-dual interior-point method for semidefinite programs using positive definite matrix completion
 PARALLEL COMPUTING
, 2003
Cited by 20 (13 self)
A parallel computational method SDPARA-C is presented for SDPs (semidefinite programs). It combines two methods, SDPARA and SDPA-C, proposed by the authors who developed the software package SDPA. SDPARA is a parallel implementation of SDPA; it features parallel computation of the elements of the Schur complement equation system and a parallel Cholesky factorization of its coefficient matrix. SDPARA can effectively solve SDPs with a large number of equality constraints; however, it does not handle SDPs with a large-scale matrix variable with similar effectiveness. SDPA-C is a primal-dual interior-point method using the positive definite matrix completion technique of Fukuda et al., and it performs effectively on SDPs with a large-scale matrix variable, but not on those with a large number of equality constraints. SDPARA-C benefits from the strong performance of each of the two methods. Furthermore, SDPARA-C is designed to attain high scalability by addressing most of the expensive computations involved in the primal-dual interior-point method. Numerical experiments with the three parallel software packages SDPARA-C, SDPARA and PDSDP by Benson show that SDPARA-C efficiently solves SDPs with a large-scale matrix variable as well as a large number of equality constraints with a small amount of memory.
Inner approximations for polynomial matrix inequalities and robust stability regions
, 2012
Cited by 13 (6 self)
Following a polynomial approach, many robust fixed-order controller design problems can be formulated as optimization problems whose set of feasible solutions is modelled by parametrized polynomial matrix inequalities (PMI). These feasibility sets are typically nonconvex. Given a parametrized PMI set, we provide a hierarchy of linear matrix inequality (LMI) problems whose optimal solutions generate inner approximations modelled by a single polynomial superlevel set. These inner approximations converge in a well-defined analytic sense to the nonconvex original feasible set, with asymptotically vanishing conservatism. One may also require the hierarchy of inner approximations to be nested or convex. In the latter case they no longer converge to the feasible set, but they can be used in a convex optimization framework at the price of some conservatism. Finally, we show that the specific geometry of nonconvex polynomial stability regions can be exploited to improve convergence of the hierarchy of inner approximations.