Results 1–10 of 23
Exact Certification of Global Optimality of Approximate Factorizations Via Rationalizing Sums-of-Squares with Floating Point Scalars
, 2008
Cited by 26 (10 self)
We generalize the technique by Peyrl and Parrilo [Proc. SNC 2007] to computing lower bound certificates for several well-known factorization problems in hybrid symbolic-numeric computation. The idea is to transform a numerical sum-of-squares (SOS) representation of a positive polynomial into an exact rational identity. Our algorithms successfully certify accurate rational lower bounds near the irrational global optima for benchmark approximate polynomial greatest common divisors and multivariate polynomial irreducibility radii from the literature, and factor coefficient bounds in the setting of a model problem by Rump (up to n = 14, factor degree = 13). The numeric SOSes produced by the current fixed-precision semidefinite programming (SDP) packages (SeDuMi, SOSTOOLS, YALMIP) are usually too coarse to allow successful projection to exact SOSes via Maple 11’s exact linear algebra. Therefore, before projection we refine the SOSes by rank-preserving Newton iteration. For smaller problems the starting SOSes for Newton can be guessed without SDP (“SDP-free SOS”), but for larger inputs we additionally appeal to sparsity techniques in our SDP formulation.
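The round-and-project step at the heart of this approach can be illustrated on a toy example. The sketch below is not from the paper: it rounds a noisy numerical Gram matrix to rationals with `fractions.Fraction`, then restores the exact linear coefficient constraints by directly resetting entries (a crude stand-in for the paper's projection and Newton refinement), and verifies the identity f = zᵀQz exactly.

```python
from fractions import Fraction as F

# Toy example (not from the paper): certify f(x) = x^4 + 2x^2 >= 0 exactly.
# In the monomial basis z = [1, x, x^2], an SDP solver might return a noisy
# numerical Gram matrix Q_num with f(x) ~= z^T Q_num z:
Q_num = [[1.0e-9, 0.0, 1.2e-9],
         [0.0, 1.999999998, 0.0],
         [1.2e-9, 0.0, 1.000000001]]

# Step 1: round each entry to a nearby rational (the "rationalizing" step).
Q = [[F(x).limit_denominator(10**4) for x in row] for row in Q_num]

# Step 2: restore the exact linear constraints that make z^T Q z match the
# coefficients of f: constant -> 0, x^2 -> 2, x^4 -> 1.
Q[0][0] = F(0)
Q[0][2] = Q[2][0] = F(0)          # keep the whole x^2 coefficient in Q[1][1]
Q[1][1] = F(2)
Q[2][2] = F(1)

# Step 3: verify the identity exactly.  With basis exponents [0, 1, 2],
# the coefficient of x^k in z^T Q z is the sum of Q[i][j] over i + j = k.
coeffs = [sum(Q[i][j] for i in range(3) for j in range(3) if i + j == k)
          for k in range(5)]
assert coeffs == [F(0), F(0), F(2), F(0), F(1)]   # f = 2x^2 + x^4, exactly
# Q is diagonal with nonnegative rational entries, hence PSD: an exact
# rational SOS, here f = 2*(x)^2 + 1*(x^2)^2.
```

In the paper the projection is an exact linear-algebra step over the full affine constraint set, and the preceding refinement is a rank-preserving Newton iteration; the hard-coded resets above only mimic the idea at toy scale.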
A note on the representation of positive polynomials with structured sparsity, Archiv der Mathematik
Cited by 12 (0 self)
Abstract. We consider real polynomials in finitely many variables. Let the variables consist of finitely many blocks that are allowed to overlap in a certain way. Let the solution set of a finite system of polynomial inequalities be given where each inequality involves only variables of one block. We investigate polynomials that are positive on such a set and sparse in the sense that each monomial involves only variables of one block. In particular, we derive a short and direct proof for Lasserre’s theorem on the existence of sums of squares certificates respecting the block structure. The motivation for the results can be found in the literature on numerical methods for global optimization of polynomials that exploit sparsity.
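The overlapping blocks in question can be made concrete. As a rough illustration (not the paper's construction), the sketch below builds the variable co-occurrence graph of a sparse polynomial and takes its maximal cliques as the overlapping variable blocks; the example polynomial is invented for the demo.

```python
from itertools import combinations

# Monomials of a sparse polynomial, as maps variable -> exponent.
# Illustrative example: f = x1^2*x2^2 + x2^2*x3^2 + x4^2.
monomials = [{1: 2, 2: 2}, {2: 2, 3: 2}, {4: 2}]

# Variables i, j interact if some monomial involves both.
variables = sorted({v for m in monomials for v in m})
edges = {frozenset(p) for m in monomials for p in combinations(sorted(m), 2)}

def is_clique(s):
    return all(frozenset(p) in edges for p in combinations(sorted(s), 2))

# Maximal cliques by brute force (fine for small graphs): these are the
# overlapping blocks -- note that x2 appears in two of them.
cliques = [set(s) for r in range(1, len(variables) + 1)
           for s in combinations(variables, r) if is_clique(s)]
blocks = sorted((c for c in cliques if not any(c < d for d in cliques)),
                key=min)
print(blocks)   # [{1, 2}, {2, 3}, {4}]
```

Each monomial of f lives inside one block, which is exactly the sparsity pattern the theorem's block-respecting SOS certificates exploit.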
Structured semidefinite representation of some convex sets
, 2008
Cited by 8 (6 self)
Linear matrix inequalities (LMIs) have had a major impact on control, but formulating a problem as an LMI is an art. Recently there are the beginnings of a theory of which problems are in fact expressible as LMIs. For optimization purposes it can also be useful to have “lifts” which are expressible as LMIs. We show here that this is a much less restrictive condition and give methods for actually constructing lifts and their LMI representations.
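A standard textbook example of such a lift (illustrative, not taken from the paper): the “TV screen” set {(x, y) : x⁴ + y⁴ ≤ 1} is not itself the feasible set of an LMI in (x, y), but it is the projection of one in lifted variables (u, v) standing in for x² and y². The sketch below checks membership through the lifted LMI with hand-rolled determinant tests.

```python
# Lifted LMI description of the "TV screen" {(x,y): x^4 + y^4 <= 1}
# (a standard example, not taken from the paper).  With u, v as lifted
# variables for x^2, y^2, the set is the projection onto (x, y) of:
#   [[1, x], [x, u]] >= 0,   [[1, y], [y, v]] >= 0,
#   [[1, u, v], [u, 1, 0], [v, 0, 1]] >= 0   (equivalent to u^2 + v^2 <= 1)

def psd2(a, b, c):              # [[a, b], [b, c]] >= 0
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def in_tv_screen(x, y):
    # The least feasible lift is u = x^2, v = y^2: raising u, v beyond that
    # only hurts the third LMI, so if any lift works, this one does.
    u, v = x * x, y * y
    return (psd2(1, x, u) and psd2(1, y, v)
            and 1 - u * u - v * v >= 0)    # det of the 3x3 block

print(in_tv_screen(0.8, 0.8))   # True:  0.8^4 + 0.8^4 = 0.8192 <= 1
print(in_tv_screen(0.9, 0.9))   # False: 0.9^4 + 0.9^4 = 1.3122 >  1
```

Each constraint here is linear in (x, y, u, v), so this is a genuine LMI lift; the quartic set appears only after projecting out u and v.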
Sum of squares methods for minimizing polynomial forms over spheres and hypersurfaces, Frontiers of Mathematics in China
Cited by 4 (2 self)
This paper studies the problem of minimizing a homogeneous polynomial (form) f(x) over the unit sphere S^{n−1} = {x ∈ R^n : ‖x‖_2 = 1}. The problem is NP-hard when f(x) has degree 3 or higher. Denote by f_min (resp., f_max) the minimum (resp., maximum) value of f(x) on S^{n−1}. First, when f(x) is an even form of degree 2d, we study the standard sum of squares (SOS) relaxation for finding a lower bound on the minimum f_min: max γ s.t. f(x) − γ · ‖x‖_2^{2d} is SOS. Let f_sos be the optimal value of this relaxation. Then we show that for all n ≥ 2d, 1 ≤ (f_max − f_sos)/(f_max − f_min) ≤ C(d) n
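For intuition, the relaxation can be carried out by hand on a tiny instance. The sketch below (an illustrative example, not from the paper) computes f_sos for f(x, y) = x⁴ + y⁴ on the unit circle, where f_min = 1/2: the Gram matrices of f − γ(x² + y²)² in the basis [x², xy, y²] form a one-parameter family, and we bisect on γ with a closed-form PSD test.

```python
# SOS relaxation max{ g : f - g*(x^2+y^2)^2 is SOS } for the toy form
# f(x,y) = x^4 + y^4 (min on the unit circle is 1/2, at x = y = 1/sqrt(2)).
# In the basis z = [x^2, x*y, y^2], the Gram matrices of f - g*||.||^4 are
#   [[1-g, 0,      t],
#    [0,  -2g-2t,  0],
#    [t,   0,    1-g]]   for a free parameter t (the x^2*y^2 coefficient
# splits as (-2g-2t) + 2t = -2g for every t).  This matrix is PSD iff
# 1-g >= 0, -2g-2t >= 0, and (1-g)^2 >= t^2.

def sos_feasible(g):
    # If any t works, t = -g does (it makes the middle entry exactly 0).
    t = -g
    return 1 - g >= 0 and -2 * g - 2 * t >= 0 and (1 - g) ** 2 >= t * t

lo, hi = 0.0, 1.0          # bisection for the largest feasible gamma
for _ in range(60):
    mid = (lo + hi) / 2
    if sos_feasible(mid):
        lo = mid
    else:
        hi = mid

f_sos = lo
print(round(f_sos, 6))     # 0.5 -- matches f_min: the relaxation is exact here
```

On this example the gap (f_max − f_sos)/(f_max − f_min) equals 1, the lower end of the paper's bound; the upper bound C(d) n concerns how bad the gap can get as n grows.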
Approximate GCDs of polynomials and sparse SOS relaxations
, 2008
Cited by 3 (1 self)
The problem of computing approximate GCDs of several polynomials with real or complex coefficients can be formulated as computing the minimal perturbation such that the perturbed polynomials have an exact GCD of given degree. We present algorithms based on SOS (Sums Of Squares) relaxations for solving the involved polynomial or rational function optimization problems with or without constraints.
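For the simplest case, a common root of degree one, the minimal perturbation has a closed form that already exhibits the rational-function objective: the nearest polynomial to p (in the 2-norm of its coefficient vector) having root r lies at squared distance p(r)² / (1 + r² + … + r^{2d}), the least-norm solution of one linear equation in the coefficients. A grid-search sketch over r (illustrative only; the paper solves such problems via SOS relaxations, not grid search):

```python
# Nearest common root of f and g.  For a candidate root r, the squared
# 2-norm of the least coefficient perturbation making r a root of a
# degree-d polynomial p is p(r)^2 / (1 + r^2 + ... + r^(2d)).
# Minimize the sum of the two perturbations over r.

def pert(coeffs, r):
    d = len(coeffs) - 1
    val = sum(c * r ** k for k, c in enumerate(coeffs))
    return val * val / sum(r ** (2 * k) for k in range(d + 1))

f = [2, -3, 1]    # x^2 - 3x + 2, roots 1 and 2  (coefficients low-to-high)
g = [6, -5, 1]    # x^2 - 5x + 6, roots 2 and 3

grid = [i / 1000 for i in range(0, 4001)]             # r in [0, 4]
best_r = min(grid, key=lambda r: pert(f, r) + pert(g, r))
print(best_r, pert(f, best_r) + pert(g, best_r))      # 2.0 0.0 (exact GCD x - 2)
```

Here f and g already share the root 2, so the minimal perturbation is zero; perturbing one coefficient of g slightly yields a genuinely approximate GCD problem with a small positive optimum.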
Regularization methods for SDP relaxations in large scale polynomial optimization
Cited by 3 (1 self)
We study how to solve semidefinite programming (SDP) relaxations for large scale polynomial optimization. When interior-point methods are used, typically only small or moderately large problems can be solved. This paper studies regularization methods for solving polynomial optimization problems. We describe these methods for semidefinite optimization with block structures, and then apply them to solve large scale polynomial optimization problems. The performance is tested on various numerical examples. With regularization methods, significantly bigger problems can be solved on a regular computer, which is almost impossible with interior-point methods.
Regularization methods for sum of squares relaxations in large scale polynomial optimization
 Department of Mathematics, University of California
, 2009
Cited by 3 (0 self)
We study how to solve sum of squares (SOS) and Lasserre’s relaxations for large scale polynomial optimization. When interior-point type methods are used, typically only small or moderately large problems can be solved. This paper proposes regularization-type methods which can solve significantly larger problems. We first describe these methods for general conic semidefinite optimization, and then apply them to solve large scale polynomial optimization. Their efficiency is demonstrated by extensive numerical computations. In particular, a general dense quartic polynomial optimization problem with 100 variables can be solved on a regular computer, which is almost impossible by applying prior existing SOS solvers. Key words: polynomial optimization, regularization methods, semidefinite programming, sum of squares, Lasserre’s relaxation. AMS subject classification: 65K05, 90C22
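A core computational kernel shared by such regularization and augmented-Lagrangian schemes for SDP is projection onto the PSD cone: eigendecompose and clip negative eigenvalues. A minimal NumPy sketch of just this projection step (the surrounding solver scaffolding is omitted):

```python
import numpy as np

def project_psd(A):
    """Nearest PSD matrix (in Frobenius norm) to a symmetric A: clip the
    negative eigenvalues to zero and reassemble."""
    w, V = np.linalg.eigh(A)                    # A = V diag(w) V^T
    return (V * np.clip(w, 0.0, None)) @ V.T    # V diag(max(w,0)) V^T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                      # eigenvalues 3 and -1
P = project_psd(A)
print(P)                                        # ~[[1.5, 1.5], [1.5, 1.5]]
```

Each iteration of a regularization method performs one such projection (cheap relative to an interior-point step on the same problem), which is why these methods scale to much larger SDP relaxations.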
Selecting a Monomial Basis for Sums of Squares Programming over a Quotient Ring
Cited by 3 (2 self)
Abstract — In this paper we describe a method for choosing a “good” monomial basis for a sums of squares (SOS) program formulated over a quotient ring. It is known that the monomial basis need only include standard monomials with respect to a Groebner basis. We show that in many cases it is possible to use a reduced subset of standard monomials by combining Groebner basis techniques with the well-known Newton polytope reduction. This reduced subset of standard monomials yields a smaller semidefinite program for obtaining a certificate of nonnegativity of a polynomial on an algebraic variety.
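The Newton polytope half of this reduction can be sketched directly (a toy illustration for a plain polynomial, ignoring the quotient-ring/Groebner side): a monomial x^α can appear in an SOS decomposition of f only if 2α lies in the Newton polytope of f. Below, 2D hull membership is tested via Caratheodory (a point is in the hull iff it lies in some triangle of the generators):

```python
from itertools import combinations

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    d = [cross(a, b, p), cross(b, c, p), cross(c, a, p)]
    return not (any(x < 0 for x in d) and any(x > 0 for x in d))

def in_hull(p, pts):
    # 2D Caratheodory: p is in conv(pts) iff it lies in some triangle.
    return any(in_triangle(p, *t) for t in combinations(pts, 3))

# Motzkin polynomial x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1: exponent vectors.
E = [(4, 2), (2, 4), (2, 2), (0, 0)]

# Candidate SOS monomials: alpha with 2*alpha inside Newton(f).
mx = max(a for a, b in E) // 2
my = max(b for a, b in E) // 2
basis = [(a, b) for a in range(mx + 1) for b in range(my + 1)
         if in_hull((2 * a, 2 * b), E)]
print(basis)   # [(0, 0), (1, 1), (1, 2), (2, 1)] -- note (2, 2) is pruned
```

The naive degree-3 basis in two variables has 10 monomials; the Newton polytope cuts it to 4, shrinking the Gram matrix accordingly. The paper's contribution is combining this pruning with standard monomials modulo a Groebner basis, which the toy above does not model.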
Optimizing n-variate (n+k)-nomials for small k
, 2010
Cited by 2 (1 self)
We give a high precision polynomial-time approximation scheme for the supremum of any honest n-variate (n + 2)-nomial with a constant term, allowing real exponents as well as real coefficients. Our complexity bounds count field operations and inequality checks, and are quadratic in n and the logarithm of a certain condition number. For the special case of n-variate (n + 2)-nomials with integer exponents, the log of our condition number is subquadratic in the sparse size. The best previous complexity bounds were exponential in the sparse size, even for fixed n. Along the way, we partially extend the theory of Viro diagrams and A-discriminants to real exponents. We also show that, for any fixed δ > 0, deciding whether the supremum of an n-variate (n + n^δ)-nomial exceeds a given number is NP_R-complete.