Results 1–10 of 357
A proof of the Kepler conjecture
 Math. Intelligencer
, 1994
Abstract
Cited by 206 (12 self)
This section describes the structure of the proof of …
The Convex Geometry of Linear Inverse Problems
, 2010
Abstract
Cited by 181 (18 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they have only a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others including sums of a few permutation matrices (e.g., ranked elections, multi-object tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial …
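As an illustration of this entry's theme: for sparse vectors the atomic norm is the l1 norm, and the resulting convex program can be solved by proximal gradient descent (ISTA). The sketch below is not from the paper; the matrix, data, and parameter values are invented toy choices.

```python
# Illustrative sketch (not from the paper): solve the sparse-vector case of
# atomic norm minimization,
#     min_x 0.5*||A x - b||^2 + lam*||x||_1,
# by proximal gradient descent (ISTA). A, b, lam, step are toy values.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied componentwise."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam, step, iters=2000):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]               # A x - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# Underdetermined system (2 measurements, 3 unknowns) whose sparsest
# solution is (0, 0, 2); ISTA recovers approximately that vector.
A = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
b = [2.0, 2.0]
x = ista(A, b, lam=0.05, step=0.3)
```

The step size 0.3 is chosen below 1/L, where L = 3 is the largest eigenvalue of A^T A, which guarantees ISTA's convergence on this instance.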
Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity
 SIAM Journal on Optimization
, 2006
Abstract
Cited by 119 (29 self)
Unconstrained and inequality-constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.
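A minimal sketch of the correlative sparsity pattern graph construction, under the simplifying assumption that each polynomial is represented only by the sets of variable indices appearing in its monomials (pure Python, not the paper's implementation):

```python
# Toy construction of a correlative sparsity pattern (csp) graph:
# nodes are variables; two variables are connected if they occur in the
# same monomial of the objective or in the same constraint polynomial.
from itertools import combinations

def csp_graph(objective_monomials, constraint_variables):
    edges = set()
    for varset in list(objective_monomials) + list(constraint_variables):
        for u, v in combinations(sorted(varset), 2):
            edges.add((u, v))
    return edges

# Objective x0^2*x1 + x1*x2 plus one constraint on (x2, x3): the graph is
# a chain, so the SOS supports can be restricted to small variable cliques.
edges = csp_graph([{0, 1}, {1, 2}], [{2, 3}])
```

On this toy instance the maximal cliques are {x0, x1}, {x1, x2}, {x2, x3}, which is exactly the structure that lets the SDP relaxation use several small moment matrices instead of one dense one.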
Scalable analysis of linear systems using mathematical programming
 In Proc. VMCAI, LNCS 3385
, 2005
Abstract
Cited by 107 (9 self)
We present a method for generating linear invariants for domains consisting of arbitrary polyhedra of a predefined fixed shape. The basic operations on the domain, such as abstraction, intersection, join, and inclusion tests, are all posed as linear optimization queries, which can be solved efficiently by existing LP solvers. The number and dimensionality of the LP queries are polynomial in the program dimensionality, size, and the number of target invariants. The method generalizes similar analyses in the interval, octagon, and octahedra domains, without resorting to polyhedral manipulations. We demonstrate the performance of our method on some benchmark programs.
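For intuition, the interval domain (the simplest of the fixed-shape domains this method generalizes) supports the same basic operations; a toy sketch follows, with the caveat that in the paper's template-polyhedra setting each operation would instead be posed as a linear optimization query.

```python
# Interval abstract domain, written out directly; in the fixed-shape
# polyhedra domain of the paper, join and inclusion become LP queries.

def join(a, b):
    # least interval containing both arguments (a, b are (low, high) pairs)
    return (min(a[0], b[0]), max(a[1], b[1]))

def includes(a, b):
    # True if interval a contains interval b
    return a[0] <= b[0] and b[1] <= a[1]

def widen(a, b):
    # standard interval widening: drop any bound that is still growing,
    # forcing the abstract fixpoint iteration to terminate
    lo = a[0] if a[0] <= b[0] else float("-inf")
    hi = a[1] if a[1] >= b[1] else float("inf")
    return (lo, hi)
```

For example, `join((0, 2), (1, 5))` yields `(0, 5)`, and widening `(0, 2)` with `(0, 3)` jumps the unstable upper bound to infinity so the analysis converges in finitely many steps.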
Proving Program Invariance and Termination by Parametric Abstraction, Lagrangian Relaxation and Semidefinite Programming
 In VMCAI'2005: Verification, Model Checking, and Abstract Interpretation, volume 3385 of LNCS
, 2005
Abstract
Cited by 96 (1 self)
In order to verify semi-algebraic programs, we automate the Floyd/Naur/Hoare proof method. The main task is to automatically infer valid invariants and rank functions. First we express the program semantics in polynomial form. Then the unknown rank function and invariants are abstracted in parametric form. The implication in the Floyd/Naur/Hoare verification conditions is handled by abstraction into numerical constraints via Lagrangian relaxation. The remaining universal quantification is handled by semidefinite programming relaxation. Finally, the parameters are computed using semidefinite programming solvers. This new approach exploits recent progress in the numerical resolution of linear and bilinear matrix inequalities by semidefinite programming, using efficient polynomial primal/dual interior-point methods that generalize those well known in linear programming to convex optimization. The framework is applied to invariance and termination proofs of sequential, nondeterministic, concurrent, and fair parallel imperative polynomial programs, and can easily be extended to other safety and liveness properties.
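As a toy illustration of what an inferred rank function certifies: the loop below is an invented example, not one from the paper, and the two side conditions are checked here by exhaustive sampling, whereas the paper discharges the corresponding implications symbolically via Lagrangian relaxation and semidefinite programming.

```python
# For the loop `while x > 0: x = x - 1`, the function r(x) = x is a valid
# rank function: on the guard it is bounded below and decreases by at
# least delta on every iteration, so the loop terminates. The checks are
# done by sampling, purely for illustration.

def guard(x):
    return x > 0

def body(x):
    return x - 1

def rank(x):
    return x

def certifies_termination(samples, delta=1.0):
    for x in samples:
        if guard(x):
            if rank(x) < 0:                       # bounded below on the guard
                return False
            if rank(x) - rank(body(x)) < delta:   # strict decrease per step
                return False
    return True
```

The point of the parametric-abstraction step in the paper is that `rank` would be a template such as r(x) = a*x + b with unknown coefficients, and the two conditions above become constraints on (a, b) solved numerically.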
Detecting global optimality and extracting solutions in GloptiPoly
 Chapter in D. Henrion, A. Garulli (Editors). Positive polynomials in control. Lecture Notes in Control and Information Sciences
, 2005
Abstract
Cited by 80 (12 self)
GloptiPoly is a Matlab/SeDuMi add-on to build and solve convex linear matrix inequality (LMI) relaxations of non-convex optimization problems with multivariate polynomial objective function and constraints, based on the theory of moments. In contrast with the dual sum-of-squares decompositions of positive polynomials, the theory of moments allows one to detect global optimality of an LMI relaxation and extract globally optimal solutions. In this report, we describe and illustrate the numerical linear algebra algorithm implemented in GloptiPoly for detecting global optimality and extracting solutions. We also mention some related heuristics that could be useful to reduce the number of variables in the LMI relaxations.
Introducing SOSTOOLS: A General Purpose Sum of Squares Programming Solver
 Proceedings of the IEEE Conference on Decision and Control (CDC), Las Vegas, NV
, 2002
Abstract
Cited by 74 (16 self)
SOSTOOLS is a MATLAB toolbox for constructing and solving sum of squares programs. It can be used in combination with semidefinite programming software, such as SeDuMi, to solve many continuous and combinatorial optimization problems, as well as various control-related problems. This paper provides an overview of sum of squares programming, describes the primary features of SOSTOOLS, and shows how SOSTOOLS is used to solve sum of squares programs. Some applications from different areas are presented to show the wide applicability of sum of squares programming in general and SOSTOOLS in particular.
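SOSTOOLS itself is MATLAB; as a language-agnostic aside (not the SOSTOOLS API), the special case of a univariate quadratic shows what the sum-of-squares test decides, because there it collapses to a sign-and-discriminant check:

```python
# For p(x) = a*x^2 + b*x + c, being a sum of squares is equivalent to
# nonnegativity: a >= 0, c >= 0 and b^2 <= 4*a*c. In the multivariate
# case handled by SOSTOOLS, the analogous test is the existence of a
# positive semidefinite Gram matrix for p, solved as a semidefinite
# program.

def is_sos_quadratic(a, b, c):
    return a >= 0 and c >= 0 and b * b <= 4 * a * c

# x^2 + 2x + 1 = (x + 1)^2 is SOS; x^2 - 1 and x^2 + 3x + 1 are not,
# since both take negative values.
```

For polynomials of higher degree or in several variables, nonnegativity and SOS no longer coincide (Motzkin's polynomial being the classic gap), which is why the SDP-based SOS relaxation is a relaxation rather than an exact test.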
Feedback Control of Quantum State Reduction
, 2004
Abstract
Cited by 73 (5 self)
Feedback control of quantum mechanical systems must take into account the probabilistic nature of quantum measurement. We formulate quantum feedback control as a problem of stochastic nonlinear control by considering separately a quantum filtering problem and a state feedback control problem for the filter. We explore the use of stochastic Lyapunov techniques for the design of feedback controllers for quantum spin systems and demonstrate the possibility of stabilizing one outcome of a quantum measurement with unit probability.
Layering as optimization decomposition
 Proceedings of the IEEE
, 2007
Abstract
Cited by 64 (23 self)
Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are conducted through piecemeal approaches. They may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of “layering” as “optimization decomposition”, where the overall communication network is modeled by a generalized Network Utility Maximization (NUM) problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, each leading to a different layering architecture. This paper summarizes the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are listed, and open issues are discussed. Through case studies, it is illustrated how “Layering as Optimization Decomposition” provides a common language to think …
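The basic decomposition mechanism the survey builds on can be sketched in a few lines: a toy NUM instance (two log-utility sources sharing one unit-capacity link; all values illustrative, not from the paper) solved by dual decomposition, with the link price acting as the congestion feedback signal.

```python
# Dual decomposition for a toy NUM problem: maximize log(x1) + log(x2)
# subject to x1 + x2 <= capacity. Given the link price p, each source
# independently solves max_x log(x) - p*x, whose optimum is x = 1/p;
# the link then updates p by a subgradient step on the dual. The price
# plays the role of the congestion-control feedback signal.

def num_dual(capacity=1.0, n_sources=2, gamma=0.1, iters=2000):
    p = 1.0                                  # link price (dual variable)
    rates = [0.0] * n_sources
    for _ in range(iters):
        rates = [1.0 / p] * n_sources        # per-source best response to p
        p = max(p + gamma * (sum(rates) - capacity), 1e-6)
    return p, rates

price, rates = num_dual()                    # price -> 2, each rate -> 0.5
```

The sources never communicate with each other: they only react to the common price, which is exactly the "horizontal decomposition into distributed computation" the abstract refers to.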