Results 1–10 of 1,380,920
Global Optimization with Polynomials and the Problem of Moments
SIAM Journal on Optimization, 2001
"... We consider the problem of finding the unconstrained global minimum of a realvalued polynomial p(x) : R R, as well as the global minimum of p(x), in a compact set K defined by polynomial inequalities. It is shown that this problem reduces to solving an (often finite) sequence of convex linear ma ..."
Cited by 577 (48 self)
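The LMI reduction the abstract describes can be tried in a few lines. Below is a minimal sketch (not the paper's code) of the univariate degree-4 case, assuming cvxpy with its bundled SDP solver: maximize γ such that p(x) - γ is a sum of squares, i.e. equals v(x)^T Q v(x) for a PSD Gram matrix Q with v(x) = [1, x, x^2].

```python
import cvxpy as cp
import numpy as np

# Toy polynomial p(x) = x^4 - 3x^2 + 1; its true global minimum is -1.25.
c = np.array([1.0, 0.0, -3.0, 0.0, 1.0])   # coefficient of x^k at index k

Q = cp.Variable((3, 3), PSD=True)           # Gram matrix for v(x) = [1, x, x^2]
gamma = cp.Variable()

# Match coefficients of p(x) - gamma with those of v(x)^T Q v(x).
constraints = []
for k in range(5):
    coeff = sum(Q[i, k - i] for i in range(3) if 0 <= k - i < 3)
    constraints.append(coeff == (c[0] - gamma if k == 0 else c[k]))

cp.Problem(cp.Maximize(gamma), constraints).solve()
print(gamma.value)   # ~ -1.25: the SOS lower bound is tight for this example
```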
Exact Matrix Completion via Convex Optimization
2008
"... We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfe ..."
Cited by 873 (26 self)
by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold
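The convex program the snippet names, find the matrix of minimum nuclear norm that fits the observed entries, is easy to state concretely. A toy sketch assuming cvxpy; the instance size and sampling rate are illustrative, not from the paper:

```python
import cvxpy as cp
import numpy as np

# Recover a rank-1 5x5 matrix from a random subset of its entries.
rng = np.random.default_rng(0)
u = rng.standard_normal(5)
M = np.outer(u, u)                               # ground-truth low-rank matrix
mask = (rng.random((5, 5)) < 0.7).astype(float)  # ~70% of entries observed

X = cp.Variable((5, 5))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(mask, X) == mask * M])
prob.solve()
print("recovery error:", np.linalg.norm(X.value - M))
```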
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ¹ minimization
Proc. Natl. Acad. Sci. USA 100:2197–2202, 2002
"... Given a ‘dictionary’ D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑ k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered ..."
Cited by 633 (38 self)
optimization problem: specifically, minimizing the ℓ¹ norm of the coefficients γ. In this paper, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We introduce the Spark, a measure of linear dependence
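The convex surrogate described here, minimizing the ℓ¹ norm of the coefficients subject to exact representation (basis pursuit), looks like this in practice. A toy sketch assuming cvxpy; the dictionary and signal are synthetic:

```python
import cvxpy as cp
import numpy as np

# Overcomplete dictionary: 40 atoms in R^20, and a signal with a 2-sparse
# representation that l1 minimization should recover (with high probability).
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 40))
g_true = np.zeros(40)
g_true[[3, 17]] = [1.0, -2.0]
S = D @ g_true

gamma = cp.Variable(40)
prob = cp.Problem(cp.Minimize(cp.norm1(gamma)), [D @ gamma == S])
prob.solve()
print(np.round(gamma.value, 3))   # ideally nonzero only at indices 3 and 17
```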
Optimal approximations by piecewise smooth functions and associated variational problems
Commun. Pure Appl. Math., 1989
"... (Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation Mumford, David Bryant, and Jayant Shah. 1989. Optimal approximations by piecewise smooth functions and associated variational problems. ..."
Cited by 1294 (14 self)
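The record above preserves only the repository citation; for orientation, the variational problem the paper studies is the functional now named for its authors, as standardly stated in the literature (notation reconstructed from that literature, not from this record):

```latex
% Approximate an image g on a region R by a piecewise smooth f whose
% discontinuities lie on a curve set \Gamma; |\Gamma| is total edge length.
E(f, \Gamma) = \mu^2 \iint_R (f - g)^2 \, dx \, dy
             + \iint_{R \setminus \Gamma} \lVert \nabla f \rVert^2 \, dx \, dy
             + \nu \, |\Gamma|
```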
No Free Lunch Theorems for Optimization
1997
"... A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch ” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performan ..."
Cited by 961 (10 self)
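The averaging argument behind the NFL theorems can be checked exhaustively on a tiny domain. The sketch below covers only a non-adaptive special case (two fixed search orders), which is enough to see the effect; the domain sizes are arbitrary toy choices:

```python
from collections import Counter
from itertools import product

# Over ALL functions f: X -> Y on a small finite domain, two distinct fixed
# search orders produce identical histograms of "best value after k steps".
X, Y = range(4), range(3)
order_a = [0, 1, 2, 3]          # algorithm A: sweep left to right
order_b = [2, 0, 3, 1]          # algorithm B: a different fixed order

def best_after(f, order, k):
    return min(f[x] for x in order[:k])

for k in (1, 2, 3, 4):
    hist_a = Counter(best_after(f, order_a, k) for f in product(Y, repeat=len(X)))
    hist_b = Counter(best_after(f, order_b, k) for f in product(Y, repeat=len(X)))
    assert hist_a == hist_b     # performance distributions match exactly
print("identical histograms for every k: neither order wins on average")
```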
Some optimal inapproximability results
2002
"... We prove optimal, up to an arbitrary ffl? 0, inapproximability results for MaxEkSat for k * 3, maximizing the number of satisfied linear equations in an overdetermined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for ..."
Cited by 751 (11 self)
for the efficient approximability of many optimization problems studied previously. In particular, for Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex Cover. Warning: Essentially this paper has been published in JACM and is subject to copyright restrictions. In particular it is for personal use only.
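For a concrete sense of what "optimal" means here: the trivial randomized algorithm already achieves the ratio the paper proves hard to beat. A toy sketch for E3-Lin-2 (three-variable linear equations mod 2, the p = 2 case of the systems above); the instance is random and illustrative, not from the paper:

```python
import random

# A uniformly random assignment satisfies about 1/2 of the equations;
# Håstad shows that guaranteeing 1/2 + eps of them is NP-hard.
random.seed(0)
n, m = 50, 20000
eqs = [([random.randrange(n) for _ in range(3)], random.randrange(2))
       for _ in range(m)]                     # equations x_i + x_j + x_k = b (mod 2)

x = [random.randrange(2) for _ in range(n)]   # uniformly random assignment
sat = sum((x[i] + x[j] + x[k]) % 2 == b for (i, j, k), b in eqs)
print(sat / m)                                # close to 1/2
```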
Learnability in Optimality Theory
1995
"... In this article we show how Optimality Theory yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of substantive constraints defining any given gr ..."
Cited by 529 (35 self)
grammatical module. We decompose the learning problem and present formal results for a central subproblem, deducing the constraint ranking particular to a target language, given structural descriptions of positive examples. The structure imposed on the space of possible grammars by Optimality Theory allows
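The Constraint Demotion idea can be sketched as recursive stratification: repeatedly rank every constraint that prefers no remaining loser, then discard the winner/loser pairs those constraints already decide. A minimal sketch in the spirit of Tesar and Smolensky's Recursive Constraint Demotion; the constraint names and violation data are hypothetical:

```python
# For each (winner, loser) pair: {constraint: (winner_violations, loser_violations)}.
# A constraint "prefers the loser" in a pair if the winner violates it more.
pairs = [
    {"Onset": (0, 1), "NoCoda": (1, 0), "Faith": (0, 0)},
    {"Onset": (0, 0), "NoCoda": (1, 1), "Faith": (0, 1)},
]
constraints = {"Onset", "NoCoda", "Faith"}

ranking = []                       # list of strata, highest-ranked first
remaining = list(pairs)
while constraints:
    # A constraint may be ranked now iff it prefers no remaining loser.
    stratum = {c for c in constraints
               if all(p[c][0] <= p[c][1] for p in remaining)}
    if not stratum:
        raise ValueError("data not consistent with any ranking")
    ranking.append(stratum)
    constraints -= stratum
    # Drop pairs already decided by some constraint in this stratum.
    remaining = [p for p in remaining
                 if not any(p[c][0] < p[c][1] for c in stratum)]
print(ranking)   # e.g. [{'Onset', 'Faith'}, {'NoCoda'}]
```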
A Limited Memory Algorithm for Bound Constrained Optimization
SIAM Journal on Scientific Computing, 1994
"... An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based ..."
Cited by 572 (9 self)
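The algorithm this paper describes (L-BFGS-B) has widely available implementations; SciPy wraps one. A toy bound-constrained problem, with objective and box chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic with unconstrained minimizer (2, -1), clipped by the box [0, 1]^2.
def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

res = minimize(f, x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print(res.x)   # -> [1., 0.], the minimizer projected onto the box
```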
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
2002
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Cited by 597 (24 self)
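SNOPT itself is a commercial code, but the SQP setting of the abstract can be exercised with SciPy's SLSQP, a different SQP implementation, on a toy smooth problem with general inequality constraints (problem data is illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize ||x||^2 subject to a linear and a nonlinear inequality constraint.
def f(x):
    return x[0] ** 2 + x[1] ** 2

cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},   # x0 + x1 >= 1
        {"type": "ineq", "fun": lambda x: 2.0 - x[0] ** 2}]     # x0^2 <= 2

res = minimize(f, x0=np.array([2.0, 2.0]), method="SLSQP", constraints=cons)
print(res.x)   # -> [0.5, 0.5]
```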
Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms
Evolutionary Computation, 1994
"... In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands the user to have knowledge about t ..."
Cited by 539 (5 self)
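The nondominated sorting step at the heart of this method partitions a population into successive Pareto fronts instead of scalarizing the objectives. A minimal, quadratic-time sketch (toy bi-objective data, all objectives minimized):

```python
# a dominates b if a is no worse in every objective and strictly better in one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pop = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]   # toy objective vectors
print(nondominated_fronts(pop))
# -> [[(1, 5), (2, 2), (4, 1)], [(3, 3)], [(5, 5)]]
```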