Results 1–10 of 202
Decoding Error-Correcting Codes via Linear Programming
, 2003
Abstract

Cited by 116 (5 self)
Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms that find good (albeit suboptimal) solutions to very difficult optimization problems. Our new “LP decoders” have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the …
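LP decoding relaxes the maximum-likelihood (ML) decoding problem that the abstract describes. As a point of reference only (this is our sketch, not the thesis's decoder), here is exhaustive ML decoding for a small binary linear code over a binary symmetric channel, where ML decoding reduces to nearest-codeword search in Hamming distance; the [7,4] Hamming generator matrix is just an illustrative choice:

```python
from itertools import product

def codewords(generator):
    """Enumerate all codewords of a binary linear code from its generator rows."""
    k = len(generator)
    words = set()
    for msg in product([0, 1], repeat=k):
        # codeword bit j = <msg, column j of G> over GF(2)
        cw = tuple(sum(m * g for m, g in zip(msg, col)) % 2
                   for col in zip(*generator))
        words.add(cw)
    return sorted(words)

def ml_decode(received, generator):
    """ML decoding over a binary symmetric channel: the most likely codeword
    is the one closest to `received` in Hamming distance."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codewords(generator), key=lambda cw: dist(cw, received))

# One common systematic generator matrix for the [7,4] Hamming code.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
```

The brute-force search is exponential in the code dimension; the LP decoder's point is to replace it with a polyhedral relaxation whose integral optima certify ML optimality.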
Settling the Complexity of Computing Two-Player Nash Equilibria
Abstract

Cited by 88 (5 self)
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis, Goldberg, and Papadimitriou on the complexity of four-player Nash equilibria [21], settles a long-standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems:
• Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time.
• The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time.
Our results also have a complexity implication in mathematical economics:
• Arrow-Debreu market equilibria are PPAD-hard to compute.
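To make the object being computed concrete, the following sketch (ours, not the paper's) checks whether a pair of mixed strategies is a Nash equilibrium of a bimatrix game (A, B), using the standard fact that it suffices to rule out profitable pure-strategy deviations; the matching-pennies payoffs are an illustrative example:

```python
def expected_payoff(M, x, y):
    """Expected payoff x^T M y under mixed strategies x (rows) and y (columns)."""
    return sum(x[i] * M[i][j] * y[j]
               for i in range(len(M)) for j in range(len(M[0])))

def is_nash(A, B, x, y, tol=1e-9):
    """(x, y) is a Nash equilibrium of (A, B) iff no pure deviation beats
    the current expected payoff for either player."""
    vx = expected_payoff(A, x, y)
    vy = expected_payoff(B, x, y)
    row_dev = max(sum(A[i][j] * y[j] for j in range(len(y)))
                  for i in range(len(A)))
    col_dev = max(sum(x[i] * B[i][j] for i in range(len(x)))
                  for j in range(len(B[0])))
    return row_dev <= vx + tol and col_dev <= vy + tol

# Matching pennies: the unique equilibrium mixes uniformly for both players.
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
```

Verifying a candidate equilibrium is easy; the paper's point is that finding one is PPAD-complete.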
Computing Nash equilibria: Approximation and smoothed complexity
 In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS)
, 2006
Abstract

Cited by 85 (11 self)
By proving that the problem of computing a 1/n^Θ(1)-approximate Nash equilibrium remains PPAD-complete, we show that the BIMATRIX game is not likely to have a fully polynomial-time approximation scheme. In other words, no algorithm with time polynomial in n and 1/ε can compute an ε-approximate Nash equilibrium of an n×n bimatrix game, unless PPAD ⊆ P. Instrumental to our proof, we introduce a new discrete fixed-point problem on a high-dimensional cube with a constant side length, such as an n-dimensional cube with side length 7, and show that it is PPAD-complete. Furthermore, we prove that it is unlikely, unless PPAD ⊆ RP, that the smoothed complexity of the Lemke-Howson algorithm, or of any algorithm for computing a Nash equilibrium of a bimatrix game, is polynomial in n and 1/σ under perturbations with magnitude σ. Our result answers a major open question in the smoothed analysis of algorithms and the approximation of Nash equilibria.
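The role of ε can be illustrated with a toy brute-force search (ours, not the paper's construction): for a fixed 2×2 game, scanning a grid of mixed strategies with spacing 1/steps finds a pair whose best unilateral deviation gain is small, at a cost growing with the grid resolution, i.e. with 1/ε:

```python
def eps_of(A, B, x, y):
    """Largest gain any player obtains from a unilateral pure deviation."""
    n, m = len(A), len(A[0])
    vx = sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(m))
    vy = sum(x[i] * B[i][j] * y[j] for i in range(n) for j in range(m))
    dx = max(sum(A[i][j] * y[j] for j in range(m)) for i in range(n)) - vx
    dy = max(sum(x[i] * B[i][j] for i in range(n)) for j in range(m)) - vy
    return max(dx, dy, 0.0)

def grid_approx_nash(A, B, steps):
    """Scan a (steps+1)^2 grid of 2x2 mixed-strategy pairs and return the
    pair with the smallest deviation gain (an eps-approximate equilibrium)."""
    grid = [k / steps for k in range(steps + 1)]
    best, best_eps = None, float("inf")
    for p in grid:
        for q in grid:
            e = eps_of(A, B, [p, 1 - p], [q, 1 - q])
            if e < best_eps:
                best, best_eps = ([p, 1 - p], [q, 1 - q]), e
    return best, best_eps

# Matching pennies again, as an illustrative 2x2 game.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
```

For fixed game size this is polynomial in 1/ε; the hardness result says no algorithm can be polynomial in both n and 1/ε for n×n games unless PPAD ⊆ P.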
The Effectiveness of Lloyd-Type Methods for the k-Means Problem
 In FOCS
, 2006
Abstract

Cited by 84 (3 self)
We investigate variants of Lloyd’s heuristic for clustering high-dimensional data in an attempt to explain its popularity (a half century after its introduction) among practitioners, and in order to suggest improvements in its application. We propose and justify a clusterability criterion for data sets. We present variants of Lloyd’s heuristic that quickly lead to provably near-optimal clustering solutions when applied to well-clusterable instances. This is the first performance guarantee for a variant of Lloyd’s heuristic. The provision of a guarantee on output quality does not come at the expense of speed: some of our algorithms are candidates for being faster in practice than currently used variants of Lloyd’s method. In addition, our other algorithms are faster on well-clusterable instances than recently proposed approximation algorithms, while maintaining similar guarantees on clustering quality. Our main algorithmic contribution is a novel probabilistic seeding process for the starting configuration of a Lloyd-type iteration.
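The paper's seeding procedure differs in detail, but a D²-sampling sketch in the same spirit (each new center drawn with probability proportional to its squared distance from the centers chosen so far), followed by standard Lloyd iterations, looks roughly like this for 1-D points; all names here are illustrative:

```python
import random

def d2_seed(points, k, rng):
    """Probabilistic seeding: pick k starting centers, each new one sampled
    with probability proportional to squared distance from the nearest
    already-chosen center (a D^2-sampling sketch, not the paper's exact rule)."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def lloyd(points, centers, iters=20):
    """Standard Lloyd iterations: assign each point to its nearest center,
    then move each center to the mean of its cluster."""
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

On a well-clusterable instance (two tight, far-apart groups), the seeding almost always places one center per group, after which Lloyd converges immediately.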
Smoothed Analysis of the Condition Numbers and Growth Factors of Matrices
 SIAM Journal on Matrix Analysis
, 2006
Abstract

Cited by 65 (4 self)
Let Ā be any matrix and let A be a slight random perturbation of Ā. We prove that it is unlikely that A has large condition number. Using this result, we prove it is unlikely that A has a large growth factor under Gaussian elimination without pivoting. By combining these results, we bound the smoothed precision needed by Gaussian elimination without pivoting. Our results improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 1997).
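The growth factor in question can be measured directly. The sketch below (ours, for illustration only) runs Gaussian elimination without pivoting and reports the growth factor; a tiny pivot produces enormous growth, while a small Gaussian perturbation of the same matrix typically tames it, which is the smoothed-analysis phenomenon the paper quantifies:

```python
import random

def growth_factor(A):
    """Gaussian elimination without pivoting on a copy of A; returns
    max |intermediate entry| / max |original entry|, or None on a zero pivot."""
    n = len(A)
    U = [row[:] for row in A]
    scale = max(abs(e) for row in A for e in row)
    biggest = scale
    for k in range(n):
        if U[k][k] == 0:
            return None
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
        biggest = max(biggest, max(abs(e) for row in U for e in row))
    return biggest / scale

# A near-zero pivot causes huge growth without pivoting ...
bad = [[1e-12, 1.0], [1.0, 1.0]]
# ... while a slight random perturbation (sigma = 0.1) typically tames it.
rng = random.Random(1)
perturbed = [[e + rng.gauss(0, 0.1) for e in row] for row in bad]
```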
Random matrices: The circular law
, 2008
Abstract

Cited by 58 (14 self)
Let x be a complex random variable with mean zero and bounded variance σ². Let Nn be a random matrix of order n with entries being i.i.d. copies of x. Let λ1, ..., λn be the eigenvalues of …
Mechanism Design for Policy Routing
, 2006
Abstract

Cited by 56 (8 self)
The Border Gateway Protocol (BGP) for interdomain routing is designed to allow autonomous systems (ASes) to express policy preferences over alternative routes. We model these preferences as arising from an AS’s underlying utility for each route and study the problem of finding a set of routes that maximizes the overall welfare (i.e., the sum of all ASes’ utilities for their selected routes). We show that, if the utility functions are unrestricted, this problem is NP-hard even to approximate closely. We then study a natural class of restricted utilities that we call next-hop preferences. We present a strategyproof, polynomial-time computable mechanism for welfare-maximizing routing over this restricted domain. However, we show that, in contrast to earlier work on lowest-cost routing mechanism design, this mechanism appears to be incompatible with …
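A brute-force version of the welfare-maximization problem (our toy model, not the paper's mechanism) makes the next-hop structure concrete: each AS chooses one neighbor as its next hop, the assignment must actually route everyone to the destination (no loops), and welfare is the sum of each AS's utility for its chosen next hop:

```python
from itertools import product

def best_routes(neighbors, utility, dest):
    """Exhaustive search over next-hop assignments: keep only assignments
    under which every AS reaches `dest` by following next hops, and return
    the welfare-maximizing one with its total welfare."""
    ases = [v for v in neighbors if v != dest]
    best, best_welfare = None, float("-inf")
    for choice in product(*(neighbors[a] for a in ases)):
        nxt = dict(zip(ases, choice))
        def reaches(v, seen=()):
            if v == dest:
                return True
            if v in seen:       # routing loop
                return False
            return reaches(nxt[v], seen + (v,))
        if all(reaches(a) for a in ases):
            w = sum(utility[a][nxt[a]] for a in ases)
            if w > best_welfare:
                best, best_welfare = nxt, w
    return best, best_welfare

# Illustrative two-AS instance: each AS prefers the other as next hop,
# but that choice is a loop, so the optimum routes one AS through the other.
neighbors = {"a": ["b", "d"], "b": ["a", "d"], "d": []}
utility = {"a": {"b": 5, "d": 1}, "b": {"a": 5, "d": 1}}
```

The loop constraint is exactly what keeps next-hop welfare maximization from decomposing into independent per-AS choices.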
From the Littlewood-Offord problem to the Circular Law: Universality of the spectral distribution of random matrices
 Bull. Amer. Math. Soc.
, 2009
Abstract

Cited by 51 (7 self)
The famous circular law asserts that if Mn is an n×n matrix with i.i.d. complex entries of mean zero and unit variance, then the empirical spectral distribution of the normalized matrix (1/√n)Mn converges both in probability and almost surely to the uniform distribution on the unit disk {z ∈ C : |z| ≤ 1}. After a long sequence of partial results that verified this law under additional assumptions on the distribution of the entries, the circular law is now known to be true for arbitrary distributions with mean zero and unit variance. In this survey we describe some of the key ingredients used in the establishment of the circular law at this level of generality, in particular recent advances in understanding the Littlewood-Offord problem and its inverse.
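The forward Littlewood-Offord problem asks how concentrated a signed sum Σ εᵢvᵢ of fixed nonzero vᵢ with uniform random signs εᵢ ∈ {−1, +1} can be; Erdős showed the largest point probability is at most C(n, ⌊n/2⌋)/2ⁿ, attained when all vᵢ are equal. A small exhaustive check (illustrative only):

```python
from itertools import product
from math import comb

def max_atom(v):
    """Largest point probability of S = sum(eps_i * v_i) over uniform signs:
    the concentration probability studied in the Littlewood-Offord problem."""
    counts = {}
    for signs in product([-1, 1], repeat=len(v)):
        s = sum(e * x for e, x in zip(signs, v))
        counts[s] = counts.get(s, 0) + 1
    return max(counts.values()) / 2 ** len(v)
```

Equal values maximize concentration, while spread-out values (e.g. distinct powers of two) make every signed sum distinct, driving the atom down to 2⁻ⁿ; the inverse theory mentioned in the abstract characterizes which v sit near the concentrated extreme.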
Why simple hash functions work: Exploiting the entropy in a data stream
 In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms
, 2008
Abstract

Cited by 49 (8 self)
Hashing is fundamental to many algorithms and data structures widely used in practice. For theoretical analysis of hashing, there have been two main approaches. First, one can assume that the hash function is truly random, mapping each data item independently and uniformly to the range. This idealized model is unrealistic because a truly random hash function requires an exponential number of bits to describe. Alternatively, one can provide rigorous bounds on performance when explicit families of hash functions are used, such as 2-universal or O(1)-wise independent families. For such families, performance guarantees are often noticeably weaker than for ideal hashing. In practice, however, it is commonly observed that weak hash functions, including 2-universal hash functions, perform as predicted by the idealized analysis for truly random hash functions. In this paper, we try to explain this phenomenon. We demonstrate that the strong performance of universal hash functions in practice can arise naturally from a combination of the randomness of the hash function and the data. Specifically, following the large body of literature on random sources and randomness extraction, we model the data as coming from a “block source,” whereby …
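A standard concrete instance of a 2-universal family is the Carter-Wegman construction h_{a,b}(x) = ((ax + b) mod p) mod m with prime p larger than the key universe; the sketch below (illustrative, not the paper's model) computes the exact collision probability of two fixed keys over the whole family, which 2-universality bounds by roughly 1/m:

```python
def hash_family(p, m):
    """Carter-Wegman 2-universal family: h_{a,b}(x) = ((a*x + b) mod p) mod m,
    with prime p > universe size, a in [1, p), b in [0, p)."""
    def h(a, b, x):
        return ((a * x + b) % p) % m
    return h

def collision_prob(p, m, x, y):
    """Exact collision probability of two fixed distinct keys x != y,
    averaged over every (a, b) in the family."""
    h = hash_family(p, m)
    hits = sum(1 for a in range(1, p) for b in range(p)
               if h(a, b, x) == h(a, b, y))
    return hits / ((p - 1) * p)
```

This pairwise guarantee is exactly the "weak" property the paper starts from; the block-source argument explains why it often behaves like truly random hashing on real data.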
Random knapsack in expected polynomial time
 In Proc. 35th Annual ACM Symposium on Theory of Computing (STOC 2003)
, 2003
Abstract

Cited by 48 (10 self)
We present the first average-case analysis proving a polynomial upper bound on the expected running time of an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of Pareto-optimal knapsack fillings is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently, so a polynomial upper bound on the number of Pareto-optimal solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is quite general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.
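The Nemhauser-Ullmann algorithm mentioned above can be sketched in a few lines (a minimal illustration, not the paper's analysis): maintain the list of Pareto-optimal (weight, profit) fillings, and for each item merge the current list with a copy shifted by that item, pruning dominated pairs. Its running time is polynomial in the length of the Pareto list, which is what the paper bounds in expectation:

```python
def pareto_knapsack(items):
    """Nemhauser-Ullmann list algorithm: enumerate all Pareto-optimal
    (weight, profit) knapsack fillings for items given as (weight, profit)."""
    pareto = [(0, 0)]
    for w, p in items:
        shifted = [(pw + w, pp + p) for pw, pp in pareto]
        # Sort by weight, breaking ties by higher profit first, then keep
        # only entries whose profit strictly improves on everything lighter.
        merged = sorted(pareto + shifted, key=lambda t: (t[0], -t[1]))
        pareto, best = [], float("-inf")
        for mw, mp in merged:
            if mp > best:
                pareto.append((mw, mp))
                best = mp
    return pareto
```

An optimal solution for any capacity C is then the most profitable Pareto point of weight at most C.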