Results 1–10 of 134
Which Problems Have Strongly Exponential Complexity?
 Journal of Computer and System Sciences
, 1998
"... For several NPcomplete problems, there have been a progression of better but still exponential algorithms. In this paper, we address the relative likelihood of subexponential algorithms for these problems. We introduce a generalized reduction which we call SubExponential Reduction Family (SERF) t ..."
Abstract

Cited by 249 (9 self)
For several NP-complete problems, there has been a progression of better but still exponential algorithms. In this paper, we address the relative likelihood of subexponential algorithms for these problems. We introduce a generalized reduction which we call Sub-Exponential Reduction Family (SERF) that preserves subexponential complexity. We show that Circuit-SAT is SERF-complete for all NP search problems, and that for any fixed k, k-SAT, k-Colorability, k-Set Cover, Independent Set, Clique, and Vertex Cover are SERF-complete for the class SNP of search problems expressible by second-order existential formulas whose first-order part is universal. In particular, subexponential complexity for any one of the above problems implies the same for all others. We also look at the issue of proving strongly exponential lower bounds for AC^0; that is, bounds of the form 2^{Ω(n)}. This problem is even open for depth-3 circuits. In fact, such a bound for depth-3 circuits with even l...
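The 2^n baseline that subexponential algorithms are measured against can be made concrete with an exhaustive k-SAT solver. This is a toy sketch, not from the paper; the function name and the DIMACS-style signed-literal clause encoding are our own choices:

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustive 2^n search over all truth assignments.

    A clause is a list of non-zero ints: literal v means variable v is
    true, -v means it is false (DIMACS-style, variables numbered from 1).
    Returns a satisfying assignment as a dict, or None if unsatisfiable.
    """
    for bits in product([False, True], repeat=num_vars):
        assign = {v + 1: bits[v] for v in range(num_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(3, clauses) is not None)  # True: a satisfying assignment exists
```

The SERF machinery asks whether, for any fixed k, the 2^n in this loop can be beaten down to 2^{o(n)}; the paper's completeness results tie that question together across many problems.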
Rank-sparsity incoherence for matrix decomposition
, 2010
"... Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown lowrank matrix. Our goal is to decompose the given matrix into its sparse and lowrank components. Such a problem arises in a number of applications in model and system identification, and is intractable ..."
Abstract

Cited by 229 (23 self)
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components, by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability as well as (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
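The ℓ1 term in such convex programs is typically handled through its proximal operator, entrywise soft-thresholding (the nuclear-norm term is handled analogously by soft-thresholding singular values). A minimal pure-Python sketch, with names of our own choosing:

```python
def soft_threshold(M, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.

    Shrinks each entry toward zero by tau; entries with |m| <= tau become 0.
    M is a list of rows of floats.
    """
    def shrink(m):
        if m > tau:
            return m - tau
        if m < -tau:
            return m + tau
        return 0.0
    return [[shrink(m) for m in row] for row in M]

M = [[3.0, -0.5], [0.2, -4.0]]
print(soft_threshold(M, 1.0))  # [[2.0, 0.0], [0.0, -3.0]]
```

Small entries are zeroed out, which is exactly the mechanism that promotes a sparse component in the decomposition.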
Graph Nonisomorphism Has Subexponential Size Proofs Unless the Polynomial-Time Hierarchy Collapses
 SIAM Journal on Computing
, 1998
"... We establish hardness versus randomness tradeoffs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round ArthurMerlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with acce ..."
Abstract

Cited by 120 (6 self)
We establish hardness versus randomness trade-offs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded-round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial-size oracle circuits with access to satisfiability. We show that every language with a bounded-round Arthur-Merlin game has subexponential-size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). This provides the first strong evidence that graph nonisomorphism has subexponential-size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given ...
An Improved Exponential-Time Algorithm for k-SAT
, 1998
"... We propose and analyze a simple new randomized algorithm, called ResolveSat, for finding satisfying assignments of Boolean formulas in conjunctive normal form. The algorithm consists of two stages: a preprocessing stage in which resolution is applied to enlarge the set of clauses of the formula, ..."
Abstract

Cited by 116 (7 self)
We propose and analyze a simple new randomized algorithm, called ResolveSat, for finding satisfying assignments of Boolean formulas in conjunctive normal form. The algorithm consists of two stages: a preprocessing stage in which resolution is applied to enlarge the set of clauses of the formula, followed by a search stage that uses a simple randomized greedy procedure to look for a satisfying assignment. We show that, for each k, the running time of ResolveSat on a k-CNF formula is significantly better than 2^n, even in the worst case. In particular, we show that the algorithm finds a satisfying assignment of a general satisfiable 3-CNF formula in time O(2^{0.448n}) with high probability, where the best previous algorithm [13] has running time O(2^{0.562n}). We obtain a better upper bound of 2^{(2 ln 2 − 1)n + o(n)} = O(2^{0.387n}) for 3-CNF formulas that have exactly one satisfying assignment (unique k-SAT). For each k, the bounds for general k-CNF are the best currently known for ...
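To give a flavor of randomized search for k-SAT, here is Schöning's classic random-walk procedure, not ResolveSat itself: restart from a random assignment, and repeatedly flip a random variable of some unsatisfied clause. A hedged sketch with our own encoding (DIMACS-style signed literals):

```python
import random

def schoening_walk(num_vars, clauses, tries=200, seed=0):
    """Schoening-style randomized local search for k-SAT (not ResolveSat).

    Each try: pick a uniformly random assignment, then for up to 3n steps
    find an unsatisfied clause and flip a random variable occurring in it.
    Returns a satisfying assignment dict, or None if none was found.
    """
    rng = random.Random(seed)
    for _ in range(tries):
        assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
        for _ in range(3 * num_vars):
            unsat = [c for c in clauses
                     if not any(assign[abs(l)] == (l > 0) for l in c)]
            if not unsat:
                return assign
            flip = abs(rng.choice(rng.choice(unsat)))
            assign[flip] = not assign[flip]
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

clauses = [[1, 2, 3], [-1, 2], [-2, 3], [-3, -1]]
print(schoening_walk(3, clauses) is not None)
```

Like ResolveSat, this beats exhaustive 2^n search in expectation for satisfiable k-CNF formulas, though with different (and, for k = 3, weaker than 2^{0.448n}) guarantees.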
Satisfiability coding lemma
 In Proceedings of the 38th IEEE Symposium on Foundations of Computer Science
, 1997
"... ..."
(Show Context)
Latent Variable Graphical Model Selection via Convex Optimization
, 2010
"... Suppose we have samples of a subset of a collection of random variables. No additional information is provided about the number of latent variables, nor of the relationship between the latent and observed variables. Is it possible to discover the number of hidden components, and to learn a statistic ..."
Abstract

Cited by 75 (5 self)
Suppose we have samples of a subset of a collection of random variables. No additional information is provided about the number of latent variables, nor about the relationship between the latent and observed variables. Is it possible to discover the number of hidden components, and to learn a statistical model over the entire collection of variables? We address this question in the setting in which the latent and observed variables are jointly Gaussian, with the conditional statistics of the observed variables conditioned on the latent variables being specified by a graphical model. As a first step we give natural conditions under which such latent-variable Gaussian graphical models are identifiable given marginal statistics of only the observed variables. Essentially, these conditions require that the conditional graphical model among the observed variables is sparse, while the effect of the latent variables is “spread out” over most of the observed variables. Next we propose a tractable convex program based on regularized maximum-likelihood for model selection in this latent-variable setting; the regularizer uses both the ℓ1 norm and the nuclear norm. Our modeling framework can be viewed as a combination of dimensionality reduction (to identify latent variables) and graphical modeling (to capture remaining statistical structure not attributable to the latent variables), and it consistently estimates both the number of hidden components and the conditional graphical model structure among the observed variables. These results are applicable in the high-dimensional setting in which the number of latent/observed variables grows with the number of samples of the observed variables. The geometric properties of the algebraic varieties of sparse matrices and of low-rank matrices play an important role in our analysis.
Arithmetic Circuits: a survey of recent results and open questions
"... A large class of problems in symbolic computation can be expressed as the task of computing some polynomials; and arithmetic circuits form the most standard model for studying the complexity of such computations. This algebraic model of computation attracted a large amount of research in the last fi ..."
Abstract

Cited by 65 (5 self)
A large class of problems in symbolic computation can be expressed as the task of computing some polynomials, and arithmetic circuits form the most standard model for studying the complexity of such computations. This algebraic model of computation has attracted a large amount of research in the last five decades, partially due to its simplicity and elegance. Being a more structured model than Boolean circuits, one could hope that the fundamental problems of theoretical computer science, such as separating P from NP, will be easier to solve for arithmetic circuits. However, in spite of the apparent simplicity and the vast amount of mathematical tools available, no major breakthrough has been seen. In fact, all the fundamental questions are still open for this model as well. Nevertheless, there has been a lot of progress in the area and beautiful results have been found, some in the last few years. As examples we mention the connection between polynomial identity testing and lower bounds of Kabanets and Impagliazzo, the lower bounds of Raz for multilinear formulas, and two new approaches for proving lower bounds: Geometric Complexity Theory and Elusive Functions. The goal of this monograph is to survey the field of arithmetic circuit complexity, focusing mainly on what we find to be the most interesting and accessible research directions. We aim to cover the main results and techniques, with an emphasis on works from the last two decades. In particular, we
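An arithmetic circuit is just a DAG whose gates are +, ×, inputs, and constants, and which computes a polynomial. A minimal evaluator makes the model concrete; the gate encoding below is our own, for illustration only:

```python
def eval_circuit(gates, inputs):
    """Evaluate a tiny arithmetic circuit given as a topologically ordered
    list of gates. Each gate is ('inp', name), ('const', value), or
    (op, i, j) with op in {'+', '*'} and i, j indices of earlier gates.
    Returns the value of the last gate.
    """
    vals = []
    for g in gates:
        if g[0] == 'inp':
            vals.append(inputs[g[1]])
        elif g[0] == 'const':
            vals.append(g[1])
        else:
            op, i, j = g
            vals.append(vals[i] + vals[j] if op == '+' else vals[i] * vals[j])
    return vals[-1]

# the polynomial (x + y) * (x * y), built from gates 0..4
gates = [('inp', 'x'), ('inp', 'y'), ('+', 0, 1), ('*', 0, 1), ('*', 2, 3)]
print(eval_circuit(gates, {'x': 2, 'y': 3}))  # (2+3)*(2*3) = 30
```

Circuit size is the number of gates, and the central open questions of the survey ask which explicit polynomials force this count to be superpolynomial.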
Spectral Methods for Matrix Rigidity with Applications to Size-Depth Tradeoffs and Communication Complexity
 In Proc. 36th
, 1996
"... The rigidity of a matrix measures the number of entries that must be changed in order to reduce its rank below a certain value. The known lower bounds on the rigidity of explicit matrices are very weak. It is known that stronger lower bounds would have implications to complexity theory. We consider ..."
Abstract

Cited by 51 (3 self)
The rigidity of a matrix measures the number of entries that must be changed in order to reduce its rank below a certain value. The known lower bounds on the rigidity of explicit matrices are very weak. It is known that stronger lower bounds would have implications for complexity theory. We consider restricted variants of the rigidity problem over the complex numbers. Using spectral methods, we derive lower bounds on these variants. Two applications of such restricted variants are given. First, we show that our lower bound on a variant of rigidity implies lower bounds on size-depth trade-offs for arithmetic circuits with bounded coefficients computing linear transformations. These bounds generalize a result of Nisan and Wigderson. The second application is conditional; we show that it would suffice to prove lower bounds on certain restricted forms of rigidity to conclude several separation results, such as separating the analogs of PH and PSPACE in communication complexity theory. Our res...
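The definition of rigidity can be checked directly on tiny matrices by brute force. The sketch below works over GF(2) rather than the complex numbers used in the paper, purely so that rank and entry changes are easy to enumerate; the function names and encoding are our own:

```python
from itertools import combinations, product

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), via elimination on row bitmasks."""
    rows = [int(''.join(str(b) for b in r), 2) for r in M]
    rank = 0
    for col in range(len(M[0]) - 1, -1, -1):
        pivot = next((i for i, r in enumerate(rows) if (r >> col) & 1), None)
        if pivot is None:
            continue
        p = rows.pop(pivot)
        rows = [r ^ p if (r >> col) & 1 else r for r in rows]
        rank += 1
    return rank

def rigidity(M, target_rank):
    """Minimum number of entry flips needed to bring rank(M) below
    target_rank over GF(2). Exhaustive search; feasible only for tiny M."""
    n, m = len(M), len(M[0])
    cells = list(product(range(n), range(m)))
    for k in range(n * m + 1):
        for S in combinations(cells, k):
            N = [row[:] for row in M]
            for (i, j) in S:
                N[i][j] ^= 1
            if rank_gf2(N) < target_rank:
                return k
    return n * m

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(rigidity(I3, 3))  # flips needed to drop the 3x3 identity below full rank
```

The exponential cost of this enumeration is exactly why explicit rigidity lower bounds, rather than computation, are what the paper is after.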
The complexity of constructing pseudorandom generators from hard functions
 Computational Complexity
, 2004
"... Abstract. We study the complexity of constructing pseudorandom generators (PRGs) from hard functions, focussing on constantdepth circuits. We show that, starting from a function f: {0, 1} l → {0, 1} computable in alternating time O(l) with O(1) alternations that is hard on average (i.e. there is a ..."
Abstract

Cited by 44 (9 self)
Abstract. We study the complexity of constructing pseudorandom generators (PRGs) from hard functions, focusing on constant-depth circuits. We show that, starting from a function f: {0,1}^l → {0,1} computable in alternating time O(l) with O(1) alternations that is hard on average (i.e. there is a constant ɛ > 0 such that every circuit of size 2^{ɛl} fails to compute f on at least a 1/poly(l) fraction of inputs), we can construct a PRG: {0,1}^{O(log n)} → {0,1}^n computable by DLOGTIME-uniform constant-depth circuits of size polynomial in n. Such a PRG implies BP·AC^0 = AC^0 under DLOGTIME-uniformity. On the negative side, we prove that starting from a worst-case hard function f: {0,1}^l → {0,1} (i.e. there is a constant ɛ > 0 such that every circuit of size 2^{ɛl} fails to compute f on some input), for every positive constant δ < 1 there is no black-box construction of a PRG: {0,1}^{δn} → {0,1}^n computable by constant-depth circuits of size polynomial in n. We also study worst-case hardness amplification, which is the related problem of producing an average-case hard function starting from a worst-case hard one. In particular, we deduce that there is no black-box worst-case hardness amplification within the polynomial-time hierarchy. These negative results are obtained by showing that polynomial-size constant-depth circuits cannot compute good extractors and list-decodable codes.