Satisfiability Allows No Nontrivial Sparsification Unless the Polynomial-Time Hierarchy Collapses
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 38 (2010)
Cited by 56 (2 self)
Consider the following two-player communication process to decide a language L: the first player holds the entire input x but is polynomially bounded; the second player is computationally unbounded but does not know any part of x; their goal is to cooperatively decide whether x belongs to L at small cost, where the cost measure is the number of bits of communication from the first player to the second player. For any integer d ≥ 3 and positive real ε we show that if satisfiability for n-variable d-CNF formulas has a protocol of cost O(n^(d−ε)) then coNP is in NP/poly, which implies that the polynomial-time hierarchy collapses to its third level. The result even holds when the first player is co-nondeterministic, and is tight, as there exists a trivial protocol for ε = 0. Under the hypothesis that coNP is not in NP/poly, our result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. By reduction, similar results hold for other NP-complete problems. For the vertex cover problem on n-vertex d-uniform hypergraphs, the above statement holds for any integer d ≥ 2. The case d = 2 implies that no NP-hard vertex deletion problem based on a graph property that is inherited by subgraphs can have kernels consisting of O(k^(2−ε)) edges unless coNP is in NP/poly, where k denotes the size of the deletion set. Kernels consisting of O(k^2) edges are known for several problems in the class, including vertex cover, feedback vertex set, and bounded-degree deletion.
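The tightness at ε = 0 rests on a counting argument: a d-CNF formula over n variables is a subset of the at most (2n)^d possible d-clauses, so the polynomially bounded player can describe the whole formula with one bit per possible clause. A small sanity check of that count (an illustration, not code from the paper):

```python
from itertools import combinations, product

def distinct_d_clauses(n, d):
    """Enumerate all distinct d-clauses over variables 1..n:
    choose d variables, then negate each chosen variable or not."""
    clauses = []
    for vars_ in combinations(range(1, n + 1), d):
        for signs in product((1, -1), repeat=d):
            clauses.append(tuple(s * v for s, v in zip(signs, vars_)))
    return clauses

# For n = 6, d = 3 there are C(6,3) * 2^3 = 160 distinct clauses,
# at most (2n)^d = 1728. A formula is a subset of these, so sending
# one bit per possible clause costs (2n)^d = O(n^d) bits -- the
# trivial protocol matching the lower bound at ε = 0.
n, d = 6, 3
assert len(distinct_d_clauses(n, d)) <= (2 * n) ** d
```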
Kernelization: New Upper and Lower Bound Techniques
 In Proc. of the 4th International Workshop on Parameterized and Exact Computation (IWPEC), volume 5917 of LNCS, 2009
Cited by 54 (0 self)
Abstract. In this survey, we look at kernelization: algorithms that transform, in polynomial time, an input of a problem into an equivalent input whose size is bounded by a function of a parameter. Several results of recent research on kernelization are mentioned. This survey looks at some recent results where a general technique shows the existence of kernelization algorithms for large classes of problems, in particular for planar graphs and generalizations of planar graphs, and at recent lower bound techniques that give evidence that certain types of kernelization algorithms do not exist.
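As a concrete illustration of the definition (the classical Buss kernel for Vertex Cover, a standard textbook example rather than a result of this survey): any vertex of degree greater than k must belong to every cover of size at most k, and once no such vertex remains, a yes-instance has at most k^2 edges.

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover (standard textbook rule).
    A vertex of degree > k must be in every cover of size <= k, so it
    is taken into the cover and removed; afterwards a yes-instance
    can retain at most k^2 edges. Returns (reduced edges, remaining
    budget), or None for a definite no-instance."""
    edges = set(edges)
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        for v, deg in degree.items():
            if deg > k:
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # more than k^2 edges survive: no cover of size k
    return edges, k

# A star with 5 leaves and budget k = 2: the center (degree 5 > 2) is
# forced into the cover, after which no edges remain.
assert buss_kernel([(0, i) for i in range(1, 6)], 2) == (set(), 1)
```

The output is an equivalent instance whose size depends only on the parameter k, which is exactly the kernelization guarantee the survey discusses.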
Solving MAX-r-SAT above a Tight Lower Bound
2010
Cited by 40 (15 self)
We present an exact algorithm that decides, for every fixed r ≥ 2, in time O(m) + 2^(O(k^2)), whether a given multiset of m clauses of size r admits a truth assignment that satisfies at least ((2^r − 1)m + k)/2^r clauses. Thus Max-r-Sat is fixed-parameter tractable when parameterized by the number of satisfied clauses above the tight lower bound (1 − 2^(−r))m. This solves an open problem of Mahajan, Raman and Sikdar (J. Comput. System Sci., 75, 2009). Our algorithm is based on a polynomial-time data reduction procedure that reduces a problem instance to an equivalent algebraically represented problem with O(k^2) variables. This is done by representing the instance as an appropriate polynomial, and by applying a probabilistic argument combined with some simple tools from harmonic analysis to show that if the polynomial cannot be reduced to one of size O(k^2), then there is a truth assignment satisfying the required number of clauses. We introduce a new notion of bi-kernelization from one parameterized problem to another and apply it to prove that the above-mentioned parameterized Max-r-Sat admits a polynomial-size kernel. Combining another probabilistic argument with tools from graph matching theory and signed graphs, we show that if an instance of Max-2-Sat with m clauses has at least 3k variables after application of certain polynomial-time reduction rules to it, then there is a truth assignment that satisfies at least (3m + k)/4 clauses. We also outline how the fixed-parameter tractability and polynomial-size kernel results on Max-r-Sat can be extended to more general families of Boolean …
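The lower bound (1 − 2^(−r))m is tight because a uniformly random assignment falsifies each clause of r distinct variables with probability exactly 2^(−r), so the expected number of satisfied clauses is (1 − 2^(−r))m and some assignment must reach it. A brute-force check of that expectation on a toy instance (illustration only, not the paper's algorithm):

```python
from itertools import product

def expected_satisfied(clauses, n):
    """Average number of satisfied clauses over all 2^n assignments.
    A clause is a tuple of literals; literal +i (resp. -i) means
    variable i is true (resp. false)."""
    total = 0
    for assignment in product([False, True], repeat=n):
        for clause in clauses:
            if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause):
                total += 1
    return total / 2 ** n

# Three 2-clauses over distinct variables: each is falsified with
# probability 2^-2 = 1/4, so the expectation is (1 - 1/4) * 3 = 2.25,
# hence some assignment satisfies at least 3 (the next integer up).
clauses = [(1, 2), (-1, 3), (2, -3)]
assert expected_satisfied(clauses, 3) == 2.25
```

The parameterization "above the lower bound" then asks how many clauses beyond this guaranteed (1 − 2^(−r))m can be satisfied.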
Cross-Composition: A New Technique for Kernelization Lower Bounds
2011
Cited by 38 (8 self)
We introduce a new technique for proving kernelization lower bounds, called cross-composition. A classical problem L cross-composes into a parameterized problem Q if an instance of Q with polynomially bounded parameter value can express the logical OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008), we show that if an NP-hard problem cross-composes into a parameterized problem Q, then Q does not admit a polynomial kernel unless the polynomial hierarchy collapses. Our technique generalizes and strengthens the recent techniques of using OR-composition algorithms and of transferring lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations: e.g., Chromatic Number, Clique, and Weighted Feedback Vertex Set do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. We have similar lower bounds for Feedback Vertex Set.
New Limits to Classical and Quantum Instance Compression
 ELECTRONIC COLLOQUIUM ON COMPUTATIONAL COMPLEXITY, REPORT NO. 112, 2012
Cited by 20 (1 self)
Given an instance of a hard decision problem, a limited goal is to compress that instance into a smaller, equivalent instance of a second problem. As one example, consider the problem where, given Boolean formulas ψ_1, …, ψ_t, we must determine if at least one ψ_j is satisfiable. An OR-compression scheme for SAT is a polynomial-time reduction R that maps (ψ_1, …, ψ_t) to a string z, such that z lies in some "target" language L′ if and only if ∨_j [ψ_j ∈ SAT] holds. (Here, L′ can be arbitrarily complex.) AND-compression schemes are defined similarly. A compression scheme is strong if |z| is polynomially bounded in n = max_j |ψ_j|, independent of t. Strong compression for SAT seems unlikely. Work of Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong OR-compression for SAT would show limits to instance compression for a large number of natural problems. Bodlaender et al. also showed that the infeasibility of strong AND-compression for SAT would have consequences for a different list of problems. Motivated by this, Fortnow and Santhanam (STOC '08/JCSS '11) showed that if SAT is strongly OR-compressible, …
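Without the size requirement, OR-compression is trivial: let z be the disjunction of the inputs, whose satisfiability is exactly ∨_j [ψ_j ∈ SAT]. The catch is that |z| then grows with t, whereas a strong scheme must be bounded in n alone. A sketch of this trivial map (the string encoding of formulas here is hypothetical):

```python
def trivial_or_reduction(formulas):
    """Map (psi_1, ..., psi_t) to a single string z whose membership
    in the target language 'some disjunct is satisfiable' is, by
    construction, the OR of the individual SAT questions. The output
    length grows linearly with t, so this is NOT strong compression."""
    return " OR ".join("(" + f + ")" for f in formulas)

z = trivial_or_reduction(["x1 & ~x2", "x3"])
assert z == "(x1 & ~x2) OR (x3)"
# len(z) scales with the number of instances t, not just with
# n = max length of a single formula -- exactly the dependence a
# strong compression scheme must avoid.
```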
Kernelization of Packing Problems
2011
Cited by 20 (2 self)
Kernelization algorithms are polynomial-time reductions from a problem to itself that guarantee their output to have a size not exceeding some bound. For example, d-Set Matching for integers d ≥ 3 is the problem of finding a matching of size at least k in a given d-uniform hypergraph, and has kernels with O(k^d) edges. Recently, Bodlaender et al. [ICALP 2008], Fortnow and Santhanam [STOC 2008], and Dell and Van Melkebeek [STOC 2010] developed a framework for proving lower bounds on the kernel size for certain problems, under the complexity-theoretic hypothesis that coNP is not contained in NP/poly. Under the same hypothesis, we show lower bounds for the kernelization of d-Set Matching and other packing problems. Our bounds are tight for d-Set Matching: it does not have kernels with O(k^(d−ε)) edges for any ε > 0 unless the hypothesis fails. By reduction, this transfers to a bound of O(k^(d−1−ε)) for the problem of finding k vertex-disjoint cliques of size d in standard graphs. It is natural to ask for tight bounds on the kernel sizes of such graph packing problems. We make first progress in that direction by showing non-trivial kernels with O(k^2.5) edges for the problem of finding k vertex-disjoint paths of three edges each. This does not quite match the best lower bound of O(k^(2−ε)) that we can prove. Most of our lower bound proofs follow a general scheme that we discover: to exclude kernels of size O(k^(d−ε)) for a problem in d-uniform hypergraphs, one should reduce from a carefully chosen d-partite problem that is still NP-hard. As an illustration, we apply this scheme to the vertex cover problem, which allows us to replace the number-theoretic construction by Dell and Van Melkebeek [STOC 2010] with shorter elementary arguments.
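To make the problem statement concrete, here is a brute-force decider for d-Set Matching on small instances (an illustration only, unrelated to the kernelization techniques of the paper):

```python
from itertools import combinations

def has_matching(hyperedges, k):
    """Decide whether a hypergraph (a list of frozensets) contains
    k pairwise disjoint edges, by trying every k-subset of edges."""
    for choice in combinations(hyperedges, k):
        used = set()
        disjoint = True
        for edge in choice:
            if used & edge:  # shares a vertex with an earlier pick
                disjoint = False
                break
            used |= edge
        if disjoint:
            return True
    return False

# A 3-uniform example: two disjoint triples give a matching of size 2,
# but every pair of the remaining edges overlaps, so size 3 fails.
edges = [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({3, 4, 7})]
assert has_matching(edges, 2)
assert not has_matching(edges, 3)
```

A kernel with O(k^d) edges shrinks such an instance before any exhaustive search; the paper shows this exponent d cannot be improved.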
Compression via Matroids: A Randomized Polynomial Kernel for Odd Cycle Transversal
Cited by 19 (4 self)
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite by deleting at most k of its vertices. In a breakthrough result, Reed, Smith, and Vetta (Operations Research Letters, 2004) gave an O(4^k · kmn) time algorithm for it, the first algorithm with polynomial runtime of uniform degree for every fixed k. It is known that this implies a polynomial-time compression algorithm that turns OCT instances into equivalent instances of size at most O(4^k), a so-called kernelization. Since then the existence of a polynomial kernel for OCT, i.e., a kernelization with size bounded polynomially in k, has turned into one of the main open questions in the study of kernelization. Despite the impressive progress in the area, including the recent development of lower bound techniques (Bodlaender …
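The problem definition itself is easy to operationalize: delete a candidate vertex set and test bipartiteness by 2-coloring. The following brute-force decider (exponential in n, purely to make the definition concrete; far from the parameterized algorithms discussed above) does exactly that:

```python
from itertools import combinations
from collections import deque

def is_bipartite(vertices, edges):
    """BFS 2-coloring; returns False iff some component has an odd cycle."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for start in vertices:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False  # odd cycle found
    return True

def odd_cycle_transversal(vertices, edges, k):
    """Is there a set of at most k vertices whose deletion leaves G bipartite?"""
    for size in range(k + 1):
        for deleted in combinations(vertices, size):
            rest = set(vertices) - set(deleted)
            kept = [e for e in edges if e[0] in rest and e[1] in rest]
            if is_bipartite(rest, kept):
                return True
    return False

# A triangle is an odd cycle: one deletion suffices, zero do not.
assert not odd_cycle_transversal([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 0)
assert odd_cycle_transversal([1, 2, 3], [(1, 2), (2, 3), (1, 3)], 1)
```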
Weak Compositions and Their Applications to Polynomial Lower Bounds for Kernelization
Cited by 18 (2 self)
Abstract. We introduce a new form of composition called weak composition that allows us to obtain polynomial kernelization lower bounds for several natural parameterized problems. Let d ≥ 2 be some constant and let L1, L2 ⊆ {0, 1}* × N be two parameterized problems where the unparameterized version of L1 is NP-hard. Assuming coNP ⊄ NP/poly, our framework essentially states that composing t L1-instances, each with parameter k, to an L2-instance with parameter k′ ≤ t^(1/d) · k^(O(1)) implies that L2 does not have a kernel of size O(k^(d−ε)) for any ε > 0. We show two examples of weak composition and derive polynomial kernelization lower bounds for d-Bipartite Regular Perfect Code and d-Dimensional Matching, parameterized by the solution size k. By reduction, using linear parameter transformations, we then derive the following lower bounds for kernel sizes when the parameter is the solution size k (assuming coNP ⊄ NP/poly): – d-Set Packing, d-Set Cover, d-Exact Set Cover, Hitting Set with d-Bounded Occurrences, and Exact Hitting Set with d-Bounded Occurrences have no kernels of size O(k^(d−3−ε)) for any ε > 0. – K_d Packing and Induced K_{1,d} Packing have no kernels of size O(k^(d−4−ε)) for any ε > 0. – d-Red-Blue Dominating Set and d-Steiner Tree have no kernels of sizes O(k^(d−3−ε)) and …
Linear kernels and single-exponential algorithms via protrusion decompositions.
 In Proc. of the 40th International Colloquium on Automata, Languages and Programming (ICALP), 2013
Cited by 15 (4 self)
Abstract. A t-treewidth-modulator of a graph G is a set X ⊆ V(G) such that the treewidth of G − X is at most t − 1. In this paper, we present a novel algorithm to compute a decomposition scheme for graphs G that come equipped with a t-treewidth-modulator. Similar decompositions have already been explicitly or implicitly used for obtaining polynomial kernels. Our first result is that any parameterized graph problem (with parameter k) that has finite integer index and is treewidth-bounding admits a linear kernel on the class of H-topological-minor-free graphs, where H is some arbitrary but fixed graph. A parameterized graph problem is called treewidth-bounding if all positive instances have a t-treewidth-modulator of size O(k), for some constant t. This result partially extends previous meta-theorems on the existence of linear kernels on graphs of bounded genus. Our second application concerns the Planar-F-Deletion problem. Let F be a fixed finite family of graphs containing at least one planar graph. Given an n-vertex graph G and a non-negative integer k, Planar-F-Deletion asks whether G has a set X ⊆ V(G) such that |X| ≤ k and G − X is H-minor-free for every H ∈ F. This problem encompasses a number of well-studied parameterized problems such as Vertex Cover, Feedback Vertex Set, and Treewidth-t Vertex Deletion. Very recently, an algorithm for Planar-F-Deletion with running time 2^(O(k)) · n log^2 n (such an algorithm is called single-exponential) has been presented in …