Results 1–10 of 14
Recent developments in kernelization: A survey
Cited by 6 (0 self)
Kernelization is a formalization of efficient preprocessing, aimed mainly at combinatorially hard problems. Empirically, preprocessing is highly successful in practice, e.g., in state-of-the-art SAT and ILP solvers. The notion of kernelization from parameterized complexity makes it possible to rigorously prove upper and lower bounds on, e.g., the maximum output size of a preprocessing in terms of one or more problem-specific parameters. This avoids the often-raised issue that we should not expect an efficient algorithm that provably shrinks every instance of any NP-hard problem. In this survey, we give a general introduction to the area of kernelization and then discuss some recent developments. After the introductory material we attempt a reasonably self-contained update and introduction on the following topics: (1) Lower bounds for kernelization, taking into account the recent progress on the AND-conjecture. (2) The use of matroids and representative sets for kernelization. (3) Turing kernelization, i.e., understanding preprocessing that adaptively or non-adaptively creates a large number of small outputs.
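The upper-bound side of kernelization can be made concrete with a classic example not taken from the survey itself: Buss's reduction rules for Vertex Cover. A vertex of degree greater than k must belong to every size-at-most-k cover, and after exhaustively applying this rule a yes-instance has at most k^2 edges. The function name and edge-set representation below are illustrative assumptions:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover (illustrative sketch):
    returns a reduced (edges, k) pair with at most k^2 edges,
    or None if (G, k) is recognized as a no-instance."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        # Count degrees in the current edge set (isolated vertices
        # never appear here, so they are implicitly discarded).
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # Rule: a vertex of degree > k is in every size-<=k cover,
        # so take it into the cover and delete its incident edges.
        high = [v for v, d in deg.items() if d > k]
        if high:
            v = high[0]
            edges = {e for e in edges if v not in e}
            k -= 1
            changed = True
        if k < 0:
            return None  # budget exhausted: no-instance
    # After exhaustive reduction, a yes-instance has at most k^2 edges.
    if len(edges) > k * k:
        return None
    return edges, k
```

For example, a star with five leaves and k = 1 reduces to the empty instance, while a triangle with k = 1 is rejected outright.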
Turing kernelization for finding long paths and cycles in restricted graph classes
In Proc. 22nd ESA, 2014
Cited by 4 (1 self)
We analyze the potential for provably effective preprocessing for the problems of finding paths and cycles with at least k edges. Several years ago, the question was raised whether the existing superpolynomial kernelization lower bounds for k-Path and k-Cycle can be circumvented by relaxing the requirement that the preprocessing algorithm outputs a single instance. To this date, very few examples are known where the relaxation to Turing kernelization is fruitful. We provide a novel example by giving polynomial-size Turing kernels for k-Path and k-Cycle on planar graphs, graphs of maximum degree t, claw-free graphs, and K3,t-minor-free graphs, for each constant t ≥ 3. Concretely, we present algorithms for k-Path (k-Cycle) on these restricted graph families that run in polynomial time when they are allowed to query an external oracle for the answers to k-Path (k-Cycle) instances of size and parameter bounded polynomially in k. Our kernelization schemes are based on a new methodology called Decompose-Query-Reduce.
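The query model used by such algorithms can be sketched generically. The loop below is a hypothetical stand-in for the paper's Decompose-Query-Reduce scheme, not its actual algorithm: the outer routine runs in polynomial time and may ask an oracle about many sub-instances, each required to be small as a function of k (the bound k*k is a placeholder polynomial).

```python
def turing_kernel_solve(pieces, k, oracle):
    """Schematic Turing kernel loop (illustrative assumption, not the
    paper's method): instead of shrinking the input to one kernel, a
    polynomial-time outer algorithm asks an oracle about many small
    sub-instances and combines the answers (here: a simple OR)."""
    bound = k * k  # placeholder: queries must be polynomial in k
    for piece in pieces:
        assert len(piece) <= bound, "each oracle query must be small in k"
        if oracle(piece, k):
            return True  # a yes-answer was found via some small query
    return False
```

A toy usage: with an oracle that checks whether a piece sums to at least k, querying pieces [1] and [2, 3] for k = 2 answers yes via the second piece.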
On sparsification for computing treewidth
In Proceedings of IPEC, 2013
Cited by 3 (0 self)
We investigate whether an n-vertex instance (G, k) of Treewidth, asking whether the graph G has treewidth at most k, can efficiently be made sparse without changing its answer. By giving a special form of or-cross-composition, we prove that this is unlikely: if there is an ε > 0 and a polynomial-time algorithm that reduces n-vertex Treewidth instances to equivalent instances, of an arbitrary problem, with O(n^{2-ε}) bits, then NP ⊆ coNP/poly and the polynomial hierarchy collapses to its third level. Our sparsification lower bound has implications for structural parameterizations of Treewidth: parameterizations by measures that do not exceed the vertex count cannot have kernels with O(k^{2-ε}) bits for any ε > 0, unless NP ⊆ coNP/poly. Motivated by the question of determining the optimal kernel size for Treewidth parameterized by vertex cover, we improve the O(k^3)-vertex kernel from Bodlaender et al. (STACS 2011) to a kernel with O(k^2) vertices. Our improved kernel is based on a novel form of treewidth-invariant set. We use the q-expansion lemma of Fomin et al. (STACS 2011) to find such sets efficiently in graphs whose vertex count is superquadratic in their vertex cover number.
A simple proof that AND-compression of NP-complete problems is hard
 Electronic Colloquium on Computational Complexity (ECCC), 2014. Available at http://eccc.hpiweb.de/report/2014/075
Cited by 2 (0 self)
Drucker [1] proved the following result: Unless the unlikely complexity-theoretic collapse coNP ⊆ NP/poly occurs, there is no AND-compression for SAT. The result has implications for the compressibility and kernelizability of a whole range of NP-complete parameterized problems. We present a simple proof of this result. An AND-compression is a deterministic polynomial-time algorithm that maps a set of SAT instances x_1, ..., x_t to a single SAT instance y of size poly(max_i |x_i|) such that y is satisfiable if and only if all x_i are satisfiable. The "AND" in the name stems from the fact that the predicate "y is satisfiable" can be written as the AND of all predicates "x_i is satisfiable". Drucker's result complements the result by Bodlaender et al. [2] and Fortnow and Santhanam [3], who proved the analogous statement for OR-compressions, and Drucker's proof not only subsumes that result but also extends it to randomized compression algorithms that are allowed to have a certain probability of failure. The overall structure of our proof is similar to the arguments of Ko [4] for P-selective sets, which use the fact that tournaments have dominating sets of logarithmic size. We generalize this fact to hypergraph tournaments. For the information-theoretic part of the proof, we consider a natural generalization of the average noise sensitivity of a Boolean function, which is bounded for compressive maps. We prove this with mechanical calculations that involve the Kullback–Leibler divergence.
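The contract in that definition is easy to state on a problem where compression is trivial. The toy below (purely illustrative; Drucker's point is that SAT admits no such map unless coNP ⊆ NP/poly) AND-compresses instances of a polynomial-time problem, "is this list sorted?", into a constant-size output whose answer is the AND of the individual answers:

```python
def toy_and_compression(instances):
    """Illustrates the AND-compression contract on a *tractable* toy
    problem (list sortedness), where compression is trivial: decide
    each instance and emit a constant-size output instance. The
    function name and encoding are assumptions for illustration."""
    is_sorted = lambda xs: all(a <= b for a, b in zip(xs, xs[1:]))
    all_yes = all(is_sorted(x) for x in instances)
    # Constant-size output: [] is sorted (yes), [1, 0] is not (no).
    return [] if all_yes else [1, 0]
```

The output is a yes-instance exactly when every input instance is a yes-instance, which is the defining property of an AND-compression.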
Win-Win Kernelization for Degree Sequence Completion Problems
, 2014
Cited by 2 (1 self)
We study the kernelizability of a class of NP-hard graph modification problems based on vertex degree properties. Our main positive results refer to NP-hard graph completion (that is, edge addition) cases, while we show that there is no hope to achieve analogous results for the corresponding vertex or edge deletion versions. Our algorithms are based on a method that transforms graph completion problems into efficiently solvable number problems and exploits f-factor computations for translating the results back into the graph setting. Indeed, our core observation is that we encounter a win-win situation in the sense that either the number of edge additions is small (and thus faster to find) or the problem is polynomial-time solvable. This approach helps in answering an open question by Mathieson and Szeider [JCSS 2012].
Incompressibility of H-Free Edge Modification Problems
, 2014
Given a fixed graph H, the H-Free Edge Deletion (resp., Completion, Editing) problem asks whether it is possible to delete from (resp., add to, delete from or add to) the input graph at most k edges so that the resulting graph is H-free, i.e., contains no induced subgraph isomorphic to H. These H-free edge modification problems are well known to be fixed-parameter tractable for every fixed H. In this paper we study the incompressibility, i.e., nonexistence of polynomial kernels, for these H-free edge modification problems in terms of the structure of H, and completely characterize their nonexistence for H being paths, cycles or 3-connected graphs. We also give a sufficient condition for the nonexistence of polynomial kernels for F-Free Edge Deletion problems, where F is a finite set of forbidden induced subgraphs. As an effective tool, we have introduced an incompressible constraint satisfiability problem, Propagational-f-Satisfiability, to express common propagational behaviors of events, and we expect the problem to be useful in studying the nonexistence of polynomial kernels in general.
Sparsification Upper and Lower Bounds for Graph Problems and Not-All-Equal SAT
We present several sparsification lower and upper bounds for classic problems in graph theory and logic. For the problems 4-Coloring, (Directed) Hamiltonian Cycle, and (Connected) Dominating Set, we prove that there is no polynomial-time algorithm that reduces any n-vertex input to an equivalent instance, of an arbitrary problem, with bitsize O(n^{2-ε}) for ε > 0, unless NP ⊆ coNP/poly and the polynomial-time hierarchy collapses. These results imply that existing linear-vertex kernels for k-Nonblocker and k-Max Leaf Spanning Tree (the parametric duals of (Connected) Dominating Set) cannot be improved to have O(k^{2-ε}) edges, unless NP ⊆ coNP/poly. We also present a positive result and exhibit a nontrivial sparsification algorithm for d-Not-All-Equal-SAT. We give an algorithm that reduces an n-variable input with clauses of size at most d to an equivalent input with O(n^{d-1}) clauses, for any fixed d. Our algorithm is based on a linear-algebraic proof of Lovász that bounds the number of hyperedges in critically 3-chromatic d-uniform n-vertex hypergraphs by n^{d-1}. We show that our kernel is tight under the assumption that NP ⊈ coNP/poly.
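For reference, d-Not-All-Equal-SAT asks for an assignment under which every clause contains both a satisfied and an unsatisfied literal. A brute-force checker, illustrating the problem only and not the paper's linear-algebraic sparsification, might look as follows; the signed-integer clause encoding is an assumption:

```python
from itertools import product

def nae_satisfiable(clauses, n):
    """Brute-force Not-All-Equal SAT (illustrative sketch): every clause
    must contain at least one true and at least one false literal.
    Clauses are tuples of nonzero ints: +i / -i means variable i
    appears positively / negated (variables are 1-indexed)."""
    def lit(assign, l):
        v = assign[abs(l) - 1]
        return v if l > 0 else not v
    # Try all 2^n assignments; fine for small illustrations only.
    for assign in product([False, True], repeat=n):
        if all(any(lit(assign, l) for l in c) and
               not all(lit(assign, l) for l in c) for c in clauses):
            return True
    return False
```

Note that a single-literal clause can never be NAE-satisfied, since one literal cannot be simultaneously true and false, so such instances are immediately rejected.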