Results 1–10 of 20
New races in parameterized algorithmics
 In: Proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science (MFCS ’12), LNCS
Abstract

Cited by 10 (7 self)
Once having classified an NP-hard problem fixed-parameter tractable with respect to a certain parameter, the race for the most efficient fixed-parameter algorithm starts. Herein, the attention usually focuses on improving the running-time factor exponential in the considered parameter and, in the case of kernelization algorithms, on improving the bound on the kernel size. Both from a practical and a theoretical point of view, however, there are further aspects of efficiency that deserve attention. We discuss several of these aspects and particularly focus on the search for “stronger parameterizations” in developing fixed-parameter algorithms.
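The kind of race the abstract describes can be made concrete with the textbook example of a fixed-parameter algorithm: the O(2^k · poly(n)) bounded search tree for Vertex Cover. A minimal sketch (standard material, not from this paper):

```python
# Classic bounded-search-tree algorithm for Vertex Cover: a textbook
# fixed-parameter algorithm with running time O(2^k * poly(n)).  The
# "race" is about shrinking the 2^k factor, e.g. via smarter branching
# rules or measure-and-conquer analyses.
def has_vertex_cover(edges, k):
    """True iff the graph given by `edges` has a vertex cover of size <= k."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but no budget left
    u, v = next(iter(edges))
    for chosen in (u, v):    # some endpoint of (u, v) must be in the cover
        rest = [(a, b) for (a, b) in edges if chosen not in (a, b)]
        if has_vertex_cover(rest, k - 1):
            return True
    return False
```

For example, a triangle needs two cover vertices, so `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1)` is False while the same call with k = 2 is True.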
Shielding circuits with groups
, 2012
Abstract

Cited by 9 (2 self)
We show how to efficiently compile any given circuit C into a leakage-resistant circuit Ĉ such that any function on the wires of Ĉ that leaks information during a computation Ĉ(x) yields advantage in computing the product of |Ĉ|^{Ω(1)} elements of the alternating group A_u. In combination with new compression bounds for A_u products, also obtained here, Ĉ withstands leakage from virtually any class of functions against which average-case lower bounds are known. This includes communication protocols, and AC^0 circuits augmented with few arbitrary symmetric gates. If NC^1 = TC^0 then the construction resists TC^0 leakage as well. We also conjecture that our construction resists NC^1 leakage. In addition, we extend the construction to the multi-query setting by relying on a simple secure hardware component. We build on Barrington’s theorem [JCSS ’89] and on the previous leakage-resistant constructions by Ishai et al. [Crypto ’03] and Faust et al. [Eurocrypt ’10]. Our construction exploits properties of A_u beyond what is sufficient for Barrington’s theorem.
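The group-theoretic core behind Barrington’s theorem, which the abstract builds on, can be illustrated with a small sketch (ours, not the paper’s construction): in a non-abelian simple group such as A5, an AND gate can be simulated by a commutator.

```python
from itertools import permutations

# Illustration of the commutator trick underlying Barrington's theorem:
# search A5 for alpha, beta whose commutator is a 5-cycle; then
# [alpha^x, beta^y] is the identity unless x = y = 1, i.e. the
# commutator computes AND(x, y) inside the group.

def compose(p, q):
    """(p . q)(i) = p(q(i)); a permutation is a tuple with p[i] = image of i."""
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_even(p):  # parity via inversion count
    return sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5)) % 2 == 0

def is_five_cycle(p):
    # among even permutations of 5 points, fixed-point-free <=> 5-cycle
    return all(p[i] != i for i in range(5))

IDENTITY = (0, 1, 2, 3, 4)
A5 = [p for p in permutations(range(5)) if is_even(p)]

def commutator(a, b):
    return compose(compose(a, b), compose(inverse(a), inverse(b)))

ALPHA, BETA = next((a, b) for a in A5 for b in A5
                   if is_five_cycle(commutator(a, b)))

def and_gadget(x, y):
    """Return a 5-cycle iff x = y = 1, and the identity otherwise."""
    return commutator(ALPHA if x else IDENTITY, BETA if y else IDENTITY)
```

Barrington’s theorem chains such gadgets to evaluate any NC^1 circuit by a constant-width permutation branching program, which is what makes group products a meaningful leakage target.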
Recent developments in kernelization: A survey
Abstract

Cited by 6 (0 self)
Kernelization is a formalization of efficient preprocessing, aimed mainly at combinatorially hard problems. Empirically, preprocessing is highly successful in practice, e.g., in state-of-the-art SAT and ILP solvers. The notion of kernelization from parameterized complexity makes it possible to rigorously prove upper and lower bounds on, e.g., the maximum output size of a preprocessing in terms of one or more problem-specific parameters. This avoids the often-raised issue that we should not expect an efficient algorithm that provably shrinks every instance of any NP-hard problem. In this survey, we give a general introduction to the area of kernelization and then discuss some recent developments. After the introductory material we attempt a reasonably self-contained update and introduction on the following topics: (1) lower bounds for kernelization, taking into account the recent progress on the AND-conjecture; (2) the use of matroids and representative sets for kernelization; (3) Turing kernelization, i.e., understanding preprocessing that adaptively or non-adaptively creates a large number of small outputs.
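The standard first example of the rigorous preprocessing the survey formalizes is Buss’s kernel for Vertex Cover; a minimal sketch (textbook material, not a result of the survey itself):

```python
# Buss's kernel for Vertex Cover.
# Rule 1: a vertex of degree > k must be in every size-<=k cover, so
#         take it into the cover and decrement k.
# Rule 2 (size bound): k vertices of degree <= k cover at most k^2
#         edges, so a larger remainder is a NO-instance.
def buss_kernel(edges, k):
    """Return an equivalent (kernel_edges, k'), or None for a NO-instance."""
    edges = {frozenset(e) for e in edges}
    while k >= 0:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break                     # Rule 1 no longer applies
        edges = {e for e in edges if high not in e}
        k -= 1
    if k < 0 or len(edges) > k * k:
        return None                   # Rule 2: provably no size-<=k cover
    return edges, k
```

The reduced instance has at most k^2 edges, so any exact algorithm run afterwards depends only on the parameter, which is exactly the kernel-size guarantee the survey studies upper and lower bounds for.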
Abusing the Tutte Matrix: An Algebraic Instance Compression for the K-set-cycle Problem
Abstract

Cited by 5 (1 self)
We give an algebraic, determinant-based algorithm for the K-Cycle problem, i.e., the problem of finding a cycle through a set of specified elements. Our approach gives a simple FPT algorithm for the problem, matching the O*(2^{|K|}) running time of the algorithm of Björklund et al. (SODA, 2012). Furthermore, our approach is open for treatment by classical algebraic tools (e.g., Gaussian elimination), and we show that it leads to a polynomial compression of the problem, i.e., a polynomial-time reduction of the K-Cycle problem into an algebraic problem with coding size O(|K|^3). This is surprising, as several related problems (e.g., k-Cycle and the Disjoint Paths problem) are known not to admit such a reduction unless the polynomial hierarchy collapses. Furthermore, despite the result, we are not aware of any witness for the K-Cycle problem of size polynomial in |K| + log n, which seems (for now) to separate the notions of polynomial compression and polynomial kernelization (as a polynomial kernelization for a problem in NP necessarily implies a small witness).
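A hedged sketch of the classical Tutte-matrix technique that such “algebraic, determinant-based” approaches build on — this is Lovász’s randomized perfect-matching test, not the paper’s K-Cycle algorithm:

```python
import random
from fractions import Fraction

# Substitute random values into the skew-symmetric Tutte matrix of a
# graph; by the Schwartz-Zippel lemma the determinant is nonzero with
# high probability iff the graph has a perfect matching.
def has_perfect_matching(n, edges, trials=3):
    def det(m):  # Gaussian elimination over exact rationals
        m = [row[:] for row in m]
        sign = Fraction(1)
        for col in range(n):
            piv = next((r for r in range(col, n) if m[r][col] != 0), None)
            if piv is None:
                return Fraction(0)
            if piv != col:
                m[col], m[piv] = m[piv], m[col]
                sign = -sign
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n):
                    m[r][c] -= f * m[col][c]
        prod = sign
        for i in range(n):
            prod *= m[i][i]
        return prod

    for _ in range(trials):
        t = [[Fraction(0)] * n for _ in range(n)]
        for u, v in edges:
            x = Fraction(random.randint(1, 10**6))
            t[u][v], t[v][u] = x, -x      # skew-symmetric Tutte entries
        if det(t) != 0:
            return True                   # a nonzero evaluation is conclusive
    return False  # determinant identically zero, or (unlikely) bad luck
```

A nonzero determinant is a one-sided witness: false positives are impossible, and a false negative requires every random substitution to hit a root of the determinant polynomial.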
On cutwidth parameterized by vertex cover
Abstract

Cited by 3 (1 self)
We study the Cutwidth problem, where the input is a graph G and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices of the layout. We give an algorithm for Cutwidth with running time O(2^k · n^{O(1)}), where k is the size of a minimum vertex cover of the input graph G and n is the number of vertices in G. As a corollary, our algorithm yields an O(2^{n/2} · n^{O(1)})-time algorithm for Cutwidth on bipartite graphs. This is the first nontrivial exact exponential-time algorithm for Cutwidth on a graph class where the problem remains NP-complete. Additionally, we show that Cutwidth parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless coNP ⊆ NP/poly. Our kernelization lower bound contrasts with the recent result of Bodlaender et al. [ICALP 2011] that Treewidth parameterized by vertex cover does admit a polynomial kernel.
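The quantity being optimized can be stated in a few lines (a sketch of ours, not the paper’s algorithm): the width of a layout is the maximum number of edges crossing any gap between consecutive positions, and cutwidth minimizes this over all n! layouts — the brute force the paper improves to 2^k · poly(n).

```python
from itertools import permutations

# Cutwidth by definition: minimize, over all vertex orderings, the
# maximum number of edges crossing a gap between consecutive positions.
def layout_width(layout, edges):
    pos = {v: i for i, v in enumerate(layout)}
    # an edge crosses gap g (between positions g and g+1) iff its
    # endpoints lie on opposite sides of that gap
    return max(
        sum(1 for u, v in edges
            if min(pos[u], pos[v]) <= g < max(pos[u], pos[v]))
        for g in range(len(layout) - 1)
    )

def cutwidth(vertices, edges):
    return min(layout_width(p, edges) for p in permutations(vertices))
```

For instance, a 4-vertex path has cutwidth 1 (lay it out in path order), while K4 has cutwidth 4: the middle gap of any ordering is crossed by all four edges between the two sides.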
Leeuwen, Network sparsification for Steiner problems on planar and bounded-genus graphs
 Proc. 55th FOCS, abs/1306.6593
, 2013
Abstract

Cited by 3 (1 self)
We propose polynomial-time algorithms that sparsify planar and bounded-genus graphs while preserving optimal or near-optimal solutions to Steiner problems. Our main contribution is a polynomial-time algorithm that, given an unweighted graph G embedded on a surface of genus g and a designated face f bounded by a simple cycle of length k, uncovers a set F ⊆ E(G) of size polynomial in g and k that contains an optimal Steiner tree for any set of terminals that is a subset of the vertices of f. We apply this general theorem to prove that:
• given an unweighted graph G embedded on a surface of genus g and a terminal set S ⊆ V(G), one can in polynomial time find a set F ⊆ E(G) that contains an optimal Steiner tree T for S and that has size polynomial in g and |E(T)|;
• an analogous result holds for an optimal Steiner forest for a set S of terminal pairs;
• given an unweighted planar graph G and a terminal set S ⊆ V(G), one can in polynomial time find a set F ⊆ E(G) that contains an optimal (edge) multiway cut C separating S (i.e., a cutset that intersects any path with endpoints in different termi…
On sparsification for computing treewidth
 In: Proceedings of IPEC
, 2013
Abstract

Cited by 3 (0 self)
We investigate whether an n-vertex instance (G, k) of Treewidth, asking whether the graph G has treewidth at most k, can efficiently be made sparse without changing its answer. By giving a special form of OR-cross-composition, we prove that this is unlikely: if there is an ε > 0 and a polynomial-time algorithm that reduces n-vertex Treewidth instances to equivalent instances, of an arbitrary problem, with O(n^{2−ε}) bits, then NP ⊆ coNP/poly and the polynomial hierarchy collapses to its third level. Our sparsification lower bound has implications for structural parameterizations of Treewidth: parameterizations by measures that do not exceed the vertex count cannot have kernels with O(k^{2−ε}) bits for any ε > 0, unless NP ⊆ coNP/poly. Motivated by the question of determining the optimal kernel size for Treewidth parameterized by vertex cover, we improve the O(k^3)-vertex kernel from Bodlaender et al. (STACS 2011) to a kernel with O(k^2) vertices. Our improved kernel is based on a novel form of treewidth-invariant set. We use the q-expansion lemma of Fomin et al. (STACS 2011) to find such sets efficiently in graphs whose vertex count is superquadratic in their vertex cover number.
Guarantees and limits of preprocessing in constraint satisfaction and reasoning
 Artificial Intelligence
, 2014
Abstract

Cited by 2 (1 self)
We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Non-monotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits a polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity-theoretic assumption. As a notable exception, we show that the consistency problem for the AtMostNValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems.
A simple proof that AND-compression of NP-complete problems is hard
 Electronic Colloquium on Computational Complexity (ECCC), 2014. Available at http://eccc.hpi-web.de/report/2014/075
Abstract

Cited by 2 (0 self)
Drucker [1] proved the following result: unless the unlikely complexity-theoretic collapse coNP ⊆ NP/poly occurs, there is no AND-compression for SAT. The result has implications for the compressibility and kernelizability of a whole range of NP-complete parameterized problems. We present a simple proof of this result. An AND-compression is a deterministic polynomial-time algorithm that maps a set of SAT instances x_1, …, x_t to a single SAT instance y of size poly(max_i |x_i|) such that y is satisfiable if and only if all x_i are satisfiable. The “AND” in the name stems from the fact that the predicate “y is satisfiable” can be written as the AND of all predicates “x_i is satisfiable”. Drucker’s result complements the result by Bodlaender et al. [2] and Fortnow and Santhanam [3], who proved the analogous statement for OR-compressions, and Drucker’s proof not only subsumes that result but also extends it to randomized compression algorithms that are allowed to have a certain probability of failure. The overall structure of our proof is similar to the arguments of Ko [4] for P-selective sets, which use the fact that tournaments have dominating sets of logarithmic size. We generalize this fact to hypergraph tournaments. For the information-theoretic part of the proof, we consider a natural generalization of the average noise sensitivity of a Boolean function, which is bounded for compressive maps. We prove this with mechanical calculations that involve the Kullback–Leibler divergence.
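A toy illustration (ours, not Drucker’s) of the AND-compression *interface*: conjoining CNF instances over disjoint variables produces an output satisfiable iff every input is. Crucially, this trivial combiner has output size roughly the sum of the |x_i|; the theorem says no polynomial-time combiner can achieve size poly(max_i |x_i|), independent of the number of instances, unless coNP ⊆ NP/poly.

```python
# AND-combine CNF instances by shifting each into a fresh block of
# variables; the conjunction is satisfiable iff all inputs are.
def and_combine(instances):
    """Each instance is a list of clauses; a clause is a list of DIMACS-style nonzero ints."""
    combined, offset = [], 0
    for clauses in instances:
        nvars = max((abs(lit) for cl in clauses for lit in cl), default=0)
        for cl in clauses:
            combined.append([lit + offset if lit > 0 else lit - offset
                             for lit in cl])
        offset += nvars        # next instance gets disjoint variables
    return combined

def satisfiable(clauses):
    """Brute-force SAT check, for demonstration only."""
    n = max((abs(lit) for cl in clauses for lit in cl), default=0)
    return any(
        all(any((lit > 0) == bool((m >> (abs(lit) - 1)) & 1) for lit in cl)
            for cl in clauses)
        for m in range(2 ** n)
    )
```

For example, combining a satisfiable instance with the unsatisfiable `[[1], [-1]]` yields an unsatisfiable output, while combining two satisfiable instances stays satisfiable.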