Results 1–10 of 14
Solving MAX-r-SAT Above a Tight Lower Bound
2010
"... We present an exact algorithm that decides, for every fixed r ≥ 2 in time O(m) + 2 O(k2) whether a given multiset of m clauses of size r admits a truth assignment that satisfies at least ((2 r − 1)m + k)/2 r clauses. Thus MaxrSat is fixedparameter tractable when parameterized by the number of sat ..."
Abstract

Cited by 43 (17 self)
We present an exact algorithm that decides, for every fixed r ≥ 2, in time O(m) + 2^{O(k^2)} whether a given multiset of m clauses of size r admits a truth assignment that satisfies at least ((2^r − 1)m + k)/2^r clauses. Thus Max-r-SAT is fixed-parameter tractable when parameterized by the number of satisfied clauses above the tight lower bound (1 − 2^{−r})m. This solves an open problem of Mahajan, Raman and Sikdar (J. Comput. System Sci., 75, 2009). Our algorithm is based on a polynomial-time data reduction procedure that reduces a problem instance to an equivalent algebraically represented problem with O(k^2) variables. This is done by representing the instance as an appropriate polynomial, and by applying a probabilistic argument combined with some simple tools from harmonic analysis to show that if the polynomial cannot be reduced to one of size O(k^2), then there is a truth assignment satisfying the required number of clauses. We introduce a new notion of bikernelization from a parameterized problem to another one and apply it to prove that the above-mentioned parameterized Max-r-SAT admits a polynomial-size kernel. Combining another probabilistic argument with tools from graph matching theory and signed graphs, we show that if an instance of Max-2-SAT with m clauses has at least 3k variables after application of certain polynomial-time reduction rules to it, then there is a truth assignment that satisfies at least (3m + k)/4 clauses. We also outline how the fixed-parameter tractability and polynomial-size kernel results on Max-r-SAT can be extended to more general families of Boolean […]
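The tight lower bound (1 − 2^{−r})m in the abstract above reflects the fact that a uniformly random truth assignment satisfies each size-r clause with probability 1 − 2^{−r}. A minimal Python sketch on a hypothetical toy instance (the clause encoding is an assumption for illustration, not the paper's) certifies the bound by exhaustive search:

```python
import itertools

def satisfied(clauses, assignment):
    """Count clauses with at least one true literal; a clause is a tuple of
    nonzero ints where +v means variable v and -v means its negation."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

# Hypothetical instance: m = 4 clauses of size r = 2 over variables 1..3.
clauses = [(1, 2), (-1, 3), (2, -3), (-2, -3)]
m, r, n = len(clauses), 2, 3
lower_bound = (1 - 2 ** -r) * m        # (1 - 2^{-r}) m = 3 clauses

# Exhaustive search over all 2^n assignments certifies the bound is met.
best = max(
    satisfied(clauses, dict(enumerate(bits, start=1)))
    for bits in itertools.product([False, True], repeat=n)
)
print(best, lower_bound)  # 4 3.0
assert best >= lower_bound
```

The parameterized question in the abstract asks how far above this guaranteed 3 one can go: here the assignment x1 = F, x2 = T, x3 = F satisfies all 4 clauses.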
Parameterizing above or below guaranteed values
J. Comput. System Sci.
"... We consider new parameterizations of NPoptimization problems that have nontrivial lower and/or upper bounds on their optimum solution size. The natural parameter, we argue, is the quantity above the lower bound or below the upper bound. We show that for every problem in MAX SNP, the optimum value i ..."
Abstract

Cited by 31 (3 self)
We consider new parameterizations of NP-optimization problems that have nontrivial lower and/or upper bounds on their optimum solution size. The natural parameter, we argue, is the quantity above the lower bound or below the upper bound. We show that for every problem in MAX SNP, the optimum value is bounded below by an unbounded function of the input size, and that the above-guarantee parameterization with respect to this lower bound is fixed-parameter tractable. We also observe that approximation algorithms give nontrivial lower or upper bounds on the solution size and that the above- or below-guarantee question with respect to these bounds is fixed-parameter tractable for a subclass of NP-optimization problems. We then introduce the notion of ‘tight’ lower and upper bounds and exhibit a number of problems for which the above-guarantee and below-guarantee parameterizations with respect to a tight bound are fixed-parameter tractable or W-hard. We show that if we parameterize “sufficiently” above or below the tight bounds, then these parameterized versions are not fixed-parameter tractable unless P = NP, for a subclass of NP-optimization problems. We also list several directions to explore in this paradigm.
Algorithmic and complexity results for decompositions of biological networks into monotone subsystems
In Lecture Notes in Computer Science: Experimental Algorithms: 5th International Workshop, WEA 2006 (Cala Galdana, Menorca), Springer-Verlag, 253–264
2006
"... A useful approach to the mathematical analysis of largescale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated to finding decompositions which are optimal in an appropriate sense. In graphtheoretic la ..."
Abstract

Cited by 22 (6 self)
A useful approach to the mathematical analysis of large-scale biological networks is based upon their decompositions into monotone dynamical systems. This paper deals with two computational problems associated with finding decompositions which are optimal in an appropriate sense. In graph-theoretic language, the problems can be recast in terms of maximal sign-consistent subgraphs. The theoretical results include polynomial-time approximation algorithms as well as constant-ratio inapproximability results. One of the algorithms, which has a worst-case guarantee of 87.9% from optimality, is based on the semidefinite programming relaxation approach of Goemans–Williamson [23]. The algorithm was implemented and tested on a Drosophila segmentation network and an Epidermal Growth Factor Receptor pathway model, and it was found to perform close to optimally.
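The sign-consistency notion above can be made concrete: a signed graph is sign-consistent (balanced) exactly when its vertices can be 2-colored so that positive edges join like colors and negative edges join unlike colors, which a BFS tests in polynomial time. A sketch on a hypothetical toy network (not one of the paper's biological models):

```python
from collections import deque

def is_sign_consistent(n, edges):
    """Test balance of a signed graph: can vertices be 2-colored so that
    +1 edges join like colors and -1 edges join unlike colors?
    edges: list of (u, v, sign) with sign in {+1, -1}, vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v, sign in edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    color = [0] * n                      # 0 = uncolored, else +1 / -1
    for start in range(n):
        if color[start]:
            continue
        color[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = color[u] * sign
                if color[v] == 0:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False         # cycle with an odd number of - edges
    return True

# Hypothetical toy networks: a triangle with one negative edge is
# inconsistent, while one with two negative edges is consistent.
print(is_sign_consistent(3, [(0, 1, 1), (1, 2, 1), (2, 0, -1)]))   # False
print(is_sign_consistent(3, [(0, 1, 1), (1, 2, -1), (2, 0, -1)]))  # True
```

The optimization problems in the abstract ask for a maximum sign-consistent subgraph when this test fails, which is where the hardness and the SDP approximation come in.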
A probabilistic approach to problems parameterized above or below tight bounds
University of Copenhagen
2009
"... We introduce a new approach for establishing fixedparameter tractability of problems parameterized above tight lower bounds or below tight upper bounds. To illustrate the approach we consider three problems of this type of unknown complexity that were introduced by Mahajan, Raman and Sikdar (J. Com ..."
Abstract

Cited by 20 (16 self)
We introduce a new approach for establishing fixed-parameter tractability of problems parameterized above tight lower bounds or below tight upper bounds. To illustrate the approach, we consider three problems of this type of unknown complexity that were introduced by Mahajan, Raman and Sikdar (J. Comput. Syst. Sci. 75, 2009). We show that a generalization of one of the problems and nontrivial special cases of the other two are fixed-parameter tractable.
Simultaneously Satisfying Linear Equations Over F2: MaxLin2 and Max-r-Lin2 Parameterized Above Average
In FSTTCS 2011, LIPIcs
2011
"... In the parameterized problem MAXLIN2AA[k], we are given a system with variables x1,..., xn consisting of equations of the form ∏i∈I x i = b, where x i, b ∈ {−1, 1} and I ⊆ [n], each equation has a positive integral weight, and we are to decide whether it is possible to simultaneously satisfy equa ..."
Abstract

Cited by 13 (8 self)
In the parameterized problem MaxLin2-AA[k], we are given a system with variables x_1, ..., x_n consisting of equations of the form ∏_{i∈I} x_i = b, where x_i, b ∈ {−1, 1} and I ⊆ [n], each equation has a positive integral weight, and we are to decide whether it is possible to simultaneously satisfy equations of total weight at least W/2 + k, where W is the total weight of all equations and k is the parameter (if k = 0, the possibility is assured). We show that MaxLin2-AA[k] has a kernel with at most O(k^2 log k) variables and can be solved in time 2^{O(k log k)}(nm)^{O(1)}. This solves an open problem of Mahajan et al. (2006). The problem Max-r-Lin2-AA[k, r] is the same as MaxLin2-AA[k] with two differences: each equation has at most r variables and r is the second parameter. We prove a theorem on Max-r-Lin2-AA[k, r] which implies that Max-r-Lin2-AA[k, r] has a kernel with at most (2k − 1)r variables, improving a number of results including one by Kim and Williams (2010). The theorem also implies a lower bound on the maximum of a function f: {−1, 1}^n → R whose Fourier expansion (which is a multilinear polynomial) is of degree r. We show the applicability of the lower bound by giving a new proof of the Edwards–Erdős bound (each connected graph on n vertices and m edges has a bipartite subgraph with at least m/2 + (n − 1)/4 edges) and obtaining a generalization.
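For intuition, the MaxLin2-AA[k] question can be decided on tiny instances by brute force over all x ∈ {−1, 1}^n (the abstract's algorithms are of course far more refined). A sketch with a hypothetical three-equation system:

```python
import itertools
from math import prod

def max_lin2_above_average(equations, n, k):
    """Decide MaxLin2-AA[k] by brute force: does some x in {-1,1}^n satisfy
    equations of total weight >= W/2 + k?
    equations: list of (I, b, w) encoding  prod_{i in I} x_i = b  of weight w."""
    W = sum(w for _, _, w in equations)
    best = max(
        sum(w for I, b, w in equations if prod(x[i] for i in I) == b)
        for x in itertools.product([-1, 1], repeat=n)
    )
    return best >= W / 2 + k

# Hypothetical system over x0, x1 with unit weights (W = 3):
#   x0 = 1,   x1 = -1,   x0 * x1 = -1
eqs = [([0], 1, 1), ([1], -1, 1), ([0, 1], -1, 1)]
print(max_lin2_above_average(eqs, n=2, k=1))  # True: x = (1, -1) satisfies all
```

An assignment satisfying total weight W/2 always exists (hence "above average"); the kernel in the abstract shrinks the instance while preserving the answer for weight W/2 + k.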
Kernelization – Preprocessing with a Guarantee
"... Data reduction techniques are widely applied to deal with computationally hard problems in real world applications. It has been a longstanding challenge to formally express the efficiency and accuracy of these “preprocessing” procedures. The framework of parameterized complexity turns out to be ..."
Abstract

Cited by 7 (1 self)
Data reduction techniques are widely applied to deal with computationally hard problems in real world applications. It has been a longstanding challenge to formally express the efficiency and accuracy of these “preprocessing” procedures. The framework of parameterized complexity turns out to be particularly suitable for a mathematical analysis of preprocessing heuristics. A kernelization algorithm is a preprocessing algorithm which simplifies the instances given as input in polynomial time, and the extent of simplification desired is quantified with the help of the additional parameter. We give an overview of some of the early work in the area and also survey newer techniques that have emerged in the design and analysis of kernelization algorithms. We also outline the framework of Bodlaender et al. [9] and Fortnow and Santhanam [38] for showing kernelization lower bounds under reasonable assumptions from classical complexity theory, and highlight some of the recent results that strengthen and generalize this framework.
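A classic illustration of the kernelization idea surveyed above (a standard textbook example, not necessarily one from this survey) is Buss's reduction for Vertex Cover: any vertex of degree greater than k must be in every size-k cover, and once no such vertex remains, more than k^2 surviving edges certify a NO instance. A minimal sketch on a hypothetical toy instance:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover(k): a vertex of degree > k must be
    in every size-k cover, so take it and decrement k; once no such vertex
    remains, more than k*k surviving edges certify a NO instance.
    Returns (reduced_edges, k, forced_vertices), or None for a definite NO."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
        k -= 1
        if k < 0:
            return None           # more forced vertices than the budget allows
    if len(edges) > k * k:
        return None               # kernel too large: NO instance
    return edges, k, forced

# Hypothetical star on vertices 0..4 plus a pendant edge (5, 6), with k = 2:
# the hub 0 has degree 4 > 2, so it is forced, leaving a one-edge kernel.
star = [(0, i) for i in range(1, 5)] + [(5, 6)]
reduced, k_left, forced = buss_kernel(star, 2)
print(sorted(forced), k_left, len(reduced))  # [0] 1 1
```

With k = 1 the same instance is rejected outright, since after forcing the hub no budget remains for the pendant edge; that is exactly the "accuracy guarantee" the survey formalizes.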
Parameterizing MAX SNP Problems Above Guaranteed Values
 Proc. of IWPEC, Springer LNCS 4169
2006
"... Abstract. We show that every problem in MAX SNP has a lower bound on the optimum solution size and that the above guarantee question with respect to that lower bound is fixed parameter tractable. We next introduce the notion of ‘tight ’ upper and lower bounds for the optimum solution and show that t ..."
Abstract

Cited by 1 (1 self)
We show that every problem in MAX SNP has a lower bound on the optimum solution size and that the above-guarantee question with respect to that lower bound is fixed-parameter tractable. We next introduce the notion of ‘tight’ upper and lower bounds for the optimum solution and show that the parameterized version of a variant of the above-guarantee question with respect to the tight lower bound cannot be fixed-parameter tractable unless P = NP, for a number of NP-optimization problems.
Computing the Partition Function of a Polynomial on the Boolean Cube
2015
"... Abstract. For a polynomial f: {−1, 1}n − → C, we define the partition function as the average of eλf(x) over all points x ∈ {−1, 1}n, where λ ∈ C is a parameter. We present an algorithm, which, given such f, λ and > 0 approximates the partition function within a relative error of in NO(lnn−ln ) ..."
Abstract
For a polynomial f: {−1, 1}^n → C, we define the partition function as the average of e^{λf(x)} over all points x ∈ {−1, 1}^n, where λ ∈ C is a parameter. We present an algorithm which, given such f, λ and ε > 0, approximates the partition function within a relative error of ε in N^{O(ln n − ln ε)} time provided |λ| ≤ (2L√d)^{−1}, where d is the degree, L is (roughly) the Lipschitz constant of f and N is the number of monomials in f. We apply the algorithm to approximate the maximum of a polynomial f: {−1, 1}^n → R.
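For very small n, the partition function defined above can be computed exactly by summing over all 2^n points, which gives a reference value against which an approximation scheme could be checked. A sketch with a hypothetical degree-2 polynomial:

```python
import cmath
import itertools

def partition_function(f, n, lam):
    """Exact average of e^{lam * f(x)} over all x in {-1, 1}^n (2^n terms);
    the paper's algorithm approximates this quantity far more efficiently."""
    points = list(itertools.product([-1, 1], repeat=n))
    return sum(cmath.exp(lam * f(x)) for x in points) / len(points)

# Hypothetical degree-2 polynomial f(x) = x1 * x2 on {-1, 1}^2.
val = partition_function(lambda x: x[0] * x[1], 2, 0.1)
# By symmetry this equals (e^{0.1} + e^{-0.1}) / 2 = cosh(0.1).
print(abs(val - cmath.cosh(0.1)) < 1e-12)  # True
```

Taking λ large and real makes the average dominated by the maximizers of f, which is how the paper uses the partition function to approximate max f.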
Defying Hardness With a Hybrid Approach
2004
"... A hybrid algorithm is a collection of heuristics, paired with a polynomial time procedure S (called a selector) that decides based on a preliminary scan of the input which heuristic should be executed. We investigate scenarios where the selector must decide between heuristics that are “good ” with r ..."
Abstract
A hybrid algorithm is a collection of heuristics, paired with a polynomial-time procedure S (called a selector) that decides, based on a preliminary scan of the input, which heuristic should be executed. We investigate scenarios where the selector must decide between heuristics that are “good” with respect to different complexity measures, e.g. heuristic h1 is efficient but solves instances approximately, whereas h2 solves instances exactly but takes superpolynomial time. We present hybrid algorithms for several interesting problems Π with a “hardness-defying” property: there is a set of complexity measures {m_i} whereby Π is conjectured or known to be hard (or unsolvable) for each m_i, but for each heuristic h_i of the hybrid algorithm, one can give a complexity guarantee for h_i on the instances of Π that S selects for h_i that is strictly better than m_i. For example, some NP-hard problems admit a hybrid algorithm that, given an instance, can either solve it exactly in “subexponential” time, or approximately solve it in polynomial time with a performance ratio exceeding that of the known inapproximability of the problem (under P ≠ NP).
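The selector pattern described above can be sketched generically; the heuristics and the length-based selector below are hypothetical stand-ins for illustration, not the paper's constructions:

```python
def hybrid_solve(instance, selector, heuristics):
    """Generic hybrid-algorithm skeleton: a (polynomial-time) selector scans
    the instance and names the heuristic to run on it."""
    return heuristics[selector(instance)](instance)

# Hypothetical stand-ins: exact search on small inputs, fast greedy otherwise.
heuristics = {
    "exact":  lambda xs: max(xs),   # placeholder for an exact solver
    "greedy": lambda xs: xs[0],     # placeholder for an approximation heuristic
}
selector = lambda xs: "exact" if len(xs) <= 20 else "greedy"
print(hybrid_solve([3, 1, 4, 1, 5], selector, heuristics))  # 5
```

The interesting guarantees live entirely in the selector: it must route each instance to a heuristic whose complexity bound on *that* instance beats the problem's general hardness measure.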