Results 1–10 of 234
The NP-completeness column: an ongoing guide
Journal of Algorithms, 1987
Abstract

Cited by 239 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should ...
The Complexity of Multiterminal Cuts
SIAM Journal on Computing, 1994
Abstract

Cited by 194 (0 self)
In the Multiterminal Cut problem we are given an edge-weighted graph and a subset of the vertices called terminals, and asked for a minimum-weight set of edges that separates each terminal from all the others. When the number k of terminals is two, this is simply the min-cut, max-flow problem, and can be solved in polynomial time. We show that the problem becomes NP-hard as soon as k = 3, but can be solved in polynomial time for planar graphs for any fixed k. The planar problem is NP-hard, however, if k is not fixed. We also describe a simple approximation algorithm for arbitrary graphs that is guaranteed to come within a factor of 2 − 2/k of the optimal cut weight.
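The 2 − 2/k guarantee described above is achieved by the standard isolation heuristic: for each terminal, compute a minimum cut isolating it from the rest, then take the union of the k − 1 cheapest such cuts. A minimal Python sketch of this approach (graph encoding and names are illustrative, not from the paper), using Edmonds–Karp max-flow:

```python
from collections import defaultdict, deque

def min_isolating_cut(edges, terminal, others):
    """Minimum-weight edge cut separating `terminal` from every node in
    `others`, via Edmonds-Karp max-flow to a super-sink.
    `edges` maps undirected pairs (u, v) to positive weights."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), w in edges.items():
        cap[(u, v)] += w          # undirected edge: capacity both ways
        cap[(v, u)] += w
        adj[u].add(v)
        adj[v].add(u)
    sink = object()               # super-sink glued to the other terminals
    for t in others:
        cap[(t, sink)] = float("inf")
        adj[t].add(sink)
        adj[sink].add(t)
    while True:                   # augment along shortest residual paths
        parent = {terminal: None}
        queue = deque([terminal])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[(parent[v], v)])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            cap[(parent[v], v)] -= bottleneck
            cap[(v, parent[v])] += bottleneck
            v = parent[v]
    reach = {terminal}            # residual reachability gives the cut side
    queue = deque([terminal])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reach and cap[(u, v)] > 0:
                reach.add(v)
                queue.append(v)
    return {(u, v) for (u, v) in edges if (u in reach) != (v in reach)}

def multiterminal_cut(edges, terminals):
    """Isolation heuristic: union of the k - 1 cheapest isolating cuts,
    whose weight is within a factor 2 - 2/k of the optimum."""
    cuts = [min_isolating_cut(edges, t, [s for s in terminals if s != t])
            for t in terminals]
    cuts.sort(key=lambda c: sum(edges[e] for e in c))
    union = set().union(*cuts[:-1])   # drop the heaviest isolating cut
    return union, sum(edges[e] for e in union)
```

On a unit-weight triangle whose three vertices are all terminals, each isolating cut has weight 2 and the union of the two cheapest is all three edges (weight 3), which here coincides with the optimum, since every pair of terminals must be disconnected.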
Minimum-energy broadcast in all-wireless networks: NP-completeness and distribution
In Proc. of ACM MobiCom, 2002
Abstract

Cited by 177 (2 self)
In all-wireless networks a crucial problem is to minimize energy consumption, as in most cases the nodes are battery-operated. We focus on the problem of power-optimal broadcast, for which it is well known that the broadcast nature of the radio transmission can be exploited to optimize energy consumption. Several authors have conjectured that the problem of power-optimal broadcast is NP-complete. We provide here a formal proof, both for the general case and for the geometric one; in the former case, the network topology is represented by a generic graph with arbitrary weights, whereas in the latter a Euclidean distance is considered. We then describe a new heuristic, Embedded Wireless Multicast Advantage. We show that it compares well with other proposals and we explain how it can be distributed.
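The "broadcast nature of the radio transmission" refers to the wireless multicast advantage: one omnidirectional transmission reaches every node within range, so a transmitting node pays only for its farthest child. A small sketch of this cost model (illustrative only; this is the model the paper builds on, not its EWMA heuristic):

```python
import math

def broadcast_energy(pos, tree, alpha=2.0):
    """Energy of a broadcast tree under the wireless multicast advantage:
    transmitting at power d**alpha reaches *every* neighbor within range d
    in a single transmission, so each internal node pays only for its
    farthest child rather than one link cost per child.
    `pos` maps node -> (x, y); `tree` maps node -> list of children."""
    def dist(u, v):
        (x1, y1), (x2, y2) = pos[u], pos[v]
        return math.hypot(x1 - x2, y1 - y2)
    return sum(max(dist(u, c) for c in children) ** alpha
               for u, children in tree.items() if children)

# A source at the origin covering two children with one transmission:
# it pays only for the farther child (2**2 = 4), not 1**2 + 2**2 = 5.
positions = {"s": (0, 0), "a": (1, 0), "b": (0, 2)}
energy = broadcast_energy(positions, {"s": ["a", "b"]})
```

Minimizing this objective over all broadcast trees is the problem the paper proves NP-complete.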
Improvements To Propositional Satisfiability Search Algorithms
1995
Abstract

Cited by 174 (0 self)
... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCstation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on the average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^{n/18.7}). In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial Intelligence or Operations Research.
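Complete SAT testers of the kind this dissertation studies are built around DPLL backtracking search with unit propagation. A bare-bones sketch of that core loop (illustrative only; POSIT itself adds far more sophisticated branching heuristics and data structures):

```python
def dpll(clauses, assignment=None):
    """Bare-bones DPLL: unit propagation plus two-way branching.
    A clause is a list of nonzero ints; -v means variable v is False.
    Returns a satisfying {var: bool} dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = [list(c) for c in clauses]
    while True:                       # unit propagation to a fixed point
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        simplified = []
        for c in clauses:
            if lit in c:
                continue              # clause satisfied, drop it
            c = [l for l in c if l != -lit]
            if not c:
                return None           # empty clause: conflict
            simplified.append(c)
        clauses = simplified
    if not clauses:
        return assignment             # every clause satisfied
    lit = clauses[0][0]               # naive branching choice
    for choice in (lit, -lit):
        result = dpll(clauses + [[choice]], dict(assignment))
        if result is not None:
            return result
    return None
```

The branching rule here is deliberately naive; most of the speedups surveyed in the dissertation concern exactly this choice.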
Algorithms for the Satisfiability (SAT) Problem: A Survey
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1996
Abstract

Cited by 145 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms, including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the SAT ...
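Among the local-search methods such surveys cover, a WalkSAT-style procedure is the simplest to sketch: repeatedly pick an unsatisfied clause and flip one of its variables, mixing random and greedy moves. A minimal illustrative implementation (parameter names and defaults are assumptions, not from the survey):

```python
import random

def walksat(clauses, n_vars, max_flips=10_000, p=0.5, seed=0):
    """WalkSAT-style local search. Start from a random assignment; at each
    step pick a random unsatisfied clause and flip either a random variable
    in it (probability p) or the variable whose flip leaves the fewest
    clauses unsatisfied. Incomplete: returning None proves nothing."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def satisfied(c):
        return any(assign[abs(l)] == (l > 0) for l in c)

    def flips_cost(v):
        # Number of unsatisfied clauses if we flipped v.
        assign[v] = not assign[v]
        k = sum(not satisfied(c) for c in clauses)
        assign[v] = not assign[v]
        return k

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                 # all clauses satisfied
        c = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(c))        # random-walk move
        else:
            v = min((abs(l) for l in c), key=flips_cost)  # greedy move
        assign[v] = not assign[v]
    return None
```

Unlike the DPLL-style complete methods, this procedure cannot certify unsatisfiability, which is exactly the trade-off the survey's algorithm space classifies.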
Surface Approximation and Geometric Partitions
In Proc. 5th ACM-SIAM Sympos. Discrete Algorithms, 1994
Abstract

Cited by 100 (14 self)
Motivated by applications in computer graphics, visualization, and scientific computation, we study the computational complexity of the following problem: Given a set S of n points sampled from a bivariate function f(x, y) and an input parameter ε > 0, compute a piecewise-linear function Σ(x, y) of minimum complexity (that is, an xy-monotone polyhedral surface with a minimum number of vertices, edges, or faces) such that |Σ(x_p, y_p) − z_p| ≤ ε for all (x_p, y_p, z_p) ∈ S. We prove that the decision version of this problem is NP-hard. The main result of our paper is a polynomial-time approximation algorithm that computes a piecewise-linear surface of size O(K_o log K_o), where K_o is the complexity of an optimal surface satisfying the constraints of the problem. The technique ...
Minimizing broadcast latency and redundancy in ad hoc networks
In Proc. of the Fourth ACM Int. Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc '03), 2003
On rectangular partitionings in two dimensions: algorithms, complexity and applications
1998
Abstract

Cited by 57 (7 self)
Partitioning a multidimensional data set into rectangular partitions subject to certain constraints is an important problem that arises in many database applications, including histogram-based selectivity estimation, load balancing, and construction of index structures. While provably optimal and efficient algorithms exist for partitioning one-dimensional data, the multidimensional problem has received less attention, except for a few special cases. As a result, the heuristic partitioning techniques that are used in practice are not well understood, and come with no guarantees on the quality of the solution. In this paper, we present algorithmic and complexity-theoretic results for the fundamental problem of partitioning a two-dimensional array into rectangular tiles of arbitrary size in a way that minimizes the number of tiles required to satisfy a given constraint. Our main results are approximation algorithms for several partitioning problems that provably approximate the optimal solutions within small constant factors, and that run in linear or close to linear time. We also establish the NP-hardness of several partitioning problems; it is therefore unlikely that there are efficient, i.e., polynomial-time, algorithms for solving these problems exactly. We also discuss a few applications in which partitioning problems arise. One of these is the problem of constructing multidimensional histograms. Our results, for example, give an efficient algorithm to construct the V-Optimal histograms, which are known to be the most accurate histograms in several selectivity estimation problems. Our algorithms are the first to provide guaranteed bounds on the quality of the solution.
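The "provably optimal and efficient algorithms ... for partitioning one-dimensional data" are dynamic programs, and the V-Optimal histogram mentioned at the end is the classic instance: split an array into B contiguous buckets minimizing the total sum of squared deviations from each bucket's mean. A sketch of that 1-D dynamic program (an O(B·n²) formulation with prefix sums; names are illustrative):

```python
def v_optimal(data, buckets):
    """1-D V-Optimal partitioning: split `data` into `buckets` contiguous
    ranges minimizing total squared error around each range's mean.
    Returns (minimum error, sorted list of right bucket boundaries)."""
    n = len(data)
    ps = [0.0] * (n + 1)    # prefix sums of values
    ps2 = [0.0] * (n + 1)   # prefix sums of squared values
    for i, x in enumerate(data):
        ps[i + 1] = ps[i] + x
        ps2[i + 1] = ps2[i] + x * x

    def sse(i, j):  # squared error of data[i:j] around its mean
        s, s2, m = ps[j] - ps[i], ps2[j] - ps2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    # cost[b][j]: best error for data[:j] using exactly b buckets
    cost = [[INF] * (n + 1) for _ in range(buckets + 1)]
    cut = [[0] * (n + 1) for _ in range(buckets + 1)]
    cost[0][0] = 0.0
    for b in range(1, buckets + 1):
        for j in range(1, n + 1):
            for i in range(b - 1, j):   # last bucket is data[i:j]
                c = cost[b - 1][i] + sse(i, j)
                if c < cost[b][j]:
                    cost[b][j] = c
                    cut[b][j] = i
    bounds, j = [], n                   # walk back through stored cuts
    for b in range(buckets, 0, -1):
        bounds.append(j)
        j = cut[b][j]
    return cost[buckets][n], sorted(bounds)
```

The point of the paper is that no such clean dynamic program survives in two dimensions: the tiling versions become NP-hard, hence the approximation algorithms.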
Accidental Algorithms
In Proc. 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006
Abstract

Cited by 57 (2 self)
Complexity theory is built fundamentally on the notion of efficient reduction among computational problems. Classical reductions involve gadgets that map solution fragments of one problem to solution fragments of another in one-to-one, or possibly one-to-many, fashion. In this paper we propose a new kind of reduction that allows for gadgets with many-to-many correspondences, in which the individual correspondences among the solution fragments can no longer be identified. Their objective may be viewed as that of generating interference patterns among these solution fragments so as to conserve their sum. We show that such holographic reductions provide a method of translating a combinatorial problem to finite systems of polynomial equations with integer coefficients such that the number of solutions of the combinatorial problem can be counted in polynomial time if one of the systems has a solution over the complex numbers. We derive polynomial-time algorithms in this way for a number of problems for which only exponential-time algorithms were known before. General questions about complexity classes can also be formulated. If the method is applied to a #P-complete problem, then polynomial systems can be obtained the solvability of which would imply P^{#P} = NC^2.
Parameterized complexity and approximation algorithms
Comput. J., 2006
Abstract

Cited by 56 (2 self)
Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than either theory could offer on its own. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type, and propose directions for future research.