Results 11–20 of 1,050
Free Bits, PCPs and Non-Approximability – Towards Tight Results
, 1996
Cited by 212 (39 self)
This paper continues the investigation of the connection between proof systems and approximation. The emphasis is on proving tight non-approximability results via consideration of measures like the "free bit complexity" and the "amortized free bit complexity" of proof systems.
The Tractability of Subsumption in Frame-Based Description Languages
, 1983
Cited by 205 (5 self)
A knowledge representation system provides an important service to the rest of a knowledge-based system: it computes automatically a set of inferences over the beliefs encoded within it. Given that the knowledge-based system relies on these inferences in the midst of its operation (i.e., its diagnosis, planning, or whatever), their computational tractability is an important concern. Here we present evidence as to how the cost of computing one kind of inference is directly related to the expressiveness of the representation language. As it turns out, this cost is perilously sensitive to small changes in the representation language. Even a seemingly simple frame-based description language can pose intractable computational obstacles.
Automated analysis of feature models 20 years later: A literature review
 INFORMATION SYSTEMS
, 2010
Cited by 186 (20 self)
Software product line engineering is about producing a set of related products that share more commonalities than variabilities. Feature models are widely used for variability and commonality management in software product lines. Feature models are information models where a set of products is represented as a set of features in a single model. The automated analysis of feature models deals with the computer-aided extraction of information from feature models. The literature on this topic has contributed a set of operations, techniques, tools and empirical results which have not been surveyed until now. This paper provides a comprehensive literature review on the automated analysis of feature models 20 years after their invention. This paper contributes by bringing together previously disparate streams of work to help shed light on this thriving area. We also present a conceptual framework to understand the different proposals as well as categorise future contributions. We finally discuss the different studies and propose some challenges to be faced in the future.
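To make the notion of automated analysis concrete, here is a minimal sketch (not from the paper; the toy model and feature names are invented): a tiny feature model encoded as a propositional validity check, with its products enumerated by brute force. Real analyses use SAT, CSP, or BDD solvers rather than enumeration.

```python
from itertools import product

# Hypothetical toy feature model (names invented for illustration):
# root "car" with mandatory child "engine", optional child "gps",
# and an xor group {gas, electric} under "engine".
features = ["car", "engine", "gps", "gas", "electric"]

def valid(cfg):
    """cfg maps feature name -> selected?; encodes the usual FM semantics."""
    return (
        cfg["car"]                                   # root is always selected
        and cfg["engine"] == cfg["car"]              # mandatory child
        and (not cfg["gps"] or cfg["car"])           # optional child implies parent
        and (cfg["gas"] or cfg["electric"]) == cfg["engine"]  # group filled iff parent
        and not (cfg["gas"] and cfg["electric"])     # xor: at most one alternative
    )

products = [dict(zip(features, bits))
            for bits in product([False, True], repeat=len(features))
            if valid(dict(zip(features, bits)))]
print(len(products))  # -> 4 distinct products ({gps?} x {gas | electric})
```

Counting products ("number of products") is one of the standard analysis operations the survey catalogues; others, such as detecting dead features or void models, reduce to similar satisfiability questions over the same encoding.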
Knowledge compilation and theory approximation
 Journal of the ACM
, 1996
Cited by 185 (5 self)
Computational efficiency is a central concern in the design of knowledge representation systems. In order to obtain efficient systems, it has been suggested that one should limit the form of the statements in the knowledge base or use an incomplete inference mechanism. The former approach is often too restrictive for practical applications, whereas the latter leads to uncertainty about exactly what can and cannot be inferred from the knowledge base. We present a third alternative, in which knowledge given in a general representation language is translated (compiled) into a tractable form, allowing for efficient subsequent query answering. We show how propositional logical theories can be compiled into Horn theories that approximate the original information. The approximations bound the original theory from below and above in terms of logical strength. The procedures are extended to other tractable languages (for example, binary clauses) and to the first-order case. Finally, we demonstrate the generality of our approach by compiling concept descriptions in a general frame-based language into a tractable form.
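As a small illustration of why Horn theories are a worthwhile compilation target (a sketch, not the paper's compilation procedure; the rules are invented examples): forward chaining over a definite Horn theory answers queries in time polynomial in the theory size, which is what makes query answering against a Horn approximation efficient.

```python
def horn_consequences(facts, rules):
    """Forward chaining over definite Horn clauses given as (body, head)
    pairs: returns every atom derivable from the facts. Runs in polynomial
    time (linear with better bookkeeping), unlike general propositional
    entailment, which is co-NP-hard."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

# Invented example theory: bird -> flies; flies & hungry -> hunts.
rules = [(("bird",), "flies"), (("flies", "hungry"), "hunts")]
print(sorted(horn_consequences({"bird", "hungry"}, rules)))
# -> ['bird', 'flies', 'hungry', 'hunts']
```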
Improvements To Propositional Satisfiability Search Algorithms
, 1995
Cited by 174 (0 self)
... quickly across a wide range of hard SAT problems than any other SAT tester in the literature on comparable platforms. On a Sun SPARCstation 10 running SunOS 4.1.3 U1, POSIT can solve hard random 400-variable 3-SAT problems in about 2 hours on average. In general, it can solve hard n-variable random 3-SAT problems with search trees of size O(2^{n/18.7}). In addition to justifying these claims, this dissertation describes the most significant achievements of other researchers in this area, and discusses all of the widely known general techniques for speeding up SAT search algorithms. It should be useful to anyone interested in NP-complete problems or combinatorial optimization in general, and it should be particularly useful to researchers in either Artificial Intelligence or Operations Research.
Practical Dependence Testing
, 1991
Cited by 148 (16 self)
Precise and efficient dependence tests are essential to the effectiveness of a parallelizing compiler. This paper proposes a dependence testing scheme based on classifying pairs of subscripted variable references. Exact yet fast dependence tests are presented for certain classes of array references, as well as empirical results showing that these references dominate scientific Fortran codes. These dependence tests are being implemented at Rice University in both PFC, a parallelizing compiler, and ParaScope, a parallel programming environment.
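For illustration, a sketch of the classic GCD test, representative of the simple exact tests used in this style of classification scheme (the helper name is ours, not the paper's): subscripts x*i + y and z*j + w can only refer to the same array element if gcd(x, z) divides w - y. The test ignores loop bounds, so it can only rule dependence out, never confirm it.

```python
from math import gcd

def gcd_test_may_depend(x, y, z, w):
    """GCD test for references a[x*i + y] and a[z*j + w]:
    a common element requires x*i - z*j == w - y, which has integer
    solutions iff gcd(x, z) divides (w - y). Loop bounds are ignored,
    so False proves independence but True proves nothing."""
    return (w - y) % gcd(x, z) == 0

# a[2*i] vs a[2*j + 1]: even vs odd subscripts can never collide.
print(gcd_test_may_depend(2, 0, 2, 1))  # -> False: provably independent
# a[4*i] vs a[2*j]: gcd(4, 2) = 2 divides 0, so dependence is not ruled out.
print(gcd_test_may_depend(4, 0, 2, 0))  # -> True
```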
Algorithms for the Satisfiability (SAT) Problem: A Survey
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Cited by 145 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the sat...
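The variable-splitting family of algorithms the survey covers can be sketched in a few lines: a minimal DPLL-style solver, assuming the usual encoding of literals as signed integers (this is an illustrative sketch, not an implementation from the survey).

```python
def assign(clauses, lit):
    """Simplify under 'lit is true': drop satisfied clauses, shrink the rest."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses):
    """clauses: list of sets of nonzero ints; a negative literal is a negation."""
    if not clauses:
        return True                        # every clause satisfied
    if any(not c for c in clauses):
        return False                       # empty clause: conflict
    for c in clauses:                      # unit propagation
        if len(c) == 1:
            (lit,) = c
            return dpll(assign(clauses, lit))
    lit = next(iter(clauses[0]))           # splitting rule: branch on a literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))

print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))   # -> True  (satisfiable)
print(dpll([{1}, {-1}]))                  # -> False (contradictory unit clauses)
```

Resolution, local search, and the optimization-based formulations the survey describes attack the same problem from entirely different directions; this recursive splitting shape is only one point in the algorithm space.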
On the Complexity of Qualitative Spatial Reasoning: A Maximal Tractable Fragment of the Region Connection Calculus
 Artificial Intelligence
, 1997
Cited by 144 (23 self)
The computational properties of qualitative spatial reasoning have been investigated to some degree. However, the question of the boundary between polynomial and NP-hard reasoning problems has not been addressed yet. In this paper we explore this boundary in the "Region Connection Calculus" RCC-8. We extend Bennett's encoding of RCC-8 in modal logic. Based on this encoding, we prove that reasoning is NP-complete in general and identify a maximal tractable subset of the relations in RCC-8 that contains all base relations. Further, we show that for this subset path-consistency is sufficient for deciding consistency.
1 Introduction
When describing a spatial configuration or when reasoning about such a configuration, often it is not possible or desirable to obtain precise, quantitative data. In these cases, qualitative reasoning about spatial configurations may be used. One particular approach in this context has been developed by Randell, Cui, and Cohn [20], the so-called Region Connecti...
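Path-consistency, the central algorithmic tool here, can be illustrated on the much simpler point algebra {<, =, >} instead of RCC-8, whose 8x8 composition table is too long to reproduce: repeatedly refine R_ij to its intersection with the composition of R_ik and R_kj until a fixpoint, or until some relation becomes empty (inconsistency). A sketch of the generic algorithm, not the paper's procedure:

```python
# Composition table for the point algebra: COMP[(a, b)] is the set of
# possible relations between x and z given x a y and y b z.
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'}}

def compose(r, s):
    out = set()
    for a in r:
        for b in s:
            out |= COMP[(a, b)]
    return out

def path_consistent(n, rel):
    """rel[(i, j)]: set of allowed base relations between points i and j.
    Refines rel in place; returns False if some relation is wiped out."""
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    if k == i or k == j:
                        continue
                    refined = rel[(i, j)] & compose(rel[(i, k)], rel[(k, j)])
                    if not refined:
                        return False        # empty relation: inconsistent network
                    if refined != rel[(i, j)]:
                        rel[(i, j)] = refined
                        changed = True
    return True

# x < y, y < z, but z < x: a cyclic ordering, detected as inconsistent.
rel = {(i, j): ({'='} if i == j else {'<', '=', '>'})
       for i in range(3) for j in range(3)}
rel[(0, 1)] = {'<'}; rel[(1, 2)] = {'<'}; rel[(2, 0)] = {'<'}
print(path_consistent(3, rel))  # -> False
```

For the point algebra, as for the tractable RCC-8 subset identified in the paper, path-consistency suffices to decide consistency; in general it is only a necessary condition.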
On problems without polynomial kernels
 LECT. NOTES COMPUT. SCI
, 2007
Cited by 143 (17 self)
Kernelization is a strong and widely applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parameter bounded by a function of the parameter in the input. A kernel is polynomial if the size and parameter of the output are polynomially bounded by the parameter of the input. In this paper we develop a framework which allows showing that a wide range of FPT problems do not have polynomial kernels. Our evidence relies on hypotheses made in the classical world (i.e. non-parametric complexity), and revolves around a new type of algorithm for classical decision problems, called a distillation algorithm, which might be of independent interest. Using the notion of distillation algorithms, we develop a generic lower-bound engine which allows us to show that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomial hierarchy collapses. These problems include k-Path, k-Cycle, k-Exact Cycle, k-Short Cheap Tour, k-Graph Minor Order Test, k-Cutwidth, k-Search Number, k-Pathwidth, k-Treewidth, k-Branchwidth, and several optimization problems parameterized by treewidth or cliquewidth.
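For contrast with the paper's negative results, the standard positive example of the kernelization notion defined above is Buss's kernel for k-Vertex-Cover, a problem that does have a polynomial kernel: any vertex of degree greater than k must be in the cover, and once no such vertex remains, a yes-instance has at most k^2 edges. A sketch of that classic reduction (not from the paper):

```python
def buss_kernel(edges, k):
    """Buss kernelization for k-Vertex-Cover.
    Returns (reduced_edges, k') or None if the reduction already
    proves there is no vertex cover of size k."""
    edges = {frozenset(e) for e in edges}
    while True:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, d in deg.items() if d > k]
        if not high:
            break
        v = high[0]
        edges = {e for e in edges if v not in e}  # v is forced into the cover
        k -= 1
        if k < 0:
            return None
    if len(edges) > k * k:
        return None   # more than k^2 edges remain: provably a no-instance
    return edges, k

# A star with 5 leaves and k = 1: the center is forced, leaving an empty kernel.
print(buss_kernel([(0, i) for i in range(1, 6)], 1))  # -> (set(), 0)
```

The output instance has size bounded by a polynomial in k alone, which is exactly the property the distillation framework shows cannot hold for k-Path and the other problems listed above, unless the polynomial hierarchy collapses.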