Results 1–10 of 138
Scalable error detection using Boolean satisfiability
 In Proc. 32nd POPL. ACM
, 2005
Cited by 117 (10 self)
We describe a software error-detection tool that exploits recent advances in Boolean satisfiability (SAT) solvers. Our analysis is path sensitive, precise down to the bit level, and models pointers and heap data. Our approach is also highly scalable, which we achieve using two techniques. First, for each program function, several optimizations compress the size of the Boolean formulas that model the control flow, data flow, and heap locations accessed by a function. Second, summaries in the spirit of type signatures are computed for each function, allowing interprocedural analysis without a dramatic increase in the size of the Boolean constraints to be solved. We demonstrate the effectiveness of our approach by constructing a lock interface inference and checking tool. In an interprocedural analysis of more than 23,000 lock-related functions in the latest Linux kernel, the checker generated 300 warnings, of which 179 were unique locking errors, a false positive rate of only 40%.
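The path-sensitive SAT query at the heart of such a checker can be illustrated with a toy sketch: path and error conditions are encoded as clauses, and a warning is emitted if their conjunction is satisfiable. The variable meanings and the brute-force solver below are illustrative stand-ins, not the paper's actual tool:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check over DIMACS-style clauses (positive int = var true,
    negative int = var false). Stands in for a real SAT solver on toy inputs."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Hypothetical lock-checking query: var 1 = "branch taken", var 2 = "lock held
# after stmt A", var 3 = "lock acquired again at stmt B".  A double-acquire
# error is reachable iff the path condition is satisfiable.
clauses = [[1], [-1, 2], [-2, 3]]   # branch taken -> lock held -> re-acquire
print(satisfiable(clauses, 3))      # True: a satisfying path exists => warning
```

A real tool would compress these formulas per function and replace them with summaries at call sites, as the abstract describes.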
TestEra: A Novel Framework for Automated Testing of Java Programs
, 2001
Cited by 114 (32 self)
We present TestEra, a novel framework for automated testing of Java programs. TestEra automatically generates all non-isomorphic test cases, within a given input size, and evaluates correctness criteria. As an enabling technology, TestEra uses Alloy, a first-order relational language, and the Alloy Analyzer. Checking a program with TestEra involves modeling the correctness criteria for the program in Alloy and specifying abstraction and concretization translations between instances of Alloy models and Java data structures. TestEra produces concrete Java inputs as counterexamples to violated correctness criteria. This paper discusses TestEra's analyses of several case studies: methods that manipulate singly linked lists and red-black trees, a naming architecture, and a part of the Alloy Analyzer.
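Bounded-exhaustive generation of all structurally distinct inputs, the idea TestEra realizes via Alloy, can be sketched in plain Python; the binary-tree generator below is an illustrative stand-in, not TestEra's Alloy translation machinery:

```python
def all_trees(n):
    """Enumerate all structurally distinct binary-tree shapes with n nodes,
    in the spirit of bounded-exhaustive test generation."""
    if n == 0:
        return [None]
    trees = []
    for left in range(n):                       # nodes given to the left subtree
        for l in all_trees(left):
            for r in all_trees(n - 1 - left):
                trees.append((l, r))
    return trees

def size(t):
    return 0 if t is None else 1 + size(t[0]) + size(t[1])

# Check a correctness criterion over every input within each bounded scope,
# as TestEra does with Alloy-generated instances.
for n in range(5):
    assert all(size(t) == n for t in all_trees(n))
print(len(all_trees(4)))  # 14 distinct shapes (the Catalan number C4)
```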
The Model Evolution Calculus
, 2003
Cited by 110 (19 self)
The DPLL procedure is the basis of some of the most successful propositional satisfiability solvers to date. Although originally devised as a proof procedure for first-order logic, it has been used almost exclusively for propositional logic so far because of its highly inefficient treatment of quantifiers, based on instantiation into ground formulas. The recent FDPLL calculus by Baumgartner was the first successful attempt to lift the procedure to the first-order level without resorting to ground instantiations. FDPLL lifts to the first-order case the core of the DPLL procedure, the splitting rule, but ignores other aspects of the procedure that, although not necessary for completeness, are crucial for its effectiveness in practice. In this paper, we present a new calculus loosely based on FDPLL that lifts these aspects as well. In addition to being a more faithful lifting of the DPLL procedure, the new calculus contains a more systematic treatment of universal literals, one of FDPLL's optimizations, and so has the potential of leading to much faster implementations.
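The splitting rule and unit propagation that form the propositional core of DPLL, which FDPLL and the Model Evolution calculus lift to first order, can be written in a few lines; this is a minimal propositional sketch, not the first-order calculus itself:

```python
def dpll(clauses):
    """Minimal DPLL: unit propagation plus the splitting rule.
    Clauses are lists of nonzero ints (negative = negated variable)."""
    clauses = [set(c) for c in clauses]
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    while units:                          # unit propagation
        lit = units.pop()
        new = []
        for c in clauses:
            if lit in c:
                continue                  # clause satisfied, drop it
            c = c - {-lit}                # remove the falsified literal
            if not c:
                return False              # empty clause: conflict
            if len(c) == 1:
                units.append(next(iter(c)))
            new.append(c)
        clauses = new
    if not clauses:
        return True                       # all clauses satisfied
    lit = next(iter(clauses[0]))          # the splitting rule: branch on a literal
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

print(dpll([[1, 2], [-1, 2], [-2]]))      # False: the formula is unsatisfiable
```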
Deriving Operational Software Specifications from System Goals
, 2002
Cited by 85 (8 self)
Goal orientation is an increasingly recognized paradigm for eliciting, modeling, specifying and analyzing software requirements. Goals are statements of intent organized in AND/OR refinement structures; they range from high-level, strategic concerns to low-level, technical requirements on the software-to-be and assumptions on its environment. The operationalization of system goals into specifications of software services is a core aspect of the requirements elaboration process for which little systematic and constructive support is available. In particular, most formal methods assume such operational specifications to be given and focus on their a posteriori analysis.
The paper considers a formal, constructive approach in which operational software specifications are built incrementally from higher-level goal formulations in a way that guarantees their correctness by construction. The operationalization process is based on formal derivation rules that map goal specifications to specifications of software operations; more specifically, these rules map real-time temporal logic specifications to sets of pre-, post- and trigger conditions. The rules define operationalization patterns that may be used for guiding and documenting the operationalization process while hiding all formal reasoning details; the patterns are formally proved correct once and for all. The catalog of operationalization patterns is structured according to a rich taxonomy of goal specification patterns.
Our constructive approach to requirements elaboration requires a multi-paradigm specification language that supports incremental reasoning about partial models. The paper also provides a formal semantics for goal operationalization and discusses several semantic features of our language that allow for such incremental reasoning.
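The pre/post/trigger shape of an operationalized specification can be illustrated by checking an obligation along a state trace; the names and semantics below are a simplified sketch, not the paper's formal derivation rules:

```python
def check_operation(trace, trigger, pre, post):
    """Trigger obliges the operation: whenever the trigger and the precondition
    hold in a state, the next state must satisfy the postcondition."""
    for s, s_next in zip(trace, trace[1:]):
        if trigger(s) and pre(s):
            if not post(s_next):
                return False
    return True

# Hypothetical goal in the style of Achieve[DoorsClosedWhileMoving]: if the
# train is moving (trigger) and the doors are closed (pre), they stay closed (post).
trace = [{"moving": False, "door_closed": True},
         {"moving": True,  "door_closed": True},
         {"moving": True,  "door_closed": True}]
ok = check_operation(trace,
                     trigger=lambda s: s["moving"],
                     pre=lambda s: s["door_closed"],
                     post=lambda s: s["door_closed"])
print(ok)   # True: the obligation holds along this trace
```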
Boosting Verification by Automatic Tuning of Decision Procedures
 In Seventh International Conference on Formal Methods in Computer-Aided Design
, 2007
Cited by 57 (37 self)
Parameterized heuristics abound in computer aided design and verification, and manual tuning of the respective parameters is difficult and time-consuming. Very recent results from the artificial intelligence (AI) community suggest that this tuning process can be automated, and that doing so can lead to significant performance improvements; furthermore, automated parameter optimization can provide valuable guidance during the development of heuristic algorithms. In this paper, we study how such an AI approach can improve a state-of-the-art SAT solver for large, real-world bounded model-checking and software verification instances. The resulting, automatically derived parameter settings yielded runtimes on average 4.5 times faster on bounded model checking instances and 500 times faster on software verification problems than extensive hand-tuning of the decision procedure. Furthermore, the availability of automatic tuning influenced the design of the solver, and the automatically derived parameter settings provided a deeper insight into the properties of problem instances.
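The tuning loop can be sketched as a search over a parameter space against a cost function. Real tuners in this line of work use local search (e.g. ParamILS) rather than random sampling, but the interface is similar; the parameter names and the synthetic runtime model below are hypothetical:

```python
import random

def tune(param_space, cost, trials=500, seed=0):
    """Random-search tuner: sample configurations and keep the cheapest one.
    `cost` stands in for mean runtime over a benchmark set of instances."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in param_space.items()}
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

# Hypothetical solver parameters and a synthetic runtime model with a
# unique optimum at restart_interval=128, var_decay=0.95.
space = {"restart_interval": [32, 64, 128, 256],
         "var_decay": [0.85, 0.91, 0.95, 0.99]}
runtime = lambda c: abs(c["restart_interval"] - 128) + 100 * abs(c["var_decay"] - 0.95)
cfg, best = tune(space, runtime)
print(cfg, best)   # with 500 trials over 16 configurations, the optimum is found
```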
Specifying and reasoning about dynamic accesscontrol policies
 In Lecture Notes in Computer Science
, 2006
Cited by 51 (3 self)
Access-control policies have grown from simple matrices to nontrivial specifications written in sophisticated languages. The increasing complexity of these policies demands correspondingly strong automated reasoning techniques for understanding and debugging them. The need for these techniques is even more pressing given the rich and dynamic nature of the environments in which these policies are evaluated. We define a framework to represent the behavior of access-control policies in a dynamic environment. We then specify several interesting, decidable analyses using first-order temporal logic. Our work illustrates the subtle interplay between logical and state-based methods, particularly in the presence of three-valued policies. We also define a notion of policy equivalence that is especially useful for modular reasoning.
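Three-valued policy decisions and their combination, a key source of the subtlety the abstract mentions, can be sketched as follows; the rules and combinator are illustrative, not the paper's policy language:

```python
# Three-valued policy decisions, as in XACML-style languages: a rule may
# permit, deny, or be not applicable, and a policy combines its rules.
PERMIT, DENY, NA = "permit", "deny", "not_applicable"

def first_applicable(rules, request):
    """Combine rules under first-applicable semantics, one common combinator."""
    for rule in rules:
        d = rule(request)
        if d != NA:
            return d
    return NA

# Hypothetical dynamic environment: whether a paper is assigned changes over
# time, so the same request can get different decisions in different states.
def reviewer_rule(req):
    if req["role"] == "reviewer" and req["assigned"]:
        return PERMIT
    return NA

def default_deny(req):
    return DENY

policy = [reviewer_rule, default_deny]
print(first_applicable(policy, {"role": "reviewer", "assigned": True}))   # permit
print(first_applicable(policy, {"role": "reviewer", "assigned": False}))  # deny
```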
Memory-efficient inference in relational domains
 In Proceedings of the Twenty-First National Conference on Artificial Intelligence
, 2006
Cited by 45 (9 self)
Propositionalization of a first-order theory followed by satisfiability testing has proved to be a remarkably efficient approach to inference in relational domains such as planning (Kautz & Selman 1996) and verification (Jackson 2000). More recently, weighted satisfiability solvers have been used successfully for MPE inference in statistical relational learners (Singla & Domingos 2005). However, fully instantiating a finite first-order theory requires memory on the order of the number of constants raised to the arity of the clauses, which significantly limits the size of domains it can be applied to. In this paper we propose LazySAT, a variation of the WalkSAT solver that avoids this blowup by taking advantage of the extreme sparseness that is typical of relational domains (i.e., only a small fraction of ground atoms are true, and most clauses are trivially satisfied). Experiments on entity resolution and planning problems show that LazySAT reduces memory usage by orders of magnitude compared to WalkSAT, while taking comparable time to run and producing the same solutions.
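The sparseness argument behind LazySAT can be illustrated by contrasting eager grounding with lazily materializing only the clauses whose body atoms are actually true; this is a toy sketch of the memory saving, not the LazySAT solver itself:

```python
from itertools import product

# Clause schema Friends(x,y) & Friends(y,z) => Friends(x,z), grounded over a
# domain of n constants.  Eager grounding materializes n^3 clauses.
def eager_ground(n):
    return [(("F", x, y), ("F", y, z), ("F", x, z))
            for x, y, z in product(range(n), repeat=3)]

def active_clauses(n, true_atoms):
    """Lazily yield only groundings that are not trivially satisfied: in sparse
    domains most atoms are false, so a clause with a false body atom is
    already satisfied and never needs to be stored."""
    for x, y, z in product(range(n), repeat=3):
        if ("F", x, y) in true_atoms and ("F", y, z) in true_atoms:
            yield (("F", x, y), ("F", y, z), ("F", x, z))

n = 20
true_atoms = {("F", 0, 1), ("F", 1, 2)}
print(len(eager_ground(n)))                       # 8000 ground clauses
print(len(list(active_clauses(n, true_atoms))))   # only 1 needs attention
```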
Compiling Problem Specifications into SAT
, 2001
Cited by 44 (12 self)
We present a compiler that translates a problem specification into a propositional satisfiability test (SAT). Problems are specified in a logic-based language, called NPSPEC, which allows the definition of complex problems in a highly declarative way, and whose expressive power is such that it captures exactly the problems in the complexity class NP. The target SAT instance is solved using any of the various state-of-the-art solvers available from the community. The result is an executable specification language for all NP problems with interesting computational properties. The performance of the system has been tested on a few classical problems, namely graph coloring, Hamiltonian cycle, and job-shop scheduling. © Springer-Verlag. To appear in the Proceedings of the European Symposium on Programming (ESOP 2001), Genova, Italy, 2001. Lecture Notes in Computer Science, Springer-Verlag, 2001. ftp://ftp.dis.uniroma1.it/pub/ai/papers/cadoscha01.ps.gz
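A compilation of this kind can be sketched for graph coloring: the standard CNF encoding that a specification compiler would emit, checked here by brute force. This is an illustrative translation, not NPSPEC's actual output:

```python
from itertools import product

def coloring_to_cnf(edges, n_nodes, k):
    """Graph k-coloring as CNF: variable var(n, c) means 'node n gets colour c'."""
    var = lambda n, c: n * k + c + 1                       # DIMACS-style ints
    cnf = []
    for n in range(n_nodes):
        cnf.append([var(n, c) for c in range(k)])          # at least one colour
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                cnf.append([-var(n, c1), -var(n, c2)])     # at most one colour
    for (a, b) in edges:
        for c in range(k):
            cnf.append([-var(a, c), -var(b, c)])           # endpoints differ
    return cnf

# A triangle needs 3 colours, so with k=2 the CNF must be unsatisfiable.
triangle = [(0, 1), (1, 2), (0, 2)]
cnf = coloring_to_cnf(triangle, 3, 2)
print(len(cnf))   # 3 + 3 + 6 = 12 clauses

nvars = 3 * 2
unsat = not any(
    all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf)
    for bits in product([False, True], repeat=nvars))
print(unsat)      # True: the triangle is not 2-colourable
```

In practice the generated CNF would be handed to an off-the-shelf SAT solver instead of the brute-force check shown here.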
Evaluating QBFs via symbolic Skolemization
 In Int. Conf. on Logic for Programming, Artificial Intelligence and Reasoning (LPAR'04)
Cited by 41 (9 self)
We describe a novel decision procedure for Quantified Boolean Formulas (QBFs) which aims to unleash the hidden potential of quantified reasoning in applications. The Skolem theorem acts like a glue holding several ingredients together: BDD-based representations for Boolean functions, search-based QBF decision procedures, and compilation-to-SAT techniques, among others. To leverage all these techniques at once, we show how to evaluate QBFs by symbolically reasoning on a compact representation of the propositional expansion of the Skolemized problem. We also report on a first implementation of the procedure, which yields very interesting experimental results.
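The propositional expansion that the procedure reasons about symbolically can be shown naively: a universal variable expands to a conjunction of both instantiations, an existential one to a disjunction. This brute-force sketch is exactly the blowup the paper's symbolic representation avoids:

```python
def eval_qbf(prefix, matrix, env=None):
    """Evaluate a QBF by expansion: forall x. phi == phi[x=0] and phi[x=1];
    exists x. phi == phi[x=0] or phi[x=1]."""
    env = env or {}
    if not prefix:
        return matrix(env)
    (q, v), rest = prefix[0], prefix[1:]
    vals = [eval_qbf(rest, matrix, {**env, v: b}) for b in (False, True)]
    return all(vals) if q == "forall" else any(vals)

# Quantifier order matters: forall x exists y. (x <-> y) is true (y copies x),
# but exists y forall x. (x <-> y) is false (no fixed y matches both x values).
phi = lambda e: e["x"] == e["y"]
print(eval_qbf([("forall", "x"), ("exists", "y")], phi))  # True
print(eval_qbf([("exists", "y"), ("forall", "x")], phi))  # False
```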
Event calculus reasoning through satisfiability
 Journal of Logic and Computation
, 2004
Cited by 36 (8 self)
This is a pre-copyediting, author-produced PDF of an article accepted for publication in the Journal of Logic and Computation following peer review. The definitive publisher-authenticated version (Mueller, Erik T. (2004). Event calculus reasoning through satisfiability. Journal of Logic and Computation, 14(5), 703–730.) is available online at: