Results 1 – 7 of 7
Symbolic Implementation of the Best Transformer, 2004
Cited by 53 (18 self)
Abstract:
This paper shows how to achieve, under certain conditions, abstract-interpretation algorithms that enjoy the best possible precision for a given abstraction. The key idea is a simple process of successive approximation that makes repeated calls to a decision procedure, and obtains the best abstract value for a set of concrete stores that are represented symbolically, using a logical formula.
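The successive-approximation loop described in this abstract can be illustrated on a toy sign domain. This is only a sketch under assumed names (`alpha`, `gamma`, `best_abstract_value` are not from the paper), and a brute-force search over a small finite universe stands in for the decision procedure:

```python
# A minimal sketch of the successive-approximation idea on a toy sign
# domain. A brute-force search over a small finite universe stands in
# for the decision procedure; all names here are illustrative.

UNIVERSE = range(-10, 11)  # stand-in for the concrete domain

def alpha(n):
    """Abstract a concrete value into the sign domain."""
    return "neg" if n < 0 else ("zero" if n == 0 else "pos")

def gamma(abstract):
    """Concretize an abstract value (a set of signs)."""
    return {n for n in UNIVERSE if alpha(n) in abstract}

def find_model_outside(phi, abstract):
    """Decision-procedure stand-in: find a concrete store satisfying
    phi that the current abstract value does not cover."""
    for n in UNIVERSE:
        if phi(n) and n not in gamma(abstract):
            return n
    return None

def best_abstract_value(phi):
    """Start from bottom and repeatedly join in uncovered models
    reported by the decision procedure."""
    abstract = set()  # bottom element of the sign lattice
    while (m := find_model_outside(phi, abstract)) is not None:
        abstract |= {alpha(m)}
    return abstract

# Stores satisfying x*x > 4, i.e. x < -2 or x > 2: best value is {neg, pos}.
print(sorted(best_abstract_value(lambda n: n * n > 4)))  # ['neg', 'pos']
```

The loop terminates because each oracle call strictly enlarges the abstract value in a finite lattice, and the result is the least abstract value whose concretization covers all models of the formula.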
Finite differencing of logical formulas for static analysis, in Proc. 12th ESOP, 2003
Cited by 37 (17 self)
Abstract:
This paper concerns mechanisms for maintaining the value of an instrumentation predicate (a.k.a. derived predicate or view), defined via a logical formula over core predicates, in response to changes in the values of the core predicates. It presents an algorithm for transforming the instrumentation predicate's defining formula into a predicate-maintenance formula that captures what the instrumentation predicate's new value should be. This technique applies to program-analysis problems in which the semantics of statements is expressed using logical formulas that describe changes to core-predicate values, and provides a way to reflect those changes in the values of the instrumentation predicates.
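The view-maintenance idea can be sketched on the simplest possible instrumentation predicate. The example below is an assumed illustration, not the paper's algorithm: the core predicate is `edge`, the derived predicate is `hasSucc(v) := exists w. edge(v, w)`, and a maintenance rule updates only the node whose edges changed instead of recomputing the view globally:

```python
# Toy sketch (assumed names) of maintaining a derived predicate under
# core-predicate updates: hasSucc(v) := exists w. edge(v, w).
# A maintenance rule touches only the node whose edges changed, rather
# than re-evaluating the defining formula at every node.

class Graph:
    def __init__(self):
        self.succ = {}        # core predicate "edge", as adjacency sets
        self.has_succ = {}    # derived predicate, maintained incrementally

    def add_edge(self, v, w):
        self.succ.setdefault(v, set()).add(w)
        # Maintenance rule for insertion: hasSucc'(v) = hasSucc(v) or true.
        self.has_succ[v] = True

    def remove_edge(self, v, w):
        self.succ.get(v, set()).discard(w)
        # Maintenance rule for deletion: only v's value can change, and
        # only by re-evaluating the defining formula locally at v.
        self.has_succ[v] = bool(self.succ.get(v))

g = Graph()
g.add_edge(1, 2); g.add_edge(1, 3)
g.remove_edge(1, 2)
print(g.has_succ[1])  # True (edge 1 -> 3 remains)
g.remove_edge(1, 3)
print(g.has_succ[1])  # False
```

The point of the paper's finite-differencing transformation is to derive such maintenance rules automatically from the defining formula; here the rule is written by hand for a trivially simple view.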
Abstraction for shape analysis with fast and precise transformers, in CAV, 2006
Cited by 24 (3 self)
Abstract:
This paper addresses the problem of proving safety properties of imperative programs manipulating dynamically allocated data structures using destructive pointer updates. We present a new abstraction for linked data structures whose underlying graphs do not contain cycles. The abstraction is simple and allows us to decide reachability between dynamically allocated heap cells. We present an efficient algorithm that computes the effect of low-level heap mutations in the most precise way. The algorithm does not rely on the usage of a theorem prover. In particular, the worst-case complexity of computing a single successor abstract state is O(V log V), although the number of states can be exponential in V. A prototype of the algorithm was implemented and is shown to be fast. Our method also handles programs with “simple cycles” such as cyclic singly-linked lists, (cyclic) doubly-linked lists, and trees with parent pointers. Moreover, we allow programs which temporarily violate these restrictions as long as they are restored at loop boundaries.
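The reachability property this abstraction tracks is easy to state concretely. The sketch below is not the paper's abstract-domain algorithm; it is an assumed toy representation showing why acyclicity makes reachability between heap cells decidable by a simple pointer walk, and what a destructive update looks like:

```python
# Toy sketch (assumed representation): a heap of singly-linked cells as
# a next-pointer map. Acyclicity of the underlying graph guarantees the
# walk terminates; a destructive update x.next := y is one assignment.

def reachable(heap, a, b):
    """Is cell b reachable from cell a via next pointers?"""
    while a is not None:
        if a == b:
            return True
        a = heap.get(a)  # follow the next pointer; acyclic, so this ends
    return False

heap = {"x": "y", "y": "z", "z": None}   # x -> y -> z
print(reachable(heap, "x", "z"))          # True

heap["x"] = "z"                           # destructive update: x.next := z
print(reachable(heap, "x", "y"))          # False: y is cut off from x
```

The paper's contribution is computing the effect of such mutations directly on abstract states, precisely and without a theorem prover; this sketch only shows the concrete semantics being abstracted.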
On the complexity of semantic self-minimization, in Proc., 2007
Cited by 4 (4 self)
Abstract:
Partial Kripke structures model only parts of a state space and so enable aggressive abstraction of systems prior to verifying them with respect to a formula of temporal logic. This partiality of models means that verifications may reply with true (all refinements satisfy the formula under check), false (no refinement satisfies the formula under check), or don’t know. Generalized model checking is the most precise verification for such models (all don’t-know answers imply that some refinements satisfy the formula and some don’t), but computationally expensive. A compositional model-checking algorithm for partial Kripke structures is efficient and sound (all answers true and false are truthful), but may lose precision by answering don’t know instead of a factual true or false. Recent work has shown that such a loss of precision does not occur for this compositional algorithm for most practically relevant patterns of temporal logic formulas. Formulas that never lose precision in this manner are called semantically self-minimizing. In this paper we provide a systematic study of the complexity of deciding whether a formula of propositional logic, propositional modal logic, or the propositional modal mu-calculus is semantically self-minimizing.
Keywords: 3-valued model checking, partial state spaces, computational complexity, supervaluations.
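The gap between compositional and thorough evaluation is visible already for propositional logic over partial assignments. The sketch below (assumed encoding, not from the paper) contrasts Kleene-style compositional evaluation with the supervaluation semantics that checks all completions; a formula is semantically self-minimizing when the two always agree, and `p ∨ ¬p` shows they need not:

```python
# Toy sketch (assumed encoding): compositional 3-valued (Kleene)
# evaluation vs. supervaluation over all completions of a partial
# assignment, for propositional formulas built from var/not/or.

from itertools import product

def kleene(f, env):
    """Compositional evaluation; env maps vars to True/False/None."""
    op = f[0]
    if op == "var":
        return env[f[1]]
    if op == "not":
        v = kleene(f[1], env)
        return None if v is None else (not v)
    if op == "or":
        a, b = kleene(f[1], env), kleene(f[2], env)
        if a is True or b is True:
            return True
        if a is False and b is False:
            return False
        return None  # don't know

def supervaluation(f, env):
    """Thorough evaluation: quantify over all completions of env."""
    unknowns = [x for x, v in env.items() if v is None]
    results = set()
    for bits in product([False, True], repeat=len(unknowns)):
        total = dict(env, **dict(zip(unknowns, bits)))
        results.add(kleene(f, total))
    return results.pop() if len(results) == 1 else None

# p or not p: compositional evaluation loses precision, thorough does
# not, so this formula is NOT semantically self-minimizing.
f = ("or", ("var", "p"), ("not", ("var", "p")))
print(kleene(f, {"p": None}))          # None (don't know)
print(supervaluation(f, {"p": None}))  # True
```

The paper studies how hard it is to decide, for a given formula, whether such a precision loss can ever occur; this sketch only exhibits one formula where it does.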
On the Consistency, Expressiveness, and Precision of Partial Modeling Formalisms, 2011
Cited by 1 (1 self)
Abstract:
Partial transition systems support abstract model checking of complex temporal properties by combining both over- and under-approximating abstractions into a single model. Over the years, three families of such modeling formalisms have emerged, represented by (1) Kripke Modal Transition Systems (KMTSs), with restrictions on necessary and possible behaviors; (2) Mixed Transition Systems (MixTSs), which relax these restrictions; and (3) Generalized Kripke MTSs (GKMTSs), with hypertransitions. In this paper, we investigate these formalisms based on two fundamental ways of using partial transition systems (PTSs): as objects for abstracting concrete systems (thus, a PTS is semantically consistent if it abstracts at least one concrete system) and as models for checking temporal properties (thus, a PTS is logically consistent if it gives a consistent interpretation to all temporal logic formulas). We study the connection between semantic and logical consistency of PTSs, compare the three families w.r.t. their expressive power (i.e., what can be modeled and what abstractions can be captured using them), and discuss the analysis power of these formalisms, i.e., the ...
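The KMTS restriction on necessary and possible behaviors can be stated in a few lines. The encoding below is an assumed illustration (not from the paper): a model carries "must" transitions (necessary behavior) and "may" transitions (possible behavior), and the KMTS requirement is that every must-transition is also a may-transition, which MixTSs relax:

```python
# Toy sketch (assumed encoding) of the KMTS restriction: every
# necessary (must) transition is also a possible (may) transition.
# A MixTS drops exactly this requirement.

def is_kmts(states, must, may):
    """Check the KMTS well-formedness condition must <= may."""
    endpoints_ok = all(s in states and t in states for (s, t) in must | may)
    return endpoints_ok and must <= may  # set inclusion

states = {"s0", "s1", "s2"}
must = {("s0", "s1")}
may = {("s0", "s1"), ("s0", "s2")}
print(is_kmts(states, must, may))                      # True

# Removing the must-transition from "may" violates the restriction.
print(is_kmts(states, must, may - {("s0", "s1")}))     # False
```

This only shows the syntactic restriction distinguishing the families; the paper's semantic- and logical-consistency questions are about which concrete systems (if any) such a model abstracts.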
Thorough Checking Revisited
Abstract:
Recent years have seen a proliferation of 3-valued models for capturing abstractions of systems, since these enable verifying both universal and existential properties. Reasoning about such systems is either inexpensive and imprecise (compositional checking), or expensive and precise (thorough checking). In this paper, we prove that thorough and compositional checks for temporal formulas in their disjunctive forms coincide, which leads to an effective procedure for thorough checking of a variety of abstract models and the entire µ-calculus.
unknown title
Abstract:
A recent survey of 29 software projects of various size at Hewlett-Packard (“IEEE Software” 10:5 [Sept/Oct ’03], pp. 78–85) found that on average programmers produced 26 lines of code each day. A separate informal survey asked the question “How many lines of code can you write and be confident in their correctness?” The average answer was seven. A humorous yet not-overly-pessimistic view suggests that most programmers create at least one bug every day. While the average number of lines written by programmers in a day has increased slightly in the last 20 years, it is unlikely that the number of correct lines of code that a programmer can write with confidence is growing.

According to a 2002 study by the National Institute of Standards and Technology (NIST), software errors cost the U.S. economy an estimated $60 billion annually, or about 0.6 percent of the GDP. This should place research aimed at improving the quality of software near the top of the agenda for computer science institutions. I feel that tools for program verification, program understanding, and reverse engineering are among the most pressing needs in our field.

In my thesis, I addressed what I believe to be the most complex aspect of software, and one that has resisted many automatic methods of quality control: linked data structures. The subject of my thesis is shape analysis, a static analysis that establishes properties of programs that perform destructive manipulation of heap-allocated linked data structures.

Background

Program verification is the process of applying formal methods to establish that a program satisfies a specification. In a ...