Results 1 – 10 of 33
Modular Data Structure Verification
EECS Department, Massachusetts Institute of Technology, 2007
"... This dissertation describes an approach for automatically verifying data structures, focusing on techniques for automatically proving formulas that arise in such verification. I have implemented this approach with my colleagues in a verification system called Jahob. Jahob verifies properties of Java ..."
Abstract

Cited by 43 (20 self)
 Add to MetaCart
This dissertation describes an approach for automatically verifying data structures, focusing on techniques for automatically proving formulas that arise in such verification. I have implemented this approach with my colleagues in a verification system called Jahob. Jahob verifies properties of Java programs with dynamically allocated data structures. Developers write Jahob specifications in classical higher-order logic (HOL); Jahob reduces the verification problem to deciding the validity of HOL formulas. I present a new method for proving HOL formulas by combining automated reasoning techniques. My method consists of 1) splitting formulas into individual HOL conjuncts, 2) soundly approximating each HOL conjunct with a formula in a more tractable fragment, and 3) proving the resulting approximation using a decision procedure or a theorem prover. I present three concrete logics; for each logic I show how to use it to approximate HOL formulas, and how to decide the validity of formulas in this logic. First, I present an approximation of HOL based on a translation to first-order logic, which enables the use of existing resolution-based theorem provers. Second, I present an approximation of HOL based on field constraint analysis, a new technique that enables
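As a rough illustration of the split-approximate-dispatch pipeline described above (and not Jahob's actual code or formula language), the following Scala sketch splits a toy formula into top-level conjuncts and routes each conjunct to the only back end that can handle it; all type and function names here are invented for the example.

```scala
// A minimal sketch (not Jahob code) of the split-and-dispatch idea: a toy formula
// AST is split into top-level conjuncts, and each conjunct is handed to whichever
// toy "prover" accepts it. All names here are illustrative.
sealed trait Formula
case class And(l: Formula, r: Formula) extends Formula
case class Leq(x: Int, y: Int) extends Formula                  // stands in for an arithmetic conjunct
case class SubsetEq(a: Set[Int], b: Set[Int]) extends Formula   // stands in for a set-logic conjunct

object SplitAndDispatch {
  // 1) split a formula into its top-level conjuncts
  def conjuncts(f: Formula): List[Formula] = f match {
    case And(l, r) => conjuncts(l) ++ conjuncts(r)
    case other     => List(other)
  }

  // 2)+3) route each conjunct to a back end that can decide it
  def proveConjunct(f: Formula): Boolean = f match {
    case Leq(x, y)      => x <= y           // "arithmetic decision procedure"
    case SubsetEq(a, b) => a.subsetOf(b)    // "set-logic decision procedure"
    case And(_, _)      => conjuncts(f).forall(proveConjunct)
  }

  def prove(f: Formula): Boolean = conjuncts(f).forall(proveConjunct)

  def main(args: Array[String]): Unit = {
    val goal = And(Leq(1, 2), SubsetEq(Set(1), Set(1, 2)))
    println(s"goal proved: ${prove(goal)}")
  }
}
```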
Decision procedures for algebraic data types with abstractions.
In POPL’10, 2010
"... Abstract We describe a family of decision procedures that extend the decision procedure for quantifierfree constraints on recursive algebraic data types (term algebras) to support recursive abstraction functions. Our abstraction functions are catamorphisms (term algebra homomorphisms) mapping alge ..."
Abstract

Cited by 37 (15 self)
 Add to MetaCart
(Show Context)
We describe a family of decision procedures that extend the decision procedure for quantifier-free constraints on recursive algebraic data types (term algebras) to support recursive abstraction functions. Our abstraction functions are catamorphisms (term algebra homomorphisms) mapping algebraic data type values into values in other decidable theories (e.g. sets, multisets, lists, integers, booleans). Each instance of our decision procedure family is sound; we identify a widely applicable many-to-one condition on abstraction functions that implies completeness. Complete instances of our decision procedure include the following correctness statements: 1) a functional data structure implementation satisfies a recursively specified invariant, 2) such a data structure conforms to a contract given in terms of sets, multisets, lists, sizes, or heights, 3) a transformation of a formula (or lambda term) abstract syntax tree changes the set of free variables in the specified way.
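A minimal Scala sketch of a catamorphism in the paper's sense, with invented names: a fold maps a tree data type into a set of integers, and a contract on insertion is phrased over that abstraction. The paper decides such statements statically; the check below is merely run on one sample input.

```scala
// Sketch of an abstraction function (catamorphism): a fold from an algebraic data
// type (a binary search tree) into a value of another decidable theory (a set).
sealed trait Tree
case object Leaf extends Tree
case class Node(left: Tree, value: Int, right: Tree) extends Tree

object Catamorphism {
  // abstraction function: Tree -> Set[Int]
  def content(t: Tree): Set[Int] = t match {
    case Leaf          => Set.empty
    case Node(l, v, r) => content(l) ++ Set(v) ++ content(r)
  }

  def insert(t: Tree, x: Int): Tree = t match {
    case Leaf                   => Node(Leaf, x, Leaf)
    case Node(l, v, r) if x < v => Node(insert(l, x), v, r)
    case Node(l, v, r) if x > v => Node(l, v, insert(r, x))
    case node                   => node // x already present
  }

  def main(args: Array[String]): Unit = {
    val t  = insert(insert(insert(Leaf, 5), 2), 8)
    // contract phrased over the abstraction: content(insert(t, x)) == content(t) + x
    val ok = content(insert(t, 3)) == content(t) + 3
    println(s"contract holds on this input: $ok")
  }
}
```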
An integrated proof language for imperative programs
In PLDI’09, 2009
"... We present an integrated proof language for guiding the actions of multiple reasoning systems as they work together to prove complex correctness properties of imperative programs. The language operates in the context of a program verification system that uses multiple reasoning systems to discharge ..."
Abstract

Cited by 28 (5 self)
 Add to MetaCart
(Show Context)
We present an integrated proof language for guiding the actions of multiple reasoning systems as they work together to prove complex correctness properties of imperative programs. The language operates in the context of a program verification system that uses multiple reasoning systems to discharge generated proof obligations. It is designed to 1) enable developers to resolve key choice points in complex program correctness proofs, thereby enabling automated reasoning systems to successfully prove the desired correctness properties; 2) allow developers to identify key lemmas for the reasoning systems to prove, thereby guiding the reasoning systems to find an effective proof decomposition; 3) enable multiple reasoning systems to work together productively to prove a single correctness property by providing a mechanism that developers can use to divide the property into lemmas, each of which is suitable for
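Purely as an illustration of splitting one property into lemmas that different reasoners might discharge (this is not the paper's proof-language syntax), the Scala sketch below states an arithmetic lemma and a set-theoretic lemma about list insertion and checks their conjunction on a sample input.

```scala
// Illustrative only: one correctness property decomposed into two lemmas, mirroring
// how a developer might hand each lemma to a different reasoning system.
object LemmaSplit {
  // lemma 1: inserting one element grows the size by exactly one (arithmetic fact)
  def lemmaSize(xs: List[Int], x: Int): Boolean =
    (x :: xs).size == xs.size + 1

  // lemma 2: insertion adds exactly x to the set of elements (set-theoretic fact)
  def lemmaContent(xs: List[Int], x: Int): Boolean =
    (x :: xs).toSet == xs.toSet + x

  // the overall property is the conjunction of the two lemmas
  def property(xs: List[Int], x: Int): Boolean =
    lemmaSize(xs, x) && lemmaContent(xs, x)

  def main(args: Array[String]): Unit = {
    assert(property(List(1, 2, 3), 4))
    println("both lemmas hold on the sample input")
  }
}
```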
Practical API Protocol Checking with Access Permissions
"... Reusable APIs often define usage protocols. We previously developed a sound modular type system that checks compliance with typestatebased protocols while affording a great deal of aliasing flexibility. We also developed Plural, a prototype tool that embodies our approach as an automated static an ..."
Abstract

Cited by 27 (11 self)
 Add to MetaCart
(Show Context)
Reusable APIs often define usage protocols. We previously developed a sound modular type system that checks compliance with typestate-based protocols while affording a great deal of aliasing flexibility. We also developed Plural, a prototype tool that embodies our approach as an automated static analysis and includes several extensions we found useful in practice. This paper evaluates our approach along the following dimensions: (1) We report on experience in specifying relevant usage rules for a large Java standard API with our approach. We also specify several other Java APIs and identify recurring patterns. (2) We summarize two case studies in verifying third-party open-source code bases with few false positives using our tool. We discuss how tool shortcomings can be addressed either with code refactorings or with extensions to the tool itself. These results indicate that our approach can be used to specify and enforce real API protocols in practice.
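The kind of usage rule being checked can be suggested with a small Scala sketch, using invented names: an "open before read, no read after close" protocol enforced here by runtime checks. Plural itself verifies such typestate protocols statically, through access-permission annotations on Java APIs.

```scala
// Illustrative sketch of a typestate protocol: "closed" -> "open" -> "closed".
// Plural checks such rules statically; this toy class only shows the rule itself.
class Connection {
  private var state: String = "closed"

  def open(): Unit = {
    require(state == "closed", "protocol violation: open() only allowed in state closed")
    state = "open"
  }

  def read(): String = {
    require(state == "open", "protocol violation: read() requires state open")
    "data"
  }

  def close(): Unit = {
    require(state == "open", "protocol violation: close() requires state open")
    state = "closed"
  }
}

object ProtocolDemo {
  def main(args: Array[String]): Unit = {
    val c = new Connection
    c.open()
    println(c.read())
    c.close()
    // calling c.read() here would violate the protocol and fail the runtime check
  }
}
```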
Heap Analysis in the Presence of Collection Libraries
2007
"... Memory analysis techniques have become sophisticated enough to model, with a high degree of accuracy, the manipulation of simple memory structures (finite structures, single/double linked lists and trees). However, modern programming languages provide extensive library support including a wide range ..."
Abstract

Cited by 16 (10 self)
 Add to MetaCart
(Show Context)
Memory analysis techniques have become sophisticated enough to model, with a high degree of accuracy, the manipulation of simple memory structures (finite structures, singly/doubly linked lists, and trees). However, modern programming languages provide extensive library support, including a wide range of generic collection objects that make use of complex internal data structures. While these data structures ensure that the collections are efficient, often these representations cannot be effectively modeled by existing methods (either due to excessive analysis runtime or due to the inability to represent the required information). This paper presents a method to represent collections using an abstraction of their semantics. The construction of the abstract semantics for the collection objects is done in a manner that allows individual elements in the collections to be identified. Our construction also supports iterators over the collections and is able to model the position of the iterators with respect to the elements in the collection. By ordering the contents of the collection based on the iterator position, the model can represent a notion of progress when iteratively manipulating the contents of a collection. These features allow strong updates to the individual elements in the collection as well as strong updates over the collections themselves.
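A hypothetical Scala sketch of the iterator-position idea: as a collection is traversed, its contents are implicitly partitioned into visited and not-yet-visited elements, which is what justifies a strong update of each element in turn. The assertion spells out the progress invariant that the paper's abstraction tracks symbolically.

```scala
// Sketch only: iterator position partitions the collection's contents, enabling a
// strong update of each element as it is visited.
import scala.collection.mutable.ListBuffer

case class Record(var initialized: Boolean)

object IteratorProgress {
  def initializeAll(records: ListBuffer[Record]): Unit = {
    var visited = 0                      // abstract iterator position
    val it = records.iterator
    while (it.hasNext) {
      val r = it.next()
      r.initialized = true               // strong update of the current element
      visited += 1
      // progress invariant: the first `visited` elements are now initialized
      assert(records.take(visited).forall(_.initialized))
    }
  }

  def main(args: Array[String]): Unit = {
    val rs = ListBuffer(Record(false), Record(false), Record(false))
    initializeAll(rs)
    println(s"all initialized: ${rs.forall(_.initialized)}")
  }
}
```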
Program analysis for overlaid data structures
In 23rd CAV, Springer LNCS 6808, 2011
"... Abstract. We call a data structure overlaid, if a node in the structure includes links for multiple data structures and these links are intended to be used at the same time. In this paper, we present a static program analysis for overlaid data structures. Our analysis implements two main ideas. The ..."
Abstract

Cited by 16 (0 self)
 Add to MetaCart
(Show Context)
We call a data structure overlaid if a node in the structure includes links for multiple data structures and these links are intended to be used at the same time. In this paper, we present a static program analysis for overlaid data structures. Our analysis implements two main ideas. The first is to run multiple sub-analyses that track information about non-overlaid data structures, such as lists. Each sub-analysis infers shape properties of only one component of an overlaid data structure, but the results of these sub-analyses are later combined to derive the desired safety properties about the whole overlaid data structure. The second idea is to control the communication among the sub-analyses using ghost states and ghost instructions. The purpose of this control is to achieve a high level of efficiency by allowing only necessary information to be transferred among sub-analyses, and at as few program points as possible. Our analysis has been successfully applied to prove the memory safety of the Linux deadline I/O scheduler and AFS server.
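The shape of an overlaid structure can be sketched in a few lines of Scala (names invented; the Linux deadline I/O scheduler is the motivating example from the paper): each node carries both list links and tree links, and the two overlays share the same nodes. In the analysis, one sub-analysis would track only the list links and another only the tree links.

```scala
// Sketch of an overlaid node: one object participates in a FIFO list (via `next`)
// and a sorted tree (via `left`/`right`) at the same time.
class Request(val sector: Long) {
  var next: Request = null    // list overlay (submission order)
  var left: Request = null    // tree overlay (sorted by sector)
  var right: Request = null
}

object OverlaidDemo {
  def insertTree(root: Request, r: Request): Request =
    if (root == null) r
    else {
      if (r.sector < root.sector) root.left = insertTree(root.left, r)
      else root.right = insertTree(root.right, r)
      root
    }

  def main(args: Array[String]): Unit = {
    val a = new Request(40); val b = new Request(10); val c = new Request(70)
    // list overlay: a -> b -> c (arrival order)
    a.next = b; b.next = c
    // tree overlay: sorted by sector, sharing the same nodes
    var root: Request = null
    for (r <- List(a, b, c)) root = insertTree(root, r)
    println(s"list head sector = ${a.sector}, tree root sector = ${root.sector}")
  }
}
```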
Checking Framework Interactions with Relationships
"... Abstract. Software frameworks impose constraints on how plugins may interact with them. Many of these constraints involve multiple objects, are temporal, and depend on runtime values. Additionally, they are difficult to specify because they are often extrinsic and may break behavioral subtyping. Thi ..."
Abstract

Cited by 13 (7 self)
 Add to MetaCart
(Show Context)
Software frameworks impose constraints on how plugins may interact with them. Many of these constraints involve multiple objects, are temporal, and depend on runtime values. Additionally, they are difficult to specify because they are often extrinsic and may break behavioral subtyping. This work introduces relationships as an abstraction for specifying framework constraints in FUSION (Framework Usage SpecificatIONs), and it presents a formal description and implementation of a static analysis to find constraint violations in plugin code. We define three variants of this analysis: one that is sound, one that is complete, and a pragmatic variant that balances these trade-offs. We prove soundness and completeness for the appropriate variants, and we show that the pragmatic variant can effectively check constraints from real-world programs.
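A toy Scala sketch, with invented names, of the kind of multi-object, value-dependent constraint that relationships capture: a callback may only be delivered to a listener currently registered with that widget. FUSION specifies such a registered-with relation declaratively and checks plugin code statically; the version below merely enforces the rule at run time.

```scala
// Sketch of a relationship-style framework constraint, enforced here with a
// runtime check rather than FUSION's static analysis.
class Listener(val name: String)

class Widget {
  private var registered = Set.empty[Listener]   // the "registered-with" relation

  def addListener(l: Listener): Unit    = registered += l
  def removeListener(l: Listener): Unit = registered -= l

  // constraint: (l, this) must be in the registered-with relation at delivery time
  def deliver(l: Listener, event: String): Unit = {
    require(registered.contains(l), "constraint violation: listener not registered with this widget")
    println(s"${l.name} received $event")
  }

  def fire(event: String): Unit = registered.foreach(deliver(_, event))
}

object RelationshipDemo {
  def main(args: Array[String]): Unit = {
    val w = new Widget
    val l = new Listener("plugin")
    w.addListener(l)
    w.fire("click")          // allowed: the relation holds
    w.removeListener(l)
    w.fire("click")          // delivers to no one; the relation is now empty
  }
}
```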
Trustworthy Numerical Computation in Scala
"... Modern computing has adopted the floating point type as a default way to describe computations with real numbers. Thanks to dedicated hardware support, such computations are efficient on modern architectures, even in double precision. However, rigorous reasoning about the resulting programs remains ..."
Abstract

Cited by 13 (4 self)
 Add to MetaCart
(Show Context)
Modern computing has adopted the floating-point type as a default way to describe computations with real numbers. Thanks to dedicated hardware support, such computations are efficient on modern architectures, even in double precision. However, rigorous reasoning about the resulting programs remains difficult. This is in part due to a large gap between the finite floating-point representation and the infinite-precision real-number semantics that serves as the developers’ mental model. Because programming languages do not provide support for estimating errors, some computations in practice are performed more and some less precisely than needed. We present a library solution for rigorous arithmetic computation. Our numerical data type library tracks a (double) floating-point value, but also a guaranteed upper bound on the error between this value and the ideal value that would be computed in the real-value semantics. Our implementation involves a set of linear approximations based on an extension of affine arithmetic. The derived approximations cover most of the standard mathematical operations, including trigonometric functions, and are more comprehensive than any publicly available ones. Moreover, while interval arithmetic rapidly yields overly pessimistic estimates, our approach remains precise for several computational tasks of interest. We evaluate the library on a number of examples from numerical analysis and physical simulations. We found it to be a useful tool for gaining confidence in the correctness of the computation.
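The core idea, a computed double paired with a guaranteed error bound, can be suggested with a much cruder sketch than the paper's affine-arithmetic library (all names below are invented, and input literals are treated as exact doubles): each operation propagates a conservative worst-case bound plus the rounding error of the newly computed value.

```scala
// Simplified sketch: carry a double together with a conservative bound on its
// distance from the ideal real-valued result, and propagate that bound.
case class ApproxDouble(value: Double, error: Double) {
  def +(that: ApproxDouble): ApproxDouble = {
    val v = value + that.value
    // errors of the operands plus the rounding error of this operation
    ApproxDouble(v, error + that.error + math.ulp(v))
  }

  def *(that: ApproxDouble): ApproxDouble = {
    val v = value * that.value
    // worst-case propagation |a|*db + |b|*da + da*db, plus rounding error
    val e = math.abs(value) * that.error + math.abs(that.value) * error +
            error * that.error + math.ulp(v)
    ApproxDouble(v, e)
  }
}

object ApproxDemo {
  def exact(x: Double): ApproxDouble = ApproxDouble(x, 0.0) // treat the literal as exact

  def main(args: Array[String]): Unit = {
    val x = exact(0.1) + exact(0.2)
    val y = x * exact(3.0)
    println(s"value = ${y.value}, guaranteed error bound = ${y.error}")
  }
}
```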
Verification of semantic commutativity conditions and inverse operations on linked data structures
2010
"... We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantical ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
We present a new technique for verifying commutativity conditions, which are logical formulas that characterize when operations commute. Because our technique reasons with the abstract state of verified linked data structure implementations, it can verify commuting operations that produce semantically equivalent (but not necessarily identical) data structure states in different execution orders. We have used this technique to verify sound and complete commutativity conditions for all pairs of operations on a collection of linked data structure implementations, including data structures that export a set interface (ListSet and HashSet) as well as data structures that export a map interface (AssociationList, HashTable, and ArrayList). This effort involved the specification and verification of 765 commutativity conditions. Many speculative parallel systems need to undo the effects of speculatively executed operations. Inverse operations, which undo these effects, are often more efficient than alternate approaches (such as saving and restoring data structure state). We present a new technique for verifying such inverse operations. We have specified and verified, for all of our linked data structure implementations, an inverse operation for every operation that changes the data structure state. Together, the commutativity conditions and inverse operations provide a key resource that language designers, developers of program analysis systems, and implementors of software systems can draw on to build languages, program analyses, and systems with strong correctness guarantees.
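A small Scala sketch, not taken from the paper's specifications, of what a commutativity condition and an inverse operation look like when phrased over abstract set state: add(x) and remove(y) commute exactly when x != y, and the inverse of add(x) removes x only if it was newly inserted.

```scala
// Sketch of a commutativity condition and an inverse operation over abstract
// set state, checked here exhaustively on a few sample values.
object CommutativityDemo {
  // add(x) and remove(y) commute on any abstract set state exactly when x != y
  def commuteCondition(x: Int, y: Int): Boolean = x != y

  def addThenRemove(s: Set[Int], x: Int, y: Int): Set[Int] = (s + x) - y
  def removeThenAdd(s: Set[Int], x: Int, y: Int): Set[Int] = (s - y) + x

  // inverse of add(x): remove x again, but only if it was newly inserted
  def inverseOfAdd(before: Set[Int], x: Int): Set[Int] => Set[Int] =
    if (before.contains(x)) identity else (s: Set[Int]) => s - x

  def main(args: Array[String]): Unit = {
    val s = Set(1, 2, 3)
    for (x <- 0 to 4; y <- 0 to 4) {
      val sameResult = addThenRemove(s, x, y) == removeThenAdd(s, x, y)
      // the condition predicts exactly when the two execution orders agree
      assert(commuteCondition(x, y) == sameResult)
    }
    // inverse check: applying the inverse of add(4) undoes the update
    val afterAdd = s + 4
    assert(inverseOfAdd(s, 4)(afterAdd) == s)
    println("commutativity condition and inverse verified on sample states")
  }
}
```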