Results 1 - 10 of 101
A type and effect system for deterministic parallel Java
In Proc. Intl. Conf. on Object-Oriented Programming, Systems, Languages, and Applications, 2009
"... Today’s shared-memory parallel programming models are complex and error-prone. While many parallel programs are intended to be deterministic, unanticipated thread interleavings can lead to subtle bugs and nondeterministic semantics. In this paper, we demonstrate that a practical type and effect syst ..."
Abstract
-
Cited by 122 (13 self)
- Add to MetaCart
(Show Context)
Today’s shared-memory parallel programming models are complex and error-prone. While many parallel programs are intended to be deterministic, unanticipated thread interleavings can lead to subtle bugs and nondeterministic semantics. In this paper, we demonstrate that a practical type and effect system can simplify parallel programming by guaranteeing deterministic semantics with modular, compile-time type checking even in a rich, concurrent object-oriented language such as Java. We describe an object-oriented type and effect system that provides several new capabilities over previous systems for expressing deterministic parallel algorithms. We also describe a language called Deterministic Parallel Java (DPJ) that incorporates the new type system features, and we show that a core subset of DPJ is sound. We describe an experimental validation showing that DPJ can express a wide range of realistic parallel programs; that the new type system features are useful for such programs; and that the parallel programs exhibit good performance gains (coming close to or beating equivalent, nondeterministic multithreaded programs where those are available).
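As an illustration of the determinism guarantee, here is a minimal sketch in ordinary Scala (not DPJ's own region and effect syntax): the two tasks below write disjoint halves of an array, which is exactly the kind of non-interference that DPJ's effect annotations let the compiler verify at compile time.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    // Each task writes a disjoint index range, so every interleaving yields the
    // same final array. DPJ would express the two write sets with region and
    // `writes` effect annotations and reject the program if they overlapped.
    def fillHalves(a: Array[Int]): Unit = {
      val mid   = a.length / 2
      val left  = Future { for (i <- 0 until mid)        a(i) = i }  // writes a[0, mid)
      val right = Future { for (i <- mid until a.length) a(i) = i }  // writes a[mid, len)
      Await.result(Future.sequence(Seq(left, right)), Duration.Inf)
    }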
Complete Functional Synthesis
"... Synthesis of program fragments from specifications can make programs easier to write and easier to reason about. To integrate synthesis into programming languages, synthesis algorithms should behave in a predictable way—they should succeed for a well-defined class of specifications. They should also ..."
Abstract
-
Cited by 48 (18 self)
- Add to MetaCart
Synthesis of program fragments from specifications can make programs easier to write and easier to reason about. To integrate synthesis into programming languages, synthesis algorithms should behave in a predictable way—they should succeed for a well-defined class of specifications. They should also support unbounded data types such as numbers and data structures. We propose to generalize decision procedures into predictable and complete synthesis procedures. Such procedures are guaranteed to find code that satisfies the specification if such code exists. Moreover, we identify conditions under which synthesis will statically decide whether the solution is guaranteed to exist, and whether it is unique. We demonstrate our approach by starting from decision procedures for linear arithmetic and data structures and transforming them into synthesis procedures. We establish results on the size and the efficiency of the synthesized code. We show that such procedures are useful as a language extension with implicit value definitions, and we show how to extend a compiler to support such definitions. Our constructs provide the benefits of synthesis to programmers, without requiring them to learn new concepts or give up a deterministic execution model.
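A worked example of the kind of implicit value definition the paper targets (a sketch; the names and surface syntax here are illustrative rather than the paper's): the programmer writes only a specification over linear integer arithmetic, and a synthesis procedure derived from the corresponding decision procedure emits deterministic code satisfying it.

    // Specification the programmer would write (illustrative `choose`-style form):
    //   val (h, m, s) = choose((h: Int, m: Int, s: Int) =>
    //     h * 3600 + m * 60 + s == total && 0 <= m && m < 60 && 0 <= s && s < 60)
    //
    // Code a linear-arithmetic synthesis procedure can emit for it:
    def toHMS(total: Int): (Int, Int, Int) = {
      require(total >= 0)                 // keeps integer division well-behaved here
      val h = total / 3600
      val m = (total % 3600) / 60
      val s = total % 60
      (h, m, s)                           // satisfies the specification above
    }

For instance, toHMS(3725) returns (1, 2, 5), and 1 * 3600 + 2 * 60 + 5 == 3725.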
Implicit dynamic frames: Combining dynamic frames and separation logic (soundness proof)
2009
"... Abstract. The dynamic frames approach has proven to be a powerful formalism for specifying and verifying object-oriented programs. However, it requires writing and checking many frame annotations. In this paper, we propose a variant of the dynamic frames approach that eliminates the need to explicit ..."
Abstract
-
Cited by 41 (5 self)
- Add to MetaCart
(Show Context)
The dynamic frames approach has proven to be a powerful formalism for specifying and verifying object-oriented programs. However, it requires writing and checking many frame annotations. In this paper, we propose a variant of the dynamic frames approach that eliminates the need to explicitly write and check frame annotations. Reminiscent of separation logic’s frame rule, programmers write access assertions inside pre- and postconditions instead of writing frame annotations. From the precondition, one can then infer an upper bound on the set of locations writable or readable by the corresponding method. We implemented our approach in a tool, and used it to automatically verify several challenging programs, including subject-observer, iterator and linked list.
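A small sketch of the idea (the contract below is only suggestive of the paper's access assertions, written as comments over plain Scala): because the precondition lists exactly the locations the method may touch, the writable frame is inferred from it rather than declared separately.

    class Counter { var x: Int = 0 }

    //   requires acc(c.x)                        // permission to read and write c.x
    //   ensures  acc(c.x) && c.x == old(c.x) + 1
    // From the precondition alone, a caller may conclude that no location other
    // than c.x can be modified by the call: the frame is implicit in `acc`.
    def inc(c: Counter): Unit = {
      c.x = c.x + 1
    }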
Program Verification using Templates over Predicate Abstraction
"... We address the problem of automatically generating invariants with quantified and boolean structure for proving the validity of given assertions or generating pre-conditions under which the assertions are valid. We present three novel algorithms, having different strengths, that combine template and ..."
Abstract
-
Cited by 37 (3 self)
- Add to MetaCart
(Show Context)
We address the problem of automatically generating invariants with quantified and boolean structure for proving the validity of given assertions or generating pre-conditions under which the assertions are valid. We present three novel algorithms, having different strengths, that combine template- and predicate-abstraction-based formalisms to discover the required sophisticated program invariants using SMT solvers. Two of these algorithms use an iterative approach to compute fixed-points (one computes a least fixed-point and the other computes a greatest fixed-point), while the third algorithm uses a constraint-based approach to encode the fixed-point. The key idea in all these algorithms is to reduce the problem of invariant discovery to that of finding optimal solutions for unknowns (over conjunctions of some predicates from a given set) in a template formula such that ...
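To make the template idea concrete (a hedged illustration; the notation and the predicate set are mine, not the paper's): for a loop such as `for (i = 0; i < n; i++) a[i] = 0;`, the user supplies a quantified template whose holes must be filled with conjunctions of given predicates, and the algorithms search for a filling that the SMT solver certifies as an inductive invariant strong enough for the assertion at loop exit.

    \text{Template:}\quad \forall j.\ \rho_1(j) \Rightarrow \rho_2(j),
    \qquad \rho_1, \rho_2 \in \text{conjunctions of predicates from } \{\, 0 \le j,\ j < i,\ j < n,\ a[j] = 0 \,\}

    \text{One solution for the zeroing loop:}\quad
    \rho_1(j) \equiv 0 \le j \wedge j < i, \qquad \rho_2(j) \equiv a[j] = 0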
Decision procedures for algebraic data types with abstractions
In POPL’10, 2010
"... Abstract We describe a family of decision procedures that extend the decision procedure for quantifier-free constraints on recursive algebraic data types (term algebras) to support recursive abstraction functions. Our abstraction functions are catamorphisms (term algebra homomorphisms) mapping alge ..."
Abstract
-
Cited by 37 (15 self)
- Add to MetaCart
(Show Context)
We describe a family of decision procedures that extend the decision procedure for quantifier-free constraints on recursive algebraic data types (term algebras) to support recursive abstraction functions. Our abstraction functions are catamorphisms (term algebra homomorphisms) mapping algebraic data type values into values in other decidable theories (e.g. sets, multisets, lists, integers, booleans). Each instance of our decision procedure family is sound; we identify a widely applicable many-to-one condition on abstraction functions that implies completeness. Complete instances of our decision procedure can verify correctness statements such as the following: 1) a functional data structure implementation satisfies a recursively specified invariant, 2) such a data structure conforms to a contract given in terms of sets, multisets, lists, sizes, or heights, 3) a transformation of a formula (or lambda term) abstract syntax tree changes the set of free variables in the specified way.
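A minimal sketch of a catamorphism in the sense of this abstract (plain Scala, in the same functional style as the paper's examples): `content` abstracts a binary search tree into a value in the theory of sets, and the commented equation is the kind of verification condition a complete instance of the procedure can discharge.

    sealed trait Tree
    case object Leaf extends Tree
    case class Node(left: Tree, value: Int, right: Tree) extends Tree

    // Catamorphism: maps an algebraic data type value into the theory of sets.
    def content(t: Tree): Set[Int] = t match {
      case Leaf          => Set.empty[Int]
      case Node(l, v, r) => content(l) ++ Set(v) ++ content(r)
    }

    def insert(t: Tree, x: Int): Tree = t match {
      case Leaf          => Node(Leaf, x, Leaf)
      case Node(l, v, r) =>
        if (x < v) Node(insert(l, x), v, r)
        else if (x > v) Node(l, v, insert(r, x))
        else t
    }

    // Verification condition stated over the abstraction:
    //   content(insert(t, x)) == content(t) + x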
An integrated proof language for imperative programs
In PLDI’09
"... We present an integrated proof language for guiding the actions of multiple reasoning systems as they work together to prove complex correctness properties of imperative programs. The language operates in the context of a program verification system that uses multiple reasoning systems to discharge ..."
Abstract
-
Cited by 28 (5 self)
- Add to MetaCart
(Show Context)
We present an integrated proof language for guiding the actions of multiple reasoning systems as they work together to prove complex correctness properties of imperative programs. The language operates in the context of a program verification system that uses multiple reasoning systems to discharge generated proof obligations. It is designed to 1) enable developers to resolve key choice points in complex program correctness proofs, thereby enabling automated reasoning systems to successfully prove the desired correctness properties; 2) allow developers to identify key lemmas for the reasoning systems to prove, thereby guiding the reasoning systems to find an effective proof decomposition; 3) enable multiple reasoning systems to work together productively to prove a single correctness property by providing a mechanism that developers can use to divide the property into lemmas, each of which is suitable for ...
Multi-Prover Verification of Floating-Point Programs
"... Abstract. In the context of deductive program verification, supporting floatingpoint computations is tricky. We propose an expressive language to formally specify behavioral properties of such programs. We give a first-order axiomatization of floating-point operations which allows to reduce verifica ..."
Abstract
-
Cited by 28 (5 self)
- Add to MetaCart
(Show Context)
In the context of deductive program verification, supporting floating-point computations is tricky. We propose an expressive language to formally specify behavioral properties of such programs. We give a first-order axiomatization of floating-point operations which allows verification to be reduced to checking the validity of logic formulas, in a form suitable for a large class of provers including SMT solvers and interactive proof assistants. Experiments using the Frama-C platform for static analysis of C code are presented.
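For flavor, one common shape such a first-order axiomatization takes (a hedged sketch; the paper's actual axioms, formats, and rounding modes are richer): each floating-point operation is the rounding of the exact real result, and rounding of a normalized value carries a bounded relative error.

    \forall x\, y.\ \mathit{add}(x, y) = \mathit{round}(x + y)
    \qquad\qquad
    \forall z.\ |\mathit{round}(z) - z| \le \varepsilon\,|z| \quad \text{(for normalized } z \text{, no overflow)}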
Commutativity Analysis for Software Parallelization: Letting Program Transformations See the Big Picture
In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems, 2009
"... Extracting performance from many-core architectures requires software engineers to create multi-threaded applications, which significantly complicates the already daunting task of software development. One solution to this problem is automatic compile-time parallelization, which can ease the burden ..."
Abstract
-
Cited by 25 (0 self)
- Add to MetaCart
(Show Context)
Extracting performance from many-core architectures requires software engineers to create multi-threaded applications, which significantly complicates the already daunting task of software development. One solution to this problem is automatic compile-time parallelization, which can ease the burden on software developers in many situations. Clearly, automatic parallelization in its present form is not suitable for many application domains, and new compiler analyses are needed to address its shortcomings. In this paper, we present one such analysis: a new approach for detecting commutative functions. Commutative functions are sections of code that can be executed in any order without affecting the outcome of the application, e.g., inserting elements into a set. Previous research on this topic had one significant limitation: commutative functions were required to produce identical memory layouts. This prevented previous techniques from detecting functions like malloc, which may return different pointers depending on the order in which it is called, even though the differing results do not affect the overall output of the application. Our new commutativity analysis correctly identifies these situations to better facilitate automatic parallelization. We demonstrate that this analysis can automatically extract significant amounts of parallelism from many applications, and where it is ineffective it can provide software developers with a useful list of functions that may be commutative, given semantic program changes that cannot be automated.
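A tiny Scala sketch of the relaxed notion of commutativity argued for here: the two call orders below allocate objects with different identities (different "pointers"), yet the observable result is the same, so the calls commute even though the internal memory layouts differ.

    import scala.collection.mutable

    final class Box(val value: Int)             // each `new Box` is a fresh allocation

    def run(order: Seq[Int]): Set[Int] = {
      val store = mutable.ListBuffer.empty[Box]
      order.foreach(v => store += new Box(v))   // allocation order differs per call
      store.map(_.value).toSet                  // observable output: just the values
    }

    // Usage: run(Seq(1, 2, 3)) == run(Seq(3, 2, 1)) holds, even though the Box
    // instances (the "pointers") differ between the two runs.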
Type-based data structure verification
In PLDI, 2009
"... We present a refinement type-based approach for the static verification of complex data structure invariants. Our approach is based on the observation that complex data structures are typically fashioned from two elements: recursion (e.g., lists and trees), and maps (e.g., arrays and hash tables). W ..."
Abstract
-
Cited by 21 (7 self)
- Add to MetaCart
(Show Context)
We present a refinement type-based approach for the static verification of complex data structure invariants. Our approach is based on the observation that complex data structures are typically fashioned from two elements: recursion (e.g., lists and trees), and maps (e.g., arrays and hash tables). We introduce two novel type-based mechanisms targeted towards these elements: recursive refinements and polymorphic refinements. These mechanisms automate the challenging work of generalizing and instantiating rich universal invariants by piggybacking simple refinement predicates on top of types, and carefully dividing the labor of analysis between the type system and an SMT solver [6]. Further, the mechanisms permit the use of the abstract interpretation framework of liquid type inference [22] to automatically synthesize complex invariants from simple logical qualifiers, thereby almost completely automating the verification. We have implemented our approach in DSOLVE, which uses liquid types to verify OCAML programs. We present experiments that show that our type-based approach reduces the manual annotation required to verify complex properties like sortedness, balancedness, binary-search-ordering, and acyclicity by more than an order of magnitude.
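An informal sketch of a recursive refinement (the tool itself works on OCaml; the type below is pseudo-notation, and the Scala function merely expresses the same invariant dynamically): the refinement attached to the tail is instantiated recursively, so a simple per-element predicate generalizes to sortedness of the whole list.

    // Pseudo-notation for a recursive refinement (not DSOLVE's exact syntax):
    //   sortedList = Nil | Cons(hd: Int, tl: { v : sortedList | every element of v >= hd })
    //
    // The same invariant, checked dynamically in plain Scala:
    def isSorted(xs: List[Int]): Boolean = xs match {
      case a :: b :: rest => a <= b && isSorted(b :: rest)
      case _              => true
    }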
Automating Induction with an SMT Solver
2011
"... Mechanical proof assistants have always had support for inductive proofs. Sometimes an alternative to proof assistants, satisfiability modulo theories (SMT) solvers bring the hope of a higher degree of automation. However, SMT solvers do not natively support induction, so inductive proofs require s ..."
Abstract
-
Cited by 21 (6 self)
- Add to MetaCart
Mechanical proof assistants have always had support for inductive proofs. Sometimes an alternative to proof assistants, satisfiability modulo theories (SMT) solvers bring the hope of a higher degree of automation. However, SMT solvers do not natively support induction, so inductive proofs require some encoding into the SMT solver’s input. This paper shows a surprisingly simple tactic—a rewriting strategy and a heuristic for when to apply it—that has proven useful in verifying simple inductive theorems, like those that can occur during program verification. The paper describes the tactic and its implementation in a program verifier, and reports on the positive experience with using the tactic.
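The rewriting strategy can be summarized as follows (a hedged sketch of the shape of the transformation, not the verifier's exact encoding): a goal the heuristic flags as needing induction is rewritten so that the induction hypothesis is handed to the SMT solver explicitly, after which no native induction support is required.

    \forall n.\ P(n)
    \quad\rightsquigarrow\quad
    \forall n.\ \bigl( \forall k.\ k \prec n \Rightarrow P(k) \bigr) \Rightarrow P(n),
    \qquad \prec \text{ a well-founded order (e.g. } < \text{ on } \mathbb{N} \text{)}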