Results 1–10 of 119
Bebop: A Symbolic Model Checker for Boolean Programs
, 2000
Cited by 255 (24 self)
We present the design, implementation and empirical evaluation of Bebop, a symbolic model checker for boolean programs. Bebop represents control flow explicitly, and sets of states implicitly using BDDs. By harnessing the inherent modularity in procedural abstraction and exploiting the locality of variable scoping, Bebop is able to model check boolean programs with several thousand lines of code, hundreds of procedures, and several thousand variables in a few minutes.
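As a rough illustration of the set-at-a-time (symbolic) style this abstract describes, here is a minimal Python sketch of reachability over a tiny Boolean program's transition relation. Frozensets of states stand in for the BDDs a real checker like Bebop would use, and the two-statement program encoded below is hypothetical:

```python
# Minimal sketch of symbolic (set-at-a-time) reachability. Real BDD-based
# checkers such as Bebop represent these sets implicitly; here frozensets
# of (pc, b) states stand in for BDDs. The Boolean program encoded below
# is hypothetical.

def post(states, transitions):
    """Image of a whole state set under the transition relation."""
    return frozenset(t for s in states for t in transitions.get(s, ()))

def reachable(init, transitions):
    """Least fixpoint of post starting from the initial states."""
    seen = frozenset(init)
    frontier = seen
    while frontier:
        frontier = post(frontier, transitions) - seen
        seen |= frontier
    return seen

# States are (program counter, value of the single boolean variable b).
transitions = {
    (0, False): [(1, True)],   # pc 0: b := true
    (0, True):  [(1, True)],
    (1, True):  [(2, True)],   # pc 1: assert(b) succeeds when b holds
    (1, False): [(3, False)],  # pc 3 models the assertion failure
}

reach = reachable({(0, False)}, transitions)
print((3, False) in reach)  # prints False: failure location unreachable
```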
Learning assumptions for compositional verification
, 2003
Cited by 140 (20 self)
Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves nontrivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to a NASA system.
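The iterative assumption-refinement loop described above can be caricatured in a few lines of Python. This is a toy sketch under heavy simplification: components and the property are finite trace sets, and set operations stand in for the L* learner and the model-checking oracles of the actual framework; all names and traces are hypothetical.

```python
# Toy sketch of the counterexample-guided assume-guarantee loop described
# above. Components and the property are finite trace sets; a real tool
# learns a DFA assumption and discharges both premises with a model
# checker. All traces below are hypothetical.

def ag_loop(m1, m2, prop):
    """Search for an assumption A such that
       premise 1: every trace of m1 allowed by A satisfies prop, and
       premise 2: every trace of m2 is allowed by A.
       Returns ('holds', A) or ('fails', counterexample trace)."""
    assumption = set(m1) | set(m2)            # start maximally weak
    while True:
        bad = [t for t in m1 if t in assumption and t not in prop]
        if bad:                               # premise 1 fails:
            assumption -= set(bad)            # strengthen the assumption
            continue
        cex = [t for t in m2 if t not in assumption]
        if not cex:                           # premise 2 holds: done
            return ('holds', assumption)
        t = cex[0]
        if t in m1 and t not in prop:
            return ('fails', t)               # real counterexample
        assumption.add(t)                     # spurious: weaken instead

print(ag_loop({'ab', 'ac'}, {'ab'}, {'ab'}))  # assumption rules out 'ac'
```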
Theory of latency-insensitive design
 IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
, 2001
Cited by 138 (17 self)
The theory of latency-insensitive design is presented as the foundation of a new correct-by-construction methodology to design complex systems by assembling intellectual property components. Latency-insensitive designs are synchronous distributed systems and are realized by composing functional modules that exchange data on communication channels according to an appropriate protocol. The protocol works on the assumption that the modules are stallable, a weak condition to ask them to obey. The goal of the protocol is to guarantee that latency-insensitive designs composed of functionally correct modules behave correctly independently of the channel latencies. This allows us to increase the robustness of a design implementation because any delay variations of a channel can be “recovered” by changing the channel latency while the overall system functionality remains unaffected. As a consequence, an important application of the proposed theory is represented by the latency-insensitive methodology to design large digital integrated circuits by using deep submicrometer technologies.
Symbolic compositional verification by learning assumptions
 In CAV
, 2005
Cited by 68 (7 self)
The verification problem for a system consisting of components can be decomposed into simpler subproblems for the components using assume-guarantee reasoning. However, such compositional reasoning requires user guidance to identify appropriate assumptions for components. In this paper, we propose an automated solution for discovering assumptions based on the L* algorithm for active learning of regular languages. We present a symbolic implementation of the learning algorithm, and incorporate it in the model checker NuSMV. Our experiments demonstrate significant savings in the computational requirements of symbolic model checking.
Proving correctness of highly-concurrent linearisable objects
 In PPoPP
, 2006
Cited by 56 (7 self)
We study a family of implementations for linked lists using fine-grain synchronisation. This approach enables greater concurrency, but correctness is a greater challenge than for classical, coarse-grain synchronisation. Our examples are demonstrative of common design patterns such as lock coupling, optimistic, and lazy synchronisation. Although they are highly concurrent, we prove that they are linearisable, safe, and that they correctly implement a high-level abstraction. Our proofs illustrate the power and applicability of rely-guarantee reasoning, as well as some of its limitations. The examples of the paper establish a benchmark challenge for other reasoning techniques.
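The lock-coupling (hand-over-hand) pattern named above can be sketched for a sorted linked list as follows. This is an illustrative Python rendition, not code from the paper; the invariant is that a traversal holds at most two node locks and releases the trailing one as it advances.

```python
# Illustrative hand-over-hand (lock coupling) sorted linked list, in the
# spirit of the pattern named above; not code from the paper.
import threading

class Node:
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = nxt
        self.lock = threading.Lock()

class LockCoupledList:
    def __init__(self):
        # Sentinel head/tail keys bound every search, so the coupled
        # pred/curr pair always exists.
        self.head = Node(float('-inf'), Node(float('inf')))

    def _locate(self, key):
        """Return (pred, curr) with both locks held, pred.key < key <= curr.key."""
        pred = self.head
        pred.lock.acquire()
        curr = pred.next
        curr.lock.acquire()
        while curr.key < key:
            pred.lock.release()            # hand over hand: drop the tail lock
            pred, curr = curr, curr.next
            curr.lock.acquire()
        return pred, curr

    def insert(self, key):
        pred, curr = self._locate(key)
        added = curr.key != key            # keep the list duplicate-free
        if added:
            pred.next = Node(key, curr)
        curr.lock.release()
        pred.lock.release()
        return added

    def contains(self, key):
        pred, curr = self._locate(key)
        found = curr.key == key
        curr.lock.release()
        pred.lock.release()
        return found

lst = LockCoupledList()
print(lst.insert(2), lst.insert(1), lst.insert(2))  # True True False
print(lst.contains(1), lst.contains(3))             # True False
```

Because a node's lock is taken before its predecessor's is released, no concurrent thread can overtake a traversal and unlink a node out from under it, which is the key step in the linearisability argument.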
Software Model Checking
Cited by 52 (0 self)
Software model checking is the algorithmic analysis of programs to prove properties of their executions. It traces its roots to logic and theorem proving, both to provide the conceptual framework in which to formalize the fundamental questions and to provide algorithmic procedures for the analysis of logical questions. The undecidability theorem [Turing 1936] ruled out the possibility of a sound and complete algorithmic solution for any sufficiently powerful programming model, and even under restrictions (such as finite state spaces), the correctness problem remained computationally intractable. However, just because a problem is hard does not mean it never appears in practice. Also, just because the general problem is undecidable does not imply that specific instances of the problem will also be hard. As the complexity of software systems grew, so did the need for some reasoning mechanism about correct behavior. (While we focus here on analyzing the behavior of a program relative to given correctness specifications, the development of specification mechanisms happened in parallel, and merits a different survey.) Initially, the focus of program verification research was on manual reasoning, and
Decomposing Refinement Proofs using Assume-Guarantee Reasoning
, 2000
Cited by 32 (2 self)
Model-checking algorithms can be used to verify, formally and automatically, whether a low-level description of a design conforms with a high-level description. However, for designs with very large state spaces, prior to the application of an algorithm, the refinement-checking task needs to be decomposed into subtasks of manageable complexity. It is natural to decompose the task following the component structure of the design. However, an individual component often does not satisfy its requirements unless the component is put into the right context, which constrains the inputs to the component. Thus, in order to verify each component individually, we need to make assumptions about its inputs, which are provided by the other components of the design. This reasoning is circular: component A is verified under the assumption that context B behaves correctly, and symmetrically, B is verified assuming the correctness of A. The assume-guarantee paradigm provides a systematic theory and methodology for ensuring the soundness of this circular style of postulating and discharging assumptions in component-based reasoning. We give a tutorial introduction to the assume-guarantee paradigm for decomposing refinement-checking tasks. To illustrate the method, we step in detail through the formal verification of a processor pipeline against an instruction set architecture. In this example, the verification of a three-stage pipeline is broken up into three subtasks, one for each stage of the pipeline.
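The circular rule sketched above is typically made sound by induction over time: each component's guarantee at step t may assume the other's guarantee only up to step t-1, and symmetrically. A toy Python illustration of that induction on one concrete execution (all step functions and guarantees are hypothetical):

```python
# Toy illustration (hypothetical) of the induction over time behind the
# circular assume-guarantee rule: A's step-t output depends only on B's
# step-(t-1) output, and symmetrically, so the guarantees can be
# established step by step without a logical circle.

def circular_check(step_a, step_b, guar_a, guar_b, init, horizon):
    """Bounded check along one execution: each component computes its
    step-t output from the other's previous output, and we confirm
    both guarantees at every step."""
    a, b = init
    for t in range(horizon):
        a_next = step_a(b)      # A reads only B's previous output
        b_next = step_b(a)      # B reads only A's previous output
        a, b = a_next, b_next
        if not (guar_a(a) and guar_b(b)):
            return False, t     # first step at which a guarantee breaks
    return True, horizon

even = lambda x: x % 2 == 0
# A copies B's value; B adds 2 to A's value; both promise evenness.
print(circular_check(lambda b: b, lambda a: a + 2, even, even, (0, 0), 5))
```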
Formal verification of out-of-order execution using incremental flushing
 Computer Aided Verification (CAV'98), volume 1427 of Lecture Notes in Computer Science
, 1998
Cited by 27 (4 self)
We present a two-part approach for verifying out-of-order execution. First, the complexity of out-of-order issue and scheduling is handled by creating an in-order abstraction of the out-of-order execution core. Second, incremental flushing addresses the complexity difficulties encountered by automated abstraction functions on very deep pipelines. We illustrate the techniques on a model of a simple out-of-order processor core.
Latency Insensitive Protocols
 in Computer Aided Verification
, 1999
Cited by 26 (8 self)
The theory of latency-insensitive design is presented as the foundation of a new correct-by-construction methodology to design very large digital systems by assembling blocks of Intellectual Properties. Latency-insensitive designs are synchronous distributed systems and are realized by assembling functional modules exchanging data on communication channels according to an appropriate protocol. The goal of the protocol is to guarantee that latency-insensitive designs composed of functionally correct modules behave correctly independently of the wire delays. A latency-insensitive protocol is presented that makes use of relay stations buffering signals propagating along long wires. To guarantee correct behavior of the overall system, modules must satisfy weak conditions. The weakness of the conditions makes our method widely applicable.
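A relay station of the kind mentioned above behaves like a two-place buffer with back-pressure: it forwards one item per cycle while downstream is ready, absorbs the one item already in flight when downstream stalls, and only then propagates the stall upstream. A minimal Python sketch (illustrative, not the paper's exact protocol):

```python
# Illustrative relay station: a two-place buffer with back-pressure,
# in the spirit of the protocol above (not the paper's exact design).
from collections import deque

class RelayStation:
    def __init__(self):
        self.buf = deque()                 # at most two buffered items

    def cycle(self, data_in, stop_in):
        """One clock cycle. data_in is the upstream item (or None);
        stop_in is the downstream stall. Returns (data_out, stop_out)."""
        # Forward one item downstream unless stalled.
        data_out = self.buf.popleft() if (self.buf and not stop_in) else None
        # Absorb the item already in flight if a place is free.
        if data_in is not None and len(self.buf) < 2:
            self.buf.append(data_in)
        # Only once both places are full does the stall reach upstream.
        stop_out = len(self.buf) == 2
        return data_out, stop_out

rs = RelayStation()
print(rs.cycle(1, False))     # (None, False)  item 1 latched
print(rs.cycle(2, False))     # (1, False)     steady one-per-cycle flow
print(rs.cycle(3, True))      # (None, True)   stall absorbed, buffer full
print(rs.cycle(None, False))  # (2, False)     draining resumes
```

The second buffer place is what lets the station accept the item that was already on the long wire when the stall arrived, which is why inserting relay stations does not change the functional behavior of a stallable design.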