Results 1 - 10 of 75
The Murφ Verification System
- In Computer Aided Verification, 8th International Conference
, 1996
Abstract - Cited by 173 (8 self)
This is a brief overview of the Murφ verification system.
An Analysis of Bitstate Hashing
, 1995
Abstract - Cited by 91 (3 self)
The bitstate hashing, or supertrace, technique was introduced in 1987 as a method to increase the quality of verification by reachability analyses for applications that defeat analysis by traditional means because of their size. Since then, the technique has been included in many research verification tools, and was adopted in tools that are marketed commercially. It is therefore important that we understand well how and why the method works, what its limitations are, and how it compares with alternative methods over a broad range of problem sizes. The original
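The bitstate idea described above can be sketched in a few lines (a minimal illustration, not Holzmann's implementation: a single hash function and a fixed bit array, with all names hypothetical). Each state is mapped to one bit; a set bit means "probably visited", so hash collisions silently merge distinct states, which is exactly the probabilistic trade-off the paper analyzes.

```python
import hashlib

class BitstateTable:
    """Fixed-size bit array addressed by a hash of the state.

    Illustrative sketch only: one hash function, m bits.  Collisions
    make coverage probabilistic -- the defining property of supertrace.
    """

    def __init__(self, m_bits):
        self.m = m_bits
        self.bits = bytearray((m_bits + 7) // 8)

    def _index(self, state):
        digest = hashlib.sha256(state.encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.m

    def visit(self, state):
        """Mark the state visited; return True if it looked new."""
        i = self._index(state)
        byte, bit = divmod(i, 8)
        seen = self.bits[byte] & (1 << bit)
        self.bits[byte] |= 1 << bit
        return not seen
```

The memory cost is one bit per table slot regardless of state-descriptor size, which is why the technique scales to problems that defeat exhaustive storage.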
Parallelizing the Murφ verifier
- In Computer Aided Verification, 9th International Conference
, 1997
Abstract - Cited by 72 (0 self)
With the use of state and memory reduction techniques in verification by explicit state enumeration, runtime becomes a major limiting factor. We describe a parallel version of the explicit state enumeration verifier Murφ for distributed memory multiprocessors and networks of workstations that is based on the message passing paradigm. In experiments with three complex cache coherence protocols, parallel Murφ shows close to linear speedups, which are largely insensitive to communication latency and bandwidth. There is some slowdown with increasing communication overhead, for which a simple yet relatively accurate approximation formula is given. Techniques to reduce overhead and required bandwidth and to allow heterogeneity and dynamically changing load in the parallel machine are discussed, which we expect will allow good speedups when using conventional networks of workstations.
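The core of such message-passing parallelizations is a static partition of the state space: every process that generates a state sends it to the same owning processor, which keeps the only state-table entry for it. A one-function sketch of that partition rule (illustrative, not Murφ's actual code):

```python
import hashlib

def owner(state, n_procs):
    """Map a state to the rank of the process that owns it.

    Hypothetical sketch: any strong hash works, as long as every
    process computes the same owner for the same state, so duplicate
    detection for that state happens on exactly one node.
    """
    digest = hashlib.sha256(state.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_procs
```

Because ownership is determined locally from the state itself, successor states can be forwarded asynchronously, which is what makes the speedups largely insensitive to network latency.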
Distributed LTL Model-Checking in SPIN
, 2001
Abstract - Cited by 49 (12 self)
In this paper we propose a distributed algorithm for model-checking LTL. In particular, we explore the possibility of performing the nested depth-first search algorithm in distributed SPIN. A distributed version of the algorithm is presented, and its complexity is discussed.
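For readers unfamiliar with the sequential algorithm being distributed, here is a sketch of nested depth-first search for accepting-cycle detection in a Büchi product automaton (a simplified recursive version with a caller-supplied interface, not SPIN's API):

```python
def nested_dfs(initial, successors, accepting):
    """Return True iff an accepting cycle is reachable from `initial`.

    Sketch of the classic two-phase algorithm: an outer DFS visits
    states, and in postorder each accepting state seeds an inner DFS
    that looks for a cycle back to the seed.  The inner visited set
    is shared across seeds, the trick that keeps the whole thing
    linear in the size of the graph.
    """
    outer_seen, inner_seen = set(), set()

    def inner(seed, s):
        for t in successors(s):
            if t == seed:
                return True           # cycle through an accepting state
            if t not in inner_seen:
                inner_seen.add(t)
                if inner(seed, t):
                    return True
        return False

    def outer(s):
        outer_seen.add(s)
        for t in successors(s):
            if t not in outer_seen and outer(t):
                return True
        # Postorder: only now start the nested search from this state.
        return accepting(s) and inner(s, s)

    return outer(initial)
```

The postorder discipline is precisely what is hard to preserve when the search is distributed, which is the problem the paper addresses.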
Java Model Checking
, 2000
Abstract - Cited by 48 (1 self)
This paper presents initial results in model checking multi-threaded Java programs. Java programs are translated into the SAL (Symbolic Analysis Laboratory) intermediate language, which supports dynamic constructs such as object instantiations and thread call stacks. The SAL model checker then exhaustively checks the program description for deadlocks and assertion failures. Basic model checking optimizations that help curb the state explosion problem have been implemented. To deal with large Java programs in practice, however, supplementary program analysis tools must work in conjunction with the model checker to make verification manageable.
1 Introduction
The Java programming language is becoming increasingly popular for writing multi-threaded applications. In particular, many Internet servers are written in Java. Since Java has multi-threading built in a...
Improved Probabilistic Verification by Hash Compaction
- In Advanced Research Working Conference on Correct Hardware Design and Verification Methods
, 1995
Abstract - Cited by 47 (7 self)
We present and analyze a probabilistic method for verification by explicit state enumeration, which improves on the "hashcompact" method of Wolper and Leroy. The hashcompact method maintains a hash table in which compressed values for states instead of full state descriptors are stored. This method saves space but allows a non-zero probability of omitting states during verification, which may cause verification to miss design errors (i.e. verification may produce "false positives"). Our method improves on Wolper and Leroy's by calculating the hash and compressed values independently, and by using a specific hashing scheme that requires a low number of probes in the hash table. The result is a large reduction in the probability of omitting a state. Hence, we can achieve a given upper bound on the probability of omitting a state using fewer bits per compressed state. For example, we can reduce the number of bytes stored for each state from the eight recommended by Wolper and Leroy to o...
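The key point of the abstract, computing the slot sequence and the stored signature from independent hash functions, can be sketched as follows (an illustrative toy, not the paper's scheme: it uses linear probing where the paper advocates a low-probe scheme, and all names are hypothetical):

```python
import hashlib

def _h(state, salt):
    # Two independent hash streams derived from one strong hash.
    d = hashlib.sha256(salt + state.encode()).digest()
    return int.from_bytes(d[:8], "big")

class CompactedTable:
    """Hash table storing small compressed signatures, not full states.

    The probe sequence and the signature come from *independent*
    hashes, which is what drives down the omission probability
    compared with deriving both from the same value.
    """

    def __init__(self, n_slots, sig_bytes=5):
        self.n = n_slots
        self.mask = (1 << (8 * sig_bytes)) - 1
        self.slots = [None] * n_slots

    def insert(self, state):
        """Store the state's signature; return False if an equal
        signature was already on the probe path (possible omission)."""
        sig = _h(state, b"sig") & self.mask
        i = _h(state, b"probe") % self.n
        while self.slots[i] is not None:
            if self.slots[i] == sig:
                return False          # treated as "already visited"
            i = (i + 1) % self.n      # linear probing, for simplicity
        self.slots[i] = sig
        return True
```

A state is omitted only if a *different* state produced both the same signature and a matching probe position, which is far rarer than a collision in a single combined value.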
Heuristic Search
, 2011
Abstract - Cited by 46 (24 self)
Heuristic search is used to efficiently solve the single-node shortest path problem in weighted graphs. In practice, however, one is not only interested in finding a short path, but an optimal path, according to a certain cost notion. We propose an algebraic formalism that captures many cost notions, like typical Quality of Service attributes. We thus generalize A*, the popular heuristic search algorithm, for solving the optimal-path problem. The paper provides an answer to a fundamental question for AI search, namely to which general notions of cost heuristic search algorithms can be applied. We prove correctness of the algorithms and provide experimental results that validate the feasibility of the approach.
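A way to picture the generalization: take textbook A* and make the cost-combination operator a parameter. The sketch below is only the additive instance plus a pluggable `combine` (an illustrative interface, not the paper's formalism); for non-additive algebras such as min-bandwidth, the ordering and the admissibility condition on `h` must be chosen to match the algebra.

```python
import heapq

def astar(start, goal, successors, h, combine=lambda a, b: a + b, zero=0):
    """A* with an abstract cost-combination operator.

    `successors(s)` yields (next_state, edge_cost) pairs; `h` must be
    admissible for the chosen (combine, order) structure.  The default
    arguments give ordinary additive shortest-path A*.
    """
    frontier = [(h(start), zero, start)]
    best = {start: zero}
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if s == goal:
            return g
        if g > best.get(s, g):
            continue                      # stale queue entry
        for t, c in successors(s):
            g2 = combine(g, c)
            if t not in best or g2 < best[t]:
                best[t] = g2
                heapq.heappush(frontier, (combine(g2, h(t)), g2, t))
    return None                           # goal unreachable
```

The paper's question is exactly which algebraic properties of `combine` and the order keep this loop correct.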
Using Magnetic Disk instead of Main Memory in the Murφ Verifier
, 1998
Abstract - Cited by 43 (2 self)
In verification by explicit state enumeration a randomly accessed state table is maintained. In practice, the total main memory available for this state table is a major limiting factor in verification. We describe a version of the explicit state enumeration verifier Murφ that allows using magnetic disk instead of main memory for storing almost all of the state table. The algorithm avoids costly random accesses to disk and amortizes the cost of linearly reading the state table from disk over all states in a certain breadth-first level. The remaining runtime overhead for accessing the disk can be strongly reduced by combining the scheme with hash compaction. We show how to do this combination efficiently and analyze the resulting algorithm. In experiments with three complex cache coherence protocols, the new algorithm achieves memory savings factors of one to two orders of magnitude with a runtime overhead of typically only around 15%. Keywords: protocol verification, expli...
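The amortization the abstract describes can be sketched as delayed duplicate detection: collect all candidate states of one breadth-first level in memory, then make a single linear pass over the on-disk table to strip duplicates, instead of one random disk access per candidate. In this toy version (illustrative, not Murφ's code) a plain list stands in for the sequential disk file:

```python
def bfs_with_disk_table(initial, successors, disk_table):
    """Breadth-first search whose visited set lives (conceptually) on disk.

    `disk_table` is a list standing in for a sequentially read disk
    file.  Duplicate detection is batched per BFS level: one linear
    scan of the table per level, never a random probe per state.
    """
    frontier = [initial]
    disk_table.append(initial)
    levels = 0
    while frontier:
        # Generate all successors of the current level in memory.
        candidates = {t for s in frontier for t in successors(s)}
        # One linear pass over the "disk" removes already-seen states.
        candidates.difference_update(disk_table)
        disk_table.extend(candidates)
        frontier = list(candidates)
        levels += 1
    return levels
```

Combining this with hash compaction (storing signatures in `disk_table` rather than full states) shrinks the scanned file and is what brings the overhead down to the reported ~15%.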
Verification Techniques for Cache Coherence Protocols.
, 1997
Abstract - Cited by 43 (0 self)
Abstraction and Specification Using FSMs. Although there is a variety of ways to specify a protocol model, we are interested in methodologies that employ finite state machines (FSMs) to form protocol models. Because cache protocols are essentially composed of component processes such as memory and cache controllers that exchange messages and respond to "events" generated by processors, a finite state machine model with such "events" as its inputs is a natural model. Specifically, we focus on verifying cache protocols where the behavior of an individual protocol component C is modeled as a finite state machine FSM_C and the protocol machine is composed of all FSM_C's. Inputs to these machines are processor-generated events and messages for maintaining data consistency. In general, the protocol models are abstracted representations. They are often kept simple to make the complexity of verification manageable, while preserving properties of interest. It is clear that the quality of a ve...
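The composition of component FSMs into one protocol machine can be pictured with a small sketch (an illustrative composition rule with hypothetical states and events, not any specific tool's semantics): each component is a transition table, and on each input event every component that defines a transition takes it while the rest stay put.

```python
def compose(fsms):
    """Synchronous product of component finite state machines.

    Each component is a dict {(local_state, event): next_state}.
    Returns a step function over tuples of local states.
    """
    def step(global_state, event):
        return tuple(
            fsm.get((local, event), local)   # undefined => stay put
            for fsm, local in zip(fsms, global_state)
        )
    return step

# Toy example: a cache line and a directory reacting to a "read" event.
cache = {("I", "read"): "S"}                 # Invalid -> Shared on a read
directory = {("idle", "read"): "shared"}
step = compose([cache, directory])
```

The product's state space is the cross product of the component spaces, which is the state-explosion problem the surveyed verification techniques exist to tame.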
Parallel Performance Analysis of Large Markov Models
, 2000
Abstract - Cited by 37 (14 self)
Stochastic performance models provide a formal way of capturing and analysing the complex dynamic behaviour of concurrent systems. Such models can be specified by several high-level formalisms, including Stochastic Petri nets, Queueing networks and Stochastic Process Algebras. Traditionally, performance statistics for these models are derived by generating and then solving a Markov chain corresponding to the model's behaviour at the state transition level. However, workstation memory and compute power are often overwhelmed by the sheer number of states in the Markov chain and the size of their internal computer representations. This thesis presents two parallel and distributed techniques which significantly increase the size of the models that can be analysed using Markov modelling. The techniques attack the space and time requirements of both major phases of the analysis, i.e. construction of the Markov chain from a high-level model (state space generation) and solution of the Markov chain to determine its equilibrium