Results 1–10 of 54
The ins and outs of the probabilistic model checker MRMC
 In Proc. QEST’09
, 2009
Abstract

Cited by 75 (18 self)
The Markov Reward Model Checker (MRMC) is a software tool for verifying properties over probabilistic models. It supports PCTL and CSL model checking, and their reward extensions. Distinguishing features of MRMC are its support for computing time- and reward-bounded reachability probabilities, (property-driven) bisimulation minimization, and precise on-the-fly steady-state detection. Recent tool features include time-bounded reachability analysis for uniform CTMDPs and CSL model checking by discrete-event simulation. This paper presents the tool’s current status and its implementation details.
Monte Carlo Model Checking
 In Proc. of Tools and Algorithms for Construction and Analysis of Systems (TACAS 2005), volume 3440 of LNCS
, 2005
Abstract

Cited by 58 (4 self)
Abstract. We present MC², what we believe to be the first randomized, Monte Carlo algorithm for temporal-logic model checking, the classical problem of deciding whether or not a property specified in temporal logic holds of a system specification. Given a specification S of a finite-state system, an LTL (Linear Temporal Logic) formula ϕ, and parameters ɛ and δ, MC² takes N = ln(δ) / ln(1 − ɛ) random samples (random walks ending in a cycle, i.e., lassos) from the Büchi automaton B = B_S × B_¬ϕ to decide if L(B) = ∅. Should a sample reveal an accepting lasso l, MC² returns false with l as a witness. Otherwise, it returns true and reports that with probability less than δ, p_Z < ɛ, where p_Z is the expectation of an accepting lasso in B. It does so in time O(N · D) and space O(D), where D is B’s recurrence diameter, using a number of samples N that is optimal to within a constant factor. Our experimental results demonstrate that MC² is fast, memory-efficient, and scales very well.
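The sample-size bound N = ln(δ) / ln(1 − ɛ) quoted in the abstract is easy to compute directly; a minimal sketch (the function name and the input check are ours, not from the paper):

```python
import math

def mc2_sample_size(epsilon: float, delta: float) -> int:
    """Number of random lasso samples N = ceil(ln(delta) / ln(1 - epsilon)),
    so that if no accepting lasso is observed among N samples, the
    probability of an accepting lasso is below epsilon with confidence
    at least 1 - delta."""
    if not (0.0 < epsilon < 1.0 and 0.0 < delta < 1.0):
        raise ValueError("epsilon and delta must lie in (0, 1)")
    return math.ceil(math.log(delta) / math.log(1.0 - epsilon))
```

For example, ɛ = 0.01 and δ = 0.05 give N = 299 samples; note that N grows only logarithmically in 1/δ but roughly linearly in 1/ɛ.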
Automated Verification Techniques for Probabilistic Systems
Abstract

Cited by 40 (16 self)
Abstract. This tutorial provides an introduction to probabilistic model checking, a technique for automatically verifying quantitative properties of probabilistic systems. We focus on Markov decision processes (MDPs), which model both stochastic and nondeterministic behaviour. We describe methods to analyse a wide range of their properties, including specifications in the temporal logics PCTL and LTL, probabilistic safety properties and cost- or reward-based measures. We also discuss multi-objective probabilistic model checking, used to analyse trade-offs between several different quantitative properties. Applications of the techniques in this tutorial include performance and dependability analysis of networked systems, communication protocols and randomised distributed algorithms. Since such systems often comprise several components operating in parallel, we also cover techniques for compositional modelling and verification of multi-component probabilistic systems. Finally, we describe three large case studies which illustrate practical applications of the various methods discussed in the tutorial.
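A standard method covered by such tutorials is value iteration for maximum reachability probabilities in an MDP; the sketch below is a generic illustration (the data representation is our own assumption, not the tutorial's code):

```python
def max_reach_prob(states, goal, trans, eps=1e-8):
    """Gauss-Seidel value iteration for Pmax(reach goal) in an MDP.
    trans[s] is a list of actions; each action is a list of
    (probability, successor) pairs summing to 1."""
    v = {s: (1.0 if s in goal else 0.0) for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goal or not trans[s]:
                continue  # goal states and deadlock states keep their value
            best = max(sum(p * v[t] for p, t in a) for a in trans[s])
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v
```

For instance, with a state 0 whose single action reaches the goal state 2 with probability 0.5 and a sink state 1 otherwise, the iteration converges to Pmax = 0.5 for state 0.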
Three-valued abstraction for continuous-time Markov chains
 In Proceedings of the International Conference on Computer Aided Verification
, 2007
Abstract

Cited by 36 (8 self)
Abstract. This paper proposes a novel abstraction technique for continuous-time Markov chains (CTMCs). Our technique fits within the realm of three-valued abstraction methods that have been used successfully for traditional model checking. The key idea is to apply abstraction on uniform CTMCs that are readily obtained from general CTMCs, and to abstract transition probabilities by intervals. It is shown that this provides a conservative abstraction for both true and false for a three-valued semantics of the branching-time logic CSL (Continuous Stochastic Logic). Experiments on an infinite-state CTMC indicate the feasibility of our abstraction technique.
Dynamic fault tree analysis using input/output interactive Markov chains
 In Proc. of the 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks
, 2007
Abstract

Cited by 24 (7 self)
Dynamic Fault Trees (DFT) extend standard fault trees by allowing the modeling of complex system components’ behaviors and interactions. Being a high-level model and easy to use, DFT are experiencing a growing success among reliability engineers. Unfortunately, a number of issues still remain when using DFT. Briefly, these issues are (1) a lack of formality (syntax and semantics), (2) limitations in modular analysis and thus vulnerability to the state-space explosion problem, and (3) a lack of modular model-building. We use the input/output interactive Markov chain (I/O-IMC) formalism to analyse DFT. I/O-IMC have a precise semantics and are an extension of continuous-time Markov chains with input and output actions. In this paper, using the I/O-IMC framework, we address and resolve issues (2) and (3) mentioned above. We also show, through some examples, how one can readily extend the DFT modeling capabilities using the I/O-IMC framework.
A characterization of meaningful schedulers for continuous-time Markov decision processes. In: Formal Modeling and Analysis of Timed Systems
 LNCS
, 2006
Abstract

Cited by 19 (1 self)
Abstract. Continuous-time Markov decision processes are an important variant of labelled transition systems, having nondeterminism through labels and stochasticity through exponential fire-time distributions. Nondeterministic choices are resolved using the notion of a scheduler. In this paper we characterize the class of measurable schedulers, which is the most general one, and show how a measurable scheduler induces a unique probability measure on the sigma-algebra of infinite paths. We then give evidence that for particular reachability properties it is sufficient to consider a subset of measurable schedulers. Having analyzed schedulers and their induced probability measures, we finally show that each probability measure on the sigma-algebra of infinite paths is indeed induced by a measurable scheduler, which proves that this class is complete.
A compositional semantics for Dynamic Fault Trees in terms of Interactive Markov Chains
Abstract

Cited by 16 (9 self)
Abstract. Dynamic fault trees (DFTs) are a versatile and common formalism to model and analyze the reliability of computer-based systems. This paper presents a formal semantics of DFTs in terms of input/output interactive Markov chains (I/O-IMCs), which extend continuous-time Markov chains with discrete input, output and internal actions. This semantics provides a rigorous basis for the analysis of DFTs. Our semantics is fully compositional, that is, the semantics of a DFT is expressed in terms of the semantics of its elements (i.e. basic events and gates). This enables an efficient analysis of DFTs through compositional aggregation, which helps to alleviate the state-space explosion problem by incrementally building the DFT state space. We have implemented our methodology by developing a tool, and showed, through four case studies, the feasibility of our approach and its effectiveness in reducing the state space to be analyzed.
Flow faster: Efficient decision algorithms for probabilistic simulations
 13th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), volume 4424 of LNCS
Abstract

Cited by 16 (4 self)
Abstract. Strong and weak simulation relations have been proposed for Markov chains, while strong simulation and strong probabilistic simulation relations have been proposed for probabilistic automata. However, decision algorithms for strong and weak simulation over Markov chains, and for strong simulation over probabilistic automata, are not efficient, which makes it as yet unclear whether they can be used as effectively as their non-probabilistic counterparts. This paper presents drastically improved algorithms to decide whether some (discrete- or continuous-time) Markov chain strongly or weakly simulates another, or whether a probabilistic automaton strongly simulates another. The key innovation is the use of parametric maximum flow techniques to amortize computations. We also present a novel algorithm for deciding strong probabilistic simulation preorders on probabilistic automata, which has polynomial complexity via a reduction to an LP problem. When extending the algorithms for probabilistic automata to their continuous-time counterpart, we retain the same complexity for both strong and strong probabilistic simulations.
Delayed Nondeterminism in ContinuousTime Markov Decision Processes
Abstract

Cited by 10 (2 self)
Schedulers in randomly timed games can be classified as to whether they use timing information or not. We consider continuous-time Markov decision processes (CTMDPs) and define a hierarchy of positional (P) and history-dependent (H) schedulers which induce strictly tighter bounds on quantitative properties of CTMDPs. This classification into time-abstract (TA), total-time (TT) and fully time-dependent (T) schedulers is mainly based on the kind of timing details that the schedulers may exploit. We investigate when the resolution of nondeterminism may be deferred. In particular, we show that TTP and TAP schedulers allow for delaying nondeterminism for all measures, whereas this holds neither for TP nor for any TAH scheduler. The core of our study is a transformation on CTMDPs which unifies the speed of outgoing transitions per state.
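The transformation mentioned at the end, equalizing exit rates across states, resembles classical uniformization of a CTMC; the sketch below illustrates that classical construction only (the paper's CTMDP transformation differs in detail):

```python
def uniformize(Q, rate=None):
    """Uniformize a CTMC generator matrix Q (rows sum to 0): return
    (P, rate), where P[i][j] = (i == j) + Q[i][j] / rate is the embedded
    DTMC of the uniformized chain in which every state has the same
    exit rate `rate` >= max_i(-Q[i][i])."""
    n = len(Q)
    if rate is None:
        rate = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / rate for j in range(n)]
         for i in range(n)]
    return P, rate
```

For the two-state generator Q = [[-2, 2], [1, -1]], the uniformization rate is 2 and the resulting DTMC is [[0, 1], [0.5, 0.5]]; each row of P is a probability distribution.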
Alternating timed automata over bounded time
 In LICS
Abstract

Cited by 8 (2 self)
Abstract. Alternating timed automata are a powerful extension of classical Alur-Dill timed automata that are closed under all Boolean operations. They have played a key role, among others, in providing verification algorithms for prominent specification formalisms such as Metric Temporal Logic. Unfortunately, when interpreted over an infinite dense time domain (such as the reals), alternating timed automata have an undecidable language-emptiness problem. The main result of this paper is that, over bounded time domains, language emptiness for alternating timed automata is decidable (but non-elementary). The proof involves showing decidability of a class of parametric McNaughton games that are played over timed words and that have winning conditions expressed in monadic logic over the signature with order and the +1 function. As a corollary, we establish the decidability of the time-bounded model-checking problem for Alur-Dill timed automata against specifications expressed as alternating timed automata.