Results 1–10 of 63
Efficient computation of time-bounded reachability probabilities in uniform continuous-time Markov decision processes
, 2004

Bisimulation Minimisation Mostly Speeds Up Probabilistic Model Checking
In: Tools and Algorithms for the Construction and Analysis of Systems, 13th International Conference (TACAS'07), Lecture Notes in Computer Science 4424
, 2007
Cited by 40 (9 self)
The following full text is a publisher's version.

Three-valued abstraction for continuous-time Markov chains
In Proceedings of the International Conference on Computer Aided Verification
, 2007
Cited by 36 (8 self)
Abstract. This paper proposes a novel abstraction technique for continuous-time Markov chains (CTMCs). Our technique fits within the realm of three-valued abstraction methods that have been used successfully for traditional model checking. The key idea is to apply abstraction on uniform CTMCs that are readily obtained from general CTMCs, and to abstract transition probabilities by intervals. It is shown that this provides a conservative abstraction for both true and false under a three-valued semantics of the branching-time logic CSL (Continuous Stochastic Logic). Experiments on an infinite-state CTMC indicate the feasibility of our abstraction technique.

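The interval idea can be illustrated concretely. The sketch below is a hypothetical encoding, not the paper's implementation (the chain, partition, and all names are made up): given one-step transition probabilities of a uniformized chain and a partition into abstract blocks, it computes the interval [lo, hi] bounding the probability of moving from any concrete state of one block into another.

```python
def interval_abstraction(P, blocks):
    """For each ordered pair of blocks (A, B), bound the one-step probability
    of jumping from a concrete state of A into B by an interval [lo, hi]."""
    out = {}
    for a, A in blocks.items():
        for b, B in blocks.items():
            # one-step mass into B, measured from each concrete state of A
            mass = [sum(P[s].get(t, 0.0) for t in B) for s in A]
            out[(a, b)] = (min(mass), max(mass))
    return out

# Hypothetical uniformized 3-state chain, abstracted into two blocks
P = {"s0": {"s1": 0.3, "s2": 0.7},
     "s1": {"s2": 1.0},
     "s2": {"s2": 1.0}}
blocks = {"A": {"s0", "s1"}, "B": {"s2"}}
```

Here `interval_abstraction(P, blocks)[("A", "B")]` yields `(0.7, 1.0)`: every concrete state of block A moves into block B with some probability between 0.7 and 1.0, which is exactly the conservative information the abstract chain retains.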
Probabilistic Reachability for Parametric Markov Models
Cited by 29 (4 self)
Given a parametric Markov model, we consider the problem of computing the formula expressing the probability of reaching a given set of states. To attack this principal problem, Daws has suggested first converting the Markov chain into a finite automaton, from which a regular expression is computed. Afterwards, this expression is evaluated to a closed-form expression representing the reachability probability. This paper investigates how this idea can be turned into an effective procedure. It turns out that the bottleneck lies in the exponential growth of the regular expression relative to the number of states. We therefore proceed differently, by tightly intertwining the regular-expression computation with its evaluation. This allows us to arrive at an effective method that avoids the exponential blow-up in most practical cases. We give a detailed account of the approach, also extending to parametric models with rewards and with nondeterminism. Experimental evidence is provided, illustrating that our implementation provides meaningful insights on non-trivial models.

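The state-elimination idea underlying Daws's approach can be sketched for the non-parametric case. The code below is illustrative only (the chain and state names are invented); in the parametric setting the `Fraction` entries become rational functions over the parameters, but the elimination scheme is the same: intermediate states are removed one by one, redistributing their self-loop mass, until only the initial and target states remain.

```python
from fractions import Fraction

def reach_probability(P, init, target):
    """Reachability probability in a finite DTMC (dict-of-dicts) by state
    elimination.  Non-target absorbing states must be encoded with no
    outgoing transitions (a substochastic row), so that eliminating them
    never divides by zero."""
    states = set(P)
    for k in list(states - {init, target}):
        loop = P[k].get(k, Fraction(0))
        scale = 1 / (1 - loop)          # geometric sum over k's self-loop
        succs = [j for j in P[k] if j != k]
        for i in states - {k}:
            if k in P[i]:
                w = P[i].pop(k)         # reroute i -> k -> j around k
                for j in succs:
                    P[i][j] = P[i].get(j, Fraction(0)) + w * scale * P[k][j]
        states.discard(k)
        del P[k]
    loop = P[init].get(init, Fraction(0))   # remaining retries at init
    return P[init].get(target, Fraction(0)) / (1 - loop)

# s0 -> s1 or fail (prob 1/2 each); s1 -> s0 or target t (prob 1/2 each)
P = {"s0": {"s1": Fraction(1, 2), "fail": Fraction(1, 2)},
     "s1": {"s0": Fraction(1, 2), "t": Fraction(1, 2)},
     "t": {}, "fail": {}}
```

On this chain `reach_probability(P, "s0", "t")` returns `Fraction(1, 3)`. Replacing `Fraction` with a rational-function type yields the closed-form expression in the parameters; intertwining simplification with elimination, as the paper proposes, is what keeps that expression small in practice.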
Probability and Nondeterminism in Operational Models of Concurrency
In Proc. CONCUR, LNCS
, 2006
Cited by 19 (1 self)
Abstract. We give a brief overview of operational models for concurrent systems that exhibit probabilistic behavior, focusing on the interplay between probability and nondeterminism. Our survey is carried out from the perspective of probabilistic automata, a model originally developed for the analysis of randomized distributed algorithms.

Flow faster: Efficient decision algorithms for probabilistic simulations
In 13th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), volume 4424 of LNCS
Cited by 16 (4 self)
Abstract. Strong and weak simulation relations have been proposed for Markov chains, while strong simulation and strong probabilistic simulation relations have been proposed for probabilistic automata. However, decision algorithms for strong and weak simulation over Markov chains, and for strong simulation over probabilistic automata, are not efficient, which makes it as yet unclear whether they can be used as effectively as their non-probabilistic counterparts. This paper presents drastically improved algorithms to decide whether some (discrete- or continuous-time) Markov chain strongly or weakly simulates another, or whether a probabilistic automaton strongly simulates another. The key innovation is the use of parametric maximum-flow techniques to amortize computations. We also present a novel algorithm for deciding strong probabilistic simulation preorders on probabilistic automata, which has polynomial complexity via a reduction to an LP problem. When extending the algorithms for probabilistic automata to their continuous-time counterpart, we retain the same complexity for both strong and strong probabilistic simulations.

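The role of maximum flow can be illustrated on the basic sub-problem these algorithms accelerate: deciding whether one distribution can be split over another along a relation R (the "weight function" condition of strong simulation). A plain Edmonds-Karp check is sketched below; it is illustrative only, since the paper's contribution is to amortize many such checks with *parametric* max-flow, and all names here are invented.

```python
from fractions import Fraction
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on an adjacency-dict capacity map (mutated in place)."""
    flow = Fraction(0)
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                       # walk parents back to the source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:                     # update residual capacities
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, Fraction(0)) + push
        flow += push

def simulated(mu, nu, R):
    """Does a weight function split distribution mu over nu along R?
    Feasible iff the max flow from src to snk equals 1."""
    cap = {"src": {}, "snk": {}}
    for s, p in mu.items():
        cap[("L", s)] = {}
        cap["src"][("L", s)] = p
    for t, p in nu.items():
        cap[("R", t)] = {"snk": p}
    for s, t in R:
        if s in mu and t in nu:
            cap[("L", s)][("R", t)] = Fraction(2)   # effectively unbounded
    return max_flow(cap, "src", "snk") == 1
```

For example, `simulated({"a": 1}, {"x": 1/2, "y": 1/2}, {("a","x"), ("a","y")})` holds because the unit of mass at `a` can be split over `x` and `y`, whereas dropping the edge `("a","y")` makes it fail.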
Concurrency and Composition in a Stochastic World
, 2012
Cited by 12 (3 self)
Abstract. We discuss conceptual and foundational aspects of Markov automata [22]. We place this model in the context of continuous- and discrete-time Markov chains, probabilistic automata, and interactive Markov chains, and provide insight into the parallel execution of such models. We further give a detailed account of the concept of relations on distributions, and discuss how this can generalise known notions of weak simulation and bisimulation, so as to fuse sequences of internal transitions.

Approximate abstractions of stochastic hybrid systems
In IEEE Transactions on Automatic Control
Cited by 11 (3 self)
Abstract—We present a constructive procedure for obtaining a finite approximate abstraction of a discrete-time stochastic hybrid system. The procedure consists of a partition of the state space of the system and depends on a controllable parameter. Given proper continuity assumptions on the model, the approximation errors introduced by the abstraction procedure are explicitly computed, and it is shown that they can be tuned through the parameter of the partition. The abstraction is interpreted as a Markov set-chain. We show that the enforcement of certain ergodic properties on the stochastic hybrid model implies the existence of a finite abstraction with finite error over time with respect to the concrete model, and allows introducing a finite-time algorithm that computes the abstraction. Index Terms—Stochastic hybrid systems, Markov chains.

A Uniform Framework for Modeling Nondeterministic, Probabilistic, Stochastic, or Mixed Processes and their Behavioral Equivalences
, 2013
Cited by 9 (4 self)
Labeled transition systems are typically used as behavioral models of concurrent processes. Their labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution with any pair consisting of a source state and a transition label. The state reachability distribution is a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS (from Uniform Labeled Transition System), can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and nondeterministic-and-probabilistic (MDP) or nondeterministic-and-stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. These can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models, except when nondeterminism and probability/stochasticity coexist; then new equivalences pop up.

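The parameterization over a value domain can be made concrete in a few lines. Below is a hypothetical encoding following the abstract's idea (states, labels, and function names are invented): a transition map from (state, label) pairs to value-assignments, where the minimum of the value set marks unreachability, instantiated once as a fully nondeterministic system and once as a fully probabilistic one.

```python
def one_step_targets(ultras, state, label, bottom):
    """States reachable in one step: those whose reachability value
    differs from the minimum `bottom` of the preordered value set."""
    return {t for t, v in ultras.get((state, label), {}).items() if v != bottom}

# Fully nondeterministic instance (LTS-like): values in {False, True}, minimum False
lts = {("s0", "a"): {"s1": True, "s2": False}}

# Fully probabilistic instance: values in [0, 1], minimum 0.0
pts = {("s0", "tau"): {"s1": 0.5, "s2": 0.5}}
```

The same query works for both instances; only the value domain and its minimum change, which is precisely the uniformity the framework exploits when defining equivalences once for all models.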
Approximate verification of the symbolic dynamics of Markov chains
Technical report, available at http://www.crans.org/~genest/AAGT12.pdf
Cited by 9 (0 self)
Abstract—A finite-state Markov chain M is often viewed as a probabilistic transition system. An alternative view, which we follow here, is to regard M as a linear transform operating on the space of probability distributions over its set of nodes. The novel idea here is to discretize the probability value space [0,1] into a finite set of intervals. A concrete probability distribution over the nodes is then symbolically represented as a tuple D of such intervals. The i-th component of the discretized distribution D will be the interval in which the probability of node i falls. The set of discretized distributions is a finite set, and each trajectory, generated by repeated applications of M to an initial distribution, will induce a unique infinite string over this finite set of letters. Hence, given a set of initial distributions, the symbolic dynamics of M will consist of an infinite language L over the finite alphabet of discretized distributions. We investigate whether L …
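The discretization can be sketched directly (an illustrative encoding; the matrix, cut-points, and names are made up): split [0,1] at chosen cut-points, map each distribution to the tuple of interval indices its components fall into, and record the string of letters induced by iterating M as a linear transform.

```python
from bisect import bisect_right

def discretize(dist, cuts):
    """Tuple D of interval indices: component i is the interval of [0,1]
    (split at the cut-points `cuts`) containing the probability of node i."""
    return tuple(bisect_right(cuts, p) for p in dist)

def symbolic_trajectory(M, dist, cuts, steps):
    """Prefix of the string induced by repeatedly applying the linear
    transform M (dist -> dist . M) to an initial distribution."""
    word = [discretize(dist, cuts)]
    for _ in range(steps):
        dist = [sum(dist[i] * M[i][j] for i in range(len(dist)))
                for j in range(len(M[0]))]
        word.append(discretize(dist, cuts))
    return word

# Hypothetical 2-node chain; cut-points 1/4 and 3/4 give the three
# intervals [0, 1/4), [1/4, 3/4), [3/4, 1]
M = [[0.5, 0.5],
     [0.5, 0.5]]
```

Starting from the point distribution on node 0, `symbolic_trajectory(M, [1.0, 0.0], (0.25, 0.75), 2)` produces `[(2, 0), (1, 1), (1, 1)]`: the trajectory immediately enters the letter `(1, 1)` and stays there, so this initial distribution contributes the ultimately periodic string (2,0)(1,1)(1,1)… to the language L.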