Results 1–10 of 52
Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations
In STACS, 2005
Cited by 95 (13 self)
We define Recursive Markov Chains (RMCs), a class of finitely presented denumerable Markov chains, and we study algorithms for their analysis. Informally, an RMC consists of a collection of finite-state Markov chains with the ability to invoke each other in a potentially recursive manner. RMCs offer a natural abstract model for probabilistic programs with procedures. They generalize, in a precise sense, a number of well-studied stochastic models, including Stochastic Context-Free Grammars (SCFG) and Multi-Type Branching Processes (MTBP). We focus on algorithms for reachability and termination analysis for RMCs: what is the probability that an RMC started from a given state reaches another target state, or that it terminates? These probabilities are in general irrational, and they arise as (least) fixed point solutions to certain (monotone) systems of nonlinear equations associated with RMCs. We address both the qualitative problem of determining whether the probabilities are 0, 1, or in between, and …
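Least-fixed-point probabilities of the kind described in this abstract can be approximated from below by Kleene iteration: start at 0 and repeatedly apply the monotone equation system. A minimal sketch, using a made-up one-variable system x = 0.6·x² + 0.4 (a toy branching process that dies with probability 0.4 or spawns two independent copies with probability 0.6; its termination probability is the least fixed point 2/3, while 1 is a second, greater fixed point):

```python
# Kleene (fixed-point) iteration x <- f(x) starting from 0.
# For a monotone system this converges from below to the
# least non-negative fixed point.
def kleene_lfp(f, tol=1e-12, max_iter=1_000_000):
    x = 0.0
    for _ in range(max_iter):
        nx = f(x)
        if abs(nx - x) < tol:
            return nx
        x = nx
    return x

# Toy termination-probability equation: x = 0.6*x^2 + 0.4.
p_term = kleene_lfp(lambda x: 0.6 * x * x + 0.4)
print(p_term)  # converges to 2/3, not to the other fixed point 1.0
```

Convergence here is only geometric (ratio f′(2/3) = 0.8), which is one motivation for the Newton-based acceleration studied in other entries of this listing.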
Automated Verification Techniques for Probabilistic Systems
Cited by 40 (16 self)
Abstract. This tutorial provides an introduction to probabilistic model checking, a technique for automatically verifying quantitative properties of probabilistic systems. We focus on Markov decision processes (MDPs), which model both stochastic and nondeterministic behaviour. We describe methods to analyse a wide range of their properties, including specifications in the temporal logics PCTL and LTL, probabilistic safety properties, and cost- or reward-based measures. We also discuss multi-objective probabilistic model checking, used to analyse trade-offs between several different quantitative properties. Applications of the techniques in this tutorial include performance and dependability analysis of networked systems, communication protocols, and randomised distributed algorithms. Since such systems often comprise several components operating in parallel, we also cover techniques for compositional modelling and verification of multi-component probabilistic systems. Finally, we describe three large case studies which illustrate practical applications of the various methods discussed in the tutorial.
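The basic MDP computation underlying such analyses — the maximum probability of reaching a goal state — can be sketched with plain value iteration. The tiny three-state MDP below is a hypothetical example of ours, not one from the tutorial:

```python
# Value iteration for maximum reachability probability in a small MDP.
# mdp[s] is a list of actions; each action is a distribution
# {successor: probability}.
mdp = {
    0: [{0: 0.5, 1: 0.5},   # action a: flip between staying and the goal
        {1: 0.3, 2: 0.7}],  # action b: riskier jump toward the goal
    1: [{1: 1.0}],          # goal (absorbing)
    2: [{2: 1.0}],          # sink (absorbing)
}
goal = {1}

def max_reach(mdp, goal, iters=200):
    v = {s: 1.0 if s in goal else 0.0 for s in mdp}
    for _ in range(iters):
        v = {s: 1.0 if s in goal else
                max(sum(p * v[t] for t, p in dist.items())
                    for dist in mdp[s])
             for s in mdp}
    return v

print(max_reach(mdp, goal)[0])  # repeating action a reaches the goal almost surely: 1.0
```

Dedicated tools replace the fixed iteration count with a convergence test, and precompute the states with value exactly 0 or 1 by graph analysis before iterating.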
Recursive concurrent stochastic games
In Proc. of 33rd Int. Coll. on Automata, Languages, and Programming (ICALP’06), 2006
Cited by 30 (4 self)
Abstract. We study Recursive Concurrent Stochastic Games (RCSGs), extending our recent analysis of recursive simple stochastic games [16, 17] to a concurrent setting where the two players choose moves simultaneously and independently at each state. For multi-exit games, our earlier work already showed undecidability for basic questions like termination, thus we focus on the important case of single-exit RCSGs (1-RCSGs). We first characterize the value of a 1-RCSG termination game as the least fixed point solution of a system of nonlinear minimax functional equations, and use it to show PSPACE decidability for the quantitative termination problem. We then give a strategy improvement technique, which we use to show that player 1 (maximizer) has ε-optimal randomized Stackless & Memoryless (rSM) strategies for all ε > 0, while player 2 (minimizer) has optimal rSM strategies. Thus, such games are rSM-determined. These results mirror and generalize in a strong sense the randomized memoryless determinacy results for finite stochastic games, and extend the classic Hoffman-Karp [22] strategy improvement approach from the finite to an infinite-state setting. The proofs in our infinite-state setting are very different, however, relying on subtle analytic properties of certain power series that arise from studying 1-RCSGs. We show that our upper bounds, even for qualitative (probability 1) termination, cannot be improved, even to NP, without a major breakthrough, by giving two reductions: first a P-time reduction from the long-standing square-root sum problem to the quantitative termination decision problem for finite concurrent stochastic games, and then a P-time reduction from the latter problem to the qualitative termination problem for 1-RCSGs.
On the convergence of Newton’s method for monotone systems of polynomial equations
In Proceedings of STOC, 2007
Quasi-birth-death processes, tree-like QBDs, probabilistic 1-counter automata, and pushdown systems
2008
Cited by 23 (8 self)
We begin by observing that (discrete-time) Quasi-Birth-Death Processes (QBDs) are equivalent, in a precise sense, to (discrete-time) probabilistic 1-Counter Automata (p1CAs), and both Tree-Like QBDs (TL-QBDs) and Tree-Structured QBDs (TS-QBDs) are equivalent to both probabilistic Pushdown Systems …
Efficient qualitative analysis of classes of recursive Markov decision processes and simple stochastic games
In Proc. STACS’06, 2006
Cited by 22 (9 self)
Abstract. Recursive Markov Decision Processes (RMDPs) and Recursive Simple Stochastic Games (RSSGs) are natural models for recursive systems involving both probabilistic and non-probabilistic actions. As shown recently [10], fundamental problems about such models, e.g., termination, are undecidable in general, but decidable for the important class of 1-exit RMDPs and RSSGs. These capture controlled and game versions of multi-type Branching Processes, an important and well-studied class of stochastic processes. In this paper we provide efficient algorithms for the qualitative termination problem for these models: does the process terminate almost surely when the players use their optimal strategies? Polynomial-time algorithms are given for both maximizing and minimizing 1-exit RMDPs (the two cases are not symmetric). For 1-exit RSSGs the problem is in NP ∩ coNP, and furthermore, it is at least as hard as other well-known NP ∩ coNP problems on games, e.g., Condon’s quantitative termination problem for finite SSGs [3]. For the class of linearly recursive 1-exit RSSGs, we show that the problem can be solved in polynomial time.
One-Counter Markov Decision Processes
Cited by 19 (5 self)
We study the computational complexity of some central analysis …
Convergence thresholds of Newton’s method for monotone polynomial equations
2008
Cited by 15 (8 self)
Monotone systems of polynomial equations (MSPEs) are systems of fixed-point equations X1 = f1(X1, ..., Xn), ..., Xn = fn(X1, ..., Xn) where each fi is a polynomial with positive real coefficients. The question of computing the least nonnegative solution of a given MSPE X = f(X) arises naturally in the analysis of stochastic models such as stochastic context-free grammars, probabilistic pushdown automata, and back-button processes. Etessami and Yannakakis have recently adapted Newton’s iterative method to MSPEs. In a previous paper we proved the existence of a threshold kf for strongly connected MSPEs such that, after kf iterations of Newton’s method, each new iteration computes at least one new bit of the solution. However, the proof was purely existential. In this paper we give an upper bound for kf as a function of the minimal component of the least fixed point µf of f(X). Using this result we show that kf is at most single exponential for strongly connected MSPEs derived from probabilistic pushdown automata, and at most linear for those derived from back-button processes. Further, we prove the existence of a threshold for arbitrary MSPEs after which each new iteration computes at least 1/(w·2^h) new bits of the solution, where w and h are the width and height of the DAG of strongly connected components.
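The gap between Kleene and Newton iteration is easy to see on a one-dimensional example. The sketch below applies Newton's method to the hypothetical equation x = f(x) with f(x) = 0.6x² + 0.4, whose least fixed point is 2/3 (this toy instance is ours, not from the paper):

```python
# Newton iteration for the least fixed point of x = f(x),
# f(x) = 0.6*x**2 + 0.4.  Each step solves the linearization of
# g(x) = f(x) - x at the current point: x <- x - g(x)/g'(x).
# Started at 0, the iterates increase monotonically to mu_f = 2/3.
def newton_lfp(steps=6):
    x = 0.0
    for _ in range(steps):
        fx = 0.6 * x * x + 0.4        # f(x)
        dfx = 1.2 * x                 # f'(x)
        x -= (fx - x) / (dfx - 1.0)   # Newton step on g(x) = f(x) - x
    return x

print(newton_lfp())  # ~2/3 after only 6 steps; Kleene iteration needs 100+ for similar accuracy
```

For genuine multivariate MSPEs the step solves a linear system with the Jacobian of f, and non-strongly-connected systems are handled component by component along the DAG of SCCs, as in the decomposed Newton's method of Etessami and Yannakakis.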
PReMo: an analyzer for Probabilistic Recursive Models
In Proc. of TACAS, 2007
Cited by 11 (4 self)
This paper describes PReMo, a tool for analyzing Recursive Markov Chains and their controlled/game extensions: (1-exit) Recursive Markov Decision Processes and Recursive Simple Stochastic Games.