Results 1–10 of 18
On-the-fly Confluence Detection for Statistical Model Checking
, 2013
Cited by 4 (2 self)
Statistical model checking is an analysis method that circumvents the state space explosion problem in model-based verification by combining probabilistic simulation with statistical methods that provide clear error bounds. As a simulation-based technique, it can only provide sound results if the underlying model is a stochastic process. In verification, however, models are usually variations of nondeterministic transition systems. The notion of confluence allows the reduction of such transition systems in classical model checking by removing spurious nondeterministic choices. In this presentation, we show that confluence can be adapted to detect and discard such choices on-the-fly during simulation, thus extending the applicability of statistical model checking to a subclass of Markov decision processes. In contrast to previous approaches that use partial order reduction, the confluence-based technique can handle additional kinds of nondeterminism. In particular, it is not restricted to interleavings. We evaluate our approach, which is implemented as part of the modes simulator for the MODEST modelling language, on a set of examples that highlight its strengths and limitations and show the improvements compared to the partial-order-based method.
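As a toy illustration of the diamond (commutation) condition that underlies confluence in the abstract above, here is a sketch in Python. It is heavily simplified: transitions are deterministic functions of a state and an action, whereas the paper's check works on probabilistic transition systems inside the modes simulator; the function names and the grid example are hypothetical.

```python
def is_confluent(state, a, b, step):
    """Simplified diamond check: executing a then b reaches the
    same state as executing b then a."""
    return step(step(state, a), b) == step(step(state, b), a)

def resolve(state, enabled, step):
    """During simulation, if every pair of enabled actions commutes,
    the nondeterministic choice is spurious and may be resolved
    arbitrarily; otherwise the nondeterminism is genuine."""
    if all(is_confluent(state, a, b, step)
           for i, a in enumerate(enabled)
           for b in enabled[i + 1:]):
        return enabled[0]
    raise ValueError("genuine nondeterminism at state %r" % (state,))
```

On a grid world where action 'x' moves right and 'y' moves up, the two actions commute, so the choice between them can be discarded during simulation.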
Probably Approximately Correct MDP Learning and Control With Temporal Logic Constraints
Cited by 3 (2 self)
Abstract—We consider synthesis of controllers that maximize the probability of satisfying given temporal logic specifications in unknown, stochastic environments. We model the interaction between the system and its environment as a Markov decision process (MDP) with initially unknown transition probabilities. The solution we develop builds on the so-called model-based probably approximately correct Markov decision process (PAC-MDP) method. The algorithm attains an ε-approximately optimal policy with probability 1−δ using samples (i.e. observations), time and space that grow polynomially with the size of the MDP and the size of the automaton expressing the temporal logic specification ...
Control of noisy differential-drive vehicles from time-bounded temporal logic specifications
In ICRA, 2013
Cited by 2 (0 self)
Abstract — We address the problem of controlling a noisy differential-drive mobile robot such that the probability of satisfying a specification, given as a Bounded Linear Temporal Logic (BLTL) formula over a set of properties at the regions in the environment, is maximized. We assume that the vehicle can determine its precise initial position in a known map of the environment. However, inspired by practical limitations, we assume that the vehicle is equipped with noisy actuators and, during its motion in the environment, it can only measure the angular velocity of its wheels using limited-accuracy incremental encoders. Assuming the duration of the motion is finite, we map the measurements to a Markov Decision Process (MDP). We use recent results in Statistical Model Checking (SMC) to obtain an MDP control policy that maximizes the probability of satisfaction. We translate this policy to a vehicle feedback control strategy and show that the probability that the vehicle satisfies the specification in the environment is bounded from below by the probability of satisfying the specification on the MDP. We illustrate our method with simulations and experimental results.
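The SMC step in the abstract above rests on a generic idea: estimate the probability that a bounded property holds by Monte Carlo simulation, choosing the sample count from the Chernoff–Hoeffding bound so the estimate carries an explicit error guarantee. A minimal stdlib-only sketch (the function names and parameters are illustrative, not the paper's implementation):

```python
import math
import random

def smc_estimate(simulate, check, eps=0.05, delta=0.01, rng=None):
    """Estimate P(property holds) by simulation.  The Chernoff-
    Hoeffding bound gives a sample count N such that the estimate is
    within eps of the true probability with confidence 1 - delta."""
    rng = rng or random.Random(0)
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(check(simulate(rng)) for _ in range(n))
    return hits / n
```

For example, with `eps=0.05` and `delta=0.01`, about 1060 simulated traces suffice for a ±0.05 estimate at 99% confidence, regardless of the size of the underlying model.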
Verification of Markov Decision Processes using Learning Algorithms
Cited by 1 (0 self)
Abstract. We present a general framework for applying machine-learning algorithms to the verification of Markov decision processes (MDPs). The primary goal of these techniques is to improve performance by avoiding an exhaustive exploration of the state space. Our framework focuses on probabilistic reachability, which is a core property for verification, and is illustrated through two distinct instantiations. The first assumes that full knowledge of the MDP is available, and performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP, and yields probabilistic guarantees, again in terms of both the lower and upper bounds, which provides efficient stopping criteria for the approximation. The latter is the first extension of statistical model checking for unbounded properties in MDPs. In contrast with other related techniques, our approach is not restricted to time-bounded (finite-horizon) or discounted properties, nor does it assume any particular properties of the MDP. We also show how our methods extend to LTL objectives. We present experimental results showing the performance of our framework on several examples.
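The lower/upper-bound stopping criterion described in the abstract above can be sketched as a small interval iteration on reachability probabilities: refine a lower bound L and an upper bound U until their gap at the initial state falls below a tolerance. This is a simplified stdlib-only sketch, not the paper's algorithm; in particular, the full method must also collapse end components for U to converge in general, whereas here non-target dead-end states simply act as sinks.

```python
def bounded_reachability(succ, target, states, s0, eps=1e-6, max_iter=10_000):
    """Interval iteration sketch for max reachability in an MDP.
    succ(s) returns the enabled actions as lists of (prob, successor)."""
    L = {s: (1.0 if s in target else 0.0) for s in states}
    U = {s: 1.0 for s in states}
    for _ in range(max_iter):
        for s in states:
            if s in target:
                continue
            acts = succ(s)
            if not acts:            # dead end: reachability probability 0
                U[s] = 0.0
                continue
            L[s] = max(sum(p * L[t] for p, t in a) for a in acts)
            U[s] = max(sum(p * U[t] for p, t in a) for a in acts)
        if U[s0] - L[s0] < eps:     # gap small enough: stop early
            break
    return L[s0], U[s0]
```

The point of the two bounds is that iteration can stop as soon as they agree to the desired precision, without exploring or converging on the whole state space.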
A Simple and Efficient Statistical Model Checking Algorithm to Evaluate Markov Decision Processes
, 2013
Abstract. We propose a simple and efficient technique that allows the application of statistical model checking (SMC) to Markov Decision Processes (MDP). Our technique finds schedulers that transform the original MDP into a purely stochastic Markov chain, on which standard SMC can be used. A statistical search is performed over the set of possible schedulers to find the best and worst with respect to the given property. If a scheduler is found that disproves the property, a counterexample is produced. If no counterexample is found, the algorithm concludes that the property is probably satisfied, with a confidence depending on the number of schedulers evaluated. Unlike previous approaches, the efficiency of our algorithm does not depend on structural properties of the MDP. Moreover, we have devised an efficient procedure to address general classes of schedulers by using pseudo-random number generators and hash functions. In practice, our algorithm allows the representation of general schedulers in constant space, in contrast to existing algorithms that are exponential in the size of the system. In particular, this allows our SMC algorithm for MDPs to consider memory-dependent schedulers in addition to the memoryless schedulers that have been considered by others.
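The constant-space scheduler representation described above can be illustrated with a short sketch: a scheduler is identified by a single integer seed, and the action chosen at each step is derived by hashing the seed together with the trace so far. The same seed always reproduces the same (possibly history-dependent) scheduler, so "storing" a scheduler means storing one integer. This is an illustrative reconstruction with hypothetical names, not the paper's code.

```python
import hashlib

def scheduler_action(seed, trace, enabled):
    """Pick an action deterministically from (seed, trace): the hash
    of the pair indexes into the enabled actions, so the integer seed
    alone identifies an entire history-dependent scheduler."""
    digest = hashlib.sha256(repr((seed, trace)).encode()).digest()
    return enabled[int.from_bytes(digest[:8], "big") % len(enabled)]
```

Sampling schedulers then reduces to sampling seeds, and re-simulating a promising scheduler only requires remembering its seed.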
Pareto efficiency in synthesizing shared autonomy policies with temporal logic constraints
Towards an Affordable Internet Access for Everyone: The Quest for Enabling Universal Service Commitment (Dagstuhl Seminar 14471)
, 2015
Markov models are a popular means for modelling and verifying a system's non-functional properties. However, most of the verification procedures are based on heavy numerical routines and suffer from state space explosion issues. Bisimulation minimization is a state space reduction technique that can often reduce the impact of state space explosion. On the other hand, it usually requires an explicit representation of the state space, which might be infeasible for large systems. In [2] a bisimulation minimization approach has been proposed, which leverages an SMT solver to extract the minimized system from the extended specification. The goal of this seminar is to study and reproduce the approach of [2] for bisimulation minimization, using the SMT solver Microsoft Z3. As an additional extension, the student may develop an SMT-based verification. This work might optionally be extended into an MS Thesis.
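The approach of [2] encodes the minimization problem for an SMT solver (Z3); as a stdlib-only illustration of what bisimulation minimization computes, here is a sketch of the classical partition-refinement alternative it is compared against. States of a labelled Markov chain are grouped into blocks, and a block is split whenever its states disagree on their label or on the probability mass they send into some block.

```python
def bisim_minimize(states, label, prob):
    """Naive partition refinement for a labelled Markov chain.
    label(s) gives a state's label; prob(s, t) the transition
    probability.  Returns a dict mapping each state to its block id."""
    part = {s: label(s) for s in states}       # initial partition by label
    while True:
        blocks = set(part.values())
        # signature: own block plus probability mass into every block
        sig = {s: (part[s],
                   tuple(sorted((b, sum(prob(s, t) for t in states
                                        if part[t] == b))
                                for b in blocks)))
               for s in states}
        ids, new = {}, {}
        for s in states:
            new[s] = ids.setdefault(sig[s], len(ids))
        if new == part:                        # fixed point: partition stable
            return new
        part = new
```

On a chain with two equally-labelled absorbing states, the two absorbing states collapse into one block, and so do the two states leading into them.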
Performance Analysis for LTE Networks with Markov Decision Process
Abstract—Long Term Evolution (LTE) has been proposed as a promising radio access technology to bring higher peak data rates and better spectral efficiency. However, scheduling and resource allocation in LTE still face huge design challenges due to their complexity. In this paper, the optimization problem of scheduling and resource allocation for separate streams is first formulated. By separating stream scheduling and packet sorting, the scheduler is aware of probabilistic state information, fairness among the streams, and the frame weight. Our algorithm thus reduces a Markov Decision Process (MDP) to a fully probabilistic Markov chain, on which Statistical Model Checking (SMC) may be applied to give an approximate solution to the problem of checking the probabilistic Bounded Linear Temporal Logic (BLTL) property. We integrate our algorithm in a parallelized modification of the PRISM simulation framework. Extensive validation with both new and PRISM benchmarks demonstrates that the approach scales very well in scenarios where symbolic algorithms fail to do so. Simulation results with video sequences show that significant gains can be obtained with our scheme in terms of spectrum efficiency, QoS of packet delay, and video quality while maintaining fairness among the streams.