Results 1–10 of 12
Approximate abstractions of stochastic hybrid systems
 IEEE Transactions on Automatic Control
Cited by 11 (3 self)
Abstract—We present a constructive procedure for obtaining a finite approximate abstraction of a discrete-time stochastic hybrid system. The procedure consists of a partition of the state space of the system and depends on a controllable parameter. Given proper continuity assumptions on the model, the approximation errors introduced by the abstraction procedure are explicitly computed and it is shown that they can be tuned through the parameter of the partition. The abstraction is interpreted as a Markov set-Chain. We show that the enforcement of certain ergodic properties on the stochastic hybrid model implies the existence of a finite abstraction with finite error in time over the concrete model, and allows introducing a finite-time algorithm that computes the abstraction. Index Terms—Stochastic Hybrid Systems, Markov Chains.
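As a rough illustration of the kind of partition-based construction this abstract describes, the sketch below abstracts a hypothetical scalar system x⁺ = a·x + w, w ~ N(0, σ²), on [0, 1] into a finite Markov chain by gridding the interval and evaluating successor probabilities from each cell's centre. This is a simplified toy example, not the paper's actual procedure; the dynamics and the absorbing "out" state are assumptions made for the sketch.

```python
import math

def norm_cdf(x, mu, sigma):
    # Gaussian CDF evaluated via the error function (no SciPy needed).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def abstract_chain(a, sigma, n):
    """Partition [0, 1] into n uniform cells and approximate the dynamics
    x+ = a*x + w, w ~ N(0, sigma^2), by a finite Markov chain: the
    transition probability from cell i to cell j is the probability that
    the successor of cell i's centre lands in cell j."""
    h = 1.0 / n
    P = []
    for i in range(n):
        c = (i + 0.5) * h           # representative point of cell i
        mu = a * c                  # mean of the successor state
        row = [norm_cdf((j + 1) * h, mu, sigma) - norm_cdf(j * h, mu, sigma)
               for j in range(n)]
        # Probability mass escaping [0, 1] is lumped into an extra
        # absorbing "out" state, so every row still sums to one.
        row.append(1.0 - sum(row))
        P.append(row)
    return P

P = abstract_chain(a=0.8, sigma=0.1, n=10)
```

Refining the grid (larger n) shrinks the distance between each cell and its representative point, which is the tunable parameter the abstract refers to.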
Approximation Metrics based on Probabilistic Bisimulations for General State-Space Markov Processes: a Survey
 HAS 2011
, 2011
Cited by 3 (0 self)
This article provides a survey of approximation metrics for stochastic processes. We deal with Markovian processes in discrete time evolving on general state spaces, namely on domains with infinite cardinality and endowed with proper measurability and metric structures. The focus of this work is to discuss approximation metrics between two such processes, based on the notion of probabilistic bisimulation: in particular we investigate metrics characterized by an approximate variant of this notion. We suggest that metrics between two processes can be introduced essentially in two distinct ways: the first employs the probabilistic conditional kernels underlying the two stochastic processes under study, and leverages notions derived from algebra, logic, or category theory; whereas the second looks at distances between trajectories of the two processes, and is based on the dynamical properties of the two processes (either their syntax, via the notion of bisimulation function; or their semantics, via sampling techniques). The survey moreover covers the problem of constructing formal approximations of stochastic processes according to the introduced metrics.
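The first, kernel-based route the abstract mentions can be illustrated in its simplest finite-state form: compare two Markov chains by the largest total-variation distance between corresponding rows of their transition matrices. This toy metric is an assumption of the sketch, a minimal instance of the general kernel-based metrics surveyed, not a construction taken from the survey itself.

```python
def kernel_distance(P, Q):
    """One-step distance between two finite-state Markov chains on the
    same state space: the largest total-variation distance between their
    conditional transition kernels (matrix rows)."""
    assert len(P) == len(Q)
    return max(0.5 * sum(abs(p - q) for p, q in zip(Pi, Qi))
               for Pi, Qi in zip(P, Q))

P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[0.7, 0.3], [0.2, 0.8]]
# TV distance of the first rows is 0.2, of the second rows 0.0,
# so the kernel distance is 0.2.
d = kernel_distance(P, Q)
```

Trajectory-based metrics, by contrast, would compare distributions over whole runs rather than one-step kernels.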
Probabilistically safe control of noisy Dubins vehicles.
, 2012
Cited by 3 (2 self)
Abstract—We address the problem of controlling a stochastic version of a Dubins vehicle such that the probability of satisfying a temporal logic specification over a set of properties at the regions in a partitioned environment is maximized. We assume that the vehicle can determine its precise initial position in a known map of the environment. However, inspired by practical limitations, we assume that the vehicle is equipped with noisy actuators and, during its motion in the environment, it can only measure its angular velocity using a limited accuracy gyroscope. Through quantization and discretization, we construct a finite approximation for the motion of the vehicle in the form of a Markov Decision Process (MDP). We allow for task specifications given as temporal logic statements over the environmental properties, and use tools in Probabilistic Computation Tree Logic (PCTL) to generate an MDP control policy that maximizes the probability of satisfaction. We translate this policy to a vehicle feedback control strategy and show that the probability that the vehicle satisfies the specification in the original environment is bounded from below by the maximum probability of satisfying the specification on the MDP.
Control of noisy differential-drive vehicles from time-bounded temporal logic specifications
 In ICRA
, 2013
Cited by 2 (0 self)
Abstract—We address the problem of controlling a noisy differential-drive mobile robot such that the probability of satisfying a specification given as a Bounded Linear Temporal Logic (BLTL) formula over a set of properties at the regions in the environment is maximized. We assume that the vehicle can determine its precise initial position in a known map of the environment. However, inspired by practical limitations, we assume that the vehicle is equipped with noisy actuators and, during its motion in the environment, it can only measure the angular velocity of its wheels using limited accuracy incremental encoders. Assuming the duration of the motion is finite, we map the measurements to a Markov Decision Process (MDP). We use recent results in Statistical Model Checking (SMC) to obtain an MDP control policy that maximizes the probability of satisfaction. We translate this policy to a vehicle feedback control strategy and show that the probability that the vehicle satisfies the specification in the environment is bounded from below by the probability of satisfying the specification on the MDP. We illustrate our method with simulations and experimental results.
Symbolic control of stochastic systems via approximately bisimilar finite abstractions. arXiv
Cited by 2 (1 self)
Abstract. Symbolic approaches to the control design over complex systems employ the construction of finite-state models that are related to the original control systems, then use techniques from finite-state synthesis to compute controllers satisfying specifications given in a temporal logic, and finally translate the synthesized schemes back as controllers for the concrete complex systems. Such approaches have been successfully developed and implemented for the synthesis of controllers over non-probabilistic control systems. In this paper, we extend the technique to probabilistic control systems modeled by controlled stochastic differential equations. We show that for every stochastic control system satisfying a probabilistic variant of incremental input-to-state stability, and for every given precision ε > 0, a finite-state transition system can be constructed, which is ε-approximately bisimilar (in the sense of moments) to the original stochastic control system. Moreover, we provide results relating stochastic control systems to their corresponding finite-state transition systems in terms of probabilistic bisimulation relations known in the literature. We demonstrate the effectiveness of the construction by synthesizing controllers for stochastic control systems over rich specifications expressed in linear temporal logic. The discussed technique enables a new, automated, correct-by-construction controller synthesis approach for stochastic control systems, which are common mathematical models employed in many safety-critical systems subject to structured uncertainty and are thus relevant for cyber-physical applications.
Backstepping controller synthesis and characterizations of incremental stability
 Systems & Control Letters
Cited by 1 (0 self)
Abstract. Incremental stability is a property of dynamical and control systems, requiring the uniform asymptotic stability of every trajectory, rather than that of an equilibrium point or a particular time-varying trajectory. Similarly to stability, Lyapunov functions and contraction metrics play important roles in the study of incremental stability. In this paper, we provide characterizations and descriptions of incremental stability in terms of existence of coordinate-invariant notions of incremental Lyapunov functions and contraction metrics, respectively. Most design techniques providing controllers rendering control systems incrementally stable have two main drawbacks: they can only be applied to control systems in either parametric-strict-feedback or strict-feedback form, and they require these control systems to be smooth. In this paper, we propose a design technique that is applicable to larger classes of (not necessarily smooth) control systems. Moreover, we propose a recursive way of constructing contraction metrics (for smooth control systems) and incremental Lyapunov functions, which have been identified as a key tool enabling the construction of finite abstractions of nonlinear control systems, the approximation of stochastic hybrid systems, source-code model checking for nonlinear dynamical systems, and so on.  The effectiveness of the proposed results in this paper is illustrated by synthesizing a controller rendering a non-smooth control system incrementally stable as well as constructing its finite abstraction, using the computed incremental Lyapunov function.
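The defining property above, that every pair of trajectories converges to each other, is easy to observe numerically on a toy system. The sketch below simulates the hypothetical discrete-time system x⁺ = a·x + u (an assumption of this illustration, not an example from the paper) from two different initial conditions under the same input, and checks that the gap contracts; V(x, y) = |x − y| acts as an incremental Lyapunov function here.

```python
def simulate(a, x0, inputs):
    # Discrete-time scalar system x+ = a*x + u.
    xs = [x0]
    for u in inputs:
        xs.append(a * xs[-1] + u)
    return xs

# Two trajectories under the SAME input sequence: for |a| < 1 their
# distance contracts by the factor a at every step, regardless of the
# input, which is the incremental stability property in miniature.
u = [0.5, -0.3, 0.2, 0.1, -0.4]
xa = simulate(0.8, 1.0, u)
xb = simulate(0.8, -1.0, u)
gaps = [abs(p - q) for p, q in zip(xa, xb)]
```

For this linear system the gap obeys gap(k+1) = |a|·gap(k) exactly; the paper's contribution is constructing such certificates for much larger, possibly non-smooth, classes of systems.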
Performance assessment and design of abstracted models for stochastic hybrid systems through a randomized approach
, 2014
[17] G. Conte, C. H. Moog, and A. M. Perdon, Algebraic Methods for Nonlinear Control Systems. Theory and Applications, 2nd ed. New York:
, 2008
"... algebraic formalism of differential one-forms for nonlinear control systems on time scales," Proc. Estonian Acad. Sci. Phys. Math., vol. 56, no. 3, pp. 264–282, 2007.
Approximate Markovian Abstractions for Linear Stochastic Systems
Abstract—In this paper, we present a method to generate a finite Markovian abstraction for a discrete-time linear stochastic system evolving in a full-dimensional polytope. Our approach involves an adaptation of an existing approximate abstraction procedure combined with a bisimulation-like refinement algorithm. It proceeds by approximating the transition probabilities from one region to another by calculating the probability from a single representative point in the first region. We derive the exact bound of the approximation error and an explicit expression for its growth over time. To achieve a desired error value, we employ an adaptive refinement algorithm that takes advantage of the dynamics of the system. We demonstrate the performance of our method through simulations.
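The adaptive refinement loop described in this abstract can be caricatured in one dimension. In the sketch below the per-cell error bound is assumed to be L·width for a hypothetical Lipschitz constant L (a stand-in for the paper's exact error bound), and the worst cell is bisected until every bound drops below the desired ε.

```python
import heapq

def refine(lipschitz, eps, lo=0.0, hi=1.0):
    """Bisimulation-like refinement sketch (hypothetical 1-D variant):
    assume the abstraction error of a cell is bounded by L * width, and
    repeatedly bisect the worst cell until every bound is below eps."""
    # Max-heap on the error bound, implemented with negated keys.
    heap = [(-lipschitz * (hi - lo), lo, hi)]
    while -heap[0][0] > eps:
        _, a, b = heapq.heappop(heap)
        m = 0.5 * (a + b)
        for cell in ((a, m), (m, b)):
            heapq.heappush(heap, (-lipschitz * (cell[1] - cell[0]), *cell))
    return sorted((a, b) for _, a, b in heap)

cells = refine(lipschitz=2.0, eps=0.3)
```

A dynamics-aware variant, as in the paper, would refine only where the error bound actually binds instead of splitting uniformly by width.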