Software Engineering Meets Control Theory
Abstract

Cited by 4 (4 self)
The software engineering community has proposed numerous approaches for making software self-adaptive. These approaches take inspiration from machine learning and control theory, constructing software that monitors and modifies its own behavior to meet goals. Control theory, in particular, has received considerable attention as it represents a general methodology for creating adaptive systems. Control-theoretical software implementations, however, tend to be ad hoc. While such solutions often work in practice, it is difficult to understand and reason about the desired properties and behavior of the resulting adaptive software and its controller. This paper discusses a control design process for software systems which enables automatic analysis and synthesis of a controller that is guaranteed to have the desired properties and behavior. The paper documents the process and illustrates its use in an example that walks through all necessary steps for self-adaptive controller synthesis.
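As background for the control-theoretical approach this abstract describes, the following is a minimal sketch of a discrete-time PI controller regulating a software-level knob toward a performance setpoint. The gains, the knob, and the toy first-order plant are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (illustrative assumptions throughout): a discrete-time PI
# controller steering a software knob, e.g. a worker-pool size, toward a
# performance setpoint. The plant below is a toy first-order lag, not any
# model from the paper.

def pi_controller(setpoint, gains=(0.5, 0.1)):
    """Return a stateful controller mapping a measurement to a control signal."""
    kp, ki = gains
    integral = 0.0

    def step(measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error                  # accumulate error for the I term
        return kp * error + ki * integral  # control signal u(k)

    return step

def simulate(steps=50, setpoint=10.0):
    """Close the loop around a toy plant: y(k+1) = y(k) + 0.3*(u(k) - y(k))."""
    control = pi_controller(setpoint)
    y = 0.0
    for _ in range(steps):
        u = control(y)
        y += 0.3 * (u - y)
    return y
```

The integral term is what removes steady-state error here: a pure proportional controller would settle below the setpoint for this plant, while the accumulated error keeps pushing until the measurement matches the goal.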
Saturated Path-Constrained MDP: Planning under Uncertainty and Deterministic Model-Checking Constraints
In Proc. of 28th AAAI Conf. on Artificial Intelligence (AAAI), 2014
Abstract

Cited by 1 (0 self)
In many probabilistic planning scenarios, a system's behavior needs to not only maximize the expected utility but also obey certain restrictions. This paper presents Saturated Path-Constrained Markov Decision Processes (SPC MDPs), a new MDP type for planning under uncertainty with deterministic model-checking constraints, e.g., "state s must be visited before s′", "the system must end up in s", or "the system must never enter s". We present a mathematical analysis of SPC MDPs, showing that although SPC MDPs generally have no optimal policies, every instance of this class has an ε-optimal randomized policy for any ε > 0. We propose a dynamic programming-based algorithm for finding such policies, and empirically demonstrate this algorithm to be orders of magnitude faster than its next-best alternative.
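The dynamic-programming machinery such algorithms build on can be illustrated with standard value iteration for a small finite MDP. This is background only: the toy model is made up, and the paper's SPC-specific construction (handling the path constraints and randomized policies) is more involved.

```python
# Background sketch: plain value iteration for a small finite MDP. This is the
# generic dynamic-programming core, not the paper's SPC MDP algorithm; the
# model encoding below is an illustrative assumption.

def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-8):
    """P[s][a] = list of (next_state, prob); R[s][a] = immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best one-step reward plus discounted future value.
            best = max(
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

For example, with a state 0 where action "a" earns reward 1 and stays put while "b" moves to an absorbing zero-reward state 1, the fixpoint gives V(0) = 1/(1 − γ) = 10 at γ = 0.9.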
Proactive Self-Adaptation under Uncertainty: a Probabilistic Model Checking Approach
Abstract
Self-adaptive systems tend to be reactive and myopic, adapting in response to changes without anticipating what the subsequent adaptation needs will be. Adapting reactively can result in inefficiencies due to the system performing a suboptimal sequence of adaptations. Furthermore, when adaptations have latency, and take some time to produce their effect, they have to be started with sufficient lead time so that they complete by the time their effect is needed. Proactive latency-aware adaptation addresses these issues by making adaptation decisions with a lookahead horizon and taking adaptation latency into account. In this paper we present an approach for proactive latency-aware adaptation under uncertainty that uses probabilistic model checking for adaptation decisions. The key idea is to use a formal model of the adaptive system in which the adaptation decision is left underspecified through nondeterminism, and have the model checker resolve the nondeterministic choices so that the accumulated utility over the horizon is maximized. The adaptation decision is optimal over the horizon, and takes into account the inherent uncertainty of the environment predictions needed for looking ahead. Our results show that the decision based on a lookahead horizon, and the factoring of both tactic latency and environment uncertainty, considerably improve the effectiveness of adaptation decisions.
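The interaction between tactic latency and the lookahead horizon can be sketched with a deterministic toy: choosing when to start a slow "add capacity" tactic so that accumulated utility over the horizon is maximized. Everything here (the utility model, the latency constant, the brute-force search) is an illustrative assumption; the paper's approach instead lets a probabilistic model checker resolve the choice against a stochastic environment model.

```python
# Illustrative sketch of proactive latency-aware lookahead. The forecast is
# deterministic and the search is brute force; in the paper's approach the
# nondeterministic choice is resolved by a probabilistic model checker.

from itertools import product

LATENCY = 2  # steps between starting an adaptation and it taking effect

def utility(capacity, demand):
    # Reward served demand, charge a per-unit cost for provisioned capacity.
    return min(capacity, demand) - 0.2 * capacity

def best_plan(forecast, capacity=1):
    """Enumerate start times for 'add one server' over the horizon."""
    horizon = len(forecast)
    best = None
    for plan in product([0, 1], repeat=horizon):  # 1 = start the tactic now
        cap, total = capacity, 0.0
        for t in range(horizon):
            if t >= LATENCY:
                cap += plan[t - LATENCY]  # a tactic started L steps ago lands
            total += utility(cap, forecast[t])
        if best is None or total > best[0]:
            best = (total, plan)
    return best
```

With forecast [1, 1, 3, 3, 3] the best plan starts two additions immediately, so that capacity reaches the demand spike exactly when it arrives, which is the lead-time effect the abstract describes.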
Probabilistic Model Checking at Runtime for the Provisioning of Cloud Resources
Abstract
We elaborate on the ingredients of a model-driven approach for the dynamic provisioning of cloud resources in an autonomic manner. Our solution has been experimentally evaluated using a NoSQL database cluster running on a cloud infrastructure. In contrast to other techniques, which work on a best-effort basis, we can provide probabilistic guarantees for the provision of sufficient resources. Our approach is based on the probabilistic model checking of Markov Decision Processes (MDPs) at runtime. We present: (i) the specification of an appropriate MDP model for the provisioning of cloud resources, (ii) the generation of a parametric model with system-specific parameters, (iii) the dynamic instantiation of MDPs at runtime based on logged and current measurements, and (iv) their verification using the PRISM model checker for the provisioning/deprovisioning of cloud resources to meet the set goals.
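Step (iii), instantiating a parametric model from logged measurements, can be sketched with simple frequency counts over observed workload transitions; the resulting probabilities would then be substituted into the MDP the model checker verifies. The load-level encoding below is an illustrative assumption, not the paper's actual parameterization.

```python
# Illustrative sketch of runtime model instantiation: estimate transition
# probabilities of a workload model by frequency counts over a log of
# observed load levels. The encoding is an assumption, not the paper's.

from collections import Counter

def estimate_transitions(log):
    """log: sequence of observed load levels; returns P[s][s'] estimates."""
    pairs = Counter(zip(log, log[1:]))   # count observed (s, s') transitions
    totals = Counter(log[:-1])           # count how often each s was left
    return {
        s: {t: pairs[(s, t)] / totals[s] for t in set(log) if pairs[(s, t)]}
        for s in totals
    }
```

For instance, the log ["low", "low", "high", "high", "low"] yields P(low→high) = 0.5, since the system left "low" twice and moved to "high" once.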
Controller Synthesis for Autonomous Systems Interacting with Human Operators
Abstract
We propose an approach to synthesize control protocols for autonomous systems that account for uncertainties and imperfections in interactions with human operators. As an illustrative example, we consider a scenario involving road network surveillance by an unmanned aerial vehicle (UAV) that is controlled remotely by a human operator but also has a certain degree of autonomy. Depending on the type (i.e., probabilistic and/or nondeterministic) of knowledge about the uncertainties and imperfections in the operator-autonomy interactions, we use abstractions based on Markov decision processes and augment these models to stochastic two-player games. Our approach enables the synthesis of operator-dependent optimal mission plans for the UAV, highlighting the effects of operator characteristics (e.g., workload, proficiency, and fatigue) on UAV mission performance; it can also provide informative feedback (e.g., Pareto curves showing the trade-offs between multiple mission objectives), potentially assisting the operator in decision-making.
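The max-min backup behind such two-player games can be sketched in a turn-based simplification (an assumption; the paper's stochastic games are richer): the autonomy maximizes value at its states, while uncertain operator behavior is resolved adversarially at the operator's states.

```python
# Illustrative sketch of value iteration for a turn-based stochastic
# two-player game: player 1 (the autonomy) maximizes, player 2 (adversarial
# operator behavior) minimizes. All model details below are assumptions.

def game_value_iteration(states, P, R, player, gamma=0.95, eps=1e-8):
    """P[s][a] = list of (next_state, prob); player[s] in {1, 2}."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            vals = [
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in P[s]
            ]
            # max at controller states, min at adversary states
            new = max(vals) if player[s] == 1 else min(vals)
            delta = max(delta, abs(new - V[s]))
            V[s] = new
        if delta < eps:
            return V
```

The only change from plain MDP value iteration is the min at adversary-owned states, which is what makes the synthesized plan robust to the worst resolution of operator uncertainty.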
Robust Strategy Synthesis for Probabilistic Systems Applied to Risk-Limiting Renewable-Energy Pricing
Abstract
We address the problem of synthesizing control strategies for Ellipsoidal Markov Decision Processes (EMDPs), i.e., MDPs whose transition probabilities are expressed using ellipsoidal uncertainty sets. The synthesized strategy aims to maximize the total expected reward of the EMDP, constrained to a specification expressed in Probabilistic Computation Tree Logic (PCTL). We prove that the EMDP strategy synthesis problem for the fragment of PCTL disabling operators with a finite time bound is NP-complete and propose a novel sound and complete algorithm to solve it. We apply these results to the problem of synthesizing optimal energy pricing and dispatch strategies in smart grids that integrate renewable sources of energy. We use rewards to maximize the profit of the network operator and a PCTL specification to constrain the risk of power unbalance and guarantee quality-of-service for the users. The EMDP model used to represent the decision-making scenario was trained with measured data and quantitatively captures the uncertainty in the prediction of energy generation. An experimental comparison shows the effectiveness of our method with respect to previous approaches presented in the literature.
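The robust flavor of such synthesis can be sketched with a simpler box-shaped (interval) uncertainty set standing in for the paper's ellipsoids, purely as an assumption for illustration: the adversary shifts probability mass toward low-value successors subject to per-successor bounds, which is the worst-case resolution inside one Bellman backup.

```python
# Illustrative sketch only: worst-case transition probabilities under
# interval (box) uncertainty, a simpler stand-in for the paper's ellipsoidal
# sets. Greedy allocation: start from the lower bounds, then spend the
# remaining probability mass on the lowest-value successors first.

def worst_case_distribution(successors, lower, upper, V):
    """lower/upper: per-successor probability bounds; V: successor values."""
    dist = dict(lower)
    budget = 1.0 - sum(lower.values())   # mass left to place adversarially
    for s in sorted(successors, key=lambda s: V[s]):  # cheapest states first
        add = min(upper[s] - dist[s], budget)
        dist[s] += add
        budget -= add
    return dist
```

For two successors with bounds [0.2, 0.9] each, where one has value 0 and the other value 1, the adversary pushes all free mass onto the zero-value successor, giving (0.8, 0.2).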
Optimal Motion Planning for Markov Decision Processes with Co-Safe Linear Temporal Logic Specifications
Abstract
We present preliminary work on the application of probabilistic model checking to motion planning for robot systems, using specifications in co-safe linear temporal logic. We describe our approach, implemented with the probabilistic model checker PRISM, illustrate it with a simple simulated example, and discuss further extensions and improvements.
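The standard reduction behind co-safe LTL planning is a product construction: compose the MDP with a finite automaton for the formula, so that satisfying the specification becomes reaching the automaton's accepting states in the product. The sketch below uses a hand-built DFA for "eventually a, then eventually b" and assumes this generic construction; the paper's PRISM-based implementation may differ in detail.

```python
# Illustrative sketch of the MDP x DFA product used for co-safe LTL planning.
# The DFA (for "eventually a, then eventually b") and the state-label
# convention (the DFA reads the label of the state being entered) are
# assumptions for illustration.

DFA = {  # dfa_state -> {label: next_dfa_state}; state 2 is accepting
    0: {"a": 1, "b": 0, "-": 0},
    1: {"a": 1, "b": 2, "-": 1},
    2: {"a": 2, "b": 2, "-": 2},
}

def product_mdp(mdp_transitions, labels, dfa=DFA, accepting={2}):
    """mdp_transitions[s][act] = [(s', p)]; labels[s] is s's atomic label."""
    prod, goals = {}, set()
    for s, acts in mdp_transitions.items():
        for q in dfa:
            prod[(s, q)] = {
                # The DFA advances on the label of the successor MDP state.
                a: [((t, dfa[q][labels[t]]), p) for t, p in succ]
                for a, succ in acts.items()
            }
            if q in accepting:
                goals.add((s, q))
    return prod, goals
```

Maximizing the probability of satisfying the formula then reduces to a plain maximal-reachability computation over `goals` in the product, which is exactly what a probabilistic model checker solves.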