Results 1 - 6 of 6
Minimizing Nasty Surprises with Better Informed Decision-Making in Self-Adaptive Systems
Cited by 3 (2 self)
Abstract—Designers of self-adaptive systems often formulate adaptive design decisions, making unrealistic or myopic assumptions about the system’s requirements and environment. The decisions taken during this formulation are crucial for satisfying requirements. In environments which are characterized by uncertainty and dynamism, deviation from these assumptions is the norm and may trigger “surprises”. Our method allows designers to make explicit links between the possible emergence of surprises, risks and design trade-offs. The method can be used to explore the design decisions for self-adaptive systems and choose among decisions that better fulfil (or rather partially fulfil) non-functional requirements and address their trade-offs. The analysis can also provide designers with valuable input for refining the adaptation decisions to balance, for example, resilience (i.e. satisfiability of non-functional requirements and their trade-offs) and stability (i.e. minimizing the frequency of adaptation). The objective is to provide designers of self-adaptive systems with a basis for multi-dimensional what-if analysis to revise and improve the understanding of the environment and its effect on non-functional requirements and thereafter decision-making. We have applied the method to a wireless sensor network for flood prediction. The application shows that the method gives rise to questions that were not explicitly asked before at design-time and assists designers in the process of risk-aware, what-if and trade-off analysis.
A world full of surprises: Bayesian theory of surprise to quantify degrees of uncertainty
- in Companion Proceedings of the 36th International Conference on Software Engineering, ser. ICSE Companion 2014
Cited by 3 (3 self)
In the specific area of software engineering (SE) for self-adaptive systems (SASs) there is a growing research awareness about the synergy between SE and artificial intelligence (AI). However, only a few significant results have been published so far. In this paper, we propose a novel and formal Bayesian definition of surprise as the basis for quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behavior. A surprise measures how observed data affects the models or assumptions of the world during runtime. The key idea is that a “surprising” event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. In this paper, we discuss possible applications of Bayesian theory of surprise for the case of self-adaptive systems using Bayesian dynamic decision networks.
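The key idea in this abstract can be sketched concretely: perform a Bayesian update for each observation and score surprise as the KL divergence between the posterior and prior beliefs. The sketch below is an invented illustration (a discrete belief over a hypothetical packet-drop rate, with a made-up adaptation threshold), not the paper's implementation.

```python
import math

# Discrete belief over a hypothetical parameter: the probability that a
# monitored link drops a packet. Hypotheses and numbers are illustrative.
HYPOTHESES = [0.05, 0.25, 0.50, 0.75]

def update(belief, dropped):
    """One Bayesian update for a single Bernoulli observation."""
    like = {h: (h if dropped else 1.0 - h) for h in belief}
    unnorm = {h: belief[h] * like[h] for h in belief}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def surprise(posterior, prior):
    """KL(posterior || prior) in bits: how far one observation moved the belief."""
    return sum(q * math.log2(q / prior[h]) for h, q in posterior.items() if q > 0)

belief = {h: 0.25 for h in HYPOTHESES}          # uniform prior belief
for obs in [True, True, True]:                  # three dropped packets in a row
    posterior = update(belief, obs)
    s = surprise(posterior, belief)
    if s > 0.3:                                 # hypothetical adaptation threshold
        print(f"surprising event (KL = {s:.2f} bits) -> consider adapting")
    belief = posterior
```

Note that the first drop is the most surprising: once the belief has shifted toward a lossy link, further drops move it much less, so repeated confirmation of the new model no longer triggers adaptation.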
Living with Uncertainty in the Age of Runtime Models
- in Models@run.time
Cited by 2 (2 self)
Abstract. Uncertainty can be defined as the difference between information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its lifetime. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders’ needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design-time with information monitored at runtime. This chapter explores the role of runtime models as a means to cope with uncertainty. To this end, we introduce a well-suited terminology about models, runtime models and uncertainty and present a state-of-the-art summary on model-based techniques for addressing uncertainty both at development- and runtime. Using a case study about robot systems we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges and enabling technologies for using runtime models to address uncertainty, and also identifies closely related research communities that can foster ideas for resolving the challenges raised. (Holger Giese, Nelly Bencomo, Liliana Pasquale, et al.)
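The MAPE-K loop the abstract refers to can be sketched minimally: a runtime model serves as the shared Knowledge that Monitor updates and Analyze/Plan/Execute consult. Every name here (the latency metric, the threshold, the adaptation action) is an assumed illustration, not the chapter's architecture.

```python
# Minimal MAPE-K sketch: a runtime model (Knowledge) shared by the four
# loop activities. Uncertainty appears as the gap between the design-time
# assumption and what is actually measured at runtime.

class RuntimeModel:
    """Knowledge base: a design-time assumption plus runtime observations."""
    def __init__(self, assumed_latency_ms=50.0):
        self.assumed_latency_ms = assumed_latency_ms   # fixed at design time
        self.observed_latency_ms = assumed_latency_ms  # updated by monitor()

def monitor(model, sample_ms):
    model.observed_latency_ms = sample_ms              # augment model at runtime

def analyze(model):
    # Flag when observation deviates far from the design-time assumption.
    return model.observed_latency_ms > 2 * model.assumed_latency_ms

def plan(model):
    return "switch_to_degraded_mode"                   # hypothetical adaptation

def execute(action):
    return f"executed: {action}"

model = RuntimeModel()
for sample in [48.0, 55.0, 130.0]:                     # last sample violates assumption
    monitor(model, sample)
    if analyze(model):
        print(execute(plan(model)))                    # fires only for 130.0
```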
Sri Krishnadevaraya University,
Defect prediction and assessment are essential steps in large organizations and industries where software complexity grows exponentially. A large number of software metrics have been discovered and used for defect prediction in the literature. Bayesian networks are applied to find the probabilistic relationships among software metrics in different phases of the software life cycle. Defects in a software project reduce quality and drive up the overall cost of defect correction. A Bayesian network model is used to predict defect correction at various levels of software development. The model reveals the high-potential software efforts and metrics needed to minimize the overall cost to the organization, providing decision support.
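As one concrete reading of "probabilistic relationships among software metrics", here is a deliberately tiny Bayesian-network sketch: a single metric node influencing a single defect node, with marginal and diagnostic inference by enumeration. The structure and all probabilities are invented for illustration, not taken from the paper.

```python
# Two-node Bayesian network: HighComplexity -> Defective.
# Numbers below are hypothetical conditional probabilities.
P_HIGH_COMPLEXITY = 0.3                      # P(HighComplexity = true)
P_DEFECT = {True: 0.6, False: 0.1}           # P(Defective | HighComplexity)

def p_defective():
    """Marginal P(Defective) by summing over the parent node's values."""
    return (P_HIGH_COMPLEXITY * P_DEFECT[True]
            + (1 - P_HIGH_COMPLEXITY) * P_DEFECT[False])

def p_high_complexity_given_defective():
    """Diagnostic inference via Bayes' rule: P(HighComplexity | Defective)."""
    return P_HIGH_COMPLEXITY * P_DEFECT[True] / p_defective()

print(round(p_defective(), 3))                        # 0.25
print(round(p_high_complexity_given_defective(), 3))  # 0.72
```

Real defect-prediction networks would have many metric nodes and learned probability tables; the point here is only the inference pattern (prediction downward, diagnosis upward through Bayes' rule).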
Uncertainty Handling in Goal-Driven Self-Optimization – Limiting the Negative Effect on Adaptation
General
This paper presents a brief outline of an approach to online genetic improvement. We argue that existing progress in genetic improvement can be exploited to support adaptivity. We illustrate our proposed approach with a ‘dreaming smart device’ example that combines online and offline machine learning and optimisation.