Results 1 – 8 of 8
Learning I/O Automata
Abstract

Cited by 13 (6 self)
Links are established between three widely used modeling frameworks for reactive systems: the ioco theory of Tretmans, the interface automata of De Alfaro and Henzinger, and Mealy machines. It is shown that, by exploiting these links, any tool for active learning of Mealy machines can be used for learning I/O automata that are deterministic and output-determined. The main idea is to place a transducer between the I/O automata teacher and the Mealy machine learner, which translates concepts from the world of I/O automata to the world of Mealy machines, and vice versa. The transducer comes equipped with an interface automaton that allows us to focus the learning process on those parts of the behavior that can effectively be tested and/or are of particular interest. The approach has been implemented on top of the LearnLib tool and has been applied successfully to three case studies.
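The transducer idea can be sketched in a few lines. In this hypothetical sketch, a deterministic, output-determined I/O automaton teacher is wrapped so that a Mealy-machine learner can pose membership queries against it; the `sut_reset`/`sut_step` interface and the quiescence symbol are illustrative assumptions, not LearnLib's actual API.

```python
QUIESCENCE = "delta"  # Mealy output standing for "no output observed"

class Transducer:
    """Sits between an I/O-automaton teacher and a Mealy-machine learner."""

    def __init__(self, sut_reset, sut_step):
        self.sut_reset = sut_reset   # () -> fresh SUT state
        self.sut_step = sut_step     # (state, input) -> (state, output list)

    def mealy_output(self, word):
        """Answer a Mealy membership query: the output produced by the
        last input symbol of `word` (or quiescence if there is none)."""
        state = self.sut_reset()
        out = QUIESCENCE
        for sym in word:
            state, outputs = self.sut_step(state, sym)
            # output-determined: at most one output follows each input
            out = outputs[0] if outputs else QUIESCENCE
        return out

# Toy I/O automaton: input "a" triggers output "A"; input "b" is silent.
echo = Transducer(lambda: 0,
                  lambda state, sym: (0, ["A"] if sym == "a" else []))
```

With this wrapper, `echo.mealy_output(["b", "a"])` yields `"A"` while `echo.mealy_output(["a", "b"])` yields the quiescence symbol, so a standard Mealy learner always sees a total output function.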
Angluin-Style Learning of NFA
Abstract

Cited by 11 (2 self)
We introduce NL∗, a learning algorithm for inferring nondeterministic finite-state automata using membership and equivalence queries. More specifically, residual finite-state automata (RFSA) are learned similarly as in Angluin’s popular L∗ algorithm, which, however, learns deterministic finite-state automata (DFA). As in a DFA, the states of an RFSA represent residual languages. Unlike a DFA, an RFSA is restricted to prime residual languages, which cannot be described as the union of other residual languages. In doing so, RFSA can be exponentially more succinct than DFA. They are, therefore, the preferable choice for many learning applications. The implementation of our algorithm is applied to a collection of examples and confirms the expected advantage of NL∗ over L∗.
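The prime-residual restriction at the heart of NL∗ can be illustrated on finite sets: treating each residual language as the set of suffixes it accepts (a row of the observation table), a row is prime iff it is not the union of the strictly smaller rows it covers. A minimal sketch of that check, with rows represented as plain Python sets (an illustration of the notion, not the full NL∗ table machinery):

```python
def prime_rows(rows):
    """Keep only the prime rows: a row (set of accepted suffixes) is
    composed if it equals the union of the rows strictly contained in
    it, and prime otherwise. NL* builds its RFSA from prime rows only."""
    primes = []
    for r in rows:
        covered = [s for s in rows if s < r]           # strict subsets of r
        union = set().union(*covered) if covered else set()
        if r != union:                                 # not a union of others
            primes.append(r)
    return primes

# {"a", "b"} is the union of {"a"} and {"b"}, hence composed:
rows = [{"a"}, {"b"}, {"a", "b"}, {"a", "c"}]
```

Here `prime_rows(rows)` drops `{"a", "b"}` but keeps `{"a", "c"}`, since no combination of the other rows covers the suffix `"c"`.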
A Fresh Approach to Learning Register Automata
, 2013
Abstract

Cited by 5 (1 self)
This paper provides an Angluin-style learning algorithm for a class of register automata supporting the notion of fresh data values. More specifically, we introduce session automata, which are well suited for modeling protocols in which sessions using fresh values are of major interest, as in security protocols or ad hoc networks. We show that session automata (i) have an expressiveness partly extending, partly reducing that of register automata, (ii) admit a symbolic regular representation, and (iii) have a decidable equivalence and model-checking problem (unlike register automata). Using these results, we establish a learning algorithm to infer session automata through membership and equivalence queries. Finally, we strengthen the robustness of our automaton model by characterizing it in monadic second-order logic.
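As an illustration of the freshness discipline (a hand-rolled acceptor, not the paper's construction), a language of data words in which every session is opened with a globally fresh value and closed with a currently open one might be checked like this:

```python
def accepts(word):
    """Accept a data word of (action, value) pairs where each "open"
    carries a fresh value (never used before) and each "close" refers
    to a currently open session; all sessions must be closed at the end."""
    seen, open_sessions = set(), set()
    for action, value in word:
        if action == "open":
            if value in seen:               # freshness violated: value reused
                return False
            seen.add(value)
            open_sessions.add(value)
        elif action == "close":
            if value not in open_sessions:  # closing an unknown session
                return False
            open_sessions.discard(value)
        else:
            return False                    # unknown action
    return not open_sessions                # accept iff everything is closed
```

Because the data values range over an infinite domain, no finite automaton accepts this language directly; session automata capture it with registers while keeping equivalence decidable.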
Learning minimal deterministic automata from inexperienced teachers
, 2012
Learning Finite State Controllers from Simulation
Abstract
We propose a methodology to automatically generate agent controllers, represented as state machines, to act in partially observable environments. We define a multi-step process in which increasingly accurate models, generally too complex to be used for planning, are employed to generate possible traces of execution by simulation. Those traces are then utilized to induce a state machine that represents all reasonable behaviors, given the approximate models and planners previously used. The state machine will have multiple possible choices in some of its states. Those states are choice points, and we defer the learning of those choices to the deployment of the agent in the real environment. The controller obtained can therefore adapt to the actual environment, limiting the search space in a sensible way.
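The trace-to-controller step can be caricatured in a few lines: group the actions observed after each observation prefix; prefixes with a single action become fixed transitions, while those with several become the choice points deferred to deployment. A sketch under these simplifying assumptions (prefix-based states, no state merging):

```python
def induce_controller(traces):
    """Map each observation prefix to the set of actions the simulated
    planners took after it. Prefixes with more than one action are the
    controller's choice points."""
    choices = {}
    for trace in traces:            # trace: [(observation, action), ...]
        prefix = ()
        for obs, act in trace:
            prefix += (obs,)
            choices.setdefault(prefix, set()).add(act)
    return choices

# Two planners disagree after observing "low" battery; that state
# becomes a choice point resolved later, in the real environment.
traces = [[("low", "recharge")],
          [("low", "wait")],
          [("high", "work")]]
controller = induce_controller(traces)
choice_points = {p for p, acts in controller.items() if len(acts) > 1}
```

In this toy run only the prefix `("low",)` is a choice point; the `("high",)` state has a single action and needs no further learning.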
Learning Markov Decision Processes for Model Checking
Abstract
Constructing an accurate system model for formal model verification can be both resource-demanding and time-consuming. To alleviate this shortcoming, algorithms have been proposed for automatically learning system models based on observed system behaviors. In this paper we extend work on learning probabilistic automata to reactive systems, where the observed system behavior is in the form of alternating sequences of inputs and outputs. We propose an algorithm for automatically learning a deterministic labeled Markov decision process model from the observed behavior of a reactive system. The proposed learning algorithm is adapted from algorithms for learning deterministic probabilistic finite automata, and extended to include both probabilistic and nondeterministic transitions. The algorithm is empirically analyzed and evaluated by learning system models of slot machines. The evaluation is performed by analyzing the probabilistic linear temporal logic properties of the system as well as by analyzing the schedulers, in particular the optimal schedulers, induced by the learned models.
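As a first step of such a learner, the transition probabilities of the deterministic labeled MDP can be estimated by relative frequencies over the observed input/output sequences. A minimal sketch under simplifying assumptions (states identified with history prefixes; the Alergia-style state merging the paper builds on is omitted):

```python
from collections import defaultdict

def estimate_transitions(observations):
    """Estimate P(output | state, input) by relative frequency, where a
    state is identified with the input/output history leading to it."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in observations:                 # seq: [(input, output), ...]
        prefix = ()
        for inp, out in seq:
            counts[(prefix, inp)][out] += 1  # count output after (state, input)
            prefix += ((inp, out),)
    return {key: {o: c / sum(outs.values()) for o, c in outs.items()}
            for key, outs in counts.items()}

# Three plays of a slot machine: "spin" wins twice, loses once.
obs = [[("spin", "win")], [("spin", "lose")], [("spin", "win")]]
probs = estimate_transitions(obs)
```

From these three plays the initial state's estimate is P(win | spin) = 2/3; merging statistically compatible prefix states is what turns this prefix tree into a compact MDP.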