Results 1–10 of 12
Turing degrees and the Ershov hierarchy
in Proceedings of the Tenth Asian Logic Conference, Kobe, Japan, 16 September 2008, World Scientific
Abstract

Cited by 4 (0 self)
An n-r.e. set can be defined as the symmetric difference of n recursively enumerable sets. The classes of these sets form a natural hierarchy which became a well-studied topic in recursion theory. In a series of groundbreaking papers, Ershov generalized this hierarchy to transfinite levels based on Kleene’s notations for ordinals, and this work led to a fruitful study of these sets and their many-one and Turing degrees. The Ershov hierarchy is a natural measure of the complexity of the sets below the halting problem. In this paper, we survey the early work by Ershov and others on this hierarchy and present the most fundamental results. We also provide some pointers to concurrent work in the field.
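As an aside for readers new to the area, the opening definition can be illustrated with finite stand-ins for recursively enumerable sets. The sketch below is our own illustration, not code from the paper; the function name `symmetric_difference` is ours. It folds the symmetric-difference operator over n sets, so an element lands in the result exactly when it occurs in an odd number of them.

```python
from functools import reduce

def symmetric_difference(*sets):
    """Fold the symmetric-difference operator over any number of sets.

    An element belongs to the result iff it occurs in an odd number
    of the given sets -- the finite analogue of an n-r.e. set built
    from n recursively enumerable sets."""
    return reduce(lambda a, b: a ^ b, sets, set())

# Three finite sets standing in for r.e. sets (real r.e. sets are
# enumerated in stages; membership in the symmetric difference of n
# of them can change at most n times along the enumeration).
A = {0, 1, 2, 3}
B = {2, 3, 4}
C = {3, 4, 5}

D = symmetric_difference(A, B, C)  # elements in an odd number of A, B, C
```

Here 2 and 4 each occur in exactly two of the three sets, so they drop out, while 3 occurs in all three and stays in.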
Parsimony hierarchies for inductive inference
 Journal of Symbolic Logic
Abstract

Cited by 3 (1 self)
Freivalds defined an acceptable-programming-system-independent criterion for learning programs for functions in which the final programs are required to be both correct and “nearly” minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs limits learning power. However, in scientific inference, parsimony is considered highly desirable. A lim-computable function is (by definition) one calculable by a total procedure allowed to change its mind finitely many times about its output. Investigated is the possibility of assuaging somewhat the limitation on learning power resulting from requiring parsimonious final programs by use of criteria which require the final, correct programs to be “not-so-nearly” minimal size, e.g., to be within a lim-computable function of actual minimal size. It is shown that some parsimony in the final program is thereby retained, yet learning power strictly increases. Considered, then, are lim-computable functions as above but for which notations for constructive ordinals are used to bound the number of mind changes allowed regarding the output. This is a variant of an idea introduced by Freivalds and Smith. For this ordinal-notation-complexity-bounded version of lim-computability, the power of ...
On a generalized notion of mistake bounds
 Information and Computation
Abstract

Cited by 1 (1 self)
This paper proposes the use of constructive ordinals as mistake bounds in the online learning model. This approach elegantly generalizes the applicability of the online mistake-bound model to learnability analysis of very expressive concept classes like pattern languages, unions of pattern languages, elementary formal systems, and minimal models of logic programs. The main result in the paper shows that the topological property of effective finite bounded thickness is a sufficient condition for online learnability with a certain ordinal mistake bound. An interesting characterization of the online learning model is shown in terms of the identification-in-the-limit framework. It is established that the classes of languages learnable in the online model with a mistake bound of α are exactly the same as the classes of languages learnable in the limit from both positive and negative data by a Popperian, consistent learner with a mind change bound of α. This result nicely builds a bridge between the two models.
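The idea of an ordinal mistake bound can be made concrete for ordinals below ω². In the hypothetical sketch below (our own illustration, not code from the paper), the ordinal ω·a + b is stored as the pair (a, b); each mistake must strictly decrease it, and when the finite part runs out the learner may pick a fresh finite part, so only finitely many mistakes are possible even though no single natural number bounds them in advance.

```python
class OrdinalMistakeCounter:
    """Mistake counter for ordinals below omega^2, written omega*a + b.

    Every recorded mistake strictly decreases the ordinal, so the
    counter can be decremented only finitely often, yet when a > 0
    the total number of allowed mistakes is not fixed in advance."""

    def __init__(self, a, b):
        self.a, self.b = a, b

    def mistake(self, fresh_b=0):
        """Charge one mistake. If the finite part is exhausted, drop the
        omega-coefficient by one and restart the finite part at fresh_b,
        which the learner may choose as late as this very step."""
        if self.b > 0:
            self.b -= 1
        elif self.a > 0:
            self.a -= 1
            self.b = fresh_b
        else:
            raise RuntimeError("mistake bound exhausted")

    def value(self):
        return (self.a, self.b)
```

For example, a counter started at ω, i.e. the pair (1, 0), lets the learner pick any finite budget for the remaining mistakes at the moment of its first mistake, which is exactly how a bound of ω differs from every fixed natural-number bound.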
Learning How to Separate
Abstract

Cited by 1 (0 self)
The main question addressed in the present work is how to find effectively a recursive function separating two sets drawn arbitrarily from a given collection of disjoint sets. In particular, it is investigated in which cases it is possible to satisfy the following additional constraints: confidence, where the learner converges on all data sequences; conservativeness, where the learner abandons only definitely wrong hypotheses; consistency, where also every intermediate hypothesis is consistent with the data seen so far; set-driven learners, whose hypotheses are independent of the order and the number of repetitions of the data items supplied; learners where either the last or even all hypotheses are programs of total recursive functions. The present work gives an overview of the relations between these notions and succeeds in answering many questions by finding ways to carry over the corresponding results from other scenarios within inductive inference. Nevertheless, the relations...
Counting Extensional Differences in BC-Learning
Proceedings of the 5th International Colloquium on Grammatical Inference (ICGI 2000), Springer Lecture Notes in A.I. 1891
, 2000
Abstract

Cited by 1 (0 self)
Let BC be the model of behaviourally correct function learning as introduced by Barzdins [4] and Case and Smith [8]. We introduce a mind change hierarchy for BC, counting the number of extensional differences in the hypotheses of a learner. We compare the resulting models BC_n to models from the literature and discuss confidence, team learning, and finitely defective hypotheses. Among other things, we prove that there is a trade-off between the number of semantic mind changes and the number of anomalies in the hypotheses. We also discuss consequences for language learning. In particular we show that, in contrast to the case of function learning, the family of classes that are confidently BC-learnable from text is not closed under finite unions. Keywords: models of grammar induction, inductive inference, behaviourally correct learning.
Probabilistic Learning of Indexed Families under Monotonicity Constraints: Hierarchy Results and Complexity Aspects
Abstract
We are concerned with probabilistic identification of indexed families of uniformly recursive languages from positive data under monotonicity constraints. Thereby, we consider conservative, strong-monotonic and monotonic probabilistic learning of indexed families with respect to class-comprising, class-preserving and proper hypothesis spaces, and investigate the probabilistic hierarchies in these learning models. In the setting of learning indexed families, probabilistic learning under monotonicity constraints is more powerful than deterministic learning under monotonicity constraints, even if the probability is close to 1, provided the learning machines are restricted to proper or class-preserving hypothesis spaces. In the class-comprising case, each of the investigated probabilistic hierarchies has a threshold. In particular, we can show for class-comprising conservative learning as well as for learning without additional constraints that probabilistic identification and team identification are equivalent. This yields discrete probabilistic hierarchies in these cases. In the second part of our work, we investigate the relation between probabilistic learn...
Mind Change Optimal Learning: . . .
, 2007
Abstract
Learning theories play a role in machine learning analogous to that of computability and complexity theories in software engineering. Gold’s language learning paradigm is one cornerstone of modern learning theory. The aim of this thesis is to establish an inductive principle in Gold’s language learning paradigm to guide the design of machine learning algorithms. We follow the common practice of using the number of mind changes to measure the complexity of Gold’s language learning problems, and study efficient learning with respect to mind changes. Our starting point is the idea that a learner that is efficient with respect to mind changes minimizes mind changes not only globally in the entire learning problem, but also locally in subproblems after receiving some evidence. Formalizing this idea leads to the notion of mind change optimality. We characterize the mind change complexity of language collections with Cantor’s classic concept of accumulation order. We show that the characteristic property of mind change optimal learners is that they output conjectures (languages) with maximal accumulation order. Therefore, we obtain an inductive principle in Gold’s language learning paradigm based on the simple topological concept of accumulation order. The new ...
Mind Change Complexity of Learning Logic Programs
Abstract
The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin’s notion of finite thickness and Wright’s work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara’s notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let ω be a notation for the first limit ordinal. Then, it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:
• is identifiable in the limit from positive data with a mind change bound of ω^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of ω × m.
The above sufficient conditions are employed to give an ordinal mind change bound for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro’s linear programs, Arimura and Shinohara’s depth-bounded linearly-covering programs, and Krishna Rao’s depth-bounded linearly-moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered.
Counting Extensional Differences in BC-Learning
Abstract
University of Heidelberg; Sebastiaan A. Terwijn, Vrije Universiteit Amsterdam
unknown title
Abstract
The main question addressed in the present work is how to find effectively a recursive function separating two sets drawn arbitrarily from a given collection of disjoint sets. In particular, it is investigated in which cases it is possible to satisfy the following additional constraints: confidence, where the learner converges on all data sequences; conservativeness, where the learner abandons only definitely wrong hypotheses; consistency, where also every intermediate hypothesis is consistent with the data seen so far; set-driven learners, whose hypotheses are independent of the order and the number of repetitions of the data items supplied; learners where either the last or even all hypotheses are programs of total recursive functions. The present work gives an overview of the relations between these notions and succeeds in answering many questions by finding ways to carry over the corresponding results from other scenarios within inductive inference. Nevertheless, the relations between conservativeness and set-driven inference needed a novel approach, which made it possible to show the following two major results: (1) There is a class for which recursive separators can be found in a confident and set-driven way, but no conservative learner finds a (not necessarily total) separator for this class. (2) There is a class for which recursive separators can be found in a confident and conservative way, but no set-driven learner finds a (not necessarily total) separator for this class.