Results 1 – 8 of 8
Transfer Learning
"... Abstract. Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned. While most machine learning algorithms are designed to address single tasks, the development of algorithms that facilitate transfer learning i ..."
Abstract

Cited by 25 (2 self)
Abstract. Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned. While most machine learning algorithms are designed to address single tasks, the development of algorithms that facilitate transfer learning is a topic of ongoing interest in the machine learning community. This chapter provides an introduction to the goals, formulations, and challenges of transfer learning. It surveys current research in this area, giving an overview of the state of the art and outlining the open problems. The survey covers transfer in both inductive learning and reinforcement learning, and discusses the issues of negative transfer and task mapping in depth.
Transfer in Reinforcement Learning via Markov Logic Networks
"... We propose the use of statistical relational learning, and in particular the formalism of Markov Logic Networks, for transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We do so by learning a Mar ..."
Abstract

Cited by 8 (5 self)
We propose the use of statistical relational learning, and in particular the formalism of Markov Logic Networks, for transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We do so by learning a Markov Logic Network that describes the source-task Q-function, and then using it for decision making in the early learning stages of the target task. Through experiments in the RoboCup simulated-soccer domain, we show that this approach can provide a substantial performance benefit in the target task.
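The early-stage use of a transferred value model described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `transferred_q`, `EARLY_EPISODES`, and the toy soccer state are all invented for the example, and the stand-in function replaces actual MLN inference.

```python
import random

# Illustrative sketch only: `transferred_q`, `EARLY_EPISODES`, and the
# toy soccer state are invented names, not taken from the paper.

EARLY_EPISODES = 50  # episodes during which the transferred model is trusted

def transferred_q(state, action):
    # Stand-in for MLN inference over the learned source-task Q-function.
    return 1.0 if action == "pass" and state.get("teammate_open") else 0.0

def choose_action(state, actions, target_q, episode, epsilon=0.1):
    """Epsilon-greedy choice: transferred knowledge early, learned Q later."""
    if random.random() < epsilon:
        return random.choice(actions)
    if episode < EARLY_EPISODES:
        return max(actions, key=lambda a: transferred_q(state, a))
    key = tuple(sorted(state.items()))
    return max(actions, key=lambda a: target_q.get((key, a), 0.0))
```

The point of the design is that the transferred model only biases exploration during the first episodes; once the target task's own Q-table has data, it takes over.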
Reinforcement Learning with Markov Logic Networks
 In Proceedings of the European Workshop on Reinforcement Learning
, 2008
"... Abstract. In this paper, we propose a method to combine reinforcement learning (RL) and Markov logic networks (MLN). RL usually does not consider the inherent relations or logical connections of the features. Markov logic networks combines firstorder logic and graphical model and it can represent a ..."
Abstract

Cited by 2 (0 self)
Abstract. In this paper, we propose a method to combine reinforcement learning (RL) and Markov logic networks (MLNs). RL usually does not consider the inherent relations or logical connections among features. Markov logic networks combine first-order logic with graphical models and can represent a wide variety of knowledge compactly and abstractly. We propose a new method, reinforcement learning with Markov logic networks (RLMLN), to deal with difficult RL problems that offer substantial prior knowledge and call for a relational representation of states. With RLMLN, prior knowledge can easily be introduced into the learning system, making the learning process more efficient. Experiments on the blocks world illustrate that RLMLN is a promising method.
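The idea of injecting relational prior knowledge into a value function can be illustrated with a toy linear Q-learner over hand-written relational features. This is a sketch under assumed names (`relational_features`, the blocks-world state layout), not the RLMLN algorithm itself:

```python
# Illustrative sketch only (not the RLMLN algorithm): prior relational
# knowledge written as binary features over (state, action), used by a
# linear Q-function trained with ordinary TD(0) updates.

def relational_features(state, action):
    # Each feature encodes one piece of hand-written prior knowledge,
    # e.g. "moving a block is useful when that block is clear".
    return [
        1.0 if state["clear"].get(action) else 0.0,          # block is clear
        1.0 if action == state.get("goal_block") else 0.0,   # block is the goal
    ]

def q_value(weights, state, action):
    return sum(w * f for w, f in zip(weights, relational_features(state, action)))

def td_update(weights, state, action, reward, next_q, alpha=0.1, gamma=0.9):
    """One TD(0) step on the linear weights."""
    delta = reward + gamma * next_q - q_value(weights, state, action)
    feats = relational_features(state, action)
    return [w + alpha * delta * f for w, f in zip(weights, feats)]
```

Because the features are relational rather than propositional, knowledge learned for one block generalizes to any block satisfying the same relations.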
Improving Learning of Markov Logic Networks using Transfer and Bottom-Up Induction
"... Statistical relational learning (SRL) algorithms combine ideas from rich knowledge representations, such as firstorder logic, with those from probabilistic graphical models, such as Markov networks, to address the problem of learning from multirelational data. One challenge posed by such data is t ..."
Abstract
Statistical relational learning (SRL) algorithms combine ideas from rich knowledge representations, such as first-order logic, with those from probabilistic graphical models, such as Markov networks, to address the problem of learning from multi-relational data. One challenge posed by such data is that individual instances are frequently very large and include complex relationships among the entities. Moreover, because separate instances do not follow the same structure and contain varying numbers of entities, they cannot be effectively represented as a feature vector. SRL models and algorithms have been successfully applied to a wide variety of domains such as social network analysis, biological data analysis, and planning, among others. Markov logic networks (MLNs) are a recently developed SRL model that consists of weighted first-order clauses. MLNs can be viewed as templates that define Markov networks when provided with the set of constants present in a domain. MLNs are therefore very powerful because they inherit the expressivity of first-order logic. At the same time, MLNs can flexibly deal with noisy or uncertain data to produce probabilistic predictions for a set of propositions. MLNs have also been shown to subsume several other popular SRL models. The expressive power of MLNs comes at a cost: structure learning, or learning the first-order clauses ...
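The "template" view of MLNs described above can be made concrete with a toy example: one weighted clause plus two constants yields a tiny ground network, and a world's unnormalized score is the exponential of the weighted count of satisfied groundings. The clause, weight, and constants below are invented for illustration:

```python
import math

# Toy illustration of the MLN template view: one weighted clause,
# Smokes(x) => Cancer(x) with weight 1.5, grounded over two constants.
# A world's unnormalized probability is exp(w * n), where n counts the
# clause's satisfied groundings. Clause, weight, and constants are invented.

constants = ["anna", "bob"]
w = 1.5

def satisfied_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) that hold in `world`."""
    return sum(
        1 for x in constants
        if (not world[("Smokes", x)]) or world[("Cancer", x)]
    )

def unnormalized_prob(world):
    return math.exp(w * satisfied_groundings(world))

world = {("Smokes", "anna"): True, ("Cancer", "anna"): True,
         ("Smokes", "bob"): False, ("Cancer", "bob"): False}
# Both groundings hold in this world, so the score is exp(2 * 1.5).
```

Normalizing these scores over all possible worlds would give the actual MLN distribution; the sketch omits that sum, which grows exponentially with the number of ground atoms.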
Learning with Markov Logic Networks: Transfer Learning, Structure Learning, and an Application to Web Query Disambiguation
, 2009
"... ..."
Information Theoretic Similarity Measures for Inter-domain Predicate Mapping
"... The development of similarity functions for firstorder logic predicates and argument types is the initial step in the development of techniques for interdomain predicate mapping. Predicate mappings established across textual data sources can be applied in federated text search during resource selec ..."
Abstract
The development of similarity functions for first-order logic predicates and argument types is the initial step in the development of techniques for inter-domain predicate mapping. Predicate mappings established across textual data sources can be applied in federated text search during resource selection, and by systems such as Markov Logic Networks for transfer learning. In this work, we propose similarity functions for mapping predicates and argument types. Each predicate is represented by a mutual information matrix characterizing statistical associations between predicate arguments. Drawbacks of using the Euclidean distance function as a similarity measure are discussed and mitigated in our approach. We also demonstrate that variations in the numbers of groundings of predicates have a significant and undesirable impact on their similarity scores, and propose a normalization scheme to address this deficiency. Preliminary experimental results on real-world datasets collected from the web demonstrate the effects of normalization of mutual information matrices and the resulting invariance of the similarity functions to variations in the numbers of groundings of predicates. The results also show that predicates that encode the same type of relations (e.g., one-to-many) tend to receive higher similarity scores than pairs of predicates where each predicate encodes a different type of relation (e.g., one-to-many and one-to-one). Overall, our approach to measuring similarity for predicate mapping promises to scale to a number of text mining applications including federated search and retrieval, as well as other domains such as transfer learning.
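The general idea sketched in this abstract can be illustrated in a few lines: estimate mutual information between two argument positions of a predicate from its groundings, and compare predicates with a scale-invariant similarity rather than Euclidean distance. This is a hedged sketch of the general approach, not the paper's exact measures or normalization scheme:

```python
import math
from collections import Counter

# Hedged sketch (not the paper's exact measures): MI between two argument
# positions of a binary predicate, estimated from its groundings, plus a
# scale-invariant similarity so the number of groundings matters less.

def mutual_information(pairs):
    """MI (in nats) between the two argument positions of a predicate."""
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return sum(
        (c / n) * math.log((c / n) / ((left[a] / n) * (right[b] / n)))
        for (a, b), c in joint.items()
    )

def cosine_similarity(u, v):
    """Scale-invariant alternative to plain Euclidean distance."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0
```

Because MI is computed from relative frequencies, duplicating every grounding leaves the estimate unchanged, which is the kind of grounding-count invariance the abstract argues for.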
TABLE OF CONTENTS
"... Maclin, to my committee members, and to the Machine Learning Group at the University of WisconsinMadison. ..."
Abstract
Maclin, to my committee members, and to the Machine Learning Group at the University of Wisconsin-Madison.
A Study of Boosting-based Transfer Learning for Activity and Gesture Recognition
, 2011
"... i Realworld environments are characterized by nonstationary and continuously evolving data. Learning a classification model on this data would require a framework that is able to adapt itself to newer circumstances. Under such circumstances, transfer learning has come to be a dependable methodolog ..."
Abstract
Real-world environments are characterized by non-stationary and continuously evolving data. Learning a classification model on such data requires a framework that can adapt itself to new circumstances. Under such circumstances, transfer learning has come to be a dependable methodology for improving classification performance with reduced training costs and without the need for explicit relearning from scratch. In this thesis, a novel instance transfer technique that adapts a “cost-sensitive” variation of AdaBoost is presented. The method capitalizes on the theoretical and functional properties of AdaBoost to selectively reuse outdated training instances obtained from a “source” domain to effectively classify unseen instances occurring in a different, but related, “target” domain. The algorithm is evaluated on real-world classification problems, namely accelerometer-based 3D gesture recognition, smart home activity recognition, and text categorization. The performance on these datasets is analyzed and evaluated against popular boosting-based instance transfer techniques. In addition, supporting empirical studies that investigate some of the less explored bottlenecks of boosting-based instance transfer methods are presented, to understand the suitability and effectiveness of this form of knowledge transfer.
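The core weight-update asymmetry of boosting-based instance transfer can be sketched as below, in the spirit of TrAdaBoost-style updates: misclassified source instances are shrunk (they look outdated), while misclassified target instances are boosted as in ordinary AdaBoost. The exact factors are illustrative, not the thesis's algorithm:

```python
import math

# Rough sketch of boosting-based instance transfer (illustrative factors,
# not the thesis's exact algorithm): source instances a weak learner gets
# wrong are down-weighted; misclassified target instances are up-weighted.

def reweight(weights, correct, is_source, error, n_source, rounds):
    """One boosting round of weight updates over combined instances.

    error is the weak learner's weighted error on the target data (< 0.5).
    """
    beta_target = error / (1.0 - error)  # AdaBoost-style update factor
    beta_source = 1.0 / (1.0 + math.sqrt(2.0 * math.log(n_source) / rounds))
    new = []
    for w, ok, src in zip(weights, correct, is_source):
        if ok:
            new.append(w)
        elif src:
            new.append(w * beta_source)   # shrink misclassified source instances
        else:
            new.append(w / beta_target)   # boost misclassified target instances
    total = sum(new)
    return [w / total for w in new]       # renormalize to a distribution
```

After a few rounds, source instances that consistently disagree with the target concept carry negligible weight, which is how the method "selectively reuses" outdated training data.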