Results 1–8 of 8
Gradient Weights help Nonparametric Regressors
"... In regression problems over R d, the unknown function f often varies more in some coordinates than in others. We show that weighting each coordinate i with the estimated norm of the ith derivative of f is an efficient way to significantly improve the performance of distancebased regressors, e.g. ke ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
(Show Context)
In regression problems over R^d, the unknown function f often varies more in some coordinates than in others. We show that weighting each coordinate i with the estimated norm of the ith derivative of f is an efficient way to significantly improve the performance of distance-based regressors, e.g. kernel and kNN regressors. We propose a simple estimator of these derivative norms and prove its consistency. Moreover, the proposed estimator is efficiently learned online.
Gradients Weights improve Regression and Classification
, 2016
"... Abstract In regression problems over R d , the unknown function f often varies more in some coordinates than in others. We show that weighting each coordinate i according to an estimate of the variation of f along coordinate i e.g. the L 1 norm of the ithdirectional derivative of f is an efficie ..."
Abstract
 Add to MetaCart
(Show Context)
In regression problems over R^d, the unknown function f often varies more in some coordinates than in others. We show that weighting each coordinate i according to an estimate of the variation of f along coordinate i, e.g. the L1 norm of the ith directional derivative of f, is an efficient way to significantly improve the performance of distance-based regressors such as kernel and kNN regressors. The approach, termed Gradient Weighting (GW), consists of a first-pass regression estimate f_n, which serves to evaluate the directional derivatives of f, and a second-pass regression estimate on the reweighted data. The GW approach can be instantiated for both regression and classification, and is grounded in strong theoretical principles having to do with the way regression bias and variance are affected by a generic feature-weighting scheme. These theoretical principles provide further technical foundation for some existing feature-weighting heuristics that have proved successful in practice. We propose a simple estimator of these derivative norms and prove its consistency. The proposed estimator computes efficiently and easily extends to run online. We then derive a classification version of the GW approach, which is evaluated on real-world datasets with as much success as its regression counterpart.
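The two-pass scheme described in this abstract can be sketched generically: estimate the per-coordinate derivative norms from a first-pass distance-based estimate, then reweight coordinates for the second pass. This is an illustrative numpy sketch only, not the authors' implementation; the central-difference step size `t`, the choice of kNN for both passes, and the helper names are assumptions.

```python
import numpy as np

def knn_predict(Xtr, ytr, Xq, k=5, w=None):
    """kNN regression with optional per-coordinate weights w (defaults to 1)."""
    if w is None:
        w = np.ones(Xtr.shape[1])
    # weighted Euclidean distances between every query and training point
    diff = (Xq[:, None, :] - Xtr[None, :, :]) * w
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    idx = np.argsort(dist, axis=1)[:, :k]      # k nearest neighbors per query
    return ytr[idx].mean(axis=1)

def gradient_weights(Xtr, ytr, t=0.1, k=5):
    """Estimate the L1 norm of each directional derivative of f
    by central differences of a first-pass kNN estimate."""
    n, d = Xtr.shape
    w = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = t
        f_plus = knn_predict(Xtr, ytr, Xtr + e, k)
        f_minus = knn_predict(Xtr, ytr, Xtr - e, k)
        w[i] = np.mean(np.abs(f_plus - f_minus) / (2 * t))
    return w
```

A second pass then calls `knn_predict(Xtr, ytr, Xq, w=gradient_weights(Xtr, ytr))`, so that coordinates along which the first-pass estimate varies little contribute little to the distance.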
Dynamic GP models: an overview and recent developments
, Recent Researches in Applied Mathematics and Economics
"... Abstract: Various methods can be used for nonlinear, dynamicsystem identification and Gaussian process (GP) model is a relatively recent one. The GP model is an example of a probabilistic, nonparametric model with uncertainty predictions. It possesses several interesting features like model predict ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract: Various methods can be used for nonlinear dynamic-system identification, and the Gaussian process (GP) model is a relatively recent one. The GP model is an example of a probabilistic, nonparametric model with uncertainty in its predictions. It possesses several interesting features: model predictions contain a measure of confidence; the model has a small number of training parameters; structure determination is facilitated; and there are various possibilities for including prior knowledge about the modelled system. The framework for the identification of dynamic systems with GP models is presented, and an overview of recent advances in the research on dynamic-system identification with GP models and its applications is given. Key-Words: Nonlinear-system identification, Gaussian process models, dynamic systems, regression, control systems, fault detection, Bayesian filtering.
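The "predictions contain a measure of confidence" property can be illustrated with the standard GP regression posterior (mean and variance under a squared-exponential kernel). This is a generic textbook sketch, not the paper's modelling framework; the hyperparameter values `ell`, `sf`, `sn` are arbitrary assumptions.

```python
import numpy as np

def gp_predict(Xtr, ytr, Xq, ell=1.0, sf=1.0, sn=0.1):
    """GP regression posterior mean and variance at query points Xq."""
    def K(A, B):
        # squared-exponential (RBF) kernel with lengthscale ell, signal std sf
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

    Kxx = K(Xtr, Xtr) + sn ** 2 * np.eye(len(Xtr))  # add noise variance
    Kqx = K(Xq, Xtr)
    L = np.linalg.cholesky(Kxx)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Kqx @ alpha                              # posterior mean
    v = np.linalg.solve(L, Kqx.T)
    var = sf ** 2 - (v ** 2).sum(axis=0)            # posterior variance
    return mean, var
```

The returned variance is what gives GP predictions their built-in confidence measure: it is small near training data and reverts toward the prior variance `sf**2` far from it.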
Title: Adapting to Observable Changes: Lifelong Learning for Dynamic Robots
, 2012
"... Brief Statement of the Problem: Adaptation is critical for robots that operate in changing environments. For systems that change abruptly—such as a new object coming into the robot’s manipulation workspace or a new part being added to the robot — changes are often observable, since the robot has a g ..."
Abstract
 Add to MetaCart
(Show Context)
Brief Statement of the Problem: Adaptation is critical for robots that operate in changing environments. For systems that change abruptly—such as a new object coming into the robot's manipulation workspace or a new part being added to the robot—changes are often observable, since the robot has a good chance of either observing the change directly or being told by a human operator that a change has occurred. In this thesis, I will study the problem of adaptation to observable changes in underactuated dynamical systems. Robot table tennis will be the primary application. Within the adaptation domain, I will explore: (i) adapting models of system dynamics, and (ii) adapting perceptual routines. A novel Bayesian framework for (i) is presented in this proposal, while related work on (ii) and several other related research problems are also discussed at length.
Massachusetts Institute of Technology
Learning Control Under Uncertainty: A Probabilistic Value-Iteration Approach
"... Abstract. In this paper, we introduce a probabilistic version of the wellstudied ValueIteration approach, i.e. Probabilistic ValueIteration (PVI). The PVI approach can handle continuous states and actions in an episodic Reinforcement Learning (RL) setting, while using Gaussian Processes to model ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract. In this paper, we introduce a probabilistic version of the well-studied Value-Iteration approach, i.e. Probabilistic Value-Iteration (PVI). The PVI approach can handle continuous states and actions in an episodic Reinforcement Learning (RL) setting, while using Gaussian Processes to model the state uncertainties. We further show how the approach can be efficiently realized, making it suitable for learning with large data sets. The proposed PVI is evaluated on a benchmark problem, as well as on a real robot learning a control task. A comparison of PVI with two state-of-the-art RL algorithms shows that the proposed approach is competitive in performance while being efficient in learning.
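For reference, the well-studied tabular Value-Iteration update that PVI generalizes can be sketched as follows. This is a textbook sketch of the classical algorithm with a known finite MDP, not the paper's PVI (which replaces the table with GP models over continuous states and actions).

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Classical value iteration.
    P: (A, S, S) transition probabilities P[a, s, s'],
    R: (S, A) expected immediate rewards.
    Returns the optimal value function V and a greedy policy."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        # Bellman optimality backup: Q[s, a] = R[s, a] + gamma * E[V(s')]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

PVI's contribution, per the abstract, is handling the continuous, uncertain case where neither `P` nor `V` can be stored as exact tables.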
CoRLab
"... Abstract—We present a novel approach to learn and combine multiple input to output mappings. Our system can employ the mappings to find solutions that satisfy multiple task constraints simultaneously. This is done by training a network for each mapping independently and maintaining all solutions to ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract—We present a novel approach to learning and combining multiple input-to-output mappings. Our system can employ the mappings to find solutions that satisfy multiple task constraints simultaneously. This is done by training a network for each mapping independently and maintaining all solutions to multi-valued mappings. Redundancies are resolved online through dynamic competitions in neural fields. The performance of the approach is demonstrated in the example application of inverse kinematics learning. We show simulation results for the humanoid robot iCub, where we trained two networks: one to learn the kinematics of the robot's arm and one to learn which postures are close to joint limits. We show how our approach can be used to easily integrate multiple mappings that have been learned separately from each other. When multiple goals are given to the system, such as reaching for a target location and avoiding joint limits, it dynamically selects a solution that satisfies as many goals as possible.