Results 1-10 of 71,425
The Nature of Statistical Learning Theory, 1999. Cited by 13236 (32 self).
"... Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990s new types of learning algorithms (called support vector machines) based on the deve ..."
Uncovering Structure . . . : Networks and Multitask Learning Problems, 2013.
"... Extracting knowledge and providing insights into complex mechanisms underlying noisy high-dimensional data sets is of utmost importance in many scientific domains. Statistical modeling has become ubiquitous in the analysis of high-dimensional functional data in search of better understanding of cogn ..."
"... where the number of samples n is much smaller than the ambient dimension p. Learning in high dimensions is difficult due to the curse of dimensionality, however, ..."
Multitask feature learning. Advances in Neural Information Processing Systems 19, 2007. Cited by 240 (8 self).
"... We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem using a new regularizer which controls the number of learned features common for all the tasks. We show that th ..."
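The regularizer described in this snippet penalizes the number of features learned in common across tasks. A standard concrete choice for such a shared-feature penalty is the (2,1)-norm of the task weight matrix; the sketch below is an illustration of that penalty, not code taken from the paper, and the variable names are my own assumptions:

```python
import numpy as np

def l21_norm(W):
    """(2,1)-norm of a weight matrix W with shape (n_features, n_tasks).

    Each row collects one feature's weights across all tasks; summing the
    rows' Euclidean norms encourages entire rows to be zero, i.e. features
    that are dropped jointly for every task.
    """
    return np.sum(np.linalg.norm(W, axis=1))

# Toy example: 4 features, 3 tasks; rows 1 and 2 are exactly zero,
# so only two features are actually shared across the tasks.
W = np.array([[1.0, 2.0, 2.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [3.0, 0.0, 4.0]])
print(l21_norm(W))  # = ||row 0||_2 + ||row 3||_2 = 3.0 + 5.0 = 8.0
```

Minimizing a squared loss plus this penalty (rather than a per-task 1-norm) is what couples the tasks: a feature is kept or discarded for all tasks at once.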
Convex multitask feature learning. Machine Learning, 2007. Cited by 258 (25 self).
"... We present a method for learning sparse representations shared across multiple tasks. This method is a generalization of the well-known single-task 1-norm regularization. It is based on a novel non-convex regularizer which controls the number of learned features common across the tasks. We prove th ..."
"... that the method is equivalent to solving a convex optimization problem for which there is an iterative algorithm which converges to an optimal solution. The algorithm has a simple interpretation: it alternately performs a supervised and an unsupervised step, where in the former step it learns task ..."
Multitask Learning, 1997. Cited by 677 (6 self).
"... Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for ..."
Connection Between SVM+ and MultiTask Learning. Cited by 4 (0 self).
"... Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. When data are naturally separated into groups, SVM+ [7] can effectively utilize this structure information to improve generalization. Alternatively, we can view learning bas ..."
"... based on data from each group as an individual task, but all these tasks are somehow related; so the same problem can also be formulated as a multitask learning problem. Following the SVM+ approach, we propose a new multitask learning algorithm called svm+MTL, which can be thought of as an adaptation ..."
Accelerated gradient method for multitask sparse learning problem. In Proceedings of the International Conference on Data Mining, 2009. Cited by 24 (1 self).
"... Many real-world learning problems can be recast as multitask learning problems which utilize correlations among different tasks to obtain better generalization performance than learning each task individually. The feature selection problem in the multitask setting has many applications in fie ..."
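Accelerated gradient methods for this kind of problem typically combine a gradient step on the smooth loss with a proximal step on the sparsity penalty, plus Nesterov momentum. The sketch below is a generic FISTA-style loop with a row-wise (2,1)-norm prox; it illustrates the family of algorithms, not the paper's exact method, and the problem setup (all tasks sharing one design matrix) is a simplifying assumption of mine:

```python
import numpy as np

def prox_l21(W, tau):
    """Row-wise soft-thresholding: proximal operator of tau * (2,1)-norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

def fista_multitask(X, Y, lam, lr, n_iter=200):
    """Accelerated proximal gradient for
       min_W 0.5 * ||X W - Y||_F^2 + lam * ||W||_{2,1}.

    X: (n_samples, n_features); Y: (n_samples, n_tasks). All tasks share X
    here purely to keep the sketch short.
    """
    W = np.zeros((X.shape[1], Y.shape[1]))
    Z, t = W.copy(), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)                 # gradient of smooth loss at Z
        W_new = prox_l21(Z - lr * grad, lr * lam)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_new + ((t - 1.0) / t_new) * (W_new - W)  # Nesterov momentum
        W, t = W_new, t_new
    return W
```

For convergence the step size `lr` should be at most 1/L, where L is the largest eigenvalue of X^T X; the momentum sequence is what lifts the plain proximal gradient method's O(1/k) rate to O(1/k^2).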
Regularized multitask learning, 2004. Cited by 277 (2 self).
"... This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize cl ..."
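In the vector-valued RKHS setting described above, a commonly used matrix-valued kernel is the separable form K((x, s), (x', t)) = k(x, x') * B[s, t], where k is an ordinary scalar kernel on inputs and B is a positive semidefinite inter-task similarity matrix. The sketch below illustrates that construction; the Gaussian base kernel and all names are my assumptions, not the paper's specific examples:

```python
import numpy as np

def separable_mt_kernel(X1, tasks1, X2, tasks2, B, gamma=1.0):
    """Separable matrix-valued kernel:
       K((x, s), (x', t)) = exp(-gamma * ||x - x'||^2) * B[s, t].

    X1, X2: input arrays of shape (n_i, d); tasks1, tasks2: integer task
    index per row; B: (n_tasks, n_tasks) PSD inter-task similarity matrix.
    """
    sq = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=2)
    base = np.exp(-gamma * sq)                 # scalar input-space kernel
    return base * B[np.ix_(tasks1, tasks2)]   # scale by task similarity

# Toy example: 2 tasks with identical inputs, so the base kernel is 1
# everywhere and the Gram matrix reduces to B itself.
X = np.array([[0.0, 0.0], [0.0, 0.0]])
tasks = np.array([0, 1])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
K = separable_mt_kernel(X, tasks, X, tasks, B)
print(K)  # [[1.0, 0.5], [0.5, 1.0]]
```

With B = I the tasks decouple into independent single-task problems; a dense B shares information across tasks in proportion to its off-diagonal entries.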
Multitask Gaussian Process Learning of Robot Inverse Dynamics. Cited by 18 (1 self).
"... The inverse dynamics problem for a robotic manipulator is to compute the torques needed at the joints to drive it along a given trajectory; it is beneficial to be able to learn this function for adaptive control. A robotic manipulator will often need to be controlled while holding different loads in ..."
"... in its end effector, giving rise to a multitask learning problem. By placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multitask Gaussian process prior for handling multiple loads, where the inter-task similarity depends on the underlying ..."
MultiTask Learning with Gaussian Matrix Generalized Inverse Gaussian Model. Cited by 1 (0 self).
"... In this paper, we study the multitask learning problem with a new perspective of considering the structure of the residue error matrix and the low-rank approximation to the task covariance matrix simultaneously. In particular, we first introduce the Matrix Generalized Inverse Gaussian (MGIG) prior ..."