Results 11–20 of 319,674
Learning neighborhoods for metric learning
In ECML/PKDD, 2012
Cited by 2 (0 self)
"... Metric learning methods have been shown to perform well on different learning tasks. Many of them rely on target neighborhood relationships that are computed in the original feature space and remain fixed throughout learning. As a result, the learned metric reflects the original neighborho ..."
Bounded-Distortion Metric Learning
"... Metric learning aims to embed one metric space into another to benefit tasks like classification and clustering. Although a greatly distorted metric space has a high degree of freedom to fit training data, it is prone to overfitting and numerical inaccuracy. This paper presents bounded-distortion me ..."
Kernel Density Metric Learning
Cited by 2 (0 self)
"... This paper introduces a supervised metric learning algorithm, called kernel density metric learning (KDML), which is easy to use and provides nonlinear, probability-based distance measures. KDML constructs a direct nonlinear mapping from the original input space into a feature space based ..."
Sparse Compositional Metric Learning
Cited by 3 (1 self)
"... We propose a new approach for metric learning by framing it as learning a sparse combination of locally discriminative metrics that are inexpensive to generate from the training data. This flexible framework allows us to naturally derive formulations for global, multi-task and local metric learning. ..."
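The combination described in this snippet can be sketched as a weighted sum of squared Mahalanobis distances. This is a minimal illustration, not the paper's formulation: the basis metrics and the sparsity-inducing learner are placeholders.

```python
import numpy as np

def composed_sq_distance(x, y, basis, weights):
    """Squared distance under a non-negative combination of basis metrics:
    d^2(x, y) = sum_k w_k * (x - y)^T M_k (x - y).
    A sparse weight vector selects only a few local metrics."""
    diff = x - y
    return sum(w * diff @ M @ diff for w, M in zip(weights, basis))

# Two hypothetical basis metrics; the sparse weights pick only the second.
basis = [np.eye(2), np.diag([4.0, 0.25])]
d2 = composed_sq_distance(np.array([1.0, 0.0]), np.array([0.0, 0.0]),
                          basis, weights=[0.0, 1.0])
# d2 == 4.0: only the selected metric contributes
```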
Two-Stage Metric Learning
"... In this paper, we present a novel two-stage metric learning algorithm. We first map each learning instance to a probability distribution by computing its similarities to a set of fixed anchor points. Then, we define the distance in the input data space as the Fisher information distance on the asso ..."
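The first stage described in this snippet can be sketched as follows; the choice of a normalized Gaussian similarity is an assumption for illustration, since the snippet does not specify the similarity function.

```python
import numpy as np

def to_probability(x, anchors, bandwidth=1.0):
    """Stage one (sketch): represent x as a probability distribution over a
    set of fixed anchor points, using normalized Gaussian similarities."""
    d2 = ((anchors - x) ** 2).sum(axis=1)   # squared distance to each anchor
    s = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return s / s.sum()                      # normalize into a distribution

anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = to_probability(np.array([0.1, 0.0]), anchors)
# p sums to 1 and puts the most mass on the nearest anchor (index 0)
```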
Robust Structural Metric Learning
Cited by 11 (1 self)
"... Metric learning algorithms produce a linear transformation of data which is optimized for a prediction task, such as nearest-neighbor classification or ranking. However, when the input data contains a large portion of non-informative features, existing methods fail to identify the relevant features, ..."
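The "linear transformation" view shared by this family of methods amounts to measuring Euclidean distance after projecting by a learned matrix L, which induces the Mahalanobis metric M = LᵀL. A minimal sketch, with L standing in for whatever transformation a learner produces:

```python
import numpy as np

def pairwise_distances(X, L):
    """Distances under a learned linear map L: d(x, y) = ||Lx - Ly||_2,
    equivalently d^2(x, y) = (x - y)^T M (x - y) with M = L^T L."""
    Z = X @ L.T                               # project every point: z_i = L x_i
    diff = Z[:, None, :] - Z[None, :, :]      # all pairwise differences
    return np.sqrt((diff ** 2).sum(axis=-1))

# With L = I the learned metric reduces to plain Euclidean distance.
X = np.array([[0.0, 0.0], [3.0, 4.0]])
D = pairwise_distances(X, np.eye(2))
# D[0, 1] == 5.0
```

A learner that down-weights non-informative features would produce an L with small (or zero) entries in the corresponding columns, so those features stop contributing to the distance.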
Metric Learning for Kernel Regression
Cited by 26 (0 self)
"... Kernel regression is a well-established method for nonlinear regression in which the target value for a test point is estimated using a weighted average of the surrounding training samples. The weights are typically obtained by applying a distance-based kernel function to each of the samples, which presumes the existence of a well-defined distance metric. In this paper, we construct a novel algorithm for supervised metric learning, which learns a distance function by directly minimizing the leave-one-out regression error. We show that our algorithm makes kernel regression comparable with the state ..."
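The weighted-average estimator this snippet describes is the classical Nadaraya-Watson form. A minimal sketch, with plain Euclidean distance and a Gaussian kernel standing in for the learned distance function:

```python
import numpy as np

def kernel_regression(X_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson estimate: a kernel-weighted average of training
    targets. A metric-learning method would replace the Euclidean distance
    below with a learned one."""
    d2 = ((X_train - x_query) ** 2).sum(axis=1)     # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))        # Gaussian kernel weights
    return (w @ y_train) / w.sum()                  # weighted average

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
y_hat = kernel_regression(X, y, np.array([1.0]))
# y_hat == 1.0: a query midway between symmetric neighbors returns their average
```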
Nonlinear metric learning
In NIPS, 2012
Cited by 18 (3 self)
"... In this paper, we introduce two novel metric learning algorithms, χ²-LMNN and GB-LMNN, which are explicitly designed to be nonlinear and easy-to-use. The two approaches achieve this goal in fundamentally different ways: χ²-LMNN inherits the computational benefits of a linear mapping from linear met ..."
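The χ² in χ²-LMNN refers to the chi-squared histogram distance, a non-Euclidean metric suited to normalized histogram data; a minimal sketch of that distance (the LMNN optimization around it is beyond a snippet):

```python
import numpy as np

def chi2_distance(p, q, eps=1e-12):
    """Chi-squared distance between two histograms:
    d(p, q) = 0.5 * sum_i (p_i - q_i)^2 / (p_i + q_i).
    eps guards against division by zero on empty bins."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

p = np.array([0.5, 0.5])
q = np.array([1.0, 0.0])
d = chi2_distance(p, q)
# d ≈ 0.3333 (= 1/6 + 1/2, halved)
```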
Fantope regularization in metric learning
In CVPR, 2014
Cited by 3 (2 self)
"... This paper introduces a regularization method to explicitly control the rank of a learned symmetric positive semidefinite distance matrix in distance metric learning. To this end, we propose to incorporate in the objective function a linear regularization term that minimizes the k smallest eigenval ..."
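The regularization term described here can be evaluated as the sum of the k smallest eigenvalues of the PSD distance matrix; driving it to zero pushes the matrix toward rank at most d − k. A sketch of computing the penalty value (the optimization machinery is omitted):

```python
import numpy as np

def rank_penalty(M, k):
    """Sum of the k smallest eigenvalues of the symmetric PSD matrix M.
    For PSD M this is zero exactly when rank(M) <= d - k."""
    eigvals = np.linalg.eigvalsh(M)   # ascending order for symmetric input
    return eigvals[:k].sum()

M = np.diag([0.0, 0.0, 2.0, 5.0])    # a rank-2 PSD matrix in d = 4
penalty = rank_penalty(M, 2)
# penalty == 0.0: the two smallest eigenvalues vanish
```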
Deep Transfer Metric Learning
"... Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption doesn’t hold in many real visual recognition applications, especially when samples are captured across dif ..."