Results 1–10 of 312
In defense of one-vs-all classification
Journal of Machine Learning Research, 2004
Abstract

Cited by 312 (0 self)
Editor: John Shawe-Taylor. We consider the problem of multiclass classification. Our main thesis is that a simple “one-vs-all” scheme is as accurate as any other approach, assuming that the underlying binary classifiers are well-tuned regularized classifiers such as support vector machines. This thesis is interesting in that it disagrees with a large body of recent published work on multiclass classification. We support our position by means of a critical review of the existing literature, a substantial collection of carefully controlled experimental work, and theoretical arguments.
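The one-vs-all scheme this abstract defends fits in a few lines: train one binary scorer per class, then predict by winner-take-all over the scores. The sketch below is illustrative only — it uses ridge-regularized linear scorers as a stand-in for the well-tuned binary SVMs the paper assumes, and all names and the toy data are the editor's, not the paper's.

```python
import numpy as np

def train_one_vs_all(X, y, classes, lam=1e-2):
    """Train one regularized linear scorer per class. Ridge regression on
    +/-1 targets stands in for the well-tuned binary SVMs the paper assumes."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append a bias column
    W = {}
    for c in classes:
        t = np.where(y == c, 1.0, -1.0)                # class c vs. the rest
        A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])      # ridge normal equations
        W[c] = np.linalg.solve(A, Xb.T @ t)
    return W

def predict_one_vs_all(W, X):
    """Winner-take-all: predict the class whose binary scorer fires strongest."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    classes = list(W)
    S = np.stack([Xb @ W[c] for c in classes], axis=1)
    return np.array(classes)[S.argmax(axis=1)]

# Toy 3-class problem: tight clusters around three well-separated centers.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
W = train_one_vs_all(X, y, classes=[0, 1, 2])
acc = (predict_one_vs_all(W, X) == y).mean()
```

On this separable toy problem the one-vs-all ridge scorers classify essentially all points correctly; the paper's point is that with well-tuned binary classifiers this simple scheme matches more elaborate multiclass constructions.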
HAMMER: hierarchical attribute matching mechanism for elastic registration
IEEE Trans. on Medical Imaging, 2002
Abstract

Cited by 269 (91 self)
A new approach is presented for elastic registration of medical images, and is applied to magnetic resonance images of the brain. Experimental results demonstrate remarkably high accuracy in superposition of images from different subjects, thus enabling very precise localization of morphological characteristics in population studies. There are two major novelties in the proposed algorithm. First, it uses an attribute vector, i.e. a set of geometric moment invariants that is defined on each voxel in an image, to reflect the underlying anatomy at different scales. The attribute vector, if rich enough, can distinguish between different parts of an image, which helps establish anatomical correspondences in the deformation procedure. This is a fundamental deviation of our method from other volumetric deformation methods, which are typically based on maximizing image similarity. Second, it employs a hierarchical deformation mechanism, which is initially influenced by parts of the anatomy that can be identified relatively more reliably than others. Moreover, the deformation mechanism involves a sequence of local smooth transformations, which do not update positions of individual voxels, but rather are based on evaluating a similarity of attribute vectors over a larger subvolume of a volumetric image. This renders this algorithm very robust to suboptimal solutions. A number of experiments in this paper have demonstrated excellent performance.
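The two ideas the abstract highlights — per-voxel attribute vectors and matching evaluated over whole subvolumes rather than single voxels — can be sketched as follows. The similarity formula and normalization here are simplified illustrations chosen by the editor, not HAMMER's exact moment-invariant construction.

```python
import numpy as np

def attribute_similarity(av_a, av_b):
    """Similarity of two per-voxel attribute vectors (e.g. stacked geometric
    moment invariants at several scales), assuming each attribute has been
    normalized to [0, 1]; identical vectors score 1, and any fully mismatched
    attribute drives the score to 0."""
    diff = np.abs(np.asarray(av_a, float) - np.asarray(av_b, float))
    return float(np.prod(1.0 - diff))

def subvolume_match(avs_a, avs_b):
    """Match score over a neighborhood of voxels: the algorithm evaluates
    attribute similarity over a larger subvolume rather than voxel by voxel,
    which is what makes it robust to locally suboptimal matches."""
    sims = [attribute_similarity(a, b) for a, b in zip(avs_a, avs_b)]
    return sum(sims) / len(sims)
```

A rich enough attribute vector makes distinct anatomical parts score low against each other, so correspondences driven by `subvolume_match` land on anatomically matching regions instead of merely similar intensities.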
Intrinsic motivation systems for autonomous mental development
IEEE Transactions on Evolutionary Computation, 2007
Abstract

Cited by 253 (56 self)
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot’s activities autonomously increases and complex developmental sequences self-organize without …
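The core selection rule the abstract describes — act where learning progress, not raw novelty, is maximal — can be sketched in a few lines. The windowed error difference below is the editor's simplification of the paper's smoothed error derivative, and the toy region data is illustrative.

```python
def learning_progress(errors, window=3):
    """Learning progress of a sensorimotor region: how much its recent
    prediction error dropped relative to the window before (a simplified
    version of the smoothed error derivative used in the paper)."""
    recent = sum(errors[-window:]) / window
    older = sum(errors[-2 * window:-window]) / window
    return older - recent

# Per-episode prediction errors for three kinds of regions:
regions = {
    "predictable":   [0.01, 0.01, 0.01, 0.01, 0.01, 0.01],  # mastered: no progress
    "unpredictable": [0.90, 0.80, 0.90, 0.85, 0.90, 0.88],  # pure noise: no progress
    "learnable":     [0.80, 0.70, 0.60, 0.40, 0.30, 0.20],  # error falling fast
}
# The intrinsically motivated agent acts where learning progress is maximal,
# i.e. in regions neither too predictable nor too unpredictable.
chosen = max(regions, key=lambda r: learning_progress(regions[r]))
```

Both the already-mastered region and the noise region show near-zero progress, so the rule steers the agent to the region of intermediate difficulty — the mechanism behind the self-organizing developmental sequences the abstract mentions.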
Interactive Deduplication using Active Learning
2002
Abstract

Cited by 236 (5 self)
Deduplication is a key operation in integrating data from multiple sources. The main challenge in this task is designing a function that can resolve when a pair of records refers to the same entity in spite of various data inconsistencies. Most existing systems use hand-coded functions. One way to overcome the tedium of hand-coding is to train a classifier to distinguish between duplicates and non-duplicates. The success of this method critically hinges on being able to provide a covering and challenging set of training pairs that bring out the subtlety of the deduplication function. This is non-trivial because it requires manually searching for various data inconsistencies between any two records spread apart in large lists.
We present our design of a learning-based deduplication system that uses a novel method of interactively discovering challenging training pairs using active learning. Our experiments on real-life datasets show that active learning significantly reduces the number of instances needed to achieve high accuracy. We investigate various design issues that arise in building a system to provide interactive response, fast convergence, and interpretable output.
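The heart of this approach — asking the user to label only the pairs the current classifier is least sure about — can be sketched with generic uncertainty sampling. Note the paper's actual selector is committee-based; this stand-in, with hypothetical names and toy scores, only illustrates the idea.

```python
import numpy as np

def pick_uncertain_pairs(dup_scores, k):
    """Select the k record pairs whose predicted duplicate probability is
    closest to the 0.5 decision boundary: the pairs the current classifier
    is least sure about, hence the most informative to hand-label next.
    (Generic uncertainty sampling; the paper's selector is committee-based.)"""
    order = np.argsort(np.abs(dup_scores - 0.5))
    return order[:k]

# Duplicate scores for six candidate record pairs from the current classifier:
scores = np.array([0.95, 0.48, 0.10, 0.52, 0.99, 0.30])
picked = pick_uncertain_pairs(scores, k=2)   # the two borderline pairs
```

Confident duplicates (0.95, 0.99) and confident non-duplicates (0.10) teach the classifier little; the borderline pairs expose the subtle inconsistencies the abstract says are hard to find by manual search.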
Support vector machines using GMM supervectors for speaker verification
IEEE Signal Processing Letters, 2006
Abstract

Cited by 182 (6 self)
Interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States …
SVM based speaker verification using a GMM supervector kernel and NAP variability compensation
In Proceedings of ICASSP, 2006
Abstract

Cited by 157 (16 self)
Gaussian mixture models with universal background models (UBMs) have become the standard method for speaker recognition. Typically, a speaker model is constructed by MAP adaptation of the means of the UBM. A GMM supervector is constructed by stacking the means of the adapted mixture components. A recent discovery is that latent factor analysis of this GMM supervector is an effective method for variability compensation. We consider this GMM supervector in the context of support vector machines. We construct a support vector machine kernel using the GMM supervector. We show similarities based on this kernel between the method of SVM nuisance attribute projection (NAP) and the recent results in latent factor analysis. Experiments on a NIST SRE 2005 corpus demonstrate the effectiveness of the new technique.
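The supervector construction the abstract describes — relevance-MAP adaptation of the UBM means followed by stacking them into one long vector — can be sketched as below. This is a simplified illustration: it adapts means only and omits the weight/covariance scaling used in the actual supervector kernel, and all names and toy data are the editor's.

```python
import numpy as np

def map_adapt_means(ubm_means, frames, resp, r=16.0):
    """Relevance-MAP adaptation of the UBM component means (means only).
    resp[t, k] is the posterior of mixture k for frame t; r is the
    relevance factor controlling how fast data overrides the prior."""
    n = resp.sum(axis=0)                                   # soft counts per mixture
    ex = resp.T @ frames / np.maximum(n, 1e-8)[:, None]    # posterior mean per mixture
    alpha = n / (n + r)                                    # data-driven adaptation weight
    return alpha[:, None] * ex + (1.0 - alpha)[:, None] * ubm_means

def gmm_supervector(adapted_means):
    """Stack the K adapted D-dimensional means into one K*D supervector."""
    return adapted_means.reshape(-1)

rng = np.random.default_rng(1)
K, D, T = 4, 3, 50                                         # mixtures, dims, frames
ubm_means = rng.standard_normal((K, D))
frames = rng.standard_normal((T, D))
resp = rng.random((T, K))
resp /= resp.sum(axis=1, keepdims=True)                    # posteriors sum to 1 per frame
sv = gmm_supervector(map_adapt_means(ubm_means, frames, resp))
```

Mixtures with few assigned frames stay close to the UBM prior (small `alpha`), so the supervector encodes only the speaker-specific deviations — the representation the SVM kernel is then built on.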
The Kernel Recursive Least Squares Algorithm
IEEE Transactions on Signal Processing, 2003
Abstract

Cited by 138 (2 self)
We present a nonlinear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel-RLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean squared error regressor. Sparsity of the solution is achieved by a sequential sparsification process that admits into the kernel representation a new input sample only if its feature-space image cannot be sufficiently well approximated by combining the images of previously admitted samples. This sparsification procedure is crucial to the operation of KRLS, as it allows the algorithm to operate online and effectively regularizes its solutions. A theoretical analysis of the sparsification method reveals its close affinity to kernel PCA, and a data-dependent loss bound is presented, quantifying the generalization performance of the KRLS algorithm. We demonstrate the performance and scaling properties of KRLS and compare it to a state-of-the-art Support Vector Regression algorithm, using both synthetic and real data. We additionally test KRLS on two signal processing problems in which the use of traditional least-squares methods is commonplace: time series prediction and channel equalization.
The analysis of decomposition methods for support vector machines
IEEE Transactions on Neural Networks, 1999
Abstract

Cited by 131 (20 self)
The decomposition method is currently one of the major methods for solving support vector machines. An important issue of this method is the selection of working sets. In this paper, through the design of decomposition methods for bound-constrained SVM formulations, we demonstrate that working set selection is not a trivial task. Then, from the experimental analysis, we propose a simple selection of the working set which leads to faster convergence for difficult cases. Numerical experiments on different types of problems are conducted to demonstrate the viability of the proposed method.
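To make the working-set problem concrete, here is the standard gradient-based heuristic for a two-element working set: pick the pair of dual variables that most violates the KKT optimality conditions. This is the common baseline rule, not the selection this paper proposes, and the toy numbers are illustrative.

```python
import numpy as np

def most_violating_pair(grad, y, alpha, C):
    """Pick a two-element working set as the pair most violating the KKT
    conditions of the SVM dual (the standard gradient-based heuristic;
    the paper's own selection rule differs in its details)."""
    up = [t for t in range(len(y))
          if (y[t] == 1 and alpha[t] < C) or (y[t] == -1 and alpha[t] > 0)]
    low = [t for t in range(len(y))
           if (y[t] == 1 and alpha[t] > 0) or (y[t] == -1 and alpha[t] < C)]
    i = max(up, key=lambda t: -y[t] * grad[t])   # most violating from above
    j = min(low, key=lambda t: -y[t] * grad[t])  # most violating from below
    return i, j

# Four training points at the start of optimization (alpha = 0):
grad = np.array([-1.0, -2.0, -1.0, -0.5])   # gradient of the dual objective
y = np.array([1, 1, -1, -1])
alpha = np.zeros(4)
pair = most_violating_pair(grad, y, alpha, C=1.0)
```

Only the small subproblem over the selected pair is solved at each step; the quality of this selection is exactly what governs the convergence speed the abstract discusses.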
Torch: A Modular Machine Learning Software Library
2002
Abstract

Cited by 128 (22 self)
Many scientific communities have recently expressed a growing interest in machine learning algorithms, mainly due to the generally good results they provide compared to traditional statistical or AI approaches. However, these machine learning algorithms are often complex to implement and to use properly and efficiently. We therefore present in this paper a new machine learning software library in which most state-of-the-art algorithms have already been implemented and are available in a unified framework, so that scientists are able to use them, compare them, and even extend them. More interestingly, this library is freely available under a BSD license and can be retrieved on the web by everyone.
A Parallel Mixture of SVMs for Very Large Scale Problems
2002
Abstract

Cited by 108 (0 self)
Support Vector Machines (SVMs) are currently the state-of-the-art models for many classification problems, but they suffer from the complexity of their training algorithm, which is at least quadratic with respect to the number of examples.