Results 1–10 of 56,588
The Nature of Statistical Learning Theory
, 1999
Cited by 13236 (32 self)
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
On the Generalization Ability of Online Learning Algorithms
 IEEE Transactions on Information Theory
, 2001
Cited by 176 (7 self)
"... In this paper we show that online algorithms for classification and regression can be naturally used to obtain hypotheses with good data-dependent tail bounds on their risk. Our results are proven without requiring complicated concentration-of-measure arguments and they hold for arbitrary online learning algorithms. Furthermore, when applied to concrete online algorithms, our results yield tail bounds that in many cases are comparable or better than the best known bounds. ..."
Combinatorial Fusion with Online Learning Algorithms
"... Abstract—We give a range of techniques to effectively apply online learning algorithms, such as Perceptron and Winnow, to both online and batch fusion problems. Our first technique is a new way to combine the predictions of multiple hypotheses. These hypotheses are selected from the many hypothese ..."
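The snippet above names Perceptron and Winnow as the base on-line learners. For reference, here is a minimal sketch of the classic mistake-driven Perceptron update in its standard textbook form; this illustrates the kind of online learner the paper builds on, not the paper's fusion technique itself, and the function name and toy stream are illustrative:

```python
# Minimal online Perceptron (standard mistake-driven update),
# shown only to illustrate the class of online learners mentioned
# in the abstract -- not the paper's combination method.

def perceptron_train(stream, dim, lr=1.0):
    """Run the Perceptron update over a stream of (x, y) pairs,
    with labels y in {-1, +1}. Returns the final weight vector."""
    w = [0.0] * dim
    for x, y in stream:
        # Margin of the current hypothesis on this example.
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        # Update only when a mistake is made (margin <= 0).
        if margin <= 0:
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy stream: label is the sign of the first coordinate.
stream = [([1.0, 0.2], 1), ([-1.0, 0.1], -1),
          ([0.8, -0.3], 1), ([-0.9, -0.2], -1)]
w = perceptron_train(stream * 5, dim=2)
```

Because updates fire only on mistakes, the learner converges on separable data; batch "fusion" settings would then combine several such hypotheses.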
On the Generalization Ability of On-Line Learning Algorithms
"... Abstract—In this paper, it is shown how to extract a hypothesis with small risk from the ensemble of hypotheses generated by an arbitrary on-line learning algorithm run on an independent and identically distributed (i.i.d.) sample of data. Using a simple large deviation argument, we prove tight data ..."
Adaptive and Self-Confident On-Line Learning Algorithms
, 2000
Cited by 97 (8 self)
"... We study online learning in the linear regression framework. Most of the performance bounds for online algorithms in this framework assume a constant learning rate. To achieve these bounds the learning rate must be optimized based on a posteriori information. This information depends on the wh ..."
An Online Learning Algorithm for Blind Equalization
, 1996
Cited by 2 (1 self)
"... An online algorithm for blind equalization of an FIR channel is proposed by minimizing the mutual information of the output. The algorithm is closely related to the blind separation algorithm based on independent component analysis. It is assumed that the channel impulse responses are unknown and t ..."
Partition Nets: An Efficient On-Line Learning Algorithm
 Ninth International Conference on Advanced Robotics
, 1999
Cited by 3 (2 self)
"... Partition nets provide a fast method for learning sensorimotor mappings. They combine the generalizing power of neural networks with the "one shot" learning of instance-based search algorithms. Partition nets adjust receptive fields and allocate network weights "on the fly," in r ..."
Using Ancillary Statistics in On-Line Learning Algorithms
"... Neural networks are usually curved statistical models. They do not have finite dimensional sufficient statistics, so online learning on the model itself inevitably loses information. In this paper we propose a new scheme for training curved models, inspired by the ideas of ancillary statistics and ..."
Online learning algorithms for neural networks with IIR synapses
 In Proc. IEEE International Conference of Neural Networks (Perth
, 1995
Cited by 9 (7 self)
"... This paper is focused on the learning algorithms for dynamic multilayer perceptron neural networks where each neuron synapse is modelled by an infinite impulse response (IIR) filter (IIR MLP). In particular, the Backpropagation Through Time (BPTT) algorithm and its less demanding approximated online ..."
Smooth On-Line Learning Algorithms for Hidden Markov Models
, 1994
Cited by 56 (7 self)
"... The modeling and analysis of DNA and protein sequences in biology (Baldi et al. (1992) and (1993), Cardon and Stormo (1992), Haussler et al. (1992), Krogh et al. (1993), and references therein) and optical character recognition (Levin and Pieraccini (1993)). A first-order HMM is characterized by a set of states, an alphabet of symbols, a probability transition matrix T = (t_ij) and a probability emission matrix E = (e_ij). The parameter t_ij (resp. e_ij) represents the probability of transition from state i to state j (resp. of emission of symbol j from state i). HMMs can be viewed as adaptive systems: given a training sequence of symbols O, the parameters of an HMM can be iteratively adjusted in order to optimize the fit between the model and the data, as measu ..."
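The abstract spells out the first-order HMM parameterization: a set of states, an alphabet of symbols, a transition matrix T = (t_ij) and an emission matrix E = (e_ij). A minimal sketch of that parameterization, together with the standard forward-algorithm likelihood that such fitting procedures optimize (the paper's specific smooth on-line updates are not reproduced here; function and variable names are illustrative):

```python
# Sketch of the first-order HMM parameterization from the abstract:
# pi[i] initial state probabilities, T[i][j] transition i -> j,
# E[i][k] probability that state i emits symbol k.  The forward
# algorithm below computes the data likelihood P(obs | model), the
# standard fit criterion; the paper's on-line update rules are not shown.

def forward_likelihood(obs, pi, T, E):
    """P(obs | model) via the forward algorithm over symbol indices obs."""
    n = len(pi)
    # Initialization: alpha_1(i) = pi_i * e_{i, obs[0]}
    alpha = [pi[i] * E[i][obs[0]] for i in range(n)]
    # Recursion: alpha_{t+1}(j) = (sum_i alpha_t(i) * t_ij) * e_{j, o_{t+1}}
    for o in obs[1:]:
        alpha = [sum(alpha[i] * T[i][j] for i in range(n)) * E[j][o]
                 for j in range(n)]
    # Termination: sum over final states.
    return sum(alpha)

# Toy model: two states, two symbols.
pi = [0.5, 0.5]
T = [[0.9, 0.1], [0.2, 0.8]]
E = [[0.7, 0.3], [0.1, 0.9]]
p = forward_likelihood([0, 1, 0], pi, T, E)
```

Since pi and each row of T and E are stochastic, the likelihoods of all symbol sequences of a fixed length sum to one, which is a quick sanity check on the recursion.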