Results 1–10 of 1,330
Efficient noise-tolerant learning from statistical queries
JOURNAL OF THE ACM, 1998
"... In this paper, we study the problem of learning in the presence of classification noise in the probabilistic learning model of Valiant and its variants. In order to identify the class of “robust” learning algorithms in the most general way, we formalize a new but related model of learning from stat ..."
Cited by 353 (5 self)
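In the statistical query model this snippet refers to, the learner never sees labelled examples directly; it asks for estimates of expectations, accurate to an additive tolerance, which is why independent classification noise can be averaged out. A minimal simulation of such an oracle (the function names, the sampling strategy, and the sample-size bound are illustrative choices, not the paper's construction):

```python
import random

def sq_oracle(query, examples, labels, tol):
    """Answer a statistical query: estimate E[query(x, f(x))] over the
    example distribution to within additive tolerance tol, here by
    averaging over enough random samples (crude Chernoff-style bound)."""
    n = max(1, int(4 / tol ** 2))
    idx = [random.randrange(len(examples)) for _ in range(n)]
    return sum(query(examples[i], labels[i]) for i in idx) / n

# Illustrative query: how often does feature 0 agree with the label?
random.seed(0)
examples = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(1000)]
labels = [x[0] for x in examples]  # target concept = first feature
agree = sq_oracle(lambda x, y: float(x[0] == y), examples, labels, tol=0.1)
```

Because the learner only consumes such noisy-tolerant averages, a simulated oracle can answer them from noisy data as long as the noise rate is accounted for in the tolerance.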
PAC Learning from Positive Statistical Queries
Proc. 9th International Conference on Algorithmic Learning Theory (ALT '98), 1998
"... . Learning from positive examples occurs very frequently in natural learning. The PAC learning model of Valiant takes many features of natural learning into account, but in most cases it fails to describe such kind of learning. We show that in order to make the learning from positive data possible, ..."
Cited by 55 (3 self)
93]) and the constant-partition classification noise model ([Dec97]) are studied. We show that k-DNF and k-decision lists are learnable in both models, i.e., with far less information than is assumed in previously used algorithms. 1 Introduction The PAC learning model of Valiant ([Val84]) has become
Exemplar dynamics: Word frequency, lenition and contrast
2001
"... Exemplar theory was first developed as a model of similarity and classification in perception. In this paper, the theory is extended to model speech production as well as speech perception. Straightforward extension of the model provides a formal framework for thinking about the quantitative predict ..."
Cited by 243 (7 self)
Noise-tolerant learning, the parity problem, and the statistical query model
 J. ACM
"... We describe a slightly subexponential time algorithm for learning parity functions in the presence of random classification noise. This results in a polynomialtime algorithm for the case of parity functions that depend on only the first O(log n log log n) bits of input. This is the first known ins ..."
Cited by 165 (2 self)
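The gap this abstract describes is easy to see in code. Without noise, a parity function is just a linear system over GF(2), solvable by Gaussian elimination; random classification noise is exactly what defeats this approach and motivates the paper's subexponential algorithm. A minimal sketch of the noise-free baseline (the function names and dense-matrix encoding are illustrative):

```python
import numpy as np

def solve_parity_noiseless(X, y):
    # Each example gives one GF(2) equation: sum_i secret[i]*X[j,i] = y[j].
    # Reduce the augmented matrix [X | y] by Gaussian elimination over GF(2).
    A = np.column_stack([X, y]).astype(np.uint8) & 1
    n = X.shape[1]
    row, pivot_cols = 0, []
    for col in range(n):
        hit = next((r for r in range(row, len(A)) if A[r, col]), None)
        if hit is None:
            continue  # no pivot in this column; that secret bit stays 0
        A[[row, hit]] = A[[hit, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]  # row XOR = addition over GF(2)
        pivot_cols.append(col)
        row += 1
    secret = np.zeros(n, dtype=np.uint8)
    for i, col in enumerate(pivot_cols):
        secret[col] = A[i, n]
    return secret

# With noise-free labels the secret parity is recovered exactly; flip even
# a small fraction of labels and the eliminated system becomes useless.
rng = np.random.default_rng(0)
secret = rng.integers(0, 2, 16, dtype=np.uint8)
X = rng.integers(0, 2, (100, 16), dtype=np.uint8)
y = (X @ secret) & 1
recovered = solve_parity_noiseless(X, y)
```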
Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification
1997
"... Abstract. Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be ..."
Cited by 150 (1 self)
be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution and logistic or probit models
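The matrix computations mentioned here are the standard Gaussian-process predictive equations. A minimal sketch of the posterior mean, assuming a squared-exponential covariance with illustrative hyperparameter values (the paper's Markov chain sampling of hyperparameters is not shown):

```python
import numpy as np

def gp_posterior_mean(X_train, y_train, X_test, length_scale=1.0, noise_var=1e-6):
    """Posterior mean of GP regression: K_* (K + noise_var * I)^{-1} y."""
    def rbf(A, B):
        # Squared-exponential covariance between the rows of A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)

    K = rbf(X_train, X_train) + noise_var * np.eye(len(X_train))
    # The O(n^3) linear solve below is the cost that keeps this direct
    # approach feasible only up to roughly a thousand cases.
    return rbf(X_test, X_train) @ np.linalg.solve(K, y_train)

# With near-zero noise the posterior mean interpolates the training data.
X = np.linspace(0.0, 2 * np.pi, 20)[:, None]
y = np.sin(X).ravel()
mu = gp_posterior_mean(X, y, X)
```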
Magnetic resonance image tissue classification using a partial volume model
NEUROIMAGE, 2001
"... We describe a sequence of lowlevel operations to isolate and classify brain tissue within T1weighted magnetic resonance images (MRI). Our method first removes nonbrain tissue using a combination of anisotropic diffusion filtering, edge detection, and mathematical morphology. We compensate for imag ..."
Cited by 137 (6 self)
each estimate point. The measurement model uses mean tissue intensity and noise variance values computed from the global image and a multiplicative bias parameter that is estimated for each region during the histogram fit. Voxels in the intensity-normalized image are then classified into six tissue
Predicting Time Series with Support Vector Machines
1997
"... . Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an ffl insensitive loss and (ii) Huber's robust loss function and discuss how to choose the regulariza ..."
Cited by 189 (13 self)
the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey-Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show excellent performance. In case (b) the Support Vector approach improves
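The two cost functions named in this abstract are simple to state. A sketch of both, with illustrative parameter values:

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    """Vapnik's epsilon-insensitive loss: errors inside the eps tube are
    free; larger errors grow linearly."""
    return np.maximum(np.abs(residual) - eps, 0.0)

def huber(residual, delta=1.0):
    """Huber's robust loss: quadratic near zero, linear beyond delta, so
    large outliers are penalised less than under squared error."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

r = np.array([0.05, 0.5, 3.0])
ei = eps_insensitive(r)   # [0.0, 0.4, 2.9]
hu = huber(r)             # [0.00125, 0.125, 2.5]
```

The flat region of the ε-insensitive loss is what produces the sparse support-vector solution; the linear tails of both losses are what buys robustness to the noise in the data.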
Incremental algorithms for hierarchical classification
Journal of Machine Learning Research, 2004
"... We study the problem of classifying data in a given taxonomy when classifications associated with multiple and/or partial paths are allowed. We introduce a new algorithm that incrementally learns a linearthreshold classifier for each node of the taxonomy. A hierarchical classification is obtained b ..."
Cited by 109 (9 self)
additional mistake occurring in the subtree of that node. Making no assumptions on the mechanism generating the data instances, and assuming a linear noise model for the labels, we bound the H-loss of our online algorithm in terms of the H-loss of a reference classifier knowing the true parameters
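The H-loss charging rule this snippet alludes to (a mislabelled node counts only if all of its ancestors were labelled correctly, so each root-to-leaf path contributes at most one mistake) can be sketched as follows; the dict-based tree encoding and unit costs are illustrative, not the paper's notation:

```python
def h_loss(true, pred, parent, cost=None):
    """Hierarchical loss: charge cost[v] when node v is mislabelled and
    every ancestor of v is labelled correctly. parent[v] is None at a root."""
    cost = cost or {v: 1.0 for v in true}

    def ancestors_ok(v):
        p = parent[v]
        while p is not None:
            if true[p] != pred[p]:
                return False
            p = parent[p]
        return True

    return sum(cost[v] for v in true if true[v] != pred[v] and ancestors_ok(v))

# Tiny taxonomy: 0 is the root, 1 and 2 its children, 3 a child of 1.
parent = {0: None, 1: 0, 2: 0, 3: 1}
true = {0: 1, 1: 1, 2: 0, 3: 1}
pred = {0: 1, 1: 0, 2: 0, 3: 0}
# Node 1 is charged; node 3 is also wrong, but its ancestor 1 already
# was, so it is not charged again: the H-loss is 1.0.
loss = h_loss(true, pred, parent)
```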
Multihop diversity in wireless relaying channels
 IEEE Trans. on Communications
"... Abstract—This paper presents theoretical characterizations and analysis for the physical layer of multihop wireless communications channels. Four channel models are considered and developed: the decoded relaying multihop channel; the amplified relaying multihop channel; the decoded relaying multihop ..."
Cited by 149 (23 self)
multihop diversity channel; and the amplified relaying multihop diversity channel. Two classifications are discussed: decoded relaying versus amplified relaying, and multihop channels versus multihop diversity channels. The channel models are compared, through analysis and simulations, with the “single-hop
Learning and classification of complex dynamics
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
"... AbstractÐStandard, exact techniques based on likelihood maximization are available for learning AutoRegressive Process models of dynamical processes. The uncertainty of observations obtained from real sensors means that dynamics can be observed only approximately. Learning can still be achieved via ..."
Cited by 89 (2 self)
dynamics are studied via visually observed juggling; plausible dynamical models have been found to emerge from the learning process, and accurate classification of motion has resulted. In practice, EMC learning is computationally burdensome and the paper concludes with some discussion of computational