Results 1–10 of 6,092
Separating Distribution-Free and Mistake-Bound Learning Models over the Boolean Domain
SIAM J. Comput., 1990
"... Two of the most commonly used models in computational learning theory are the distributionfree model in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistakebound model in which examples are presented in an arbitrary order. Over the Boolean domain , it is ..."
Cited by 23 (2 self)
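For reference, a standard formulation of the two models contrasted in this abstract (a sketch in our notation, not quoted from the paper): in the distribution-free (PAC) model the learner must, for every target c in the class C, every distribution D, and every \epsilon, \delta > 0, output a hypothesis h satisfying

\Pr_{x \sim D}[\, h(x) \neq c(x) \,] \le \epsilon \quad \text{with probability at least } 1 - \delta,

while in the absolute mistake-bound model the learner predicts online on an arbitrarily ordered sequence x_1, x_2, \ldots and is charged

\mathrm{MB}(A, C) = \max_{c \in C} \; \max_{x_1, x_2, \ldots} \; \#\{\, t : \hat{y}_t \neq c(x_t) \,\},

the worst-case number of mistakes, where \hat{y}_t is A's prediction on x_t.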
Learnability and the Vapnik-Chervonenkis dimension
1989
"... Valiant’s learnability model is extended to learning classes of concepts defined by regions in Euclidean space E”. The methods in this paper lead to a unified treatment of some of Valiant’s results, along with previous results on distributionfree convergence of certain pattern recognition algorith ..."
Cited by 727 (22 self)
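As background for the title (standard definitions, not quoted from the paper): a set S is shattered by a concept class C if every labeling of S is realized by some concept in C, and \mathrm{VCdim}(C) is the size of the largest shattered set. The characteristic sample-complexity bound from this line of work states that

m = O\!\left( \frac{1}{\epsilon} \log \frac{1}{\delta} + \frac{d}{\epsilon} \log \frac{1}{\epsilon} \right), \qquad d = \mathrm{VCdim}(C),

examples suffice for distribution-free learning to accuracy \epsilon with confidence 1 - \delta, and distribution-free learnability holds exactly when d is finite.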
Learning in the Presence of Malicious Errors
SIAM Journal on Computing, 1993
"... In this paper we study an extension of the distributionfree model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an advers ..."
Cited by 184 (12 self)
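The malicious-error oracle studied here can be sketched as follows (our notation): for an error rate \beta, each call to the example oracle returns

(x, c(x)) \text{ with } x \sim D \quad \text{with probability } 1 - \beta, \qquad (x, \ell) \text{ chosen by an adversary} \quad \text{with probability } \beta,

so a \beta fraction of the examples may be arbitrary point-label pairs about which no distributional or labeling assumptions hold.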
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
Psychological Methods, 1998
"... This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distributionfree (ADF)based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustn ..."
Cited by 543 (0 self)
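As one concrete instance of the indices this study evaluates (the formula is the standard one, not taken from the snippet), the root mean square error of approximation is

\mathrm{RMSEA} = \sqrt{ \max\!\left( \frac{\chi^2 - df}{df\,(N - 1)},\; 0 \right) },

where \chi^2 is the model test statistic, df its degrees of freedom, and N the sample size; the study asks how such indices respond to misspecification across estimators, sample sizes, and distributions.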
Efficient Distribution-Free Learning of Probabilistic Concepts
Journal of Computer and System Sciences, 1993
"... In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behaviorthus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic c ..."
Cited by 214 (8 self)
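The model can be stated precisely (a sketch in our notation): a probabilistic concept is a function c : X \to [0, 1], and an example x \sim D receives the label

y = 1 \text{ with probability } c(x), \qquad y = 0 \text{ with probability } 1 - c(x),

so the same input may legitimately carry either label, and the learner's goal is to approximate c itself rather than a deterministic classification rule.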
Modeling and Forecasting Realized Volatility
2002
"... this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly rightskewed, the distributions of the logarithms of realized volatilities are a ..."
Cited by 549 (50 self)
"... frequency models, we find that our simple Gaussian VAR forecasts generally produce superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture ..."
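The lognormal-normal mixture referred to can be sketched as follows (our simplification, assuming a zero conditional mean): returns are conditionally Gaussian given realized volatility,

r_t \mid RV_t \sim N(0, RV_t),

and the logarithmic realized volatilities, being approximately Gaussian, are modeled by a Gaussian vector autoregression,

\log RV_t = A_0 + \sum_{i=1}^{p} A_i \log RV_{t-i} + u_t, \qquad u_t \sim N(0, \Sigma),

so the implied unconditional return distribution is a mixture of normals with lognormal mixing weights.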
The strength of weak learnability
Machine Learning, 1990
"... This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distributionfree (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a Source of examples of the unknown concept, the learner with high prob ..."
Cited by 871 (26 self)
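The distinction the paper resolves can be written out (standard definitions, not quoted from the abstract): a class is strongly learnable if for every \epsilon, \delta > 0 the learner outputs h with

\Pr_{x \sim D}[\, h(x) \neq c(x) \,] \le \epsilon \quad \text{with probability at least } 1 - \delta,

and only weakly learnable if this is guaranteed just for some fixed \epsilon = \tfrac{1}{2} - \gamma, \gamma > 0, i.e., accuracy slightly better than random guessing; the paper shows the two notions are equivalent, since a weak learner can be boosted into a strong one.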
Cryptographic Limitations on Learning Boolean Formulae and Finite Automata
Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, 1989
"... In this paper we prove the intractability of learning several classes of Boolean functions in the distributionfree model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntact ..."
Cited by 347 (14 self)
A View of the EM Algorithm That Justifies Incremental, Sparse, and Other Variants
Learning in Graphical Models, 1998
"... . The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the d ..."
Cited by 993 (18 self)
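The function in question can be written out in its standard form (a sketch, not a quotation): for observed data x, unobserved variables z, and any distribution q over z,

F(q, \theta) = \mathbb{E}_{q}[\log p(x, z \mid \theta)] + H(q) = \log p(x \mid \theta) - \mathrm{KL}(\, q(z) \,\|\, p(z \mid x, \theta) \,),

so the E step, q \leftarrow p(z \mid x, \theta), maximizes F over q for fixed \theta, and the M step, \theta \leftarrow \arg\max_{\theta} \mathbb{E}_{q}[\log p(x, z \mid \theta)], maximizes F over \theta for fixed q; viewing EM as coordinate ascent on F is what licenses incremental and sparse variants that only partially update either argument.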
A fast learning algorithm for deep belief nets
Neural Computation, 2006
"... We show how to use “complementary priors ” to eliminate the explaining away effects that make inference difficult in denselyconnected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a ..."
Cited by 970 (49 self)
"... very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modelled by long ravines in the free-energy ..."
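The free energy mentioned here comes from the energy-based formulation of the network's building block, the restricted Boltzmann machine, whose standard joint energy over visible units v and hidden units h is (a sketch, not quoted from the paper)

E(v, h) = -b^{\top} v - c^{\top} h - v^{\top} W h, \qquad P(v, h) \propto e^{-E(v, h)},

and the fast greedy algorithm trains one such layer at a time, using each trained layer's hidden activities as the data for the next layer.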