Results 1 - 10 of 4,199

Hypothesis Class: H

by Ym X
"... {0, 1}-valued random variables X1,..., Xn are drawn independently each from Bernoulli distribution with parameter p = 0.1. Define Pn: = P ( 1n ∑n i=1Xi ≤ 0.2). (a) For n = 1 to 30 calculate and plot the below in the same plot (see [1, section 6.1] for definition of Hoeffding and Bernstein inequaliti ..."
Abstract - Add to MetaCart
{0, 1}-valued random variables X1,..., Xn are drawn independently each from Bernoulli distribution with parameter p = 0.1. Define Pn: = P ( 1n ∑n i=1Xi ≤ 0.2). (a) For n = 1 to 30 calculate and plot the below in the same plot (see [1, section 6.1] for definition of Hoeffding and Bernstein inequalities): i. Exact value of Pn (binomial distribution). ii. Normal approximation for Pn. iii. Hoeffding inequality bound on Pn. iv. Bernstein inequality bound on Pn. (b) For n = 30 to 300 calculate and plot the below in the same plot: i. Normal approximation for Pn. ii. Hoeffding inequality bound on Pn. iii. Bernstein inequality bound on Pn. 2. VC Bound:
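
A minimal sketch of part (a), under one assumption worth flagging: with p = 0.1, the Hoeffding and Bernstein inequalities bound the upper-deviation tail, so the code below computes P((1/n) ∑ X_i ≥ 0.2) and its bounds, and evaluates the exact binomial value and the normal approximation for that same tail so the four curves are comparable. This is an illustration, not the assignment's reference solution.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

p, t = 0.1, 0.1          # Bernoulli parameter; threshold 0.2 = p + t
var = p * (1 - p)        # per-variable variance, used by the Bernstein bound
ns = np.arange(1, 31)    # part (a): n = 1 to 30

# Exact tail P(Binomial(n, p) >= ceil(0.2 n)); written n/5 to avoid 0.2*n float slop
exact = np.array([stats.binom.sf(np.ceil(n / 5) - 1, n, p) for n in ns])
# Normal (CLT) approximation to the same tail
normal = stats.norm.sf(t * np.sqrt(ns / var))
# Hoeffding: P(mean - p >= t) <= exp(-2 n t^2) for [0, 1]-valued variables
hoeffding = np.exp(-2 * ns * t ** 2)
# Bernstein: P(mean - p >= t) <= exp(-n t^2 / (2 var + 2 t / 3))
bernstein = np.exp(-ns * t ** 2 / (2 * var + 2 * t / 3))

for label, curve in [("exact", exact), ("normal approx.", normal),
                     ("Hoeffding", hoeffding), ("Bernstein", bernstein)]:
    plt.plot(ns, curve, label=label)
plt.xlabel("n"); plt.ylabel("tail probability"); plt.legend(); plt.show()
```

Part (b) is the same loop over n = 30 to 300 without the exact curve. Note that Bernstein is tighter than Hoeffding here because the variance p(1 - p) = 0.09 is small.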

Learning Agents with Evolving Hypothesis Classes

by Peter Sunehag, Marcus Hutter
"... Abstract. It has recently been shown that a Bayesian agent with a universal hypothesis class resolves most induction problems discussed in the philosophy of science. These ideal agents are, however, neither practical nor a good model for how real science works. We here introduce a framework for lear ..."
Abstract - Add to MetaCart
Abstract. It has recently been shown that a Bayesian agent with a universal hypothesis class resolves most induction problems discussed in the philosophy of science. These ideal agents are, however, neither practical nor a good model for how real science works. We here introduce a framework

Multi-instance learning with any hypothesis class

by Sivan Sabato, Naftali Tishby, Nicolas Vayatis, 2011
"... In the supervised learning setting termed Multiple-Instance Learning (MIL), the examples are bags of instances, and the bag label is a function of the labels of its instances. Typically, this function is the Boolean OR. The learner observes a sample of bags and the bag labels, but not the instance l ..."
Abstract - Cited by 5 (0 self) - Add to MetaCart
provide a unified theoretical analysis for MIL, which holds for any underlying hypothesis class, regardless of a specific application or problem domain. We show that the sample complexity of MIL is only poly-logarithmically dependent on the size of the bag, for any underlying hypothesis class. In addition
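
The OR-aggregation the abstract describes fits in a few lines; a minimal sketch, where the instance-level classifier h and the toy bags are hypothetical stand-ins, not anything from the paper:

```python
from typing import Callable, Sequence

def bag_label(h: Callable[[float], bool], bag: Sequence[float]) -> bool:
    """A bag is positive iff some instance is positive (Boolean OR)."""
    # The MIL learner sees only this aggregated label, never h(x) per instance.
    return any(h(x) for x in bag)

h = lambda x: x > 0.5                  # hypothetical instance-level hypothesis
print(bag_label(h, [0.1, 0.2, 0.9]))   # True: one instance fires
print(bag_label(h, [0.1, 0.3]))        # False: no instance fires
```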

The strength of weak learnability

by Robert E. Schapire - Machine Learning, 1990
"... This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a Source of examples of the unknown concept, the learner with high prob ..."
Abstract - Cited by 871 (26 self) - Add to MetaCart
This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a Source of examples of the unknown concept, the learner with high
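
The paper's headline result is that weak learnability implies strong learnability. As an illustration only, the sketch below uses the later AdaBoost procedure (Freund and Schapire, 1995) with one-dimensional threshold stumps as weak hypotheses; this is not the original 1990 majority-vote construction, and the eight-point dataset is a made-up assumption.

```python
import numpy as np

def boost(X, y, rounds=20):
    """AdaBoost over 1-D threshold stumps; X: features, y in {-1, +1}."""
    D = np.full(len(X), 1.0 / len(X))        # distribution over examples
    ensemble = []                            # list of (alpha, threshold, sign)
    for _ in range(rounds):
        best = None                          # exhaustive search = the weak learner
        for th in np.unique(X):
            for s in (1, -1):
                pred = np.where(X > th, s, -s)
                err = D[pred != y].sum()     # weighted training error
                if best is None or err < best[0]:
                    best = (err, th, s, pred)
        err, th, s, pred = best
        err = min(max(err, 1e-12), 1 - 1e-12)     # guard the logarithm
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak hypothesis
        ensemble.append((alpha, th, s))
        D *= np.exp(-alpha * y * pred)            # upweight the mistakes
        D /= D.sum()
    return ensemble

def predict(ensemble, X):
    return np.sign(sum(a * np.where(X > th, s, -s) for a, th, s in ensemble))

X = np.array([0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9])
y = np.array([-1, -1, 1, 1, -1, 1, 1, 1])     # not separable by any single stump
print((predict(boost(X, y), X) == y).mean())  # training accuracy of the ensemble
```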

Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm

by Nick Littlestone - Machine Learning, 1988
"... learning Boolean functions, linear-threshold algorithms Abstract. Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each ex ..."
Abstract - Cited by 773 (5 self) - Add to MetaCart
example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of functions, learning
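
A minimal sketch of this mistake-driven protocol with a Winnow-style multiplicative update, the kind of linear-threshold algorithm the paper introduces; the constants (alpha = 2, threshold = n) are common textbook choices rather than anything taken from the paper, and the target disjunction is a toy assumption.

```python
import numpy as np

def winnow(stream, n, alpha=2.0):
    theta = n                           # threshold on the weighted sum
    w = np.ones(n)                      # one weight per Boolean attribute
    mistakes = 0
    for x, y in stream:                 # x: 0/1 attribute vector, y: 0/1 label
        pred = int(w @ x >= theta)      # respond according to current hypothesis
        if pred != y:                   # update only when a mistake is made
            mistakes += 1
            # promote active attributes on false negatives, demote on false positives
            w *= alpha ** (x if y == 1 else -x)
    return w, mistakes

# Target concept: attribute 1 OR attribute 3, over 20 attributes, most irrelevant.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 20))
stream = [(x, int(x[0] or x[2])) for x in X]
w, m = winnow(stream, n=20)
print("mistakes:", m)   # stays small: roughly O(k log n) for k-literal disjunctions
```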

Generalizing apprenticeship learning across hypothesis classes

by Thomas J. Walsh, Kaushik Subramanian, Michael L. Littman, Carlos Diuk - In ICML, 2010
"... All in-text references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately. ..."
Abstract - Cited by 19 (10 self) - Add to MetaCart
All in-text references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately.

Motivation through the Design of Work: Test of a Theory

by J. Richard Hackman, Greg R. Oldham - Organizational Behavior and Human Performance, 1976
"... A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motiv ..."
Abstract - Cited by 622 (2 self) - Add to MetaCart
A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally

Loopy belief propagation for approximate inference: An empirical study

by Kevin P. Murphy, Yair Weiss, Michael I. Jordan - Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" -the use of Pearl's polytree algorithm in a Bayesian network with loops -can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Abstract - Cited by 676 (15 self) - Add to MetaCart
nothing directly to do with coding or decoding will show that in some sense belief propagation "converges with high probability to a near-optimum value" of the desired belief on a class of loopy DAGs Progress in the analysis of loopy belief propagation has been made for the case of networks
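
A minimal sketch of the technique under study: sum-product message passing run on a graph that contains a loop. The 3-cycle, the potentials, and the fixed iteration count are illustrative assumptions, not the networks from the paper's experiments.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0)]                 # a graph with a loop
psi = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}  # pairwise potentials
phi = [np.array([1.0, 1.0]) for _ in range(3)]   # unary potentials (uniform)
phi[0] = np.array([3.0, 1.0])                    # weak evidence on node 0

# m[(i, j)]: message node i sends node j, a function of x_j
m = {(i, j): np.ones(2) for i, j in edges + [(j, i) for i, j in edges]}

for _ in range(50):                              # iterate until (hopefully) converged
    new = {}
    for (i, j) in m:
        # orient the pairwise potential so rows index x_i, columns x_j
        P = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
        incoming = phi[i].copy()
        for (k, dst) in m:                       # all messages into i, except from j
            if dst == i and k != j:
                incoming *= m[(k, i)]
        msg = incoming @ P                       # sum over x_i
        new[(i, j)] = msg / msg.sum()            # normalize for numerical stability
    m = new

for i in range(3):                               # belief = unary * all incoming
    b = phi[i].copy()
    for (k, dst) in m:
        if dst == i:
            b *= m[(k, i)]
    print(f"belief at node {i}:", b / b.sum())
```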

Machine Learning with Data Dependent Hypothesis Classes

by Adam Cannon, J. Mark Ettinger, Peter Bartlett - Journal of Machine Learning Research, 2002
"... We extend the VC theory of statistical learning to data dependent spaces of classifiers. ..."
Abstract - Cited by 11 (0 self) - Add to MetaCart
We extend the VC theory of statistical learning to data dependent spaces of classifiers.

Multi-Instance Learning with Any Hypothesis Class

by Sivan Sabato, Naftali Tishby
"... ar ..."
Abstract - Add to MetaCart
Abstract not found