CiteSeerX

Results 1 - 10 of 3,628

AS OF 6/22/2015 ALL PRIOR VERSIONS ARE OBSOLETE

by unknown authors
"... U.S. COMMODITY FUTURES TRADING COMMISSION Division of Market Oversight ..."

Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories

by Li Fei-Fei, 2004
"... Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner, and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been ..."
Cited by 784 (16 self)

A fast learning algorithm for deep belief nets

by Geoffrey E. Hinton, Simon Osindero - Neural Computation, 2006
"... We show how to use “complementary priors” to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a ..."
Cited by 970 (49 self)
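The greedy, layer-by-layer learning this abstract describes is usually realized by stacking restricted Boltzmann machines, each trained with one step of contrastive divergence (CD-1) and feeding its hidden activations to the next layer. A minimal sketch of that idea; the layer sizes, learning rate, and random data here are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.1, epochs=5):
    """Train one RBM with CD-1; return weights and hidden activations."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        # Positive phase: sample hidden units given the data.
        h_prob = sigmoid(data @ W)
        h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one Gibbs step back to a visible reconstruction.
        v_prob = sigmoid(h_samp @ W.T)
        h_prob2 = sigmoid(v_prob @ W)
        # CD-1 update: data correlations minus reconstruction correlations.
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob2) / len(data)
    return W, sigmoid(data @ W)

# Greedy layer-wise stacking: each RBM's hidden activations
# become the training "data" for the layer above it.
X = (rng.random((200, 20)) < 0.5).astype(float)
weights, layer_input = [], X
for n_hidden in (16, 8):
    W, layer_input = train_rbm(layer_input, n_hidden)
    weights.append(W)
```

Each layer is trained in isolation, which is what makes the procedure fast and greedy; the paper's full method additionally fine-tunes the stacked network.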

Finding structure in time

by Jeffrey L. Elman - COGNITIVE SCIENCE, 1990
"... Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a ..."
Cited by 2071 (23 self)
"... of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands ..."
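The implicit representation of time the excerpt describes is the simple recurrent ("Elman") network: hidden units receive the current input together with a copy of their own previous state via context units, so prior internal states shape current processing. A minimal forward-pass sketch; the layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden = 4, 8
# Hidden units see the current input plus the previous hidden state
# (the "context" units), so there are two weight matrices.
W_in = 0.1 * rng.standard_normal((n_in, n_hidden))
W_ctx = 0.1 * rng.standard_normal((n_hidden, n_hidden))

def run_sequence(inputs):
    """Return hidden states for a sequence; context carries the prior state."""
    h = np.zeros(n_hidden)                 # context starts empty
    states = []
    for x in inputs:
        h = np.tanh(x @ W_in + h @ W_ctx)  # prior state shapes the current one
        states.append(h)
    return np.stack(states)

seq = rng.standard_normal((5, n_in))
states = run_sequence(seq)
```

Because the context units are just a copy of the last hidden state, time is never represented spatially; it shows up only in how the same input is processed differently depending on what came before.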

A comparison of Bayesian methods for haplotype reconstruction from population genotype data

by Matthew Stephens, Peter Donnelly - Am J Hum Genet, 2003
"... In this report, we compare and contrast three previously published Bayesian methods for inferring haplotypes from genotype data in a population sample. We review the methods, emphasizing the differences between them in terms of both the models ("priors") they use and the computational ..."
Cited by 557 (7 self)

Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm

by Yongyue Zhang, Michael Brady, Stephen Smith - IEEE TRANSACTIONS ON MEDICAL IMAGING, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation ..."
Cited by 639 (15 self)
"... that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported ..."

Loopy belief propagation for approximate inference: An empirical study

by Kevin P Murphy, Yair Weiss, Michael I Jordan - Proceedings of Uncertainty in AI, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation" - the use of Pearl's polytree algorithm in a Bayesian network with loops - can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance ..."
Cited by 676 (15 self)
"... To summarize, what is currently known about loopy propagation is that (1) it works very well in an error-correcting code setting and (2) there are conditions for a single-loop network for which it can be guaranteed ..."

Gibbs Sampling Methods for Stick-Breaking Priors

by Hemant Ishwaran, Lancelot F. James
"... In this paper we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Polya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method ..."
Cited by 388 (19 self)
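The stick-breaking priors these samplers target follow the standard construction: repeatedly break off Beta-distributed fractions of a unit-length stick, so the resulting weights sum to one. A minimal truncated-sampling sketch; the Beta(1, α) choice gives the Dirichlet-process special case, and the α value and truncation level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def stick_breaking_weights(alpha, n_atoms):
    """Draw truncated stick-breaking weights w_k = b_k * prod_{j<k}(1 - b_j)."""
    b = rng.beta(1.0, alpha, size=n_atoms)  # fraction of the remaining stick
    b[-1] = 1.0                             # truncate: last break takes the rest
    # remaining[k] = length of stick left before the k-th break.
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - b[:-1])))
    return b * remaining

w = stick_breaking_weights(alpha=2.0, n_atoms=50)
```

Setting the final Beta draw to 1 makes the truncated weights sum to one exactly, which is the usual device for finite approximations of these infinite-dimensional priors.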

Comparability of Scores on the New and Prior Versions of the SAT Reasoning Test™

by Jennifer L. Kobrin, Gerald J. Melican, 2005
"... strengthen its alignment with curriculum and instructional practices in high school and college. An important assumption underlying the changes to the SAT® was that scores on ..."
Cited by 1 (0 self)

This paper replaces all prior versions that have circulated under the same title. Its evolution has been greatly

by Amitabh Chandra, Josh Angrist, David Autor, Mark Berger, Dan Black, Chris Bollinger, Charles Brown, Ken Chay, Richard Freeman, Claudia Goldin, Lawrence Katz, 2003
"... Hampshire, Purdue, RAND and Washington University. The views expressed in this paper are not necessarily those of any institution with which I am affiliated, and I am solely responsible for any errors. The views expressed herein are those of the authors and not necessarily those of the National Bureau of Economic Research. ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University