Results 1-10 of 57,200
A simplified approach to the Hájek-Le Cam bound, 1995
Abstract: Introduction. The lower Hájek-Le Cam bound is a central statement of asymptotic decision theory. The traditional way (see Strasser [4], Le Cam [1]) to establish this statement is carried out as follows. Using the concept of "deficiency," a metric is introduced which describes the so-called ...
A COMPLEMENT TO LE CAM’S THEOREM
Abstract: This paper examines asymptotic equivalence in the sense of Le Cam between density estimation experiments and the accompanying Poisson experiments. The significance of asymptotic equivalence is that all asymptotically optimal statistical procedures can be carried over from one experiment to the other ...
Cited by 1 (0 self)
Some thoughts on Le Cam's statistical decision theory, 2000
Abstract: The paper contains some musings about the abstractions introduced by Lucien Le Cam into the asymptotic theory of statistical inference and decision theory. A short, self-contained proof of a key result (existence of randomizations via convergence in distribution of likelihood ratios), and an outline ...
Cited by 3 (0 self)
Cauchy Example for Le Cam Made Simple, 2005
Abstract: We do a very simple one-parameter model for which everything is trivial, the Cauchy location model. This model is mildly notorious among theoreticians because the likelihood is multimodal and the multimodality does not go away as n goes to infinity (Reeds, 1985). It’s not that there is any theory ...
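The multimodality described in the Cauchy abstract above is easy to observe numerically. A minimal sketch (not from the paper; the seed, sample size, and grid range are arbitrary illustrative choices): draw a Cauchy sample, evaluate the location log-likelihood on a grid, and count its local maxima.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw n observations from a standard Cauchy location model (true location 0).
n = 30
x = rng.standard_cauchy(n)

def log_likelihood(theta, data):
    """Cauchy location log-likelihood at each grid point theta:
    sum_i log f(x_i - theta), f the standard Cauchy density."""
    return -np.sum(np.log(np.pi * (1.0 + (data[:, None] - theta) ** 2)), axis=0)

# Evaluate on a grid and count strict interior local maxima.
grid = np.linspace(-20.0, 20.0, 4001)
ll = log_likelihood(grid, x)
local_max = np.flatnonzero((ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])) + 1
print(f"{local_max.size} local maxima of the log-likelihood on the grid")
```

Rerunning with different seeds (or larger n) typically still produces several local maxima, which is exactly the phenomenon attributed to Reeds (1985) above.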
Fisher Example for Le Cam Made Simple, 2005
Abstract: We do, rather inefficiently, an example from quantitative genetics, using a model originally proposed by Fisher (1918) that directly began modern quantitative genetics and indirectly led to much of modern regression and ...
A COMPLEMENT TO LE CAM’S THEOREM
© Institute of Mathematical Statistics, 2007
Abstract: This paper examines asymptotic equivalence in the sense of Le Cam between density estimation experiments and the accompanying Poisson experiments. The significance of asymptotic equivalence is that all asymptotically optimal statistical procedures can be carried over from one experiment to the other ...
Le Cam meets LeCun: Deficiency and Generic Feature Learning
Abstract: "Deep Learning" methods attempt to learn generic features in an unsupervised fashion from a large unlabelled data set. These generic features should perform as well as the best hand-crafted features for any learning problem that makes use of this data. We provide a definition of generic features, characterize when it is possible to learn them, and provide algorithms closely related to the deep belief networks and autoencoders of deep learning. In order to do so we use the notion of deficiency distance and illustrate its value in studying certain general learning problems.
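The deficiency distance invoked in the abstract above can be made concrete for finite experiments, where randomizations are just Markov kernels. A toy sketch (the matrices P and Q are arbitrary illustrative choices, not taken from the paper): approximate the one-sided deficiency δ(E, F) of two binary experiments by searching over 2x2 kernels for the one minimizing the worst-case total-variation error.

```python
import numpy as np

# Two binary experiments indexed by theta in {0, 1}:
# experiment E has sampling distributions P[theta], F has Q[theta] (rows).
P = np.array([[0.9, 0.1],   # P_0
              [0.2, 0.8]])  # P_1
Q = np.array([[0.8, 0.2],   # Q_0
              [0.3, 0.7]])  # Q_1

def deficiency(P, Q, steps=201):
    """Approximate delta(E, F) = inf over Markov kernels M of
    max_theta TV(P_theta M, Q_theta), by brute-force grid search
    over the two free row parameters of a 2x2 stochastic matrix."""
    best = np.inf
    for a in np.linspace(0.0, 1.0, steps):      # M row 0 = (a, 1-a)
        for b in np.linspace(0.0, 1.0, steps):  # M row 1 = (b, 1-b)
            M = np.array([[a, 1.0 - a], [b, 1.0 - b]])
            tv = 0.5 * np.abs(P @ M - Q).sum(axis=1).max()
            best = min(best, tv)
    return best

print(f"delta(E, F) is approximately {deficiency(P, Q):.3f}")
```

For these particular P and Q a kernel reproducing Q exactly happens to exist (solving the two linear equations gives row parameters inside [0, 1]), so the search should return a value near zero: E is at least as informative as F.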