Results 1–10 of 12,462
Normal Approximation
Abstract: Let the likelihood function be L(θ; y) = c exp(−f(θ; y)) (1) for some arbitrary bivariate function f and some constant c. It would be convenient to have a normal likelihood function, in which case the posterior can be computed either analytically (e.g. if the prior is also normal) or by reso …
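As a concrete illustration of this setup, a normal (Laplace-style) approximation replaces exp(−f) with a Gaussian centred at the mode of the likelihood, with variance given by the inverse curvature of f there. The sketch below plugs in a hypothetical Poisson negative log-likelihood for f; the snippet itself leaves f arbitrary:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Example negative log-likelihood f(theta; y): Poisson(theta) with observed y.
# (A hypothetical choice -- the abstract leaves f arbitrary.)
y = 7
f = lambda t: t - y * np.log(t)  # -log L(theta; y) up to a constant

# Mode of the likelihood (the MLE) and curvature of f at the mode.
res = minimize_scalar(f, bounds=(1e-6, 50), method="bounded")
theta_hat = res.x
h = 1e-4
fpp = (f(theta_hat + h) - 2 * f(theta_hat) + f(theta_hat - h)) / h**2

# Normal approximation: L(theta; y) ~ N(theta_hat, 1 / f''(theta_hat)).
sigma2 = 1.0 / fpp
print(theta_hat, sigma2)  # mode ~= y = 7, variance ~= y = 7
```

For this choice the mode is y and the approximating variance is also y, matching the usual Poisson MLE asymptotics.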
Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms
IEEE Transactions on Information Theory, 2005
Cited by 585 (13 self)
Abstract: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems t … the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a “valid” or “maxent-normal” approximation. We describe the relationship between four different methods that can be used …
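On a tree-structured factor graph, standard sum-product BP already computes marginals exactly; the sketch below (with arbitrary, made-up pairwise potentials) checks that against brute-force summation. It illustrates plain BP only, not the Bethe/GBP constructions the paper develops:

```python
import numpy as np

# Pairwise chain x1 - x2 - x3 over binary states, with hypothetical
# potentials psi12, psi23. On a tree, sum-product BP gives exact marginals.
psi12 = np.array([[1.0, 0.5], [0.5, 2.0]])
psi23 = np.array([[1.5, 1.0], [0.2, 1.0]])

# Messages sent toward x2 from each leaf (leaves start with uniform messages).
m1_to_2 = psi12.T @ np.ones(2)   # sums over x1 of psi12[x1, x2]
m3_to_2 = psi23 @ np.ones(2)     # sums over x3 of psi23[x2, x3]
belief2 = m1_to_2 * m3_to_2
belief2 /= belief2.sum()

# Brute-force marginal of x2 for comparison: joint[x1, x2, x3] summed out.
joint = psi12[:, :, None] * psi23[None, :, :]
marg2 = joint.sum(axis=(0, 2))
marg2 /= marg2.sum()
print(belief2, marg2)  # identical on a tree
```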
Normal approximation in geometric probability
In Stein’s Method and Applications. Lect. Notes, 2005
Cited by 23 (9 self)
Abstract: Statistics arising in geometric probability can often be expressed as sums of stabilizing functionals, that is, functionals which satisfy a local dependence structure. In this note we show that stabilization leads to nearly optimal rates of convergence in the CLT for statistics such as the total edge length and the total number of edges of graphs in computational geometry, and the total number of particles accepted in random sequential packing models. These rates also apply to the one-dimensional marginals of the random measures associated with these statistics.
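The flavour of such results can be seen empirically: the total edge length of the nearest-neighbour graph on uniform random points is a sum of stabilizing functionals, and its standardized distribution is already close to normal at moderate sizes. A simulation sketch (the sizes and seed are arbitrary choices, and it only eyeballs normality rather than establishing a rate):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import kstest

rng = np.random.default_rng(4)
n, reps = 200, 1_000

# Total edge length of the nearest-neighbour graph on n uniform points
# in [0,1]^2 -- one of the stabilizing functionals the abstract covers.
def nn_total_length(pts):
    d, _ = cKDTree(pts).query(pts, k=2)  # d[:, 1] = distance to nearest neighbour
    return d[:, 1].sum()

L = np.array([nn_total_length(rng.uniform(size=(n, 2))) for _ in range(reps)])
Z = (L - L.mean()) / L.std()
stat, _ = kstest(Z, "norm")
print(stat)  # small Kolmogorov-Smirnov distance to the standard normal
```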
L¹ bounds in normal approximation
ANNALS OF PROBABILITY, 2007
Cited by 12 (4 self)
Abstract: The zero bias distribution W∗ of W, defined through the characterizing equation E[W f(W)] = σ² E[f′(W∗)] for all smooth functions f, exists for all W with mean zero and finite variance σ². For W and W∗ defined on the same probability space, the L¹ distance between F, the distribution function of W with E[W] = 0 and Var(W) = 1, and the cumulative standard normal Φ has the simple upper bound ‖F − Φ‖₁ ≤ 2E|W∗ − W|. This inequality is used to provide explicit L¹ bounds with moderate-sized constants for independent sums, projections of cone measure on the sphere S(ℓ_p^n), simple random …
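The characterizing equation can be checked numerically in a case where the zero-bias distribution is known in closed form: for a Rademacher W (±1 with probability 1/2, so σ² = 1), W∗ is Uniform(−1, 1). A sketch with the hypothetical test function f(x) = x³:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# W = Rademacher: mean 0, variance 1. Its zero-bias distribution W* is
# Uniform(-1, 1).
W = rng.choice([-1.0, 1.0], size=n)
Wstar = rng.uniform(-1.0, 1.0, size=n)

f = lambda x: x**3
fprime = lambda x: 3 * x**2

lhs = np.mean(W * f(W))        # E[W f(W)]: here W * W^3 = 1 exactly
rhs = np.mean(fprime(Wstar))   # sigma^2 E[f'(W*)] with sigma^2 = 1
print(lhs, rhs)                # both sides equal 1 (up to Monte Carlo error)
```

Both sides equal 1 in this example: E[W·W³] = E[W⁴] = 1 exactly, and E[3U²] = 1 for U uniform on (−1, 1).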
Normal approximation for random sums
2006
Cited by 10 (5 self)
Abstract: In this paper we adapt the very effective Berry–Esseen theorems of Chen and Shao (2004), which apply to sums of locally dependent random variables, for use with randomly indexed sums. Our particular interest is in random variables resulting from integrating a random field with respect to a point process. We illustrate the use of our theorems in three examples: a rather general model of the insurance collective, problems in geometric probability involving stabilizing functionals, and counting the maximal points in a two-dimensional region.
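A minimal randomly indexed sum is the compound Poisson S = X₁ + … + X_N with N ~ Poisson(λ). The sketch below (an illustrative instance, not the locally dependent setting the paper treats) standardizes S by its exact compound-Poisson moments and measures the distance to normal:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(1)
lam, reps = 100.0, 20_000

# Random sum S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i ~ Exp(1),
# all independent (a hypothetical compound-Poisson instance).
N = rng.poisson(lam, size=reps)
S = np.array([rng.exponential(1.0, size=k).sum() for k in N])

# Compound-Poisson moments: E S = lam * E X = lam, Var S = lam * E X^2 = 2 lam.
Z = (S - lam) / np.sqrt(2 * lam)
stat, _ = kstest(Z, "norm")
print(stat)  # small Kolmogorov-Smirnov distance to the standard normal
```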
A new method of normal approximation
2006
Cited by 39 (7 self)
Abstract: We introduce a new version of Stein’s method that reduces a large class of normal approximation problems to variance-bounding exercises, thus making a connection between central limit theorems and concentration of measure. Unlike Skorokhod embeddings, the object whose variance has to be bo …
NORMAL APPROXIMATION FOR HIERARCHICAL STRUCTURES
2005
Cited by 2 (2 self)
Abstract: Given F: [a,b]^k → [a,b] and a non-constant X0 with P(X0 ∈ [a,b]) = 1, define the hierarchical sequence of random variables {Xn}n≥0 by Xn+1 = F(Xn,1, ..., Xn,k), where the Xn,i are i.i.d. copies of Xn. Such sequences arise from hierarchical structures, which have been extensively studied in the physics literature to model, for example, the conductivity of a random medium. Under an averaging and smoothness condition on non-trivial F, an upper bound of the form Cγ^n for 0 < γ < 1 is obtained on the Wasserstein distance between the standardized distribution of Xn and the normal. The results apply, for instance …
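For an averaging F the recursion can be simulated directly. The sketch below takes the hypothetical choice F(x, y) = (x + y)/2 with X0 ~ Uniform(0, 1), so Xn is an average of k^n i.i.d. copies of X0 and its standardized distribution should be close to normal:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)
k, n, reps = 2, 10, 5_000

# Hypothetical averaging map F(x, y) = (x + y) / 2 on [0,1]^2, X0 ~ U(0,1).
# level(n) returns `reps` independent samples of X_n.
def level(n):
    if n == 0:
        return rng.uniform(0.0, 1.0, size=reps)
    return (level(n - 1) + level(n - 1)) / 2.0  # F applied to i.i.d. copies

X = level(n)
# X_n is the average of k^n uniforms: mean 1/2, variance (1/12) / k^n.
mu, sd = 0.5, np.sqrt(1.0 / 12.0 / k**n)
stat, _ = kstest((X - mu) / sd, "norm")
print(stat)  # small Kolmogorov-Smirnov distance to the standard normal
```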
ON NORMAL APPROXIMATIONS TO U-STATISTICS
SUBMITTED TO THE “ANNALS OF PROBABILITY”, 2009
Cited by 1 (0 self)
Abstract: Let X1, ..., Xn be i.i.d. random observations. Let S = L + T be a U-statistic of order k ≥ 2, where L is a linear statistic having an asymptotically normal distribution and T is a stochastically smaller statistic. We show that the rate of convergence to normality for S can be simply expressed as the rate …
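The decomposition S = L + T can be observed numerically for the order-2 kernel h(x, y) = (x − y)²/2 (the unbiased sample variance), taking L to be the Hájek projection; these concrete choices are illustrative, not the paper's:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, reps = 50, 1_000

# Order-2 U-statistic with kernel h(x, y) = (x - y)^2 / 2 (the unbiased
# sample variance). For X ~ N(0,1): theta = E h = 1, h1(x) = E h(x, Y) = (x^2+1)/2.
def u_stat(x):
    return np.mean([(a - b) ** 2 / 2 for a, b in combinations(x, 2)])

S = np.empty(reps)
L = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    S[r] = u_stat(x)
    # Hajek projection: theta + (k/n) * sum(h1(X_i) - theta) with k = 2.
    L[r] = 1.0 + (2.0 / n) * np.sum((x**2 + 1.0) / 2.0 - 1.0)

T = S - L
print(np.var(T) / np.var(L))  # T is stochastically smaller: ratio << 1
```

Here Var(T) is of order 1/n² while Var(L) is of order 1/n, so the remainder's share of the variance shrinks as n grows.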