@MISC{Opencourseware_bayesleast-squares,
  author = {{MIT OpenCourseWare}},
  title  = {Bayes least-squares estimation},
  year   = {}
}

Abstract

In the lecture on Monday the 9th I gave material on Bayes estimation that I haven't found in Rice, so here is a printed form of it.

First, here is a very simple fact.

Proposition 1. For any random variable $X$ with $E(X^2) < +\infty$, the unique constant $c$ that minimizes $E((X - c)^2)$ is $c = EX$.

Proof. $E((X - c)^2) = E(X^2) - 2cEX + c^2$ is a quadratic polynomial in $c$ which goes to $+\infty$ as $c \to \pm\infty$, so it is minimized where its derivative with respect to $c$ equals $0$, namely at $c = EX$. Q.E.D.

Now suppose we are given a prior density $\pi(\theta)$ for a continuous parameter $\theta$, so $\pi(\theta) > 0$ and $\int \pi(\theta)\,d\theta = 1$. (If $\theta$ is $m$-dimensional we would need an $m$-fold multiple integral, but nothing essential in the following would change, so I'll write as if $m = 1$.) Let $f(x, \theta)$ be a likelihood function for one observation, which may be either a probability mass function if $x$ is discrete or a density function if $x$ is continuous. If we have i.i.d. observations $X = (X_1, \ldots, X_n)$, we get a likelihood function $f(X, \theta) = \prod_{j=1}^n f(X_j, \theta)$. The
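Proposition 1 can be checked numerically. The sketch below (an illustration not from the lecture; the choice of an exponential distribution is arbitrary) approximates the risk $E((X - c)^2)$ by a sample average over a grid of candidate constants $c$ and confirms that the minimizer is the sample mean:

```python
import numpy as np

# Numerical check of Proposition 1: for a random variable X with
# E(X^2) < infinity, the constant c minimizing E((X - c)^2) is c = EX.

rng = np.random.default_rng(0)
# Any distribution with finite second moment will do; exponential is arbitrary.
x = rng.exponential(scale=2.0, size=100_000)

# Approximate E((X - c)^2) by the sample average, over a grid of candidates c.
cs = np.linspace(0.0, 4.0, 401)
risk = np.array([np.mean((x - c) ** 2) for c in cs])

# The empirical risk is an exact quadratic in c, minimized at the sample mean,
# so the grid minimizer lands on the grid point nearest x.mean().
c_star = cs[np.argmin(risk)]
print(f"sample mean  = {x.mean():.3f}")
print(f"minimizing c = {c_star:.3f}")
```

At the minimizer the risk equals the sample variance, mirroring the identity $E((X - c)^2) = \mathrm{Var}(X) + (c - EX)^2$.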