Results 1 – 10 of 19,265
Maximum likelihood from incomplete data via the EM algorithm
 Journal of the Royal Statistical Society, Series B
, 1977
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations, applications to grouped, censored or truncated data, finite mixture models, variance component estimation, hyperparameter estimation, iteratively reweighted least squares and factor analysis."
Cited by 11972 (17 self)
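The finite-mixture example the abstract mentions can be sketched in a few lines. Below is a minimal, illustrative EM loop for a two-component Gaussian mixture in one dimension; the function name, the crude initialisation, and the fixed iteration count are my own choices for the demo, not details from the paper.

```python
import math
import random

def em_gaussian_mixture(xs, iters=50):
    """One possible EM fit of a two-component 1-D Gaussian mixture."""
    # Crude initialisation from the data range (illustrative choice).
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, variances from weighted data.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-9
    return w, mu, var
```

Each pass through the loop cannot decrease the observed-data likelihood, which is the monotone behaviour the abstract refers to.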
Nonlinear least-squares estimation
 Journal of Multivariate Analysis
, 2006
"... ABSTRACT. The paper uses empirical process techniques to study the asymptotics of the least-squares estimator for the fitting of a nonlinear regression function. By combining and extending ideas of Wu and Van de Geer, it establishes new consistency and central limit theorems that hold under only s ..."
Cited by 7 (1 self)
On the Least Trimmed Squares Estimator
, 2007
"... The linear least trimmed squares (LTS) estimator is a statistical technique for estimating the line (or generally hyperplane) of fit for a set of points. It was proposed by Rousseeuw as a robust alternative to the classical least squares estimator. Given a set of n points in R^d, in classical least ..."
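The trimming idea in the snippet can be illustrated with a toy search for the line case: among candidate lines, keep the one minimising the sum of the h smallest squared residuals. The random two-point candidates and the single refit step below are a heuristic of my own for illustration, not the algorithm the paper analyses.

```python
import random

def ls_line(pts):
    """Ordinary least-squares slope/intercept for a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    a = sxy / sxx
    return a, my - a * mx

def lts_line(pts, h, trials=200, seed=0):
    """Toy LTS line fit: minimise the sum of the h smallest squared residuals."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)  # random elemental fit
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # One "concentration" step: refit by LS on the h closest points.
        keep = sorted(pts, key=lambda p: (p[1] - a * p[0] - b) ** 2)[:h]
        a, b = ls_line(keep)
        # LTS objective: sum of the h smallest squared residuals.
        cost = sum(sorted((y - a * x - b) ** 2 for x, y in pts)[:h])
        if best is None or cost < best[0]:
            best = (cost, a, b)
    return best[1], best[2]
```

Because gross outliers fall outside the h retained residuals, the fit stays close to the bulk of the data even when ordinary least squares would be pulled away.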
Least Squares Estimation Of
 Journal of Statistical Computation and Simulation
, 1998
"... this paper is the development of statistical estimation procedures for fitting the rate and mean-value functions of an NHPP. In particular, we have developed a least squares procedure for estimating the parameters of an NHPP with an EPTMP-type rate function. This procedure uses a least squares metho ..."
NORMALISED LEAST-SQUARES ESTIMATION IN . . .
, 2006
"... We investigate the time-varying ARCH (tvARCH) process. It is shown that it can be used to describe the slow decay of the sample autocorrelations of the squared returns often observed in financial time series, which warrants the further study of parameter estimation methods for the model. Since the p ..."
Convolutions and mean square estimates . . .
, 2005
"... We study the convolution function C[f(x)] := ∫_1^x f(y) f(x/y) dy/y when f(x) is a suitable number-theoretic error term. Asymptotics and upper bounds for C[f(x)] are derived from mean square bounds for f(x). Some applications are given, in particular to |ζ(1/2 + ix)|^{2k} and the classical Rankin–Selberg ..."
On Inconsistency of the Least Squares Estimator
 Odense Univ., Denmark
, 1998
"... Infill asymptotics inside increasing domain for ..."
Cited by 1 (1 self)
Bayes leastsquares estimation
"... In the lecture on Monday the 9th I gave material on Bayes estimation that I haven't found in Rice, so here is a printed form of it. First here is a very simple fact. Proposition 1. For any random variable X with E(X^2) < +∞, the unique constant c that minimizes E((X − c)^2) is c = EX. Proof. E((X ..."
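Proposition 1 in the snippet is easy to check numerically: since E((X − c)^2) = Var(X) + (EX − c)^2, scanning candidate constants c against a sample should place the minimiser at the sample mean. The sample and the grid of candidates below are made up for the demonstration.

```python
import random

random.seed(42)
# A synthetic sample standing in for the random variable X.
xs = [random.gauss(3.0, 2.0) for _ in range(2000)]

def mse(c):
    """Sample analogue of E((X - c)^2)."""
    return sum((x - c) ** 2 for x in xs) / len(xs)

mean = sum(xs) / len(xs)
# Scan a grid of candidate constants; the minimiser should sit at the
# grid point nearest the sample mean.
grid = [i / 100 for i in range(0, 601)]
best = min(grid, key=mse)
```

Any constant other than the mean pays a penalty of exactly (EX − c)^2 on top of the variance, which is what the grid search reproduces up to the grid spacing.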
Over The Least Squares Estimators Of
, 2006
Generalized Least Squares Estimation for
, 2009