Results 11 – 20 of 19,265
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
 ACM Trans. Math. Software
, 1982
"... An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Cited by 653 (21 self)
numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate
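The paper supplies a FORTRAN subroutine; assuming a Python/SciPy setting instead (an assumption, not part of the paper), the same method is exposed as `scipy.sparse.linalg.lsqr` and can be exercised on a small sparse least-squares problem:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Small overdetermined sparse system: solve min ||Ax - b||_2 with LSQR.
A = csr_matrix(np.array([[1.0, 0.0],
                         [0.0, 2.0],
                         [1.0, 1.0],
                         [0.0, 0.0]]))
b = np.array([1.0, 2.0, 2.0, 0.1])

result = lsqr(A, b, atol=1e-12, btol=1e-12)
x = result[0]                      # least-squares solution
istop, itn = result[1], result[2]  # stopping reason and iteration count

# Cross-check against a dense solver.
x_dense, *_ = np.linalg.lstsq(A.toarray(), b, rcond=None)
print(x, x_dense)
```

The `atol`/`btol` arguments correspond to the reliable stopping criteria the abstract mentions; `lsqr` also returns estimates of norms and the condition number of A.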
Valuing American options by simulation: A simple least-squares approach
 Review of Financial Studies
, 2001
"... This article presents a simple yet powerful new approach for approximating the value of American options by simulation. The key to this approach is the use of least squares to estimate the conditional expected payoff to the optionholder from continuation. This makes this approach readily applicable ..."
Cited by 517 (9 self)
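The least-squares Monte Carlo idea can be sketched as follows — a minimal implementation with illustrative parameters and a simple quadratic polynomial basis, not the paper's exact setup:

```python
import numpy as np

def american_put_lsmc(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=50_000, seed=0):
    """Price an American put by least-squares Monte Carlo:
    regress discounted continuation payoffs on a basis of the stock price,
    and exercise whenever intrinsic value exceeds the fitted continuation value."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths (t = dt, 2*dt, ..., T).
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)

    # Cash flows initialized to the exercise value at maturity.
    payoff = np.maximum(K - S[:, -1], 0.0)

    # Backward induction with least-squares regression on in-the-money paths.
    for t in range(n_steps - 2, -1, -1):
        payoff *= disc                      # value at time t of continuing
        itm = K - S[:, t] > 0
        if not itm.any():
            continue
        x = S[itm, t]
        basis = np.column_stack([np.ones_like(x), x, x**2])
        coef, *_ = np.linalg.lstsq(basis, payoff[itm], rcond=None)
        continuation = basis @ coef
        exercise = K - x
        payoff[itm] = np.where(exercise > continuation, exercise, payoff[itm])

    return disc * payoff.mean()

price = american_put_lsmc()
print(price)
```

The choice of basis functions (here 1, S, S²) and the restriction of the regression to in-the-money paths follow the structure of the Longstaff–Schwartz procedure; richer bases trade bias for variance.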
The Multivariate Least Trimmed Squares Estimator
"... In this paper we introduce the least trimmed squares estimator for multivariate regression. We give three equivalent formulations of the estimator and obtain its breakdown point. A fast algorithm for its computation is proposed. We prove Fisher-consistency at the multivariate regression model with e ..."
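A univariate-response sketch of the least trimmed squares idea, using random elemental starts plus concentration steps (the C-step device behind fast LTS algorithms; this is an illustration, not the paper's multivariate algorithm):

```python
import numpy as np

def lts_regression(X, y, h=None, n_starts=50, n_csteps=10, seed=0):
    """Least trimmed squares: minimize the sum of the h smallest squared
    residuals, so up to n - h gross outliers cannot break the fit."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Xc = np.column_stack([np.ones(n), X])     # add an intercept column
    if h is None:
        h = (n + p + 2) // 2                  # default coverage
    best_obj, best_beta = np.inf, None
    for _ in range(n_starts):
        idx = rng.choice(n, size=p + 2, replace=False)   # random elemental start
        beta, *_ = np.linalg.lstsq(Xc[idx], y[idx], rcond=None)
        for _ in range(n_csteps):             # concentration steps
            r2 = (y - Xc @ beta) ** 2
            keep = np.argsort(r2)[:h]         # h smallest squared residuals
            beta, *_ = np.linalg.lstsq(Xc[keep], y[keep], rcond=None)
        obj = np.sort((y - Xc @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta
```

Each concentration step provably does not increase the trimmed objective, so a modest number of steps per start suffices in practice.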
Covariance shaping least-squares estimation
 IEEE Trans. Signal Process
, 2003
"... Abstract—A new linear estimator is proposed, which we refer to as the covariance shaping least-squares (CSLS) estimator, for estimating a set of unknown deterministic parameters x observed through a known linear transformation H and corrupted by additive noise. The CSLS estimator is a biased estimat ..."
Cited by 32 (17 self)
Image denoising using a scale mixture of Gaussians in the wavelet domain
 IEEE Trans. Image Processing
, 2003
"... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vecto ..."
Cited by 513 (17 self)
vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
, 1980
"... This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator ..."
Cited by 3211 (5 self)
to those of the usual covariance estimator, one obtains a direct test for heteroskedasticity, since in the absence of heteroskedasticity, the two estimators will be approximately equal, but will generally diverge otherwise. The test has an appealing least squares interpretation
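The estimator in question is the familiar sandwich form (X'X)⁻¹ X' diag(e²) X (X'X)⁻¹, often labeled HC0. A minimal sketch, with the comparison to the conventional σ²(X'X)⁻¹ estimator that underlies the direct test:

```python
import numpy as np

def hc0_covariance(X, y):
    """White's heteroskedasticity-consistent (HC0) covariance estimator for
    the OLS coefficients: (X'X)^-1 X' diag(e^2) X (X'X)^-1."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                       # OLS residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (e[:, None] ** 2 * X)     # X' diag(e^2) X
    return XtX_inv @ meat @ XtX_inv, beta

# Under homoskedasticity, HC0 and the usual estimator should roughly agree.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
V_hc0, beta = hc0_covariance(X, y)
e = y - X @ beta
V_conv = (e @ e / (n - 2)) * np.linalg.inv(X.T @ X)
print(np.diag(V_hc0) / np.diag(V_conv))   # ratios near 1 when errors are homoskedastic
```

Comparing the two estimates is exactly the logic of the direct test described in the snippet: the matrices agree asymptotically under homoskedasticity and diverge otherwise.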
Does Compulsory School Attendance Affect Schooling and Earnings?
, 1990
"... This paper presents evidence showing that individuals' season of birth is related to their educational attainment because of the combined effects of school start age policy and compulsory school attendance laws. In most school districts, individuals born in the beginning of the year start sc ..."
Cited by 662 (13 self)
variables estimate of the rate of return to education is remarkably close to the ordinary least squares estimate, suggesting that there is little ability bias in conventional estimates of the return to education. The results also imply that individuals who are compelled to attend school longer than
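The contrast between the OLS and instrumental-variables estimates can be illustrated with a toy simulation; the variables and numbers below are illustrative stand-ins (the instrument playing the role of quarter of birth, the confounder the role of unobserved ability), not the paper's data:

```python
import numpy as np

def iv_slope(y, x, z):
    """One-instrument IV (Wald) estimator of the slope: cov(z, y) / cov(z, x)."""
    zc = z - z.mean()
    return (zc @ (y - y.mean())) / (zc @ (x - x.mean()))

rng = np.random.default_rng(0)
n = 200_000
z = rng.normal(size=n)                   # instrument, independent of the confounder
u = rng.normal(size=n)                   # unobserved confounder ("ability")
x = 0.5 * z + u + rng.normal(size=n)     # treatment (schooling), driven by both
y = 2.0 * x + u + rng.normal(size=n)     # outcome (earnings); true slope is 2.0

ols = np.cov(x, y)[0, 1] / np.var(x)     # biased upward by the confounder
iv = iv_slope(y, x, z)                   # consistent despite the confounder
print(ols, iv)
```

Here OLS is pulled above 2.0 because the confounder enters both equations, while the IV estimate stays near the true value; the paper's empirical finding is that the two estimates nearly coincide, which is what suggests little ability bias.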
Asymptotic Least Squares Estimators for Dynamic Games
 Review of Economic Studies
, 2008
"... This paper considers the estimation problem in dynamic games with finite actions. We derive the equation system that characterizes the Markovian equilibria. The equilibrium equation system enables us to characterize conditions for identification. We consider a class of asymptotic least squares estima ..."
Cited by 59 (0 self)
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Cited by 879 (14 self)
‖x̂ − x‖²ℓ₂ ≤ C² · 2 log p · (σ² + ∑ᵢ min(xᵢ², σ²)). Our results are non-asymptotic and we give values for the constant C. In short, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information
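The Dantzig selector itself, min ‖x‖₁ subject to ‖Aᵀ(y − Ax)‖∞ ≤ λ, is a linear program. A small sketch using `scipy.optimize.linprog`, with the standard split x = u − v to linearize the ℓ₁ objective (the test problem below is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, lam):
    """Solve  min ||x||_1  s.t.  ||A^T (y - A x)||_inf <= lam  as an LP,
    writing x = u - v with u, v >= 0 so the objective is linear."""
    n, p = A.shape
    G = A.T @ A
    Aty = A.T @ y
    c = np.ones(2 * p)                        # sum(u) + sum(v) = ||x||_1 at optimum
    # Two-sided constraint -lam <= A^T(y - Ax) <= lam, in A_ub @ [u; v] <= b_ub form.
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([Aty + lam, lam - Aty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v
```

With the paper's choice λ ∝ σ·√(2 log p) the constraint holds for the true x with high probability, which is what drives the oracle inequality quoted above.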