Results 1 - 10 of 765,770

Maximum likelihood from incomplete data via the EM algorithm

by A. P. Dempster, N. M. Laird, D. B. Rubin - JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B , 1977
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situat ..."
Abstract - Cited by 11807 (17 self) - Add to MetaCart
A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value
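For concreteness, here is a minimal sketch of one EM instance, a two-component 1-D Gaussian mixture fit; the model choice and all names are illustrative, not from the paper:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, seed=0):
    """EM for a 1-D mixture of k Gaussians; a toy instance of the general scheme."""
    rng = np.random.default_rng(seed)
    pi = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.choice(x, size=k, replace=False)   # initial means
    var = np.full(k, x.var())                   # initial variances
    for _ in range(iters):
        # E-step: responsibilities = posterior probability of each component
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation; the likelihood cannot decrease
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

Each E-step/M-step pair leaves the data log-likelihood no lower than before, which is the monotonicity property the paper establishes in general.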

Linear least-squares algorithms for temporal difference learning

by Steven J. Bradtke, Andrew G. Barto, Pack Kaelbling - Machine Learning , 1996
"... Abstract. We introduce two new temporal difference (TD) algorithms based on the theory of linear leastsquares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear in the adju ..."
Abstract - Cited by 257 (1 self) - Add to MetaCart
Abstract. We introduce two new temporal difference (TD) algorithms based on the theory of linear leastsquares function approximation. We define an algorithm we call Least-Squares TD (LS TD) for which we prove probability-one convergence when it is used with a function approximator linear
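As a sketch of the least-squares fixed point LSTD computes (batch form with illustrative names; the paper develops the recursive version and the convergence proof):

```python
import numpy as np

def lstd(transitions, gamma=0.9):
    """Batch LSTD(0): solve A w = b built from (phi(s), r, phi(s')) samples."""
    d = len(transitions[0][0])
    A = np.zeros((d, d))
    b = np.zeros(d)
    for phi_s, r, phi_s2 in transitions:
        A += np.outer(phi_s, phi_s - gamma * phi_s2)   # accumulated statistics
        b += r * phi_s
    return np.linalg.solve(A, b)                        # value-function weights
```

Given transitions collected under the policy being evaluated, the returned weights are the TD fixed point directly, with no step-size parameter to tune.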

The Kernel Recursive Least Squares Algorithm

by Yaakov Engel, Shie Mannor, Ron Meir - IEEE Transactions on Signal Processing , 2003
"... We present a non-linear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel-RLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean squared -error regressor. Spars ..."
Abstract - Cited by 138 (2 self) - Add to MetaCart
We present a non-linear kernel-based version of the Recursive Least Squares (RLS) algorithm. Our Kernel-RLS (KRLS) algorithm performs linear regression in the feature space induced by a Mercer kernel, and can therefore be used to recursively construct the minimum mean squared -error regressor
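The regressor KRLS targets is ordinary regularized kernel least squares; a batch sketch of that target is below (the paper's contribution, the recursive update with online sparsification, is omitted here, and all names are illustrative):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (Mercer) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ls_fit(X, y, lam=1e-3):
    """Regularized kernel least squares: alpha = (K + lam*I)^(-1) y."""
    K = rbf(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ls_predict(X_train, alpha, X_new):
    """Predict with f(x) = sum_i alpha_i k(x, x_i)."""
    return rbf(X_new, X_train) @ alpha
```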

Least Median of Squares Regression

by Peter J. Rousseeuw - JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION , 1984
"... ..."
Abstract - Cited by 622 (22 self) - Add to MetaCart
Abstract not found

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares

by Christopher C. Paige, Michael A. Saunders - ACM Trans. Math. Software , 1982
"... An iterative method is given for solving Ax ~ffi b and minU Ax- b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Abstract - Cited by 649 (21 self) - Add to MetaCart
-gradient algorithms, indicating that I~QR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: ApprorJmation--least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra--linear systems (direct and
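SciPy ships this algorithm as scipy.sparse.linalg.lsqr, so a usage sketch is short (random data for illustration only):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

A = sparse_random(1000, 200, density=0.01, format="csr", random_state=0)
b = np.random.default_rng(0).standard_normal(1000)

x = lsqr(A, b)[0]                  # the solution is the first element of the tuple
print(np.linalg.norm(A @ x - b))   # residual of the least-squares fit
```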

On the Sum-of-Squares Algorithm for Bin Packing

by Janos Csirik, David S. Johnson , Peter W. Shor, Claire Kenyon, et al. , 2000
"... In this paper we present a theoretical analysis of the deterministic online Sum of Squares algorithm (SS) for bin packing, introduced and studied experimentally in [8], along with several new variants. SS is applicable to any instance of bin packing in which the bin capacity B and item sizes s(a) ar ..."
Abstract - Cited by 126 (6 self) - Add to MetaCart
In this paper we present a theoretical analysis of the deterministic online Sum of Squares algorithm (SS) for bin packing, introduced and studied experimentally in [8], along with several new variants. SS is applicable to any instance of bin packing in which the bin capacity B and item sizes s
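A direct, unoptimized rendering of the SS rule (all names illustrative): keep a count N[g] of open bins with each remaining gap g, and place each item so that the resulting sum of N[g]² is smallest.

```python
from collections import Counter

def ss_pack(items, B):
    """Online Sum-of-Squares bin packing (integer sizes 1..B, capacity B).
    N[g] counts open bins with remaining gap g; each item is placed so that
    sum_g N[g]^2 is minimized afterwards. A bin filled exactly (gap 0) closes."""
    N = Counter()
    closed = 0
    for s in items:
        best_cost, best_gap = None, None
        for g in list(N) + [B]:                 # reuse an open gap, or open a new bin
            if g < s or (g < B and N[g] == 0):
                continue
            M = N.copy()
            if g < B:
                M[g] -= 1                       # an open bin with gap g takes the item
            if g - s > 0:
                M[g - s] += 1                   # it stays open with a smaller gap
            cost = sum(v * v for v in M.values())
            if best_cost is None or cost < best_cost:
                best_cost, best_gap = cost, g
        g = best_gap
        if g < B:
            N[g] -= 1
            if N[g] == 0:
                del N[g]
        if g - s > 0:
            N[g - s] += 1
        else:
            closed += 1
    return closed + sum(N.values())             # total bins used
```

For example, ss_pack([2, 5, 4, 7, 1, 3], 10) returns the number of bins SS uses on that instance.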

Least-Squares Policy Iteration

by Michail G. Lagoudakis, Ronald Parr - JOURNAL OF MACHINE LEARNING RESEARCH , 2003
"... We propose a new approach to reinforcement learning for control problems which combines value-function approximation with linear architectures and approximate policy iteration. This new approach ..."
Abstract - Cited by 461 (12 self) - Add to MetaCart
We propose a new approach to reinforcement learning for control problems which combines value-function approximation with linear architectures and approximate policy iteration. This new approach
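A compact sketch of the resulting loop, with a least-squares evaluation step (here an LSTDQ-style solve) alternating with greedy improvement over a fixed sample set; the tiny API and all names are illustrative:

```python
import numpy as np

def lstdq(samples, phi, policy, gamma=0.95):
    """Least-squares weights of the Q-function of a fixed policy.
    samples: (s, a, r, s2) tuples; phi(s, a) returns a feature vector."""
    d = len(phi(*samples[0][:2]))
    A = np.zeros((d, d))
    b = np.zeros(d)
    for s, a, r, s2 in samples:
        f, f2 = phi(s, a), phi(s2, policy(s2))
        A += np.outer(f, f - gamma * f2)
        b += r * f
    return np.linalg.solve(A, b)

def lspi(samples, phi, actions, gamma=0.95, iters=10):
    """Alternate least-squares policy evaluation with greedy improvement."""
    w = np.zeros(len(phi(*samples[0][:2])))
    for _ in range(iters):
        greedy = lambda s, w=w: max(actions, key=lambda a: phi(s, a) @ w)
        w = lstdq(samples, phi, greedy, gamma)
    return w
```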

Planning Algorithms

by Steven M LaValle , 2004
"... This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered include motion planning, discrete planning, planning ..."
Abstract - Cited by 1108 (51 self) - Add to MetaCart
This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered include motion planning, discrete planning

Algorithms for Non-negative Matrix Factorization

by Daniel D. Lee, H. Sebastian Seung - In NIPS , 2001
"... Non-negative matrix factorization (NMF) has previously been shown to be a useful decomposition for multivariate data. Two different multiplicative algorithms for NMF are analyzed. They differ only slightly in the multiplicative factor used in the update rules. One algorithm can be shown to minim ..."
Abstract - Cited by 1230 (5 self) - Add to MetaCart
to minimize the conventional least squares error while the other minimizes the generalized Kullback-Leibler divergence. The monotonic convergence of both algorithms can be proven using an auxiliary function analogous to that used for proving convergence of the ExpectationMaximization algorithm
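The least-squares variant of the two update rules is short enough to state directly; this sketch follows the multiplicative form described above (eps guards against division by zero):

```python
import numpy as np

def nmf_ls(V, k, iters=200, seed=0, eps=1e-9):
    """Multiplicative updates minimizing the least-squares error ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Both factors stay nonnegative because each update multiplies by a nonnegative ratio, and the cited auxiliary-function argument shows the objective never increases.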

A new scaling and squaring algorithm for the matrix exponential

by Awad H. Al-Mohy, Nicholas J. Higham , 2009
"... The scaling and squaring method for the matrix exponential is based on the approx-imation eA ≈ (rm(2−sA))2s, where rm(x) is the [m/m] Pade ́ approximant to ex and the integers m and s are to be chosen. Several authors have identified a weakness of existing scaling and squaring algorithms termed ov ..."
Abstract - Add to MetaCart
The scaling and squaring method for the matrix exponential is based on the approx-imation eA ≈ (rm(2−sA))2s, where rm(x) is the [m/m] Pade ́ approximant to ex and the integers m and s are to be chosen. Several authors have identified a weakness of existing scaling and squaring algorithms termed
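A naive sketch of the underlying scaling-and-squaring idea, with a truncated Taylor series standing in for the Padé approximant r_m and a crude choice of s (both are exactly what the paper refines):

```python
import numpy as np

def expm_scale_square(A, terms=12):
    """Scaling and squaring with a truncated Taylor series in place of r_m."""
    # scale so that ||2^-s A||_1 <= 1, making the truncated series accurate
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1.0)))))
    B = A / 2.0 ** s
    P = np.eye(len(A))
    term = np.eye(len(A))
    for i in range(1, terms + 1):   # P ~= exp(B) by the truncated series
        term = term @ B / i
        P = P + term
    for _ in range(s):              # undo the scaling: exp(A) = exp(B)^(2^s)
        P = P @ P
    return P
```

For production use, scipy.linalg.expm implements the refined algorithm from this line of work.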