Results 1 – 10 of 2,399,945
Maximum likelihood from incomplete data via the EM algorithm
Journal of the Royal Statistical Society, Series B, 1977
Cited by 11807 (17 self)
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations, applications to grouped, censored or truncated data, finite mixture models, variance component estimation, hyperparameter estimation, iteratively reweighted least squares and factor analysis."
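The EM iteration the abstract describes can be sketched for the finite-mixture case it mentions. The following is a minimal illustration for a two-component univariate Gaussian mixture, not the paper's own presentation; the function name, initialization, and data are illustrative:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Minimal EM for a two-component univariate Gaussian mixture.

    E-step: posterior responsibilities under the current parameters.
    M-step: closed-form maximization of the expected complete-data
    log-likelihood (weighted sample statistics). The observed-data
    likelihood is monotone non-decreasing across iterations.
    """
    # Crude initialization from the data range.
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: r[i, k] = P(component k | x_i).
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means, variances, and weights.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
pi, mu, var = em_gaussian_mixture(x)
```

Here the M-step is closed-form because the complete-data model is exponential-family; for the censored and truncated-data applications listed above, the E-step instead takes conditional expectations of the sufficient statistics.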
Regularized Least-Squares Classification
Cited by 100 (1 self)
"... We consider the solution of binary classification problems via Tikhonov regularization in a Reproducing Kernel Hilbert Space using the square loss, and denote the resulting algorithm Regularized Least-Squares Classification (RLSC). We sketch ..."
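The construction in this snippet reduces to a single linear system. The following is a minimal sketch, assuming a linear kernel and a toy dataset (the function names, regularization value, and data are illustrative, not from the paper):

```python
import numpy as np

def rlsc_fit(X, y, lam=0.1):
    """Dual-form regularized least-squares classification.

    With kernel matrix K and +/-1 labels y, Tikhonov regularization of
    the square loss in the RKHS yields the linear system
        (K + lam * n * I) c = y,
    with predictor f(x) = sum_i c_i k(x_i, x). A linear kernel
    k(u, v) = u.v is used here for simplicity.
    """
    n = len(y)
    K = X @ X.T
    c = np.linalg.solve(K + lam * n * np.eye(n), y.astype(float))
    return c

def rlsc_predict(c, X_train, X_new):
    # Sign of the kernel expansion gives the predicted class.
    return np.sign(X_new @ X_train.T @ c)

# Toy linearly separable problem with +/-1 labels.
X = np.array([[1.0, 1.0], [2.0, 1.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
c = rlsc_fit(X, y, lam=0.01)
pred = rlsc_predict(c, X, X)
```

The square loss makes training a direct solve rather than the quadratic program an SVM requires, which is the practical appeal of RLSC.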
Notes on regularized least-squares
, 2007
Cited by 27 (0 self)
"... This is a collection of information about regularized least squares (RLS). The facts here are not “new results”, but we have not seen them usefully collected together before. A key goal of this work is to demonstrate that with RLS, we get certain things “for free”: if we can solve a single supervise ..."
Discriminatively Regularized Least-Squares Classification
Cited by 5 (2 self)
"... Over the past decades, regularization theory has been widely applied in various areas of machine learning to derive a large family of novel algorithms. Traditionally, regularization focuses on smoothing only, and does not fully utilize the underlying discriminative knowledge which is vital for classification. In this paper, we propose a novel regularization algorithm in the least-squares sense, called Discriminatively Regularized Least-Squares Classification (DRLSC), which is specifically designed for classification. Inspired by several new geometrically motivated methods, DRLSC directly embeds ..."
The regularized least square algorithm and the problem of learning halfspaces
, 2007
Cited by 2 (2 self)
"... We provide the sample complexity of the problem of learning halfspaces with monotonic noise, using the regularized least square algorithm. ..."
Regularized Least Squares Approximation
fasshauer@iit.edu, MATH 590 – Chapter 19, Fall 2008
"... Up to now we have looked only at interpolation. However, many times it makes more sense to approximate the given data by a least squares fit. This is especia ..."
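The contrast the excerpt draws can be made concrete: interpolation forces the model through every (possibly noisy) data point, while a low-degree least squares fit smooths the noise. A minimal illustration with made-up data (the function, noise level, and degree are illustrative):

```python
import numpy as np

# Noisy samples of a smooth underlying function.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)

# Interpolation would need a degree-19 polynomial through all 20 points;
# a degree-5 least squares fit minimizes the sum of squared residuals
# instead of driving them to zero, so the noise is averaged out.
coeffs = np.polyfit(x, y, deg=5)
y_fit = np.polyval(coeffs, x)
rms_residual = np.sqrt(np.mean((y_fit - y) ** 2))
```

The residual stays on the order of the noise level rather than vanishing, which is exactly the approximation-versus-interpolation trade-off described above.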
Local Regularized Least-Square Dimensionality Reduction
Cited by 1 (0 self)
"... In this paper, we propose a new nonlinear dimensionality reduction algorithm by adopting a regularized least-square criterion on local areas of the data distribution. We first propose a local linear model to describe the characteristic of the low-dimensional coordinates of the neighborhood centered in ..."
Optimal Rates for Regularized Least Squares Regression
, 2009
Cited by 19 (3 self)
"... We establish a new oracle inequality for kernel-based, regularized least squares regression methods, which uses the eigenvalues of the associated integral operator as a complexity measure. We then use this oracle inequality to derive learning rates for these methods. Here, it turns out that these rat ..."
Learning to rank with pairwise regularized least-squares
SIGIR 2007 Workshop on Learning to Rank for Information Retrieval, 2007
Cited by 17 (10 self)
"... Learning preference relations between objects of interest is one of the key problems in machine learning. Our approach for addressing this task is based on pairwise comparisons for estimation of the overall ranking. In this paper, we propose a simple preference learning algorithm based on regularized least squares and describe it within the kernel methods framework. Our algorithm, which we call RankRLS, minimizes a regularized least-squares approximation of a ranking error function that counts the number of incorrectly ranked pairs of data points. We consider both primal and dual versions ..."
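The ranking error function the snippet refers to is the pairwise disagreement count, which RankRLS approximates with a least-squares surrogate. A minimal sketch of the exact (non-differentiable) count; the function name and data are illustrative:

```python
import numpy as np

def pairwise_ranking_errors(scores, labels):
    """Count pairs (i, j) where labels[i] > labels[j] but scores[i] <= scores[j].

    This 0/1 pair count is the ranking error; RankRLS replaces it with a
    regularized least-squares approximation over score differences so the
    objective becomes a solvable linear system.
    """
    errors = 0
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if labels[i] > labels[j] and scores[i] <= scores[j]:
                errors += 1
    return errors

labels = np.array([3, 2, 1])        # true relevance order
scores = np.array([0.9, 0.2, 0.5])  # model swaps the last two items
n_errors = pairwise_ranking_errors(scores, labels)
```

Replacing the indicator on each pair with a squared difference is what lets both primal and dual closed-form solutions exist, as the abstract indicates.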
Risk Bounds and Model Selection for Regularized Least Squares
"... The Problem: To investigate criteria for choosing the regularization parameter for the regularized least squares regression algorithm. Motivation: A central problem in learning theory is a quantitative assessment of the inference property of a learning algorithm ensuring consistency. A number of sem ..."