Results 1–10 of 7,861
Maximum likelihood from incomplete data via the EM algorithm
Journal of the Royal Statistical Society, Series B, 1977
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations ..."
Cited by 11,972 (17 self)
How Many Iterations in the Gibbs Sampler?
In Bayesian Statistics 4, 1992
"... When the Gibbs sampler is used to estimate posterior distributions (Gelfand and Smith, 1990), the question of how many iterations are required is central to its implementation. When interest focuses on quantiles of functionals of the posterior distribution, we describe an easily implemented method ..."
Cited by 159 (6 self)
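As a concrete instance of the setting above, estimating quantiles from Gibbs output, here is a toy Gibbs sampler for a standard bivariate normal with correlation rho. The target is our choice for illustration, not the paper's example; its full conditionals are the known normals x | y ~ N(rho·y, 1 − rho²) and symmetrically for y | x.

```python
import math
import random

def gibbs_bivariate_normal(n_iter, rho, burn_in=500, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Alternates draws from the two full conditionals and returns the
    post-burn-in samples of x (marginally N(0, 1)).
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho ** 2)
    xs = []
    for t in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        if t >= burn_in:
            xs.append(x)
    return xs

def quantile(samples, q):
    """Empirical q-quantile by sorting (adequate for a sketch)."""
    s = sorted(samples)
    return s[int(q * (len(s) - 1))]
```

With a large rho the chain is strongly autocorrelated, so many more iterations are needed for a given quantile accuracy than independent sampling would suggest, which is exactly the question the paper addresses.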
How Many Iterations are Sufficient for Efficient Semiparametric Estimation?
2012
"... A common practice in obtaining an efficient semiparametric estimate is through iteratively maximizing the (penalized) full log-likelihood w.r.t. its Euclidean parameter and functional nuisance parameter. A rigorous theoretical study of this semiparametric iterative estimation approach ..."
How Many Iterations in the Gibbs Sampler?
1991
"... This technical report consists of three short papers on Monte Carlo Markov chain inference. The first paper, "How many iterations in the Gibbs sampler?", proposes an easily implemented method for determining the total number of iterations required to estimate probabilities and quantiles ..."
How Many Iterations are Sufficient for Efficient Semiparametric Estimation?
2012
"... A common practice in obtaining an efficient semiparametric estimate is through iteratively maximizing the (penalized) full log-likelihood w.r.t. its Euclidean parameter and functional nuisance parameter. A rigorous theoretical study of this semiparametric iterative estimation approach is the main ..."
Cited by 2 (2 self)
Iterative point matching for registration of free-form curves and surfaces
1994
"... A heuristic method has been developed for registering two sets of 3-D curves obtained by using an edge-based stereo system, or two dense 3-D maps obtained by using a correlation-based stereo system. Geometric matching in general is a difficult unsolved problem in computer vision. Fortunately, in many practical applications, some a priori knowledge exists which considerably simplifies the problem. In visual navigation, for example, the motion between successive positions is usually approximately known. From this initial estimate, our algorithm computes observer motion with very good precision ..."
Cited by 660 (8 self)
A Singular Value Thresholding Algorithm for Matrix Completion
2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering ..."
Cited by 555 (22 self)
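The core of the algorithm the snippet describes is a singular value shrinkage (soft-thresholding) operator applied inside a simple iteration on the observed entries. A minimal NumPy sketch; the threshold and step-size values are our own illustrative defaults, not the paper's recommended parameters.

```python
import numpy as np

def shrink(Y, tau):
    """Singular value soft-thresholding D_tau(Y): shrink every singular
    value of Y toward zero by tau, dropping those that fall below tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M, mask, tau=5.0, delta=1.2, iters=200):
    """Sketch of an SVT-style completion loop: alternate the shrinkage
    step with a correction on the observed entries (mask == 1)."""
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(iters):
        X = shrink(Y, tau)                 # low-rank-promoting step
        Y = Y + delta * mask * (M - X)     # push X toward the known entries
    return X
```

The shrinkage step is where the "singular value thresholding" name comes from: only singular values above the threshold survive, which keeps each iterate low-rank.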
k-means requires exponentially many iterations even in the plane
DCG
"... The k-means algorithm is a well-known method for partitioning n points that lie in d-dimensional space into k clusters. Its main features are simplicity and speed in practice. Theoretically, however, the best known upper bound on its running time (i.e. O(n^{kd})) can be exponential in the number of points. Recently, Arthur and Vassilvitskii [2] showed a superpolynomial worst-case analysis, improving the best known lower bound from Ω(n) to 2^{Ω(√n)} with a construction in d = Ω(√n) dimensions. In [2] they also conjectured the existence of superpolynomial lower bounds for any d ≥ 2. Our contribution is twofold: we prove this conjecture and we improve the lower bound ... The k-means method is one of the most widely used algorithms for geometric clustering. It was originally proposed by Forgy in 1965 [7] and MacQueen in 1967 [13], and is often known as Lloyd's algorithm [12]. It is a local search algorithm and partitions n data points into k clusters in this way: seeded with k initial cluster centers, it assigns every data point to its closest center ..."
Cited by 26 (2 self)
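The local search loop described at the end of the snippet — assign each point to its closest center, then recompute the centers — can be written down directly. A minimal 2-D version of our own (the paper's contribution is a lower bound on how many such iterations the method can need, not this code):

```python
def kmeans(points, centers, iters=100):
    """Lloyd's k-means iteration on 2-D points given as (x, y) tuples.

    Runs until no center moves or the iteration budget is spent, and
    returns the final list of centers.
    """
    centers = [tuple(c) for c in centers]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center
        # by squared Euclidean distance.
        clusters = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda j: (p[0] - centers[j][0]) ** 2
                                  + (p[1] - centers[j][1]) ** 2)
            clusters[j].append(p)
        # Update step: move each center to the centroid of its cluster
        # (an empty cluster keeps its center in place).
        new_centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[j]
            for j, c in enumerate(clusters)]
        if new_centers == centers:   # no center moved: a local optimum
            break
        centers = new_centers
    return centers
```

On well-separated data this converges in a couple of iterations; the paper's point is that adversarial planar inputs can force exponentially many.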
Factor Graphs and the Sum-Product Algorithm
IEEE Transactions on Information Theory, 1998
"... A factor graph is a bipartite graph that expresses how a "global" function of many variables factors into a product of "local" functions. Factor graphs subsume many other graphical models including Bayesian networks, Markov random fields, and Tanner graphs. Following one simple ..."
Cited by 1,791 (69 self)
Efficient Variants of the ICP Algorithm
International Conference on 3-D Digital Imaging and Modeling, 2001
"... The ICP (Iterative Closest Point) algorithm is widely used for geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization ..."
Cited by 718 (5 self)
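A minimal point-to-point sketch of the baseline that the paper's variants modify: match each point to its nearest neighbour, solve for the best rigid motion in closed form (SVD/Procrustes), apply it, and repeat. We reduce to 2-D with brute-force matching and omit all of the paper's selection, weighting, and rejection variants; this is our illustrative reduction, not the paper's implementation.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form least-squares rotation R and translation t such that
    R @ p + t ≈ q for corresponding rows of P and Q (Procrustes/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # forbid reflections: keep a proper rotation
        Vt[-1, :] *= -1.0
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Point-to-point ICP: alternate nearest-neighbour matching with the
    closed-form rigid fit. Returns the accumulated (R, t) mapping P onto Q."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    X = P.copy()
    for _ in range(iters):
        # Brute-force nearest neighbour in Q for every current point.
        d2 = ((X[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
        matches = Q[d2.argmin(axis=1)]
        R, t = best_rigid_transform(X, matches)
        X = X @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

As both abstracts on this page note, the method only works from a reasonable initial estimate: nearest-neighbour matching is what injects the "initial pose approximately known" assumption into the algorithm.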