Results 1–10 of 795,681
Denote the corresponding precision matrix (2012)
"... In this note we describe two ways of generating random variables with the Gibbs sampling approach for a truncated multivariate normal variable x, whose density function can be expressed as

    f(x, \mu, \Sigma, a, b) = \frac{\exp\{-\tfrac{1}{2}(x - \mu)' \Sigma^{-1} (x - \mu)\}}{\int_a^b \exp\{-\tfrac{1}{2}(x - \mu)' \Sigma^{-1} (x - \mu)\}\, dx}

for a ≤ x ≤ b and 0 otherwise. The first approach, as described by Kotecha and Djuric [1999], uses the covariance matrix Σ and has been implemented in the R package tmvtnorm since version 0.9 (Wilhelm and Manjunath [2010]). The second way is based on the works of Geweke [1991, 2005] and uses the precision matrix H ..."
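The precision-matrix variant quoted above can be illustrated with a minimal coordinate-wise Gibbs sketch. This is not the tmvtnorm implementation; the function name and the inverse-CDF sampling step are illustrative, and the conditional-distribution formulas follow the standard Gaussian full-conditional identities.

```python
import numpy as np
from statistics import NormalDist

def gibbs_tmvn(mu, Sigma, a, b, n_iter=500, seed=0):
    """Sketch of a Gibbs sampler for a truncated multivariate normal.

    Uses the precision matrix H = Sigma^{-1}: the full conditional of x_i
    given x_{-i} is univariate normal with variance 1/H_ii and mean
    mu_i - (1/H_ii) * sum_{j != i} H_ij (x_j - mu_j), truncated to [a_i, b_i].
    """
    rng = np.random.default_rng(seed)
    nd = NormalDist()
    H = np.linalg.inv(Sigma)
    d = len(mu)
    x = np.clip(mu, a, b).astype(float)            # feasible starting point
    samples = []
    for _ in range(n_iter):
        for i in range(d):
            tau = 1.0 / H[i, i]                    # conditional variance
            # sum over j != i of H_ij (x_j - mu_j):
            m = mu[i] - tau * (H[i] @ (x - mu) - H[i, i] * (x[i] - mu[i]))
            s = np.sqrt(tau)
            lo, hi = nd.cdf((a[i] - m) / s), nd.cdf((b[i] - m) / s)
            u = rng.uniform(lo, hi)                # inverse-CDF truncated draw
            x[i] = m + s * nd.inv_cdf(u)
        samples.append(x.copy())
    return np.array(samples)

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
a = np.array([0.0, 0.0])       # truncate both coordinates to [0, 2]
b = np.array([2.0, 2.0])
draws = gibbs_tmvn(mu, Sigma, a, b)
```

Every draw stays inside the box [a, b] by construction, since the inverse CDF is only evaluated on the truncated probability interval.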
Minimum Phone Error Training of Precision Matrix Models
"... Gaussian Mixture Models (GMMs) are commonly used as the output density function for large vocabulary continuous speech recognition (LVCSR) systems. A standard problem when using multivariate GMMs to classify data is how to accurately represent the correlation in the feature vector. Full covariance matrices yield a good model, but dramatically increase the number of model parameters. Hence diagonal covariance matrices are commonly used. Structured precision matrix approximations provide an alternative, flexible and compact representation. Schemes in this category include the extended maximum ..."
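A practical attraction of precision-matrix models, hinted at in the snippet, is that the Gaussian log-density can be evaluated straight from H with no inversion. The function below is a generic sketch of that identity, not any particular scheme from the paper.

```python
import numpy as np

def log_gauss_precision(x, mu, H):
    """Gaussian log-density parameterised by the precision matrix H = Sigma^{-1}:

        log N(x; mu, H^{-1}) = 0.5*logdet(H) - (d/2)*log(2*pi)
                               - 0.5*(x - mu)' H (x - mu)

    No covariance inversion is needed at evaluation time, which is what makes
    structured (e.g. sparse or banded) precision approximations cheap to use.
    """
    d = len(mu)
    diff = x - mu
    sign, logdet = np.linalg.slogdet(H)
    return 0.5 * logdet - 0.5 * d * np.log(2 * np.pi) - 0.5 * diff @ H @ diff

# sanity check against the covariance parameterisation
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.linalg.inv(Sigma)
x = np.array([0.5, -0.2])
mu = np.zeros(2)
lp = log_gauss_precision(x, mu, H)
```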
Bayesian estimation of a sparse precision matrix
"... We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p is large. Gaussian graphical models provide an important tool in describing conditional independence through presence or absence of the edges in the unde ..."
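The correspondence between precision-matrix zeros and missing graph edges that the abstract refers to can be seen directly in a toy example. The chain graph below is my own illustration, not taken from the paper.

```python
import numpy as np

# In a Gaussian graphical model, H_ij = 0 means X_i and X_j are conditionally
# independent given the remaining variables. Build a precision matrix for the
# chain graph 1 - 2 - 3: no edge between variables 1 and 3.
H = np.array([[ 2.0, -0.8,  0.0],
              [-0.8,  2.0, -0.8],
              [ 0.0, -0.8,  2.0]])
Sigma = np.linalg.inv(H)
# The covariance still couples variables 1 and 3 marginally: the zero pattern
# of H, not of Sigma, is what encodes the graph.
```

This is why sparse estimation targets the precision matrix directly; sparsifying the covariance would encode marginal, not conditional, independence.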
Learning the Kernel Matrix with Semi-Definite Programming (2002). Cited by 780 (22 self).
"... Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space ... classical model selection ..."
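The kernel matrix described in the abstract can be built and checked in a few lines; the RBF kernel below is one standard choice, not the one the paper learns via SDP.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram (kernel) matrix with K_ij = exp(-gamma * ||x_i - x_j||^2).

    A valid kernel matrix is symmetric positive semi-definite: it encodes all
    pairwise inner products of the implicitly embedded points, which is the
    only information a kernel method ever uses about the data.
    """
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel_matrix(X)
eigvals = np.linalg.eigvalsh(K)                    # PSD check: no negative eigenvalues
```

Learning the kernel, as in the paper, amounts to optimising over such PSD matrices subject to a semidefinite constraint rather than fixing one kernel function in advance.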
Advanced Computation of a Sparse Precision Matrix. HADAP: A Hadamard-Dantzig Estimation of a Sparse Precision Matrix
"... Estimating large sparse precision matrices is an interesting and challenging problem in many fields of sciences, engineering, and humanities, thanks to advances in computing technologies. Recent applications often encounter high dimensionality with a limited number of data points leading ... on the setting of the problem. In this work, we propose an innovative approach named HADAP for estimating the precision matrix by minimizing a criterion combining a relaxation of the gradient log-likelihood and a penalization of lasso type. We derive an efficient Alternating Direction Method of Multipliers (ADMM) ..."
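ADMM schemes for lasso-penalised precision estimation all share one sub-step: the proximal operator of the ℓ1 penalty, i.e. elementwise soft-thresholding, which is what produces exact zeros in the estimate. The sketch below shows only that generic sub-step; the actual HADAP updates are not reproduced here.

```python
import numpy as np

def soft_threshold(A, lam):
    """Elementwise soft-thresholding: the prox of the penalty lam * ||A||_1.

    In graphical-lasso-style ADMM iterations this operator is applied to an
    intermediate matrix at every step and is the source of the sparsity
    pattern of the final precision estimate.
    """
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

A = np.array([[1.50, -0.20],
              [0.05, -1.00]])
S = soft_threshold(A, 0.3)   # entries with |A_ij| <= 0.3 become exactly 0
```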
Structured Precision Matrix Modelling for Speech Recognition (2006). Cited by 2 (0 self).
"... Declaration: This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration, except where stated. It has not been submitted in whole or in part for a degree at any other university. The length of this thesis including footnotes and appendices i ... For efficiency, the covariance matrix associated with each Gaussian component is assumed diagonal and the probability of successive observations is assumed independent given the HMM state sequence. Consequently, the spectral (intra-frame) and temporal (inter-frame) correlations are poorly modelled. This thesis investigates ways ..."
Stochastic Perturbation Theory (1988). Cited by 886 (35 self).
"... In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variatio ..."
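The first-order expansion mentioned in the abstract can be checked numerically for the matrix inverse, using the standard identity (A + E)^{-1} ≈ A^{-1} - A^{-1} E A^{-1} for a small perturbation E. The matrices below are an illustration of the idea, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
Ainv = np.linalg.inv(A)

eps = 1e-3
E = eps * rng.standard_normal((2, 2))        # small random perturbation

exact = np.linalg.inv(A + E)                 # perturbed quantity
first_order = Ainv - Ainv @ E @ Ainv         # first-order expansion
err = np.linalg.norm(exact - first_order)    # residual is O(eps^2)
```

Because the expansion is linear in E, statistics of the perturbed inverse (mean, variance) can be propagated from the distribution of E, which is the programme the paper carries out for classical perturbation results.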
An Extended Set of Fortran Basic Linear Algebra Subprograms. ACM Transactions on Mathematical Software (1986). Cited by 526 (72 self).
"... This paper describes an extension to the set of Basic Linear Algebra Subprograms. The extensions are targeted at matrix-vector operations which should provide for efficient and portable implementations of algorithms for high performance computers. ..."
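The matrix-vector operations this paper standardises are the Level-2 BLAS; the canonical example is GEMV, y ← αAx + βy. A one-line reference version makes the contract concrete (real BLAS implementations compute the same thing with blocked, machine-tuned kernels):

```python
import numpy as np

def gemv(alpha, A, x, beta, y):
    """Reference semantics of the Level-2 BLAS GEMV operation:

        y <- alpha * A @ x + beta * y

    Standardising this interface is what lets portable code swap in a
    high-performance implementation underneath without changing the caller.
    """
    return alpha * (A @ x) + beta * y

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([1.0, 1.0])
y0 = np.array([10.0, 10.0])
y = gemv(2.0, A, x, 0.5, y0)   # 2*[3, 7] + 0.5*[10, 10] = [11, 19]
```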
Optimal Linear Shrinkage Estimator for Large Dimensional Precision Matrix
"... In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics when the number of variables p → ∞ and the sample size n → ∞ so that p/n → c ∈ (0, +∞). The precision matrix is estimated directly, without inverting the corresp ..."
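To see why shrinkage helps when p/n is not small, a naive baseline is enough: shrink the sample covariance toward a scaled identity before inverting. This is only an illustrative stand-in; the paper's estimator is different (it shrinks the precision matrix directly, with optimally chosen weights).

```python
import numpy as np

def shrinkage_precision(X, alpha=0.3):
    """Naive linear-shrinkage precision estimate (illustration only).

    The sample covariance S is pulled toward (tr(S)/p) * I with weight alpha,
    which keeps the inverse well conditioned even when the dimension p is
    close to the sample size n and S itself is nearly singular.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    target = (np.trace(S) / p) * np.eye(p)
    S_shrunk = (1 - alpha) * S + alpha * target
    return np.linalg.inv(S_shrunk)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 40))       # n = 50 barely exceeds p = 40
P = shrinkage_precision(X)
```

With n barely above p, inverting the raw sample covariance amplifies estimation noise enormously; the shrunk inverse stays positive definite and well conditioned.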
Finding community structure in networks using the eigenvectors of matrices (2006). Cited by 500 (0 self).
"... We consider the problem of detecting communities or modules in networks, groups of vertices with a higher-than-average density of edges connecting them. Previous work indicates that a robust approach to this problem is the maximization of the benefit function known as "modularity" over possible divisions of a network. Here we show that this maximization process can be written in terms of the eigenspectrum of a matrix we call the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian in graph partitioning calculations. This result leads us to a ..."
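The modularity matrix the abstract describes is B = A - kk'/(2m), where A is the adjacency matrix, k the degree vector and m the edge count; splitting vertices by the sign of its leading eigenvector is the paper's spectral bisection method. The toy graph below (two triangles joined by one edge) is my own example.

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

k = A.sum(axis=1)                    # degree vector
m = A.sum() / 2                      # number of edges
B = A - np.outer(k, k) / (2 * m)     # modularity matrix

w, V = np.linalg.eigh(B)             # eigenvalues in ascending order
leading = V[:, np.argmax(w)]         # eigenvector of the largest eigenvalue
labels = leading > 0                 # sign pattern = proposed two communities
```

The sign split recovers the two triangles, mirroring the role the graph Laplacian's Fiedler vector plays in classical partitioning.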