Results 1 – 10 of 4,671
Information Bottleneck for Gaussian Variables
in Advances in Neural Information Processing Systems 16, 2003
"... ∗ Both authors contributed equally. The problem of extracting the relevant aspects of data was addressed through the information bottleneck (IB) method, by (soft) clustering one variable while preserving information about another relevance variable. An interesting question addressed in the current ..."
Cited by 23 (2 self)
case of multivariate Gaussian variables. The obtained optimal representation is a noisy linear projection to eigenvectors of the normalized correlation matrix Σ_{x|y}Σ_x^{-1}, which is also the basis obtained in Canonical Correlation Analysis. However, in Gaussian IB, the compression tradeoff parameter ...
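The basis described in the snippet can be checked numerically. A minimal sketch with synthetic jointly Gaussian data (the dimensions, mixing matrix `A`, and noise level below are illustrative assumptions, not taken from the paper): form the conditional covariance Σ_{x|y} = Σ_x − Σ_xy Σ_y^{-1} Σ_yx and eigendecompose Σ_{x|y}Σ_x^{-1}; its eigenvalues equal 1 − ρ_i² for the canonical correlations ρ_i, so the smallest eigenvalues mark the directions of X most informative about Y.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic jointly Gaussian data: Y is a noisy linear function of X.
# (Dimensions, mixing matrix and noise level are illustrative choices.)
n = 20_000
X = rng.standard_normal((n, 3))
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, -0.5]])
Y = X @ A.T + 0.3 * rng.standard_normal((n, 2))

Sx = np.cov(X, rowvar=False)              # Sigma_x
Sy = np.cov(Y, rowvar=False)              # Sigma_y
Sxy = np.cov(X, Y, rowvar=False)[:3, 3:]  # Sigma_xy (cross-covariance block)

# Conditional covariance Sigma_{x|y} = Sigma_x - Sigma_xy Sigma_y^{-1} Sigma_yx
Sx_given_y = Sx - Sxy @ np.linalg.solve(Sy, Sxy.T)

# Eigenvalues of Sigma_{x|y} Sigma_x^{-1} are 1 - rho_i^2 for the canonical
# correlations rho_i: small eigenvalues = directions informative about Y.
eigvals = np.linalg.eigvals(Sx_given_y @ np.linalg.inv(Sx)).real
eigvals.sort()
print(np.round(eigvals, 3))
```

The matrix Σ_{x|y}Σ_x^{-1} is similar to a symmetric positive-semidefinite matrix, so its eigenvalues are real and lie in [0, 1] even though the product itself is not symmetric.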
Information Bottleneck for Gaussian Variables
"... Abstract The problem of extracting the relevant aspects of data was addressed through the information bottleneck (IB) method, by (soft) clustering one variable while preserving information about another relevance variable. An interesting question addressed in the current work is the extension of ..."
variables. The obtained optimal representation is a noisy linear projection to eigenvectors of the normalized correlation matrix Σ_{x|y}Σ_x^{-1}, which is also the basis obtained in Canonical Correlation Analysis. However, in Gaussian IB, the compression tradeoff parameter uniquely determines ...
Image denoising using a scale mixture of Gaussians in the wavelet domain
in IEEE Transactions on Image Processing, 2003
"... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector ..."
Cited by 513 (17 self)
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian ...
Rainfall modelling using a latent Gaussian variable
in Modelling Longitudinal and Spatially Correlated Data: Methods, Applications, and Future Directions, 1997
"... A monotonic transformation is applied to hourly rainfall data to achieve marginal normality. This defines a latent Gaussian variable, with zero rainfall corresponding to censored values below a threshold. Autocorrelations of the latent variable are estimated by maximum likelihood. The goodness ..."
Cited by 9 (4 self)
A monotonic transformation is applied to hourly rainfall data to achieve marginal normality. This defines a latent Gaussian variable, with zero rainfall corresponding to censored values below a threshold. Autocorrelations of the latent variable are estimated by maximum likelihood ...
High dimensional graphs and variable selection with the Lasso
in Annals of Statistics, 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Cited by 736 (22 self)
is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. We ...
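The procedure in the snippet can be sketched end to end: Lasso-regress each variable on all the others and connect two nodes when either regression assigns the other a nonzero coefficient. A minimal NumPy illustration, assuming a hand-rolled coordinate-descent Lasso and a small Gaussian chain graph (the penalty level, sample size, and graph are illustrative choices, not from the paper):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    Minimizes (1/2)||y - X b||^2 + lam * n * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return beta

def neighborhood_selection(X, lam):
    """Regress each variable on the rest; join i-j if either
    coefficient is nonzero (the 'OR' combination rule)."""
    p = X.shape[1]
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta = lasso_cd(X[:, others], X[:, j], lam)
        for b, k in zip(beta, others):
            if abs(b) > 1e-8:
                adj[j, k] = True
    return adj | adj.T

# Gaussian chain graph 0-1-2-3: the precision matrix is tridiagonal,
# so the only conditional dependencies are between chain neighbors.
rng = np.random.default_rng(1)
P = np.eye(4) + np.diag([0.4] * 3, 1) + np.diag([0.4] * 3, -1)
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(P), size=8000)
X -= X.mean(axis=0)
adj = neighborhood_selection(X, lam=0.15)
print(adj.astype(int))
```

Each node's regression is a separate small Lasso problem, which is what makes the method attractive for high-dimensional graphs: the p node-wise problems are independent and trivially parallel.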
Mixtures of Probabilistic Principal Component Analysers
1998
"... Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination ..."
Cited by 532 (6 self)
maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context ...
Envelope and Phase Distribution of Two Correlated Gaussian Variables
"... Abstract—Probability density functions (pdf's) are derived for the phase and amplitude (envelope) of the complex gain X + jY (j = √−1), where X and Y are two correlated nonzero-mean Gaussian random variables. The pdf of the amplitude is derived as an infinite series, but reduces to a closed-form expression ..."
Cited by 1 (0 self)
Abstract—Probability density functions (pdf's) are derived for the phase and amplitude (envelope) of the complex gain X + jY (j = √−1), where X and Y are two correlated nonzero-mean Gaussian random variables. The pdf of the amplitude is derived as an infinite series, but reduces to a closed ...
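Because the amplitude pdf is an infinite series, a Monte Carlo estimate is a handy companion for sanity-checking any closed-form evaluation. A small sketch (the means, variances, and correlation below are arbitrary illustrative choices): draw correlated nonzero-mean Gaussians X and Y, form the complex gain X + jY, and histogram the envelope and phase.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two correlated nonzero-mean Gaussians forming a complex gain X + jY.
mean = np.array([1.0, 0.5])
rho = 0.6
cov = np.array([[1.0, rho],
                [rho, 1.0]])
samples = rng.multivariate_normal(mean, cov, size=200_000)
X, Y = samples[:, 0], samples[:, 1]

envelope = np.hypot(X, Y)   # amplitude |X + jY|
phase = np.arctan2(Y, X)    # phase, in (-pi, pi]

# Empirical pdf estimates via normalized histograms.
env_pdf, env_edges = np.histogram(envelope, bins=100, density=True)
ph_pdf, ph_edges = np.histogram(phase, bins=100, density=True)
print(f"mean envelope ~ {envelope.mean():.3f}, mean phase ~ {phase.mean():.3f}")
```

The histogram estimates can then be compared bin by bin against a truncation of the series expression to decide how many terms are needed.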
Dynamic Bayesian Networks: Representation, Inference and Learning
2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been ..."
Cited by 770 (3 self)
random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from ...
A Bernstein-type inequality for stochastic processes of quadratic forms of Gaussian variables
2009
"... We introduce a Bernstein-type inequality which serves to uniformly control quadratic forms of Gaussian variables. The latter can, for example, be used to derive sharp model selection criteria for linear estimation in linear regression and linear inverse problems via penalization, and we do not exclude ..."
Cited by 7 (0 self)
We introduce a Bernstein-type inequality which serves to uniformly control quadratic forms of Gaussian variables. The latter can, for example, be used to derive sharp model selection criteria for linear estimation in linear regression and linear inverse problems via penalization, and we do ...
Approximation Problems with the Divergence Criterion for Gaussian Variables and Gaussian Processes
"... System identification for stationary Gaussian processes includes an approximation problem. Currently the subspace algorithm for this problem enjoys much attention. This algorithm is based on a transformation of a finite time series to canonical variable form followed by a truncation. There is no pro ..."
System identification for stationary Gaussian processes includes an approximation problem. Currently the subspace algorithm for this problem enjoys much attention. This algorithm is based on a transformation of a finite time series to canonical variable form followed by a truncation ...