Results 1–10 of 15
Large scale Bayesian inference and experimental design for sparse linear models
 Journal of Physics: Conference Series
"... Abstract. Many problems of lowlevel computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or superresolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higherorder Bayesian decisionmaking prob ..."
Abstract

Cited by 20 (2 self)
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images. Parts of this work have been presented at
Gaussian sampling by local perturbations
"... We present a technique for exact simulation of Gaussian Markov random fields (GMRFs), which can be interpreted as locally injecting noise to each Gaussian factor independently, followed by computing the mean/mode of the perturbed GMRF. Coupled with standard iterative techniques for the solution of s ..."
Abstract

Cited by 18 (1 self)
We present a technique for exact simulation of Gaussian Markov random fields (GMRFs), which can be interpreted as locally injecting noise into each Gaussian factor independently, followed by computing the mean/mode of the perturbed GMRF. Coupled with standard iterative techniques for the solution of symmetric positive definite systems, this yields a very efficient sampling algorithm with essentially linear complexity in terms of speed and memory requirements, well suited to extremely large scale probabilistic models. Apart from synthesizing data under a Gaussian model, the proposed technique directly leads to an efficient unbiased estimator of marginal variances. Beyond Gaussian models, the proposed algorithm is also very useful for handling highly non-Gaussian continuously-valued MRFs such as those arising in statistical image modeling or in the first layer of deep belief networks describing real-valued data, where the non-quadratic potentials coupling different sites can be represented as finite or infinite mixtures of Gaussians with the help of local or distributed latent mixture assignment variables. The Bayesian treatment of such models most naturally involves a block Gibbs sampler which alternately draws samples of the conditionally independent latent mixture assignments and the conditionally multivariate Gaussian continuous vector, and we show that it can directly benefit from the proposed methods.
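The local-perturbation idea can be illustrated on a simple Bayesian linear model: each Gaussian factor (likelihood and prior) is perturbed independently, and the mean/mode of the perturbed model is one exact posterior sample. The following is a minimal dense sketch under notation of our own choosing (`X`, `y`, `sigma`, `tau` are illustrative assumptions, not the paper's API); at scale the final solve would be done with conjugate gradients, so only matrix-vector products are required.

```python
import numpy as np

def perturb_and_solve_sample(X, y, sigma, tau, rng):
    """Draw one exact sample from the Gaussian posterior of u in
    y = X u + noise, with prior u ~ N(0, tau^{-1} I) and noise
    variance sigma^2 (illustrative dense version)."""
    M, n = X.shape
    # Posterior precision A = X^T X / sigma^2 + tau I.
    A = X.T @ X / sigma**2 + tau * np.eye(n)
    # Perturb each Gaussian factor independently: the likelihood
    # factor via noisy pseudo-observations, the prior factor via a
    # draw from the prior itself.
    y_tilde = y + sigma * rng.standard_normal(M)
    u0 = rng.standard_normal(n) / np.sqrt(tau)
    # Mean/mode of the perturbed model = one exact posterior sample.
    b = X.T @ y_tilde / sigma**2 + tau * u0
    return np.linalg.solve(A, b)
```

Averaging the squared deviations of such samples around the posterior mean gives exactly the unbiased marginal variance estimator mentioned in the abstract.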
Convex variational Bayesian inference for large scale generalized linear models
 In ICML
, 2009
"... We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any logconcave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary superGaussian potenti ..."
Abstract

Cited by 13 (6 self)
We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. By iteratively decoupling the criterion, most of the work can be done by solving large linear systems, rendering our algorithm orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models, and show how to address settings with many candidates and sequential inclusion steps.
Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models
, 2008
"... Sparsity is a fundamental concept of modern statistics, and often the only general principle available at the moment to address novel learning applications with many more variables than observations. While much progress has been made recently in the theoretical understanding and algorithmics of spa ..."
Abstract

Cited by 9 (5 self)
Sparsity is a fundamental concept of modern statistics, and often the only general principle available at the moment to address novel learning applications with many more variables than observations. While much progress has been made recently in the theoretical understanding and algorithmics of sparse point estimation, higher-order problems such as covariance estimation or optimal data acquisition are seldom addressed for sparsity-favouring models, and there are virtually no algorithms for large scale applications of these. We provide novel approximate Bayesian inference algorithms for sparse generalized linear models, which can be used with hundreds of thousands of variables, and run orders of magnitude faster than previous algorithms in domains where either apply. By analyzing our methods and establishing some novel convexity results, we settle a long-standing open question about variational Bayesian inference for continuous variable models: the Gaussian lower bound relaxation, which has been used previously for a range of models, is proved to be a convex optimization problem if and only if the posterior mode is found by convex programming. Our algorithms reduce to the same computational primitives as commonly used sparse estimation methods, but require Gaussian marginal variance estimation as well. We show how the Lanczos algorithm from numerical mathematics can be employed to compute the latter. We are interested here in Bayesian experimental design (which is mainly driven by efficient approximate inference), a powerful framework for optimizing measurement architectures of complex signals, such as natural images. Designs
Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference
"... We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuousvariable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from [15] with covariance decoupling techniques [23 ..."
Abstract

Cited by 9 (0 self)
We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuous-variable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from [15] with covariance decoupling techniques [23, 13], it runs at least an order of magnitude faster than the most common EP solver.
On the submodularity of linear experimental design. Unpublished Note
, 2009
"... Here, I review facts that are most probably known, namely that the information gain criterion used to drive experimental design in a linearGaussian model is submodular, so that a wellknown approximation guarantee holds for the sequential greedy algorithm. The criterion is equal to a certain mutual ..."
Abstract

Cited by 5 (0 self)
Here, I review facts that are most probably known, namely that the information gain criterion used to drive experimental design in a linear-Gaussian model is submodular, so that a well-known approximation guarantee holds for the sequential greedy algorithm. The criterion is equal to a certain mutual information, which is not submodular in general. I point out the high potential relevance of obtaining approximation guarantees for nonlinear experimental design as well.

1 Submodularity of Linear Experimental Design

Let u ∈ R^n be a latent vector of interest, X ∈ R^{M×n} a (complete) design matrix, r = Xu, and y = r + ε, where u ∼ P(u) = N(0, I) and ε ∼ N(0, σ²I) independently. Given a subset I ⊂ {1, …, M}, we are interested in reconstructing u from measurements y_I obtained with the design X_{I,·} ∈ R^{|I|×n}. The goal of experimental design is to choose a subset I so that the posterior uncertainty in u | y_I is as small as possible, over all subsets of the same size. The criterion of interest is f(I) := H[P(u)] − E_{y_I}[H[P(u | y_I)]].
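In this linear-Gaussian setup the information gain has the closed form f(I) = ½ log det(I + σ⁻² X_I X_Iᵀ), and the sequential greedy algorithm with the (1 − 1/e) approximation guarantee can be sketched as follows. Function names and the dense per-candidate evaluation are illustrative assumptions, not the note's code.

```python
import numpy as np

def info_gain(X_I, sigma2):
    """Information gain f(I) = 1/2 log det(I + sigma^{-2} X_I X_I^T)
    for prior u ~ N(0, I) and noise variance sigma^2."""
    k = X_I.shape[0]
    _, logdet = np.linalg.slogdet(np.eye(k) + X_I @ X_I.T / sigma2)
    return 0.5 * logdet

def greedy_design(X, sigma2, budget):
    """Sequential greedy selection of measurement rows; since f is
    monotone submodular, this achieves at least a (1 - 1/e) fraction
    of the optimal gain for the given budget."""
    selected = []
    remaining = list(range(X.shape[0]))
    for _ in range(budget):
        # Pick the candidate row with the largest marginal gain.
        gains = [info_gain(X[selected + [i]], sigma2) for i in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected
```

A practical implementation would update the log-determinant incrementally (rank-one updates) instead of re-evaluating f from scratch for every candidate.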
Gaussian Covariance and Scalable Variational Inference
"... We analyze computational aspects of variational approximate inference techniques for sparse linear models, which have to be understood to allow for large scale applications. Gaussian covariances play a key role, whose approximation is computationally hard. While most previous methods gain scalabilit ..."
Abstract

Cited by 5 (3 self)
We analyze computational aspects of variational approximate inference techniques for sparse linear models, which have to be understood to allow for large scale applications. Gaussian covariances play a key role, whose approximation is computationally hard. While most previous methods gain scalability by not even representing most posterior dependencies, harmful factorization assumptions can be avoided by employing data-dependent low-rank approximations instead. We provide theoretical and empirical insights into algorithmic and statistical consequences of low-rank covariance approximation errors on decision outcomes in nonlinear sequential Bayesian experimental design.
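One concrete instance of a data-dependent low-rank covariance approximation is to keep only the k leading eigendirections of the posterior covariance, i.e. the k smallest eigenpairs of the precision matrix, when estimating marginal variances. The sketch below uses a dense eigendecomposition for clarity and names of our own choosing; whether this matches the paper's exact construction is an assumption. At scale one would obtain the eigenpairs with a Lanczos iteration driven by matrix-vector products.

```python
import numpy as np

def lowrank_marginal_variances(A, k):
    """Approximate diag(A^{-1}) from the k smallest eigenpairs of the
    precision matrix A (= the k leading directions of covariance).
    The discarded remainder of A^{-1} is positive semidefinite, so
    this is always an underestimate of the true variances."""
    w, Q = np.linalg.eigh(A)        # eigenvalues ascending
    w_k, Q_k = w[:k], Q[:, :k]      # smallest precision = largest variance
    return np.sum(Q_k**2 / w_k, axis=1)
```

The systematic underestimation of marginal variances is exactly the kind of approximation error whose downstream effect on sequential design decisions the abstract refers to.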
Speeding up magnetic resonance image acquisition by Bayesian multislice adaptive compressed sensing. Supplemental Appendix
, 2010
"... We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale nonGaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any ..."
Abstract

Cited by 4 (3 self)
We show how to sequentially optimize magnetic resonance imaging measurement designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale non-Gaussian linear dynamical system, tracking dominating directions of posterior covariance without imposing any factorization constraints. Our approach can be scaled up to high-resolution images by reductions to numerical mathematics primitives and parallelization on several levels. In a first study, designs are found that improve significantly on others chosen independently for each slice or drawn at random.
glmie: Generalised Linear Models Inference & Estimation Toolbox
"... Theglmie toolbox contains functionality for estimation and inference in generalised linear models over continuousvalued variables. Besides a variety of penalised least squares solvers for estimation, it offers inference based on (convex) variational bounds, on expectation propagation and on factor ..."
Abstract
The glmie toolbox contains functionality for estimation and inference in generalised linear models over continuous-valued variables. Besides a variety of penalised least squares solvers for estimation, it offers inference based on (convex) variational bounds, on expectation propagation and on factorial mean field. Scalable and efficient inference in fully-connected undirected graphical models or Markov random fields with Gaussian and non-Gaussian potentials is achieved by casting all the computations as matrix-vector multiplications. We provide a wide choice of penalty functions for estimation, potential functions for inference and matrix classes with lazy evaluation for convenient modelling. We designed the glmie package to be simple, generic and easily extensible. Most of the code is written in Matlab, including some MEX files, to be fully compatible with both Matlab 7.x and GNU Octave 3.3.x. Large scale probabilistic classification as well as sparse linear modelling can be performed in a common algorithmic framework by the glmie toolkit.
, 2014
"... The aim of this thesis is to investigate a GPUbased scalable image reconstruction algorithm for transmission tomography based on a Gaussian noise model for the log transformed and calibrated measurements. The proposed algorithm is based on sparse Bayesian learning (SBL) which promotes sparsity of t ..."
Abstract
The aim of this thesis is to investigate a GPU-based scalable image reconstruction algorithm for transmission tomography based on a Gaussian noise model for the log-transformed and calibrated measurements. The proposed algorithm is based on sparse Bayesian learning (SBL), which promotes sparsity of the imaged object by introducing additional latent variables, one for each pixel/voxel, and learning them from the data using a hierarchical Bayesian model. We address the computational bottleneck of SBL, which arises in the computation of posterior variances. Two scalable methods for efficient estimation of variances were studied and tested: the first is based on a matrix probing technique; the second is based on a Monte Carlo estimator. Finally, we study adaptive data acquisition methods, where instead of using a standard scan around the object, the source locations are selected based on the learned information from previously available measurements, leading to fewer projections.
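A standard probing-style Monte Carlo estimator of the kind of posterior variances referred to above uses Rademacher probe vectors: for v with independent ±1 entries, E[v ⊙ A⁻¹v] = diag(A⁻¹). The sketch below is one common variant of such an estimator and not necessarily the thesis's exact method; `solve` stands for any black-box linear solver (on the GPU this would typically be conjugate gradients).

```python
import numpy as np

def probe_diag_inverse(solve, n, num_probes, rng):
    """Monte Carlo / probing estimate of diag(A^{-1}).
    `solve(v)` must return A^{-1} v, e.g. via conjugate gradients;
    with Rademacher probes v, E[v * solve(v)] = diag(A^{-1}),
    so averaging over probes gives an unbiased estimate."""
    acc = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * solve(v)
    return acc / num_probes
```

The estimator's variance is governed by the off-diagonal mass of A⁻¹, which is why it works well precisely when posterior correlations are modest or localized.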