Results 1–10 of 47
Bayesian compressive sensing via belief propagation
 IEEE Trans. Signal Processing
, 2010
"... Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, subNyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can comple ..."
Abstract

Cited by 129 (19 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast encoding and decoding is provided using sparse encoding matrices, which also improve BP convergence by reducing the presence of loops in the graph. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, sparse encoding matrices and the CS-BP decoding algorithm can be modified to support a variety of signal models and measurement noise.
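As a rough illustration of these scalings (not the paper's method), the sketch below builds a sparse ±1 encoding matrix with a few nonzeros per column and takes M ≈ 4K log N measurements of a K-sparse signal. For brevity it decodes with plain orthogonal matching pursuit rather than the CS-BP belief-propagation decoder; all sizes, the seed, and the nonzeros-per-column count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: length-N signal with K large coefficients,
# M = O(K log N) measurements (constant 4 chosen for illustration).
N, K = 256, 8
M = int(4 * K * np.log(N))

# Sparse, LDPC-like encoding matrix: a few ±1 entries per column.
Phi = np.zeros((M, N))
for j in range(N):
    rows = rng.choice(M, size=8, replace=False)
    Phi[rows, j] = rng.choice([-1.0, 1.0], size=8)

# K-sparse test signal and its compressive measurements.
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
y = Phi @ x

# Greedy decoding stand-in (orthogonal matching pursuit); the paper's
# CS-BP decoder instead runs belief propagation on the matrix's graph.
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x))  # reconstruction error
```

Note that M ≈ 177 here is far smaller than N = 256, yet large enough relative to K = 8 that greedy recovery typically succeeds.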
Bayesian inference and optimal design in the sparse linear model
 Workshop on Artificial Intelligence and Statistics
"... The linear model with sparsityfavouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of ..."
Abstract

Cited by 110 (12 self)
The linear model with a sparsity-favouring prior on the coefficients has important applications in many different domains. In machine learning, most methods to date search for maximum a posteriori sparse solutions and neglect to represent posterior uncertainties. In this paper, we address problems of Bayesian optimal design (or experiment planning), for which accurate estimates of uncertainty are essential. To this end, we employ expectation propagation approximate inference for the linear model with Laplace prior, giving new insight into numerical stability properties and proposing a robust algorithm. We also show how to estimate model hyperparameters by empirical Bayesian maximisation of the marginal likelihood, and propose ideas in order to scale up the method to very large underdetermined problems. We demonstrate the versatility of our framework on the application of gene regulatory network identification from microarray expression data, where both the Laplace prior and the active experimental design approach are shown to result in significant improvements. We also address the problem of sparse coding of natural images, and show how our framework can be used for compressive sensing tasks. Part of this work appeared in Seeger et al. (2007b). The gene network identification application appears in Steinke et al. (2007).
Exploiting structure in wavelet-based Bayesian compressive sensing
, 2009
"... Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a ..."
Abstract

Cited by 92 (14 self)
Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and therefore this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
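The compressibility and coefficient structure being exploited can be seen with a toy Haar transform: for a piecewise-constant signal, detail coefficients vanish except along the chain of scales whose analysis pairs straddle a jump, which is the kind of parent-child structure wavelet-based models capture. The `haar_1d` helper and test signal below are illustrative inventions, not from the paper.

```python
import numpy as np

def haar_1d(x):
    """Full 1-D orthonormal Haar decomposition (length a power of two)."""
    x = x.astype(float).copy()
    coeffs = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        coeffs.append(det)        # detail coefficients at this scale
        x = avg                   # recurse on the coarser approximation
    coeffs.append(x)              # final approximation coefficient
    return np.concatenate(coeffs[::-1])

# Piecewise-constant signal with one jump on a dyadic boundary: almost all
# Haar detail coefficients are exactly zero.
sig = np.concatenate([np.ones(32), 3 * np.ones(32)])
w = haar_1d(sig)
print(np.count_nonzero(np.abs(w) > 1e-12))  # 2 of 64 coefficients nonzero
```

Because the transform is orthonormal, the signal's energy is preserved while almost all of it concentrates in a handful of coefficients.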
Fast Bayesian compressive sensing using Laplace priors
 in IEEE Int. Conf. on Acoustics, Speech, and Sig. Proc. (ICASSP '09)
, 2009
"... In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework by utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree ..."
Abstract

Cited by 66 (11 self)
In this paper we model the components of the compressive sensing (CS) problem using the Bayesian framework, utilizing a hierarchical form of the Laplace prior to model sparsity of the unknown signal. This signal prior includes some of the existing models as special cases and achieves a high degree of sparsity. We develop a constructive (greedy) algorithm resulting from this formulation, where the necessary parameters are estimated solely from the observation, so no user intervention is needed. We provide experimental results with synthetic 1-D signals and images, and compare with state-of-the-art CS reconstruction algorithms, demonstrating the superior performance of the proposed approach. Index Terms — Bayesian methods, compressive sensing, inverse problems, sparse Bayesian learning, relevance vector machine
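A classical special case gives intuition for why a Laplace prior promotes sparsity: with Gaussian noise and an orthonormal design, the MAP estimate is the soft-thresholding operator, which sets small coefficients exactly to zero. This is only the textbook non-hierarchical case, not the paper's hierarchical prior or greedy algorithm; the function name and values below are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """MAP estimate under a Laplace prior for x observed as z = x + noise
    (orthonormal design): argmin_x 0.5*(z - x)**2 + t*|x|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([-3.0, -0.2, 0.0, 0.4, 2.5])
print(soft_threshold(z, 0.5))  # entries below the threshold become exactly zero
```

A Gaussian prior would instead shrink every entry multiplicatively and never produce exact zeros, which is why Laplace-type priors are preferred for sparse recovery.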
Learning From Measurements in Exponential Families
"... Given a model family and a set of unlabeled examples, one could either label specific examples or state general constraints—both provide information about the desired model. In general, what is the most costeffective way to learn? To address this question, we introduce measurements, a general class ..."
Abstract

Cited by 54 (1 self)
Given a model family and a set of unlabeled examples, one could either label specific examples or state general constraints; both provide information about the desired model. In general, what is the most cost-effective way to learn? To address this question, we introduce measurements, a general class of mechanisms for providing information about a target model. We present a Bayesian decision-theoretic framework, which allows us to both integrate diverse measurements and choose new measurements to make. We use a variational inference algorithm, which exploits exponential family duality. The merits of our approach are demonstrated on two sequence labeling tasks.
Bayesian Robust Principal Component Analysis
, 2010
"... A hierarchical Bayesian model is considered for decomposing a matrix into lowrank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly nonstationary noise statistics. The Bayesian framework infers an approximate r ..."
Abstract

Cited by 40 (4 self)
A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
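For orientation, the optimization-based formulation that the paper compares against decomposes an observed matrix by trading a nuclear-norm (low-rank) penalty against an ℓ1 (sparse) penalty. Below is a crude alternating-shrinkage sketch of that idea, not the paper's Bayesian model; the function name, λ, sizes, and seed are all arbitrary.

```python
import numpy as np

def rpca_sketch(M, lam, n_iter=100):
    """Alternate a singular-value threshold (low-rank step) with an
    entrywise soft threshold (sparse step); a crude stand-in for
    optimization-based robust PCA."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: shrink singular values of the residual M - S.
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sv - lam, 0.0)) @ Vt
        # Sparse update: entrywise shrinkage of the residual M - L.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

rng = np.random.default_rng(0)
# Rank-1 ground truth plus a few large sparse outliers.
u, v = rng.normal(size=(20, 1)), rng.normal(size=(1, 30))
truth_L = u @ v
truth_S = np.zeros((20, 30))
truth_S[rng.integers(0, 20, 15), rng.integers(0, 30, 15)] = 10.0
L, S = rpca_sketch(truth_L + truth_S, lam=0.5)
```

Each alternating step solves its subproblem exactly, so the penalized objective decreases monotonically; the Bayesian model in the paper avoids hand-tuning λ by inferring the analogous quantities from the data.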
Large scale Bayesian inference and experimental design for sparse linear models
 Journal of Physics: Conference Series
"... Abstract. Many problems of lowlevel computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or superresolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higherorder Bayesian decisionmaking prob ..."
Abstract

Cited by 20 (2 self)
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density’s mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex if and only if posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images. Parts of this work have been presented at
Optimization of k-space trajectories for compressed sensing by Bayesian experimental design
 MAGNETIC RESONANCE IN MEDICINE
, 2009
"... The optimization of kspace sampling for nonlinear sparse MRI reconstruction is phrased as Bayesian experimental design problem. Bayesian inference is approximated by a novel relaxation to standard signal processing primitives, resulting in an efficient optimization algorithm for Cartesian and spi ..."
Abstract

Cited by 16 (3 self)
The optimization of k-space sampling for nonlinear sparse MRI reconstruction is phrased as a Bayesian experimental design problem. Bayesian inference is approximated by a novel relaxation to standard signal processing primitives, resulting in an efficient optimization algorithm for Cartesian and spiral trajectories. On clinical-resolution brain image data from a Siemens 3T scanner, automatically optimized trajectories lead to significantly improved images, compared to standard low-pass, equispaced, or variable-density randomized designs. Insights into the nonlinear design optimization problem for MR imaging are given.
Bayesian Experimental Design of Magnetic Resonance Imaging Sequences
"... We show how improved sequences for magnetic resonance imaging can be found through optimization of Bayesian design scores. Combining approximate Bayesian inference and natural image statistics with highperformance numerical computation, we propose the first Bayesian experimental design framework fo ..."
Abstract

Cited by 15 (9 self)
We show how improved sequences for magnetic resonance imaging can be found through optimization of Bayesian design scores. Combining approximate Bayesian inference and natural image statistics with high-performance numerical computation, we propose the first Bayesian experimental design framework for this problem of high relevance to clinical and brain research. Our solution requires large-scale approximate inference for dense, non-Gaussian models. We propose a novel scalable variational inference algorithm, and show how powerful methods of numerical mathematics can be modified to compute primitives in our framework. Our approach is evaluated on raw data from a 3T MR scanner.
Convex variational Bayesian inference for large scale generalized linear models
 In ICML
, 2009
"... We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any logconcave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary superGaussian potenti ..."
Abstract

Cited by 13 (6 self)
We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. By iteratively decoupling the criterion, most of the work can be done by solving large linear systems, rendering our algorithm orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models, and show how to address settings with many candidates and sequential inclusion steps.