Results 1 - 5 of 5
Bilinear Generalized Approximate Message Passing
, 2013
Abstract

Cited by 7 (2 self)
Abstract—We extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear GAMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
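The bilinear setting the abstract describes is recovery of factors A and X from noisy observations of their product, with damping used to stabilize the iterations. BiG-AMP itself involves message-passing updates that are too involved to reproduce here; the following is only a minimal sketch of the problem setup, using naive damped alternating least squares as a stand-in (the problem sizes, the fixed damping factor, and the ALS updates are all illustrative assumptions, not the paper's algorithm, which adapts the damping per iteration).

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, R = 40, 50, 3  # hypothetical problem sizes (not from the paper)

# Ground-truth low-rank factors and noisy bilinear observations Y ~ A @ X
A_true = rng.standard_normal((M, R))
X_true = rng.standard_normal((R, N))
Y = A_true @ X_true + 0.01 * rng.standard_normal((M, N))

# Naive damped alternating least squares: damping plays the same
# stabilizing role here that adaptive damping plays in BiG-AMP.
beta = 0.7  # fixed damping factor (illustrative; BiG-AMP adapts this)
A = rng.standard_normal((M, R))
X = rng.standard_normal((R, N))
for _ in range(100):
    X_new = np.linalg.lstsq(A, Y, rcond=None)[0]      # solve A @ X = Y for X
    X = (1 - beta) * X + beta * X_new                 # damped update
    A_new = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T  # solve X.T @ A.T = Y.T
    A = (1 - beta) * A + beta * A_new

# Normalized error of the reconstructed product (the product, not the
# individual factors, is identifiable)
nmse = np.linalg.norm(A @ X - A_true @ X_true) / np.linalg.norm(A_true @ X_true)
```

Note that only the product A @ X is recoverable; the factors themselves are identifiable only up to an invertible transform, which is why the error is measured on the product.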
Nonnegative Principal Component Analysis: Message Passing Algorithms and Sharp Asymptotics
, 2014
Abstract

Cited by 3 (2 self)
Principal component analysis (PCA) aims at estimating the direction of maximal variability of a high-dimensional dataset. A natural question is: does this task become easier, and estimation more accurate, when we exploit additional knowledge on the principal vector? We study the case in which the principal vector is known to lie in the positive orthant. Similar constraints arise in a number of applications, ranging from analysis of gene expression data to spike sorting in neural signal processing. In the unconstrained case, the estimation performance of PCA has been precisely characterized using random matrix theory, under a statistical model known as the 'spiked model.' It is known that the estimation error undergoes a phase transition as the signal-to-noise ratio crosses a certain threshold. Unfortunately, tools from random matrix theory have no bearing on the constrained problem. Despite this challenge, we develop an analogous characterization in the constrained case, within a one-spike model. In particular: (i) we prove that the estimation error undergoes a similar phase transition, albeit at a different threshold in signal-to-noise ratio, which we determine exactly; (ii) we prove that, unlike in the unconstrained case, the estimation error depends on the spike vector, and we characterize the least favorable vectors; (iii) we show that a nonnegative principal component can be approximately computed, under the spiked model, in nearly linear time, despite the fact that the problem is non-convex and, in general, NP-hard to solve exactly.
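The spiked model and the nonnegativity constraint can be illustrated with a projected power iteration: multiply by the data matrix, clip to the positive orthant, renormalize. This is a simple heuristic in the spirit of the nearly-linear-time claim, not the paper's message-passing algorithm; the dimension, the signal-to-noise value, and the number of iterations below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
snr = 2.0  # hypothetical signal strength, above the unconstrained threshold

# Spiked Wigner model: Y = snr * v v^T + W, with a nonnegative unit spike v
v = np.abs(rng.standard_normal(n))
v /= np.linalg.norm(v)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)  # symmetric noise, spectrum roughly [-2, 2]
Y = snr * np.outer(v, v) + W

# Projected power iteration: each step costs one matrix-vector product,
# a clip, and a normalization.
x = np.ones(n) / np.sqrt(n)  # nonnegative starting point
for _ in range(200):
    x = Y @ x
    x = np.clip(x, 0.0, None)  # project onto the positive orthant
    x /= np.linalg.norm(x)

overlap = abs(x @ v)  # close to 1 when estimation succeeds
```

Above the phase-transition threshold the overlap with the planted spike is bounded away from zero; below it, the estimate carries essentially no information about v.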
By
, 2015
Abstract
In recent years, there have been massive increases in both the dimensionality and sample sizes of data due to ever-increasing consumer demand coupled with relatively inexpensive sensing technologies. These high-dimensional datasets bring challenges such as complexity, along with numerous opportunities. Though many signals of interest live in a high-dimensional ambient space, they often have a much smaller inherent dimensionality which, if leveraged, leads to improved recoveries. For example, the notion of sparsity is a requisite in the compressive sensing (CS) field, which allows for accurate signal reconstruction from sub-Nyquist sampled measurements given certain conditions. When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution is a priori known, then one could use computationally efficient approximate message passing (AMP) techniques that yield approximate minimum MSE (MMSE) estimates or critical points to the maxi …
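The AMP recursion this abstract refers to can be sketched for the simplest case of a soft-threshold denoiser, where the key ingredient is the Onsager correction term added to the residual. The sizes, sparsity level, and threshold rule below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
n, m, k = 500, 250, 25  # hypothetical signal length, measurements, sparsity
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian sensing matrix
y = A @ x0 + 0.001 * rng.standard_normal(m)

x = np.zeros(n)
z = y.copy()
for _ in range(50):
    tau = np.linalg.norm(z) / np.sqrt(m)   # empirical per-iteration noise level
    r = x + A.T @ z                         # pseudo-data passed to the denoiser
    x_new = soft(r, tau)
    # Onsager correction: residual keeps a memory term proportional to the
    # average derivative of the denoiser (here, the fraction of nonzeros).
    z = y - A @ x_new + z * (np.count_nonzero(x_new) / m)
    x = x_new

nmse = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

Without the Onsager term this reduces to iterative soft thresholding, which converges far more slowly; with it, the effective noise at each iteration stays approximately Gaussian, which is what makes the MMSE-style analysis mentioned in the abstract tractable.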
By
Abstract
Recent developments in compressive sensing (CS) combined with increasing demands for effective high-dimensional inference techniques across a variety of disciplines have motivated extensive research into algorithms exploiting various notions of parsimony, including sparsity and low-rank constraints. In this dissertation, we extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of CS, to handle several classes of bilinear inference problems. First, we consider a general form of noisy CS where there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm called Matrix Uncertain GAMP (MU-GAMP) whose goal is minimization of mean-squared error of the signal estimates in the presence of these uncertainties, with …
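The matrix-uncertainty setting can be made concrete with a small experiment: when the true measurement matrix is a nominal matrix plus an unknown perturbation, an estimator that uses only the nominal matrix absorbs the perturbation as extra, signal-dependent noise. The sizes and perturbation level below are illustrative assumptions, and plain least squares is used only as a baseline (MU-GAMP targets the sparse version of this problem).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 100, 20
A0 = rng.standard_normal((m, n))       # known nominal measurement matrix
E = 0.1 * rng.standard_normal((m, n))  # unknown perturbation (e.g. calibration error)
x0 = rng.standard_normal(n)
y = (A0 + E) @ x0 + 0.01 * rng.standard_normal(m)

# Estimate using only the nominal matrix: E @ x0 acts as unmodeled,
# signal-dependent noise and biases the solution.
x_nominal = np.linalg.lstsq(A0, y, rcond=None)[0]

# Oracle estimate that knows the true matrix, for comparison.
x_oracle = np.linalg.lstsq(A0 + E, y, rcond=None)[0]

err_nominal = np.linalg.norm(x_nominal - x0) / np.linalg.norm(x0)
err_oracle = np.linalg.norm(x_oracle - x0) / np.linalg.norm(x0)
```

The gap between the two errors is the price of ignoring matrix uncertainty, and it is exactly this gap that uncertainty-aware algorithms aim to close.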
Contributions
, 2013
Abstract
Recover low-rank matrix Z from noise-corrupted incomplete observations Y = PΩ …
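The masked-observation model in this snippet can be illustrated with a simple iterative rank projection: fill the unobserved entries with the current estimate, then project back onto the set of rank-R matrices via a truncated SVD. This is a generic baseline for the stated problem, not the cited work's method; the sizes, sampling rate, and known rank are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, R = 50, 60, 2  # hypothetical dimensions and rank
Z = rng.standard_normal((M, R)) @ rng.standard_normal((R, N))  # low-rank target
mask = rng.random((M, N)) < 0.5                # observation set Omega (50% sampled)
Y = np.where(mask, Z + 0.01 * rng.standard_normal((M, N)), 0.0)

# Iterative hard-thresholded SVD (impute, then project to rank R)
Zhat = Y.copy()
for _ in range(100):
    filled = np.where(mask, Y, Zhat)           # keep observations, impute the rest
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    Zhat = (U[:, :R] * s[:R]) @ Vt[:R]         # best rank-R approximation

nmse = np.linalg.norm(Zhat - Z) / np.linalg.norm(Z)
```

With enough observed entries relative to the rank, each projection pulls the imputed entries toward the low-rank completion; nuclear-norm and message-passing approaches refine this same impute-and-project idea.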