Results 1–10 of 94
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images
, 2010
Abstract

Cited by 161 (6 self)
This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of the ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments with both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.
On the linear convergence of the alternating direction method of multipliers
, 2013
Robust Matrix Decomposition with Sparse Corruptions
Abstract

Cited by 47 (4 self)
Abstract—Suppose a given observation matrix can be decomposed as the sum of a low-rank matrix and a sparse matrix, and the goal is to recover these individual components from the observed sum. Such additive decompositions have applications in a variety of numerical problems including system identification, latent variable graphical modeling, and principal components analysis. We study conditions under which recovering such a decomposition is possible via a combination of ℓ1 norm and trace norm minimization. We are specifically interested in the question of how many sparse corruptions are allowed so that convex programming can still achieve accurate recovery, and we obtain stronger recovery guarantees than previous studies. Moreover, we do not assume that the spatial pattern of corruptions is random, which stands in contrast to related analyses under such assumptions via matrix completion. Index Terms—Matrix decompositions, sparsity, low-rank, outliers
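Several of the abstracts in this listing rely on the same convex program: split an observed matrix into a low-rank part and a sparse part by minimizing a trace (nuclear) norm plus a weighted ℓ1 norm. Below is a minimal numpy sketch of one common solver for that program, an inexact augmented-Lagrangian scheme built from the two proximal operators. Function names, the λ = 1/√max(m,n) weight, and the penalty schedule are illustrative textbook choices, not taken from any one of the papers above.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def sv_threshold(X, tau):
    """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def rpca_alm(M, lam=None, n_iter=100):
    """Split M into low-rank L and sparse S by an inexact augmented-Lagrangian
    scheme for  min ||L||_* + lam * ||S||_1  subject to  M = L + S."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))    # standard weight for this program
    mu = 1.25 / np.linalg.norm(M, 2)      # initial penalty (spectral norm of M)
    mu_bar, rho = mu * 1e7, 1.5           # capped geometric penalty growth
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                  # Lagrange multiplier matrix
    for _ in range(n_iter):
        L = sv_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        mu = min(rho * mu, mu_bar)
    return L, S
```

On an easy instance (a rank-1 matrix with a few percent of its entries grossly corrupted), the iteration recovers both components to high accuracy, which is the recovery regime the guarantees above concern.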
Bayesian Robust Principal Component Analysis
, 2010
Abstract

Cited by 42 (4 self)
A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Abstract

Cited by 41 (5 self)
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., ℓ1 norm for sparsity, nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a non-convex formulation of the problem can recover the model from very few measurements, which is on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and non-convex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
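In generic notation (a sketch, not the paper's exact symbols), the two programs this abstract contrasts for a simultaneously sparse and low-rank X₀ observed through y = 𝒜(X₀) can be written as:

```latex
% Combined convex relaxation (shown to be order-wise no better than
% exploiting a single structure):
\min_{X} \ \|X\|_{1} + \lambda \, \|X\|_{*}
\quad \text{subject to} \quad \mathcal{A}(X) = y .

% Non-convex counterpart (succeeds with on the order of the degrees of
% freedom of X_0 many measurements):
\min_{X} \ \operatorname{rank}(X)
\quad \text{subject to} \quad \mathcal{A}(X) = y , \ \ \|X\|_{0} \le s .
```

The order-wise gap between these two is the paper's central claim: no weighting λ of the convex combination closes it.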
Sparse Bayesian methods for low-rank matrix estimation. arXiv:1102.5288v1 [stat.ML]
, 2011
Abstract

Cited by 28 (11 self)
Abstract—Recovery of low-rank matrices has recently seen significant ...
Dynamic anomalography: Tracking network anomalies via sparsity and low rank
, 2013
Abstract

Cited by 24 (10 self)
In the backbone of large-scale networks, origin-to-destination (OD) traffic flows experience abrupt unusual changes known as traffic volume anomalies, which can result in congestion and limit the extent to which end-user quality of service requirements are met. As a means of maintaining seamless end-user experience in dynamic environments, as well as for ensuring network security, this paper deals with a crucial network monitoring task termed dynamic anomalography. Given link traffic measurements (noisy superpositions of unobserved OD flows) periodically acquired by backbone routers, the goal is to construct an estimated map of anomalies in real time, and thus summarize the network ‘health state’ along both the flow and time dimensions. Leveraging the low intrinsic dimensionality of OD flows and the sparse nature of anomalies, a novel online estimator is proposed based on an exponentially-weighted least-squares criterion regularized with the sparsity-promoting ℓ1-norm of the anomalies, and the nuclear norm of the nominal traffic matrix. After recasting the non-separable nuclear norm into a form amenable to online optimization, a real-time algorithm for dynamic anomalography is developed and its convergence established under simplifying technical assumptions. For operational conditions where computational complexity reductions are at a premium, a lightweight stochastic gradient algorithm based on Nesterov’s acceleration technique is developed as well. Comprehensive numerical tests with both synthetic and real network data corroborate the effectiveness of the proposed online algorithms and their tracking capabilities, and demonstrate that they outperform state-of-the-art approaches developed to diagnose traffic anomalies.
Recursive robust PCA or recursive sparse recovery in large but structured noise
 in IEEE Intl. Symp. on Information Theory (ISIT)
, 2013
Cited by 22 (17 self)
Recovery of low-rank plus compressed sparse matrices with application to unveiling traffic anomalies
 IEEE TRANS. INFO. THEORY
, 2013
Abstract

Cited by 21 (5 self)
Given the noiseless superposition of a low-rank matrix plus the product of a known fat compression matrix times a sparse matrix, the goal of this paper is to establish deterministic conditions under which exact recovery of the low-rank and sparse components becomes possible. This fundamental identifiability issue arises with traffic anomaly detection in backbone networks, and subsumes compressed sensing as well as the timely low-rank plus sparse matrix recovery tasks encountered in matrix decomposition problems. Leveraging the ability of the ℓ1 and nuclear norms to recover sparse and low-rank matrices, a convex program is formulated to estimate the unknowns. Analysis and simulations confirm that the said convex program can recover the unknowns for sufficiently low-rank and sparse enough components, along with a compression matrix possessing an isometry property when restricted to operate on sparse vectors. When the low-rank, sparse, and compression matrices are drawn from certain random ensembles, it is established that exact recovery is possible with high probability. First-order algorithms are developed to solve the non-smooth convex optimization problem with provable iteration complexity guarantees. Insightful tests with synthetic and real network data corroborate the effectiveness of the novel approach in unveiling traffic anomalies across flows and time, and its ability to outperform existing alternatives.
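The model in this last abstract differs from plain robust PCA in that the sparse component is observed only through a known fat compression matrix C, i.e. M = L + C S. A hypothetical toy solver for the corresponding convex program min ‖L‖∗ + λ‖S‖₁ s.t. M = L + C S can be sketched with a linearized ADMM step for the S-block (needed because C prevents a closed-form ℓ1 update). All names, λ, and penalty values here are illustrative assumptions, not the paper's algorithm or notation.

```python
import numpy as np

def soft(X, tau):
    """Elementwise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value soft-thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft(s, tau)) @ Vt

def lowrank_plus_compressed_sparse(M, C, lam=0.1, mu=1.0, n_iter=500):
    """Linearized ADMM sketch for  min ||L||_* + lam * ||S||_1
    subject to  M = L + C @ S,  with C a known fat compression matrix."""
    Lc = 1.01 * np.linalg.norm(C, 2) ** 2    # step bound for the S-block
    L = np.zeros_like(M)
    S = np.zeros((C.shape[1], M.shape[1]))
    Y = np.zeros_like(M)                     # Lagrange multiplier matrix
    for _ in range(n_iter):
        L = svt(M - C @ S + Y / mu, 1.0 / mu)
        R = M - L - C @ S + Y / mu           # scaled constraint residual
        S = soft(S + C.T @ R / Lc, lam / (mu * Lc))
        Y = Y + mu * (M - L - C @ S)
    return L, S
```

With fixed penalty mu, the iterates drive the constraint residual M − L − C S toward zero, which is the feasibility behavior the paper's first-order algorithms also guarantee (its actual methods and rates are more refined).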