Results 11 – 20 of 42
A Variational Approach for Sparse Component Estimation and Low-Rank Matrix Recovery
Abstract

Cited by 3 (3 self)
We propose a variational Bayesian-based algorithm for the estimation of the sparse component of an outlier-corrupted low-rank matrix, when linearly transformed composite data are observed. The model constitutes a generalization of robust principal component analysis. The problem considered herein is applicable in various practical scenarios, such as foreground detection in blurred and noisy video sequences and detection of network anomalies, among others. The proposed algorithm models the low-rank matrix and the sparse component using a hierarchical Bayesian framework, and employs a variational approach for inference of the unknowns. The effectiveness of the proposed algorithm is demonstrated using real-life experiments, and its performance improvement over regularization-based approaches is shown.

Index Terms—Bayesian inference, variational approach, robust principal component analysis, foreground detection, network anomaly detection.

This paper is organized as follows. In Section II we present the general data model and several areas of application; a brief overview of the related work in each of these areas is also provided. In Section III we introduce the proposed hierarchical Bayesian model. Details of the variational inference procedure are provided in Section IV. Numerical examples are presented in Section V. Finally, we draw concluding remarks in Section VI.

Notation: Matrices and vectors are denoted by uppercase and lowercase boldface letters, respectively. vec(·), diag(·) and Tr(·) are the vectorization, diagonalization and trace operators, respectively. Given a matrix X, we denote by xi·, x·j and Xij its i-th row, j-th column and (i, j)-th element, respectively.
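The paper's own variational updates are not reproduced here. As a point of reference, the regularization-based baseline such methods are compared against, principal component pursuit solved by inexact augmented Lagrangian iterations, can be sketched in a few lines; the function names and the λ and μ defaults below are illustrative common choices, not taken from the paper.

```python
import numpy as np

def shrink(X, tau):
    # soft thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular-value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(Y, n_iter=300):
    """Split Y into low-rank L plus sparse S via inexact augmented
    Lagrangian iterations on the principal component pursuit objective."""
    m, n = Y.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    mu = 0.25 * m * n / np.abs(Y).sum()   # common initial penalty
    S = np.zeros_like(Y)
    Z = np.zeros_like(Y)                  # dual variable
    for _ in range(n_iter):
        L = svt(Y - S + Z / mu, 1.0 / mu)
        S = shrink(Y - L + Z / mu, lam / mu)
        Z = Z + mu * (Y - L - S)
        mu = min(mu * 1.05, 1e7)          # slowly tighten the penalty
    return L, S
```

On a matrix that is exactly low rank plus a few large sparse outliers, the two components are typically separated almost exactly, which is the behavior the Bayesian approaches above aim to improve upon in harder regimes.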
Sparse additive matrix factorization for robust PCA and its generalization
 In Proceedings of the Fourth Asian Conference on Machine Learning
Abstract

Cited by 2 (2 self)
Principal component analysis (PCA) can be regarded as approximating a data matrix with a low-rank one by imposing sparsity on its singular values, and its robust variant further captures sparse noise. In this paper, we extend such sparse matrix learning methods and propose a novel unified framework called sparse additive matrix factorization (SAMF). SAMF systematically induces various types of sparsity via the so-called model-induced regularization in the Bayesian framework. We propose an iterative algorithm called the mean update (MU) for the variational Bayesian approximation to SAMF, which gives the globally optimal solution for a large subset of the parameters in each step. We demonstrate the usefulness of our method on artificial data and on foreground/background video separation.
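The opening observation, that PCA imposes sparsity on the singular values, corresponds to the classical Eckart–Young result: the best rank-k approximation keeps only the k largest singular values and zeroes the rest. A minimal sketch (function name illustrative):

```python
import numpy as np

def low_rank_approx(X, k):
    """Best rank-k approximation of X in Frobenius norm: zero out all
    but the k largest singular values (sparsity on the spectrum)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[k:] = 0.0
    return (U * s) @ Vt
```

The residual error equals the energy in the discarded singular values, which is what makes the "sparse spectrum" view of PCA precise.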
Pushing the limits of affine rank minimization by adapting probabilistic PCA.
 In Int. Conf., 2015
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive, and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equals the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains elusive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions whereby the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this property. The algorithm has also been successfully deployed on a computer vision application involving image rectification and on a standard collaborative filtering benchmark.
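The paper's probabilistic PCA-like algorithm itself is not shown here; the nuclear-norm baseline it is measured against can be illustrated with a soft-impute style completion loop (τ and the iteration count are illustrative defaults). For context, the degrees of freedom mentioned above are r(m + n - r) for an m x n matrix of rank r, usually far fewer than the observed entries.

```python
import numpy as np

def soft_impute(Y, mask, tau=0.5, n_iter=500):
    """Fill the unobserved entries of Y (mask is True where observed)
    by alternating singular-value shrinkage with re-imposition of the
    observed data; a standard nuclear-norm completion baseline."""
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink the spectrum
        X = np.where(mask, Y, Z)                  # keep observed entries
    return X
```

With a genuinely low-rank target and enough random observations, the missing entries are recovered to high accuracy; the regimes where this baseline breaks down are exactly those the abstract targets.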
Exploring Algorithmic Limits of Matrix Rank Minimization under Affine Constraints
Robust Principal Component Analysis with Complex Noise
Abstract

Cited by 1 (0 self)
Research on robust principal component analysis (RPCA) has been attracting much attention recently. The original RPCA model assumes sparse noise and uses the L1-norm to characterize the error term. In practice, however, the noise is much more complex, and it is not appropriate to simply use a certain Lp-norm for noise modeling. We propose a generative RPCA model under the Bayesian framework by modeling the data noise as a mixture of Gaussians (MoG). The MoG is a universal approximator to continuous distributions, and thus our model is able to fit a wide range of noises, such as Laplacian, Gaussian and sparse noise, and any combination of them. A variational Bayes algorithm is presented to infer the posterior of the proposed model. All involved parameters can be recursively updated in closed form. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and background subtraction.
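As a toy illustration of the MoG noise idea (the paper's variational Bayes updates are not shown), plain EM for a two-component 1-D Gaussian mixture looks like the following; the initialization and iteration count are illustrative choices.

```python
import numpy as np

def fit_mog(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture fitted to residuals x.
    A simplified stand-in for the MoG noise model described above."""
    mu = np.array([x.min(), x.max()])   # spread-out initialization
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per point
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return pi, mu, var
```

One component typically latches onto the dense small-variance noise and the other onto the large outliers, which is how an MoG subsumes both Gaussian and sparse error models.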
Sparse additive text models with low rank background
 In Advances in Neural Information Processing Systems, 2013
Abstract

Cited by 1 (0 self)
The sparse additive model for text modeling involves sum-of-exp computations, whose cost becomes prohibitive at large scale. Moreover, the assumption of an equal background across all classes/topics may be too strong. This paper extends the model to a sparse additive model with low-rank background (SAM-LRB) and obtains a simple yet efficient estimation procedure. In particular, employing a double majorization bound, we approximate the log-likelihood by a quadratic lower bound without the log-sum-exp terms. The constraints of low rank and sparsity are then simply embodied by nuclear-norm and ℓ1-norm regularizers. Interestingly, we find that the optimization task of SAM-LRB can be transformed into the same form as in robust PCA. Consequently, the parameters of supervised SAM-LRB can be efficiently learned using an existing algorithm for robust PCA based on accelerated proximal gradient. Beyond the supervised case, we extend SAM-LRB to unsupervised and multifaceted scenarios. Experiments on three real datasets demonstrate the effectiveness and efficiency of SAM-LRB compared with a few state-of-the-art models.
Automatic recognition of offensive team formation in american football plays
 In CVPR Workshops (CVsports)
Abstract

Cited by 1 (0 self)
Compared to security surveillance and military applications, where automated action analysis is prevalent, the sports domain is extremely underserved. Most existing software packages for sports video analysis require manual annotation of important events in the video. American football is the most popular sport in the United States; however, most game analysis is still done manually. The line of scrimmage and the offensive team formation are two statistics that must be tagged by American football coaches when watching and evaluating past play video clips, a process which takes many man-hours per week. These two statistics are also the building blocks for more high-level analysis such as play strategy inference and automatic statistic generation. In this paper, we propose a novel framework in which, given an American football play clip, we automatically identify the video frame in which the offensive team lines up in formation (the formation frame), the line of scrimmage for that play, and the type of formation the offensive team takes on. The proposed framework achieves 95% accuracy in detecting the formation frame, 98% accuracy in detecting the line of scrimmage, and up to 67% accuracy in classifying the offensive team's formation. To validate our framework, we compiled a large dataset comprising more than 800 play clips of standard- and high-definition resolution from real-world football games. This dataset will be made publicly available for future comparison.
ROBUST SPECTRAL UNMIXING FOR ANOMALY DETECTION
Abstract

Cited by 1 (0 self)
This paper is concerned with a joint Bayesian formulation for determining the endmembers and abundances of hyperspectral images along with sparse outliers, which can lead to estimation errors unless accounted for. We present an inference method that generalizes previous work and provides an MCMC estimate of the posterior distribution. The proposed method is compared empirically to state-of-the-art algorithms, showing lower reconstruction and detection errors.
A Pseudo-Bayesian Algorithm for Robust PCA
Abstract
Commonly used in many applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers. The basic idea is to learn a decomposition of some data matrix of interest into low-rank and sparse components, the latter representing unwanted outliers. Although the resulting problem is typically NP-hard, convex relaxations provide a computationally expedient alternative with theoretical support. However, in practical regimes these performance guarantees break down, and a variety of non-convex alternatives, including Bayesian-inspired models, have been proposed to boost estimation quality. Unfortunately, though, without additional a priori knowledge none of these methods can significantly expand the critical operational range such that exact principal subspace recovery is possible. Into this mix we propose a novel pseudo-Bayesian algorithm that explicitly compensates for design weaknesses in many existing non-convex approaches, leading to state-of-the-art performance with a sound analytical foundation.