Results 1–10 of 43
RASL: Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images
, 2010
Abstract

Cited by 161 (6 self)
This paper studies the problem of simultaneously aligning a batch of linearly correlated images despite gross corruption (such as occlusion). Our method seeks an optimal set of image domain transformations such that the matrix of transformed images can be decomposed as the sum of a sparse matrix of errors and a low-rank matrix of recovered aligned images. We reduce this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of the ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence. We verify the efficacy of the proposed robust alignment algorithm with extensive experiments on both controlled and uncontrolled real data, demonstrating higher accuracy and efficiency than existing methods over a wide range of realistic misalignments and corruptions.
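The convex program this abstract describes, splitting a data matrix into a sparse error term plus a low-rank term by minimizing an ℓ1-norm plus a nuclear norm, can be sketched as a toy inexact-ALM iteration. This sketch covers only the fixed-alignment decomposition step (RASL's alternation over image domain transformations is not shown), and the fixed penalty `mu` and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: elementwise shrinkage."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def rpca_alm(D, lam=None, mu=1.0, iters=200):
    """Toy augmented-Lagrangian split of D into low-rank A plus sparse E,
    i.e. min ||A||_* + lam * ||E||_1  subject to  A + E = D."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common default weighting
    A, E, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(iters):
        A = svt(D - E + Y / mu, 1.0 / mu)                 # low-rank update
        E = soft_threshold(D - A + Y / mu, lam / mu)      # sparse update
        Y = Y + mu * (D - A - E)                          # dual ascent
    return A, E
```

Each iteration applies exactly the two proximal operators the abstract's objective calls for: singular value thresholding for the nuclear norm and elementwise shrinkage for the ℓ1-norm.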
Sparse Bayesian methods for low-rank matrix estimation. arXiv:1102.5288v1 [stat.ML]
, 2011
Abstract

Cited by 28 (11 self)
Recovery of low-rank matrices has recently seen significant …
Fast Singular Value Thresholding without Singular Value Decomposition
Abstract

Cited by 13 (2 self)
Singular value thresholding (SVT) is a basic subroutine in many popular numerical schemes for solving the nuclear norm minimization that arises from low-rank matrix recovery problems such as matrix completion. The conventional approach for SVT is first to find the singular value decomposition (SVD) and then to shrink the singular values. However, such an approach is time-consuming under some circumstances, especially when the rank of the resulting matrix is not significantly low compared to its dimension. In this paper, we propose a fast algorithm for directly computing SVT for general dense matrices without using SVDs. Our algorithm is based on matrix Newton iteration for matrix functions, and the convergence is theoretically guaranteed. Numerical experiments show that our proposed algorithm is more efficient than the SVD-based approaches for general dense matrices.
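For contrast with the SVD-free method this paper proposes, the conventional SVD-based SVT baseline that the abstract refers to can be sketched in a few lines (a minimal NumPy sketch; `tau` is the nuclear-norm proximal parameter):

```python
import numpy as np

def svt_via_svd(X, tau):
    """Conventional SVT: compute a full SVD, then soft-shrink the
    singular values. This is the baseline the paper replaces with a
    matrix Newton iteration."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # singular values at or below tau vanish
    return U @ np.diag(s_shrunk) @ Vt
```

The cost here is dominated by the full SVD, roughly O(mn·min(m, n)) for an m×n matrix; when the thresholded matrix is not of very low rank relative to its dimension, that is precisely the expense the paper's SVD-free algorithm targets.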
A Two-Stage Reconstruction Approach for Seeing Through Water
Abstract

Cited by 10 (2 self)
Several attempts have lately been made to tackle the problem of recovering the original image of an underwater scene from a sequence distorted by water waves. The main drawback of the state of the art [18] is that it depends heavily on modelling the waves, which is in fact ill-posed: the actual behavior of the waves, along with the imaging process, is complicated and includes several noise components, so the resulting reconstructions are not satisfactory. In this paper, we revisit the problem by proposing a data-driven two-stage approach in which each stage targets a certain type of noise. The first stage leverages the temporal mean of the sequence to overcome the structured turbulence of the waves through an iterative robust registration algorithm. The result of the first stage is a high-quality mean and a better-structured sequence; however, the sequence still contains unstructured sparse noise. We therefore employ a second stage in which we extract the sparse errors from the sequence through rank minimization. Our method converges faster and drastically outperforms the state of the art on all testing sequences, even after only the first stage.
Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning
 In ACML
, 2013
Abstract

Cited by 6 (3 self)
Many problems in statistics and machine learning (e.g., probabilistic graphical models, feature extraction, clustering, and classification) can be (re)formulated as linearly constrained separable convex programs. The traditional alternating direction method (ADM) and its linearized version (LADM) handle the two-variable case and cannot be naively generalized to the multi-variable case. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-variable separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, we devise a practical version of LADMPSAP for faster convergence. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed-form solutions and the sparsity and low-rankness of the iterates can be preserved during the iteration. It is also highly parallelizable and hence well suited to parallel or distributed computing. Numerical experiments testify to the speed and accuracy advantages of LADMPSAP.
Fast algorithms for structured robust principal component analysis
 In 2012 IEEE CVPR
Abstract

Cited by 4 (2 self)
A large number of problems arising in computer vision can be reduced to minimizing the nuclear norm of a matrix subject to additional structural and sparsity constraints on its elements. Relevant applications include, among others, robust tracking in the presence of outliers, manifold embedding, event detection, inpainting, and tracklet matching across occlusion. In principle, these problems can be reduced to a convex semidefinite optimization form and solved using interior point methods. However, the poor scaling properties of these methods limit this approach to relatively small problems. The main result of this paper shows that structured nuclear norm minimization problems can be efficiently solved using an iterative augmented Lagrangian (ALM) method that requires only a combination of matrix thresholding and matrix inversion steps at each iteration. As we illustrate in the paper with several examples, the proposed algorithm yields a substantial reduction in computational time and memory requirements compared with interior-point methods, opening up the possibility of solving realistic, large-sized problems.
Strongly convex programming for exact matrix completion and robust principal component analysis
 IMAGING
Abstract

Cited by 4 (3 self)
The common task in matrix completion (MC) and robust principal component analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These problems have recently gained great attention in various areas of applied science, especially after the publication of the pioneering works of Candès et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex optimizations lead to exact low-rank matrix recovery under suitable conditions. In this paper, we extend this result by showing that strongly convex optimizations can guarantee exact low-rank matrix recovery as well. The result not only provides sufficient conditions under which strongly convex models lead to exact low-rank matrix recovery, but also guides the choice of suitable parameters in practical algorithms.
A Cyclic Weighted Median Method for L1 Low-Rank Matrix Factorization with Missing Entries
, 2013
Abstract

Cited by 3 (2 self)
A challenging problem in machine learning, information retrieval, and computer vision research is how to recover a low-rank representation of given data in the presence of outliers and missing entries. L1-norm low-rank matrix factorization (LRMF) has been a popular approach to this problem. However, L1-norm LRMF is difficult to solve due to its non-convexity and non-smoothness, and existing methods are often inefficient and fail to converge to a desired solution. In this paper we propose a novel cyclic weighted median (CWM) method, which is intrinsically a coordinate descent algorithm, for L1-norm LRMF. The CWM method minimizes the objective by solving a sequence of scalar minimization subproblems, each of which is convex and can be easily solved by the weighted median filter. Extensive experimental results validate that the CWM method outperforms the state of the art in terms of both accuracy and computational efficiency.
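The scalar subproblems this abstract mentions are weighted one-dimensional ℓ1 problems, each solved exactly by a weighted median. A minimal sketch of that inner solver (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def weighted_median(values, weights):
    """Return a minimizer of sum_i weights[i] * |values[i] - x|:
    the value at which the cumulative weight first reaches half
    the total weight."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    csum = np.cumsum(w)
    idx = np.searchsorted(csum, 0.5 * csum[-1])
    return v[idx]
```

The CWM method then cycles such one-dimensional solves over the entries of the two factor matrices, each cycle decreasing the L1 objective.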
Two-step multitemporal nonlocal means for SAR images,” Geoscience and Remote Sensing
 IEEE Transactions on
, 2014
Abstract

Cited by 3 (0 self)
This paper presents a denoising approach for multitemporal synthetic aperture radar (SAR) images based on the non-local means (NLM) method. To exploit the redundancy in multitemporal images, we develop a new NLM strategy for multitemporal data. Instead of directly extending the NLM operator from a single image to the temporal stack, we propose a two-step weighted average. The first step is a maximum likelihood estimate with binary weights on temporal pixels, and the second step is iterative NLM on spatial pixels. Experiments illustrate that the proposed method can effectively exploit image redundancy and denoise multitemporal images.
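The spatial NLM averaging used in the second step can be sketched for a single pixel as follows. This is an illustrative single-image NLM estimate; the patch/search sizes and smoothing parameter `h` are assumed values, and the paper's binary-weighted temporal first step is not shown:

```python
import numpy as np

def nlm_pixel(img, i, j, patch=3, search=7, h=0.5):
    """Non-local means estimate of pixel (i, j): average of pixels in a
    search window, weighted by Gaussian similarity of surrounding patches."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode="reflect")
    ci, cj = i + r + s, j + r + s          # center in padded coordinates
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            qi, qj = ci + di, cj + dj
            cand = pad[qi - r:qi + r + 1, qj - r:qj + r + 1]
            w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
            num += w * pad[qi, qj]
            den += w
    return num / den
```

The estimate is a convex combination of the pixels in the search window, so pixels whose neighborhoods resemble the reference patch dominate the average; the paper iterates this spatial step after the temporal maximum-likelihood averaging.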
LOW-RANK MATRIX COMPLETION BY VARIATIONAL SPARSE BAYESIAN LEARNING
Abstract

Cited by 2 (0 self)
There has been significant interest in the recovery of low-rank matrices from an incomplete set of measurements, due to both theoretical and practical developments demonstrating the wide applicability of the problem. A number of methods have been developed for this recovery problem; however, a principled method for choosing the unknown target rank is generally missing. In this paper, we present a recovery algorithm based on sparse Bayesian learning (SBL) and automatic relevance determination principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective at determining the correct rank while providing high recovery performance. We provide empirical results and comparisons with current state-of-the-art methods that illustrate the potential of this approach. Index Terms — Low-rank matrix completion, Bayesian methods, automatic relevance determination.