Results 1–10 of 55
Fixed-rank representation for unsupervised visual learning
Cited by 14 (1 self)
Subspace clustering and feature extraction are two of the most commonly used unsupervised learning techniques in computer vision and pattern recognition. State-of-the-art techniques for subspace clustering make use of recent advances in sparsity and rank minimization. However, existing techniques are computationally expensive and may result in degenerate solutions that degrade clustering performance in the case of insufficient data sampling. To partially solve these problems, and inspired by existing work on matrix factorization, this paper proposes fixed-rank representation (FRR) as a unified framework for unsupervised visual learning. FRR is able to reveal the structure of multiple subspaces in closed form when the data is noiseless. Furthermore, we prove that under some suitable conditions, even with insufficient observations, FRR can still reveal the true subspace memberships. To achieve robustness to outliers and noise, a sparse regularizer is introduced into the FRR framework. Beyond subspace clustering, FRR can be used for unsupervised feature extraction. As a non-trivial byproduct, a fast numerical solver is developed for FRR. Experimental results on both synthetic data and real applications validate our theoretical analysis and demonstrate the benefits of FRR for unsupervised visual learning.
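To illustrate the closed-form claim: methods in this family typically build the noiseless representation from a skinny SVD of the data. A minimal sketch, assuming the classical rank-r shape-interaction construction Z = VᵣVᵣᵀ (which may differ from the paper's exact FRR formula):

```python
import numpy as np

def fixed_rank_representation(X, r):
    """Sketch of a closed-form fixed-rank representation:
    Z = V_r V_r^T, where X = U S V^T and V_r holds the top-r right
    singular vectors.  This is the classical shape-interaction
    construction, used here as a stand-in for the paper's FRR."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Vr = Vt[:r].T                # n x r
    return Vr @ Vr.T             # n x n representation matrix

# Toy data: columns drawn from two independent 1-D subspaces in R^3.
rng = np.random.default_rng(0)
X = np.hstack([np.outer([1.0, 0.0, 0.0], rng.standard_normal(5)),
               np.outer([0.0, 1.0, 0.0], rng.standard_normal(5))])
Z = fixed_rank_representation(X, r=2)
# Z is block diagonal: nonzero only between points of the same subspace.
```

For noiseless independent subspaces, Z is exactly block diagonal, so an affinity such as |Z| + |Zᵀ| directly reveals subspace membership.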
Repairing Sparse Low-Rank Texture
Cited by 7 (0 self)
In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method leverages the new convex optimization for low-rank and sparse signal recovery and can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. Through extensive simulations, we show our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Through experimental comparisons with existing image completion systems (such as Photoshop), our method demonstrates a significant advantage over local patch-based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
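The "convex optimization for low-rank and sparse signal recovery" the abstract refers to is commonly formulated as principal component pursuit, min ‖L‖* + λ‖S‖₁ s.t. L + S = M. A minimal inexact-ALM-style solver as a generic sketch (parameter choices are common defaults, not the paper's implementation):

```python
import numpy as np

def soft(x, t):
    """Entrywise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca(M, lam=None, iters=500, tol=1e-7):
    """Principal component pursuit via an inexact augmented Lagrangian:
        min ||L||_* + lam * ||S||_1   s.t.   L + S = M.
    A generic sketch of low-rank + sparse recovery, not the paper's
    texture-repair solver."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(M, 2)        # common initial penalty
    Y = np.zeros_like(M)                    # Lagrange multiplier
    S = np.zeros_like(M)
    normM = np.linalg.norm(M)
    for _ in range(iters):
        # L-update: singular value thresholding of M - S + Y/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # S-update: entrywise soft thresholding
        S = soft(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y = Y + mu * R
        mu = min(mu * 1.2, 1e7)             # gradually tighten the penalty
        if np.linalg.norm(R) / normM < tol:
            break
    return L, S
```

In the texture setting, M would hold the (rectified) texture window, L the repaired low-rank texture, and S the sparse corruption.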
Noisy Depth Maps Fusion for Multi-view Stereo via Matrix Completion
2011
Cited by 6 (4 self)
This paper introduces a general framework to fuse noisy point clouds from multi-view images of the same object. We solve this classical vision problem using a newly emerging signal processing technique known as matrix completion. With this framework, we construct the initial incomplete matrix from the point clouds observed by all the cameras, with the points invisible to any camera denoted as unknown entries. The observed points corresponding to the same object point are put into the same row. When properly completed, the recovered matrix should have rank one, since all the columns describe the same object. Therefore, an intuitive approach to complete the matrix is to minimize its rank subject to consistency with the observed entries. To improve the fusion accuracy, we propose a general noisy matrix completion method called Log-sum Penalty Completion (LPC), which is particularly effective in removing outliers. Based on the Majorization-Minimization (MM) algorithm, the non-convex LPC problem is effectively solved by a sequence of convex optimizations. Experimental results on both point cloud fusion and MVS reconstructions verify the effectiveness of the proposed framework and the LPC algorithm.
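The MM treatment of a log-sum rank penalty leads to iteratively reweighted singular value shrinkage: each step shrinks with weights 1/(σ + ε), so small (outlier/noise) singular values are shrunk hard while large ones are nearly untouched. A sketch with illustrative parameters (τ, ε, and the simple data-consistency step are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def lpc_complete(M, mask, eps=1e-2, tau=1.0, iters=300):
    """Matrix completion with a log-sum surrogate for the rank via
    majorization-minimization: weighted singular value shrinkage with
    weights 1/(sigma + eps), alternated with re-imposing the observed
    entries.  A sketch of the LPC idea, not the paper's solver."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(X, full_matrices=False)
        w = 1.0 / (sig + eps)                  # MM weights from current iterate
        sig = np.maximum(sig - tau * w, 0.0)   # weighted shrinkage
        X = (U * sig) @ Vt
        X = np.where(mask, M, X)               # data consistency on observed entries
    return X
```

Compared with plain nuclear-norm shrinkage (constant weight), the reweighting reduces the bias on the large singular value that carries the rank-one fused point cloud.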
Learning Structured Low-rank Representations for Image Classification
Cited by 6 (1 self)
An approach to learn a structured low-rank representation for image classification is presented. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is well suited to classification tasks, even with a simple linear multi-classifier. Experimental results demonstrate the effectiveness of our approach.
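The pipeline (low-rank recovery of contaminated training data, then a simple linear multi-classifier) can be sketched with stand-ins: truncated SVD in place of the paper's regularized low-rank recovery, and ridge least squares as the linear classifier. Both substitutions are assumptions for illustration only:

```python
import numpy as np

def lowrank_denoise(X, r):
    """Stand-in for low-rank matrix recovery: keep the top-r singular
    components of the (contaminated) training matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def train_linear(X, Y, reg=1e-3):
    """A 'simple linear multi-classifier': ridge least squares mapping
    features (columns of X, d x n) to one-hot labels Y (n x k).
    Scores for new data are X_new.T @ W."""
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + reg * np.eye(d), X @ Y)

# Toy run: two well-separated classes with additive noise.
rng = np.random.default_rng(3)
n = 40
X = np.hstack([0.3 * rng.standard_normal((5, n)) + np.array([[3.0], [0], [0], [0], [0]]),
               0.3 * rng.standard_normal((5, n)) + np.array([[-3.0], [0], [0], [0], [0]])])
Y = np.repeat(np.eye(2), n, axis=0)            # one-hot labels, 2n x 2
W = train_linear(lowrank_denoise(X, r=2), Y)
acc = (np.argmax(X.T @ W, axis=1) == np.repeat([0, 1], n)).mean()
```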
Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning
In ACML, 2013
Cited by 6 (3 self)
Many problems in statistics and machine learning (e.g., probabilistic graphical models, feature extraction, clustering, and classification) can be (re)formulated as linearly constrained separable convex programs. The traditional alternating direction method (ADM) and its linearized version (LADM) handle the two-variable case and cannot be naively generalized to the multi-variable case. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-variable separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, we devise a practical version of LADMPSAP for faster convergence. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed-form solutions and the sparsity and low-rankness of the iterates can be preserved during the iterations. It is also highly parallelizable and hence well suited to parallel or distributed computing. Numerical experiments testify to the speed and accuracy advantages of LADMPSAP.
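The parallel-splitting idea can be shown on a tiny two-block program: both blocks are updated in parallel from the same multiplier estimate, with a growing penalty. The geometric penalty schedule below is a simple stand-in for the paper's adaptive rule, and the toy problem is chosen for clarity:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ladmpsap_demo(b, iters=1000, beta=1.0, rho=1.05, beta_max=1e3, eta=2.5):
    """LADMPSAP-style iteration on the two-block separable program
        min ||x||_1 + ||y||_1   s.t.   x + y = b.
    Both proximal updates use the SAME multiplier/residual term (parallel
    splitting); eta must exceed (number of blocks) * ||A_i||^2 = 2."""
    x = np.zeros_like(b)
    y = np.zeros_like(b)
    lam = np.zeros_like(b)
    for _ in range(iters):
        u = lam + beta * (x + y - b)                            # shared term
        x_new = soft(x - u / (eta * beta), 1.0 / (eta * beta))  # block 1, in parallel
        y_new = soft(y - u / (eta * beta), 1.0 / (eta * beta))  # block 2, in parallel
        x, y = x_new, y_new
        lam = lam + beta * (x + y - b)                          # multiplier ascent
        beta = min(beta * rho, beta_max)                        # penalty schedule
    return x, y
```

Because both subproblems are soft thresholdings (closed form), the iteration preserves sparsity of the iterates, which is the point the abstract makes for sparse and low-rank problems.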
Riemannian Pursuit for Big Matrix Recovery
Cited by 5 (1 self)
Low-rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions, and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices.
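The "sequence of fixed-rank problems of increasing size" strategy can be sketched with alternating least squares as the inner fixed-rank solver (a stand-in for the paper's Riemannian conjugate gradient) and a naive rank-increment loop (RP's actual rank rule and warm-starting are more principled):

```python
import numpy as np

def fixed_rank_als(M, mask, r, inner=100, reg=1e-9):
    """Inner solver: alternating least squares on X = U V^T with rank
    fixed at r -- a simple stand-in for the paper's nonlinear Riemannian
    conjugate gradient on the fixed-rank manifold."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(inner):
        for j in range(n):                       # update rows of V
            obs = mask[:, j]
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + reg * np.eye(r), Uo.T @ M[obs, j])
        for i in range(m):                       # update rows of U
            obs = mask[i]
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + reg * np.eye(r), Vo.T @ M[i, obs])
    return U, V

def riemannian_pursuit_sketch(M, mask, r_max, tol=1e-4):
    """Outer loop: solve fixed-rank problems of increasing rank and stop
    once the observed-entry residual is small (a sketch of RP's
    rank-increasing strategy, not the paper's algorithm)."""
    norm_obs = np.linalg.norm(M[mask])
    X = None
    for r in range(1, r_max + 1):
        U, V = fixed_rank_als(M, mask, r)
        X = U @ V.T
        if np.linalg.norm((M - X)[mask]) <= tol * norm_obs:
            break                                # rank r explains the data
    return X
```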
Provable Subspace Clustering: When LRR meets SSC
Cited by 5 (3 self)
Sparse Subspace Clustering (SSC) and Low-Rank Representation (LRR) are both considered state-of-the-art methods for subspace clustering. The two methods are fundamentally similar in that both are convex optimizations exploiting the intuition of “Self-Expressiveness”. The main difference is that SSC minimizes the vector ℓ1 norm of the representation matrix to induce sparsity, while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees for when the algorithm succeeds. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can combine the advantages of both methods in preserving the “Self-Expressiveness Property” and “Graph Connectivity” at the same time.
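The combined program described in the abstract can be written (up to the paper's exact normalization and constraint handling, which may differ) as:

```latex
\min_{C}\; \|C\|_{*} + \lambda\,\|C\|_{1}
\quad \text{s.t.} \quad X = XC,\; \operatorname{diag}(C) = 0,
```

so that large λ pushes toward an SSC-like sparse solution and small λ toward an LRR-like low-rank one; the similarity graph for spectral clustering is then built from |C| + |C|ᵀ.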
Robust Subspace Clustering via Thresholding Ridge Regression
Cited by 3 (1 self)
Given a data set from a union of multiple linear subspaces, a robust subspace clustering algorithm fits each group of data points with a low-dimensional subspace and then clusters these data even though they are grossly corrupted or sampled from the union of dependent subspaces. Under the framework of spectral clustering, recent works using sparse representation, low-rank representation, and their extensions achieve robust clustering results by formulating the errors (e.g., corruptions) into their objective functions so that the errors can be removed from the inputs. However, these approaches suffer from the limitation that the structure of the errors must be known as prior knowledge. In this paper, we present a new method for robust subspace clustering that eliminates the effect of the errors from the projection space (representation) rather than from the input space. We first prove that ℓ1-, ℓ2-, and ℓ∞-norm-based linear projection spaces share the property of intra-subspace projection dominance, i.e., the coefficients over intra-subspace data points are larger than those over inter-subspace data points. Based on this property, we propose a robust and efficient subspace clustering algorithm, called Thresholding Ridge Regression (TRR). TRR calculates the ℓ2-norm-based coefficients of a given data set and applies a hard thresholding operator; the coefficients are then used to build a similarity graph for clustering. Experimental studies show that TRR outperforms state-of-the-art methods with respect to clustering quality, robustness, and speed.
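The TRR steps (ridge coefficients, hard thresholding, similarity graph) are concrete enough to sketch. The thresholding here keeps the k largest coefficients per column; k and λ are illustrative values, not the paper's:

```python
import numpy as np

def trr_affinity(X, lam=0.1, k=3):
    """Sketch of Thresholding Ridge Regression.
    1) Ridge coefficients: C = argmin ||X - XC||_F^2 + lam*||C||_F^2,
       which has the closed form C = (X^T X + lam I)^{-1} X^T X.
    2) Hard thresholding: keep the k largest-magnitude coefficients
       per column (intra-subspace projection dominance makes these
       the same-subspace neighbors).
    3) Symmetric similarity graph |C| + |C|^T for spectral clustering."""
    n = X.shape[1]
    C = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
    np.fill_diagonal(C, 0.0)                 # no self-representation
    for j in range(n):                       # hard thresholding, column-wise
        keep = np.argsort(np.abs(C[:, j]))[-k:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        C[~mask, j] = 0.0
    return np.abs(C) + np.abs(C.T)           # similarity graph
```

On data from orthogonal subspaces, the ridge coefficients are exactly block diagonal, so the resulting graph connects only same-subspace points.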
Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching
Marcelo Fiori
Cited by 3 (1 self)
Graph matching is a challenging problem with very important applications in a wide range of fields, from image and video analysis to biological and biomedical problems. We propose a robust graph matching algorithm inspired by sparsity-related techniques. We cast the problem, resembling group or collaborative sparsity formulations, as a non-smooth convex optimization problem that can be efficiently solved using augmented Lagrangian techniques. The method can deal with weighted or unweighted graphs, as well as multimodal data, where different graphs represent different types of data. The proposed approach is also naturally integrated with collaborative graph inference techniques, solving general network inference problems where the observed variables, possibly coming from different modalities, are not in correspondence. The algorithm is tested and compared with state-of-the-art graph matching techniques on both synthetic and real graphs. We also present results on multimodal graphs and applications to collaborative inference of brain connectivity from alignment-free functional magnetic resonance imaging (fMRI) data. The code is publicly available.
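The usual relaxation behind such methods optimizes over (approximately) doubly stochastic matrices and then rounds to a permutation. A simplified stand-in (projected gradient on ‖PA − BP‖², Sinkhorn-style renormalization, greedy rounding) rather than the paper's sparsity-promoting augmented Lagrangian program:

```python
import numpy as np

def round_to_permutation(P):
    """Greedy rounding of a near-permutation matrix (the Hungarian
    algorithm would be the principled choice)."""
    n = P.shape[0]
    perm = np.zeros_like(P)
    Q = P.copy()
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(Q), Q.shape)
        perm[i, j] = 1.0
        Q[i, :] = -np.inf                  # row i is used
        Q[:, j] = -np.inf                  # column j is used
    return perm

def match_graphs(A, B, iters=200, lr=0.05):
    """Relaxed graph matching sketch: minimize ||P A - B P||_F^2 over
    (approximately) doubly stochastic P by projected gradient, then
    round.  A simplified stand-in for the paper's convex program."""
    n = A.shape[0]
    P = np.full((n, n), 1.0 / n)
    for _ in range(iters):
        R = P @ A - B @ P
        G = R @ A.T - B.T @ R              # gradient of 0.5*||P A - B P||^2
        P = np.clip(P - lr * G, 1e-12, None)
        for _ in range(5):                 # a few Sinkhorn normalizations
            P = P / P.sum(axis=1, keepdims=True)
            P = P / P.sum(axis=0, keepdims=True)
    return round_to_permutation(P)
```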
A Nearly Unbiased Matrix Completion Approach
Cited by 3 (1 self)
Low-rank matrix completion is an important theme both theoretically and practically. However, state-of-the-art methods based on convex optimization usually lead to a certain amount of deviation from the original matrix. To perfectly recover a data matrix from a sampling of its entries, we consider a non-convex alternative to approximate the matrix rank. In particular, we minimize a matrix γ-norm under a set of linear constraints. Accordingly, we derive a shrinkage operator, which is nearly unbiased in comparison with the well-known soft shrinkage operator. Furthermore, we devise two algorithms, non-convex soft imputation (NCSI) and non-convex alternating direction method of multipliers (NCADMM), to carry out the numerical estimation. Experimental results show that these algorithms outperform existing matrix completion methods in accuracy. Moreover, NCADMM is as efficient as the current state-of-the-art algorithms.
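The bias the abstract refers to is easy to see in the scalar shrinkage: soft thresholding subtracts t from every surviving value, while nearly unbiased non-convex operators leave large values untouched. Since the abstract does not reproduce the γ-norm operator itself, "firm" shrinkage (the MCP-style operator) is used below as a standard analogue:

```python
import numpy as np

def soft_shrink(x, t):
    """Soft shrinkage: every surviving entry is biased toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def firm_shrink(x, t, gamma=3.0):
    """'Firm' shrinkage, a standard nearly unbiased alternative:
    zero below t, identity above gamma*t, linear interpolation between.
    An analogue of the paper's gamma-norm operator, not its exact form."""
    a = np.abs(x)
    return np.where(
        a <= t, 0.0,
        np.where(a >= gamma * t,
                 x,                                      # large values pass unchanged
                 np.sign(x) * gamma * (a - t) / (gamma - 1.0)))
```

Applied to singular values inside a completion loop, the unbiased tail is what lets the recovered matrix match the original instead of a uniformly shrunken copy.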