Results 1–10 of 17
Rank/Norm Regularization with Closed-Form Solutions: Application to Subspace Clustering
Abstract

Cited by 6 (1 self)
When data is sampled from an unknown subspace, principal component analysis (PCA) provides an effective way to estimate the subspace and hence reduce the dimension of the data. At the heart of PCA is the Eckart–Young–Mirsky theorem, which characterizes the best rank-k approximation of a matrix. In this paper, we prove a generalization of the Eckart–Young–Mirsky theorem under all unitarily invariant norms. Using this result, we obtain closed-form solutions for a set of rank/norm regularized problems, and derive closed-form solutions for a general class of subspace clustering problems (where data is modelled by unions of unknown subspaces). From these results we obtain new theoretical insights and promising experimental results.
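The Eckart–Young–Mirsky theorem the abstract builds on can be sketched in a few lines of numpy (an illustration, not code from the paper): truncating the singular value decomposition after the k largest singular values yields the best rank-k approximation in the Frobenius norm.

```python
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A (Frobenius norm), via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
B = best_rank_k(A, 2)
# B has rank at most 2; by Eckart-Young-Mirsky, the approximation error
# equals the norm of the discarded singular values.
assert np.linalg.matrix_rank(B) <= 2
```

The same truncation is optimal under every unitarily invariant norm, which is the starting point for the paper's generalization.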
Optimal CUR matrix decompositions
 In Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC)
, 2014
Towards a theory of generic Principal Component Analysis
 J. MULTIVARIATE ANALYSIS
Abstract

Cited by 4 (0 self)
In this paper, we consider a technique called generic Principal Component Analysis (PCA), which is based on an extension and rigorous justification of the standard PCA. The generic PCA is treated as the best weighted linear estimator of a given rank under the condition that the associated covariance matrix is singular. As a result, the generic PCA is constructed in terms of pseudoinverse matrices, which requires the development of a special technique. In particular, we give a solution of the new low-rank matrix approximation problem that provides a basis for the generic PCA. Theoretical aspects of the generic PCA are carefully studied.
Fast low rank approximations of matrices and tensors
, 2007
Abstract

Cited by 3 (1 self)
In many applications, it is of interest to approximate data, given by an m × n matrix A, by a matrix B of rank at most k, which is much smaller than m and n. The best approximation is given by the singular value decomposition, which is too time consuming for very large m and n. We present here an optimal least squares algorithm for computing a rank-k approximation to an m × n matrix A by reading only a number of rows and columns of A. The algorithm allows one to update the rank-k approximation as additional rows and columns of A are read. Furthermore, the algorithm also applies to tensors.
An Efficient Approach for Computing Optimal LowRank Regularized Inverse Matrices
, 2014
Abstract

Cited by 3 (1 self)
Standard regularization methods that are used to compute solutions to ill-posed inverse problems require knowledge of the forward model. In many real-life applications, the forward model is not known, but training data is readily available. In this paper, we develop a new framework that uses training data, as a substitute for knowledge of the forward model, to compute an optimal low-rank regularized inverse matrix directly, allowing for very fast computation of a regularized solution. We consider a statistical framework based on Bayes and empirical Bayes risk minimization to analyze theoretical properties of the problem. We propose an efficient rank update approach for computing an optimal low-rank regularized inverse matrix for various error measures. Numerical experiments demonstrate the benefits and potential applications of our approach to problems in signal and image processing.
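The setup can be pictured with a hedged two-step baseline (names, shapes, and the truncation strategy are assumptions for illustration; the paper derives the truly optimal low-rank inverse, which this simple fit-then-truncate sketch is not in general): given training signals X and corresponding observations B, fit a reconstruction matrix Z minimizing ||ZB − X||_F by least squares, then truncate it to rank k.

```python
import numpy as np

def low_rank_inverse(X, B, k):
    """Least-squares fit Z with Z @ B ~ X, then truncate Z to rank k via SVD."""
    Z = X @ np.linalg.pinv(B)                  # minimizes ||Z B - X||_F over Z
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]      # keep the k largest modes

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 20))   # hypothetical training signals
B = rng.standard_normal((5, 20))   # hypothetical observations
Z = low_rank_inverse(X, B, k=2)
assert np.linalg.matrix_rank(Z) <= 2
```

Once Z is computed offline from training data, each new regularized solution is just one cheap matrix-vector product Z @ b.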
Randomized Approximation of the Gram Matrix: Exact Computation and Probabilistic Bounds
 SIAM J. Matrix Anal. Appl.
Fast Low Rank Approximations of Matrices and Tensors
Abstract

Cited by 2 (0 self)
Abstract. In many applications such as data compression, imaging, or genomic data analysis, it is important to approximate a given m × n matrix A by a matrix B of rank at most k, which is much smaller than m and n. The best rank-k approximation can be determined via the singular value decomposition, which, however, has prohibitively high computational complexity and storage requirements for very large m and n. We present an optimal least squares algorithm for computing a rank-k approximation to an m × n matrix A by reading only a limited number of rows and columns of A. The algorithm has complexity O(k^2 max(m, n)) and allows one to iteratively improve a given rank-k approximation by reading additional rows and columns of A. We also show how this approach can be extended to tensors and present numerical results.
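The idea of approximating A from a few rows and columns can be illustrated with the classical skeleton (cross) approximation, a minimal sketch and not the paper's exact least squares algorithm: with C the chosen columns, R the chosen rows, and W their intersection, A ≈ C W⁺ R, and the recovery is exact when A has rank k and the k chosen rows and columns are independent.

```python
import numpy as np

def skeleton_approx(A, row_idx, col_idx):
    """Cross/skeleton approximation: C @ pinv(W) @ R from sampled rows/columns."""
    C = A[:, col_idx]                 # read a few columns of A
    R = A[row_idx, :]                 # read a few rows of A
    W = A[np.ix_(row_idx, col_idx)]   # their intersection
    return C @ np.linalg.pinv(W) @ R

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 12))  # rank 3
A_hat = skeleton_approx(A, row_idx=[0, 1, 2], col_idx=[0, 1, 2])
# For an exactly rank-3 matrix, 3 independent rows/columns recover A exactly.
assert np.allclose(A, A_hat)
```

Only 3 of the 10 rows and 3 of the 12 columns of A are ever read, which is the point of such algorithms when A is too large to touch entirely.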
A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank
Abstract

Cited by 1 (0 self)
Abstract. Rank minimization has attracted a lot of attention due to its robustness in data recovery. To overcome the computational difficulty, the rank is often replaced with the nuclear norm. For several rank minimization problems, such a replacement has been theoretically proven to be valid, i.e., the solution to the nuclear norm minimization problem is also the solution to the rank minimization problem. Although it is easy to believe that such a replacement may not always be valid, no concrete example had ever been found. We argue that such validity checking cannot be done by numerical computation and show, by analyzing the noiseless latent low-rank representation (LatLRR) model, that even for very simple rank minimization problems the validity may break down. As a byproduct, we find that the solution to the nuclear norm minimization formulation of LatLRR is non-unique. Hence the results of LatLRR reported in the literature may be questionable.
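The two objectives the abstract contrasts are easy to state concretely (an illustration only, not the paper's counterexample): rank(A) counts the nonzero singular values, while the nuclear norm ||A||_* sums them; the latter is convex, which is why it is used as the surrogate.

```python
import numpy as np

def rank_and_nuclear_norm(A, tol=1e-10):
    """Return (rank, nuclear norm) of A from its singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return int((s > tol).sum()), float(s.sum())  # count vs. sum of sing. values

A = np.diag([3.0, 1.0, 0.0])
r, nn = rank_and_nuclear_norm(A)
# rank counts {3, 1}: r == 2; nuclear norm sums them: nn == 4.0
assert r == 2 and np.isclose(nn, 4.0)
```

Because the two functions weight singular values differently, their minimizers need not coincide, which is exactly the gap the paper exhibits for LatLRR.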
Information Technology, Engineering and the Environment: Index
, 2013
Abstract
Complex human and technological issues, integrated and practical environmentally sustainable solutions, sustainable development in the natural and built environments
3-Tensor Space F^{m1×m2×m3}
Abstract
• Algebraic geometry & tensor rank
• Maximal tensor rank
• Maximal & generic rank upper estimates
• Theoretical bounds & explanation