Results 1-5 of 5
Minimal Shrinkage for Noisy Data Recovery Using Schatten p-Norm Objective
Abstract

Cited by 2 (1 self)
Noisy data recovery is an important problem in machine learning, with wide applications in collaborative prediction, recommendation systems, etc. One popular approach is the trace-norm model for noisy data recovery. However, it ignores that the reconstructed data can be shrunk (i.e., singular values can be greatly suppressed). In this paper, we present novel noisy data recovery models, which replace the standard rank surrogate (the trace norm) with the Schatten p-norm. The proposed model is attractive because it suppresses the shrinkage of singular values for smaller values of the parameter p. We analyze the optimal solutions of the proposed models and characterize their rank. Efficient algorithms are presented, and their convergence is rigorously proved. Extensive experimental results on six noisy datasets demonstrate the good performance of the proposed minimum-shrinkage models.
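The shrinkage this abstract refers to can be seen in the proximal operator of the trace norm, which soft-thresholds every singular value. The sketch below is not code from the paper; the threshold `lam` is an illustrative assumption. It shows that each recovered singular value is uniformly reduced by `lam`:

```python
import numpy as np

def trace_norm_prox(X, lam):
    """Proximal operator of the trace norm: soft-threshold singular values.

    Every nonzero singular value is reduced by lam -- exactly the
    shrinkage that Schatten p-norm models (for small p) aim to suppress.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
Y = trace_norm_prox(X, 0.5)
# The singular values of Y are max(sigma_i - 0.5, 0), so the spectrum
# of the recovered matrix is uniformly suppressed.
print(np.linalg.svd(X, compute_uv=False))
print(np.linalg.svd(Y, compute_uv=False))
```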
Pushing the limits of affine rank minimization by adapting probabilistic PCA. In Int. Conf., 2015
Abstract

Cited by 1 (1 self)
Many applications require recovering a matrix of minimal rank within an affine constraint set, with matrix completion a notable special case. Because the problem is NP-hard in general, it is common to replace the matrix rank with the nuclear norm, which acts as a convenient convex surrogate. While elegant theoretical conditions elucidate when this replacement is likely to be successful, they are highly restrictive, and convex algorithms fail when the ambient rank is too high or when the constraint set is poorly structured. Non-convex alternatives fare somewhat better when carefully tuned; however, convergence to locally optimal solutions remains a continuing source of failure. Against this backdrop we derive a deceptively simple and parameter-free probabilistic-PCA-like algorithm that is capable, over a wide battery of empirical tests, of successful recovery even at the theoretical limit where the number of measurements equals the degrees of freedom in the unknown low-rank matrix. Somewhat surprisingly, this is possible even when the affine constraint set is highly ill-conditioned. While proving general recovery guarantees remains elusive for non-convex algorithms, Bayesian-inspired or otherwise, we nonetheless show conditions under which the underlying cost function has a unique stationary point located at the global optimum; no existing cost function we are aware of satisfies this property. The algorithm has also been successfully deployed on a computer vision application involving image rectification and on a standard collaborative filtering benchmark.
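The "degrees of freedom" limit mentioned in this abstract has a simple closed form: an m-by-n matrix of rank r is determined by r(m + n - r) parameters (a rank-r factorization with the gauge freedom removed), so no method can recover it exactly from fewer measurements. A small illustrative helper (the function name is my own, not from the paper):

```python
def lowrank_dof(m, n, r):
    """Degrees of freedom of an m x n matrix of rank r.

    Counting via a rank-r factorization U @ V.T: U has m*r entries,
    V has n*r, and an invertible r x r gauge matrix is redundant,
    giving m*r + n*r - r*r = r*(m + n - r) free parameters.
    """
    return r * (m + n - r)

# A 100 x 100 rank-5 matrix has far fewer free parameters than entries:
print(lowrank_dof(100, 100, 5))  # 975 parameters vs. 10000 entries
```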
Exploring Algorithmic Limits of Matrix Rank Minimization under Affine Constraints
A Pseudo-Bayesian Algorithm for Robust PCA
Abstract
Commonly used in many applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers. The basic idea is to learn a decomposition of some data matrix of interest into low-rank and sparse components, the latter representing unwanted outliers. Although the resulting problem is typically NP-hard, convex relaxations provide a computationally expedient alternative with theoretical support. However, in practical regimes the performance guarantees break down, and a variety of non-convex alternatives, including Bayesian-inspired models, have been proposed to boost estimation quality. Unfortunately, without additional a priori knowledge, none of these methods can significantly expand the critical operational range in which exact principal subspace recovery is possible. Into this mix we propose a novel pseudo-Bayesian algorithm that explicitly compensates for design weaknesses in many existing non-convex approaches, leading to state-of-the-art performance with a sound analytical foundation.
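As a rough illustration of the low-rank-plus-sparse split described above, here is a generic alternating soft-thresholding sketch of an unconstrained convex surrogate, not the paper's pseudo-Bayesian method; the penalty `lam` and the iteration count are arbitrary assumptions:

```python
import numpy as np

def rpca_sketch(M, lam=0.1, n_iter=50):
    """Block-coordinate descent on
        0.5*||M - L - S||_F^2 + lam*||L||_* + lam*||S||_1,
    a simple unconstrained surrogate for robust PCA. Each update is an
    exact proximal step, so the objective decreases monotonically.
    """
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # L-update: singular-value soft-thresholding of the residual M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding of the residual M - L.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

Real robust PCA solvers (e.g., inexact ALM for Principal Component Pursuit) enforce M = L + S via Lagrange multipliers; this sketch only conveys the two thresholding steps that separate the low-rank and sparse parts.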