Results 1–10 of 12
Homogeneous Penalizers and Constraints in Convex Image Restoration
, 2012
Abstract

Cited by 4 (1 self)
Recently, convex optimization models have been successfully applied to various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form argmin{Φ(x) subject to Ψ(x) ≤ τ} and their penalized counterparts argmin{Φ(x) + λΨ(x)}. We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting Ψ := ‖L·‖ with L ∈ R^{m,n} and Φ := ϕ(H·), where H ∈ R^{n,n} and ϕ: R^n → R ∪ {+∞} meet certain requirements which are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if τ and λ are in the graph of this function. We illustrate the relation between τ and λ for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks of images corrupted by Poisson noise, with the I-divergence as data-fitting term ϕ, and in inpainting models with the constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
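The τ–λ correspondence this abstract describes can be checked numerically in the simplest instance Φ(x) = ½‖x − b‖², Ψ(x) = ‖x‖₁ (an illustrative assumption; the paper treats the more general Ψ = ‖L·‖, Φ = ϕ(H·)). The penalized problem is solved by soft thresholding and the constrained one by projection onto the ℓ1-ball, and the two solutions coincide exactly when τ = ‖x_λ‖₁; the function names below are chosen here, not taken from the paper:

```python
import numpy as np

def soft_threshold(b, lam):
    """Penalized solution: argmin_x 0.5*||x - b||^2 + lam*||x||_1."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def project_l1_ball(b, tau):
    """Constrained solution: argmin_x 0.5*||x - b||^2 s.t. ||x||_1 <= tau,
    via the standard sort-based projection scheme."""
    if np.abs(b).sum() <= tau:
        return b.copy()
    u = np.sort(np.abs(b))[::-1]                 # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(b) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)           # optimal shrinkage level
    return soft_threshold(b, theta)

b = np.array([3.0, -1.5, 0.5, -4.0])
lam = 1.0
x_pen = soft_threshold(b, lam)      # solve the penalized problem at lambda
tau = np.abs(x_pen).sum()           # the matched constraint level tau(lambda)
x_con = project_l1_ball(b, tau)     # solve the constrained problem at tau
```

For this matched pair (τ, λ) the two solutions agree, which is the graph relation in the simplest case.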
Fast randomized singular value thresholding for nuclear norm minimization
 In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
, 2015
Abstract

Cited by 2 (0 self)
principal component analysis based four-dimensional
An Algorithm for Fast Constrained Nuclear Norm Minimization and Applications to Systems Identification
Abstract

Cited by 1 (0 self)
Abstract — This paper presents a novel algorithm for efficiently minimizing the nuclear norm of a matrix subject to structural and semidefinite constraints. It requires performing only thresholding and eigenvalue decomposition steps and converges Q-superlinearly to the optimum. Thus, this algorithm offers substantial advantages, in terms of both memory requirements and computational time, over conventional semidefinite programming solvers. These advantages are illustrated using as an example the problem of finding the lowest-order system that interpolates a collection of noisy measurements.
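The thresholding step this abstract mentions can be sketched, for the unconstrained case, as the singular value thresholding operator (the proximal map of τ‖·‖_*). This is only a building block, not the authors' constrained Q-superlinear algorithm; they work with eigenvalue decompositions of structured semidefinite matrices, while for a general matrix the SVD plays the same role:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm.
    Shrinks every singular value of X by tau and zeros out the rest."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))
Y = svt(X, tau=1.0)   # singular values of Y are max(sigma_i - 1, 0)
```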
Fast interior-point inference in high-dimensional sparse, penalized state-space models
Active Subspace: Towards Scalable Low-Rank Learning
Abstract
We address the scalability issues in low-rank matrix learning problems. Usually, these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexity if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix of an NNROP is often low-rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large-size solution matrix into the product of a small-size orthonormal matrix (active subspace) and another small-size matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) problem (Candès et al., 2009), which is a typical example of an NNROP, theoretical results verify the suboptimality of the solution produced by our algorithm. For general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
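The factorization mechanism this abstract relies on can be illustrated with the classical variational identity ‖A‖_* = min over A = UV of ½(‖U‖_F² + ‖V‖_F²), which is what makes replacing the large solution matrix by small factors plausible. The following is only a numerical sanity check of that identity at the balanced SVD factorization, not the authors' ALM algorithm:

```python
import numpy as np

# A rank-3 matrix and its balanced factorization A = U @ V built from the SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
U0, s, Vt0 = np.linalg.svd(A, full_matrices=False)
r = 3
U = U0[:, :r] * np.sqrt(s[:r])           # 50 x 3 factor, absorbs sqrt of sigma
V = np.sqrt(s[:r])[:, None] * Vt0[:r]    # 3 x 40 factor, absorbs the other sqrt
nuclear_norm = s.sum()
half_frobenius = 0.5 * (np.linalg.norm(U, "fro") ** 2
                        + np.linalg.norm(V, "fro") ** 2)
```

At this balanced split the two quantities agree, so penalizing ½(‖U‖_F² + ‖V‖_F²) over small factors stands in for the nuclear norm of the full matrix.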
LowRank Matrix Completion
, 2013
Abstract
While datasets are frequently represented as matrices, real-world data is imperfect and entries are often missing. In many cases, the data are very sparse and the matrix must be filled in before any subsequent work can be done. This optimization problem, known as matrix completion, can be made well-defined by assuming the matrix to be low rank. The resulting rank-minimization problem is NP-hard, but it has recently been shown that the rank constraint can be replaced with a nuclear norm constraint and, with high probability, the global minimum of the problem will not change. Because this nuclear norm problem is convex and can be optimized efficiently, there has been a significant amount of research over the past few years to develop optimization algorithms that perform well. In this report, we review several methods for low-rank matrix completion. The first paper we review presents an iterative algorithm to
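One simple member of the family of methods such a review covers is iterative soft-thresholded SVD imputation (a "Soft-Impute"-style sketch; the function name, parameters, and test problem here are illustrative, not taken from the reviewed papers):

```python
import numpy as np

def soft_impute(M, mask, tau=0.01, iters=500):
    """Fill in the missing entries of M (mask True = observed) by repeatedly
    plugging the observed entries into the current estimate and
    soft-thresholding the singular values of the result."""
    X = np.zeros_like(M)
    for _ in range(iters):
        Y = np.where(mask, M, X)                      # keep observed entries
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    return X

rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(10), rng.standard_normal(10))  # rank-1 truth
mask = rng.random(M.shape) < 0.8                                # ~80% observed
X = soft_impute(M, mask)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Because the truth is exactly rank one and most entries are observed, the nuclear-norm solution here recovers the missing entries up to the small shrinkage bias τ introduces.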
Covariance Estimation in High Dimensions via Kronecker Product Expansions
, 2013
Abstract
This paper presents a new method for estimating high-dimensional covariance matrices. The method, permuted rank-penalized least-squares (PRLS), is based on a Kronecker product series expansion of the true covariance matrix. Assuming an i.i.d. Gaussian random sample, we establish high-dimensional rates of convergence to the true covariance as both the number of samples and the number of variables go to infinity. For covariance matrices of low separation rank, our results establish that PRLS has significantly faster convergence than the standard sample covariance matrix (SCM) estimator. The convergence rate captures a fundamental tradeoff between estimation error and approximation error, thus providing a scalable covariance estimation framework in terms of separation rank, similar to low-rank approximation of covariance matrices [1]. The MSE convergence rates generalize the high-dimensional rates recently obtained for the ML flip-flop algorithm [2], [3] for Kronecker product covariance estimation. We show that a class of block Toeplitz covariance matrices is approximable with low separation rank and give bounds on the minimal separation rank r that ensures a given level of bias. Simulations are presented to validate the theoretical bounds. As a real-world application, we illustrate the utility of the proposed Kronecker covariance estimator for spatio-temporal linear least squares prediction of multivariate wind speed measurements. Index Terms — Structured covariance estimation, penalized least squares, Kronecker product decompositions, high-dimensional convergence rates, mean-square error, multivariate prediction.
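The Kronecker product expansion underlying PRLS rests on a rearrangement (the Van Loan–Pitsianis construction) under which kron(A, B) becomes the rank-1 outer product of vec(A) and vec(B), so Kronecker structure turns into low rank in the permuted domain. Below is a minimal sketch of extracting the single nearest Kronecker factor; PRLS itself adds the rank penalty in the permuted domain and handles expansions with r > 1 terms:

```python
import numpy as np

def nearest_kronecker(S, p, q):
    """Best Frobenius-norm approximation S ~ kron(A, B), A (p x p), B (q x q),
    via the rearrangement that maps kron(A, B) to the rank-1 outer product
    vec(A) vec(B)^T, followed by a truncated SVD."""
    # Row iq+k, column jq+l of S carries A[i,j] * B[k,l]; regroup so that
    # rows index (i,j) and columns index (k,l).
    R = S.reshape(p, q, p, q).transpose(0, 2, 1, 3).reshape(p * p, q * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(p, p)
    B = np.sqrt(s[0]) * Vt[0].reshape(q, q)
    return A, B

rng = np.random.default_rng(0)
A0 = rng.standard_normal((3, 3))
B0 = rng.standard_normal((4, 4))
S = np.kron(A0, B0)                 # a separation-rank-1 test matrix
A, B = nearest_kronecker(S, 3, 4)   # recovers the product up to scaling of factors
```

For a separation-rank-1 input the rearranged matrix is exactly rank one, so the single recovered term reproduces S; keeping the top r singular triplets instead would give an r-term Kronecker expansion.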