
## An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems (2009)

Citations: 183 (10 self)

### Citations

5400 | Convex Analysis
- Rockafellar
- 1972
Citation Context: ...(7) as a special case. Specifically, the minimization problem we consider has the form: min_{X∈ℜm×n} F(X) := f(X) + P(X), (8) where P : ℜm×n → (−∞, ∞] is a proper, convex, lower semicontinuous (lsc) [39] function and f is convex smooth (i.e., continuously differentiable) on an open subset of ℜm×n containing domP = {X | P(X) < ∞}. We assume that domP is closed and ∇f is Lipschitz continuous on domP ...

4203 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1996
Citation Context: ...ℓ1-regularized linear least squares problem (also known as the basis pursuit de-noising problem) [12]: min_{x∈ℜn} (1/2)‖Ax − b‖₂² + µ‖x‖₁, (5) where µ is a given positive parameter; or the Lasso problem [40]: min_{x∈ℜn} {‖Ax − b‖₂² : ‖x‖₁ ≤ t}, (6) where t is a given positive parameter. It is not hard to see that the problem (5) is equivalent to (6) in the sense that a solution of (5) is also that of...
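The ℓ1 proximal step underlying problems (5) and (6) is elementwise soft-thresholding; iterative shrinkage-thresholding applies it after a gradient step on the least squares term. A minimal numpy sketch under our own naming (`ista` and `soft_threshold` are illustrative helpers, not code from the cited papers):

```python
import numpy as np

def soft_threshold(x, thresh):
    """Elementwise soft-thresholding: the proximal map of thresh * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def ista(A, b, mu, n_iter=500):
    """Basic iterative shrinkage-thresholding for problem (5):
    min_x 0.5 * ||A x - b||_2^2 + mu * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, mu / L)
    return x
```

With step size 1/L, where L = ‖A‖₂² bounds the Lipschitz constant of the gradient, each iteration does not increase the objective of (5).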

3609 | Compressed sensing
- Donoho
- 2006
Citation Context: ...Ax = b}, (3) where ‖x‖₀ denotes the number of nonzero components in the vector x, A ∈ ℜp×n, and min_{x∈ℜn} {‖x‖₁ : Ax = b}. (4) The problem (4) has attracted much interest in compressed sensing [8, 9, 10, 14, 15] and is also known as the basis pursuit problem. Recently, Recht et al. [38] established analogous theoretical results in the compressed sensing literature for the pair (1) and (2). In the basis pursu...

2717 | Atomic decomposition by basis pursuit - Chen, Donoho, et al. - 1999 |

2621 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...Ax = b}, (3) where ‖x‖₀ denotes the number of nonzero components in the vector x, A ∈ ℜp×n, and min_{x∈ℜn} {‖x‖₁ : Ax = b}. (4) The problem (4) has attracted much interest in compressed sensing [8, 9, 10, 14, 15] and is also known as the basis pursuit problem. Recently, Recht et al. [38] established analogous theoretical results in the compressed sensing literature for the pair (1) and (2). In the basis pursu...

Near-optimal signal recovery from random projections: Universal encoding strategies?
- Candès, Tao
- 2006
Citation Context: ...Ax = b}, (3) where ‖x‖₀ denotes the number of nonzero components in the vector x, A ∈ ℜp×n, and min_{x∈ℜn} {‖x‖₁ : Ax = b}. (4) The problem (4) has attracted much interest in compressed sensing [8, 9, 10, 14, 15] and is also known as the basis pursuit problem. Recently, Recht et al. [38] established analogous theoretical results in the compressed sensing literature for the pair (1) and (2). In the basis pursu...

1056 | A fast iterative shrinkage-thresholding algorithm for linear inverse problems
- Beck, Teboulle
- 2009
Citation Context: ...≤ Lf‖X − Y‖F ∀X, Y ∈ domP, (9) for some positive scalar Lf. The problem (7) is a special case of (8) with f(X) = (1/2)‖A(X) − b‖₂² and P(X) = µ‖X‖∗ with domP = ℜm×n. Recently, Beck and Teboulle [4] proposed a fast iterative shrinkage-thresholding algorithm (abbreviated FISTA) to solve (8) for the vector case where n = 1 and domP = ℜm, targeting particularly (5) arising in signal/image process...
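FISTA accelerates the proximal gradient step by evaluating it at an extrapolated point and updating the momentum parameter via t_{k+1} = (1 + √(1 + 4t_k²))/2, which yields the O(1/√ε) complexity discussed in this index. A hedged numpy sketch (the generic `grad_f`/`prox_P` interface and function names are our illustration, not the paper's code):

```python
import numpy as np

def fista(grad_f, prox_P, L, x0, n_iter=100):
    """FISTA sketch for min F(x) = f(x) + P(x): a proximal gradient step at an
    extrapolated point y, with the t-sequence momentum recursion of Beck-Teboulle.
    grad_f: gradient of f; prox_P(v, t): prox of t*P at v; L: Lipschitz constant."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_P(y - grad_f(y) / L, 1.0 / L)        # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev
```

Setting t = 1 at every step removes the extrapolation and recovers the plain proximal gradient (IST) iteration.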

873 | Exact matrix completion via convex optimization. Foundations of Computational mathematics,
- Candès, Recht
- 2009
Citation Context: ...is a special case of the affine rank minimization (1) with A(X) = XΩ, where XΩ is the vector in ℜ|Ω| obtained from X by selecting those elements whose indices are in Ω. Recently, Candès and Recht [11] proved that a random low-rank matrix can be recovered exactly with high probability from a rather small random sample of its entries, and it can be done by solving an aforementioned convex relaxation...

776 | An algorithmic framework for performing collaborative filtering
- Herlocker, Konstan, et al.
- 1999
Citation Context: ...l experiments on real matrix completion problems In this section, we consider matrix completion problems based on some real data sets, namely, the Jester joke data set [23] and the MovieLens data set [25]. The Jester joke data set contains 4.1 million ratings for 100 jokes from 73421 users and is available on the website http://www.ieor.berkeley.edu/~goldberg/jester-data/. The whole data is stored in ...

562 | Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- Recht, Fazel, et al.
- 2010
Citation Context: ...p×n, and min_{x∈ℜn} {‖x‖₁ : Ax = b}. (4) The problem (4) has attracted much interest in compressed sensing [8, 9, 10, 14, 15] and is also known as the basis pursuit problem. Recently, Recht et al. [38] established analogous theoretical results in the compressed sensing literature for the pair (1) and (2). In the basis pursuit problem (4), b is a vector of measurements of the signal x obtained by us...

555 | A singular value thresholding algorithm for matrix completion
- Cai, Candes, et al.
- 2008
Citation Context: ...+ p²(m + n)² + p³) and the memory requirement grows like O((m + n)² + p²). 3.2 Review of existing algorithms for matrix completion In this section, we review two recently developed algorithms [6, 30] for solving the matrix completion problem. First of all, we consider the following minimization problem: min_{X∈ℜm×n} (τ/2)‖X − G‖²F + µ‖X‖∗, (18) where G is a given matrix in ℜm×n. If G = Y − τ⁻¹A∗(...
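Problem (18) has a closed-form minimizer: soft-threshold the singular values of G at level µ/τ, keeping the singular vectors. A small numpy sketch (the helper name `svt` is ours, not from the cited papers):

```python
import numpy as np

def svt(G, thresh):
    """Singular value soft-thresholding: the closed-form minimizer of
    (tau/2) * ||X - G||_F^2 + mu * ||X||_*  with thresh = mu / tau, as in (18)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_shrunk = np.maximum(s - thresh, 0.0)   # shrink singular values toward zero
    return U @ np.diag(s_shrunk) @ Vt
```

Because singular values below the threshold are zeroed, the output is typically low rank, which is what makes this step useful inside matrix completion solvers.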

521 | Smooth minimization of non-smooth functions
- Nesterov
Citation Context: ...d promising numerical results for wavelet-based image deblurring. This algorithm is in the class of accelerated proximal gradient algorithms that were studied by Nesterov, Nemirovski, and others; see [32, 33, 34, 36, 43] and references therein. These accelerated proximal gradient algorithms have an attractive iteration complexity of O(1/√ε) for achieving ε-optimality; see Section 2. We extend Beck and Teboulle’s al...

377 | Eigentaste: A constant time collaborative filtering algorithm
- Goldberg, Roeder, et al.
Citation Context: ...0 1.02e+03 2.89e-02 4.4 Numerical experiments on real matrix completion problems In this section, we consider matrix completion problems based on some real data sets, namely, the Jester joke data set [23] and the MovieLens data set [25]. The Jester joke data set contains 4.1 million ratings for 100 jokes from 73421 users and is available on the website http://www.ieor.berkeley.edu/~goldberg/jester-dat...

371 | Sparse reconstruction by separable approximation
- Wright, Nowak, et al.
- 2009
Citation Context: ...int Sτ(G) can be found in closed form, which is an advantage of algorithms using (12) to update the current point (i.e., αk = 1 for all k) or compute a direction for large-scale optimization problems [24, 44, 46, 47]. When Algorithm 1, with fixed constants τk > 0, tk = 1, and αk = 1 for all k, is applied to the problem (5), i.e., (8) with f(X) = (1/2)‖AX − b‖₂², P(X) = µ‖X‖₁ and n = 1 (hence X ∈ ℜm), it i...

352 | An EM algorithm for wavelet-based image restoration
- Figueiredo, Nowak
Citation Context: ...ient vector Wu can be sparsely approximated, and it is usually formulated as a linear least squares problem involving a penalty on the term ‖Wu‖₁. The synthesis based approach was first introduced in [22, 25, 26, 27, 28]. In this approach, the underlying image u is assumed to be synthesized from a sparse coefficient vector x with u = Wᵀx, and it is usually formulated as a linear least squares problem involving a pena...

338 | Problem complexity and method efficiency in optimization
- Nemirovskii, Yudin
- 1983
Citation Context: ...d promising numerical results for wavelet-based image deblurring. This algorithm is in the class of accelerated proximal gradient algorithms that were studied by Nesterov, Nemirovski, and others; see [32, 33, 34, 36, 43] and references therein. These accelerated proximal gradient algorithms have an attractive iteration complexity of O(1/√ε) for achieving ε-optimality; see Section 2. We extend Beck and Teboulle’s al...

298 | A method of solving a convex programming problem with convergence rate O(1/k²)
- Nesterov
- 1983
Citation Context: ...d promising numerical results for wavelet-based image deblurring. This algorithm is in the class of accelerated proximal gradient algorithms that were studied by Nesterov, Nemirovski, and others; see [32, 33, 34, 36, 43] and references therein. These accelerated proximal gradient algorithms have an attractive iteration complexity of O(1/√ε) for achieving ε-optimality; see Section 2. We extend Beck and Teboulle’s al...

287 | Matrix Rank Minimization with Applications
- Fazel
- 2002
Citation Context: ...n embedding [42]. In general, this affine rank minimization problem (1) is an NP-hard nonconvex optimization problem. A recent convex relaxation of this affine rank minimization problem introduced in [19] minimizes the nuclear norm over the same constraints: min_{X∈ℜm×n} {‖X‖∗ : A(X) = b}. (2) The nuclear norm is the best convex approximation of the rank function over the unit ball of matrices. A pa...

274 | A rank minimization heuristic with application to minimum order system approximation.
- Fazel, Hindi, et al.
- 2001
Citation Context: ...}, (1) where A : ℜm×n → ℜp is a linear map and b ∈ ℜp. We denote the adjoint of A by A∗. The problem (1) has appeared in the literature of diverse fields including machine learning [1, 3], control [17, 18, 31], and Euclidean embedding [42]. In general, this affine rank minimization problem (1) is an NP-hard nonconvex optimization problem. A recent convex relaxation of this affine rank minimization problem ...

216 | Fast Monte Carlo algorithms for matrices I-III: computing a compressed approximate matrix decomposition
- Drineas, Kannan, et al.
- 2006
Citation Context: ...n each iteration of the FPC and SVT algorithms lies in computing the SVD of Gk. In [30], Ma et al. use a fast Monte Carlo algorithm such as the Linear Time SVD algorithm developed by Drineas et al. [16] to reduce the time for computing the SVD. In addition, they compute only the predetermined svk largest singular values and corresponding singular vectors to further reduce the computational time at e...

215 | Simultaneous cartoon and texture image inpainting using morphological component analysis
- Elad, Starck, et al.
- 2005
Citation Context: ...e. Therefore, there are two formulations for the sparse approximation of the underlying images, namely analysis based and synthesis based approaches. The analysis based approach was first proposed in [23, 24]. In this approach, we assume that the analyzed coefficient vector Wu can be sparsely approximated, and it is usually formulated as a linear least squares problem involving a penalty on the term ‖Wu‖₁...

202 | Framelets: MRA-based constructions of wavelet frames
- Daubechies, Han, et al.
- 2003
Citation Context: ...epresenting u. In this paper, the tight frame system W used is generated from the piecewise linear B-spline framelet constructed via the unitary extension principle in [40]. We refer interested readers to [21, 40] and the references therein for the general wavelet frame theory and its corresponding construction. The details in the construction of W from a given wavelet tight frame system can be found in, for e...

196 | Fixed point and Bregman iterative methods for matrix rank minimization
- Ma, Goldfarb, et al.
- 2011
Citation Context: ...ized linear least squares problem (7) and introduce three techniques to accelerate the convergence of our algorithm. In Section 4, we compare our algorithm with a fixed point continuation algorithm [30] for solving (7) on randomly generated matrix completion problems with moderate dimensions. We also present numerical results for solving a set of large-scale randomly generated matrix completion prob...

181 | Quantitative robust uncertainty principles and optimally sparse decompositions
- Candes, Romberg
- 2006

180 | On accelerated proximal gradient methods for convex-concave optimization
- Tseng
- 2008

169 | NESTA: A fast and accurate first-order method for sparse recovery - Becker, Bobin, et al. |

160 | A coordinate gradient descent method for nonsmooth separable minimization
- Tseng, Yun
Citation Context: ...int Sτ(G) can be found in closed form, which is an advantage of algorithms using (12) to update the current point (i.e., αk = 1 for all k) or compute a direction for large-scale optimization problems [24, 44, 46, 47]. When Algorithm 1, with fixed constants τk > 0, tk = 1, and αk = 1 for all k, is applied to the problem (5), i.e., (8) with f(X) = (1/2)‖AX − b‖₂², P(X) = µ‖X‖₁ and n = 1 (hence X ∈ ℜm), it i...

141 | Introductory Lectures on Convex Optimization - Nesterov - 2004 |

122 | Templates for convex cone problems with applications to sparse signal recovery.
- Becker, Candès, et al.
- 2011
Citation Context: ...k − x∗₀‖ ≥ σ, ∀k. Since the sequence {x∗αk} is bounded, there is a convergent subsequence that must converge to x∗₀ by the discussions above. This leads to a contradiction. Remark 1: It was shown in [3, 29, 44] that there exists an α∗ > 0 such that, for all α ≤ α∗, the unique solution of the following ℓ2-regularized ℓ1-minimization problem: min_{x∈ℜm} {‖x‖₁ + (α/2)‖x‖² : Ax = b}, is also the solution of the follo...

111 | An accelerated gradient method for trace norm minimization.
- Ji, Ye
- 2009
Citation Context: ...incorporating linesearch-like, continuation, and truncation techniques to accelerate the convergence. We should mention that the FISTA algorithm of Beck and Teboulle in [4] has also been extended in [26] to the problem (8) with P(X) = µ‖X‖∗. But as the authors of [26] noted, our algorithms were developed independently of theirs. In addition, the numerical experiments in [26] focused on the problem (...

104 | Split Bregman methods and frame based image restoration, Multiscale Modeling and Simulation: A
- Cai, Osher, et al.
Citation Context: ...restoration approaches, e.g., ROF and nonlocal PDE models. The split Bregman iteration is further used to develop a fast algorithm for the analysis based approach in frame-based image restoration in [12]. While the balanced approach in frame-based image restoration gives satisfactory simulation results, as shown in [6, 7, 8, 13, 14, 15, 16, 17], when solved by a variant of the proximal forward-back...

96 | Iterative methods for Image Deblurring”,
- Biemond, Lagendijk, et al.
- 1990
Citation Context: ...in (2). If α = 0 then (5) reduces to (2). If α > 0, then the objective function is strictly convex and so the optimal solution is unique. The term (α/2)‖x‖² is also known as Tikhonov regularization [4], which has been used to handle the ill-conditioning of the operator A in image deconvolution. As we will show later in the paper, in the limit as α ↓ 0, the minimizer of the modified model (5) will ...

96 | Linearized Bregman iterations for compressed sensing
- Cai, Osher, et al.
- 2009
Citation Context: ...e of frame-based image restoration as shown in [6, 7, 8, 13, 14, 15, 16, 17]. Recently, the linearized Bregman iteration was proposed for solving the ℓ1-minimization problems in compressed sensing by [10, 38, 43] and the nuclear norm minimization in matrix completion by [5]. The linearized Bregman iteration was then used to develop a fast algorithm for the synthesis based approach for frame-based image deblur...

90 | An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, et al.
- 2004
Citation Context: ...λ/L(gk). Step 4. Compute tk+1 = (1 + √(1 + 4(tk)²))/2. When the APG algorithm with tk = 1 for all k is applied to solve the problem (2), it is the popular iterative shrinkage/thresholding (IST) algorithm [20, 27, 28, 31] and it is also the proximal forward-backward splitting (PFBS) algorithm developed in [6, 7, 8, 9, 13, 14, 15, 16, 17] for the balanced approach in frame-based image restoration. The IST and PFBS algo...

88 | Bregmanized nonlocal regularization for deconvolution and sparse reconstruction
- Zhang, Burger, et al.
- 2010
Citation Context: ...then used to develop a fast algorithm for the synthesis based approach for frame-based image deblurring in [11]. Furthermore, the split Bregman iteration proposed in [30] was shown to be powerful in [30, 45] when it is applied to various PDE based image restoration approaches, e.g., ROF and nonlocal PDE models. The split Bregman iteration is further used to develop a fast algorithm for the analysis based...

87 | A framelet-based image inpainting algorithm
- Cai, Chan, et al.
Citation Context: ...references therein for the general wavelet frame theory and its corresponding construction. The details in the construction of W from a given wavelet tight frame system can be found in, for example, [6, 7, 8, 9, 13, 14, 15, 16, 17]. In the case of redundant tight frame systems, the mapping from the image u to its coefficients is not one-to-one, i.e., the representation of u in the frame domain is not unique. Therefore, there ar...

70 | A bound optimization approach to wavelet-based image deconvolution
- Figueiredo, Nowak
Citation Context: ...(1/2)‖AX − b‖₂², P(X) = µ‖X‖₁ and n = 1 (hence X ∈ ℜm), it is the popular iterative shrinkage/thresholding (IST) algorithms that have been developed and analyzed independently by many researchers [13, 20, 21, 24]. When P ≡ 0 in the problem (8), Algorithm 1 with tk = 1 for all k reduces to the standard gradient algorithm. For the gradient algorithm, it is known that the sequence of function values F(Xk) can c...

70 | Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
- Yin, Osher, et al.
- 2008
Citation Context: ...e of frame-based image restoration as shown in [6, 7, 8, 13, 14, 15, 16, 17]. Recently, the linearized Bregman iteration was proposed for solving the ℓ1-minimization problems in compressed sensing by [10, 38, 43] and the nuclear norm minimization in matrix completion by [5]. The linearized Bregman iteration was then used to develop a fast algorithm for the synthesis based approach for frame-based image deblur...

66 | Neighborliness of randomly projected simplices in high dimensions
- Donoho, Tanner

61 | On the rank minimization problem over a positive semidefinite linear matrix inequality. - Mesbahi, Papavassilopoulos - 1997 |

60 | The split Bregman algorithm for L1 regularized problems
- Goldstein, Osher
Citation Context: ...linearized Bregman iteration was then used to develop a fast algorithm for the synthesis based approach for frame-based image deblurring in [11]. Furthermore, the split Bregman iteration proposed in [30] was shown to be powerful in [30, 45] when it is applied to various PDE based image restoration approaches, e.g., ROF and nonlocal PDE models. The split Bregman iteration is further used to develop a ...

56 | Wavelet algorithms for high-resolution image reconstruction
- Chan, Chan, et al.
- 2003
Citation Context: ...wly. On the other hand, we show in this section that the APG algorithm adapted here gets an ε-optimal solution in O(√(L/ε)) iterations. Thus the APG algorithm accelerates the PFBS algorithm used in [6, 7, 8, 9, 13, 14, 15, 16, 17] for the balanced approach in frame-based image restoration. We first prove that the problem (2) (i.e., (5) with α = 0) has an optimal solution and the problem (5) with α > 0 has a unique optimal solut...

56 | Inpainting and zooming using sparse representations
- Fadili, Starck, et al.
- 2009
Citation Context: ...ient vector Wu can be sparsely approximated, and it is usually formulated as a linear least squares problem involving a penalty on the term ‖Wu‖₁. The synthesis based approach was first introduced in [22, 25, 26, 27, 28]. In this approach, the underlying image u is assumed to be synthesized from a sparse coefficient vector x with u = Wᵀx, and it is usually formulated as a linear least squares problem involving a pena...

54 | Tight frame: the efficient way for high-resolution image reconstruction
- Chan, Riemenschneider, et al.
- 2004
Citation Context: ...references therein for the general wavelet frame theory and its corresponding construction. The details in the construction of W from a given wavelet tight frame system can be found in, for example, [6, 7, 8, 9, 13, 14, 15, 16, 17]. In the case of redundant tight frame systems, the mapping from the image u to its coefficients is not one-to-one, i.e., the representation of u in the frame domain is not unique. Therefore, there ar...

51 | Fixed-point continuation for ℓ1-minimization: methodology and convergence
- Hale, Yin, et al.
- 2008
Citation Context: ...ant of (5) or (6), provided that the matrix A satisfies a certain restricted isometry property. Many algorithms have been proposed to solve (5) and (6), targeting particularly large-scale problems; see [24, 40, 46] and references therein. Just like the basis pursuit problem, the data in a matrix completion problem may be contaminated with noise, and there may not exist low-rank matrices that satisfy the affine ...

50 | Dimension reduction and coefficient estimation in multivariate linear regression
- Yuan, Ekici, et al.
- 2007
Citation Context: ...wever, its applicability goes beyond matrix completion. For example, the problem (7) arises naturally in simultaneous dimension reduction and coefficient estimation in multivariate linear regression [45]. It also appears in multi-class classification and multi-task learning; see [37] and the references therein. In this paper, we will develop an accelerated proximal gradient method for a general uncon...

42 | Rank minimization under LMI constraints: A framework for output feedback problems.
- Ghaoui, Gahinet
- 1993
Citation Context: ...}, (1) where A : ℜm×n → ℜp is a linear map and b ∈ ℜp. We denote the adjoint of A by A∗. The problem (1) has appeared in the literature of diverse fields including machine learning [1, 3], control [17, 18, 31], and Euclidean embedding [42]. In general, this affine rank minimization problem (1) is an NP-hard nonconvex optimization problem. A recent convex relaxation of this affine rank minimization problem ...

42 | Deconvolution: A wavelet frame approach
- Chai, Shen
Citation Context: ...references therein for the general wavelet frame theory and its corresponding construction. The details in the construction of W from a given wavelet tight frame system can be found in, for example, [6, 7, 8, 9, 13, 14, 15, 16, 17]. In the case of redundant tight frame systems, the mapping from the image u to its coefficients is not one-to-one, i.e., the representation of u in the frame domain is not unique. Therefore, there ar...

42 | Fixed-point continuation for ℓ1-minimization: methodology and convergence
- Yin, Hale, et al.
- 2008
Citation Context: ...λ/L(gk). Step 4. Compute tk+1 = (1 + √(1 + 4(tk)²))/2. When the APG algorithm with tk = 1 for all k is applied to solve the problem (2), it is the popular iterative shrinkage/thresholding (IST) algorithm [20, 27, 28, 31] and it is also the proximal forward-backward splitting (PFBS) algorithm developed in [6, 7, 8, 9, 13, 14, 15, 16, 17] for the balanced approach in frame-based image restoration. The IST and PFBS algo...

41 | Low-rank matrix factorization with attributes
- Abernethy, Bach, et al.
- 2006
Citation Context: ...k(X) : A(X) = b}, (1) where A : ℜm×n → ℜp is a linear map and b ∈ ℜp. We denote the adjoint of A by A∗. The problem (1) has appeared in the literature of diverse fields including machine learning [1, 3], control [17, 18, 31], and Euclidean embedding [42]. In general, this affine rank minimization problem (1) is an NP-hard nonconvex optimization problem. A recent convex relaxation of this affine rank...

41 | Iteratively solving linear inverse problems under general convex constraints
- Daubechies, Teschke, et al.
- 2007
Citation Context: ...ient vector Wu can be sparsely approximated, and it is usually formulated as a linear least squares problem involving a penalty on the term ‖Wu‖₁. The synthesis based approach was first introduced in [22, 25, 26, 27, 28]. In this approach, the underlying image u is assumed to be synthesized from a sparse coefficient vector x with u = Wᵀx, and it is usually formulated as a linear least squares problem involving a pena...

39 | An Introduction to Iterative Toeplitz Solvers
- Chan, Jin
- 2007
Citation Context: ...operator, and it is a Toeplitz-like or block-Toeplitz-like matrix with a suitable boundary condition. Hence A can be efficiently approximated by a circulant matrix or a fast transform based matrix C [18, 32]. In this paper, we use convolution matrices with circular or Neumann boundary conditions to approximate A, and D = (CCᵀ)⁻¹ is usually a good approximation for (AAᵀ)⁻¹. In order for the approximatio...

36 | Analysis and generalizations of the linearized Bregman method
- Yin
- 2010
Citation Context: ...k − x∗₀‖ ≥ σ, ∀k. Since the sequence {x∗αk} is bounded, there is a convergent subsequence that must converge to x∗₀ by the discussions above. This leads to a contradiction. Remark 1: It was shown in [3, 29, 44] that there exists an α∗ > 0 such that, for all α ≤ α∗, the unique solution of the following ℓ2-regularized ℓ1-minimization problem: min_{x∈ℜm} {‖x‖₁ + (α/2)‖x‖² : Ax = b}, is also the solution of the follo...

34 | Linearized bregman iterations for frame-based image deblurring
- Cai, Osher, et al.
Citation Context: ...nuclear norm minimization in matrix completion by [5]. The linearized Bregman iteration was then used to develop a fast algorithm for the synthesis based approach for frame-based image deblurring in [11]. Furthermore, the split Bregman iteration proposed in [30] was shown to be powerful in [30, 45] when it is applied to various PDE based image restoration approaches, e.g., ROF and nonlocal PDE models...

32 | Framework for kernel regularization with application to protein clustering. In
- Fan, Keles, et al.
- 2005
Citation Context: ...X〉 : X ∈ Sⁿ₊}, (10) where Sⁿ₊ is the cone of n×n symmetric positive semidefinite matrices, and I is the identity matrix. An example of (10) comes from regularized kernel estimation in statistics [28]. The problem (10) is also a special case of (8) with f(X) = (1/2)‖A(X) − b‖₂² + µ〈I, X〉 and P(X) = 0 if X ∈ Sⁿ₊, ∞ otherwise. Note that the term 〈I, X〉 is actually the nuclear norm ‖X‖∗ when X ∈ Sⁿ₊, ...
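For the indicator function P in (10), the proximal map is the Euclidean projection onto the positive semidefinite cone, computed by zeroing the negative eigenvalues of the symmetric part. A minimal numpy sketch (the helper name `proj_psd` is ours, not from the paper):

```python
import numpy as np

def proj_psd(X):
    """Euclidean projection onto the PSD cone S^n_+ (the prox of its indicator).
    Symmetrize, eigendecompose, and clip negative eigenvalues to zero."""
    Xs = (X + X.T) / 2.0                       # work with the symmetric part
    w, V = np.linalg.eigh(Xs)                  # eigenvalues ascending, V orthonormal
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T
```

Because the clipped eigenvalues are exactly the nonnegative parts, the output is the closest PSD matrix to X in Frobenius norm.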

31 | A generalized proximal point algorithm for certain non-convex minimization problems
- Fukushima, Mine
- 1981
Citation Context: ...I, X〉 is smooth on ℜn×n and so we are able to compute Sτk(Gk). In addition, since Sτk(Gk) ∈ Sⁿ₊, we have Xk ∈ Sⁿ₊ if αk ≤ 1 for all k. For the vector case where n = 1 in (8), Fukushima and Mine [22] studied a proximal gradient descent method using (11) to compute a descent direction (i.e., Algorithm 1 with tk = 1 for all k) with stepsize αk chosen by an Armijo-type rule. If P is separable, the m...

30 | Restoration of chopped and nodded images by framelets
- Cai, Chan, et al.

27 | Convergence analysis of tight framelet approach for missing data recovery
- Cai, Chan, et al.
Citation Context: ...wly. On the other hand, we show in this section that the APG algorithm adapted here gets an ε-optimal solution in O(√(L/ε)) iterations. Thus the APG algorithm accelerates the PFBS algorithm used in [6, 7, 8, 9, 13, 14, 15, 16, 17] for the balanced approach in frame-based image restoration. We first prove that the problem (2) (i.e., (5) with α = 0) has an optimal solution and the problem (5) with α > 0 has a unique optimal solut...

26 | Exact regularization of convex programs.
- Friedlander, Tseng
- 2007
Citation Context: ...k − x∗₀‖ ≥ σ, ∀k. Since the sequence {x∗αk} is bounded, there is a convergent subsequence that must converge to x∗₀ by the discussions above. This leads to a contradiction. Remark 1: It was shown in [3, 29, 44] that there exists an α∗ > 0 such that, for all α ≤ α∗, the unique solution of the following ℓ2-regularized ℓ1-minimization problem: min_{x∈ℜm} {‖x‖₁ + (α/2)‖x‖² : Ax = b}, is also the solution of the follo...

25 | On an approach to the construction of optimal methods of minimization of smooth convex functions
- Nesterov
- 1988

21 | Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression. - Lu, Monteiro, et al. - 2008 |

18 | A framelet algorithm for enhancing video stills
- Chan, Shen, et al.
Citation Context: ...wly. On the other hand, we show in this section that the APG algorithm adapted here gets an ε-optimal solution in O(√(L/ε)) iterations. Thus the APG algorithm accelerates the PFBS algorithm used in [6, 7, 8, 9, 13, 14, 15, 16, 17] for the balanced approach in frame-based image restoration. We first prove that the problem (2) (i.e., (5) with α = 0) has an optimal solution and the problem (5) with α > 0 has a unique optimal solut...

17 | Developments and Applications of Block Toeplitz Iterative Solvers
- Jin
- 2002
Citation Context: ...operator, and it is a Toeplitz-like or block-Toeplitz-like matrix with a suitable boundary condition. Hence A can be efficiently approximated by a circulant matrix or a fast transform based matrix C [18, 32]. In this paper, we use convolution matrices with circular or Neumann boundary conditions to approximate A, and D = (CCᵀ)⁻¹ is usually a good approximation for (AAᵀ)⁻¹. In order for the approximatio...

15 | Analysis versus synthesis
- Elad, Milanfar, et al.
Citation Context: ...e. Therefore, there are two formulations for the sparse approximation of the underlying images, namely analysis based and synthesis based approaches. The analysis based approach was first proposed in [23, 24]. In this approach, we assume that the analyzed coefficient vector Wu can be sparsely approximated, and it is usually formulated as a linear least squares problem involving a penalty on the term ‖Wu‖₁...

13 | Framelet based deconvolution - Cai, Shen - 2010 |

11 | Sparse representations and Bayesian image inpainting
- Fadili, Starck
- 2005

7 | A coordinate gradient descent method for ℓ1-regularized convex minimization
- Yun, Toh
Citation Context: ...ant of (5) or (6), provided that the matrix A satisfies a certain restricted isometry property. Many algorithms have been proposed to solve (5) and (6), targeting particularly large-scale problems; see [24, 40, 46] and references therein. Just like the basis pursuit problem, the data in a matrix completion problem may be contaminated with noise, and there may not exist low-rank matrices that satisfy the affine ...

4 |
Simultaneous cartoon and texture
- Cai, Chan, et al.
- 2010

3 |
An EM algorithm for wavelet-based image restoration
- Figueiredo, Nowak
Citation Context ... (1/2)‖AX − b‖_2^2, P(X) = µ‖X‖_1 and n = 1 (hence X ∈ ℜ^m), it is the popular iterative shrinkage/thresholding (IST) algorithms that have been developed and analyzed independently by many researchers =-=[13, 20, 21, 24]-=-. When P ≡ 0 in the problem (8), Algorithm 1 with tk = 1 for all k reduces to the standard gradient algorithm. For the gradient algorithm, it is known that the sequence of function values F(Xk) can c... |
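For the nuclear norm regularized problem this paper targets, the same shrinkage idea acts on singular values: the proximal map of τ‖·‖∗ soft-thresholds the spectrum. A minimal numpy sketch (the function name `svt` is this sketch's own):

```python
import numpy as np

def svt(Y, tau):
    """Proximal map of tau * ||.||_* : soft-threshold the singular values.

    This is the matrix analogue of the scalar shrinkage used by IST
    when P is the l1 norm; singular values at or below tau are zeroed,
    so the result typically has reduced rank.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```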

2 |
Uncovering shared structures in multiclass classification
- Amit, Fink, et al.
- 2007
Citation Context ...k(X) : A(X) = b } , (1) where A : ℜ^{m×n} → ℜ^p is a linear map and b ∈ ℜ^p. We denote the adjoint of A by A^*. The problem (1) has appeared in the literature of diverse fields including machine learning =-=[1, 3]-=-, control [17, 18, 31], and Euclidean embedding [42]. In general, this affine rank minimization problem (1) is an NP-hard nonconvex optimization problem. A recent convex relaxation of this affine rank... |
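For reference, the convex relaxation referred to as (2) replaces the nonconvex rank by the nuclear norm ‖X‖∗ (the sum of singular values), its convex envelope on the unit spectral-norm ball:

```latex
\min_{X \in \Re^{m \times n}} \{\, \operatorname{rank}(X) : \mathcal{A}(X) = b \,\}
\quad\longrightarrow\quad
\min_{X \in \Re^{m \times n}} \{\, \|X\|_{*} : \mathcal{A}(X) = b \,\}
```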

2 |
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, De Mol
- 2004
Citation Context ... (1/2)‖AX − b‖_2^2, P(X) = µ‖X‖_1 and n = 1 (hence X ∈ ℜ^m), it is the popular iterative shrinkage/thresholding (IST) algorithms that have been developed and analyzed independently by many researchers =-=[13, 20, 21, 24]-=-. When P ≡ 0 in the problem (8), Algorithm 1 with tk = 1 for all k reduces to the standard gradient algorithm. For the gradient algorithm, it is known that the sequence of function values F(Xk) can c... |

2 | Trace norm regularization
- Pong, Tseng, et al.
- 2009
Citation Context ... (7) arises naturally in simultaneous dimension reduction and coefficient estimation in multivariate linear regression [45]. It also appears in multi-class classification and multi-task learning; see =-=[37]-=- and the references therein. In this paper, we will develop an accelerated proximal gradient method for a general unconstrained nonsmooth convex minimization problem which includes (7) as a special ca... |

2 |
SDPT3 - a MATLAB software package for semidefinite programming, Optimization Methods and Software 11
- Toh, Todd, et al.
- 1999
Citation Context ...ies, and it can be done by solving an aforementioned convex relaxation (2) of (1), i.e., min_{X∈ℜ^{m×n}} { ‖X‖∗ : Xij = Mij, (i, j) ∈ Ω } . (16) In [11], the convex relaxation (16) was solved using SDPT3 =-=[41]-=-, which is one of the most advanced semidefinite programming solvers. The problem (16) can be reformulated as a semidefinite program as follows; see [38] for details: min_{X,W1,W2} (1/2)(〈W1, Im〉 + 〈W2, ... |
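The semidefinite program that the snippet truncates is the standard reformulation (the form below follows the characterization in Recht et al. [38]): ‖X‖∗ is the optimal value of a trace minimization over a positive semidefinite block matrix.

```latex
\min_{X,\,W_1,\,W_2} \ \tfrac{1}{2}\bigl(\langle W_1, I_m\rangle + \langle W_2, I_n\rangle\bigr)
\quad \text{s.t.} \quad
\begin{bmatrix} W_1 & X \\ X^T & W_2 \end{bmatrix} \succeq 0, \qquad
X_{ij} = M_{ij}, \ (i,j) \in \Omega
```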

2 | Gradient based method for cone programming with application to large-scale compressed sensing
- Lu
- 2008
Citation Context ...d algorithms. The APG algorithms of Nesterov, and Beck and Teboulle, have been adapted to solve ℓ1-regularized linear least squares problems arising in signal/image processing [1], compressed sensing =-=[2, 33]-=- and nuclear norm regularized linear least squares problems [41]. Compared to projected gradient and proximal forward-backward splitting algorithms, APG algorithms have an attractive iteration complex... |

1 |
Matrix completion with noise, preprint
- Candès, Plan
- 2009
Citation Context ...ents, and yet the errors are all smaller than the noise level (nf = 0.1) in the given data. The errors obtained here are consistent with (and in fact more accurate than) the theoretical result established in =-=[7]-=-.

Table 3: Numerical results on random matrix completion problems without noise.

  Unknown M: n | p | r | p/dr        Results: µ | iter | #sv | time | error
  1000 | 119406 | 10 | 6             1.44e-02 | 38 | 10 | 2.66e+00 | 2.94e-04
       | 389852 | 50 | 4             5.39e-0... |

1 |
Affine systems in L2(ℝd): the analysis of the analysis operator
- Ron, Shen
- 1997
Citation Context ...e called the canonical coefficients representing u. In this paper, the tight frame system W used is generated from the piecewise linear B-spline framelet constructed via the unitary extension principle in =-=[40]-=-. We refer interested readers to [21, 40] and the references therein for the general wavelet frame theory and its corresponding construction. The details in the construction of W from a given wavelet ... |