Results 1 - 10 of 1,559
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- IEEE Journal of Selected Topics in Signal Processing, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a spa ..."
Abstract - Cited by 539 (17 self)
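A minimal sketch of the kind of formulation this abstract describes: the ℓ2 + ℓ1 objective split into nonnegative parts and attacked by projected gradient steps. The step-size rule, function names, and toy data are illustrative assumptions, not the authors' GPSR code.

```python
# Sketch (not the authors' implementation): solve
#   min_x 0.5*||y - A x||^2 + tau*||x||_1
# by splitting x = u - v with u, v >= 0 and projecting each gradient step
# onto the nonnegative orthant. A fixed, conservative step size is used.
import numpy as np

def gradient_projection_l1(A, y, tau, step=None, iters=500):
    n = A.shape[1]
    if step is None:
        # the joint (u, v) Hessian has norm 2*||A||_2^2, so use 1 over that
        step = 0.5 / (np.linalg.norm(A, 2) ** 2)
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (A @ (u - v) - y)                  # gradient of the quadratic term
        u = np.maximum(0.0, u - step * (g + tau))    # projected step in u >= 0
        v = np.maximum(0.0, v - step * (-g + tau))   # projected step in v >= 0
    return u - v

# toy usage on a random sparse-recovery instance
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = gradient_projection_l1(A, y, tau=0.1)
```

Splitting x = u − v turns the nondifferentiable ℓ1 term into a linear term over the nonnegative orthant, which is what makes plain gradient projection applicable.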
The Dantzig selector: statistical estimation when p is much larger than n
2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract - Cited by 879 (14 self)
n ≪ p, and the z_i's are i.i.d. N(0, σ²). Is it possible to estimate x reliably based on the noisy data y? To estimate x, we introduce a new estimator, which we call the Dantzig selector, the solution to the ℓ1-regularization problem min_{x̃ ∈ ℝ^p} ‖x̃‖_{ℓ1} subject to ‖Aᵀr‖_{ℓ∞} ≤ (1 + t⁻¹) √(2 log p) · σ
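Because the constraint above bounds ‖Aᵀr‖_{ℓ∞}, the problem is a linear program; a hedged sketch of that reduction follows, with a generic bound `lam` standing in for (1 + t⁻¹)√(2 log p)·σ, and the function name and solver choice being assumptions.

```python
# Sketch (not the authors' implementation): the Dantzig selector
#   min ||x||_1  subject to  ||A^T (y - A x)||_inf <= lam
# as a linear program, by splitting x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, lam):
    n = A.shape[1]
    G = A.T @ A                       # Gram matrix
    Aty = A.T @ y
    c = np.ones(2 * n)                # objective: sum(u) + sum(v) = ||x||_1
    # |A^T(y - A(u - v))| <= lam, written as two one-sided inequalities
    A_ub = np.block([[-G, G], [G, -G]])
    b_ub = np.concatenate([lam - Aty, lam + Aty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:n] - z[n:]              # recover x = u - v
```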
Multiple kernel learning, conic duality, and the SMO algorithm
- In Proceedings of the 21st International Conference on Machine Learning (ICML), 2004
"... While classical kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for the support vector machine (SVM), and showed that the optimiz ..."
Abstract - Cited by 445 (31 self)
; moreover, the sequential minimal optimization (SMO) techniques that are essential in large-scale implementations of the SVM cannot be applied because the cost function is non-differentiable. We propose a novel dual formulation of the QCQP as a second-order cone programming problem, and show how to exploit
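For illustration only, a sketch of using a conic (nonnegative) combination of kernel matrices with a precomputed-kernel SVM. The weights `eta` are fixed by hand here, whereas the paper's point is to learn them jointly with the classifier via the conic dual and an SMO-style solver; the kernel choices and toy data are assumptions.

```python
# Fixed conic combination of kernel matrices fed to an SVM with a
# precomputed kernel; illustrates the combination idea only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(X1, X2, eta):
    kernels = [
        rbf_kernel(X1, X2, gamma=0.5),
        polynomial_kernel(X1, X2, degree=2),
    ]
    # conic combination: eta_j >= 0
    return sum(e * K for e, K in zip(eta, kernels))

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
eta = np.array([0.7, 0.3])                        # assumed fixed weights

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(combined_kernel(X, X, eta), y)
preds = clf.predict(combined_kernel(X, X, eta))   # demo prediction on the training set
```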
Sparse Reconstruction by Separable Approximation
2007
"... Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing ..."
Abstract - Cited by 373 (38 self)
(CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem
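A bare-bones iterative shrinkage/thresholding sketch of the ℓ2 + ℓ1 objective named in this abstract. The fixed step size and names are assumptions; the paper's framework (separable subproblems, adaptive step-size choices, continuation, more general regularizers) is not reproduced here.

```python
# Basic iterative shrinkage/thresholding for
#   min_x 0.5*||A x - b||^2 + tau*||x||_1
# with a fixed step size; illustration only.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, tau, iters=500):
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * tau)
    return x
```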
Renormalization in quantum field theory and the Riemann-Hilbert problem. II. The β-function, diffeomorphisms and the renormalization group
- Comm. Math. Phys.
"... We show that renormalization in quantum field theory is a special instance of a general mathematical procedure of multiplicative extraction of finite values based on the Riemann–Hilbert problem. Given a loop γ(z), |z | = 1 of elements of a complex Lie group G the general procedure is given by evalu ..."
Abstract - Cited by 332 (39 self)
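For orientation, the "multiplicative extraction of finite values" referred to here is the Birkhoff factorization of the loop; a hedged restatement of the standard form, with notation assumed rather than quoted from the paper:

```latex
% Standard Birkhoff (Riemann--Hilbert) factorization of the loop; notation assumed.
\gamma(z) \;=\; \gamma_{-}(z)^{-1}\,\gamma_{+}(z), \qquad |z| = 1,
% where \gamma_{+} extends holomorphically inside the circle and \gamma_{-}
% extends holomorphically outside it with \gamma_{-}(\infty) = 1; the finite
% ("renormalized") part is read off from \gamma_{+}.
```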
Convergence of a block coordinate descent method for nondifferentiable minimization
- J. Optim. Theory Appl., 2001
"... We study the convergence properties of a (block) coordinate descent method applied to minimize a nondifferentiable (nonconvex) function f(x1,...,xN) with certain separability and regularity properties. Assuming that f is continuous on a compact level set, the subsequence convergence of the iterate ..."
Abstract - Cited by 298 (3 self)
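A generic cyclic block coordinate descent loop of the kind analyzed in this paper. The inner per-block solver (scipy's Nelder-Mead) and the toy objective are stand-ins chosen here; the paper analyzes when iterates of such schemes converge and does not prescribe a block solver.

```python
# Cyclic block coordinate descent: minimize f over one block of variables
# at a time while the other blocks are held fixed.
import numpy as np
from scipy.optimize import minimize

def block_coordinate_descent(f, blocks, sweeps=20):
    """blocks: list of 1-D arrays giving the initial value of each block."""
    blocks = [np.asarray(b, dtype=float).copy() for b in blocks]
    for _ in range(sweeps):
        for i in range(len(blocks)):
            def sub(z, i=i):                      # objective as a function of block i only
                trial = list(blocks)
                trial[i] = z
                return f(trial)
            blocks[i] = minimize(sub, blocks[i], method="Nelder-Mead").x
    return blocks

# toy usage: smooth term in the first block, nonsmooth terms coupling both blocks
f = lambda b: np.sum((b[0] - 1.0) ** 2) + np.abs(b[1]).sum() + abs(b[0].sum() - b[1].sum())
solution = block_coordinate_descent(f, [np.zeros(3), np.zeros(2)])
```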
A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation
- SIAM Journal on Scientific Computing, 2010
"... Abstract. We propose a fast algorithm for solving the ℓ1-regularized minimization problem minx∈R n µ‖x‖1 + ‖Ax − b ‖ 2 2 for recovering sparse solutions to an undetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a ..."
Abstract - Cited by 54 (8 self)
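A rough two-stage sketch in the spirit of this abstract: shrinkage iterations to estimate a support, then a solve restricted to that subspace. The second stage below is plain least squares on the selected columns, which is a simplification; the paper's actual subproblem and continuation strategy are not reproduced.

```python
# Stage 1: shrinkage iterations on  mu*||x||_1 + ||A x - b||_2^2.
# Stage 2: re-solve on the estimated support ("subspace" step), here by least squares.
import numpy as np

def shrink(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def two_stage_sparse_solve(A, b, mu, shrink_iters=200, tol=1e-8):
    step = 0.5 / (np.linalg.norm(A, 2) ** 2)       # 1/L with L = 2*||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(shrink_iters):                   # stage 1: shrinkage
        x = shrink(x - step * (2.0 * A.T @ (A @ x - b)), step * mu)
    support = np.flatnonzero(np.abs(x) > tol)       # estimated support
    x_refined = np.zeros_like(x)
    if support.size:                                # stage 2: subspace solve
        x_refined[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return x_refined
```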
Stochastic global optimization
2008
"... Stochastic global optimization methods are methods for solving a global optimization prob-lem incorporating probabilistic (stochastic) elements, either in the problem data (the objective function, the constraints, etc.), or in the algorithm itself, or in both. Global optimization is a very important ..."
Abstract - Cited by 289 (6 self)
much easier and more efficiently than the deterministic algorithms. The problem of global minimization. Consider a general minimization problem f(x) → min, x ∈ X, with objective function f(·) and feasible region X. Let x∗ be a global minimizer of f(·); that is, x∗ is a point in X such that f(x∗) = f
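One simple member of the family this survey covers, as a sketch: uniform random starts over a feasible box followed by local refinement (multistart). The names, bounds, and toy objective below are illustrative assumptions.

```python
# Multistart: sample random starting points in the box X and keep the best
# local minimum found by a local solver.
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(f, lower, upper, n_starts=50, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_val = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)                  # random point in the box X
        res = minimize(f, x0, bounds=list(zip(lower, upper)), method="L-BFGS-B")
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x, best_val

# toy usage: a multimodal objective over the box [-5, 5]^2
f = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * np.dot(x, x)
x_star, f_star = multistart_minimize(f, [-5.0, -5.0], [5.0, 5.0])
```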
Sparse Greedy Matrix Approximation for Machine Learning
2000
"... In kernel based methods such as Regularization Networks large datasets pose signi- cant problems since the number of basis functions required for an optimal solution equals the number of samples. We present a sparse greedy approximation technique to construct a compressed representation of the ..."
Abstract - Cited by 222 (10 self)
], or Gaussian Processes [Williams, 1998] are based on kernel methods. Given an m-sample {(x_1, y_1), …, (x_m, y_m)} of patterns x_i ∈ X and target values y_i ∈ Y, these algorithms minimize the regularized risk functional min_{f∈H} R_reg[f] = (1/m) Σ_{i=1}^m c(x_i, y_i, f(x_i)) + (λ/2)‖f‖²_H
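A sketch of one standard greedy scheme for compressing a kernel (Gram) matrix, pivoted incomplete Cholesky, which keeps the column with the largest diagonal residual at each step. This illustrates the general "sparse greedy" idea; the paper's own selection criterion and randomized candidate sets are not reproduced here.

```python
# Greedy low-rank approximation of a PSD kernel matrix via pivoted Cholesky;
# the result L satisfies K ~ L @ L.T with only `rank` columns selected.
import numpy as np

def greedy_kernel_approximation(K, rank, tol=1e-10):
    n = K.shape[0]
    L = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()        # diagonal of the current residual
    pivots = []
    for j in range(rank):
        i = int(np.argmax(d))
        if d[i] < tol:                          # residual already negligible
            L = L[:, :j]
            break
        pivots.append(i)
        L[:, j] = (K[:, i] - L[:, :j] @ L[i, :j]) / np.sqrt(d[i])
        d -= L[:, j] ** 2
    return L, pivots

# toy usage with an RBF kernel matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
L, piv = greedy_kernel_approximation(K, rank=20)
err = np.linalg.norm(K - L @ L.T)               # approximation error
```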
Iteratively reweighted algorithms for compressive sensing
- In 33rd International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2008
"... The theory of compressive sensing has shown that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. In [1], it was shown empirically that using ℓ p minimization with p < 1 can do so with fewer measurements than with p = 1. In this paper ..."
Abstract - Cited by 185 (8 self)
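A minimal iteratively reweighted least squares sketch for the equality-constrained ℓp problem (0 < p ≤ 1) that this abstract refers to; the ε schedule, stopping rules, and parameter values below are simplifying assumptions, not the paper's exact update.

```python
# IRLS for  min ||x||_p^p  subject to  A x = b,  0 < p <= 1.
# Each iteration solves a weighted least-norm problem in closed form.
import numpy as np

def irls_lp(A, b, p=0.5, iters=30, eps=1.0):
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # minimum-norm starting point
    for _ in range(iters):
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)        # weights from the current iterate
        WiAt = (1.0 / w)[:, None] * A.T              # W^{-1} A^T with W = diag(w)
        x = WiAt @ np.linalg.solve(A @ WiAt, b)      # x = W^{-1} A^T (A W^{-1} A^T)^{-1} b
        eps = max(eps / 10.0, 1e-8)                  # gradually reduce the smoothing
    return x

# toy usage: exact measurements of a 3-sparse signal
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -0.8, 2.0]
b = A @ x_true
x_hat = irls_lp(A, b, p=0.5)
```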