Results 1 – 10 of 511
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ ℝ^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Cited by 879 (14 self)
≪ p, and the zi’s are i.i.d. N(0, σ²). Is it possible to estimate x reliably based on the noisy data y? To estimate x, we introduce a new estimator, which we call the Dantzig selector, which is the solution to the ℓ₁-regularization problem min_{x̃ ∈ ℝ^p} ‖x̃‖_{ℓ₁} subject to ‖Aᵀr‖_{ℓ∞} ≤ (1 + t⁻¹) √(2 log p) · σ
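The constrained program quoted in this abstract can be recast as a linear program, as the paper notes. Below is a minimal sketch using `scipy.optimize.linprog`; the function name `dantzig_selector` and the fixed choice of the threshold `lam` are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, lam):
    """Solve  min ||x||_1  s.t.  ||A^T (y - A x)||_inf <= lam  as an LP.

    LP variables z = [x, u] with |x_i| <= u_i; the objective is sum(u).
    """
    n, p = A.shape
    AtA = A.T @ A
    Aty = A.T @ y
    I = np.eye(p)
    Z = np.zeros((p, p))
    c = np.concatenate([np.zeros(p), np.ones(p)])  # minimize sum of u
    A_ub = np.vstack([
        np.hstack([ I,  -I]),   #  x - u <= 0
        np.hstack([-I,  -I]),   # -x - u <= 0
        np.hstack([ AtA, Z]),   #  A^T A x <= A^T y + lam
        np.hstack([-AtA, Z]),   # -A^T A x <= lam - A^T y
    ])
    b_ub = np.concatenate([np.zeros(2 * p), Aty + lam, lam - Aty])
    bounds = [(None, None)] * p + [(0, None)] * p  # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

Since the true sparse vector is itself feasible whenever lam ≥ ‖Aᵀz‖∞, the returned estimate always has ℓ₁ norm no larger than the truth's, up to solver tolerance.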
Simultaneous analysis of Lasso and Dantzig selector
 Submitted to the Annals of Statistics
, 2007
"... We exhibit an approximate equivalence between the Lasso estimator and Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ 2 in the linear model when th ..."
Cited by 472 (11 self)
On the LASSO and Dantzig selector equivalence
 in 44th Annual Conference on Information Sciences and Systems
, 2010
"... Abstract—Recovery of sparse signals from noisy observations is a problem that arises in many information processing contexts. LASSO and the Dantzig selector (DS) are two well-known schemes used to recover high-dimensional sparse signals from linear observations. This paper presents some results on t ..."
Cited by 7 (1 self)
Multi-Stage Dantzig Selector
"... We consider the following sparse signal recovery (or feature selection) problem: given a design matrix X ∈ ℝ^{n×m} (m ≫ n) and a noisy observation vector y ∈ ℝ^n satisfying y = Xβ∗ + ɛ, where ɛ is the noise vector following a Gaussian distribution N(0, σ²I), how to recover the signal (or parameter vecto ..."
Cited by 1 (1 self)
vector) β∗ when the signal is sparse? The Dantzig selector has been proposed for sparse signal recovery with strong theoretical guarantees. In this paper, we propose a multi-stage Dantzig selector method, which iteratively refines the target signal β∗. We show that if X obeys a certain condition
Dantzig selector homotopy with dynamic measurements
"... The Dantzig selector is a near ideal estimator for recovery of sparse signals from linear measurements in the presence of noise. It is a convex optimization problem which can be recast into a linear program (LP) for real data, and solved using some LP solver. In this paper we present an alternative ..."
Transductive versions of the LASSO and the Dantzig Selector
, 2009
"... We consider the linear regression problem, where the number p of covariates is possibly larger than the number n of observations (xi, yi), 1 ≤ i ≤ n, under sparsity assumptions. On the one hand, several methods have been successfully proposed to perform this task, for example the LASSO in [Tib96] or the D ..."
Cited by 1 (0 self)
] or the Dantzig Selector in [CT07]. On the other hand, consider new values (xi), n+1 ≤ i ≤ m. If one wants to estimate the corresponding yi’s, one should think of a specific estimator devoted to this task, referred to in [Vap98] as a "transductive" estimator. This estimator may differ from an estimator designed
The Constrained Dantzig Selector with Enhanced Consistency
, 2016
"... The Dantzig selector has received popularity for many applications such as compressed sensing and sparse modeling, thanks to its computational efficiency as a linear programming problem and its nice sampling properties. Existing results show that it can recover sparse signals mimicking the ..."
Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy
 IEEE TRANS. PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 2005
"... Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first der ..."
Cited by 571 (8 self)
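The max-relevance / min-redundancy criterion described in this abstract can be sketched as a greedy loop over features, scoring each candidate by its mutual information with the class minus its average mutual information with the already-selected features. The sketch below uses empirical mutual information on discrete features; the function names `mutual_info` and `mrmr` are illustrative, not from the paper.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information (in nats) between two discrete arrays."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1)          # contingency counts
    joint /= joint.sum()                    # joint distribution
    px = joint.sum(axis=1, keepdims=True)   # marginals
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

def mrmr(X, y, k):
    """Greedy mRMR: pick k columns of X by relevance minus redundancy."""
    p = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(p)]
    selected, remaining = [], list(range(p))
    while len(selected) < k:
        def score(j):
            red = (np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
                   if selected else 0.0)
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

A perfectly informative feature is picked first, and an exact duplicate of it is then penalized by the redundancy term, which is the behavior the criterion is designed to produce.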