Results 1 – 10 of 190
Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion
 J. Opt. Soc. Am. A
, 2004
"... We propose a robust autofocus method for reconstructing digital Fresnel holograms. The numerical reconstruction involves simulating the propagation of a complex wavefront to the appropriate distance. Since the latter value is difficult to determine manually, it is desirable to rely on an automatic procedure for finding the optimal distance to achieve high-quality reconstructions. Our algorithm maximizes a sharpness metric related to the sparsity of the signal's expansion in distance-dependent wavelet-like Fresnelet bases. We show results from simulations and experimental situations that confirm its ..."
Cited by 22 (2 self)
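The sharpness metric can be made concrete with a toy version: score each candidate reconstruction by the sparsity of its detail coefficients and keep the sparsest one. In this sketch, first differences stand in for the distance-dependent Fresnelet analysis, and the negated ℓ1 norm of energy-normalized coefficients serves as the sparsity measure; all names are illustrative, not the paper's implementation.

```python
import numpy as np

def sparsity_score(coeffs):
    # Negated l1 norm of l2-normalized coefficients: larger means sparser,
    # since a unit-energy vector with few nonzeros has a small l1 norm.
    c = np.asarray(coeffs, dtype=float)
    n = np.linalg.norm(c)
    return -np.abs(c).sum() / n if n > 0 else 0.0

def select_focus(candidate_reconstructions):
    """Return the index of the candidate whose detail coefficients are
    sparsest. First differences are a toy substitute for the Fresnelet
    analysis used in the paper."""
    scores = [sparsity_score(np.diff(r)) for r in candidate_reconstructions]
    return int(np.argmax(scores))
```

A sharp piecewise-constant signal has just two nonzero differences, while any defocused (blurred) version spreads that energy over many coefficients, so the in-focus candidate wins.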
An EM Algorithm for Wavelet-Based Image Restoration
, 2002
"... This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking advantage of the well-known sparsity of wavelet representations. Previous works have investigated wavelet-based restoration but, except for certain special cases, the resulting criteria are solved approximately or require very demanding optimization methods. The EM algorithm herein proposed combines ..."
Cited by 352 (22 self)
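The iteration such wavelet-domain penalized-likelihood methods lead to alternates a gradient (Landweber-type) step on the data fit with a shrinkage step that applies the sparsity prior. A minimal sketch, assuming a 1-D circular blur and, for simplicity, sparsity imposed directly on the signal rather than on its wavelet coefficients; the function names are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 penalty: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_deconvolve(y, h, lam=0.05, step=1.0, n_iter=100):
    """Iterative shrinkage/thresholding for y = h * x + noise (circular blur).

    The data-fit gradient step plays the role of the E-step-like update,
    and the soft-thresholding plays the role of the M-step-like shrinkage.
    """
    H = np.fft.fft(h, n=len(y))
    x = np.zeros_like(y)
    for _ in range(n_iter):
        # Gradient step on 0.5 * ||y - h*x||^2 via FFTs (circular convolution).
        resid = y - np.real(np.fft.ifft(H * np.fft.fft(x)))
        grad = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(resid)))
        # Shrinkage step: apply the sparsity-promoting prior.
        x = soft_threshold(x + step * grad, lam * step)
    return x
```

With a wavelet transform inserted before the shrinkage and its inverse after, this becomes the wavelet-domain variant the abstract describes.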
Boosting with Structural Sparsity
"... We derive generalizations of AdaBoost and related gradient-based coordinate descent methods that incorporate sparsity-promoting penalties for the norm of the predictor that is being learned. The end result is a family of coordinate descent algorithms that integrate forward feature induction and back-pruning through regularization and give an automatic stopping criterion for feature induction. We study penalties based on the ℓ1, ℓ2, and ℓ∞ norms of the predictor and introduce mixed-norm penalties that build upon the initial penalties. The mixed-norm regularizers facilitate structural sparsity ..."
Cited by 25 (2 self)
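The structural effect of a mixed-norm penalty is easiest to see through its proximal (shrinkage) operator: under an ℓ1/ℓ2 penalty, coefficients are shrunk group-wise, so weak groups vanish together. A generic sketch of that operator, not the paper's boosting updates; the group layout is illustrative:

```python
import numpy as np

def group_soft_threshold(w, groups, t):
    """Proximal step for the l1/l2 mixed norm: each group of coefficients
    is shrunk jointly by its l2 norm, so entire groups are zeroed together.
    This joint zeroing is the mechanism behind structural sparsity."""
    out = np.array(w, dtype=float)
    for g in groups:
        gn = np.linalg.norm(out[g])
        scale = max(0.0, 1.0 - t / gn) if gn > 0 else 0.0
        out[g] = scale * out[g]
    return out
```

Contrast with plain ℓ1 shrinkage, which zeroes coefficients one at a time regardless of group membership.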
Mirror averaging with sparsity priors
 Submitted to Bernoulli
, 2012
"... We consider the problem of aggregating the elements of a possibly infinite dictionary for building a decision procedure that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure ..."
Cited by 9 (0 self)
Cryptanalysis of block ciphers with overdefined systems of equations
, 2002
"... Abstract. Several recently proposed ciphers, for example Rijndael and Serpent, are built with layers of small S-boxes interconnected by linear key-dependent layers. Their security relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on proba ... the sparsity of the equations and their specific structure. The XSL attack uses only relations true with probability 1, and thus the security does not have to grow exponentially in the number of rounds. XSL has a parameter P, and from our estimations it seems that P should be a constant or grow very slowly ..."
Cited by 253 (22 self)
A Sparse Signal Reconstruction Perspective for Source Localization With Sensor Arrays
, 2005
"... We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the 1-norm. A number of recent theoretical results on sparsifying properties of ..."
Cited by 231 (6 self)
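The overcomplete-basis idea can be sketched concretely: the dictionary columns are steering vectors sampled on an angle grid, and a 1-norm-penalized fit selects the few grid angles that explain the measurements. A minimal sketch assuming a uniform linear array with half-wavelength spacing and a simple iterative-shrinkage solver (the paper's actual solver and array model may differ):

```python
import numpy as np

def steering_dict(n_sensors, angles_deg, spacing=0.5):
    # Far-field steering vectors for a uniform linear array; each column
    # samples the array manifold at one candidate direction of arrival.
    m = np.arange(n_sensors)[:, None]
    phases = 2j * np.pi * spacing * m * np.sin(np.deg2rad(angles_deg))[None, :]
    return np.exp(phases) / np.sqrt(n_sensors)

def l1_localize(y, A, lam=0.1, n_iter=500):
    """Iterative shrinkage on the complex lasso 0.5*||y - Ax||^2 + lam*||x||_1.
    Nonzero entries of x indicate grid angles containing sources."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # step below 1/Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x + step * A.conj().T @ (y - A @ x)
        mag = np.abs(z)
        # Complex soft threshold: shrink magnitudes, keep phases.
        x = z * np.maximum(0.0, 1.0 - lam * step / np.maximum(mag, 1e-12))
    return x
```

The spatial spectrum |x| then shows sharp peaks at source directions rather than the broad lobes of conventional beamforming, which is the advantage the abstract alludes to.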
Stepwise Regression under Factor Sparsity using a new Stopping Criterion
, 1998
"... This paper considers variable selection in regression when both linear and higher order terms, such as cross product and quadratic terms, are taken into account. The total number of terms is then often too large to perform an exhaustive search. One approach that enables a fairly systematic search within reasonable time is to divide the factor space into smaller subsets, and then perform stepwise regression on each of these subsets. The subsets are here defined to include all linear and higher order terms for the different combinations of factors. A new stopping criterion in stepwise regression ..."
Cited by 3 (3 self)
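The stepwise procedure itself is standard; what varies between methods is the stopping rule. A generic sketch of forward stepwise regression with a simple relative-RSS-drop stopping rule (an illustrative stand-in, not the paper's criterion):

```python
import numpy as np

def forward_stepwise(X, y, min_rel_drop=0.05):
    """Greedy forward selection: at each step add the column that most
    reduces the residual sum of squares, stopping once the best relative
    RSS drop falls below min_rel_drop (no-intercept model for simplicity)."""
    n, p = X.shape
    selected = []
    rss = float(y @ y)
    while len(selected) < p and rss > 0:
        best_j, best_rss = None, rss
        for j in range(p):
            if j in selected:
                continue
            cols = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = y - cols @ beta
            cand = float(r @ r)
            if cand < best_rss:
                best_j, best_rss = j, cand
        if best_j is None or (rss - best_rss) / rss < min_rel_drop:
            break
        selected.append(best_j)
        rss = best_rss
    return selected
```

Refitting the full least-squares problem at each step keeps the sketch short; an efficient implementation would update the fit incrementally.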
Convergent SDP-Relaxations in Polynomial Optimization with Sparsity
 SIAM Journal on Optimization
"... Abstract. We consider a polynomial programming problem P on a compact semi-algebraic set K ⊂ R^n, described by m polynomial inequalities gj(X) ≥ 0, and with criterion f ∈ R[X]. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [9]. In particular, the SDP-relaxati ..."
Cited by 58 (16 self)
On Sparsity, Redundancy and Quality of Frame Representations
 IEEE Int. Symposium on Information Theory (ISIT
, 2007
"... Abstract — We consider approximations of signals by the elements of a frame in a complex vector space of dimension N and formulate both the noiseless and the noisy sparse representation problems. The noiseless representation problem is to find sparse representations of a signal r given that such rep ... ≤ 0.5, thus improving on a result of Candes and Tao [4]. The noisy sparse representation problem is to find sparse representations of a signal r satisfying a distortion criterion. In this case, we establish a lower bound on the tradeoff between the sparsity of the representation, the underlying ..."
Cited by 7 (1 self)
Sparsity vs. Statistical Independence from a Best-Basis Viewpoint
"... We examine the similarity and difference between sparsity and statistical independence in image representations in a very concrete setting: use the best basis algorithm to select the sparsest basis and the least statistically dependent basis from basis dictionaries for a given dataset. In order to u ... for most of our examples with minor differences; 2) Sparsity is more computationally and conceptually feasible as a basis selection criterion than statistical independence, particularly for data compression; 3) The sparsity criterion can and should be adapted to individual realizations rather than ..."
Cited by 4 (0 self)
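The core of a sparsity-driven best-basis search is an additive cost that ranks candidate bases by how few large coefficients they need. A two-basis toy version of that comparison (the actual algorithm searches a wavelet-packet tree of bases; the construction below is illustrative):

```python
import numpy as np

def l1_cost(coeffs):
    # Additive sparsity cost: smaller total l1 cost means a sparser expansion.
    return float(np.sum(np.abs(coeffs)))

def best_basis(x, bases):
    """Pick, from a list of orthonormal bases (matrices whose rows are the
    basis vectors), the index of the basis in which x has the smallest
    l1 cost. A toy stand-in for the best-basis tree search."""
    costs = [l1_cost(B @ x) for B in bases]
    return int(np.argmin(costs))
```

A constant signal prefers a basis containing the constant vector (one large coefficient), while a single spike prefers the standard basis; this is exactly the data-adaptivity the abstract argues for.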