Compressed Sensing Performance Bounds Under Poisson Noise
Abstract

Cited by 27 (6 self)
This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2–ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition. Index Terms: Complexity regularization, compressive sampling, nonparametric estimation, photon-limited imaging, sparsity.
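As a rough illustration of the objective the abstract describes (negative Poisson log-likelihood plus a sparsity penalty) and of one plausible way to build a nonnegative, flux-preserving sensing matrix from a shifted Rademacher matrix, consider the sketch below. All function names, the ℓ0-style penalty, and the specific normalization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def poisson_cs_objective(theta, A, y, pen_weight=1.0):
    """Negative Poisson log-likelihood of counts y under rate A @ theta,
    plus an l0-style sparsity penalty (illustrative choice of penalty)."""
    rate = A @ theta
    if np.any(rate <= 0):
        return np.inf                        # Poisson rates must be positive
    nll = np.sum(rate - y * np.log(rate))    # -log p(y | rate), up to log(y!) terms
    return nll + pen_weight * np.count_nonzero(theta)

def flux_preserving_matrix(m, n, seed=None):
    """Nonnegative sensing matrix with unit column sums, obtained by
    shifting and rescaling a Rademacher matrix (one plausible construction)."""
    rng = np.random.default_rng(seed)
    Z = rng.choice([-1.0, 1.0], size=(m, n))
    A = (Z + 1.0) / (2.0 * np.sqrt(m))       # entries in {0, 1/sqrt(m)}
    return A / A.sum(axis=0, keepdims=True)  # normalize each column's flux to 1
```

The objective is what the paper's analysis bounds; minimizing it over a candidate set of sparse reconstructions is a separate (combinatorial) matter that the sketch does not attempt.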
Minimax optimal level set estimation
in Proc. SPIE, Wavelets XI, 31 July – 4 August
, 2005
Abstract

Cited by 23 (4 self)
This paper describes a new methodology and associated theoretical analysis for rapid and accurate extraction of level sets of a multivariate function from noisy data. The identification of the boundaries of such sets is an important theoretical problem with applications in digital elevation maps, medical imaging, and pattern recognition. This problem is significantly different from classical segmentation because level set boundaries may not correspond to singularities or edges in the underlying function; as a result, segmentation methods which rely upon detecting boundaries would be potentially ineffective in this regime. This issue is addressed in this paper through a novel error metric sensitive to both the error in the location of the level set estimate and the deviation of the function from the critical level. Hoeffding's inequality is used to derive a novel regularization ...
Adaptive Hausdorff Estimation of Density Level Sets
, 2007
Abstract

Cited by 20 (0 self)
Consider the problem of estimating the γ-level set G*γ = {x : f(x) ≥ γ} of an unknown d-dimensional density function f based on n independent observations X1, ..., Xn from the density. This problem has been addressed under global error criteria related to the symmetric set difference. However, in certain applications such as anomaly detection and clustering, a more uniform mode of convergence is desirable to ensure that the estimated set is close to the target set everywhere. The Hausdorff error criterion provides this degree of uniformity and hence is more appropriate in such situations. It is known that the minimax optimal rate of convergence for the Hausdorff error is (n/log n)^(−1/(d+2α)) for level sets with Lipschitz boundaries, where the parameter α characterizes the regularity of the density around the level of interest. However, the estimators proposed in previous work achieve this rate for very restricted classes of sets (e.g. the boundary fragment and star-shaped sets) that effectively reduce the set estimation problem to a function estimation problem. This characterization precludes the existence of multiple connected components, which is fundamental to many applications such as clustering. Also, all previous work assumes knowledge of the density regularity as characterized by the parameter α. In this paper, we present a procedure that is adaptive to unknown regularity conditions and achieves near minimax optimal rates of Hausdorff error convergence for a class of level sets with very general shapes and multiple connected components at arbitrary orientations.
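The two ingredients of the problem statement (a plug-in estimate of the γ-level set and the Hausdorff error used to judge it) can be sketched as follows. This is a naive 1-D plug-in estimator for illustration, not the paper's adaptive procedure; the kernel, bandwidth, and grid are assumptions.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets,
    given as (num_points, dim) arrays."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def plugin_level_set(samples, grid, gamma, bandwidth=0.3):
    """Plug-in estimate of {x : f(x) >= gamma}: grid points where a
    Gaussian kernel density estimate exceeds the level gamma."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    f_hat = np.exp(-0.5 * diffs**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    return grid[f_hat >= gamma]
```

Note how the Hausdorff metric penalizes any grid point of one set that is far from the other set, which is exactly the "close everywhere" uniformity the abstract contrasts with symmetric-difference criteria.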
Fast multiresolution photon-limited image reconstruction
in Proc. IEEE Int. Symp. Biomedical Imaging (ISBI '04)
, 2004
Abstract

Cited by 19 (8 self)
The techniques described in this paper allow multiscale photon-limited image reconstruction methods to be implemented with significantly less computational complexity than previously possible. Methods such as multiscale Haar estimation, wedgelets, and platelets are all promising techniques in the context of Poisson data, but the computational burden they impose makes them impractical for many applications which involve iterative algorithms, such as deblurring and tomographic reconstruction. With the advent of the proposed implementation techniques, hereditary translation-invariant Haar wavelet-based estimates can be calculated in O(N log N) operations and wedgelet and platelet estimates can be computed in O(N^(7/6)) operations, where N is the number of pixels; these complexities are comparable to those of standard wavelet denoising (O(N)) and translation-invariant wavelet denoising (O(N log N)). Fast translation-invariant Haar denoising for Poisson data is accomplished by deriving the relationship between maximum penalized likelihood tree pruning decisions and the undecimated wavelet transform coefficients. Fast wedgelet and platelet methods are accomplished with a coarse-to-fine technique which detects possible boundary locations before performing wedgelet or platelet fits.
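To make the translation-invariant Haar idea concrete, here is a generic cycle-spinning Haar soft-threshold denoiser. It is a standard textbook technique, not the paper's penalized-likelihood tree pruning, and the naive averaging over shifts below is O(N^2); the point of the paper is that the undecimated transform brings such estimates down to O(N log N).

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar analysis, soft-threshold of the details, synthesis.
    Assumes len(x) is even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                  # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def ti_haar_denoise(x, thresh):
    """Translation-invariant estimate by cycle spinning: denoise every
    circular shift of x, unshift, and average."""
    n = len(x)
    acc = np.zeros_like(x)
    for s in range(n):
        acc += np.roll(haar_denoise(np.roll(x, -s), thresh), s)
    return acc / n
```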
Estimating the intensity of a random measure by histogram type estimators
, 2006
Abstract

Cited by 18 (1 self)
The purpose of this paper is to estimate the intensity of some random measure N on a set X by a piecewise constant function on a finite partition of X. Given a (possibly large) family M of candidate partitions, we build a piecewise constant estimator (histogram) on each of them and then use the data to select one estimator in the family. Choosing the square of a Hellinger-type distance as our loss function, we show that each estimator built on a given partition satisfies an analogue of the classical squared bias plus variance risk bound. Moreover, the selection procedure leads to a final estimator satisfying some oracle-type inequality, with, as usual, a possible loss corresponding to the complexity of the family M. When this complexity is not too high, the selected estimator has a risk bounded, up to a universal constant, by the smallest risk bound obtained for the estimators in the family. For suitable choices of the family of partitions, we deduce uniform risk bounds over various classes of intensities. Our approach applies to the estimation of the intensity of an inhomogeneous Poisson process, among other counting processes, or the estimation of the mean of a random vector with nonnegative components.
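A toy version of the strategy described in the abstract, for a point process on [0, 1]: build a histogram intensity estimate on each candidate partition, then pick one by a penalized contrast. The dyadic partitions, the least-squares contrast, and the penalty constant are all illustrative assumptions, not the paper's Hellinger-based procedure.

```python
import numpy as np

def histogram_intensity(events, n_bins, t_max=1.0):
    """Piecewise-constant intensity estimate: counts per bin / bin width."""
    counts, edges = np.histogram(events, bins=n_bins, range=(0.0, t_max))
    return counts / (t_max / n_bins), edges

def select_bins(events, candidates, t_max=1.0, pen=2.0):
    """Pick the partition minimizing a penalized least-squares contrast
    (penalty proportional to the number of bins; illustrative choice)."""
    best, best_score = None, np.inf
    for m in candidates:
        lam, _ = histogram_intensity(events, m, t_max)
        width = t_max / m
        counts = lam * width
        # Contrast: integral of lam^2 minus twice the sum of lam over events.
        score = np.sum(width * lam**2 - 2 * counts * lam) + pen * m
        if score < best_score:
            best, best_score = m, score
    return best
```

The estimate integrates to the total number of observed events by construction, which is the discrete analogue of matching the total mass of the random measure.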
Coarse-to-fine manifold learning
in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP '04)
, 2004
Abstract

Cited by 17 (5 self)
In this paper we consider a sequential, coarse-to-fine estimation of a piecewise constant function with smooth boundaries. Accurate detection and localization of the boundary (a manifold) is the key aspect of this problem. In general, algorithms capable of achieving optimal performance require exhaustive searches over large dictionaries that grow exponentially with the dimension of the observation domain. The computational burden of the search hinders the use of such techniques in practice, and motivates our work. We consider a sequential, coarse-to-fine approach that involves first examining the data on a coarse grid, and then refining the analysis and approximation in regions of interest. Our estimators involve an almost linear-time (in two dimensions) sequential search over the dictionary, and converge at the same near-optimal rate as estimators based on exhaustive searches. Specifically, for two dimensions, our algorithm requires O(n^(7/6)) operations for an n-pixel image, much less than the traditional wedgelet approaches, which require O(n^(11/6)) operations.
MULTI-ARMED BANDIT PROBLEMS
Abstract

Cited by 16 (0 self)
Multi-armed bandit (MAB) problems are a class of sequential resource allocation problems concerned with allocating one or more resources among several alternative (competing) projects. Such problems are paradigms of a fundamental conflict between making decisions (allocating resources) that yield ...
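The exploration/exploitation conflict named here is commonly illustrated by the UCB1 index policy, a standard algorithm for stochastic bandits (shown as background, not necessarily the survey's own focus). Each arm's index adds an optimism bonus to its empirical mean, so under-sampled arms keep getting tried while the apparent best arm is exploited.

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then always play the arm with the highest
    upper confidence bound. `pull(arm)` returns a reward in [0, 1].
    Returns the empirical means and pull counts after `horizon` rounds."""
    means = np.zeros(n_arms)
    counts = np.zeros(n_arms, dtype=int)
    for t in range(horizon):
        if t < n_arms:
            arm = t                                      # initial round-robin
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)    # optimism bonus
            arm = int(np.argmax(means + bonus))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]     # incremental mean
    return means, counts
```

With two Bernoulli arms of success rates 0.3 and 0.7, the better arm ends up pulled far more often, while the worse arm is still sampled logarithmically often.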
ESTIMATOR SELECTION WITH RESPECT TO HELLINGER-TYPE RISKS
, 2009
Abstract

Cited by 13 (5 self)
We observe a random measure N and aim at estimating its intensity s. This statistical framework allows us to deal simultaneously with the problems of estimating a density, the marginals of a multivariate distribution, the mean of a random vector with nonnegative components, and the intensity of a Poisson process. Our estimation strategy is based on estimator selection. Given a family of estimators of s based on the observation of N, we propose a selection rule, also based on N, for choosing among them. Few assumptions are made on the collection of estimators. The procedure offers the possibility to perform model selection and also to select among estimators associated with different model selection strategies. Besides, it provides an alternative to the T-estimators as studied recently in Birgé (2006). For illustration, we consider the problems of estimation and (complete) variable selection in various regression settings.
Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
Abstract

Cited by 10 (7 self)
The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results serve to yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar-Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam ...
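The abstract's starting observation is a standard fact: the difference of two independent Poisson counts follows a Skellam distribution, which is exactly what unnormalized Haar detail coefficients of Poisson data are. A quick empirical check (parameter values chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, n = 4.0, 2.5, 200_000

# Difference of independent Poisson counts: Skellam(mu1, mu2) samples.
d = rng.poisson(mu1, n) - rng.poisson(mu2, n)

# Skellam moments: mean = mu1 - mu2 and variance = mu1 + mu2, so the
# sample mean should be near 1.5 and the sample variance near 6.5 here.
sample_mean, sample_var = d.mean(), d.var()
```

Because the variance equals mu1 + mu2 rather than being constant, shrinkage rules in the Haar domain must account for the signal-dependent noise level, which is what the paper's Bayesian and unbiased-risk results address.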