Results 1–10 of 99
A review of curvelets and recent applications
 IEEE Signal Processing Magazine
, 2009
Abstract

Cited by 128 (10 self)
Multiresolution methods are deeply related to image processing, biological and computer vision, scientific computing, etc. The curvelet transform is a multiscale directional transform that allows an almost optimal nonadaptive sparse representation of objects with edges. It has generated increasing interest in the applied mathematics and signal processing communities in recent years. In this paper, we present a review of the curvelet transform, including its history beginning from wavelets, its logical relationship to other multiresolution multidirectional methods like contourlets and shearlets, its basic theory, and its discrete algorithm. Further, we consider recent applications in image/video processing, seismic exploration, fluid mechanics, simulation of partial differential equations, and compressed sensing.
Dictionaries for Sparse Representation Modeling
Abstract

Cited by 109 (4 self)
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a prespecified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, a proper dictionary can be chosen in one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data, or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1D and 2D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA, and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
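To make the dictionary-learning side of this survey concrete, here is a minimal NumPy sketch of the MOD (Method of Optimal Directions) update mentioned in the abstract. The greedy coder, toy data, and every parameter value below are illustrative choices, not taken from the paper:

```python
import numpy as np

def sparse_code(D, Y, k):
    """Crude greedy coder: for each signal, keep the k atoms with the
    largest correlations and solve least squares on that support.
    (A stand-in for OMP, used only to keep the sketch short.)"""
    G = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        support = np.argsort(-np.abs(D.T @ Y[:, j]))[:k]
        coef, *_ = np.linalg.lstsq(D[:, support], Y[:, j], rcond=None)
        G[support, j] = coef
    return G

def mod(Y, n_atoms, k, n_iter=10, seed=0):
    """MOD: alternate sparse coding with the closed-form least-squares
    dictionary update D = Y G^+ (pseudoinverse), renormalizing atoms."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        G = sparse_code(D, Y, k)
        D = Y @ np.linalg.pinv(G)                 # MOD dictionary update
        D /= np.linalg.norm(D, axis=0) + 1e-12    # guard unused atoms
    G = sparse_code(D, Y, k)                      # final coding pass
    return D, G

# Toy usage: signals that are exactly 3-sparse in a ground-truth dictionary.
rng = np.random.default_rng(1)
D_true = rng.standard_normal((16, 32)); D_true /= np.linalg.norm(D_true, axis=0)
G_true = np.zeros((32, 200))
for j in range(200):
    idx = rng.choice(32, size=3, replace=False)
    G_true[idx, j] = rng.standard_normal(3)
Y = D_true @ G_true
D, G = mod(Y, n_atoms=32, k=3)
err = np.linalg.norm(Y - D @ G) / np.linalg.norm(Y)
```

K-SVD differs only in the dictionary-update step (atoms are updated one at a time via a rank-one SVD of the residual) but follows the same alternating structure.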
Backcasting: adaptive sampling for sensor networks
 In Proc. Information Processing in Sensor Networks
, 2004
Abstract

Cited by 90 (3 self)
Wireless sensor networks provide an attractive approach to the spatial monitoring of environments. Wireless technology makes these systems relatively flexible, but also places heavy demands on energy consumption for communications. This raises a fundamental tradeoff: using higher densities of sensors provides more measurements, higher resolution, and better accuracy, but requires more communications and processing. This paper proposes a new approach, called "backcasting," which can significantly reduce communications and energy consumption while maintaining high accuracy. Backcasting operates by first having a small subset of the wireless sensors communicate their information to a fusion center. This provides an initial estimate of the environment being sensed, and guides the allocation of additional network resources. Specifically, the fusion center backcasts information based on the initial estimate to the network at large, selectively activating additional sensor nodes in order to achieve a target error level. The key idea is that the initial estimate can detect correlations in the environment, indicating that many sensors may not need to be activated by the fusion center. Thus, adaptive sampling can save energy compared to dense, nonadaptive sampling. This method is theoretically analyzed in the context of field estimation, and it is shown that the energy savings can be quite significant compared to conventional nonadaptive sampling.
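The two-stage protocol described in the abstract can be sketched in a few lines. The 1D field, the coarse sampling step, and the jump threshold below are hypothetical choices for illustration; the paper's actual scheme works on noisy 2D fields with a target error level:

```python
import numpy as np

def backcast_sample(field, coarse_step=8, threshold=0.5):
    """Two-stage adaptive sampling sketch: stage 1 activates a sparse
    grid of sensors; the fusion center then 'backcasts', waking the
    dense sensors only between coarse readings that disagree."""
    n = len(field)
    active = np.zeros(n, dtype=bool)
    active[::coarse_step] = True                  # stage 1: sparse subset
    coarse_idx = np.flatnonzero(active)
    coarse_vals = field[coarse_idx]               # (noise-free for brevity)
    # Fusion center: flag intervals where neighbouring coarse readings differ.
    jumps = np.abs(np.diff(coarse_vals)) > threshold
    for i in np.flatnonzero(jumps):               # stage 2: densify locally
        active[coarse_idx[i]:coarse_idx[i + 1] + 1] = True
    return active

# Piecewise-constant field with a single boundary at index 70.
field = np.where(np.arange(128) < 70, 0.0, 1.0)
active = backcast_sample(field)
fraction_used = active.mean()
```

On this homogeneous-except-for-one-boundary field, only the interval containing the jump is densely sampled, so far fewer than half the sensors are ever activated, which is the energy saving the abstract describes.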
Boundary Estimation in Sensor Networks: Theory and Methods
 In IPSN
, 2003
Abstract

Cited by 83 (3 self)
Sensor networks have emerged as a fundamentally new tool for monitoring spatially distributed phenomena. This paper investigates a strategy by which sensor nodes detect and estimate nonlocalized phenomena such as "boundaries" and "edges" (e.g., temperature gradients, variations in illumination or contamination levels). A general class of boundaries, with mild regularity assumptions, is considered, and theoretical bounds on the achievable performance of sensor-network-based boundary estimation are established. A hierarchical boundary estimation algorithm is proposed that achieves a near-optimal balance between mean-squared error and energy consumption.
Estimating Inhomogeneous Fields Using Wireless Sensor Networks
 JSAC
Abstract

Cited by 65 (10 self)
Sensor networks have emerged as a fundamentally new tool for monitoring spatial phenomena. This paper describes a theory and methodology for estimating inhomogeneous, two-dimensional fields using wireless sensor networks. Inhomogeneous fields are composed of two or more homogeneous (smoothly varying) regions separated by boundaries. The boundaries, which correspond to abrupt spatial changes in the field, are nonparametric one-dimensional curves. The sensors make noisy measurements of the field, and the goal is to obtain an accurate estimate of the field at some desired destination (typically remote from the sensor network). The presence of boundaries makes this problem especially challenging. There are two key questions: (1) Given n sensors, how accurately can the field be estimated? (2) How much energy will be consumed by the communications required to obtain an accurate estimate at the destination? Theoretical upper and lower bounds on the estimation error and energy consumption are given. A practical strategy for estimation and communication is presented. The strategy, based on a hierarchical data-handling and communication architecture, provides a near-optimal balance of accuracy and energy consumption.
Restoration of Poissonian images using alternating direction optimization
 IEEE Trans. Image Process
, 2010
Abstract

Cited by 53 (5 self)
Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy. Index Terms: Alternating direction methods, augmented Lagrangian, convex optimization, image deconvolution, image restoration, Poisson images.
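As a rough illustration of the augmented-Lagrangian approach, here is a heavily simplified ADMM sketch for Poisson denoising with an ℓ1 prior: no blur operator, no frames, no TV, and arbitrary parameter values, so it is only a toy instance of the splitting idea, not the authors' full algorithm. The data-fidelity step uses the closed-form proximal map of z − y·log z, the positive root of ρz² + (1 − ρv)z − y = 0:

```python
import numpy as np

def poisson_prox(v, rho, y):
    """Prox of z -> z - y*log(z) at v: positive root of
    rho*z^2 + (1 - rho*v)*z - y = 0."""
    b = rho * v - 1.0
    return (b + np.sqrt(b * b + 4.0 * rho * y)) / (2.0 * rho)

def admm_poisson_denoise(y, lam=0.1, rho=1.0, n_iter=100):
    """Minimal ADMM for min_x sum(x - y*log x) + lam*||x||_1, x >= 0,
    using the splitting x = z (toy sketch, denoising only)."""
    x = np.maximum(y.astype(float), 1e-3)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        x = poisson_prox(z - u, rho, y)          # data-fidelity step
        z = np.maximum(x + u - lam / rho, 0.0)   # shrinkage + nonnegativity
        u += x - z                               # dual (Lagrange) update
    return z

rng = np.random.default_rng(0)
intensity = np.full(64, 5.0)
y = rng.poisson(intensity)
x_hat = admm_poisson_denoise(y)
```

The three updates mirror the structure the abstract describes: a nonquadratic but separable likelihood step, a simple prox for the nonsmooth regularizer plus the nonnegativity constraint, and a running dual variable tying them together.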
Wavelet-domain approximation and compression of piecewise smooth images
 IEEE Trans. Image Processing
, 2006
Abstract

Cited by 44 (4 self)
Inherent to photograph-like images are two types of structure: large smooth regions and geometrically smooth edge contours separating those regions. Over the past years, efficient representations and algorithms have been developed that take advantage of each of these types of structure independently: quadtree models for 2D wavelets are well-suited for uniformly smooth images (C^2 everywhere), while quadtree-organized wedgelet approximations are appropriate for purely geometrical images (containing nothing but C^2 contours). This paper shows how to combine the wavelet and wedgelet representations in order to take advantage of both types of structure simultaneously. We show that the asymptotic approximation and rate-distortion performance of a wavelet-wedgelet representation on piecewise smooth images mirrors the performance of both wavelets (for uniformly smooth images) and wedgelets (for purely geometrical images). We also discuss an efficient algorithm for fitting the wavelet-wedgelet representation to an image; the convenient quadtree structure of the combined representation enables new algorithms such as the recent WSFQ geometric image coder.
Wavelets, Ridgelets, and Curvelets for Poisson Noise Removal
Abstract

Cited by 42 (2 self)
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near-Gaussian process with asymptotically constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets, and curvelets, leading to multiscale VSTs (MSVSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MSVST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MSVST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MSVST approach is competitive relative to many existing denoising methods. Index Terms: Curvelets, filtered Poisson process, multiscale variance stabilizing transform, Poisson intensity estimation, ridgelets, wavelets.
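The classical Anscombe transform that the MSVST extends is easy to demonstrate directly: 2·sqrt(x + 3/8) maps Poisson counts to values whose variance is approximately 1, independent of the (sufficiently large) intensity. The intensity and sample size below are arbitrary:

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform: Poisson counts become
    approximately Gaussian with unit variance for large means."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

# Empirical check: Poisson(30) samples have variance near 30, but after
# the VST the standard deviation is close to 1 regardless of intensity.
rng = np.random.default_rng(0)
x = rng.poisson(30.0, size=50_000)
stabilized_std = anscombe(x).std()
```

Once variance is stabilized, Gaussian thresholding machinery applies, which is exactly the hypothesis-testing step the abstract describes for the MSVST coefficients.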
Sparse Poisson Intensity Reconstruction Algorithms
 In Proc. IEEE Workshop on Statistical Signal Processing (SSP)
, 2009
Abstract

Cited by 40 (8 self)
The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f) from Poisson data (y) cannot be accomplished by minimizing a conventional ℓ2–ℓ1 objective function. The problem addressed in this paper is the estimation of f from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f admits a sparse approximation in some basis. The optimization formulation considered in this paper uses a negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem. In particular, the proposed approach incorporates key ideas of using quadratic separable approximations to the objective function at each iteration and computationally efficient partition-based multiscale estimation methods. Index Terms: Photon-limited imaging, Poisson noise, wavelets, convex optimization, sparse approximation, compressed sensing.
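For context, the simplest classical baseline for estimating a nonnegative intensity f from y ~ Poisson(Af) is the EM / Richardson–Lucy multiplicative update, which monotonically decreases the negative Poisson log-likelihood. The sketch below uses an arbitrary toy system; the paper's algorithms go further, adding sparsity penalties and separable quadratic approximations for speed:

```python
import numpy as np

def poisson_nll(f, A, y, eps=1e-12):
    """Negative Poisson log-likelihood (up to constants): 1'Af - y'log(Af)."""
    Af = A @ f + eps
    return float(Af.sum() - y @ np.log(Af))

def em_poisson(A, y, n_iter=200, eps=1e-12):
    """EM / Richardson-Lucy update for y ~ Poisson(A f), f >= 0:
    f <- f * (A'(y / Af)) / (A'1). Nonnegativity is preserved
    automatically because every factor is nonnegative."""
    f = np.full(A.shape[1], y.mean() / (A.sum(axis=1).mean() + eps))
    At1 = A.sum(axis=0) + eps
    for _ in range(n_iter):
        f = f * (A.T @ (y / (A @ f + eps))) / At1
    return f

# Toy inverse problem with a sparse true intensity.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(40, 20))
f_true = np.zeros(20); f_true[[3, 11]] = 10.0
y = rng.poisson(A @ f_true).astype(float)
f_hat = em_poisson(A, y)
```

Plain EM has no sparsity prior and slows badly near zero entries, which is precisely the gap the quadratic-separable-approximation methods in the abstract are designed to close.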
Compressive Acquisition of Dynamic Scenes
Abstract

Cited by 37 (10 self)
Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models infeasible. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, from which the image frames are then reconstructed. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This considerably lowers the compressive measurement rate. We validate our approach with a range of experiments, including classification experiments that highlight the effectiveness of the proposed approach.
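The structural fact this framework exploits, that compressive measurements taken with a common matrix inherit the low state dimension of the LDS, can be demonstrated in a few lines. All dimensions and the dynamics model below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, d, T, m = 400, 3, 60, 40   # pixels, LDS order, frames, measurements

# Simulate a linear dynamical system: f_t = C z_t, z_{t+1} = A z_t.
C = rng.standard_normal((n_pix, d))                       # static observation matrix
A = 0.95 * np.linalg.qr(rng.standard_normal((d, d)))[0]   # contractive dynamics
Z = np.empty((d, T)); Z[:, 0] = rng.standard_normal(d)
for t in range(1, T):
    Z[:, t] = A @ Z[:, t - 1]
frames = C @ Z                                            # n_pix x T "video"

# Same compressive measurement matrix at every instant: y_t = Phi f_t.
Phi = rng.standard_normal((m, n_pix)) / np.sqrt(m)
Y = Phi @ frames                                          # m x T measurements

# Because Y = (Phi C) Z, the measurement matrix has rank d: an SVD of Y
# recovers the state sequence up to an invertible linear transform.
s = np.linalg.svd(Y, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
```

This is why the state sequence can be estimated directly in the compressed domain, leaving only the high-dimensional observation matrix to be recovered from measurements accumulated over time.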