Results 1–10 of 65
Compressive Wireless Sensing
, 2006
"... Compressive Sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retr ..."
Abstract

Cited by 109 (4 self)
Compressive Sampling is an emerging theory that is based on the fact that a relatively small number of random projections of a signal can contain most of its salient information. In this paper, we introduce the concept of Compressive Wireless Sensing for sensor networks in which a fusion center retrieves signal field information from an ensemble of spatially distributed sensor nodes. Energy and bandwidth are scarce resources in sensor networks and the relevant metrics of interest in our context are 1) the latency involved in information retrieval; and 2) the associated power-distortion tradeoff. It is generally recognized that given sufficient prior knowledge about the sensed data (e.g., statistical characterization, homogeneity, etc.), there exist schemes that have very favorable power-distortion-latency tradeoffs. We propose a distributed matched source-channel communication scheme, based in part on recent results in compressive sampling theory, for estimation of sensed data at the fusion center and analyze, as a function of the number of sensor nodes, the tradeoffs between power, distortion and latency. Compressive wireless sensing is a universal scheme in the sense that it requires no prior knowledge about the sensed data. This universality, however, comes at the cost of optimality (in terms of a less favorable power-distortion-latency tradeoff) and we quantify this cost relative to the case when sufficient prior information about the sensed data is assumed.
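The premise of this abstract, that a few random projections of a sparse signal retain enough information to recover it, can be sketched numerically. This is a generic illustration with invented dimensions, using plain ISTA for the sparse recovery step; it is not the scheme from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse signal of length n, observed through m << n random projections
n, k, m = 200, 5, 60
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)

A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))  # random projection matrix
y = A @ x                                      # the m compressive samples

# Recover x by ISTA (iterative soft-thresholding) applied to
#   min_z ||A z - y||^2 / 2 + lam * ||z||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
x_hat = np.zeros(n)
for _ in range(2000):
    z = x_hat - A.T @ (A @ x_hat - y) / L
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With 60 projections of a 5-sparse length-200 signal, the relative reconstruction error comes out small even though far fewer samples than signal coefficients were taken.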
Backcasting: adaptive sampling for sensor networks
 In Proc. Information Processing in Sensor Networks
, 2004
"... Wireless sensor networks provide an attractive approach to spatially monitoring environments. Wireless technology makes these systems relatively flexible, but also places heavy demands on energy consumption for communications. This raises a fundamental tradeoff: using higher densities of sensors pr ..."
Abstract

Cited by 90 (3 self)
Wireless sensor networks provide an attractive approach to spatially monitoring environments. Wireless technology makes these systems relatively flexible, but also places heavy demands on energy consumption for communications. This raises a fundamental tradeoff: using higher densities of sensors provides more measurements, higher resolution and better accuracy, but requires more communications and processing. This paper proposes a new approach, called “backcasting,” which can significantly reduce communications and energy consumption while maintaining high accuracy. Backcasting operates by first having a small subset of the wireless sensors communicate their information to a fusion center. This provides an initial estimate of the environment being sensed, and guides the allocation of additional network resources. Specifically, the fusion center backcasts information based on the initial estimate to the network at large, selectively activating additional sensor nodes in order to achieve a target error level. The key idea is that the initial estimate can detect correlations in the environment, indicating that many sensors may not need to be activated by the fusion center. Thus, adaptive sampling can save energy compared to dense, non-adaptive sampling. This method is theoretically analyzed in the context of field estimation and it is shown that the energy savings can be quite significant compared to conventional
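The two-stage idea in this abstract can be sketched on a toy 1-D field: a coarse subset of nodes reports first, and the fusion center then activates extra nodes only where the coarse samples suggest the field varies quickly. Everything here (field shape, node counts, thresholds) is invented for illustration, not taken from the paper:

```python
import numpy as np

# A smooth 1-D field sampled at n candidate sensor locations
n = 1000
t = np.linspace(0.0, 1.0, n)
field = np.sin(2 * np.pi * 3 * t)

# Stage 1: a small subset of sensors reports to the fusion center
coarse = np.arange(0, n, 50)                  # 2% of the nodes
est = np.interp(t, t[coarse], field[coarse])  # initial estimate

# Stage 2 ("backcast"): activate extra sensors only near coarse samples
# with large second differences, i.e. where the field seems to bend most
curv = np.abs(np.diff(field[coarse], 2))
hot = coarse[1:-1][curv > 0.5 * curv.max()]
extra = np.concatenate([np.arange(max(c - 50, 0), min(c + 50, n), 10)
                        for c in hot])
active = np.union1d(coarse, extra)

est2 = np.interp(t, t[active], field[active])
err1 = np.mean(np.abs(est - field))    # error of the coarse-only estimate
err2 = np.mean(np.abs(est2 - field))   # error after selective activation
```

The refined estimate improves accuracy while still leaving most of the network asleep, which is the energy argument the abstract makes.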
Information fusion for wireless sensor networks: methods, models, and classifications
 Article ID 1267073,
, 2007
"... ..."
Call and Response: Experiments in Sampling the Environment
 In Proceedings of the 2nd International
, 2004
"... Monitoring of environmental phenomena with embedded networked sensing confronts the challenges of both unpredictable variability in the spatial distribution of phenomena, coupled with demands for a high spatial sampling rate in three dimensions. For example, low distortion mapping of critical solar ..."
Abstract

Cited by 64 (12 self)
Monitoring of environmental phenomena with embedded networked sensing confronts the challenges of both unpredictable variability in the spatial distribution of phenomena, coupled with demands for a high spatial sampling rate in three dimensions. For example, low distortion mapping of critical solar radiation properties in forest environments may require two-dimensional spatial sampling rates of greater than 10 samples/m² over transects exceeding 1000 m². Clearly, adequate sampling coverage of such a transect requires an impractically large number of sensing nodes. This paper describes a new approach where the deployment of a combination of autonomous-articulated and static sensor nodes enables sufficient spatiotemporal sampling density over large transects to meet a general set of environmental mapping
Estimation diversity and energy efficiency in distributed sensing
 IEEE Transactions on Signal Processing
, 2007
"... Abstract—Distributed estimation based on measurements from multiple wireless sensors is investigated. It is assumed that a group of sensors observe the same quantity in independent additive observation noises with possibly different variances. The observations are transmitted using amplifyandforw ..."
Abstract

Cited by 63 (1 self)
Abstract—Distributed estimation based on measurements from multiple wireless sensors is investigated. It is assumed that a group of sensors observe the same quantity in independent additive observation noises with possibly different variances. The observations are transmitted using amplify-and-forward (analog) transmissions over non-ideal fading wireless channels from the sensors to a fusion center, where they are combined to generate an estimate of the observed quantity. Assuming that the best linear unbiased estimator (BLUE) is used by the fusion center, the equal-power transmission strategy is first discussed, where the system performance is analyzed by introducing the concept of estimation outage and estimation diversity, and it is shown that there is an achievable diversity gain on the order of the number of sensors. The optimal power allocation strategies are then considered for two cases: minimum distortion under power constraints; and minimum power under distortion constraints. In the first case, it is shown that by turning off bad sensors, i.e., sensors with bad channels and bad observation quality, adaptive power gain can be achieved without sacrificing diversity gain. Here, the adaptive power gain is similar to the array gain achieved in multiple-input single-output (MISO) multiantenna systems when channel conditions are known to the transmitter. In the second case, the sum power is minimized under a zero-outage estimation distortion constraint, and some related energy efficiency issues in sensor networks are discussed. Index Terms—Distributed estimation, energy efficiency, estimation diversity, estimation outage.
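The diversity and power-allocation analysis in this paper is beyond a snippet, but the BLUE fusion rule the abstract assumes is simple to sketch. For a common scalar observed in independent noise, the BLUE is the inverse-variance weighted average; all numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# K sensors observe the same scalar theta in independent additive
# noise with different (known) variances
K, theta = 20, 3.0
noise_var = rng.uniform(0.1, 2.0, K)            # per-sensor noise variance
x = theta + rng.normal(0.0, np.sqrt(noise_var)) # one observation per sensor

# BLUE for a common scalar: inverse-variance weighted average
w = 1.0 / noise_var
theta_hat = np.sum(w * x) / np.sum(w)
blue_var = 1.0 / np.sum(w)   # estimator variance; shrinks roughly as 1/K
```

The estimator variance `1 / sum(1/noise_var)` is never worse than the best single sensor and keeps improving as sensors are added, which is the starting point for the paper's outage and diversity analysis.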
Joint Source-Channel Communication for Distributed Estimation in Sensor Networks
"... Power and bandwidth are scarce resources in dense wireless sensor networks and it is widely recognized that joint optimization of the operations of sensing, processing and communication can result in significant savings in the use of network resources. In this paper, a distributed joint sourcechan ..."
Abstract

Cited by 53 (3 self)
Power and bandwidth are scarce resources in dense wireless sensor networks and it is widely recognized that joint optimization of the operations of sensing, processing and communication can result in significant savings in the use of network resources. In this paper, a distributed joint source-channel communication architecture is proposed for energy-efficient estimation of sensor field data at a distant destination and the corresponding relationships between power, distortion, and latency are analyzed as a function of number of sensor nodes. The approach is applicable to a broad class of sensed signal fields and is based on distributed computation of appropriately chosen projections of sensor data at the destination – phase-coherent transmissions from the sensor nodes enable exploitation of the distributed beamforming gain for energy efficiency. Random projections are used when little or no prior knowledge is available about the signal field. Distinct features of the proposed scheme include: 1) processing and communication are combined into one distributed projection operation; 2) it virtually eliminates the need for in-network processing and communication; 3) given sufficient prior knowledge about the sensed data, consistent estimation is possible with increasing sensor density even with vanishing total network power; and 4) consistent signal estimation is possible with power and latency requirements growing at most sublinearly with the number of sensor nodes even when little or no prior knowledge about the sensed data is assumed at the sensor nodes.
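Feature 1) above, combining processing and communication into one projection, can be sketched as follows: each node scales its own sample by a known weight and transmits phase-coherently, so the multiple-access channel itself computes the weighted sum. The dimensions, ±1 weights, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# n sensor nodes each hold one field sample x[i]; the destination
# wants m projections y = A @ x without ever collecting x itself
n, m = 100, 10
x = rng.normal(0.0, 1.0, n)
A = rng.choice([-1.0, 1.0], (m, n))   # projection weights known at each node

# For projection j, node i transmits A[j, i] * x[i] phase-coherently;
# the multiple-access channel adds the transmissions "in the air",
# so the destination receives the projection plus receiver noise
rx_noise = rng.normal(0.0, 0.1, m)
y = A @ x + rx_noise
```

Only m channel uses are needed regardless of n, which is where the sublinear latency scaling in the abstract comes from.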
Linear Coherent Decentralized Estimation
"... Abstract—We consider the distributed estimation of an unknown vector signal in a resource constrained sensor network with a fusion center. Due to power and bandwidth limitations, each sensor compresses its data in order to minimize the amount of information that needs to be communicated to the fusio ..."
Abstract

Cited by 47 (1 self)
Abstract—We consider the distributed estimation of an unknown vector signal in a resource-constrained sensor network with a fusion center. Due to power and bandwidth limitations, each sensor compresses its data in order to minimize the amount of information that needs to be communicated to the fusion center. In this context, we study the linear decentralized estimation of the source vector, where each sensor linearly encodes its observations and the fusion center also applies a linear mapping to estimate the unknown vector signal based on the received messages. We adopt the mean squared error (MSE) as the performance criterion. When the channels between sensors and the fusion center are orthogonal, it has been shown previously that the complexity of designing the optimal encoding matrices is NP-hard in general. In this paper, we study the optimal linear decentralized estimation when the multiple access channel (MAC) is coherent. For the case when the source and observations are scalars, we derive the optimal power scheduling via convex optimization and show that it admits a simple distributed implementation. Simulations show that the proposed power scheduling improves the MSE performance by a large margin when compared to the uniform power scheduling. We also show that under a finite network power budget, the asymptotic MSE performance (when the total number of sensors is large) critically depends on the multiple access scheme. For the case when the source and observations are vectors, we study the optimal linear decentralized estimation under both bandwidth and power constraints. We show that when the MAC between sensors and the fusion center is noiseless, the resulting problem has a closed-form solution (which is in sharp contrast to the orthogonal MAC case), while in the noisy MAC case, the problem can be efficiently solved by semidefinite programming (SDP).
Index Terms—Distributed estimation, energy efficiency, multiple access channel, linear source-channel coding, convex optimization.
Sparse Poisson intensity reconstruction algorithms
 in Proc. IEEE Work. Stat. Signal Processing (SSP)
, 2009
"... The observations in many applications consist of counts of discrete events, such as photons hitting a dector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or tempo ..."
Abstract

Cited by 40 (8 self)
The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f) from Poisson data (y) cannot be accomplished by minimizing a conventional ℓ2 − ℓ1 objective function. The problem addressed in this paper is the estimation of f from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f admits a sparse approximation in some basis. The optimization formulation considered in this paper uses a negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). This paper describes computational methods for solving the constrained sparse Poisson inverse problem. In particular, the proposed approach incorporates key ideas of using quadratic separable approximations to the objective function at each iteration and computationally efficient partition-based multiscale estimation methods. Index Terms—Photon-limited imaging, Poisson noise, wavelets, convex optimization, sparse approximation, compressed sensing
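The paper's penalized, multiscale approach is more involved than a snippet allows, but a baseline for the same Poisson inverse problem is the classical EM (Richardson–Lucy) iteration for the unpenalized likelihood, whose multiplicative updates keep the intensity nonnegative automatically. Dimensions and intensities below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse nonnegative intensity f observed as counts y ~ Poisson(A f)
n, k, m = 100, 5, 150
f = np.zeros(n)
f[rng.choice(n, k, replace=False)] = rng.uniform(5.0, 10.0, k)
A = rng.uniform(0.0, 1.0, (m, n))
y = rng.poisson(A @ f).astype(float)

# EM / Richardson-Lucy iterations for the Poisson log-likelihood:
#   f <- f * (A^T (y / (A f))) / (A^T 1)
f_hat = np.ones(n)
col = A.sum(axis=0)   # A^T 1
for _ in range(500):
    Af = A @ f_hat + 1e-12        # guard against division by zero
    f_hat *= (A.T @ (y / Af)) / col

fit0 = np.linalg.norm(y)              # residual of the all-zero guess
fit = np.linalg.norm(A @ f_hat - y)   # residual after the EM iterations
```

The paper's contribution, per the abstract, is to handle the sparsity penalty and the underdetermined case efficiently, which this plain maximum-likelihood baseline does not address.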
Matched source-channel communication for field estimation in wireless sensor networks
 Proc. the Fourth Int. Symposium on Information Processing in Sensor Networks
, 2005
"... Abstract — Sensing, processing and communication must be jointly optimized for efficient operation of resourcelimited wireless sensor networks. We propose a novel sourcechannel matching approach for distributed field estimation that naturally integrates these basic operations and facilitates a uni ..."
Abstract

Cited by 40 (11 self)
Abstract — Sensing, processing and communication must be jointly optimized for efficient operation of resource-limited wireless sensor networks. We propose a novel source-channel matching approach for distributed field estimation that naturally integrates these basic operations and facilitates a unified analysis of the impact of key parameters (number of nodes, power, field complexity) on estimation accuracy. At the heart of our approach is a distributed source-channel communication architecture that matches the spatial scale of field coherence with the spatial scale of node synchronization for phase-coherent communication: the sensor field is uniformly partitioned into multiple cells and the nodes in each cell coherently communicate simple statistics of their measurements to the destination via a dedicated noisy multiple access channel (MAC). Essentially, the optimal field estimate in each cell is implicitly computed at the destination via the coherent spatial averaging inherent in the MAC, resulting in optimal power-distortion scaling with the number
Fast multiresolution photon-limited image reconstruction
 in Proc. IEEE Int. Sym. Biomedical Imaging — ISBI ’04
, 2004
"... The techniques described in this paper allow multiscale photonlimited image reconstruction methods to be implemented with significantly less computational complexity than previously possible. Methods such as multiscale Haar estimation, wedgelets, and platelets are all promising techniques in the co ..."
Abstract

Cited by 19 (8 self)
The techniques described in this paper allow multiscale photon-limited image reconstruction methods to be implemented with significantly less computational complexity than previously possible. Methods such as multiscale Haar estimation, wedgelets, and platelets are all promising techniques in the context of Poisson data, but the computational burden they impose makes them impractical for many applications which involve iterative algorithms, such as deblurring and tomographic reconstruction. With the advent of the proposed implementation techniques, hereditary translation-invariant Haar wavelet-based estimates can be calculated in O(N log N) operations and wedgelet and platelet estimates can be computed in O(N^{7/6}) operations, where N is the number of pixels; these complexities are comparable to those of standard wavelet denoising (O(N)) and translation-invariant wavelet denoising (O(N log N)). Fast translation-invariant Haar denoising for Poisson data is accomplished by deriving the relationship between maximum penalized likelihood tree pruning decisions and the undecimated wavelet transform coefficients. Fast wedgelet and platelet methods are accomplished with a coarse-to-fine technique which detects possible boundary locations before performing wedgelet or platelet fits.
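The translation-invariant Haar denoising this abstract speeds up can be illustrated in miniature by cycle spinning: denoise shifted copies of the data and average the unshifted results. This toy uses a one-level Haar transform with plain soft thresholding on Poisson counts, a crude stand-in for the paper's penalized-likelihood tree pruning; signal and threshold are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant intensity observed through Poisson counts
n = 256
f = np.where(np.arange(n) < n // 2, 20.0, 5.0)
y = rng.poisson(f).astype(float)

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail band, invert."""
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    det = np.sign(det) * np.maximum(np.abs(det) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

# Cycle spinning: for a one-level Haar transform only the two even/odd
# pair alignments differ, so averaging those two shifts already yields
# the translation-invariant estimate
est = np.mean([np.roll(haar_denoise(np.roll(y, s), 3.0), -s)
               for s in range(2)], axis=0)

mse_noisy = np.mean((y - f) ** 2)
mse_est = np.mean((est - f) ** 2)
```

A multi-level transform would denoise far better; the point of the toy is only the shift-and-average structure whose naive cost the paper's undecimated-transform derivation avoids.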