Results 11–20 of 263
Corrupted Sensing: Novel Guarantees for Separating Structured Signals.
 IEEE Transactions on Information Theory, 2014
Cited by 14 (0 self)
Abstract: We study the problem of corrupted sensing, a generalization of compressed sensing in which one aims to recover a signal from a collection of corrupted or unreliable measurements. While an arbitrary signal cannot be recovered in the face of arbitrary corruption, tractable recovery is possible when both signal and corruption are suitably structured. We quantify the relationship between signal recovery and two geometric measures of structure, the Gaussian complexity of a tangent cone and the Gaussian distance to a subdifferential. We take a convex programming approach to disentangling signal and corruption, analyzing both penalized programs that trade off between signal and corruption complexity, and constrained programs that bound the complexity of signal or corruption when prior information is available. In each case, we provide conditions for exact signal recovery from structured corruption and stable signal recovery from structured corruption with added unstructured noise. Our simulations demonstrate close agreement between our theoretical recovery bounds and the sharp phase transitions observed in practice. In addition, we provide new interpretable bounds for the Gaussian complexity of sparse vectors, block-sparse vectors, and low-rank matrices, which lead to sharper guarantees of recovery when combined with our results and those in the literature.
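The penalized programs described in this abstract can be sketched numerically. Below is a minimal, illustrative proximal-gradient (ISTA) solver for one such trade-off program, min over (x, v) of ½‖y − Φx − v‖² + λx‖x‖₁ + λv‖v‖₁, assuming a sparse signal and sparse corruption; the function names and regularization weights are our own illustration, not the paper's.

```python
import numpy as np

def soft_threshold(u, t):
    """Proximal operator of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def corrupted_sensing_ista(Phi, y, lam_x, lam_v, n_iter=1000):
    """ISTA for min_{x,v} 0.5||y - Phi x - v||^2 + lam_x||x||_1 + lam_v||v||_1."""
    n, d = Phi.shape
    A = np.hstack([Phi, np.eye(n)])        # stacked operator [Phi, I] acting on [x; v]
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    thresh = np.concatenate([np.full(d, lam_x), np.full(n, lam_v)]) / L
    z = np.zeros(d + n)                    # z = [x; v]
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)           # gradient of the quadratic fit term
        z = soft_threshold(z - grad / L, thresh)
    return z[:d], z[d:]                    # signal estimate, corruption estimate
```

The paper's phase-transition analysis predicts when such a program succeeds; the sketch above only demonstrates the mechanics of jointly soft-thresholding the signal and corruption blocks.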
Anisotropic total variation regularized L1-approximation and denoising/deblurring of 2D bar codes
 Inverse Problems and Imaging
Cited by 12 (5 self)
We consider variations of the Rudin-Osher-Fatemi functional which are particularly well-suited to denoising and deblurring of 2D bar codes. These functionals consist of an anisotropic total variation favoring rectangles and a fidelity term which measures the L1 distance to the signal, both with and without the presence of a deconvolution operator. Based upon the existence of a certain associated vector field, we find necessary and sufficient conditions for a function to be a minimizer. We apply these results to 2D bar codes to find explicit regimes, in terms of the fidelity parameter and the smallest length scale of the bar codes, for which the perfect bar code is attained via minimization of the functionals. Via a discretization reformulated as a linear program, we perform numerical experiments for all functionals demonstrating their denoising and deblurring capabilities. Key words: anisotropic total variation, L1-approximation, 2D bar code, denoising, deblurring. MSC2010: 49N45, 94A08.
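The functional at the heart of this abstract is simple to write down. A minimal numpy sketch of the (denoising, no-deconvolution) energy E(u) = TV_aniso(u) + λ‖u − f‖₁, with illustrative helper names of our own:

```python
import numpy as np

def anisotropic_tv(u):
    """Anisotropic total variation: l1 norm of horizontal and vertical differences."""
    return np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()

def energy(u, f, lam):
    """The anisotropic-TV / L1 functional E(u) = TV(u) + lam * ||u - f||_1."""
    return anisotropic_tv(u) + lam * np.abs(u - f).sum()
```

In a toy setting with a clean two-stripe bar code and a few isolated flipped pixels as the noisy signal f, the clean code has strictly lower energy than f itself for small enough λ, mirroring the paper's regime in which the perfect bar code is the minimizer.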
Inferring the root cause in road traffic anomalies
 In Proceedings of the 2012 IEEE International Conference on Data Mining
Cited by 11 (2 self)
Abstract: We propose a novel two-step mining and optimization framework for inferring the root cause of anomalies that appear in road traffic data. We model road traffic as a time-dependent flow on a network formed by partitioning a city into regions bounded by major roads. In the first step we identify link anomalies based on their deviation from their historical traffic profile. However, link anomalies on their own shed very little light on what caused them to be anomalous. In the second step we take a generative approach by modeling the flow in the network in terms of the origin-destination (OD) matrix, which physically relates the latent flow between origin and destination to the observable flow on the links. The key insight is that instead of using all of the link traffic as the observable vector, we use only the link anomaly vector. By solving an L1 inverse problem we infer the routes (the origin-destination pairs) which gave rise to the link anomalies. Experiments on a very large GPS data set consisting of nearly eight hundred million data points demonstrate that we can discover routes which clearly explain the appearance of link anomalies. The use of optimization techniques to explain observable anomalies in a generative fashion is, to the best of our knowledge, entirely novel.
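The L1 inverse problem in the second step is a basis-pursuit problem: find the sparsest (in the ℓ1 sense) route weights whose induced link flows match the observed link anomalies. A toy sketch using SciPy's LP solver, with the standard split w = p − q; the matrix and function names are our own, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||w||_1 subject to A w = b, via the LP split w = p - q with p, q >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective sum(p) + sum(q) = ||w||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    return res.x[:n] - res.x[n:]
```

Here A would be the (links × routes) incidence matrix (entry 1 if the route uses the link) and b the link-anomaly vector. For example, with two links and three routes where route 3 uses both links, the anomaly vector b = (1, 1) is explained most parsimoniously by route 3 alone.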
Design of Optimal Sparse Interconnection Graphs for Synchronization of Oscillator Networks
 IEEE Transactions on Automatic Control, 2014
Cited by 10 (3 self)
We study the optimal design of a conductance network as a means for synchronizing a given set of oscillators. Synchronization is achieved when all oscillator voltages reach consensus, and performance is quantified by the mean-square deviation from the consensus value. We formulate optimization problems that address the trade-off between synchronization performance and the number and strength of oscillator couplings. We promote the sparsity of the coupling network by penalizing the number of interconnection links. For identical oscillators, we establish convexity of the optimization problem and demonstrate that the design problem can be formulated as a semidefinite program. Finally, for special classes of oscillator networks we derive explicit analytical expressions for the optimal conductance values.
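The performance/sparsity trade-off can be made concrete: for consensus networks, the mean-square deviation from consensus is proportional to the trace of the pseudoinverse of the weighted graph Laplacian. A small sketch (our own illustration of the metric, not the paper's SDP formulation):

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian from (i, j, conductance) coupling triples."""
    L = np.zeros((n, n))
    for i, j, g in edges:
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    return L

def consensus_deviation(L):
    """Mean-square deviation from consensus, proportional to trace(pinv(L))."""
    return float(np.trace(np.linalg.pinv(L)))
```

Comparing a complete graph on four nodes against a path with unit conductances shows why the design problem is a trade-off: the complete graph has many more links but a much lower deviation, and the sparsity penalty prices those extra links.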
Compressive depth map acquisition using a single photon-counting detector: Parametric signal processing meets sparsity
 In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012
Cited by 9 (6 self)
Active range acquisition systems such as light detection and ranging (LIDAR) and time-of-flight (TOF) cameras achieve high depth resolution but suffer from poor spatial resolution. In this paper we introduce a new range acquisition architecture that does not rely on scene raster scanning as in LIDAR or on a two-dimensional array of sensors as used in TOF cameras. Instead, we achieve spatial resolution through patterned sensing of the scene using a digital micromirror device (DMD) array. Our depth map reconstruction uses parametric signal modeling to recover the set of distinct depth ranges present in the scene. Then, using a convex program that exploits the sparsity of the Laplacian of the depth map, we recover the spatial content at the estimated depth ranges. In our experiments we acquired 64×64-pixel depth maps of fronto-parallel scenes at ranges up to 2.1 m using a pulsed laser, a DMD array, and a single photon-counting detector. We also demonstrated imaging in the presence of unknown partially transmissive occluders. The prototype and results provide promising directions for non-scanning, low-complexity range acquisition devices for various computer vision applications.
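The structural assumption the convex program exploits is easy to visualize: the discrete Laplacian of a piecewise-constant (fronto-parallel) depth map is nonzero only along object boundaries, so it is very sparse. A small sketch with an illustrative synthetic scene (the 64×64 size and depth values echo the experiment; everything else is our own example):

```python
import numpy as np

def discrete_laplacian(d):
    """Five-point discrete Laplacian of a 2D depth map (replicated borders)."""
    p = np.pad(d, 1, mode='edge')
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * d

depth = np.full((64, 64), 1.0)       # fronto-parallel background at 1.0 m
depth[16:48, 16:48] = 2.1            # one rectangular object at 2.1 m
lap = discrete_laplacian(depth)
sparsity = float(np.mean(np.abs(lap) > 1e-9))   # fraction of "active" pixels
```

Only the pixels adjacent to the depth jump are active, a few percent of the image, which is precisely the sparsity an ℓ1-regularized recovery program can exploit.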
Robust regression through the Huber’s criterion and adaptive lasso penalty
 Electron. J. Stat.
Cited by 9 (1 self)
Abstract: Huber’s criterion is a useful method for robust regression. The adaptive least absolute shrinkage and selection operator (lasso) is a popular technique for simultaneous estimation and variable selection, and the adaptive weights in the adaptive lasso allow it to enjoy the oracle properties. In this paper we propose to combine Huber’s criterion with an adaptive lasso penalty. This regression technique is resistant to heavy-tailed errors and outliers in the response. Furthermore, we show that the estimator associated with this procedure enjoys the oracle properties. The approach is compared with the LAD-lasso, based on least absolute deviation combined with the adaptive lasso. Extensive simulation studies demonstrate satisfactory finite-sample performance of the procedure. A real example is analyzed for illustration purposes.
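A minimal sketch of the robustness half of this proposal: minimizing Huber's criterion by iteratively reweighted least squares (IRLS), which downweights large residuals instead of squaring them. This is a generic IRLS sketch, not the paper's penalized estimator; the adaptive lasso part would add weights proportional to 1/|β̂| on an ℓ1 penalty.

```python
import numpy as np

def huber_regression(X, y, delta=1.345, n_iter=50):
    """Huber's criterion minimized by iteratively reweighted least squares (IRLS)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust MAD scale
        w = np.minimum(1.0, delta / (np.abs(r) / s + 1e-12))      # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```

On data with a few gross outliers in the response, the Huber fit stays near the true coefficients while ordinary least squares is pulled away, which is the resistance to heavy-tailed errors the abstract refers to.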
Improved total variation-type regularization using higher-order edge detectors
 SIAM Journal on Imaging Sciences
Cited by 9 (1 self)
Abstract: We present a novel deconvolution approach to accurately restore piecewise smooth signals from blurred data. The first stage uses Higher Order Total Variation restorations to obtain an estimate of the location of jump discontinuities from the blurred data. In the second stage the estimated jump locations are used to determine the local orders of a Variable Order Total Variation restoration. The method replaces the first-order derivative approximation used in standard Total Variation by a variable-order derivative operator. Smooth segments as well as jump discontinuities are restored, while the staircase effect typical of standard first-order Total Variation regularization is avoided. Compared to first-order Total Variation, signal restorations are more accurate representations of the true signal, as measured in a relative ℓ2 norm. The method can also be used to obtain an accurate estimate of the locations and sizes of the true jump discontinuities. The approach is independent of the algorithm used for the standard Total Variation problem and is, consequently, readily incorporated in existing Total Variation restoration codes.
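Why higher-order detectors help is visible in one line of numpy: the order-k total variation is the ℓ1 norm of the k-th finite difference. A linear ramp has nonzero first-order TV (so first-order regularization flattens it into a staircase) but essentially zero second-order TV, while a jump is charged by both orders. A minimal sketch, with our own function name:

```python
import numpy as np

def tv(signal, order=1):
    """Order-k total variation: l1 norm of the k-th finite difference."""
    return np.abs(np.diff(signal, n=order)).sum()

ramp = np.linspace(0.0, 1.0, 11)                    # smooth linear segment
step = np.concatenate([np.zeros(5), np.ones(5)])    # jump discontinuity
```

This is the intuition behind the two-stage method: higher-order differences respond to jumps but not to smooth trends, so they can locate discontinuities, and assigning a higher local derivative order on smooth segments avoids staircasing there.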
Transmit signal and bandwidth optimization in multiple-antenna relay channels
 IEEE Trans. on Commun., 2011
Multistage stochastic programming: A scenario tree based approach to planning under uncertainty
 Applications in Artificial Intelligence: Concepts and Solutions, Chapter 6, 2011
Cited by 8 (6 self)
In this chapter, we present the multistage stochastic programming framework for sequential decision making under uncertainty and stress its differences with Markov Decision Processes. We describe the main approximation technique used for solving problems formulated in the multistage stochastic programming framework, which is based on a discretization of the disturbance space. One issue with this approach is that the discretization scheme leads in practice to ill-posed problems: the complexity of the numerical optimization algorithms used for computing the decisions restricts the number of samples and optimization variables that one can use for approximating expectations, and therefore makes the numerical solutions very sensitive to the parameters of the discretization. As the framework is weak in the absence of efficient tools for evaluating and eventually selecting competing approximate solutions, we show how one can extend it using machine-learning-based techniques, so as to yield a sound and generic method to solve approximately a large class of multistage decision problems under uncertainty. The framework and solution techniques presented in the chapter are explained and illustrated on several examples. Along the way, we describe notions from decision theory that are relevant to sequential decision making under uncertainty in general.
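The scenario-tree idea reduces, in its smallest instance, to enumerating first-stage decisions and averaging the best recourse over the discretized scenarios. A minimal two-stage sketch (our own toy newsvendor-style instance, not from the chapter):

```python
def solve_two_stage(first_stage, scenarios, recourse_cost):
    """Enumerate a tiny two-stage scenario tree: choose the first-stage decision x
    minimizing its immediate cost plus the probability-weighted recourse cost."""
    best_x, best_val = None, float('inf')
    for x, cost_x in first_stage:
        expected = cost_x + sum(p * recourse_cost(x, s) for s, p in scenarios)
        if expected < best_val:
            best_x, best_val = x, expected
    return best_x, best_val
```

With unit ordering cost, a shortage penalty of 3 per unit, and demand 1 or 3 with equal probability, ordering 3 units is optimal. The chapter's point is that realistic multistage trees blow up combinatorially, which is what forces coarse discretizations and motivates the machine-learning extensions.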
Distortion Minimization in Gaussian Layered Broadcast Coding With Successive Refinement, 2009
Cited by 8 (1 self)
A transmitter without channel state information wishes to send a delay-limited Gaussian source over a slowly fading channel. The source is coded in superimposed layers, with each layer successively refining the description in the previous one. The receiver decodes the layers that are supported by the channel realization and reconstructs the source up to a distortion. The expected distortion is minimized by optimally allocating the transmit power among the source layers. For two source layers, the allocation is optimal when power is first assigned to the higher layer up to a power ceiling that depends only on the channel fading distribution; all remaining power, if any, is allocated to the lower layer. For convex distortion cost functions with convex constraints, the minimization is formulated as a convex optimization problem. In the limit of a continuum of infinitely many layers, the minimum expected distortion is given by the solution to a set of linear differential equations in terms of the density of the fading distribution. As the number of channel uses per source symbol tends to zero, the power distribution that minimizes expected distortion converges to the one that maximizes expected capacity.
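The two-layer trade-off can be illustrated with a toy two-state fading model and the Gaussian distortion-rate function D(R) = 2^(−2R): the weak channel state decodes only the base layer (treating the refinement layer as noise), the strong state decodes both. All gains, powers, and probabilities below are illustrative assumptions, not values from the paper, and the grid search stands in for the paper's closed-form allocation.

```python
import numpy as np

def expected_distortion(split, total_power=10.0, gains=(0.2, 1.0), probs=(0.5, 0.5)):
    """Two superposition layers over a two-state fading channel: the weak state
    decodes layer 1 only (layer 2 seen as noise); the strong state decodes both."""
    p1, p2 = split * total_power, (1.0 - split) * total_power
    h_lo, h_hi = gains
    r1 = 0.5 * np.log2(1.0 + h_lo * p1 / (h_lo * p2 + 1.0))  # base-layer rate
    r2 = 0.5 * np.log2(1.0 + h_hi * p2)                      # refinement-layer rate
    d_weak, d_strong = 2.0 ** (-2 * r1), 2.0 ** (-2 * (r1 + r2))
    return probs[0] * d_weak + probs[1] * d_strong           # Gaussian D(R) = 2^{-2R}

splits = np.linspace(0.0, 1.0, 101)
best = min(splits, key=expected_distortion)   # grid search over the power split
```

In this toy instance the optimal split is strictly interior: putting all power in either layer is beaten by sharing it, which is the qualitative behavior the paper's power-ceiling rule makes precise.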