A Probabilistic and RIPless Theory of Compressed Sensing
2010
Cited by 95 (3 self)
Abstract:
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) — they make use of a much weaker notion — or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
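The recovery principle described above can be illustrated numerically. The following is a minimal sketch (not the paper's construction): a sparse signal is measured with a random Gaussian sensing matrix, one admissible distribution F, and recovered by l1 minimization cast as a linear program. All sizes and names here are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 50, 25, 3                 # ambient dimension, measurements, sparsity

# An s-sparse signal
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Gaussian sensing vectors drawn i.i.d. (one choice of distribution F)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                           # noiseless measurements for clarity

# l1 minimization: min ||x||_1 s.t. Ax = y, via the split x = u - v, u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

# The largest recovered entries sit on the true support
print(sorted(np.argsort(np.abs(x_hat))[-s:]) == sorted(support))
```

With m well above the s log n threshold the LP recovers x exactly in the noiseless case; with noisy measurements one would relax the equality constraint accordingly.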
Graph sketches: sparsification, spanners, and subgraphs
In PODS, 2012
Cited by 46 (10 self)
Abstract:
When processing massive data sets, a core task is to construct synopses of the data. To be useful, a synopsis data structure should be easy to construct while also yielding good approximations of the relevant properties of the data set. A particularly useful class of synopses is sketches, i.e., those based on linear projections of the data. These are applicable in many models, including various parallel, stream, and compressed sensing settings. A rich body of analytic and empirical work exists for sketching numerical data such as the frequencies of a set of entities. Our work investigates graph sketching, where the graphs of interest encode the relationships between these entities. The main challenge is to capture this richer structure and build the necessary synopses with only linear measurements. In this paper we consider properties of graphs including the size of the cuts, the distances between nodes, and the prevalence of …
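A small example of why linear measurements suffice for cut sizes: encode each node as a signed edge-incidence vector; summing the vectors of a node set S cancels internal edges, leaving exactly the cut edges. Since the sum is linear, any linear sketch of the node vectors supports this query. The graph and names below are illustrative, not from the paper.

```python
import numpy as np
from itertools import combinations

n = 5
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
pairs = list(combinations(range(n), 2))     # all potential edges {u, v}, u < v
idx = {e: i for i, e in enumerate(pairs)}

# Signed incidence vectors: a_v[{u,v}] = +1 for the smaller endpoint, -1 for the larger
a = np.zeros((n, len(pairs)))
for (u, v) in edges:
    a[u, idx[(u, v)]] = +1.0
    a[v, idx[(u, v)]] = -1.0

# Edges internal to S cancel, so the sum is supported exactly on the cut (S, V \ S)
S = [0, 1]
cut_vec = a[S].sum(axis=0)
print(int(np.count_nonzero(cut_vec)))       # number of edges crossing the cut
```

Here the cut ({0,1}, {2,3,4}) contains the two edges (0,2) and (1,2), while the internal edge (0,1) cancels.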
Simple and Practical Algorithm for Sparse Fourier Transform
Cited by 36 (9 self)
Abstract:
We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating "large" coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in "one shot", in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and in practice.
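To make the problem statement concrete, here is the naive baseline the paper competes against: compute the full FFT and keep the k largest-magnitude coefficients. This is not the paper's sub-linear algorithm; the signal and sizes are illustrative.

```python
import numpy as np

n, k = 1024, 3
freqs = [17, 256, 700]                  # the k dominant frequencies (illustrative)
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t / n) for f in freqs)
x = x + 0.01 * np.random.default_rng(1).standard_normal(n)   # small noise

# Naive baseline: full FFT in O(n log n), then select the k largest coefficients.
xf = np.fft.fft(x)
topk = np.sort(np.argsort(np.abs(xf))[-k:])
print(topk.tolist())                    # recovers the k dominant frequencies
```

The sparse Fourier transform question is whether these k coefficients can be found substantially faster than this O(n log n) baseline when k is much smaller than n.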
Tight bounds for lp samplers, finding duplicates in streams, and related problems
In PODS, 2011
Cited by 18 (0 self)
Abstract:
In this paper, we present near-optimal space bounds for Lp samplers. Given a stream of updates (additions and subtractions) to the coordinates of an underlying vector x ∈ R^n, a perfect Lp sampler outputs the i-th coordinate with probability |x_i|^p / ‖x‖_p^p. In SODA 2010, Monemizadeh and Woodruff showed polylog space upper bounds for approximate Lp samplers and demonstrated various applications of them. Very recently, Andoni, Krauthgamer and Onak improved the upper bounds and gave an O(ε^{-p} log^3 n) space, ε relative error and constant failure rate Lp sampler for p ∈ [1, 2]. In this work, we give another such algorithm requiring only O(ε^{-p} log^2 n) space for p ∈ (1, 2). For p ∈ (0, 1), our space bound is O(ε^{-1} log^2 n), while for the p = 1 case we have an O(log(1/ε) ε^{-1} log^2 n) space algorithm. We also give an O(log^2 n) bits zero relative error L0 sampler, improving the O(log^3 n) bits algorithm due to Frahling, Indyk and Sohler. As an application of our samplers, we give better upper bounds for the problem of finding duplicates in data streams. In the case where the length of the stream is longer than the alphabet size, L1 sampling gives us an O(log^2 n) space algorithm, thus improving the previous O(log^3 n) bound due to Gopalan and Radhakrishnan. In the second part of our work, we prove an Ω(log^2 n) lower bound for sampling from {0, ±1} vectors (in this special case, the parameter p is not relevant for Lp sampling). This matches the space of our sampling algorithms for constant ε > 0. We also prove tight space lower bounds for the finding-duplicates and heavy-hitters problems. We obtain these lower bounds using reductions from the communication complexity problem augmented indexing.
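As a point of reference for the definition, the following shows the target distribution of a perfect Lp sampler, computed offline with full access to x; the streaming algorithms above achieve (approximately) this distribution in small space. The vector and sample count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.array([4.0, -2.0, 1.0, 0.0, -1.0])
p = 1.0                                 # L1 sampling

# A perfect Lp sampler outputs index i with probability |x_i|^p / ||x||_p^p
w = np.abs(x) ** p
probs = w / w.sum()                     # here: [0.5, 0.25, 0.125, 0.0, 0.125]

draws = rng.choice(len(x), size=100_000, p=probs)
emp = np.bincount(draws, minlength=len(x)) / draws.size
print(np.round(emp, 2))                 # empirical frequencies match probs
```

Note that zero coordinates are never sampled, which is what makes L1 sampling useful for finding duplicates: only items with nonzero net count can be reported.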
Compressed Matrix Multiplication
Cited by 16 (4 self)
Abstract:
Motivated by the problems of computing sample covariance matrices, and of transforming a collection of vectors to a basis where they are sparse, we present a simple algorithm that computes an approximation of the product of two n-by-n real matrices A and B. Let ‖AB‖_F denote the Frobenius norm of AB, and b be a parameter determining the time/accuracy trade-off. Given 2-wise independent hash functions h1, h2: [n] → [b], and s1, s2: [n] → {−1, +1}, the algorithm works by first "compressing" the matrix product into the polynomial

p(x) = Σ_{k=1}^{n} ( Σ_{i=1}^{n} A_{ik} s1(i) x^{h1(i)} ) ( Σ_{j=1}^{n} B_{kj} s2(j) x^{h2(j)} ).

Using FFT for polynomial multiplication, we can compute c_0, ..., c_{b−1} such that Σ_i c_i x^i = (p(x) mod x^b) + (p(x) div x^b) in time Õ(n^2 + nb). An unbiased estimator of (AB)_{ij} with variance at most ‖AB‖_F^2 / b can then be computed as C_{ij} = s1(i) s2(j) c_{(h1(i)+h2(j)) mod b}. Our approach also leads to an algorithm for computing AB exactly, w.h.p., in time Õ(N + nb) in the case where A and B have at most N nonzero entries, and AB has at most b nonzero entries. Also, we use error-correcting codes in a novel way to recover significant entries of AB in near-linear time.
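The compression step and estimator above can be sketched directly in code. This is a simplified illustration, assuming fully random hash functions in place of the 2-wise independent ones the analysis requires; function and variable names are our own.

```python
import numpy as np

def cmm_estimate(A, B, b, rng):
    """One compressed-product estimate of AB.
    Fully random hashes stand in for 2-wise independent h1, h2, s1, s2."""
    n = A.shape[0]
    h1, h2 = rng.integers(0, b, n), rng.integers(0, b, n)
    s1 = rng.choice([-1.0, 1.0], n)
    s2 = rng.choice([-1.0, 1.0], n)

    # p(x) = sum_k (sum_i A_ik s1(i) x^h1(i)) (sum_j B_kj s2(j) x^h2(j)),
    # accumulated in the frequency domain with length-2b FFTs.
    P = np.zeros(2 * b, dtype=complex)
    for k in range(n):
        pa = np.zeros(2 * b)
        pb = np.zeros(2 * b)
        np.add.at(pa, h1, s1 * A[:, k])
        np.add.at(pb, h2, s2 * B[k, :])
        P += np.fft.fft(pa) * np.fft.fft(pb)
    p = np.fft.ifft(P).real
    c = p[:b] + p[b:]                    # (p mod x^b) + (p div x^b)

    # Unbiased estimator of (AB)_ij
    return np.array([[s1[i] * s2[j] * c[(h1[i] + h2[j]) % b]
                      for j in range(n)] for i in range(n)])

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
est = np.mean([cmm_estimate(A, B, b=32, rng=rng) for _ in range(500)], axis=0)
print(np.linalg.norm(est - A @ B) / np.linalg.norm(A @ B))   # typically small
```

Averaging independent estimates, as the usage above does, shrinks the per-entry variance from ‖AB‖_F^2 / b by a further factor equal to the number of repetitions; the paper instead controls variance through the choice of b.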
Sparse Signal Recovery and Acquisition with Graphical Models  A review of a broad set of sparse models, analysis tools, and recovery algorithms within the graphical models formalism
2010
Cited by 14 (1 self)
Abstract:
Many applications in digital signal processing, machine learning, and communications feature a linear regression problem in which unknown data points, hidden variables, or code words are projected into a lower-dimensional space via y = Φx + n. (1) In the signal processing context, we refer to x ∈ R^N as the signal, y ∈ R^M as the measurements with M < N, Φ ∈ R^{M×N} as the measurement matrix, and n ∈ R^M as the noise. The measurement matrix Φ is a matrix with random entries in data streaming, an overcomplete dictionary of features in sparse Bayesian learning, or a code matrix in communications [1]–[3]. Extracting x from y in (1) is ill-posed in general since M < N and the measurement matrix Φ hence has a nontrivial null space; given any vector v in this null space, x + v defines a solution that produces the same observations y. Additional information is therefore necessary to distinguish the true x among the infinitely many possible solutions [1], [2], [4], [5]. It is now well known that sparse …
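The null-space ambiguity in model (1) is easy to demonstrate numerically. A minimal sketch, with illustrative sizes: any vector v in the null space of Φ yields a candidate x + v producing identical measurements.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(4)
M, N = 10, 30                        # M < N: fewer measurements than unknowns
F = rng.standard_normal((M, N))      # measurement matrix (Phi in the text)
x = rng.standard_normal(N)
y = F @ x                            # y = Phi x, noiseless for clarity

V = null_space(F)                    # orthonormal basis, dimension N - M = 20
v = V @ rng.standard_normal(V.shape[1])

print(np.allclose(F @ (x + v), y))   # True: x + v explains y equally well
```

This is exactly why side information such as sparsity of x is needed: it selects one candidate out of the 20-dimensional affine family of solutions.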
On the Power of Adaptivity in Sparse Recovery
2011
Cited by 14 (4 self)
Abstract:
The goal of (stable) sparse recovery is to recover a k-sparse approximation x∗ of a vector x from linear measurements of x. Specifically, the goal is to recover x∗ such that ‖x − x∗‖ …
Efficient Sketches for the Set Query Problem
Cited by 12 (2 self)
Abstract:
We develop an algorithm for estimating the values of a vector x ∈ R^n over a support S of size k from a randomized sparse binary linear sketch Ax of size O(k). Given Ax and S, we can recover x′ with ‖x′ − x_S‖_2 ≤ ε ‖x − x_S‖ …
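The flavor of the set query problem can be conveyed with a count-sketch-style structure: hash coordinates into buckets with random signs, then read off each i ∈ S as a median over repetitions. This is a simplified illustration, not the paper's O(k)-size construction, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 200, 5
d, B = 5, 200                             # repetitions x buckets (illustrative)

S = rng.choice(n, size=k, replace=False)  # the query support
x = np.zeros(n)
x[S] = rng.standard_normal(k)             # exactly k-sparse here, for clarity

h = rng.integers(0, B, size=(d, n))       # bucket hashes
s = rng.choice([-1.0, 1.0], size=(d, n))  # sign hashes

# The sketch is d*B linear measurements of x
sketch = np.zeros((d, B))
for r in range(d):
    np.add.at(sketch[r], h[r], s[r] * x)

# Set query: estimate x_i for i in S as a median over the d repetitions
x_hat = np.zeros(n)
for i in S:
    x_hat[i] = np.median(s[:, i] * sketch[np.arange(d), h[:, i]])

print(np.linalg.norm(x_hat[S] - x[S]))    # near zero when the tail x - x_S is zero
```

When x has mass outside S, the estimation error is driven by the tail ‖x − x_S‖, matching the form of the guarantee in the abstract.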