Results 1 - 9 of 9
Theoretical and experimental analysis of a randomized algorithm for sparse Fourier transform analysis
J. Comp. Phys., 2005
Cited by 14 (3 self)
Abstract. We analyze a family of sublinear algorithms that find a near-optimal B-term Sparse Representation R for a given discrete signal S of length N, in time and space poly(B, log(N), log(δ), ε^-1). These representations are expansions with respect to a particular basis or a family of bases; examples are wavelet bases, wavelet packets or Fourier bases. We shall use the acronym RAℓSTA (Randomized Algorithm for Sparse Transform Analysis) for this family of algorithms. We here restrict ourselves to the Fourier case and thus RAℓSFA (Randomized Algorithm for Sparse Fourier Analysis), where the poly(log(N)) time of RAℓSFA should be compared with the superlinear Ω(N log N) time requirement of the Fast Fourier Transform (FFT). However, a straightforward implementation of the original RAℓSFA, as presented in the theoretical paper [5], turns out to be very slow in practice. Our main result is a greatly improved and practical RAℓSFA implementation. We introduce several new ideas and techniques to speed up the algorithm; tests on numerical examples show that our implementation is about four thousand times faster than the original RAℓSFA. Both rigorous and heuristic arguments for parameter choices are presented. Our empirically improved RAℓSFA constructs, with probability at least 1 − δ, a near-optimal B-term representation R in time poly(B) · log(N) · log(δ)/ε² such that ‖S − R‖₂ ≤ (1 + ε)‖S − Ropt‖₂. Furthermore, the improved RAℓSFA already beats FFT for not unreasonably large N. We extend the algorithm to higher dimensional cases both theoretically and numerically. The crossover point lies at N ≃ 25,000 in one dimension, and at N ≃ 460 for data on an N × N grid in two dimensions for small B signals where there is no noise.
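To make the notion of a B-term representation concrete, the sketch below computes one by brute force: it pays the full O(N²) cost of a naive DFT and then keeps the B largest-magnitude coefficients. This is the exhaustive baseline, not RAℓSFA itself — the whole point of RAℓSFA is to find a near-optimal such R without ever computing the full transform. Function names here are illustrative, not from the paper.

```python
import cmath

def dft(signal):
    """Naive O(N^2) discrete Fourier transform, normalized by 1/N."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def best_b_term(signal, b):
    """Exhaustive baseline: keep the b largest-magnitude coefficients."""
    coeffs = dft(signal)
    top = sorted(range(len(coeffs)), key=lambda k: abs(coeffs[k]),
                 reverse=True)[:b]
    return {k: coeffs[k] for k in top}

def synthesize(rep, n):
    """Evaluate the sparse representation R at all n sample points."""
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in rep.items())
            for t in range(n)]

# A signal with two active frequencies is captured exactly by a 2-term R.
N = 16
S = [3 * cmath.exp(2j * cmath.pi * 2 * t / N)
     + cmath.exp(2j * cmath.pi * 5 * t / N)
     for t in range(N)]
R = best_b_term(S, 2)
error = max(abs(a - b) for a, b in zip(S, synthesize(R, N)))
```

For an exactly 2-sparse signal like this one, R coincides with Ropt and the reconstruction error is zero up to floating-point noise; RAℓSFA's guarantee bounds the error by (1 + ε)‖S − Ropt‖₂ in the general, noisy case.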
Faithful Representations and Moments of Satisfaction: Probabilistic Methods in Learning and Logic
, 1998
Estimating the Significant Non-Linearities in the Genome Problem-Coding
, 1999
Cited by 1 (0 self)
Substantial insight into the genome problem-coding can be gained if we know the most important Walsh coefficients, that is, the coefficients with large value. The practical use of the Walsh transform, however, is severely limited by its computational cost, even when using the Fast Walsh Transform. Part of this is caused by the fact that the transform computes all Walsh coefficients, not just the most significant ones. Here we discuss the use, in a GA context, of the recently developed KM algorithm, which estimates the most significant Walsh coefficients in a computationally efficient way.
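The cost argument above can be illustrated with a minimal fast Walsh-Hadamard transform: even this "fast" O(N log N) version computes all N coefficients, which is exactly the expense the KM algorithm avoids by estimating only the significant ones. The KM algorithm itself is not shown; this sketch is only the exhaustive baseline it is compared against.

```python
def fwht(values):
    """Fast Walsh-Hadamard transform, O(N log N) for N a power of two.

    Note that it still produces every one of the N coefficients, not
    just the significant ones."""
    a = list(values)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def significant_walsh(values, threshold):
    """Return {index: coefficient} for coefficients of large magnitude."""
    n = len(values)
    return {k: c / n for k, c in enumerate(fwht(values))
            if abs(c / n) >= threshold}

# The parity of bits 0 and 1 (encoded as +/-1) has exactly one nonzero
# Walsh coefficient, at index 3 (the mask selecting those two bits).
f = [1 if ((x & 1) ^ ((x >> 1) & 1)) == 0 else -1 for x in range(8)]
big = significant_walsh(f, 0.5)
```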
Search Techniques for Fourier-Based Learning
Fourier-based learning algorithms rely on being able to efficiently find the large coefficients of a function's spectral representation. In this paper, we introduce and analyze techniques for finding large coefficients. We show how a previously introduced search technique can be generalized from the Boolean case to the real-valued case, and we apply it in branch-and-bound and beam search algorithms that have significant advantages over the best-first algorithm in which the technique was originally introduced.
A Sparse Spectral Method for the Homogenization of Multiscale Problems ∗
, 2006
We develop a new sparse spectral method, in which the Fast Fourier Transform (FFT) is replaced by RAℓSFA (Randomized Algorithm of Sparse Fourier Analysis); this is a sublinear randomized algorithm that takes time O(B log N) to recover a B-term Fourier representation for a signal of length N, where we assume B ≪ N. To illustrate its potential, we consider the parabolic homogenization problem with a characteristic fine scale size ε. For fixed tolerance the sparse method has a computational cost of O(|log ε|) per time step, whereas standard methods cost at least O(ε^-d). We present a theoretical analysis as well as numerical results; they show the advantage of the new method in speed over the traditional spectral methods when ε is very small. We also show some ways to extend the methods to hyperbolic and elliptic problems.
Learning Real-World Problems by Finding Correlated Basis Functions
, 2006
"... by majority vote has been found to be satisfactory. ..."
An Empirical Comparison of Spectral Learning Methods for Classification
Abstract—In this paper, we explore the problem of how to learn spectral (e.g., Fourier) models for classification problems. Specifically, we consider two subproblems of spectral learning: (1) how to select the basis functions that will be included in the model and (2) how to assign coefficients to the selected basis functions. Interestingly, empirical results suggest that the most commonly used approach does not perform as well in practice as other approaches, while a method for assigning coefficients based on finding an optimal linear combination of low-order basis functions usually outperforms other approaches.
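The coefficient-assignment idea described above can be sketched under a strong simplifying assumption: when the +/-1 labels are known on the whole input space, the Walsh basis is orthonormal, so the least-squares-optimal linear combination of low-order basis functions reduces to plain inner products. This is an illustrative toy, not the paper's actual learning setup (which works from samples); all function names are hypothetical.

```python
def walsh(s, x):
    """Walsh basis function chi_s(x) = (-1)^(parity of s & x)."""
    return -1 if bin(s & x).count("1") % 2 else 1

def fit_low_order(labels, n_bits, max_order=1):
    """Project +/-1 labels onto the Walsh functions of order <= max_order.

    Over the full input space the basis is orthonormal, so the optimal
    linear combination in the least-squares sense is given directly by
    inner products with each basis function."""
    n = 1 << n_bits
    basis = [s for s in range(n) if bin(s).count("1") <= max_order]
    return {s: sum(labels[x] * walsh(s, x) for x in range(n)) / n
            for s in basis}

def predict(model, x):
    """Classify by the sign of the fitted spectral model."""
    score = sum(c * walsh(s, x) for s, c in model.items())
    return 1 if score >= 0 else -1

# Majority vote on 3 bits is classified perfectly by its order-1 spectrum,
# even though its exact expansion also has a degree-3 term.
maj = [1 if bin(x).count("1") >= 2 else -1 for x in range(8)]
model = fit_low_order(maj, n_bits=3, max_order=1)
accuracy = sum(predict(model, x) == maj[x] for x in range(8)) / 8
```

The majority example shows why a low-order fit can suffice for classification: the sign of the order-1 projection already agrees with the target everywhere, so the discarded high-order term costs no accuracy.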