Results 1 – 10 of 857
Good Error-Correcting Codes Based on Very Sparse Matrices
, 1999
"... We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The ..."
Cited by 750 (23 self)
but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed
Gradient-based learning applied to document recognition
 Proceedings of the IEEE
, 1998
"... Multilayer neural networks trained with the backpropagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify hi ..."
Cited by 1533 (84 self)
high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed
SURF: Speeded Up Robust Features
 ECCV
"... Abstract. In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be comp ..."
Cited by 897 (12 self)
be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying
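The integral-image trick the SURF snippet relies on is simple to sketch. The following is an illustrative Python version (not the authors' code): after one O(hw) precomputation pass, any axis-aligned box sum, and hence any box-filter convolution response, costs only four table lookups.

```python
def integral_image(img):
    """Summed-area table: I[y][x] holds the sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            I[y + 1][x + 1] = I[y][x + 1] + row
    return I

def box_sum(I, y0, x0, y1, x1):
    """Sum of img[y0..y1-1][x0..x1-1] in O(1) via four lookups."""
    return I[y1][x1] - I[y0][x1] - I[y1][x0] + I[y0][x0]
```

For example, with `img = [[1, 2], [3, 4]]`, `box_sum(integral_image(img), 0, 0, 2, 2)` returns the total 10 without revisiting the pixels.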
An EM Algorithm for Wavelet-Based Image Restoration
, 2002
"... This paper introduces an expectation-maximization (EM) algorithm for image restoration (deconvolution) based on a penalized likelihood formulated in the wavelet domain. Regularization is achieved by promoting a reconstruction with low complexity, expressed in terms of the wavelet coefficients, taking a ..."
Cited by 352 (22 self)
the efficient image representation offered by the discrete wavelet transform (DWT) with the diagonalization of the convolution operator obtained in the Fourier domain. The algorithm alternates between an E-step based on the fast Fourier transform (FFT) and a DWT-based M-step, resulting in an efficient iterative
Practical cone-beam algorithm
 J Opt Soc Am
, 1984
"... A convolution-backprojection formula is deduced for direct reconstruction of a three-dimensional density function from a set of two-dimensional projections. The formula is approximate but has useful properties, including errors that are relatively small in many practical instances and a form that le ..."
Cited by 308 (0 self)
A convolution-backprojection formula is deduced for direct reconstruction of a three-dimensional density function from a set of two-dimensional projections. The formula is approximate but has useful properties, including errors that are relatively small in many practical instances and a form
Convolutional Networks for Images, Speech, and Time-Series
, 1995
"... INTRODUCTION The ability of multilayer backpropagation networks to learn complex, high-dimensional, nonlinear mappings from large collections of examples makes them obvious candidates for image recognition or speech recognition tasks (see PATTERN RECOGNITION AND NEURAL NETWORKS). In the traditional ..."
Cited by 134 (5 self)
). In the traditional model of pattern recognition, a hand-designed feature extractor gathers relevant information from the input and eliminates irrelevant variabilities. A trainable classifier then categorizes the resulting feature vectors (or strings of symbols) into classes. In this scheme, standard, fully
Necklaces, Convolutions, and X + Y
"... We give subquadratic algorithms that, given two necklaces each with n beads at arbitrary positions, compute the optimal rotation of the necklaces to best align the beads. Here alignment is measured according to the ℓp norm of the vector of distances between pairs of beads from opposite necklaces i ..."
Cited by 1 (0 self)
in the best perfect matching. We show surprisingly different results for p = 1, p = 2, and p = ∞. For p = 2, we reduce the problem to standard convolution, while for p = ∞ and p = 1, we reduce the problem to (min, +) convolution and (median, +) convolution. Then we solve the latter two convolution problems
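As a point of reference for the reductions described in the snippet above, here is the naive quadratic (min, +) convolution that the paper's subquadratic algorithms improve upon (an illustrative Python sketch, not the paper's algorithm):

```python
def minplus_convolution(a, b):
    """Naive (min, +) convolution: c[k] = min over i+j=k of a[i] + b[j].

    Runs in O(len(a) * len(b)); the cited paper gives subquadratic
    algorithms for this and the related (median, +) variant.
    """
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c
```

Replacing `min` with `sum`-accumulation and `+` with `*` recovers standard convolution, which is exactly why the p = 2 case of the necklace problem reduces to FFT-friendly machinery while p = 1 and p = ∞ do not.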
On interleaved, differentially encoded convolutional codes
 IEEE Trans. Inform. Theory
, 1999
"... We study a serially interleaved concatenated code construction, where the outer code is a standard convolutional code, and the inner code is a recursive convolutional code of rate 1. Focus is put on the ubiquitous inner differential encoder (used in particular to resolve phase ambiguities), double d ..."
Cited by 17 (1 self)
We study a serially interleaved concatenated code construction, where the outer code is a standard convolutional code, and the inner code is a recursive convolutional code of rate 1. Focus is put on the ubiquitous inner differential encoder (used in particular to resolve phase ambiguities), double
The Fourier-Series Method For Inverting Transforms Of Probability Distributions
, 1991
"... This paper reviews the Fourier-series method for calculating cumulative distribution functions (cdf's) and probability mass functions (pmf's) by numerically inverting characteristic functions, Laplace transforms and generating functions. Some variants of the Fourier-series method are remar ..."
Cited by 211 (52 self)
are remarkably easy to use, requiring programs of less than fifty lines. The Fourier-series method can be interpreted as numerically integrating a standard inversion integral by means of the trapezoidal rule. The same formula is obtained by using the Fourier series of an associated periodic function constructed
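The trapezoidal-rule interpretation mentioned above can indeed be sketched in well under fifty lines. This hypothetical Python example (the function name `pmf_from_pgf` and the truncation point `N` are illustrative choices, not the paper's notation) recovers a Poisson pmf from its probability generating function by evaluating it at the N-th roots of unity and applying an inverse DFT, which is the trapezoidal discretization of the inversion integral:

```python
import cmath
import math

def pmf_from_pgf(G, N):
    """Recover p(0), ..., p(N-1) from a generating function G(z).

    Evaluates G at the N-th roots of unity and applies the inverse DFT;
    this equals trapezoidal-rule integration of the inversion integral.
    Aliasing error is the mass p(k) for k >= N, assumed negligible.
    """
    vals = [G(cmath.exp(2j * math.pi * j / N)) for j in range(N)]
    return [
        sum(vals[j] * cmath.exp(-2j * math.pi * j * k / N)
            for j in range(N)).real / N
        for k in range(N)
    ]

# Illustrative check: Poisson(lam) has pgf G(z) = exp(lam * (z - 1)).
lam = 2.0
p = pmf_from_pgf(lambda z: cmath.exp(lam * (z - 1)), 64)
```

With N = 64 the Poisson(2) tail beyond the truncation point is astronomically small, so `p[k]` matches `exp(-2) * 2**k / k!` to near machine precision.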
Fourier meets Möbius: fast subset convolution
 Proceedings of the 39th Annual ACM Symposium on Theory of Computing
, 2007
"... We present a fast algorithm for the subset convolution problem: given functions f and g defined on the lattice of subsets of an n-element set N, compute their subset convolution f ∗ g, defined for all S ⊆ N by (f ∗ g)(S) = Σ_{T ⊆ S} f(T) g(S \ T), where addition and multiplication is carried out in an a ..."
Cited by 76 (10 self)
convolution over the ordinary sum–product ring can be computed in Õ(2^n log M) time; the notation Õ suppresses polylogarithmic factors. Furthermore, using a standard embedding technique we can compute the subset convolution over the max–sum or min–sum semiring in Õ(2^n M) time. To demonstrate the applicability
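For comparison with the Õ(2^n) algorithm of the paper, the defining formula admits a direct O(3^n) implementation by enumerating submasks of each set. This Python sketch (an illustration over the ordinary sum–product ring, not the paper's fast algorithm) encodes subsets as bitmasks:

```python
def subset_convolution(f, g, n):
    """Naive subset convolution: h[S] = sum over T subset of S of f[T] * g[S \\ T].

    f and g are lists of length 2**n indexed by bitmask. The submask
    enumeration T = (T - 1) & S visits every subset T of S, giving
    O(3^n) total work versus the paper's O~(2^n) via Mobius transforms.
    """
    h = [0] * (1 << n)
    for S in range(1 << n):
        T = S
        while True:
            h[S] += f[T] * g[S ^ T]
            if T == 0:
                break
            T = (T - 1) & S
    return h
```

For n = 2, `subset_convolution([1, 2, 3, 4], [5, 6, 7, 8], 2)` sums f(T)·g(S \ T) over all four submasks of S = {0, 1} to produce its last entry.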