Results 1–10 of 1,100,192
Iterative decoding of binary block and convolutional codes
IEEE Trans. Inform. Theory, 1996
"... Abstract Iterative decoding of twodimensional systematic convolutional codes has been termed “turbo ” (de)coding. Using loglikelihood algebra, we show that any decoder can he used which accepts soft inputsincluding a priori valuesand delivers soft outputs that can he split into three terms: the ..."
Cited by 600 (43 self)
"... is controlled by a stop criterion derived from cross entropy, which results in a minimal number of iterations. Optimal and suboptimal decoders with reduced complexity are presented. Simulation results show that very simple component codes are sufficient; block codes are appropriate for high rates ..."
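The cross-entropy stop criterion can be made concrete with a short sketch. Below is a minimal, hedged rendering in Python: `siso_pass` is a stand-in for one soft-in/soft-out decoder iteration (an assumption, not the paper's API), and the convergence statistic is a relative-entropy comparison of successive bit posteriors rather than the paper's exact expression.

```python
import numpy as np

def bit_posteriors(llr):
    """P(bit = 0) implied by log-likelihood ratios L = log P(0)/P(1)."""
    return 1.0 / (1.0 + np.exp(-llr))

def cross_entropy_shift(llr_prev, llr_cur, eps=1e-12):
    """Relative entropy between the bit posteriors of two successive
    iterations; near zero once the iteration has converged."""
    p = np.clip(bit_posteriors(llr_prev), eps, 1 - eps)
    q = np.clip(bit_posteriors(llr_cur), eps, 1 - eps)
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

def iterate_until_converged(siso_pass, channel_llr, max_iter=18, tol=1e-4):
    """Run soft-in/soft-out passes, stopping early once the soft outputs
    stop moving. `siso_pass(channel_llr, apriori)` must return updated
    soft outputs (illustrative interface)."""
    apriori = np.zeros_like(channel_llr)
    prev = channel_llr
    for _ in range(max_iter):
        out = siso_pass(channel_llr, apriori)
        done = cross_entropy_shift(prev, out) < tol
        apriori, prev = out - channel_llr, out  # simplified extrinsic split
        if done:
            break
    return (prev < 0).astype(int)               # hard decisions
```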
Decoding by Linear Programming
2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Cited by 1400 (17 self)
"... for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant ..."
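The convex program described here is concrete enough to sketch. Assuming the decoder minimizes the ℓ1 norm of the residual, min over g of ||y − Ag||_1, it can be recast as a linear program; the SciPy-based sketch below (solver choice and toy sizes are illustrative) shows exact recovery on a random instance with a sparse error vector.

```python
import numpy as np
from scipy.optimize import linprog

def lp_decode(A, y):
    """Recover f from y = A f + e by solving min_g ||y - A g||_1,
    recast as an LP over (g in R^n, slack t in R^m with |y - Ag| <= t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum(t)
    I = np.eye(m)
    A_ub = np.block([[ A, -I],                      #  A g - t <= y
                     [-A, -I]])                     # -A g - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)] * m,
                  method="highs")
    return res.x[:n]

# Toy instance: random Gaussian A, sparse error e corrupting 10 entries.
rng = np.random.default_rng(0)
m, n = 128, 64
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, 10, replace=False)] = 5 * rng.standard_normal(10)
f_hat = lp_decode(A, A @ f + e)
print(np.max(np.abs(f_hat - f)))   # near machine precision when recovery succeeds
```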
Near Shannon limit error-correcting coding and decoding
1993
"... Abstract This paper deals with a new class of convolutional codes called Turbocodes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The TurboCode encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes and the associated ..."
Cited by 1738 (5 self)
"... and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders. Consider a binary rate R = 1/2 convolutional encoder with constraint length K and memory M = K − 1. The input to the encoder at time k is a bit d_k and the corresponding codeword ..."
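A recursive systematic convolutional (RSC) encoder of the kind described is easy to sketch. The parameters below (constraint length K = 3, generators octal (7, 5)) are illustrative choices, not the paper's; each input bit d_k yields the systematic bit plus one parity bit, giving rate R = 1/2.

```python
def rsc_encode(bits):
    """Sketch of a rate-1/2 recursive systematic convolutional encoder
    with K = 3 (memory M = K - 1 = 2) and generators
    (1 + D + D^2, 1 + D^2), i.e. octal (7, 5). Emits (systematic, parity)
    per input bit."""
    s1 = s2 = 0                       # the M = 2 shift-register cells
    out = []
    for d in bits:
        a = d ^ s1 ^ s2               # recursive feedback taps: 1 + D + D^2
        p = a ^ s2                    # feedforward parity taps:  1 + D^2
        out.append((d, p))            # systematic bit, parity bit
        s1, s2 = a, s1                # shift the register
    return out

print(rsc_encode([1, 0, 1, 1, 0]))   # [(1, 1), (0, 1), (1, 0), (1, 0), (0, 1)]
```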
Fast Effective Rule Induction
1995
"... Many existing rule learning systems are computationally expensive on large noisy datasets. In this paper we evaluate the recentlyproposed rule learning algorithm IREP on a large and diverse collection of benchmark problems. We show that while IREP is extremely efficient, it frequently gives error r ..."
Cited by 1257 (21 self)
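The grow-then-prune structure that makes IREP efficient can be sketched compactly. The version below is a hedged illustration: it grows one conjunctive rule on a growing set and prunes it against a pruning set, but uses plain accuracy in place of IREP's actual gain and pruning metrics.

```python
import random

def accuracy(rule, data):
    """Fraction of (example, label) pairs the conjunctive rule classifies
    correctly: it predicts positive iff every (attr, val) condition matches."""
    correct = sum((all(x.get(a) == v for a, v in rule)) == label
                  for x, label in data)
    return correct / len(data) if data else 0.0

def grow_and_prune_rule(train, grow_frac=2/3):
    """IREP-style sketch: grow a rule greedily on a growing set, then
    delete trailing conditions while pruning-set accuracy does not drop."""
    random.shuffle(train)
    cut = int(len(train) * grow_frac)
    grow, prune = train[:cut], train[cut:]
    rule = []
    candidates = {(a, v) for x, _ in grow for a, v in x.items()}
    while True:  # grow: add the single condition that most improves accuracy
        best = max(candidates - set(rule),
                   key=lambda c: accuracy(rule + [c], grow), default=None)
        if best is None or accuracy(rule + [best], grow) <= accuracy(rule, grow):
            break
        rule.append(best)
    while rule and accuracy(rule[:-1], prune) >= accuracy(rule, prune):
        rule = rule[:-1]  # prune trailing conditions
    return rule

data = [({"sky": "sunny", "wind": "weak"}, True),
        ({"sky": "sunny", "wind": "strong"}, True),
        ({"sky": "rainy", "wind": "weak"}, False),
        ({"sky": "rainy", "wind": "strong"}, False)] * 3
print(grow_and_prune_rule(data))   # e.g. [('sky', 'sunny')]
```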
Mining Generalized Association Rules
1995
"... We introduce the problem of mining generalized association rules. Given a large database of transactions, where each transaction consists of a set of items, and a taxonomy (isa hierarchy) on the items, we find associations between items at any level of the taxonomy. For example, given a taxonomy th ..."
Cited by 577 (7 self)
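The basic reduction this problem admits (and which the paper improves upon) is to extend each transaction with the taxonomy ancestors of its items, so an ordinary association-rule miner sees itemsets at every level. A small sketch, with a toy taxonomy echoing the paper's clothes/outerwear example:

```python
def ancestors(item, parent):
    """All ancestors of item in a taxonomy given as a child -> parent map."""
    found = set()
    while item in parent:
        item = parent[item]
        found.add(item)
    return found

def extend_transactions(transactions, parent):
    """Augment each transaction with the ancestors of its items, so a
    standard miner can find rules at any level of the taxonomy."""
    return [set(t) | set().union(*(ancestors(i, parent) for i in t))
            for t in transactions]

parent = {"jacket": "outerwear", "ski_pants": "outerwear",
          "outerwear": "clothes", "shoes": "footwear"}
print(extend_transactions([{"jacket", "shoes"}], parent))
# [{'jacket', 'outerwear', 'clothes', 'shoes', 'footwear'}]  (order varies)
```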
Fast Algorithms for Mining Association Rules
1994
"... We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known a ..."
Cited by 3551 (15 self)
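The level-wise candidate-generation idea behind this work can be sketched in a few lines. This is a compact, hedged rendering of Apriori-style mining (simple per-level support counting, not the paper's optimized apriori-gen or AprioriTid variants):

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent-itemset mining sketch: generate k-candidates by
    joining frequent (k-1)-itemsets, prune candidates with an infrequent
    subset, then count support against the transactions."""
    transactions = [frozenset(t) for t in transactions]
    def support(c):
        return sum(c <= t for t in transactions)
    items = {i for t in transactions for i in t}
    freq = {frozenset([i]) for i in items if support(frozenset([i])) >= minsup}
    all_freq, k = set(freq), 2
    while freq:
        cands = {a | b for a in freq for b in freq if len(a | b) == k}  # join
        cands = {c for c in cands                                       # prune
                 if all(frozenset(s) in freq for s in combinations(c, k - 1))}
        freq = {c for c in cands if support(c) >= minsup}
        all_freq |= freq
        k += 1
    return all_freq

print(apriori([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}], minsup=2))
# frequent sets: each single item and each pair, but not {a, b, c}
```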
Optimal Aggregation Algorithms for Middleware
In PODS, 2001
"... Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under ..."
Cited by 701 (4 self)
"... under that attribute, sorted by grade (highest grade first). There is some monotone aggregation function, or combining rule, such as min or average, that combines the individual grades to obtain an overall grade. To determine the top k objects (that have the best overall grades), the naive algorithm ..."
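The setting described is the one solved by the paper's threshold algorithm (TA). The sketch below makes two simplifying assumptions for brevity: all lists have equal length, and every object has a grade in every list.

```python
import heapq

def threshold_algorithm(sorted_lists, agg, k):
    """Threshold algorithm sketch. `sorted_lists` holds m lists of
    (object, grade) pairs, each sorted by grade descending; `agg` is a
    monotone combining rule such as min or average."""
    m = len(sorted_lists)
    index = [dict(lst) for lst in sorted_lists]   # for random access
    overall = {}                                  # object -> overall grade
    for depth in range(min(len(lst) for lst in sorted_lists)):
        last = []
        for i in range(m):
            obj, g = sorted_lists[i][depth]       # sorted access
            last.append(g)
            if obj not in overall:                # random access to other lists
                overall[obj] = agg([index[j][obj] for j in range(m)])
        # Halt once k objects already reach the threshold agg(last grades seen).
        best = heapq.nlargest(k, overall.values())
        if len(best) == k and best[-1] >= agg(last):
            break
    return sorted(overall.items(), key=lambda kv: -kv[1])[:k]

lists = [[("a", 0.9), ("b", 0.8), ("c", 0.1)],
         [("b", 0.9), ("a", 0.7), ("c", 0.2)]]
print(threshold_algorithm(lists, agg=min, k=1))   # [('b', 0.8)]
```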
Rules, discretion, and reputation in a model of monetary policy
Journal of Monetary Economics, 1983
"... In a discretionary regime the monetary authority can print more money and create more inflation than people expect. But, although these inflation surprises can have some benefits, they cannot arise systematically in equilibrium when people understand the policymakor's incentives and form their ..."
Cited by 794 (9 self)
"... their expectations accordingly. Because the policymaker has the power to create inflation shocks ex post, the equilibrium growth rates of money and prices turn out to be higher than otherwise. Therefore, enforced commitments (rules) for monetary behavior can improve matters. Given the repeated interaction between ..."
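The argument can be stated as a one-period cost-minimization sketch. The functional form below is the common textbook rendering of this class of model, with illustrative coefficients a and b, not necessarily the paper's exact specification:

```latex
% Policymaker cost: inflation \pi_t is costly; surprise inflation
% (\pi_t above the public's expectation \pi_t^e) yields benefit b.
\[
  z_t \;=\; \frac{a}{2}\,\pi_t^{2} \;-\; b\,\bigl(\pi_t - \pi_t^{e}\bigr),
  \qquad a,\, b > 0.
\]
% Discretion: taking \pi_t^e as given, the first-order condition
% a\,\pi_t - b = 0 gives \pi_t = b/a. Rational expectations then force
% \pi_t^e = b/a, so the surprise vanishes yet the cost is b^2/(2a) > 0.
% A credible rule \pi_t = 0 achieves the same zero surprise at zero
% cost, which is why enforced commitments can improve on discretion.
```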
Particle swarm optimization
1995
"... eberhart @ engr.iupui.edu A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications ..."
Cited by 3535 (22 self)
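The particle swarm update is short enough to sketch. Note that the inertia-weight form and coefficient values below are later textbook defaults rather than the original 1995 formulation, and the objective and bounds are toy choices.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimization sketch: each particle's velocity
    is pulled toward its own best position (cognitive term, c1) and the
    swarm's best (social term, c2), damped by inertia weight w."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros((n_particles, dim))                # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]                 # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g, f(g)

sphere = lambda z: float(np.sum(z ** 2))
print(pso(sphere, dim=5))   # converges near the origin, value close to 0
```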
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Cited by 1513 (20 self)
"... how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power-law (or if the coefficient sequence of f in a fixed basis decays like a power-law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|_(1) ≥ |f|_(2) ≥ ... ≥ |f|_(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|_(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian ..."
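The recovery procedure the abstract alludes to is ℓ1 minimization from random Gaussian measurements. Below is a hedged sketch (basis pursuit recast as a linear program; the solver, problem sizes, and the exactly-sparse toy signal are illustrative stand-ins for the paper's compressible signals):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(X, y):
    """Basis pursuit sketch: min ||g||_1 subject to X g = y, recast as an
    LP over (g in R^N, slack u in R^N with |g| <= u)."""
    K, N = X.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # minimize sum(u)
    I = np.eye(N)
    A_ub = np.block([[ I, -I],                      #  g - u <= 0
                     [-I, -I]])                     # -g - u <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([X, np.zeros((K, N))])         # X g = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N,
                  method="highs")
    return res.x[:N]

# Sparse toy signal in R^256, recovered from K = 60 Gaussian measurements.
rng = np.random.default_rng(1)
N, K = 256, 60
f = np.zeros(N)
f[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
X = rng.standard_normal((K, N))
f_hat = l1_recover(X, X @ f)
print(np.linalg.norm(f_hat - f))   # small: near-exact recovery of sparse f
```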