Results 1–10 of 666,531
Universal Divergence Estimation for Finite-Alphabet Sources
"... We study universal estimation of divergence from the realizations of two unknown finite-alphabet sources. We present two algorithms using techniques from data compression. The first divergence estimator applies the Burrows–Wheeler block sorting transform to the concatenation of the two realizations; ..."
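The first estimator above is built on the Burrows–Wheeler block sorting transform. A minimal sketch of the transform itself (via sorted cyclic rotations, with an assumed "$" sentinel — not the paper's optimized implementation) is:

```python
def bwt(s):
    """Burrows-Wheeler transform of s via sorted cyclic rotations.

    Appends '$' as an end-of-string sentinel (assumed to sort before
    every symbol of s) and returns the last column of the sorted
    rotation table.
    """
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)
```

For example, `bwt("banana")` yields `"annb$aa"`: the transform groups symbols that share similar contexts, which is the property compression-based estimators exploit.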
Universal divergence estimation for finite-alphabet sources
IEEE Trans. Inf. Theory, 2006
"... Abstract—This paper studies universal estimation of divergence from the realizations of two unknown finite-alphabet sources. Two algorithms that borrow techniques from data compression are presented. The first divergence estimator applies the Burrows–Wheeler block sorting transform to the concatenation ..."
Cited by 9 (3 self)
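Both estimators above are compression-based and universal. As a point of reference (and not the paper's method), the naive plug-in alternative — empirical distributions substituted directly into the KL formula — can be sketched as:

```python
from collections import Counter
from math import log2

def plugin_kl(x, y):
    """Plug-in estimate of D(P||Q) in bits from two symbol sequences.

    Assumes every symbol occurring in x also occurs in y, so each
    empirical ratio p/q is finite.
    """
    p, q = Counter(x), Counter(y)
    nx, ny = len(x), len(y)
    return sum((c / nx) * log2((c / nx) / (q[s] / ny)) for s, c in p.items())
```

The plug-in estimate breaks down when a symbol of the first realization is absent from the second and converges slowly for large alphabets, which is one motivation for universal, compression-based estimators.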
The rate-distortion function for source coding with side information at the decoder
IEEE Trans. Inform. Theory, 1976
"... Abstract—Let {(X_k, Y_k)}, k = 1, 2, ..., be a sequence of independent drawings of a pair of dependent random variables X, Y. Let us say that X takes values in the finite set 𝒳. It is desired to encode the sequence {X_k} in blocks of length n into a binary stream of rate R, which can in turn be decoded as a sequence ..."
Cited by 1060 (1 self)
A new learning algorithm for blind signal separation
1996
"... A new online learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Cited by 622 (80 self)
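The widely quoted form of the resulting update is the natural-gradient rule ΔW ∝ (I − E[φ(y)yᵀ])W. A minimal batch sketch, where the tanh nonlinearity and step size are illustrative choices rather than the paper's tuned settings:

```python
import numpy as np

def ica_step(W, X, lr=0.05):
    """One batch natural-gradient ICA update: W += lr*(I - E[phi(y) y^T]) W.

    X has shape (n_samples, n_signals); phi = tanh is an illustrative
    nonlinearity suited to super-Gaussian sources.
    """
    Y = X @ W.T                                     # current source estimates
    grad = np.eye(W.shape[0]) - (np.tanh(Y).T @ Y) / len(X)
    return W + lr * grad @ W
```

Iterating this update on whitened mixtures drives the outputs toward statistical independence, which is how the algorithm minimizes the average mutual information mentioned above.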
Equivariant Adaptive Source Separation
IEEE Trans. on Signal Processing, 1996
"... Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant ..."
Cited by 449 (9 self)
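The serial EASI update is commonly written as W ← W − λ(yyᵀ − I + g(y)yᵀ − y g(y)ᵀ)W. A sketch under that commonly cited form, where the cubic nonlinearity and step size are assumptions rather than the paper's recommended choices:

```python
import numpy as np

def easi_step(W, x, lam=0.01, g=lambda y: y ** 3):
    """One serial EASI update on a single mixture sample x.

    The symmetric term y y^T - I drives decorrelation of the outputs;
    the skew-symmetric term g(y) y^T - y g(y)^T drives independence.
    """
    y = W @ x
    H = (np.outer(y, y) - np.eye(len(y))
         + np.outer(g(y), y) - np.outer(y, g(y)))
    return W - lam * H @ W
```

Because the update is a function of the outputs y alone, its trajectory does not depend on the unknown mixing matrix — the equivariance property the title refers to.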
The strength of weak learnability
Machine Learning, 1990
"... This paper addresses the problem of improving the accuracy of an hypothesis output by a learning algorithm in the distribution-free (PAC) learning model. A concept class is learnable (or strongly learnable) if, given access to a source of examples of the unknown concept, the learner with high probability ..."
Cited by 871 (26 self)
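The boosting idea announced above — that weak learners can be combined into a strong one — was later distilled into AdaBoost. A minimal sketch with axis-aligned threshold stumps (an illustration of boosting in general, not the paper's original boosting-by-majority construction):

```python
import math

def stump_predict(s, x):
    """Evaluate a threshold stump s = (feature, threshold, sign) on x."""
    feat, thr, sign = s
    return sign if x[feat] > thr else -sign

def adaboost(X, y, rounds=10):
    """Minimal AdaBoost: X is a list of feature tuples, y labels in {-1,+1}."""
    n = len(X)
    w = [1.0 / n] * n                      # example weights
    ensemble = []
    for _ in range(rounds):
        best = None                        # exhaustively fit the best stump
        for feat in range(len(X[0])):
            for thr in sorted({x[feat] for x in X}):
                for sign in (+1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict((feat, thr, sign), x) != yi)
                    if best is None or err < best[0]:
                        best = (err, (feat, thr, sign))
        err, s = best
        if err >= 0.5:                     # no weak learner left
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, s))
        w = [wi * math.exp(-alpha * yi * stump_predict(s, x))
             for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]           # reweight toward hard examples
    return ensemble

def predict(ensemble, x):
    """Weighted-majority vote of the boosted stumps."""
    return 1 if sum(a * stump_predict(s, x) for a, s in ensemble) > 0 else -1
```

Each round reweights the sample toward the examples the previous stumps got wrong, which is exactly the mechanism by which weak (slightly-better-than-chance) learners are driven to a strong hypothesis.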
Distributed space-time-coded protocols for exploiting cooperative diversity in wireless networks
IEEE Trans. Inf. Theory, 2003
"... We develop and analyze space–time coded cooperative diversity protocols for combating multipath fading across multiple protocol layers in a wireless network. The protocols exploit spatial diversity available among a collection of distributed terminals that relay messages for one another in such a manner that the destination terminal can average the fading, even though it is unknown a priori which terminals will be involved. In particular, a source initiates transmission to its destination, and many relays potentially receive the transmission. Those terminals that can fully decode the transmission ..."
Cited by 622 (5 self)
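The selection rule sketched in the abstract — only relays that fully decode the source participate — can be illustrated with a toy Monte Carlo over Rayleigh fading links. Every parameter and the success criterion here are illustrative assumptions, not the paper's exact model:

```python
import random

def outage_prob(snr, rate, n_relays, trials=5000, seed=1):
    """Toy outage estimate for selection decode-and-forward.

    Rayleigh fading: power gains ~ Exp(1). A relay decodes the source
    iff log2(1 + snr*gain) > rate, i.e. gain > (2**rate - 1)/snr; the
    destination succeeds if the direct link or any decoding relay's
    link to it supports the rate.
    """
    rng = random.Random(seed)
    thresh = (2 ** rate - 1) / snr          # required power gain
    outages = 0
    for _ in range(trials):
        if rng.expovariate(1.0) > thresh:   # direct link succeeds
            continue
        rescued = any(rng.expovariate(1.0) > thresh and   # relay decoded
                      rng.expovariate(1.0) > thresh       # relay-dest link ok
                      for _ in range(n_relays))
        if not rescued:
            outages += 1
    return outages / trials
```

Even this crude model reproduces the qualitative claim: adding candidate relays sharply reduces outage, because the destination only needs one of the independently faded paths to survive.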
Power allocation and linear precoding for wireless communications with finite-alphabet inputs
2012
DETECTION OF SPARSE SIGNALS UNDER FINITE-ALPHABET CONSTRAINTS
"... In this paper, we solve the problem of detecting the entries of a sparse finite-alphabet signal from a limited amount of data, for instance obtained by compressive sampling. While existing methods either rely on the sparsity property, the finite-alphabet property, or none of those properties to solve the underdetermined system of linear equations, we capitalize on both the sparsity and the finite-alphabet features of the signal. The problem is first formulated in a Bayesian framework to incorporate the prior knowledge of sparsity, which is then shown to be solvable using sphere decoding (SD ..."
Cited by 6 (0 self)
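Sphere decoding prunes an exhaustive search over the alphabet. The unpruned baseline it accelerates can be sketched as follows (brute force over alphabetⁿ, illustrative only — SD reaches the same minimizer without enumerating every candidate, and the paper additionally folds in a sparsity prior):

```python
import itertools
import numpy as np

def finite_alphabet_lsq(A, y, alphabet):
    """Exhaustive finite-alphabet least squares: argmin ||y - Ax||
    over x in alphabet^n.

    Exponential in n; sphere decoding prunes this same search tree.
    """
    n = A.shape[1]
    best = min(itertools.product(alphabet, repeat=n),
               key=lambda x: np.linalg.norm(y - A @ np.array(x, float)))
    return list(best)
```

For a well-conditioned A this recovers the alphabet point nearest the unconstrained solution, which is why the finite-alphabet constraint alone already resolves mild underdetermination.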