Results 1-10 of 106
Clustering with Bregman Divergences
Journal of Machine Learning Research, 2005
Abstract
Cited by 443 (57 self)
A wide variety of distortion functions are used for clustering, e.g., squared Euclidean distance, Mahalanobis distance and relative entropy. In this paper, we propose and analyze parametric hard and soft clustering algorithms based on a large class of distortion functions known as Bregman divergences. The proposed algorithms unify centroid-based parametric clustering approaches, such as classical k-means and information-theoretic clustering, which arise by special choices of the Bregman divergence. The algorithms maintain the simplicity and scalability of the classical k-means algorithm, while generalizing the basic idea to a very large class of clustering loss functions. There are two main contributions in this paper. First, we pose the hard clustering problem in terms of minimizing the loss in Bregman information, a quantity motivated by rate-distortion theory, and present an algorithm to minimize this loss. Second, we show an explicit bijection between Bregman divergences and exponential families. The bijection enables the development of an alternative interpretation of an efficient EM scheme for learning models involving mixtures of exponential family distributions. This leads to a simple soft clustering algorithm for all Bregman divergences.
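The hard-clustering algorithm described in this abstract can be sketched in a few lines. The names below (`bregman_kmeans`, `sq_euclidean`, `gen_kl`) are illustrative, not from the paper; the sketch relies on the property the paper establishes, namely that for every Bregman divergence the optimal cluster representative is the arithmetic mean, so only the assignment step depends on the chosen divergence.

```python
import numpy as np

def bregman_kmeans(X, k, divergence, n_iter=50, seed=0):
    # Hard Bregman clustering: assign each point to the centroid with the
    # smallest divergence, then update each centroid to the arithmetic mean
    # of its cluster (optimal for any Bregman divergence).
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # divergence broadcasts over (n, 1, d) vs (1, k, d) -> (n, k)
        labels = np.argmin(divergence(X[:, None, :], centroids[None, :, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

def sq_euclidean(x, mu):
    # Recovers classical k-means.
    return ((x - mu) ** 2).sum(-1)

def gen_kl(x, mu):
    # Generalized KL divergence (relative entropy) for positive data;
    # recovers information-theoretic clustering.
    return (x * np.log(x / mu) - x + mu).sum(-1)
```

Swapping `sq_euclidean` for `gen_kl` (or any other Bregman divergence) changes only the assignment rule; the mean-based update and the overall loop are unchanged, which is the unification the paper describes.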
Coding for Computing
IEEE Transactions on Information Theory, 1998
Abstract
Cited by 138 (0 self)
A sender communicates with a receiver who wishes to reliably evaluate a function of their combined data. We show that if only the sender can transmit, the number of bits required is a conditional entropy of a naturally defined graph. We also determine the number of bits needed when the communicators exchange two messages.

1 Introduction

Let f be a function of two random variables X and Y. A sender P_X knows X, a receiver P_Y knows Y, and both want P_Y to reliably determine f(X, Y). How many bits must P_X transmit? Embedding this communication-complexity scenario (Yao [22]) in the standard information-theoretic setting (Shannon [17]), we assume that (1) f(X, Y) must be determined for a block of many independent (X, Y)-instances, (2) P_X transmits after observing the whole block of X instances, (3) a vanishing block error probability is allowed, and (4) the problem's rate L_f(X|Y) is the number of bits transmitted for the block, normalized by the number of instances. Two simple bou...
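The "naturally defined graph" in the single-message result is the characteristic (confusability) graph: x and x' must be distinguishable whenever some y is jointly possible with both and f differs on them. A small sketch of its construction, with illustrative names and a toy function f; computing the conditional graph entropy itself is beyond this snippet.

```python
from itertools import combinations

def characteristic_graph(xs, ys, support, f):
    # Edge set of the characteristic graph of f given Y: x and x' are
    # adjacent iff some y has (x, y) and (x', y) both possible while
    # f(x, y) != f(x', y), i.e. the receiver, knowing y, must be able
    # to tell x from x'.
    edges = set()
    for x, xp in combinations(sorted(xs), 2):
        if any((x, y) in support and (xp, y) in support and f(x, y) != f(xp, y)
               for y in ys):
            edges.add((x, xp))
    return edges

# Toy example: f(x, y) = (x + y) mod 2 with full support; x and x' must
# be distinguished exactly when they have different parity.
f = lambda x, y: (x + y) % 2
full = {(x, y) for x in range(4) for y in range(4)}
G = characteristic_graph(range(4), range(4), full, f)
```

For this toy f the graph is complete bipartite between even and odd x values, so the receiver only ever needs the parity of X.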
Universal Discrete Denoising: Known Channel
IEEE Trans. Inform. Theory, 2003
Abstract
Cited by 99 (34 self)
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise.
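A compact sketch of the two-pass context-counting idea for a binary symmetric channel with Hamming loss: a first pass collects per-context symbol counts from the noisy output, and a second pass makes a per-symbol decision from those counts and the known channel. The function name and parameters are illustrative, and the decision rule used here is the commonly cited form m(c)^T Pi^{-T} (lambda_xhat * pi_z); consult the paper for the exact algorithm and its analysis.

```python
import numpy as np
from collections import defaultdict

def dude(z, delta, k=2):
    # z: 0/1 array observed at the output of a BSC with crossover delta.
    # k: one-sided length of the two-sided context (k symbols each side).
    z = np.asarray(z)
    n = len(z)
    Pi = np.array([[1 - delta, delta], [delta, 1 - delta]])  # channel matrix
    Pi_inv = np.linalg.inv(Pi)
    Lam = np.array([[0.0, 1.0], [1.0, 0.0]])                 # Hamming loss
    # Pass 1: count output symbols per two-sided context.
    counts = defaultdict(lambda: np.zeros(2))
    for i in range(k, n - k):
        c = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        counts[c][z[i]] += 1
    # Pass 2: denoise each interior symbol from its context's counts.
    xhat = z.copy()
    for i in range(k, n - k):
        c = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        m = counts[c]
        scores = [m @ Pi_inv.T @ (Lam[:, b] * Pi[:, z[i]]) for b in (0, 1)]
        xhat[i] = int(np.argmin(scores))
    return xhat
```

Note the sketch uses no knowledge of the input statistics, only the channel matrix and the observed sequence, matching the setting the abstract describes.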
Fifty Years of Shannon Theory
1998
Abstract
Cited by 50 (1 self)
A brief chronicle is given of the historical development of the central problems in the theory of fundamental limits of data compression and reliable communication.
Coordination Capacity
2009
Abstract
Cited by 48 (17 self)
We develop elements of a theory of cooperation and coordination in networks. Rather than considering a communication network as a means of distributing information, or of reconstructing random processes at remote nodes, we ask what dependence can be established among the nodes given the communication constraints. Specifically, in a network with communication rates {R_{i,j}} between the nodes, we ask what is the set of all achievable joint distributions p(x_1, ..., x_m) of actions at the nodes on the network. Several networks are solved, including arbitrarily large cascade networks. Distributed cooperation can be the solution to many problems such as distributed games, distributed control, and establishing mutual information bounds on the influence of one part of a physical system on another.
Lattices are Everywhere
Abstract
Cited by 41 (3 self)
As bees and crystals (and people selling oranges in the market) have known for many years, lattices provide efficient structures for packing, covering, quantization and channel coding. In recent years, interesting links were found between lattices and coding schemes for multi-terminal networks. This tutorial paper covers close to 20 years of my research in the area: enjoying the beauty of lattice codes, and discovering their power in dithered quantization, dirty-paper coding, Wyner-Ziv DPCM, modulo-lattice modulation, distributed interference cancellation, and more.
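One of the listed applications, dithered quantization, is easy to demonstrate: with a subtractive dither shared by encoder and decoder, the end-to-end error of a uniform quantizer is bounded by half a step and is independent of the input. A minimal one-dimensional sketch (the function name is illustrative):

```python
import numpy as np

def dithered_quantize(x, step, rng):
    # Subtractive-dither uniform quantizer: encoder and decoder share the
    # dither d ~ Uniform[-step/2, step/2); the encoder sends Q(x + d) and
    # the decoder outputs Q(x + d) - d.  The end-to-end error equals the
    # quantization error of x + d, so it is bounded by step/2 and, for
    # this dither, uniformly distributed and independent of the input.
    d = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    q = step * np.round((x + d) / step)  # nearest point of the step-lattice
    return q - d
```

The same subtractive-dither idea is what makes lattice codes behave like additive-noise channels in the multi-terminal schemes the tutorial surveys.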
Side information aware coding strategies for sensor networks
IEEE J. Selected Areas Commun.
Abstract
Cited by 34 (0 self)
We develop coding strategies for estimation under communication constraints in tree-structured sensor networks. The strategies have a modular and decentralized architecture. This promotes the flexibility, robustness, and scalability that wireless sensor networks need to operate in uncertain, changing, and resource-constrained environments. The strategies are based on a generalization of Wyner–Ziv source coding with decoder side information. We develop solutions for general trees, and illustrate our results in serial (pipeline) and parallel (hub-and-spoke) networks. Additionally, the strategies can be applied to other network information theory problems. They have a successive coding structure that gives an inherently less complex way to attain a number of prior results, as well as some novel results, for the Chief Executive Officer problem, multiterminal source coding, and certain classes of relay channels.

Index Terms—Chief Executive Officer (CEO) problems, data fusion, distributed detection, distributed estimation, multiterminal source coding, rate distortion theory, relay channels, sensor networks, side information, Wyner–Ziv coding.
Iterative decoding of a broadcast message
In Proc. Allerton Conf. Commun., Contr., Comput., 2003
Abstract
Cited by 30 (0 self)
We develop communication strategies for the rate-constrained interactive decoding of a message broadcast to a group of interested users. This situation differs from the relay channel in that all users are interested in the transmitted message, and from the broadcast channel because no user can decode on its own. We focus on two-user scenarios, and describe a baseline strategy that uses ideas of coding with decoder side information. One user acts initially as a relay for the other. That other user then decodes the message and sends back random parity bits, enabling the first user to decode. We show how to improve on this scheme’s performance through a conversation consisting of multiple rounds of discussion. While there are now more messages, each message is shorter, lowering the overall rate of the conversation. Such multi-round conversations can be more efficient because earlier messages serve as side information known at both encoder and decoder. We illustrate these ideas for binary erasure channels. We show that multi-round conversations can decode using less overall rate than is possible with the single-round scheme.
Pointwise Redundancy in Lossy Data Compression and Universal Lossy Data Compression
IEEE Trans. Inform. Theory, 1999
Abstract
Cited by 30 (15 self)
We characterize the achievable pointwise redundancy rates for lossy data compression at a fixed distortion level. "Pointwise redundancy" refers to the difference between the description length achieved by an nth-order block code and the optimal nR(D) bits. For memoryless sources, we show that the best achievable redundancy rate is of order O(√n) in probability. This follows from a second-order refinement to the classical source coding theorem, in the form of a "one-sided central limit theorem." Moreover, we show that, along (almost) any source realization, the description lengths of any sequence of block codes operating at distortion level D exceed nR(D) by at least as much as C√(n log log n), infinitely often. Corresponding direct coding theorems are also given, showing that these rates are essentially achievable. The above rates are in sharp contrast with the expected redundancy rates of order O(log n) recently reported by various authors. Our approach is based on showing that...
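The "sharp contrast" between the C√(n log log n) pointwise lower envelope and the O(log n) expected-redundancy scale can be made concrete numerically. The constants below are arbitrary placeholders, not values from the paper; the point is only the growth-rate gap.

```python
import math

def pointwise_redundancy(n, C=1.0):
    # C * sqrt(n * log log n): pointwise lower envelope from the abstract
    # (C is an illustrative constant; requires n > e^e so log log n > 0).
    return C * math.sqrt(n * math.log(math.log(n)))

def expected_redundancy(n, c=1.0):
    # c * log n: the expected-redundancy scale the abstract contrasts with.
    return c * math.log(n)

# The ratio grows without bound as the block length n increases.
ratios = [pointwise_redundancy(n) / expected_redundancy(n)
          for n in (10**3, 10**6, 10**9)]
```

Already at n = 10^6 the pointwise term dominates the logarithmic one by two orders of magnitude, which is why the two redundancy notions behave so differently.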
Achievable rate regions and performance comparison of half duplex bidirectional relaying protocols
IEEE Trans. Inf. Theory, 2011
Abstract
Cited by 29 (1 self)
In a bidirectional relay channel, two nodes wish to exchange independent messages over a shared wireless half-duplex channel with the help of a relay. In this paper, we derive achievable rate regions for four new half-duplex protocols and compare these to four existing half-duplex protocols and outer bounds. In time, our protocols consist of either two or three phases. In the two-phase protocols, both users simultaneously transmit during the first phase and the relay alone transmits during the second phase, while in the three-phase protocol the two users sequentially transmit followed by a transmission from the relay. The relay may forward information in one of four manners; we outline existing amplify-and-forward (AF), decode-and-forward (DF), lattice-based, and compress-and-forward (CF) relaying schemes and introduce the novel mixed forward scheme. The latter is a combination of CF in one direction and DF in the other. We derive achievable rate regions for the CF and mixed relaying schemes for the two- and three-phase protocols. We provide a comprehensive treatment of eight possible half-duplex bidirectional relaying protocols in Gaussian noise, obtaining their relative performance under different SNR and relay geometries.

Index Terms—Achievable rate regions, bidirectional communication, compress and forward, relaying.