Results 1–10 of 373
The multiway relay channel
 in Proc. IEEE Int. Symposium on Inf. Theory (ISIT), Seoul, Korea
Cited by 62 (3 self)
Abstract—The multiuser communication channel, in which multiple users exchange information with the help of a single relay terminal, called the multiway relay channel, is considered. In this model, multiple interfering clusters of users communicate simultaneously, where the users within the same cluster wish to exchange messages among themselves. It is assumed that the users cannot receive each other’s signals directly, and hence the relay terminal is the enabler of communication. A relevant metric to study in this scenario is the symmetric rate achievable by all users, which we identify for amplify-and-forward (AF), decode-and-forward (DF) and compress-and-forward (CF) protocols. We also present an upper bound for comparison. The two extreme cases, namely full data exchange, in which every user wants to receive messages of all other users, and pairwise data exchange, consisting of multiple two-way relay channels, are investigated and presented in detail.
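As a hedged illustration (not the paper's derivation for the multiway setting), the AF rate in a simple two-hop Gaussian relay link is commonly computed from an effective end-to-end SNR; the helper names below are ours:

```python
import math

def af_effective_snr(snr_up: float, snr_down: float) -> float:
    # Textbook two-hop amplify-and-forward end-to-end SNR: the relay
    # rescales its noisy observation, so noise from both hops accumulates.
    return (snr_up * snr_down) / (snr_up + snr_down + 1.0)

def af_rate(snr_up: float, snr_down: float) -> float:
    # The factor 1/2 accounts for the two-phase (multiple-access then
    # broadcast) protocol splitting the channel uses in half.
    return 0.5 * math.log2(1.0 + af_effective_snr(snr_up, snr_down))
```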
Multiple access channels with states causally known at transmitters,” November 2010, submitted to IEEE Transactions on Information Theory, available online at http://arxiv.org/abs/1011.6639
Cited by 21 (3 self)
Abstract—It has been recently shown by Lapidoth and Steinberg that strictly causal state information can be beneficial in multiple access channels (MACs). Specifically, it was proved that the capacity region of a two-user MAC with independent states, each known strictly causally to one encoder, can be enlarged by letting the encoders send compressed past state information to the decoder. In this study, a generalization of the said strategy is proposed whereby the encoders compress also the past transmitted codewords along with the past state sequences. The proposed scheme uses a combination of long-message encoding, compression of the past state sequences and codewords without binning, and joint decoding over all transmission blocks. The proposed strategy has been recently shown by Lapidoth and Steinberg to strictly improve upon the original one. Capacity results are then derived for a class of channels that include two-user modulo-additive state-dependent MACs. Moreover, the proposed scheme is extended to state-dependent MACs with an arbitrary number of users. Finally, output feedback is introduced and an example is provided to illustrate the interplay between feedback and availability of strictly causal state information in enlarging the capacity region. Index Terms—Long-message encoding, multiple access channels (MACs), output feedback, quantize-forward, state-dependent channels, strictly causal state information.
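The class of channels for which capacity is derived can be made concrete; a minimal sketch of a two-user modulo-additive state-dependent MAC (function and argument names are ours, not the paper's):

```python
def modulo_additive_mac(x1, x2, state, q=2):
    # Two-user modulo-additive state-dependent MAC:
    # Y_t = (X1_t + X2_t + S_t) mod q at each channel use t.
    return [(a + b + s) % q for a, b, s in zip(x1, x2, state)]
```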
Achievability proof via output statistics of random binning
 in Proc. IEEE Int. Symp. Inform. Theory (ISIT), 2012
Cited by 18 (6 self)
This paper introduces a new and ubiquitous framework for establishing achievability results in network information theory (NIT) problems. The framework uses random binning arguments and is based on a duality between channel and source coding problems. Further, the framework uses pmf approximation arguments instead of counting and typicality. This allows for proving coordination and strong secrecy problems where certain statistical conditions on the distribution of random variables need to be satisfied. These statistical conditions include independence between messages and the eavesdropper’s observations in secrecy problems and closeness to a certain distribution (usually, an i.i.d. distribution) in coordination problems. One important feature of the framework is that it enables one to add an eavesdropper and obtain a result on the secrecy rates “for free.” We make a case for the generality of the framework by studying examples in a variety of settings, including channel coding, lossy source coding, joint source-channel coding, coordination, strong secrecy, feedback and relaying. In particular, by investigating the framework for the lossy source coding problem over the broadcast channel, it is shown that the new framework provides a simple alternative to the hybrid coding scheme. Also, new results on the secrecy rate region (under the strong secrecy criterion) of the wiretap broadcast channel and the wiretap relay channel are derived. In a set of accompanying papers, we have shown the usefulness of the framework for establishing achievability results for coordination problems including interactive channel simulation, coordination via relay and channel simulation via another channel. Index Terms—Random binning, achievability, network information theory, strong secrecy, duality.
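The basic randomization device of the framework, i.i.d. uniform binning of source sequences, can be sketched as follows (a toy simplification of the actual proofs; names are ours):

```python
import random

def random_binning(sequences, num_bins, seed=0):
    # Assign every source sequence an independent, uniformly chosen bin
    # index. In this framework, achievability follows by analyzing the
    # joint pmf of such bin indices rather than typicality counts.
    rng = random.Random(seed)
    return {seq: rng.randrange(num_bins) for seq in sequences}
```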
Optimality and approximate optimality of source-channel separation in networks
 in Proc. IEEE Int. Symp. Inf. Theory
, 2010
Cited by 18 (1 self)
Abstract—We consider the source-channel separation architecture for lossy source coding in communication networks. It is shown that the separation approach is optimal in two general scenarios and is approximately optimal in a third scenario. The two scenarios for which separation is optimal complement each other: the first is when the memoryless sources at source nodes are arbitrarily correlated, each of which is to be reconstructed at possibly multiple destinations within certain distortions, but the channels in this network are synchronized, orthogonal, and memoryless point-to-point channels; the second is when the memoryless sources are mutually independent, each of which is to be reconstructed only at one destination within a certain distortion, but the channels are general, including multiuser channels, such as multiple access, broadcast, interference, and relay channels, possibly with feedback. The third scenario, for which we demonstrate approximate optimality of source-channel separation, generalizes the second scenario by allowing each source to be reconstructed at multiple destinations with different distortions. For this case, the loss from optimality using the separation approach can be upper-bounded when a difference distortion measure is taken, and in the special case of quadratic distortion measure, this leads to universal constant bounds. Index Terms—Joint source-channel coding, separation.
Multiterminal source coding under logarithmic loss
 CoRR
Cited by 13 (3 self)
Abstract—We consider the two-encoder multiterminal source coding problem subject to distortion constraints computed under logarithmic loss. We provide a single-letter description of the achievable rate distortion region for arbitrarily correlated sources with finite alphabets. In doing so, we also give the rate distortion region for the CEO problem under logarithmic loss. Notably, the Berger–Tung inner bound is tight in both settings.
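Under logarithmic loss the reconstruction is a probability distribution over the source alphabet rather than a point estimate; a minimal sketch of the distortion measure (naming is ours):

```python
import math

def log_loss(x, q):
    # Logarithmic loss d(x, q) = -log q(x): distortion is low when the
    # reconstruction pmf q places high probability on the realized
    # source symbol x (measured in bits here).
    return -math.log2(q[x])
```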
On the capacity region for index coding
 in Proc. IEEE Int. Symp. Inf. Theory
, 2013
Cited by 13 (5 self)
A new inner bound on the capacity region of the general index coding problem is established. Unlike most existing bounds that are based on graph theoretic or algebraic tools, the bound relies on a random coding scheme and optimal decoding, and has a simple polymatroidal single-letter expression. The utility of the inner bound is demonstrated by examples that include the capacity region for all index coding problems with up to five messages (there are 9846 non-isomorphic ones).
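The flavor of index coding shows up already in the smallest instance: two receivers each demand the message the other already holds as side information, and a single XOR broadcast serves both (a standard toy example, not taken from the paper):

```python
def encode(m1: int, m2: int) -> int:
    # One broadcast symbol serves both receivers.
    return m1 ^ m2

def decode(broadcast: int, side_info: int) -> int:
    # Each receiver XORs out the message it already knows.
    return broadcast ^ side_info
```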
Binary Energy Harvesting Channel with Finite Energy Storage
Cited by 13 (8 self)
Abstract—We consider the capacity of an energy harvesting communication channel with a finite-sized battery. As an abstraction of this problem, we consider a system where energy arrives at the encoder in multiples of a fixed quantity, and the physical layer is modeled accordingly as a finite discrete alphabet channel based on this fixed quantity. Further, for tractability, we consider the case of binary energy arrivals into a unit-capacity battery over a noiseless binary channel. Viewing the available energy as state, this is a state-dependent channel with causal state information available only at the transmitter. Further, the state is correlated over time and the channel inputs modify the future states. We show that this channel is equivalent to an additive geometric-noise timing channel with causal information of the noise available at the transmitter. We provide a single-letter capacity expression involving an auxiliary random variable, and evaluate this expression with a particular auxiliary random variable selection, which resembles noise concentration and lattice-type coding in the timing channel. We evaluate the achievable rates by the proposed auxiliary selection and extend our results to noiseless ternary channels.
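The operational constraint in this model, a unit battery gating what the encoder may transmit, can be sketched as below; the harvest-then-transmit timing within a slot is our assumption for the sketch:

```python
def transmit(arrivals, intended):
    # Binary energy arrivals into a unit-capacity battery over a noiseless
    # binary channel: sending '1' costs one energy unit, '0' is free.
    battery = 0
    sent = []
    for e, x in zip(arrivals, intended):
        battery = min(1, battery + e)   # harvest first, clip at unit capacity
        if x == 1 and battery == 1:
            battery -= 1
            sent.append(1)
        else:
            sent.append(0)              # an empty battery forces a '0'
    return sent
```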
Lattice codes for the Gaussian relay channel: Decode-and-Forward and Compress-and-Forward,” [Online]. Available: http://arxiv.org/pdf/1111.0084v1.pdf
Cited by 12 (3 self)
Abstract—Lattice codes are known to achieve capacity in the Gaussian point-to-point channel, achieving the same rates as i.i.d. random Gaussian codebooks. Lattice codes are also known to outperform random codes for certain channel models that are able to exploit their linearity. In this paper, we show that lattice codes may be used to achieve the same performance as known i.i.d. Gaussian random coding techniques for the Gaussian relay channel, and show several examples of how this may be combined with the linearity of lattice codes in multi-source relay networks. In particular, we present a nested lattice list decoding technique in which lattice codes are shown to achieve the decode-and-forward (DF) rate of single source, single destination Gaussian relay channels with one or more relays. We next present two examples of how this DF scheme may be combined with the linearity of lattice codes to achieve new rate regions which for some channel conditions outperform analogous known Gaussian random coding techniques in multi-source relay channels. That is, we derive a new achievable rate region for the two-way relay channel with direct links and compare it to existing schemes, and derive a new achievable rate region for the multiple access relay channel. We furthermore present a lattice compress-and-forward (CF) scheme for the Gaussian relay channel which exploits a lattice Wyner–Ziv binning scheme and achieves the same rate as the Cover–El Gamal CF rate evaluated for Gaussian random codes. These results suggest that structured/lattice codes may be used to mimic, and sometimes outperform, random Gaussian codes in general Gaussian networks. Index Terms—Compress-and-forward, decode-and-forward, Gaussian relay channel, lattice codes, relay channel.
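The modulo-lattice reduction at the heart of nested-lattice schemes can be illustrated in one dimension, with the coarse lattice qZ nested in the fine lattice Z (real constructions are high-dimensional; this is only a sketch with our naming):

```python
def mod_coarse_lattice(x: float, q: float) -> float:
    # [x] mod qZ: subtract the nearest coarse-lattice point, leaving a
    # residue inside the fundamental Voronoi region around the origin.
    return x - q * round(x / q)
```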
Sparse signal processing with linear and nonlinear observations: A unified Shannon-theoretic approach,” arXiv preprint arXiv:1304.0682
, 2013
Cited by 11 (4 self)
In this work we derive fundamental limits for many linear and nonlinear sparse signal processing models including linear and quantized compressive sensing, group testing, multivariate regression and problems with missing features. In general, sparse signal processing problems can be characterized in terms of the following Markovian property. We are given a set of N variables X1, X2, ..., XN, and there is an unknown subset of variables S ⊂ {1, 2, ..., N} that are relevant for predicting outcomes/outputs Y. In other words, when Y is conditioned on {Xn} for n ∈ S, it is conditionally independent of the other variables {Xn} for n ∉ S. Our goal is to identify the set S from samples of the variables X and the associated outcomes Y. We characterize this problem as a version of the noisy channel coding problem. Using asymptotic information theoretic analyses, we establish mutual information formulas that provide sufficient and necessary conditions on the number of samples required to successfully recover the salient variables. These mutual information expressions unify conditions for both linear and nonlinear observations. We then compute sample complexity bounds for the aforementioned models, based on the mutual information expressions, in order to demonstrate the applicability and flexibility of our results in general sparse signal processing models.
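Group testing, one of the nonlinear models covered, makes the recovery question concrete: each pool reports only whether it intersects the salient set S. A brute-force recovery sketch (function names are ours):

```python
from itertools import combinations

def group_test(pools, salient):
    # Noiseless Boolean observation model: a pool outputs 1 iff it
    # contains at least one salient item.
    return [int(any(i in salient for i in pool)) for pool in pools]

def recover(pools, outputs, n, k):
    # Exhaustively search size-k candidate sets consistent with every pool
    # output; information-theoretic bounds govern how many pools are
    # needed for the answer to be unique.
    for cand in combinations(range(n), k):
        if group_test(pools, set(cand)) == outputs:
            return set(cand)
    return None
```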
Polar Codes for Broadcast Channels
, 2013
Cited by 10 (1 self)
Polar codes are introduced for discrete memoryless broadcast channels. For m-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from m independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: i) Cover’s superposition codes; ii) Marton’s codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multiuser setting. The codes achieve rates on the capacity boundary of a few classes of broadcast channels (e.g., binary-input stochastically degraded). The complexity of encoding and decoding is O(n log n), where n is the block length. In addition, polar code sequences obtain a stretched-exponential decay of O(2^(−n^β)) of the average block error probability, where 0 < β < 1/2. Reproducible experiments for finite block …
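The encoder's core operation is the recursive Arıkan transform; one common recursive variant (index-ordering conventions differ across implementations) can be sketched as:

```python
def polar_transform(u):
    # Recursive GF(2) polar transform: the base kernel maps (u1, u2) to
    # (u1 XOR u2, u2), applied recursively over block length n = 2^m,
    # for O(n log n) XOR operations in total.
    n = len(u)
    if n == 1:
        return u[:]
    top = [a ^ b for a, b in zip(u[:n // 2], u[n // 2:])]
    return polar_transform(top) + polar_transform(u[n // 2:])
```

Over GF(2) this transform is its own inverse, which the second assertion below checks on a small block.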