Results 1–10 of 26
Capacity Regions and Bounds for a Class of Z-interference Channels
Cited by 14 (6 self)
We define a class of Z-interference channels for which we obtain a new upper bound on the capacity region. The bound exploits a technique first introduced by Körner and Marton. A channel in this class has the property that, for the transmitter-receiver pair that suffers from interference, the conditional output entropy at the receiver is invariant with respect to the transmitted codewords. We compare the new capacity region upper bound with the Han/Kobayashi achievable rate region for interference channels. This comparison shows that our bound is tight in some cases, thereby yielding specific points on the capacity region as well as the sum capacity of certain Z-interference channels. In particular, this result can be used as an alternate method to obtain the sum capacity of Gaussian Z-interference channels. We then apply an additional restriction to our channel class: the transmitter-receiver pair that suffers from interference achieves its maximum output entropy with a single input distribution, irrespective of the interference distribution. For these channels we show that our new capacity region upper bound coincides with the Han/Kobayashi achievable rate region, which is therefore capacity-achieving. In particular, for these channels superposition encoding with partial decoding is shown to be optimal, and a single-letter characterization of the capacity region is obtained.
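The defining property of this channel class, that H(Y | X = x) does not depend on the transmitted codeword, is easy to check numerically for a given channel matrix. The sketch below (my own illustration, not from the paper) verifies it for a modulo-additive noise channel, where every row of the transition matrix is a permutation of the noise distribution:

```python
import numpy as np

def cond_output_entropy(P):
    """H(Y | X = x) in bits for each row x of channel matrix P (rows: inputs)."""
    P = np.asarray(P, dtype=float)
    terms = np.where(P > 0, -P * np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return terms.sum(axis=1)

# Example channel: Y = X + Z (mod 3) with noise pmf Z ~ (0.7, 0.2, 0.1).
noise = np.array([0.7, 0.2, 0.1])
P = np.array([np.roll(noise, x) for x in range(3)])

H = cond_output_entropy(P)
# Every row is a permutation of the noise pmf, so H(Y | X = x) is the same
# for all inputs x -- the invariance property the paper's bound exploits.
assert np.allclose(H, H[0])
```

Any channel whose rows are permutations of a single pmf passes this check; channels outside the class generally do not.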
A Mutual Information Invariance Approach to Symmetry in Discrete Memoryless Channels
Cited by 12 (6 self)
Abstract — There are numerous notions of symmetry for discrete memoryless channels. A common goal of these various definitions is that the capacity may be easily computed once the channel is declared to be symmetric. In this paper we focus on a class of definitions of symmetry characterized by the invariance of the channel mutual information over a group of permutations of the input distribution. For definitions of symmetry within this class, we give a simple proof of the optimality of the uniform input distribution. Under a general enough definition of symmetry, the fundamental channels are all symmetric. This paper provides a definition of symmetry that covers these fundamental channels, along with a proof simple enough to find itself on the chalkboard of even the most introductory class in information theory.
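The claimed optimality of the uniform input distribution can be sanity-checked numerically. Here is a small sketch (my own, not the paper's): for a binary symmetric channel, whose transition-matrix rows are permutations of one another, a scan over input distributions never beats the uniform one:

```python
import numpy as np

def mutual_information(p_x, P):
    """I(X;Y) in bits for input pmf p_x and channel matrix P (rows: inputs)."""
    p_xy = p_x[:, None] * P            # joint distribution of (X, Y)
    p_y = p_xy.sum(axis=0)             # output marginal
    mask = p_xy > 0
    ratio = p_xy[mask] / (p_x[:, None] * p_y[None, :])[mask]
    return float((p_xy[mask] * np.log2(ratio)).sum())

# Binary symmetric channel with crossover probability 0.1; it is symmetric
# under every common definition, so uniform input should be optimal.
eps = 0.1
P = np.array([[1 - eps, eps], [eps, 1 - eps]])

uniform = np.array([0.5, 0.5])
I_uniform = mutual_information(uniform, P)   # equals 1 - H2(0.1)

# Scan input distributions: none should exceed the uniform one.
best = max(mutual_information(np.array([a, 1 - a]), P)
           for a in np.linspace(0.01, 0.99, 99))
assert best <= I_uniform + 1e-12
```

A grid scan is of course not a proof; the paper's contribution is a proof that covers the whole class of permutation-invariance definitions at once.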
Interference Channels with Correlated Receiver Side Information
Cited by 7 (4 self)
The problem of joint source-channel coding in transmitting independent sources over interference channels with correlated receiver side information is studied. When each receiver has side information correlated with its own desired source, it is shown that source-channel code separation is optimal. When each receiver has side information correlated with the interfering source, sufficient conditions for reliable transmission are provided based on a joint source-channel coding scheme using the superposition encoding and partial decoding idea of Han and Kobayashi. When the receiver side information is a deterministic function of the interfering source, source-channel code separation is again shown to be optimal. As a special case, for a class of Z-interference channels, when the side information of the receiver facing interference is a deterministic function of the interfering source, necessary and sufficient conditions for reliable transmission are provided in the form of single-letter expressions. As a byproduct of these joint source-channel coding results, the capacity region of a class of Z-channels with degraded message sets is also provided.
Bounds and Capacity Results for the Cognitive Z-interference Channel
, 2009
Cited by 7 (1 self)
Abstract — We study the discrete memoryless Z-interference channel (Z-IC) where the transmitter of the pair that suffers from interference is cognitive. We first provide upper and lower bounds on the capacity of this channel. We then show that, when the channel of the transmitter-receiver pair that does not face interference is noiseless, the two bounds coincide and therefore define the capacity region. The obtained results imply that, unlike in the Gaussian cognitive Z-IC, in the considered channel superposition encoding at the non-cognitive transmitter as well as Gel'fand-Pinsker encoding at the cognitive transmitter are needed in order to minimize the impact of interference. As a byproduct of the obtained capacity region, we obtain the capacity result for a generalized Gel'fand-Pinsker problem.
The Capacity Region of the Cognitive Z-interference Channel with One Noiseless Component
, 2008
Cited by 7 (2 self)
We study the discrete memoryless Z-interference channel (Z-IC) where the transmitter of the pair that suffers from interference is cognitive. We first provide upper and lower bounds on the capacity of this channel. We then show that, when the channel of the transmitter-receiver pair that does not face interference is noiseless, the two bounds coincide and therefore yield the capacity region. The obtained results imply that, unlike in the Gaussian cognitive Z-IC, in the considered channel superposition encoding at the non-cognitive transmitter as well as Gel'fand-Pinsker encoding at the cognitive transmitter are needed in order to minimize the impact of interference. As a byproduct of the obtained capacity region, we obtain the capacity result for a generalized Gel'fand-Pinsker problem.
On the Sum Secure Degrees of Freedom of Two-Unicast Layered Wireless Networks
Cited by 7 (4 self)
Abstract — In this paper, we study the sum secure degrees of freedom (d.o.f.) of two-unicast layered wireless networks. Without a secrecy constraint, the sum d.o.f. of this class of networks was studied by [1] and shown to take only one of three possible values, 1, 3/2 and 2, for all network configurations. We consider the setting where the message of each source-destination pair must be kept information-theoretically secure from the unintended receiver. We show that the sum secure d.o.f. can take the values 0, 1, 3/2, 2 and at most countably many other positive values, which we enumerate.
On the totally asynchronous interference channel with single-user receivers
 International Symp. Inf. Theory, ISIT 2009, Seoul, Korea
, 2009
Cited by 5 (0 self)
Abstract — The performance characterization of decentralized wireless networks with uncoordinated sender-destination pairs motivates the study of the totally asynchronous interference channel with single-user receivers. Since this channel is not information stable, its capacity region is determined by resorting to information density, although more amenable single-letter inner and outer bounds are provided as well. Aiming at numerical evaluation of the achievable rates, we subsequently concentrate on the inner bound for the Gaussian case. We show that taking Gaussian inputs is not the best choice in general and derive analytical conditions under which other input distributions may be optimal. Essentially, these conditions require the channel to be interference-limited. Finally, the existence of such non-Gaussian distributions with superior performance is validated numerically in different scenarios.
Secrecy Games on the One-Sided Interference Channel
Cited by 4 (0 self)
Abstract — In this paper, we study the two-user one-sided interference channel with confidential messages. In this interference channel, in addition to the usual selfishness of the users, the relationship between the two pairs of users is further adversarial in the sense that each receiver desires to eavesdrop on the communication of the other pair. We develop a game-theoretic model to study information-theoretically secure communications in this setting. We first start with a game-theoretic model where each pair's payoff is its own secrecy rate. The analysis of the binary deterministic interference channel with this payoff function shows that self-jamming by a transmitter, which injures the eavesdropping ability of its own receiver, is not excluded by the Nash equilibria. We propose a refinement of the payoff function that explicitly accounts for the desire of each receiver to eavesdrop on the other party's communication. This payoff function better captures the adversarial relationship between the two pairs of users. We determine the Nash equilibria of the binary deterministic channel for both payoff functions.
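The observation that self-jamming can survive as a Nash equilibrium is a standard equilibrium check. The toy sketch below uses entirely hypothetical payoff numbers (not the paper's binary deterministic channel) just to show the mechanics: no unilateral deviation may improve a player's payoff, and a jam/jam profile can pass that test alongside the cooperative one:

```python
import itertools

# Hypothetical 2x2 payoff table for illustration only: each transmitter
# chooses "send" or "jam"; entries are (payoff_1, payoff_2) secrecy rates.
# These numbers are made up and are NOT taken from the paper's channel model.
payoffs = {
    ("send", "send"): (1.0, 1.0),
    ("send", "jam"):  (0.0, 0.5),
    ("jam",  "send"): (0.5, 0.0),
    ("jam",  "jam"):  (0.5, 0.5),
}
strategies = ("send", "jam")

def is_nash(s1, s2):
    """(s1, s2) is a pure Nash equilibrium if no unilateral deviation helps."""
    u1, u2 = payoffs[(s1, s2)]
    no_dev_1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies)
    no_dev_2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies)
    return no_dev_1 and no_dev_2

equilibria = [s for s in itertools.product(strategies, strategies) if is_nash(*s)]
# Both ("send", "send") and ("jam", "jam") survive: mutual jamming is an
# equilibrium even though both players would prefer the cooperative profile.
```

This is exactly the kind of undesirable equilibrium the paper's refined payoff function is designed to rule out.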
Optimal Independent-Encoding Schemes for Several Classes of Discrete Degraded Broadcast Channels, arXiv:0811.4162v2
, 2009
Superposition Encoding and Partial Decoding Is Optimal for a Class of Z-interference Channels
Cited by 3 (3 self)
Abstract — We apply a technique introduced by Körner and Marton to the converse for a class of Z-interference channels. This class has the properties that, for the transmitter-receiver pair that suffers from interference, 1) the conditional output entropy is invariant with respect to the input and 2) the maximum output entropy is achieved by a single input distribution, irrespective of the interference distribution. We show that for this class of channels, superposition encoding and partial decoding is optimal. We thus provide a single-letter characterization of the capacity region, which was previously unknown.