Results 1-10 of 36
Capacity of interference channels with partial transmitter cooperation
 IEEE Transactions on Information Theory
Cited by 107 (10 self)
Abstract—Capacity regions are established for several two-sender, two-receiver channels with partial transmitter cooperation. First, the capacity regions are determined for compound multiple-access channels (MACs) with common information and compound MACs with conferencing. Next, two interference channel models are considered: an interference channel with common information (ICCI) and an interference channel with unidirectional cooperation (ICUC) in which the message sent by one of the encoders is known to the other encoder. The capacity regions of both of these channels are determined when there is strong interference, i.e., the interference is such that both receivers can decode all messages with no rate penalty. The resulting capacity regions coincide with the capacity region of the compound MAC with common information. Index Terms—Capacity region, cooperation, strong interference.
Lossy Source Coding
 IEEE Trans. Inform. Theory
, 1998
Cited by 104 (1 self)
Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding. Index Terms—Data compression, image coding, speech coding, rate-distortion theory, signal coding, source coding with a fidelity criterion, video coding.
On Maximal Correlation, Hypercontractivity, and the Data Processing Inequality studied by Erkip and Cover
Cited by 12 (1 self)
In this paper we provide a new geometric characterization of the Hirschfeld-Gebelein-Rényi maximal correlation of a pair of random variables (X, Y), as well as of the chordal slope of the nontrivial boundary of the hypercontractivity ribbon of (X, Y) at infinity. The new characterizations lead to simple proofs for some of the known facts about these quantities. We also provide a counterexample to a data processing inequality claimed by Erkip and Cover, and find the correct tight constant for this kind of inequality.
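As a concrete illustration of the central quantity, the maximal correlation of a finite-alphabet pair can be computed via the standard singular-value characterization (the second-largest singular value of the normalized joint-pmf matrix; this is the classical route, not the geometric characterization proposed in the paper). A minimal sketch, with illustrative distributions of my choosing:

```python
import numpy as np

def maximal_correlation(pxy):
    """Hirschfeld-Gebelein-Renyi maximal correlation of a finite joint pmf.

    Uses the classical characterization: form Q[x, y] = p(x, y) /
    sqrt(p(x) p(y)); the largest singular value of Q is always 1
    (constant functions), and the second-largest is the maximal
    correlation of (X, Y).
    """
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1)   # marginal of X
    py = pxy.sum(axis=0)   # marginal of Y
    Q = pxy / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)  # sorted descending
    return s[1]

# Independent pair: maximal correlation is 0.
p_indep = np.outer([0.5, 0.5], [0.3, 0.7])
# Perfectly correlated pair (X = Y, uniform): maximal correlation is 1.
p_equal = np.array([[0.5, 0.0], [0.0, 0.5]])
```

Positive marginals are assumed here; an alphabet symbol with zero probability would need to be dropped before forming Q.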
Wiretap Channel with Causal State Information
Cited by 11 (0 self)
Abstract—A lower bound on the secrecy capacity of the wiretap channel with state information available causally at both the encoder and decoder is established. The lower bound is shown to be strictly larger than that for the noncausal case by Liu and Chen. Achievability is proved using block Markov coding, Shannon strategy, and key generation from common state information. The state sequence available at the end of each block is used to generate a key, which is used to enhance the transmission rate of the confidential message in the following block. An upper bound on the secrecy capacity when the state is available noncausally at the encoder and decoder is established and is shown to coincide with the lower bound for several classes of wiretap channels with state.
El Gamal, “Three-receiver broadcast channels with common and confidential messages”
 IEEE Transactions on Information Theory
, 2012
Cited by 11 (0 self)
Abstract—This paper establishes inner bounds on the secrecy capacity regions for the general three-receiver broadcast channel with one common and one confidential message set. We consider two setups. The first is when the confidential message is to be sent to two receivers and kept secret from the third receiver. Achievability is established using indirect decoding, Wyner wiretap channel coding, and the new idea of generating secrecy from a publicly available superposition codebook. The inner bound is shown to be tight for a class of reversely degraded broadcast channels and when both legitimate receivers are less noisy than the third receiver. The second setup investigated in this paper is when the confidential message is to be sent to one receiver and kept secret from the other two receivers. Achievability in this case follows from Wyner wiretap channel coding and indirect decoding. This inner bound is also shown to be tight for several special cases. Index Terms—Secrecy capacity regions, three-receiver broadcast channels, wiretap channels.
On Marton’s inner bound for broadcast channels, arXiv:1202.0898
, 2012
Cited by 6 (4 self)
Abstract—Marton’s inner bound is the best known achievable region for a general discrete memoryless broadcast channel. To compute Marton’s inner bound one has to solve an optimization problem over a set of joint distributions on the input and auxiliary random variables. The optimizers turn out to be structured in many cases. Finding properties of optimizers not only results in efficient evaluation of the region, but it may also help one to prove factorization of Marton’s inner bound (and thus its optimality). The first part of this paper formulates this factorization approach explicitly and states some conjectures and results along this line. The second part of this paper focuses primarily on the structure of the optimizers. This section is inspired by a new binary inequality that recently resulted in a very simple characterization of the sum-rate of Marton’s inner bound for binary input broadcast channels. This prompted us to investigate whether this inequality can be extended to larger cardinality input alphabets. We show that several of the results for the binary input case do carry over for higher cardinality alphabets and we present a collection of results that help restrict the search space of probability distributions to evaluate the boundary of Marton’s inner bound in the general case. We also prove a new inequality for the binary skew-symmetric broadcast channel that yields a very simple characterization of the entire Marton inner bound for this channel.
An Achievable Region for a General Multiterminal Network and its Chain Graph Representation
, 2012
Universal polarization
 [Online]. Available: http://arxiv.org/abs/1307.7495v2
, 2013
Success Exponent of Wiretapper: A Tradeoff between Secrecy and Reliability
, 2008
Cited by 4 (0 self)
Equivocation rate has been widely used as an information-theoretic measure of security since Shannon [12]. It simplifies problems by removing the effect of atypical behavior from the system. In [11], however, Merhav and Arikan considered the alternative of using the guessing exponent to analyze Shannon’s cipher system. Because the guessing exponent captures atypical behavior, the strongest expressible notion of secrecy requires the more stringent condition that the size of the key, rather than its entropy rate, be equal to the size of the message. The relationship between equivocation and the guessing exponent is also investigated in [8][9], but it is unclear which is the better measure, and whether there is a unifying measure of security. Instead of using equivocation rate or guessing exponent, we study the wiretap channel of [2] using the success exponent, defined as the exponent with which a wiretapper successfully learns the secret after making an exponential number of guesses to a sequential verifier that gives a yes/no answer to each guess. By extending the coding scheme in [2][6] and the converse proof in [4] with the new Overlap Lemma V.2, we obtain a tradeoff between secrecy and reliability, expressed in terms of lower bounds on the error and success exponents of authorized and unauthorized decoding, respectively, of the transmitted messages. From this, we obtain an inner bound to the region of strongly achievable public, private, and guessing rate triples for which the exponents are strictly positive. The closure of this region is equivalent to the closure of the region in Theorem 1 of [2] when we treat the equivocation rate as the guessing rate. However, it is unclear if the inner bound is tight.
Wiretap channel with rate-limited feedback
 Proc. IEEE Int. Symp. on Inform. Theory (ISIT)
, 2008
Cited by 4 (0 self)
Abstract—This paper studies the problem of secure communication over a degraded wiretap channel p(y, z|x) = p(y|x)p(z|y) with a secure feedback link of rate Rf, where X is the channel input, and Y and Z are the channel outputs observed by the legitimate receiver and the wiretapper, respectively. The secrecy capacity is characterized as Cs(Rf) = max_{p(x)} min{I(X;Y), I(X;Y|Z) + Rf}. A capacity-achieving coding scheme is presented, in which the receiver securely feeds back fresh randomness at rate Rf, independent of the received channel output. The transmitter then uses the shared randomness as a secret key on top of Wyner’s coding scheme for the wiretap channel without feedback. Hence, when the receiver has a means of interacting with the transmitter, he should allocate all resources to conveying a new key rather than sending back the channel output. For the converse, a recursive argument is used to obtain the single-letter characterization.
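The characterization Cs(Rf) = max_{p(x)} min{I(X;Y), I(X;Y|Z) + Rf} can be evaluated numerically. The sketch below (an illustration of mine, not the authors' code) does this by brute-force search over input distributions for a hypothetical binary example: X passes through a BSC(p1) to give Y, which passes through a BSC(p2) to give Z, so the channel is physically degraded as required.

```python
import numpy as np

def mi(pxy):
    """Mutual information I(X;Y) in bits from a joint pmf matrix."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px * py)[mask])).sum())

def cond_mi(pxyz):
    """Conditional mutual information I(X;Y|Z) from a joint pmf tensor."""
    pz = pxyz.sum(axis=(0, 1))
    return sum(pz[z] * mi(pxyz[:, :, z] / pz[z])
               for z in range(pxyz.shape[2]) if pz[z] > 0)

def secrecy_capacity(p1, p2, rf, grid=201):
    """Cs(Rf) = max_{p(x)} min{I(X;Y), I(X;Y|Z) + Rf} for the assumed
    degraded chain X -BSC(p1)-> Y -BSC(p2)-> Z, by grid search over p(x)."""
    pyx = np.array([[1 - p1, p1], [p1, 1 - p1]])  # p(y|x)
    pzy = np.array([[1 - p2, p2], [p2, 1 - p2]])  # p(z|y)
    best = 0.0
    for q in np.linspace(0.0, 1.0, grid):
        px = np.array([q, 1 - q])
        # p(x, y, z) = p(x) p(y|x) p(z|y), the degraded factorization.
        pxyz = px[:, None, None] * pyx[:, :, None] * pzy[None, :, :]
        best = max(best, min(mi(pxyz.sum(axis=2)), cond_mi(pxyz) + rf))
    return best
```

With Rf = 0 this recovers the classical degraded-wiretap secrecy capacity max_{p(x)} [I(X;Y) - I(X;Z)] (since I(X;Y|Z) = I(X;Y) - I(X;Z) under the Markov chain X - Y - Z), and for large Rf it saturates at the main channel capacity 1 - h(p1).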