### Table 2 Number of Times that Max-Flow is Executed in c = 1 and c = 2 passes in CutMap

1995

"... In PAGE 5: ... We evaluate the impact of applying Theorem 3 in the optimal min-cost K-feasible cut algorithm by reporting the number of times that the max-flow routine is executed in c = 1 and c = 2 passes. Table 2 ... ..."

Cited by 38

### Table 1 Number of times that Max-Flow is executed in c = 1 and c = 2 passes in CutMap.

1995

"... In PAGE 12: ... We evaluate the impact of applying Theorem 2 in the optimal min-cost K-feasible cut algorithm by reporting the number of times that the max-flow routine is executed in c = 1 and c = 2 passes. Table 1 shows the numbers reported by CutMap for six small benchmarks. Numbers in rows I and II are the numbers reported by CutMap without and with applying Theorem 2 for speed-up, respectively. ... ..."

### Table 1: Max-flow min-cut upper bounds and generalized block Markov lower bounds on C(P; P ) and CFD(P; P ).

2006

"... In PAGE 3: ... In the following discussion we summarize our main results and provide an outline for the rest of the paper. Bounds on capacity: In Section 2, we use the max-flow min-cut upper bound [4] and the generalized block Markov lower bound [4, 5] on the capacity of the relay channel to derive upper and lower bounds on the capacity of the general and FD-AWGN relay channels (see Table 1). The bounds are not tight for the general AWGN model for any a, b > 0 and are tight only for a restricted range of these parameters for the FD-AWGN model. ... In PAGE 6: ... that R(P; P ) is in fact achievable by evaluating the mutual information terms in (2) using a jointly Gaussian (U, X, X1). We now show that the lower bound in (2) with the power constraints is upper bounded by R(P; P ) in Table 1. It is easy to verify that I(X, X1; Y) ≤ C((1 + b² + 2bρ)P/N), where ρ is the correlation coefficient between X and X1. ... In PAGE 7: ... I(U; Y1|X1) + I(X; Y|X1, U) ≤ (1/2) log((a²(1 − ρ²)P + N)/N) + (1/2) log((P + N)/(a²P + N)) ≤ C(P/N). For a > 1, note that h(Y1|X1, U) = h(aX + Z1|X1, U) = h(aX + Z|X1, U) ≥ h(X + Z|X1, U) = h(Y|X1, U), and hence I(U; Y1|X1) + I(X; Y|X1, U) ≤ C(a²(1 − ρ²)P/N). Note that the above bounds are achieved by choosing (U, X1, X) to be jointly Gaussian with zero mean and an appropriately chosen covariance matrix. Performing the maximization over ρ gives the lower bound result in Table 1. This completes the derivation of the lower bound. ... In PAGE 9: ... First note that the minimum energy-per-bit for the direct channel, given by 2N ln 2, is an upper bound on the minimum energy-per-bit for both relay channel models considered. Using Theorem 1 and the bounds on capacity given in Table 1, we obtain the lower and upper bounds on the minimum energy-per-bit ... In PAGE 10: ... i.e., (1 + a² + b²)/((1 + a²)(1 + b²)) ≤ Eb/(2N ln 2) ≤ min{1, (a² + b²)/(a²(1 + b²))}. To prove the lower bound we use the upper bound Ĉ(P; P ) on capacity in Table 1 and the relationship of Theorem 1 to obtain the bound. Substituting the upper bound given in Table 1 and taking limits as P → 0, we obtain the expression Eb/(2N ln 2) ≥ min{ min over 0 ≤ θ < a²/b² of (1 + θ)(1 + a²)/(ab√θ + √(1 + a² − θb²))², min over θ ≥ a²/b² of (1 + θ)/(1 + a²) }. To complete the derivation of the lower bound, we analytically perform the minimization. For θ ≥ a²/b², it is easy to see that the minimum is achieved by making θ as small as possible, i.... In PAGE 10: ... Now we turn our attention to upper bounds on minimum energy-per-bit. Using the lower bound on capacity given in Table 1 and the relationship in Theorem 1, we can obtain an ... In PAGE 11: ... Table 1 satisfies the conditions on C(P; P ) in Lemma 1, and therefore, the best upper bound is given by Eb ≤ inf over θ ≥ 0 of lim as P → 0 of (1 + θ)P/R(P; θP). Now we show that this bound gives Eb ≤ 2N ln 2 · min{1, (a² + b²)/(a²(1 + b²))}. Substituting the lower bound R(P; P ) from Table 1 in Theorem 1 and taking the limit as P → 0, for a > 1 we obtain Eb/(2N ln 2) ≤ min{ min over 0 ≤ θ < (a² − 1)/b² of (1 + θ)a²/(b√(θ(a² − 1)) + √(a² − θb²))², min over θ ≥ (a² − 1)/b² of (1 + θ)/a² }. To evaluate this bound we use the same approach we used in evaluating the lower bound. We consider the two cases θ < (a² − 1)/b² and θ ≥ (a² − 1)/b², and find that the minimization is achieved for θ = (a² − 1)b²/(b⁴ + 2b² + a²) < (a² − 1)/b², and the bound is given by the expression in the theorem. ... In PAGE 11: ... The new definition is E(n) = (1/(nRn))(max over k of E(n)(k) + E(n)r). It is easy to see that the bounds in this section hold with b replaced by b/√θ. 4 Side-Information Lower Bounds: The lower bounds on capacity given in Table 1 are based on the generalized block Markov encoding scheme. In this scheme, the relay node is either required to fully decode the message transmitted by the sender or is not used at all. ... In PAGE 28: ... Note that I(X; Y1, YD, YR|X1) = I(X; Y1|X1) + I(X; YD|X1, Y1) + I(X; YR|X1, Y1, YD) = I(X; Y1|X1) + I(X; YD|X1, Y1) = h(Y1|X1) − h(Y1|X, X1) + h(YD|X1, Y1) − h(YD|X, X1, Y1) ≤ h(Y1|X1) + h(YD|X1, Y1) − log 2πeN = h(Y1|X1) + h(YD|Y1) − log 2πeN ≤ (1/2) log 2πe Var(Y1|X1) + (1/2) log 2πe Var(YD|Y1) − log 2πeN = C(a²P(1 − ρ²)/N) + C(P/(a²P + N)). Similarly, it can be shown that I(X, X1; YD, YR) = I(X; YD, YR) + I(X1; YD, YR|X) = I(X; YD) + I(X; YR|YD) + I(X1; YR|X) + I(X1; YD|X, YR) = I(X; YD) + I(X; YR|YD) + I(X1; YR|X) ≤ C(P/N) + C(b²ρ²NP/((b²P(1 − ρ²) + N)(P + N))) + C(b²P(1 − ρ²)/N). Again both terms are maximized for ρ = 0. As a result the following upper bound on capacity can be established: C ≤ min{C(P/N) + C(b²P/N), C((1 + a²)P/N)}. Upper and lower bounds in Table 1 can be readily established. ... ..."

Cited by 6
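
The PAGE 10 excerpt above brackets the minimum energy-per-bit in closed form: (1 + a² + b²)/((1 + a²)(1 + b²)) ≤ Eb/(2N ln 2) ≤ min{1, (a² + b²)/(a²(1 + b²))}. A small sketch that evaluates both sides numerically (the function name and default noise power are mine, not from the paper):

```python
import math

def eb_bounds(a, b, N=1.0):
    """Evaluate the quoted lower/upper bounds on minimum energy-per-bit.

    a, b are the channel gains from the snippet; both bounds scale the
    direct-channel baseline 2*N*ln(2) stated in the PAGE 9 excerpt.
    Returns (lower, upper).
    """
    base = 2.0 * N * math.log(2.0)  # direct-channel minimum energy-per-bit
    lower = base * (1 + a**2 + b**2) / ((1 + a**2) * (1 + b**2))
    upper = base * min(1.0, (a**2 + b**2) / (a**2 * (1 + b**2)))
    return lower, upper
```

For a = b = 1 the lower bound is (3/4)·2N ln 2 while the upper bound collapses to the direct-channel value 2N ln 2, consistent with the snippet's remark that the bounds are not tight in general.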

### Table 2: Experiment Result of MaxFlow

2004

"... In PAGE 4: ...odes. They have the same demand as 100. The unicast path between any pair of nodes with each session is determined by shortest-path routing. Table 2 shows the result of MaxFlow with different approximation ratios. The overall throughput is the aggregate receiving rate of all session members, i.... In PAGE 6: ... The overhead of this extra step is reflected in the second part. From the data in Table 4, we have the same observation as from Table 2, except that the rate of session 2 is increased, at the price of dragging down the rate of session 1. The overall throughput also drops for the same reason. ... ..."

Cited by 6
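
The excerpt defines overall throughput as the aggregate receiving rate of all session members. A one-liner capturing that definition (illustrative; the nested-list data layout is an assumption of mine):

```python
def overall_throughput(sessions):
    """Aggregate receiving rate over all members of all sessions,
    per the snippet's definition of overall throughput.

    sessions: list of sessions, each a list of per-member receiving rates.
    """
    return sum(rate for members in sessions for rate in members)
```

Under this definition, raising one session's rate at the price of dragging down another's (as the PAGE 6 excerpt describes) can still lower the aggregate.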


### Table 1 (Min Cut)

1994

"... In PAGE 21: ... density is 23.6%. - Circuit 3 consists of 2670 cells and 3128 nets; its density is 50.3%. The density of a circuit is the total area of the cells to be placed divided by the area of the master. Table 1 presents the results of a min-cut based placement procedure (cf. LAUTHER (1979) for a detailed discussion of the procedure); Table 2 shows the results for Gordian, a method based on an energy model (KLEINHANS, SIGL, JOHANNES (1988)). ... ..."

Cited by 6
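
The snippet defines a circuit's density as the total area of the cells to be placed divided by the area of the master, and min-cut placement works by minimizing the number of nets crossing each partition. A tiny sketch of both quantities (function names and data layout are mine, not from the cited work):

```python
def circuit_density(cell_areas, master_area):
    """Density per the snippet: total cell area / master area."""
    return sum(cell_areas) / master_area

def cut_size(nets, side):
    """Number of nets crossing a bipartition -- the min-cut objective.

    nets: list of nets, each a list of cell ids.
    side: mapping from cell id to partition 0 or 1.
    """
    return sum(1 for net in nets if len({side[c] for c in net}) > 1)
```

A min-cut placer repeatedly bipartitions the cell set to minimize `cut_size`, then recurses on each half; the density figures indicate how tightly the resulting placement fills the master.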