### Table 2: Cell loss probabilities for Example 2.

"... In PAGE 15: ... We shall use the same eight streams in both Examples 2 and 3, but with different stream loads in the two cases. The load values for Example 2 are shown in Table 2. As in Example 1, Streams S3 and S4 each have offered load 0.9, and Streams S5-S8 each have load 0.5.... In PAGE 16: ...8, so the fair cell loss probability for each of these streams is (1.8 − 1)/1.8 = 0.444, to first order. The third column of Table 2 shows this idealized fair performance. As in Example 1, Scheme NC gives all of Streams S1-S4 roughly the same poor performance that only S3 and S4 deserve.... ..."

### Table 3: Cell loss probabilities for Example 3.

"... In PAGE 17: ... Our third example further explores fairness issues when streams of different intensities collide. The load values for Example 3 are shown in Table 3. Note that both output lines 0 and 3 are overloaded in this case.... In PAGE 18: ... Streams S3 and S4 should each see this same 41% loss, because cells get pushed out of output line 3's queue without regard for their particular streams. The loss data for Streams S3 and S4 in Table 3 closely match this projection. Output line 0's overall loss probability should be: (0.4 + 0.8 − 1)/(0.4 + 0.8) = 0.167.... ..."
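The first-order figures quoted in these snippets all come from one relation: when an output line of unit service rate carries total offered load greater than 1, the excess must be dropped, so the loss fraction is (total load − 1)/(total load). A minimal sketch checking the two quoted values (the loads 1.8 and 0.4 + 0.8 are taken from the snippets; the function name is ours):

```python
def overload_loss(total_load, service_rate=1.0):
    """First-order cell loss fraction on an overloaded output line:
    the excess load divided by the total offered load."""
    excess = max(total_load - service_rate, 0.0)
    return excess / total_load

# Example 2: two streams of load 0.9 share one output line.
print(round(overload_loss(0.9 + 0.9), 3))  # 0.444

# Example 3: output line 0 carries streams of load 0.4 and 0.8.
print(round(overload_loss(0.4 + 0.8), 3))  # 0.167
```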

### Table 2: Cell loss probability (K = k = buffer length) for different analytical methods

"... In PAGE 4: ...+N×D/D/1 and MMPP/M/1/K. These are summarised in Table 2 with regard to the cell loss probability calculation. ... ..."

Cited by 1

### Table 1: Cell loss probabilities for Example 1. Basak, Choudhury and Hahne

"... In PAGE 10: ...confidence intervals for the data points in this table are all narrower than 0.002. Before consulting the actual cell loss data, let us do a crude first-order analysis of this load pattern. Imagine that the eight streams are smooth, rather than bursty, but with the same flow rates given in Table 1. For these smooth flows, let us define fair loss performance as follows.... In PAGE 11: ... The term "1" in the formula refers to the service rate of the output port. The third column of Table 1 gives this idealized fair performance for the load pattern of Example 1. Stream S1 has a load of 0.... In PAGE 13: ... (The difference between NC's and BP's loss mechanisms will be clearer in later examples with asymmetric loads.) We do not have a good first-order model to predict the performance of Scheme RBP under these conditions, but it is clear from the data in Table 1 that an effect similar to NC's and BP's is at work in Scheme RBP.... In PAGE 14: ...5 each, their loss probabilities are each (0.5 − 0.333)/0.5 = 0.334. As shown in Table 1, this excess is backed up into Stage 0 and dropped. The Restricted Backpressure Scheme RBP fixes this problem for S5 and S6, but goes too far in the opposite direction: RBP causes such heavy losses for Stream S3 in both Stages 1 and 2 that S3's total loss substantially exceeds that of rival Streams S2 and S4.... ..."
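The per-stream arithmetic in the last excerpt follows the same first-order logic applied to fair shares: a stream loses the part of its load that exceeds its fair share of the output line. A short check of the quoted figure (the share 0.333 is the snippet's rounding of 1/3 for three streams on a unit-rate line; the function name is ours):

```python
def fair_stream_loss(load, fair_share):
    """First-order loss fraction for a stream whose offered load
    exceeds its fair share of the output line's service rate."""
    return max(load - fair_share, 0.0) / load

# Streams of load 0.5 sharing a unit-rate line three ways; with the
# share rounded to 0.333 as in the text, the loss comes out as 0.334.
print(round(fair_stream_loss(0.5, 0.333), 3))  # 0.334
```

Note that with the exact share 1/3 the result would be 1/3 as well; the 0.334 in the snippet is purely a rounding artifact.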

### Table 1: Experimental results. The figures are averages and standard deviations over 10 runs. The Average Ploss is the average (target) cell loss probability of the misclassified patterns.

1993

"... In PAGE 5: ...gain=0.01, momentum term=0.8). The network was trained 10 times on each experiment and the same initial weights were used in all cases. Table 1 shows the average percentages over 10 runs of bad accepts and bad rejects, with standard deviations, for all three experiments. The Average Ploss is the average (target) cell loss probability of the misclassified patterns.... In PAGE 5: ...1. E1: Complete set First, the network was trained on 20000 rows from the complete (asymmetric) set and tested on the remaining 3512 rows. As can be seen (E1 in Table 1), the network makes far fewer bad accept decisions than bad rejects, as explained above.... In PAGE 5: ...2. E2: Equal scaling The second experiment (E2 in Table 1) was to remove the accept situations that are further away from... ..."

Cited by 4

### Table 5.2: The accuracy of approximations for cell loss probability in homogeneous cases, 30 sources with three cell rate levels (R31-R60), Ploss = 10^-9

### Table 1: State transition probabilities a and b for given average burst lengths and average cell loss rates. The probability of loss for the first cell, p1, is set equal to 1.

1994

"... In PAGE 2: ... By adjusting the state transition probabilities, a and b, one can achieve the desired values for these parameters. Table 1 shows the values used in our simulations. The data were obtained by averaging the results of 100 trials, where each trial contained 10,000 cells.... ..."
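The caption and snippet describe a two-state burst-loss model of the Gilbert type. Assuming the usual convention that a is the probability of entering the loss state and b the probability of leaving it (and, per the caption, p1 = 1 so every cell sent in the loss state is dropped), the long-run loss rate is a/(a + b) and the mean burst length is 1/b. A hypothetical simulation sketch under those assumptions (the parameter values are illustrative, not taken from the paper's Table 1):

```python
import random

def simulate_two_state_loss(a, b, n_cells, seed=0):
    """Two-state loss model: with probability a the channel enters the
    loss state, with probability b it leaves it. Every cell sent while
    in the loss state is dropped (p1 = 1). Returns the observed loss rate."""
    rng = random.Random(seed)
    lossy = False
    losses = 0
    for _ in range(n_cells):
        if lossy:
            losses += 1
            if rng.random() < b:
                lossy = False
        elif rng.random() < a:
            lossy = True
    return losses / n_cells

# The observed rate should approach a / (a + b); bursts average 1 / b cells.
rate = simulate_two_state_loss(a=0.01, b=0.5, n_cells=100_000)
print(round(rate, 4))
```

This mirrors the experimental setup in the snippet, where each trial contains 10,000 cells and results are averaged over 100 trials.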

Cited by 1