### Table 2. The error in $l_1$ norm of the stationary probabilities between the rotated Markov chain and its approximating Markov chain with fewer states $K_1$, for different parameter values. Imbedded at the arrival epochs: The transition matrices of the imbedded Markov chain and the rotated Markov chain can be found similarly. a) Use the infinite model as an approximation. The transition probability matrix for the rotated Markov chain is the matrix obtained by augmenting the last row of the northwest corner of the transition probability matrix for the infinite model. The infinite model is stable since the traffic intensity $\rho = \lambda/\mu < 1$. The solution of the stationary probabilities for the infinite model is the same as in (4). The stationary probabilities for the rotated Markov chain are given by

"... In PAGE 10: ... b) Use a finite model with fewer states to approximate the stationary probabilities of the rotated Markov chain. As in b) for the M/G/1/K queue, one may find the error $e_2(K, K_1)$ between $\pi_k^{(K)}$ and $\pi_k^{(K_1)}$ in $l_1$ norm, which is given by $e_2(K, K_1) = \frac{2\,(\rho^{K_1+1} + \rho^{K_1+2} + \cdots + \rho^{K+1})}{1 + \rho + \cdots + \rho^{K+1}} = e_1(K+1, K_1)$. Therefore, Table 2 provides $e_2$ too. The M/G^c/1 queue: a) Use the corresponding infinite model.... ..."
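The closed form above can be checked numerically. The sketch below assumes stationary probabilities of the geometric form $\pi_k \propto \rho^k$ implied by the formula; the function names are illustrative, not from the paper.

```python
# Numerical check that the l1 distance between the K-state and K1-state
# truncated-and-normalized distributions matches the closed form
# e1(K, K1) = 2 (rho^{K1+1} + ... + rho^K) / (1 + rho + ... + rho^K),
# so that e2(K, K1) = e1(K+1, K1).

def pi(rho, K):
    """Stationary probabilities pi_0..pi_K, truncated and normalized."""
    w = [rho**k for k in range(K + 1)]
    s = sum(w)
    return [x / s for x in w]

def l1_error(rho, K, K1):
    """l1 distance between the K-state and K1-state approximations,
    padding the shorter probability vector with zeros."""
    pK, pK1 = pi(rho, K), pi(rho, K1)
    pK1 += [0.0] * (K - K1)
    return sum(abs(a - b) for a, b in zip(pK, pK1))

def e1(rho, K, K1):
    """Closed-form l1 error."""
    num = sum(rho**k for k in range(K1 + 1, K + 1))
    den = sum(rho**k for k in range(K + 1))
    return 2 * num / den
```

For any $\rho$, `l1_error(rho, K, K1)` agrees with `e1(rho, K, K1)` to machine precision, which is why the same table serves for both $e_1$ and $e_2$.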

### Table 2. Radio timing parameters of the Crab Pulsar. The integer part of t_{0,geo} denotes the barycentric epoch (TDB) of f0, f1, f2. The remaining fraction denotes the geocentric arrival time of a pulse (corrected for infinite observation frequency). Columns: validity [MJD], t_{0,geo} [MJD], f0 [s^-1], f1 [s^-2], f2 [s^-3].

### Table 2: Elementary network point processes and their intensities.

1996

"... In PAGE 12: ... We denote by $A_S$ the point process of net arrival epochs to the S-queue, which consists of S-customer external arrival epochs and customer routing epochs from a class in $S^c$ to a class in $S$. We can thus express point process $A_S$ as the superposition (see Appendix A) of the elementary network point processes shown in Table 2, as follows: $A_S = \sum_{j \in S} A^0_j + \sum_{i \in S^c} \sum_{j \in S} R_{ij}$. Similarly, we denote by $D_S$ the point process of net departure epochs from the S-queue, consisting of S-customer external departure epochs and customer routing epochs from a class in $S$ to a class in $S^c$: $D_S = \sum_{j \in S} D^0_j + \sum_{j \in S} \sum_{i \in S^c} R_{ji}$. Notice that we ignore customer routing epochs within classes in $S$, since they do not change the number of customers in the S-queue. For convenience of notation we shall also write $p(i, S) = \sum_{j \in S} p_{ij}$ and $\lambda(S) = \sum_{j \in S} \lambda_j$. We denote the Palm probabilities and expectations with respect to point processes $A_S$ and $D_S$ by $P_{A_S}(\cdot)$, $E_{A_S}[\cdot]$ and $P_{D_S}(\cdot)$, $E_{D_S}[\cdot]$, respectively.... ..."
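The superposition $A_S$ is just the merge of the component epoch sequences into one ordered stream. A minimal sketch, with made-up epoch values standing in for the paper's point processes:

```python
import heapq

# Superposition of point processes: A_S = sum_j A_j^0 + sum_{i,j} R_ij.
# Each component process is represented as a sorted list of event epochs;
# the superposition is their merge into one sorted sequence.

def superpose(*epoch_lists):
    """Merge several sorted sequences of event epochs into one point process."""
    return list(heapq.merge(*epoch_lists))

external_arrivals = [0.4, 1.1, 2.5]  # A_j^0 epochs for j in S (illustrative)
routing_into_S = [0.9, 1.7]          # R_ij epochs, i in S^c, j in S (illustrative)
A_S = superpose(external_arrivals, routing_into_S)
# A_S is the single ordered stream of net arrival epochs to the S-queue.
```

Routing epochs within $S$ are simply omitted from the merge, matching the remark that they do not change the number of customers in the S-queue.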

### Table 3: Packet length distributions for the considered arrival processes

"... In PAGE 21: ... In varying these four distributions, the mass of the packet length distribution was shifted from favoring short packets to favoring long packets. The four packet length distributions are shown in Table 3. The mean utilization of the arrival process to each input queue remained fixed at 10% for all experiments.... ..."

### Table 1 Model formulations for integrated GPS-GLONASS positioning and their relative redundancy

2001

"... In PAGE 7: ... With such a model formulation procedure, a total of 18 different models can be formed. These model formulations and their relative redundancy are listed in Table 1 (Wang, 1999b).... ..."

Cited by 3

### Table 4. Component Proper Motions Epoch 1 to Epoch 3

"... In PAGE 9: ... The data from Tables 2 and 3 were used in the computation of the proper motions for those maser components identified in both epochs. Relevant proper motion information is shown in Table 4. Columns 1 and 2 list the right ascension and declination of the feature as measured during Epoch 1.... In PAGE 9: ... We chose to show only the proper motions computed over the 40 days separating Epochs 1 and 3, due to the relative size of the errors in comparison to the determined proper motions. From Table 4 it can be seen that the errors in the determined proper motions are a significant fraction of the actual position change. The shorter time periods between Epochs 1 and 2 and Epochs 2 and 3 yield less reliable results.... ..."
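The reason shorter baselines yield less reliable motions falls out of simple error propagation: the position uncertainties are fixed, so the motion error scales as $1/\Delta t$. A sketch with made-up numbers (not from Table 4):

```python
import math

# Proper motion from two epoch positions: mu = (pos2 - pos1) / dt,
# with the position uncertainties propagated in quadrature.
# All numeric values below are illustrative, not the paper's data.

def proper_motion(pos1_mas, pos2_mas, sigma1_mas, sigma2_mas, dt_days):
    """Return (mu, sigma_mu) in mas/day for one coordinate."""
    mu = (pos2_mas - pos1_mas) / dt_days
    sigma_mu = math.sqrt(sigma1_mas**2 + sigma2_mas**2) / dt_days
    return mu, sigma_mu

# Longer baseline (Epochs 1 -> 3, 40 d) vs. shorter (Epochs 1 -> 2, 15 d),
# with the same 0.5 mas position uncertainty at each epoch:
mu_long, err_long = proper_motion(0.0, 4.0, 0.5, 0.5, 40.0)
mu_short, err_short = proper_motion(0.0, 1.5, 0.5, 0.5, 15.0)
# err_short > err_long: the shorter baseline gives a noisier motion.
```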

### Table 4: Results of literal comparisons to randomly initialized networks

"... In PAGE 25: ... Table 3 (b) shows the convergence time in epochs for DBT and randomly initialized networks to reach the 66%, 95%, and 98% criteria, along with the ratio of the times for the two methods. The second two columns of Table 4 (a) show the final mean performance level of each condition, along with the 99.0% confidence interval.... In PAGE 25: ... "Epochs of Normalization Curve" shows the number of epochs in the "normalized" curve, which is taken to be all epochs up to the last epoch of statistically significant difference. A "†" indicates that the literal network was significantly asymptotically inferior to the random network; remaining entries in that row and in the corresponding row of Table 4 (b) were for the whole curve. The "?" indicates that, for the PB-onemale task, curves crossed during learning, from literal being superior to random being superior.... In PAGE 26: ... Literal vs. Random: Detailed results comparing literal and randomly initialized networks on the seven tasks studied will be discussed below and shown in Table 4. Here, we present highlights of those results.... ..."

### Table 3: Results of DBT comparisons to randomly initialized networks

"... In PAGE 25: ... These results are broken down by task in Tables 3 and 4. The second two columns of Table 3 (a) show the final mean performance level of each condition, along with the 99.0% confidence interval.... In PAGE 25: ... "Epochs of normalization curve" shows the number of epochs in the "normalized" curve, which is taken to be all epochs up to the last epoch of statistically significant difference. The "?" in Table 3 (a) indicates that there were no epochs of significant difference between DBT and random for the DNA task. Remaining entries in the row and in the corresponding row of Table 3 (b) are for the entire curve, though there was no significance there.... In PAGE 25: ... The "?" in Table 3 (a) indicates that there were no epochs of significant difference between DBT and random for the DNA task. Remaining entries in the row and in the corresponding row of Table 3 (b) are for the entire curve, though there was no significance there. Table 3 (b) shows the convergence time in epochs for DBT and randomly initialized networks to reach the 66%, 95%, and 98% criteria, along with the ratio of the times for the two methods.... In PAGE 25: ... Remaining entries in the row and in the corresponding row of Table 3 (b) are for the entire curve, though there was no significance there. Table 3 (b) shows the convergence time in epochs for DBT and randomly initialized networks to reach the 66%, 95%, and 98% criteria, along with the ratio of the times for the two methods. The second two columns of Table 4 (a) show the final mean performance level of each condition, along with the 99.... In PAGE 26: ... [Figure 10: Summary of the amount of time in the period of DBT statistically superior performance over random initialization.] ... the number of epochs in the DBT and random columns for the 66% convergence criterion in Table 3 by the number of weight updates per epoch. Note that the y axis is logarithmic, so, for example, over six million weight updates were saved by using DBT instead of random initialization in the Chess problem to reach this criterion.... In PAGE 27: ... Our primary results were as follows: DBT vs. Random: Detailed results comparing DBT and randomly initialized networks on the seven tasks studied will be discussed below and shown in Table 3. Here, we present highlights of those results.... ..."

### TABLE 3: Service epochs and expansions

2001

Cited by 18

### Table 3 Node Operation in Residual Mode

"... In PAGE 12: ...hat the application is willing to tolerate is 13.5, as shown in Figure 3. We will explain in detail the transmission of messages for the residual mode of operation, for the sample error filters shown in the figure. In Table3 we present an example based on the aggregation tree of Figure 3. In this table we show the current observed values (V Curr), the newly calculated partial aggregate value (N ewAggr) and the last transmitted partial aggregate value of each node (LTA), the difference between these two values (Diff ), and whether the node makes a transmission or not based on whether the absolute value of this deviation is greater than the maximum permitted error in the node (jDiff j gt; Ei).... ..."