### Table 1. Self-Timed Asynchronous Cell Set

1994

"... In PAGE 10: ... The cells implemented include the Muller C-element, transition LATCHes, TOGGLE, generalized C-element, CALL element, SELECT, Q-SELECT, ring-style ARBITER, and SEQUENCER. Some performance results for each element are given in Table 1. This table also compares the Xilinx FPGA implementation with a CMOS 2G6d implementation [3].... ..."

Cited by 4
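The Muller C-element named in the excerpt above has a simple behavioral rule: the output follows the inputs when they agree and holds its previous value when they disagree. A minimal behavioral sketch (my illustration, not the paper's FPGA cell):

```python
# Muller C-element: output goes high only when both inputs are 1,
# goes low only when both are 0, and otherwise holds its last state.
class MullerC:
    def __init__(self, initial=0):
        self.out = initial

    def step(self, a, b):
        if a == b:           # inputs agree: output follows them
            self.out = a
        return self.out      # inputs disagree: hold previous output

c = MullerC()
print(c.step(1, 0))  # 0 (inputs disagree, holds initial state)
print(c.step(1, 1))  # 1 (both high)
print(c.step(1, 0))  # 1 (disagree again, holds)
print(c.step(0, 0))  # 0 (both low)
```

This hold-until-agreement behavior is what makes the C-element the basic synchronizing primitive in self-timed circuits.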

### Table V. Mean final cost function values and associated RMS marker distance and joint parameter errors after 10 000 function evaluations obtained with the parallel synchronous and asynchronous PSO algorithms. Standard deviations are indicated in parentheses. Each set of 10 optimizations used either 10 different sets of initial guesses (synchronous and asynchronous) or 1 set of initial guesses 10 times (asynchronous).

2006

Cited by 1

### Table 9.12 Percentage Asynchronous Metastates in Training Set Viterbi Metastate Sequences for MM-FHMM and PT-FHMM with 2 observation streams, 2 chains, 6 states per chain

2001

Cited by 5

### Table 5. Concurrently connected users with kernel poll and asynchronous threads.

in Abstract

"... In PAGE 6: ...4 Asynchronous threads Four different settings for asynchronous threads have been evaluated in the benchmarks: 50 threads in the pool without kernel poll, and 25, 50, and 75 threads in the pool with kernel poll activated. Table 4 shows the number of concurrent connections with asynchronous threads without kernel poll, while Table 5 shows the number of concurrent connections with asynchronous threads and kernel poll. Operating system Connections SuSE 9.... ..."
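"Kernel poll" in the excerpt above refers to readiness-based I/O multiplexing (epoll/kqueue-style), where one thread asks the kernel which of many registered connections are ready instead of dedicating a thread per connection. A loose sketch using Python's `selectors` module (my illustration, not the benchmarked server; a `socketpair` stands in for an accepted client connection):

```python
import selectors
import socket

# One poll-driven loop multiplexes many connections; here just one pair.
sel = selectors.DefaultSelector()        # epoll/kqueue where available
server_side, client_side = socket.socketpair()
server_side.setblocking(False)
sel.register(server_side, selectors.EVENT_READ)

client_side.sendall(b"ping")

# The event loop asks the kernel which registered sockets are readable,
# then services only those, so idle connections cost no thread.
for key, _events in sel.select(timeout=1.0):
    data = key.fileobj.recv(4096)
    key.fileobj.sendall(data)            # echo the request back

reply = client_side.recv(4096)
sel.unregister(server_side)
server_side.close()
client_side.close()
print(reply)  # b'ping'
```

Combining such a poll loop with a small pool of worker threads, as in the benchmark, bounds thread count independently of connection count.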

### Table 3 Speedup of the best partially asynchronous program over the best competitor.

2001

"... In PAGE 10: ...1.1. Genetic Algorithms. Figures 3 and 4 show speedups over the serial programs for the synchronous, asynchronous, and different age settings (0, 5, 10, 20, and 30) of the partially asynchronous parallel programs. Also shown in Table 3 (and the last white bar in Figures 3 and 4) is the speedup of the best partially asynchronous program over the best competitor (i.e.... ..."

### Table 2. Summary of experimental results for the Asynchronous Algorithm

1997

"... In PAGE 6: ...ions, trend and step size. Details of these implementations are provided in the appendix. The main parameters were also varied to determine a good set of values, as well as to obtain an indication of the robustness of the method. Table 2 summarizes the test results. The result values reported are averages over all ten problems.... ..."

Cited by 3

### Table 2. Axioms for choice and asynchronous parallel composition.

"... In PAGE 12: ... $\mathrm{Exp}(\langle A, f \rangle) = \mathrm{rec}\,x.\left(\sum_{i \in \{1,\dots,n\}} a_i.\mathrm{Exp}(\langle A_i, g \rangle)\right)$ for a new variable $x$, automata $A_i = (Q, \Sigma, \delta \setminus \{(q_0, a, q) \mid a \in \Sigma,\ q \in Q\}, q_i)$ over $X_A \cup \{x\}$ and function $g$ extending $f$ such that $g(x) = q_0$. Note that we have implicitly used the fact that the operator + is commutative and associative, up to bisimulation (see the equations in Table 2). Note also that the second rule is actually not needed: we added it just to associate a finite process to an acyclic automaton.... In PAGE 13: ... Let $P$, $Q$ be finite processes. Then the languages of $[\![P]\!]$ and $[\![Q]\!]$ coincide iff the normal forms of $P$ and $Q$ are equated by using the ACI axioms of + (see Table 2) and the axiom $a.P + a.Q = a.(P + Q)$. Once more, note how the equation can be interpreted as a left-to-right rewriting rule, obtaining for each process a further reduced normal form. It is important to realise that this axiom could not simply be added to the set of equations in Tables 2 and 3, since critical pairs would arise because it is not compatible with the distributivity of eager parallel composition.... ..."
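The excerpt's normal-form argument can be made concrete with a small encoding (mine, not the paper's): represent a finite process as a set of (action, continuation) pairs. A `frozenset` then gives the ACI axioms of + (associativity, commutativity, idempotence) for free, and merging branches with equal actions implements the extra axiom $a.P + a.Q = a.(P + Q)$:

```python
# A process is a collection of (action, continuation) branches;
# the continuation is itself such a collection (empty = nil).
def norm(branches):
    # Merge branches sharing an action: a.P + a.Q -> a.(P + Q),
    # applied recursively so each process reaches a unique normal form.
    merged = {}
    for action, cont in branches:
        merged.setdefault(action, set()).update(norm(cont))
    # frozenset of branches = choice modulo ACI.
    return frozenset((a, frozenset(ps)) for a, ps in merged.items())

nil = frozenset()
# a.b.0 + a.c.0   versus   a.(b.0 + c.0)
p = [("a", [("b", nil)]), ("a", [("c", nil)])]
q = [("a", [("b", nil), ("c", nil)])]
print(norm(p) == norm(q))  # True: same normal form, same language
```

Equal normal forms here correspond exactly to language (trace) equality for these finite processes, which is the "iff" stated in the excerpt.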

### Table 6.6: Transition rules for asynchronous CCS

in Functionality, Polymorphism, and Concurrency: A Mathematical Investigation of Programming Paradigms

1997

Cited by 3

### Table 8 DBN and ANN AF recognition compared on test set data. DBN system is built on final model parameters, asynchronous CPTs and all-parameter training.

"... In PAGE 5: ... Recognition with the asynchronous models results in larger numbers of feature combinations occurring in the output, 288 compared to 79 after the final embedded training step. Table 8 gives AF recognition results for a system where the final model observation GMMs are combined with the asynchronous feature CPTs developed using intermediate parameters, and then all-parameter embedded training performed until convergence. Also shown are the ANN results of section 5.... ..."

### Table 16. We denote by NP the total number of processors across which the array is distributed and num_send_procs the number of processors that must send data to the processor executing the routine. This second information is determined when the send and receive sets for pairs of processors are computed.

```
asynchronous redistribution routine
  // compute the send and receive sets for each array dimension
  // compute the offsets in the receive buffer for each processor
  // post a receive request (MPI_Irecv) to all processors
  // that must communicate with me
  for (root = 0; root < NP; root++) {
```

"... In PAGE 68: ... Table 16: Asynchronous redistribution routine The asynchronous method tries to minimize the synchronization induced by... ..."
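The key ordering in the routine above is that every processor posts its non-blocking receives before any sends, so no pair of processors needs to rendezvous in lock step. A loose analogy outside MPI (names and structure are mine, with pre-created queues standing in for posted `MPI_Irecv` requests and `queue.get` for completing them):

```python
import threading
import queue

NP = 4                                   # number of "processors"
# Posting receives up front: inbox[dest][src] exists before any send,
# so senders never block waiting for a matching receive to be posted.
inbox = [[queue.Queue(1) for _ in range(NP)] for _ in range(NP)]
array = list(range(NP * NP))             # NP blocks of NP elements
result = [None] * NP

def redistribute(rank):
    # Send my block, one element to each destination processor.
    for dest in range(NP):
        inbox[dest][rank].put(array[rank * NP + dest])
    # Complete the receives (the wait step), one element per peer.
    result[rank] = [inbox[rank][src].get() for src in range(NP)]

threads = [threading.Thread(target=redistribute, args=(r,)) for r in range(NP)]
for t in threads: t.start()
for t in threads: t.join()
print(result[0])  # elements redistributed to processor 0: [0, 4, 8, 12]
```

Each rank proceeds through its sends independently and only waits where data is genuinely needed, which is the synchronization-minimizing property the excerpt describes.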