Results 1 - 10 of 60,897
Table 4: Lock-free contention resolution. Rules (R) used to resolve contention are shown in cells.
in Q-Pod: Deployable and Scalable End System Support for Enabling QoS in Legacy Enterprise Systems ∗
2004
"... In PAGE 12: ... This allows collecting measurement sequences at high speed, by allowing a writer to insert measurements into the queue each time a packet is intercepted, and a reader to delete from the queue in batches. Correctness: Based on these rules and the provided implementations we rewrite Table 2 to give Table 4. This table shows the correctness of our approach in attaining lock-free consistency control under concurrent entry create, delete, search, read, and write operations.... ..."
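The single-writer/single-reader queue described in this snippet can be sketched as a ring buffer in which the producer writes only the head index and the consumer writes only the tail index, so neither side needs a lock. The sketch below is illustrative only and uses hypothetical names (`SPSCQueue`, `push`, `pop_batch`), not the paper's actual implementation; a real C implementation would additionally need atomic loads/stores or memory barriers on the two indices.

```python
# Hypothetical sketch of a single-producer/single-consumer (SPSC) ring buffer:
# the interceptor enqueues one measurement per packet, and the reader drains
# measurements in batches. One slot is kept empty to distinguish full from empty.

class SPSCQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # advanced only by the producer
        self.tail = 0   # advanced only by the consumer

    def push(self, item):
        """Producer side: insert one measurement; returns False if full."""
        nxt = (self.head + 1) % self.capacity
        if nxt == self.tail:          # full (one slot deliberately unused)
            return False
        self.buf[self.head] = item
        self.head = nxt               # publish only after the slot is written
        return True

    def pop_batch(self, max_items):
        """Consumer side: drain up to max_items in a single pass."""
        batch = []
        while self.tail != self.head and len(batch) < max_items:
            batch.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % self.capacity
        return batch

q = SPSCQueue(8)
for meas in range(5):       # writer: one measurement per intercepted packet
    q.push(meas)
print(q.pop_batch(16))      # reader: deletes from the queue in one batch
```

Because each index has exactly one writer, the only ordering requirement is that the slot contents become visible before the index that publishes them, which is what makes the scheme lock-free for the SPSC case only.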
Table 1: Serialized network messages for a lock-free shared counter update without contention.
1995
"... In PAGE 5: ... There are two reasons for this result. First, a write miss on an uncached line takes two serialized messages, while a write miss on a remote exclusive or remote shared line takes 4 or 3 serialized messages respectively (see Table 1). Second, NOC does not incur the overhead of invalidations and updates as EXC and UPD do.... ..."
Cited by 14
Table 2: Adversaries in contention. 1 in a cell indicates that corresponding operations if run concurrently can cause inconsistency. (c=create, del=delete, r=read, w=write, X=meas or pol)
in Q-Pod: Deployable and Scalable End System Support for Enabling QoS in Legacy Enterprise Systems ∗
2004
"... In PAGE 11: ... If all these commands are allowed to execute concurrently we will have contention resulting in possible inconsistency as shown in Table 2. (Discussion of hashing scheme including explanation of GIVE_UP and tries is deferred.... In PAGE 12: ... This allows collecting measurement sequences at a high speed, by affording a writer inserting measurements in the queue each time a packet is intercepted, and a reader deleting from the queue in batches. Correctness: Based on these rules and the provided implementations we rewrite Table 2 to give Table 4. This table shows the correctness of our approach in attaining lock-free consistency control under concurrent entry create, delete, search, read, and write operations.... ..."
Table 2. PURE context switch times.
1999
"... In PAGE 6: ... The difference is due to the different actions to be taken when returning from a lock-set or a lock-free case. Table 2 shows a comparison of the context switch times that result from the different PURE scheduling strategies and the employed nucleus configuration (refer to Figure 3). On basis of a 300MHz Pentium II, they range from 61 clock... [footnote 1: The worst-case execution path of a guarded section has not yet been determined.]... ..."
Cited by 27
Table 1: Times for acquiring a Free Lock. The performance gap between OODB's and NDBS's lock operations illustrates the cost of both allocating the lock header and looking up the lock in the hash table. LCG further eliminates the cost of allocating a lock request control block as taking a lock for the first time just consists in setting a bit in one of
Table 3 Cycle time of trains
"... In PAGE 22: ... When λsen is closer to λcross, the time used by the controller may become significant, thus impacting the cycle time. From Table 3 it also appears that two trains running in the circuit interfere with each other: one train 'locks' the other by forcing it to wait for a section to become free. With six sections, the interference among the two trains increases the cycle time (with respect to one train) of about 20-25%.... ..."
Table 1: Scheduling protocol processing for cache affinity
1996
"... In PAGE 2: ... To illustrate a performance lower bound, we consider a global pool, managed LRU; the overhead of migrating the underlying cache lines is incurred when some other processor last accessed the pool. Table 1 reviews the scheduling objectives, resources, and policies involved in affinity-based scheduling of multiprocessor networking. 2.... In PAGE 6: ... Of course, each free memory pool must be protected by a software lock. All of the scheduling policies appearing in Table 1 are appropriate for send-side processing, with one exception. Stream affinity scheduling offers no benefit on the send-side, since no stream-specific state is written.... ..."
Cited by 19
Table 3b. Error Detection and Diagnosis Syndromes for Errors Detected by Backward Moves in the VDLL using +.
1989
"... In PAGE 20: ... Assume now that a single error has been detected during a backward move. The LCED procedure supplies the values of the four detection locks (Table 3... In PAGE 21: ... seven-tuple syndrome {L1, L2, L3, L4, NAnm, NAM, NA} is constructed (Table 3b). For the error-free case.... In PAGE 21: ... For the error-free case, the syndrome will be {True, True, True, True, True, True, True}. There are two cases of identical syndromes for different errors. In each case one extra node is accessed... Table 3a. Detection and Diagnosis Locks for Backward Moves ... ..."
Cited by 2
Table 10. Number of Candidates Requesting Special Arrangements by Presenting Condition, 2001-2005
2005
"... In PAGE 42: ... For the September sessions, absentees were as follows: Table 10. Percentage of registered absentees (September 2004) Level Registered Sitting Absent SEC 4208 4021 187 (4.... ..."