Results 1 - 10 of 22,816

Table 1. NAS Communication patterns: number of total messages to the most frequently communicated peer and the average message size to that peer

in Nomad: Migrating OS-bypass Networks in Virtual Machines
by unknown authors
"... In PAGE 8: ... We observe that the remote suspension time vary largely depending on the com- munication patterns. Table1 characterizes the communication pat- terns observed on the MPI process hosted in the migrating VM. As we can see, CG has the longest remote suspension time, be- cause it communicates frequently with relatively large messages, thus likely takes long time waiting for the outstanding communi- cations.... ..."

Table 1: Classification of benchmark suite. none: No communication needed. array offset: Communication between different array elements (i.e., A[i] and A[i+offset]). Here it is possible to prefetch all remote elements at once. 2-D grid: Typical north-west-south-east communication pattern. With an optimal data layout, only the border of the local rectangular block consists of remote data elements, which can be prefetched.

in Latency Hiding in Parallel Systems: A Quantitative Approach
by Thomas M. Warschko, Christian G. Herter, Walter F. Tichy
"... In PAGE 5: ...mple given in section 3.1. The prefetch instruction itself was inserted using the asm facility of dlxcc. To sketch the impact of latency hiding on our bench- mark set, Table1 shows a characterization of each benchmark in terms of used communication patterns. 2A description of how to parallelizethe Livermore Loops can be found in [Feo88].... In PAGE 6: ... To get close to the reality of massively parallel machines we set the number p of physical processors to 1024. However, most of our benchmarks are not in uenced by the number of physical processors, be- cause the communication patterns { array o set, 2-D grid and indirect addressing (see Table1 ) { do not depend on machine size. Only those benchmarks with reduction operations are in uenced by machine size, because the height of the reduction tree and therefore the number of communication operations evaluates to h = logf(p), where f is the fan-in.... In PAGE 8: ... Still, the codefragments show an excellent improve- ment factor of 9 and above with latency hiding. Comparing the simulation results (Table 2) and the classi cation of our benchmark set ( Table1 ), it be- comes evident that benchmarks with similar communi- cations patterns show also roughly the same behavior... ..."

Table 3: Patterns of Communication

in Experiences with Electronic and Voice Mail
by Stephen C. Hayne 1996
"... In PAGE 6: ... Our respondents indicated that EM provided a greater scope of communication, expanded their patterns of communication, and provided more coordination and control in their communications, relative to VM. Splitting out the items measuring Patterns of Communication (see Table3 ), we find that VM is used significantly more often to communicate with peers than with subordinates (3.4, p gt;O.... ..."
Cited by 1

Table 6.3: Remote communication costs

in A Migratable User-Level Process Package For PVM
by Ravindranath Bala Konuru 1995

Table 6.3: Remote communication costs

in A Migratable User-Level Process Package for PVM
by Ravindranath Bala Konuru, Steve W. Otto

Table 2: Benchmark programs and their sharing and communication patterns.

in A Cache Coherence Protocol for the Bidirectional Ring Based Multiprocessor
by Hitoshi Oi, N. Ranganathan 1999
"... In PAGE 5: ... Multiple packets of a message are trans- mitted in (possibly) non-consecutive slots and re-assembled at the destination node. We use a set of parallel applications in Table2 that have various sharing and communication patterns. These appli- cations are from SPLASH 2 benchmark suits [9] and we use their default problem sizes.... In PAGE 5: ... These appli- cations are from SPLASH 2 benchmark suits [9] and we use their default problem sizes. The sharing pattern of each program is also shown in the Table2 using the classifica- tion in [10]. Repl is a sharing pattern in which a data struc- ture is accessed by several processors in a read-mostly man- ner.... In PAGE 5: ... The migratory sharing is further divided into read-mostly (MigR) and read-write (MigRW). The third column in Table2 shows the communication patterns of the benchmark programs. Ocean has a nearest- neighbor communication pattern (NN in Table 2).... In PAGE 5: ... The third column in Table 2 shows the communication patterns of the benchmark programs. Ocean has a nearest- neighbor communication pattern (NN in Table2 ). 27% of remote misses are destined to the adjacent nodes.... ..."
Cited by 2
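
The excerpt notes that packets of one message may travel in non-consecutive slots and be re-assembled at the destination node. Below is a minimal sketch of that reassembly idea; the constants and field names are invented for illustration, since the paper's actual slot and packet formats are not given in the snippet:

    #include <stdbool.h>
    #include <string.h>

    #define PKTS_PER_MSG   8      /* assumed number of packets per message */
    #define PAYLOAD_BYTES 64      /* assumed payload size per packet       */

    struct packet {
        int  seq;                          /* position of this packet within its message */
        char payload[PAYLOAD_BYTES];
    };

    struct reassembly {
        bool seen[PKTS_PER_MSG];           /* which packet positions have arrived */
        char data[PKTS_PER_MSG * PAYLOAD_BYTES];
        int  received;
    };

    /* Accept one packet, regardless of arrival order; returns true once the
     * whole message has been collected. */
    bool accept_packet(struct reassembly *r, const struct packet *p)
    {
        if (p->seq < 0 || p->seq >= PKTS_PER_MSG || r->seen[p->seq])
            return r->received == PKTS_PER_MSG;
        r->seen[p->seq] = true;
        memcpy(r->data + (size_t)p->seq * PAYLOAD_BYTES, p->payload, PAYLOAD_BYTES);
        r->received++;
        return r->received == PKTS_PER_MSG;
    }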

Table 2: Communication Patterns and Optimizations

in Sensitivity of Parallel Applications to Large Differences in Bandwidth and Latency in Two-Layer Interconnects
by Aske Plaat, Henri E. Bal, Rutger F. H. Hofman, Thilo Kielmann 1999
"... In PAGE 4: ... FFT shows a small superlinear speedup, due to cache effects. Table2 summarizes the communication patterns and improvements. Figure 1 summarizes inter-cluster traffic of the original applications.... ..."
Cited by 24

Table 3: Communication patterns of the applications.

in Parallel Application Experience with Replicated Method Invocation
by Jason Maassen, Thilo Kielmann, Henri E. Bal 2001
"... In PAGE 20: ... naive optimized replicated TSP 100 102 100 ASP 100 177 109 QR 100 127 109 ACP 100 115 100 LEQ 100 204 102 Average 100 123 102 While implementing the replicated versions we found that replication is used for three different purposes: sharing data, broadcasting, and collective communication. Table3 shows an overview of the communication patterns of the different applications. TSP and ACP use replication to share data that are read very frequently, but written infrequently and at irregular intervals.... ..."
Cited by 15