Results 1 - 10 of 66,943

Tables should be sorted (on random access machines)

in Tables Should Be Sorted (on Random Access Machines)
by Faith Fich, Peter Bro Miltersen

Table 3.3: Operators on statements.

in Implementation of The Nested Relational Algebra in Java
by Biao Hao

Table 1. SHARP Operating Costs

in It's Yours for $50.
by U Sharp Automation, Everything Card Is, Mccormick Place 1974
"... In PAGE 23: ... According to Maizell, the number of papers written by and patents issued to a chemist may be a measure of his creativ- ity. In a test of this belief, the number of papers and patents since 1967 by the group of ten chemists who had read at least five of the 59 books published in 1971 was compared with the number of such documents authored by two groups of ten chemists randomly selected from those who used less than five and none of the books, respectively ( Table1 ). Despite Table 1.... In PAGE 23: ... In a test of this belief, the number of papers and patents since 1967 by the group of ten chemists who had read at least five of the 59 books published in 1971 was compared with the number of such documents authored by two groups of ten chemists randomly selected from those who used less than five and none of the books, respectively (Table 1). Despite Table1 . User Creativity.... In PAGE 28: ... Some spe- cial subject requests are run on a regular monthly basis, and several of the more popular bibliographies, such as the one on quot;underwater sound quot; are updated annually. Operating costs for processing SHARP data in the batch mode on the CDC 6700 are presented in Table1 . Experiments operating the system in the interactive mode using direct access files indicate ... In PAGE 31: ...Table1 . Information Exchange Center Services and Fee Schedule--1973 Copy Service: $1.... ..."

Table 2. Performance of the hypercube RandomAccess algorithm on a Cray XT3 machine in giga-updates-per-second (GUPS) for varying processor counts. Parallel efficiencies are computed from the single-processor hypercube rate.

in The High Performance Computing Challenge benchmark
by Steven J. Plimpton, Ron Brightwell, Courtenay Vaughan, Keith Underwood, Mike Davis
"... In PAGE 4: ... Thus with this algorithm the only limitation to a high GUPS rate is the number of processors a machine has, assuming a very large machine still has suffi- cient bisection bandwidth to perform exchanges of 4K-byte messages between all pairs of processors at each dimension of the hypercube loop. The third line of Table2 lists predicted GUPS rates us- ing a model equation that reflects these two scaling factors, namely TP = T1 + TC log2(P). TP is the CPU time to run on P processors and is the sum of two terms.... ..."

Table 1: Data parallel performance on a network of Sun Sparcstation 5 machines Circuit Processors

in Parallel Algorithms for VLSI Layout Verification
by Ky Macpherson, Prithviraj Banerjee
"... In PAGE 26: ...2 440.5 Table1 0: E ect of task parallel scheduling on a network of Sun Sparcstations Circuit Task Processors Ordering 1 2 3 4 5 plapart random 175.1 142.... In PAGE 28: ... Considering that the overall performance of the DRC is bounded by the performance of the most complex algorithms used by the DRC, the overall complexity for the DRC is O(N logN). Table1 2: Complexities of the DRC operations Operation Execution Time Boolean Operation O(N log N) Sort O(N log N) or O(log N) Grow O(N log N) Width O(N) Spacing O(N) Square Test O(N) Overall DRC O(N log N) 6.2 Analysis of Parallel DRC The performance results demonstrate that both data parallelism and task parallelism can be applied to the DRC problem to achieve better performance and reduced memory re- quirements as compared to serial algorithms.... In PAGE 29: ...Table1 3: Comparison of parallelization methods on CM-5 Circuit Procs per Processors Cluster 16 32 64 128 haab1 1 100.3 44.... ..."

Table 1 Performance in MFlops of GEMM on shared memory multiprocessors using 512-by-512 matrices. We have shown that the use of parallel kernels provides high performance while maintaining portability. We intend to pursue this activity in the future on most of the parallel architectures to which we have access. The ALLIANT FX/2800 provides a good opportunity for validating these ideas, and we intend to implement a version of Level 3 BLAS based on our package on that machine.

in Cerfacs Team "parallel Algorithm" Scientific Report For 1991
by Rt For
"... In PAGE 5: ... Finally, these codes have been used as a platform for the implementation of the uniprocessor version of Level 3 BLAS on the BBN TC2000 (see next Section). We show in Table1 the MFlops rates of the parallel matrix-matrix multiplication, and in Table 2 the performance of the LU factorization (we use a blocked code similar to the LAPACK one) on the ALLIANT FX/80, the CRAY-2, and the IBM 3090-600J obtained using our parallel version of the Level 3 BLAS. Note that our parallel Level 3 BLAS uses the serial manufacturer-supplied versions of GEMM on all the computers.... In PAGE 6: ... This package is available without payment and will be sent to anyone who is interested. We show in Table1 the performance of the single and double precision GEMM on di erent numbers of processors. Table 2 shows the performance of the LAPACK codes corresponding to the blocked LU factorization (GETRF, right-looking variant), and the blocked Cholesky factorization (POTRF, top-looking variant).... In PAGE 8: ... The second part concerned the performance we obtained with tuning and parallelizing these codes, and by introducing library kernels. We give in Table1 a brief summary of the results we have obtained: One of the most important points to mention here is the great impact of the use of basic linear algebra kernels (Level 3 BLAS) and the LAPACK library. The conclusion involves recommendations for a methodology for both porting and developing codes on parallel computers, performance analysis of the target computers, and some comments relating to the numerical algorithms encountered.... In PAGE 12: ... Because of the depth rst search order, the contribution blocks required to build a new frontal matrix are always at the top of the stack. The minimum size of the LU area (see column 5 of Table1 ) is computed during during the symbolic factorization step. The comparison between columns 4 and 5 of Table 1 shows that the size of the LU area is only slightly larger than the space required to store the LU factors.... In PAGE 12: ... The minimum size of the LU area (see column 5 of Table 1) is computed during during the symbolic factorization step. The comparison between columns 4 and 5 of Table1 shows that the size of the LU area is only slightly larger than the space required to store the LU factors. Frontal matrices are stored in a part of the global working space that will be referred to as the additional space.... In PAGE 12: ... In a uniprocessor environment, only one active frontal matrix need be stored at a time. Therefore, the minimum real space (see column 7 of Table1 ) to run the numerical factorization is the sum of the LU area, the space to store the largest frontal matrix and the space to store the original matrix. Matrix Order Nb of nonzeros in Min.... In PAGE 13: ... In this case the size of the LU area can be increased using a user-selectable parameter. On our largest matrix (BBMAT), by increasing the space required to run the factorization (see column 7 in Table1 ) by less than 15 percent from the minimum, we could handle the ll-in due to numerical pivoting and run e ciently in a multiprocessor environment. We reached 1149 M ops during numerical factorization with a speed-up of 4.... In PAGE 14: ...ack after computation. Interleaving and cachability are also used for all shared data. Note that, to prevent cache inconsistency problems, cache ush instructions must be inserted in the code. 
We show, in Table1 , timings obtained for the numerical factorization of a medium- size (3948 3948) sparse matrix from the Harwell-Boeing set [3]. The minimum degree ordering is used during analysis.... In PAGE 14: ... -in rows (1) we exploit only parallelism from the tree; -in rows (2) we combine the two levels of parallelism. As expected, we rst notice, in Table1 , that version 1 is much faster than version 2... In PAGE 15: ... Results obtained on version 3 clearly illustrate the gain coming from the modi cations of the code both in terms of speed-up and performance. Furthermore, when only parallelism from the elimination tree is used (see rows (1) in Table1 ) all frontal matrices can be allocated in the private area of memory. Most operations are then performed from the private memory and we obtain speedups comparable to those obtained on shared memory computers with the same number of processors [1].... In PAGE 15: ... Most operations are then performed from the private memory and we obtain speedups comparable to those obtained on shared memory computers with the same number of processors [1]. We nally notice, in Table1 , that although the second level of parallelism nicely supplements that from the elimination tree it does not provide all the parallelism that could be expected [1]. The second level of parallelism can even introduce a small speed down on a small number of processors as shown in column 3 of Table 1.... In PAGE 15: ... We nally notice, in Table 1, that although the second level of parallelism nicely supplements that from the elimination tree it does not provide all the parallelism that could be expected [1]. The second level of parallelism can even introduce a small speed down on a small number of processors as shown in column 3 of Table1 . The main reason is that frontal matrices must be allocated in the shared area when the second level of parallelism is enabled.... In PAGE 17: ...5 28.2 Table1 : Results in Mega ops on parallel computers. In Table 1, it can be seen that the performance of the program on the Alliant FX/80 in double precision is better than the performance of the single precision ver- sion.... In PAGE 17: ...2 Table 1: Results in Mega ops on parallel computers. In Table1 , it can be seen that the performance of the program on the Alliant FX/80 in double precision is better than the performance of the single precision ver- sion. The reason for this is that the single precision mathematical library routines are less optimized.... In PAGE 18: ... block diagonal) preconditioner appears to be very suitable and is superior to the Arnoldi-Chebyshev method. Table1 shows the results of the computation on an Alliant FX/80 of the eight eigenpairs with largest real parts of a random sparse matrix of order 1000. The nonzero o -diagonal and the full diagonal entries are in the range [-1,+1] and [0,20] respectively.... In PAGE 19: ... A comparison with the block preconditioned conjugate gradient is presently being investigated.In Table1 , we compare three partitioning strategies of the number of right-hand sides for solving the system of equations M?1AX = M?1B, where A is the ma- trix BCSSTK27 from Harwell-Boeing collection, B is a rectangular matrix with 16 columns, and M is the ILU(0) preconditioner. Method 1 2 3 1 block.... In PAGE 26: ...111 2000 lapack code 0.559 Table1 : Results on matrices of bandwith 9.... In PAGE 30: ... 
We call \global approach quot; the use of a direct solver on the entire linear system at each outer iteration, and we want to compare it with the use of our mixed solver, in the case of a simple splitting into 2 subdomains. We show the timings (in seconds) in Table1 on 1 processor and in Table 2 on 2 processors, for the following operations : construction amp; assembly : Construction and Assembly, 14% of the elapsed time, factorization : Local Factorization (Dirichlet+Neumann), 23%, substitution/pcg : Iterations of the PCG on Schur complement, 55%, total time The same code is used for the global direct solver and the local direct solvers, which takes advantage of the block-tridiagonal structure due to the privileged direction. Moreover, there has been no special e ort for parallelizing the mono-domain approach.... ..."

Table 1: Port accesses for various aggregate operations

in A FINE-GRAIN PARALLEL ARCHITECTURE BASED ON BARRIER SYNCHRONIZATION
by H. G. Dietz, R. Hoare, T. Mattox
"... In PAGE 4: ...operations are givenin Table1 . Asimple barrier synchroniza- tion takes just twoport register accesses: one to request syn- chronization, another to read the synchronization achievedsig- nal.... In PAGE 4: ...Table 1: Port accesses for various aggregate operations Forexample, Table1 predicts that a cluster using four machines, each with 2us port register access time, would takea little more than 4us to perform a barrier synchronization. Experimentally,the difference between actual and predicted lower-bound time is primarily a function of software overhead imposed by slowprocessors; using a four-machine TTL_PAPERS cluster,the measured times are no more than 5% above the predicted lower bound for 90MHz Pentium, 15% for 33MHz 486DX, 25% for 25MHz 486SX, and 100% for 386DX33 processors.... ..."

Table 1: SOA results for randomly generated codes.

in Address Assignment Combined with Scheduling in DSP Code Generation
by Yoonseo Choi, Taewhan Kim 2002
"... In PAGE 5: ... Since our algorithm is performed at the operation code-level, and not directly at the assembly code-level of target machine, the architecture specific pointer operations with increment/decrement (*p++) was assumed to be achieved with a single variable access operation in code, like LAR p, LACL *, +. Table1 shows comparisons of the results in terms of address code size produced by OFU (the order of the first use) offset as- signment, Solve-SOA with tie-breaking [7], and the proposed Sch- SOA for the randomly generated C programs. |V| and |S| represent the number of variables and the access sequence length, respec- tively.... ..."
Cited by 7
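
For readers unfamiliar with offset assignment, the sketch below illustrates the cost model that simple offset assignment (SOA) formulations typically use, under the same single-address-register, auto-increment/decrement assumption mentioned in the excerpt: consecutive accesses whose assigned offsets differ by at most one are free (*p++ / *p--), and any larger jump costs one explicit address instruction. It is a generic illustration with made-up variables, not the paper's Sch-SOA algorithm.

    # Generic SOA cost model (illustrative, not the paper's algorithm): count the
    # address instructions needed for an access sequence under a given
    # variable-to-offset assignment, assuming one address register with +/-1 auto-modify.
    def soa_cost(assignment: dict[str, int], access_sequence: list[str]) -> int:
        cost = 0
        for prev, curr in zip(access_sequence, access_sequence[1:]):
            if abs(assignment[curr] - assignment[prev]) > 1:
                cost += 1          # auto-increment/decrement cannot reach the next offset
        return cost

    # Hypothetical example: order-of-first-use (OFU) layout vs. an alternative layout.
    seq = ["a", "b", "c", "a", "d", "b"]
    ofu = {"a": 0, "b": 1, "c": 2, "d": 3}   # offsets assigned in order of first use
    alt = {"c": 0, "b": 1, "a": 2, "d": 3}   # a different assignment for the same variables
    print("OFU cost:", soa_cost(ofu, seq), " alternative cost:", soa_cost(alt, seq))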