Results 1 - 10 of 68
Table 2. Detection rate and memory efficiency results.
2006
"... In PAGE 8: ...indows machine with a Pentium M 1.7 Ghz processor with 1.0 Gb of RAM. 6 Results The results of experiments 1 to 3 are discussed in the following sections and are listed in Table2 whilst those of experiment 4 are found in Table 3. TEA execution times varied from approximately 40 to 50 seconds for experiments 1 and 2, and 7 to 8 minutes for experiment 3.... In PAGE 9: ... 2. Total Tracker and total matching tracker populations with no memory feedback Regarding the presentation of A1, Table2 shows the TEA is able to identify and develop memory trackers that map with 100% success to trends T1, T2 and T3 for each of the 10 runs. There are no redundant price values in the memory pool resulting in 100% memory efficiency.... In PAGE 9: ... This provides the potential to learn from the trends memorised in response to A1, to create associations with the novel trends in A2. Table2 shows feedback from the memory pool has a significant impact on... ..."
Cited by 1
Table 6 The recurrences for the memory efficient alignment algorithm. For every partitioning point k, the locations [p..q] and [u..v] of the most likely crossing stem are predetermined by the local complementary alignment
Table 7 A comparison of the mismatch scores between the optimal alignment algorithm and the memory efficient one on a few tested structures. Pk12 is not determined (n.d.) for the optimal algorithm due to its unrealistic storage space requirement
"... In PAGE 10: ... We then align sequences in the testing set to the model and compare the conformations obtained from the computed alignment results with their conformations annotated in the database. Table7 shows the results of simply counting the number of mismatches between the aligned conformation and the correct conformation as downloaded from the tmRDB database. On this basis, a comparison with the original optimal structural alignment algorithm is made and shows that the memory efficient approach can be 80% as accurate as the original one.... ..."
Table 1: The average number of memory block fetches for different combinations of data structures and reorderings. This number is directly related to bandwidth by the size of the cache line. Regardless of the data structure, memory efficiency is improved by choosing a better reordering. (128KB cache with 128B cache lines.)
2005
Cited by 5
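As a rough illustration of the bandwidth relationship stated in the caption above (memory traffic = block fetches × cache-line size), the short Python sketch below converts a fetch count into traffic volume using the quoted 128-byte lines; the fetch count is a made-up placeholder, not a figure from the paper.

# Illustrative only: each block fetch moves one full cache line, so
# traffic = fetches * line size (128 B per the caption above).
CACHE_LINE_BYTES = 128

def traffic_megabytes(block_fetches: int) -> float:
    return block_fetches * CACHE_LINE_BYTES / 1e6

# Hypothetical fetch count, not taken from the paper.
print(traffic_megabytes(250_000), "MB")   # -> 32.0 MB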
Table 7: To create an image of size 300², ZSweep requires 60% more memory than Lazy Sweep. But, while Lazy Sweep maintains its memory requirements essentially the same even for larger images, ZSweep needs to allocate more and more memory, because of the pixel lists. Again ZSweep is comparable to Bunyk et al. in speed, but much more memory efficient.
2000
Cited by 36
Table 1: UMFPACK [Davis 2005] produces LU factorizations for general sparse matrices and is faster than SuperLU. CHOLMOD [Davis 2006] and TAUCS [Toledo et al. 2003] factor sparse Cholesky matrices and are among the fastest direct solvers for this problem. Trilinos ML [Heroux and Willenbring 2003] denotes the multilevel preconditioner ML used via the Trilinos AztecOO interface. Factorization and back-substitution times are reported for the direct solvers while timing for hierarchy construction and iteration to 1e-5 relative residual are recorded for the multilevel solvers. Peak memory consumption is recorded for Trilinos ML and our solver. For UMFPACK, CHOLMOD, and TAUCS, the reported memory cost is for the factors alone. While the system is solved for each of the x, y, and z coordinates, only one factorization and three back-substitutions are required of the direct solvers. Likewise, the multilevel hierarchy is created once and reused. Data has been excluded in tests where memory use exceeded hardware limits. Timing data was collected on a pair of comparably equipped high-end uniprocessors (Pentium IV 3.8 GHz) with 2 GB physical memory. In every test, our solver exhibits the best performance and memory efficiency.
2006
"... In PAGE 7: ... We have tested our implementation on meshes with increasing complexity. Table1 summarizes the performance and scalability of different solvers applied to the equations described in Section 2.2.... In PAGE 7: ...2. All the meshes, except the last one, reported in Table1 are em- bedded in a volume graph. Therefore, the number of free vertices include both mesh vertices and additional volume graph vertices.... In PAGE 7: ... Direct factorization methods are much less scalable than multigrid algorithms in both computational and mem- ory costs. Table1 reveals a super-linear relationship in both factor- ization time and memory cost (the size of the resultant factors) with increasing mesh complexity. Even TAUCS and CHOLMOD, both highly efficient sparse Cholesky codes, exceed available memory in the largest test.... In PAGE 7: ... Meanwhile, we wish to minimize the cost of constructing the multigrid hierarchy. As demonstrated in Table1 , our simple coars- ening strategy can be implemented efficiently. Moreover, numer- ical accuracy and visual quality are not always consistent, i.... ..."
Cited by 16
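The solver comparison above notes that only one factorization and three back-substitutions are needed when the same system is solved for the x, y, and z coordinates. Below is a minimal sketch of that factor-once/solve-many pattern, using SciPy's SuperLU interface on a stand-in tridiagonal system rather than the UMFPACK/CHOLMOD/TAUCS codes and meshes benchmarked in the paper.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small sparse system standing in for the paper's much larger mesh systems.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(A)  # factor once

# Placeholder right-hand sides for the x, y and z coordinates.
rhs = {axis: np.random.rand(n) for axis in "xyz"}

# Three back-substitutions reuse the single factorization.
solution = {axis: lu.solve(b) for axis, b in rhs.items()}

for axis in "xyz":
    print(axis, np.linalg.norm(A @ solution[axis] - rhs[axis]))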
Table 1: Memory size and access for Mallat, RPA and LWT for a 9/7-tap, 6-level wavelet transform
1999
"... In PAGE 2: ... In [9], a memory efficient architecture for im- plementation of the LWT is presented. In Table1 , the memory size and the number of memory accesses needed when performing... ..."
Cited by 2
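As background on why lifting-based implementations of the wavelet transform can be more memory friendly than the classical Mallat filter bank, the sketch below applies one level of a 9/7 lifting transform in place on a 1-D signal. The coefficients follow the widely used Daubechies-Sweldens factorization and the final scaling convention varies between implementations, so treat this as an illustration of the lifting idea, not the architecture of [9].

import numpy as np

# Common lifting coefficients for the CDF 9/7 wavelet; the scaling
# constant K follows one frequently used convention.
ALPHA, BETA  = -1.586134342, -0.052980118
GAMMA, DELTA =  0.882911076,  0.443506852
K = 1.230174105

def lwt97_level(signal):
    """One level of the 9/7 lifting transform, computed in the signal buffer.

    Every predict/update step overwrites samples in place, so no separate
    convolution buffers are needed -- the source of the LWT's memory savings.
    Assumes an even-length input; boundaries are mirrored.
    """
    x = np.asarray(signal, dtype=float).copy()
    n = len(x)

    def lift(coef, start):
        for i in range(start, n, 2):
            left = x[i - 1] if i - 1 >= 0 else x[i + 1]
            right = x[i + 1] if i + 1 < n else x[i - 1]
            x[i] += coef * (left + right)

    lift(ALPHA, 1)   # predict 1 (odd samples)
    lift(BETA, 0)    # update 1 (even samples)
    lift(GAMMA, 1)   # predict 2
    lift(DELTA, 0)   # update 2
    return x[0::2] / K, x[1::2] * K   # approximation, detail

s, d = lwt97_level(np.sin(np.linspace(0.0, 3.0, 16)))
print(len(s), len(d))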
Table 1 PowerPC 601 microprocessor performance targets.
in Design
"... In PAGE 2: ... These features provide high-bandwidth access to memory and efficient support for cooperative memory sharing. Table1 summarizes the key performance goals of the 601. The final objective of the 601 design was low cost.... ..."
Table 4. Asynchronous DME: In parentheses, the number of garbage collections is given.
2000
"... In PAGE 9: ... For this variation only small changes in the calculation of Open are necessary. Like in the previous exam- ple the results in Table4 show that the heuristic approach is more memory efficient and less time-consuming. The first experiment in the table uses the set Closed that was omitted in the other experi- ments since this turned out to be more time and memory efficient.... ..."
Cited by 5