Table 4 Rollout analysis of the cube action in the 1998 Hall of Champions match between Malcolm Davis and TD-Gammon 3.1. The first set of figures is based on Snowie 3.2 depth-11 truncated rollouts. The second set of figures is from TD-Gammon 2.1 full rollouts including the doubling cube

in Programming backgammon using self-teaching neural nets
by Gerald Tesauro 2002
Cited by 24

Table 3. Computation and Communication Counts for Symmetric Toeplitz Systems

in Application and Accuracy of the Parallel Diagonal Dominant Algorithm
by Xian-He Sun 1995
"... In PAGE 22: ... For systems with multiple right hand sides, in which the factorization cost is not considered, the Malcolm and Palmer apos;s method and Thomas method have the same computation count. Table3 gives the com- putation and communication counts of the PDD and reduced PDD algorithms based on Malcolm and Palmer apos;s algorithm. The computation counts of the two algorithms have reduced with the fast method being used in solving the sub-systems.... In PAGE 22: ... The computation counts of the two algorithms have reduced with the fast method being used in solving the sub-systems. Table3 is for solving systems with single right hand side only. For systems with multiple right hand sides, the computation counts remain the same as in Table 1 and 2 for PDD and reduced PDD algorithm respectively.... ..."
Cited by 12

Table 1: Domain-independent planners listed in order of competition code.

in Engineering Note: Engineering a Conformant Probabilistic Planner
by Nilufer Onder, Garrett C. Whelan, Li Li
"... In PAGE 7: ... By domain-independent we mean a planner that uses only the PPDDL de- scription of a domain to solve a planning problem and does not employ any previously coded control information. In Table1 we show a brief description of these planners (Edelkamp, Hofiman, Littman, amp; Younes, 2004; Younes, Littman, Weissman, amp; Asmuth, 2005; Bonet amp; Gefiner, 2005; Fern, Yoon, amp; Givan, 2006; Thiebaux, Gretton, Slaney, Price, amp; Kabanza, 2006). The competition was conducted as follows: Each planner was given a set of 24 problems written in probabilistic PDDL (PPDDL) and was allotted 5 minutes to solve the problem.... ..."

Table 1: Some constraints of quasigroups.

in Solving Open Quasigroup Problems by Propositional Reasoning
by Hantao Zhang, Jieh Hsiang 1994
"... In PAGE 1: ... In this paper, we are interested in the prob- lems in quasigroups given by Fujita, Slaney and Bennett in their award-winning IJCAI paper [3]. The constraints in Table1 are taken from [3]: Among the Latins squares satisfying these con- straints, we are also interested in those squares with a hole, i.e.... In PAGE 3: ... 3 Cyclic Group Construction The propositional reasoning program we used to attack quasigroup problems is called SATO (SAt- is ability Testing Optimized) which is an e cient implementation of the Davis-Putnam algorithm written by Zhang [8]. For a quasigroup of order v, the number of propositional clauses obtained from clauses like QG1 and QG2 in Table1 is O(v6) because there are six distinct variables in QG1 and QG2. For a large v, in addition to the large number of clauses, the search space involved in these problems is also huge.... ..."
Cited by 11
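The snippet above notes that constraints such as QG1 and QG2 yield O(v^6) propositional clauses because six distinct variables interact. As a rough illustration of this kind of encoding, the sketch below generates CNF clauses for the basic Latin-square conditions only; the variable naming p(i, j, k) ("cell (i, j) holds symbol k") is an assumption for illustration and is not the paper's QG1/QG2 encoding or SATO's input format.

```python
from itertools import product

def latin_square_clauses(v):
    """Encode the basic Latin-square conditions as CNF clauses.
    Boolean variable var(i, j, k) means "cell (i, j) holds symbol k";
    variables are numbered 1..v**3 so the output is DIMACS-friendly.
    This basic encoding grows as O(v**4); the QG1/QG2 constraints in the
    paper involve six distinct quantified variables, which is what pushes
    their clause count to O(v**6)."""
    def var(i, j, k):
        return i * v * v + j * v + k + 1

    clauses = []
    # Each cell holds at least one symbol.
    for i, j in product(range(v), repeat=2):
        clauses.append([var(i, j, k) for k in range(v)])
    # No cell holds two symbols.
    for i, j in product(range(v), repeat=2):
        for k1 in range(v):
            for k2 in range(k1 + 1, v):
                clauses.append([-var(i, j, k1), -var(i, j, k2)])
    # Each symbol appears at most once per row and per column.
    for k in range(v):
        for i in range(v):
            for j1 in range(v):
                for j2 in range(j1 + 1, v):
                    clauses.append([-var(i, j1, k), -var(i, j2, k)])   # row
                    clauses.append([-var(j1, i, k), -var(j2, i, k)])   # column
    return clauses

if __name__ == "__main__":
    for v in (4, 6, 8):
        print(v, len(latin_square_clauses(v)))  # clause count grows polynomially in v
```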

Table 1: Comparison results on quasigroup existence problems. ANOV+ is the abbreviation of AdaptNovelty+.

in Old Resolution Meets Modern SLS
by Duc Nghia Pham, John Slaney, Abdul Sattar
"... In PAGE 3: ... The nomenclature is that followed in SATLIB and the SAT competition from which these par- ticular encodings are taken: problem qgi-x is the i-th ex- istence problem as numbered in (Fujita, Slaney, amp; Bennett 1993) for quasigroups of size x. Table1 shows the runtimes and flip counts of the various solvers on ten quasigroup existence problems, together with the number of runs out of 100 on which a solution was found within the limit of 10 million flips. As observed by Gent and Walsh (1995) GSAT cannot solve any of these problems within a reasonable time, and WalkSAT without preprocess- ing reliably solves only the smaller cases of problems qg1 and qg2.... In PAGE 3: ... Problem qg7-09 in particular is reduced to triviality for two of the solvers. The runtimes in Table1 do not include the time taken by the resolution preprocessor. It is therefore important to record the preprocessing times (which are the same for all three solvers, of course) together with data about the effect of preprocessing on the problem sizes.... ..."

Table 1 Experimental Results

in Value Numbering
by Preston Briggs, Keith D. Cooper, L. Taylor Simpson
"... In PAGE 10: ... Comparisons were made using routines from a suite of benchmarks consisting of routines drawn from the Spec benchmark and from Forsythe, Malcolm, and Moler apos;s book on numerical methods [8]. The complete results are shown in Table1 . Each column represents dynamic counts of ILOC operations.... In PAGE 10: ...ection 7.1], copy coalescing, and a pass to eliminate empty basic blocks. All forms of value numbering were performed on the SSA form of the routine. The rst section of Table1 compares the hash-based techniques. On average, code optimized using extended basic blocks performs 12.... In PAGE 10: ...alue numbering improves the code by another 5.4%. In our experiment, dominator-tree value numbering performs slightly better on average than global value numbering with dominator-based removal. The second section of Table1 compares the partitioning techniques. The AVAIL-based technique improves the code by 0.... ..."

Table 1. Computation and Communication Counts of the PDD Algorithm

in Application and Accuracy of the Parallel Diagonal Dominant Algorithm
by Xian-He Sun 1995
"... In PAGE 21: ...a and ~b; and jjx ? x jj jjxjj jbjm(1 + b2) j 2(1 ? j~ aj)(jaj ? 1)j = jbjm(1 + b2) j (j j ? b(1?b2m) 1+b2(m+1) (jaj ? 1)j in terms of a and b. When jbj j j lt; 1, we have jjx ? x jj jjxjj jbjm(1 + b2) j (j j ? jbj)(jaj ? 1)j: 5 Experimental Results Table1 gives the computation and communication count of the PDD algorithm. Tridiagonal sys- tems arising in both ADI and in the compact scheme methods are multiple right-side systems.... In PAGE 21: ... It is often more e cient to use a parallel tridiagonal solver for these systems than to remap data among processors to be able to a serial solve, espe- cially for distributed-memory machines where communication cost is high. The computation and communication count for solving multiple right-side systems is also listed in Table1 , in which the factorization of matrix ~ A and computation of Y are not considered (see Eq.(5) and (6) in Section 2).... In PAGE 21: ... While the accuracy analyses given in this study are for Toeplitz tridiagonal systems, the PDD algorithm and the reduced PDD algorithm can be applied for solving general tridiagonal systems. The computation counts given in Table1 and 2 are for general tridiagonal systems. For symmetric Toeplitz tridiagonal systems, a fast method proposed by Malcolm and Palmer [12] has a smaller computation count than Thomas algorithm for systems with single right hand side.... In PAGE 22: ... Table 3 is for solving systems with single right hand side only. For systems with multiple right hand sides, the computation counts remain the same as in Table1 and 2 for PDD and reduced PDD algorithm respectively. As an illustration of the algorithm and theoretical results given by previous sections, a sample matrix is tested here.... In PAGE 25: ... The lower speedup is due to the reduction of the matrix size and the increase of the number of right hand sides. As seen in Table1 , the communication cost increases linearly with the number of right hand sides. Since the Intel/iPSC860 has a very high (communication speed)/(computation speed) ratio, we can expect a better speedup on an Intel Paragon or even on an Intel/iPSC2 [17] multicomputer.... ..."
Cited by 12
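The snippets for this paper repeatedly compare the PDD and reduced PDD algorithms against the Thomas algorithm as the sequential baseline for tridiagonal systems. For reference, a minimal sketch of the standard Thomas algorithm (forward elimination plus back substitution) is shown below; it is the textbook sequential method, not the parallel PDD decomposition itself, and the array naming is an assumption.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system A x = d with the Thomas algorithm.
    a: sub-diagonal (length n, a[0] unused), b: main diagonal (length n),
    c: super-diagonal (length n, c[n-1] unused), d: right-hand side.
    Sequential baseline only; assumes the system is diagonally dominant
    enough that no pivoting is needed, as for the Toeplitz systems
    discussed above. Returns the solution as a new list."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

if __name__ == "__main__":
    # Small symmetric Toeplitz example: 4 on the diagonal, 1 off-diagonal.
    n = 5
    print(thomas([1.0] * n, [4.0] * n, [1.0] * n, [6.0] * n))
```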

Table 1. Comparison of Computation and Communication (Non-Periodic)

in Performance Comparison of a Set of Periodic and Non-Periodic Tridiagonal Solvers on SP2 and Paragon Parallel Computers
by Xian-He Sun, Stuti Moitra
"... In PAGE 10: ... Communication Pattern for Solving Periodic Systems. 3 Operation Comparison Table1 gives the computation and communication count of the tridiagonal solvers under considera- tion for solving non-periodic systems. Tridiagonal systems arising in many applications are multiple right-hand-side (RHS) systems.... In PAGE 10: ... They are usually \kernels quot; in much larger codes. The computation and communication counts for solving multiple RHS systems are listed in Table1 , in which the factorization of matrix ~ A and computation of Y are not considered (see Eq.(5) and (6) in Section 2).... In PAGE 11: ... The communication cost of the total-data-exchange commu- nication is highly architecture dependent. The listed communication cost of the PPT algorithm, in Table1 , 2, and 3, is based on a square 2-D torus with p processors (i.e.... In PAGE 11: ... The conventional sequential algorithm used is the periodic Thomas algorithm [11]. Compared with Table1 , we can see, while the best sequential algorithm has a increased operation count, the parallel algorithms have the same operation and communication count for both periodic and non-periodic systems, except for the PPT algorithm which has a slightly increased operation count. However, for the PDD and Reduced PDD algorithm, the communication is given for any architecture which supports Ring communication, instead of 1-D array.... In PAGE 11: ... Notice that when j lt; n=2, the Reduced PDD algorithm has a smaller operation count than that of Thomas algorithm for periodic systems with multiple RHS. The computation counts given in Table1 and 2 are for general tridiagonal systems. For symmet- ric Toeplitz tridiagonal systems, a fast method proposed by Malcolm and Palmer [14] has a smaller computation count than Thomas algorithm for systems with single RHS.... In PAGE 12: ... Table 3 presents computation and communication counts for solving systems with a single RHS only. For systems with multiple RHS, the computation counts remain the same as in Table1 and 2 for all the periodic and non-periodic systems. 4 Experimental Results The PDD and the Reduced PDD algorithms were implemented on the 48-node IBM SP2 and 72-node Intel Paragon available at NASA Langley Research Center.... In PAGE 16: ... The problem size for all algorithms on SP2 is n = 6400 and n1 = 1024, on Paragon is n = 1600 and n1 = 1024. The measured results con rm the analytical results given in Table1 and 2.... ..."
