
Results 1 - 10 of 37,907

TABLE 1. SPEC95 integer benchmarks used. (Columns: benchmark, input dataset, dynamic instruction count.)

in AR-SMT: A Microarchitectural Approach to Fault Tolerance in Microprocessors
by Eric Rotenberg 1999
Cited by 152

Table 2: Benchmarks and datasets used. All benchmarks simulated to completion. The benchmarks go and gcc are relevant for fetch mechanism issues. Both contain a large number of dynamic basic blocks and have many conditional branches that are difficult to predict. For all simulations performed, the return address stack was assumed to be ideal. The latency for accessing the ideal second-level cache is 10 cycles.

in Critical Issues Regarding the Trace Cache Fetch Mechanism
by Sanjay Jeram Patel, Daniel Holmes Friendly, Yale N. Patt 1997
"... In PAGE 12: ... All experiments were performed on the SPECint95 benchmarks. Table 2 lists the benchmarks and the input sets. All simulations were run until completion.... ..."
Cited by 53

Table 6. (cont.) Approaches on the Toronto Benchmark Datasets

in A Survey of Search Methodologies and Automated System Development for Examination Timetabling
by R. Qu, E. K. Burke, B. Mccollum, L. T. G. Merlot, S. Y. Lee
"... In PAGE 12: ... Another important contribution from this work is the introduction of a set of 13 exam timetabling problems, which became standard benchmarks in the field. They have been widely studied and used by different approaches during the years (see Table 6). We call this the University of Toronto data and discuss it further in Section 3.... In PAGE 30: ... This variant is named Toronto e in Table 5. The approaches developed and tested on different variants of the Toronto datasets during the years are listed in Table 6 (ordered by the year in which the work was published). The values in "()" following the variants of the data give the number of problem instances tested by the corresponding approaches.... In PAGE 31: ... Table 6. Approaches to the Toronto Benchmark Datasets (Reference | Approach/Technique | Problem):
  • Carter et al [54] 1996 | Graph heuristics with clique initialisation and backtracking | a(13), b(13)
  • Carter & Johnson [51] 1996 | Almost cliques with sufficient density as the initialisation for graph heuristics | a(13)
  • Burke et al [40] 1996 | Memetic Algorithm with hill climbing and light and heavy mutation | c(5)
  • Burke et al [41] 1998 | Different initialisation strategies in Memetic Algorithms measured by diversity | d(3)
  • Burke et al [42] 1998 | Non-determinisms introduced by selection strategies in graph heuristics | d(3)
  • Burke & Newall [37] 1999 | Multi-stage Evolutionary Algorithm based on Memetic Algorithm | d(3)
  • Terashima-Marin et al [149] 1999 | Genetic Algorithm with indirect coding of the constructive strategies and heuristics | e(12)
  • Caramia et al [48] 2001 | Iterated algorithm with novel improving factors | a(13), b(13), c(5)
  • Di Gaspero & Schaerf [73] 2001 | Adaptive tabu list and cost function in Tabu Search | b(11), c(5), d(3)
  • Di Gaspero [72] 2002 | Multiple neighbourhood Tabu Search | b(7), d(3)
  • White & Xie [156] 2001 | Tabu Search with long term memory | b(2)
  • White & Xie [157] 2004 | Relaxation on long and short term tabu lists | b(7)
  • Paquete & Stutzle [120] 2002 | Tabu Search with Lex-tie and Lex-seq strategies in the objective function | b(8)
  • Merlot et al [110] 2003 | Constraint programming as initialisation for Simulated Annealing and hill climbing | a(12), b(12), c(5), d(2)
  • Casey & Thompson [55] 2003 | GRASP with modified Saturation Degree initialisation and Simulated Annealing improvement | b(10)
  • Burke & Newall [38] 2003 | Great Deluge with adaptive ordering as the initialisation | b(11)
  • Burke & Newall [39] 2004 | Graph heuristics with adaptive heuristic modifier to dynamically order the exams | b(11), d(3)
  • Burke & Bykov et al [18] 2004 | Time-predefined Great Deluge and Simulated Annealing | b(13), d(2)
  • Asmuni et al [8] 2004 | Fuzzy rules with Largest Degree, Saturation Degree and Largest Enrolment | b(12)
  • Asmuni et al [9] 2006 | Fuzzy evaluation function with multiple criteria | b(12)
  • Ross et al [138] 2004 | Genetic Algorithm evolving constructive strategies and heuristics | e(12)
  • Burke et al [22] 2005 | Hybridising graph heuristics in hyper-heuristic by CBR and systematic strategies | b(4)
  • Cote et al [63] 2005 | Bi-objective Evolutionary Algorithm with local search operators in the recombination process | b(12)
  • Kendall & Hussin [94] 2005 | Tabu Search based hyper-heuristic | b(8)
  • Yang & Petrovic [161] 2005 | Similarity measure using fuzzy set on selecting hybridisations of Great Deluge and graph heuristics | b(12)
  • Abdullah et al [3] 2007 | Large neighbourhood search with tree-based neighbourhood structure | b(12), c(5)
  • Abdullah et al [4] 2007 | Tabu Search based large neighbourhood search... ..."

Table 1: Benchmark Datasets

in Feature Selection and Kernel Design via Linear Programming
by Glenn Fung, Romer Rosales, R. Bharat Rao
"... In PAGE 4: ... For our experimental evaluation we used a collection of nine publicly available datasets, part of the UCI repository. A summary of these datasets is shown in Table 1. These datasets are commonly used in machine learning as a benchmark for performance evaluation.... ..."

Table 1. Benchmark Datasets

in A pitfall and solution in multi-class feature selection for text classification
by George Forman 2004
"... In PAGE 6: ... We performed our evaluations on the Cora dataset, plus 18 other text datasets provided by Han and Karypis (2000). Refer to Table 1. The classification tasks are drawn from standard benchmarks such as Reuters, OHSUMED, and TREC, among others.... ..."
Cited by 10

Table 3: Dynamic benchmark characteristics.

in Improving the Accuracy of Indirect Branch Prediction via Branch Classification
by John Kalamatianos, David R. Kaeli
"... In PAGE 6: ... For each benchmark, the table lists the number of ST and MT branches (only jsr are classified as ST since all jmp instructions are conservatively considered to be MT). The dynamic characteristics of the benchmarks are shown in Table 3. They include the input file, the size of the trace, the percentage of indirect branches (IB), the percentage of indirect branches that are MT and ST indirect branches, the number of indirect branches executed per instruction and per conditional... In PAGE 7: ... branch, and the average number of targets per indirect branch. As we can see from Table 3, most benchmarks execute more ST branches than MT branches. This is expected since ST branches are library function calls.... ..."