Results 1 - 10 of 62,783

Table 1: The precision of the GodClasses detection strategy (columns: Set, Positive, Negative, Original parameters, Tuned parameters)

in object-oriented
by Petru Florin Mihancea, Radu Marinescu

Table 2: The precision of the DataClasses detection strategy (columns: Set, Positive, Negative, Original parameters, Tuned parameters)

in object-oriented
by Petru Florin Mihancea, Radu Marinescu
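
The detection strategies evaluated in these two tables combine object-oriented metrics with thresholds through logical operators, and tuning means adjusting those thresholds. As a hedged illustration, a God Class rule might look like the sketch below; the metric names follow Marinescu's detection-strategy literature, but the threshold values are illustrative assumptions, not the tuned parameters from these papers.

```python
# Minimal sketch of a metrics-based "God Class" detection strategy.
# The metric names (WMC, ATFD, TCC) come from the detection-strategy
# literature; the threshold defaults below are illustrative assumptions,
# not the original or tuned parameters reported in the tables above.

from dataclasses import dataclass

@dataclass
class ClassMetrics:
    wmc: float   # Weighted Method Count (class complexity)
    atfd: int    # Access To Foreign Data
    tcc: float   # Tight Class Cohesion, in [0, 1]

def is_god_class(m: ClassMetrics,
                 wmc_very_high: float = 47,   # illustrative threshold
                 atfd_few: int = 3,           # illustrative threshold
                 tcc_one_third: float = 1 / 3) -> bool:
    """Flag a class that is complex, uses much foreign data, and is
    poorly cohesive; tuning these thresholds is what the cited
    experiments evaluate for precision."""
    return (m.wmc >= wmc_very_high
            and m.atfd > atfd_few
            and m.tcc < tcc_one_third)

print(is_god_class(ClassMetrics(wmc=80, atfd=10, tcc=0.1)))  # True
```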

Table 1: Jetset and Ariadne defaults and optimized parameter sets. The first column gives the result of the tuning with an evolution strategy; the second was obtained by conventional tuning [3].

in Applying Unconventional Methods To Tune High Energy Physics Models To Data
by C. Busch, K. H. Becks
Cited by 1

Table 1. Experimental results for several bandit algorithms. The strategies are compared on several datasets. The R-x datasets correspond to a maximization task with random Gaussian levers (the higher the score, the better). The N-x datasets correspond to a minimization task with levers representing retrieval latencies (the lower the score, the better). The numbers following the strategy names are the tuning parameters used in the experiments.

in Multi-armed bandit algorithms and empirical evaluation
by Joannès Vermorel, Mehryar Mohri 2005
"... In PAGE 9: ...Although we realize that most of the algorithmswe pre- sented were designed for the case where the number of rounds is large compared to the number of lever, we believe (see here below or [4]) that the configuration with more levers than rounds is in fact an important case in practice. Table1 (columns R-100, R-1k and R-10k) shows the results of our experiments obtained with 10 000 simulations. Note that the numbers following the name of the strategies correspond to the tuning parameter values as discussed in section 2.... In PAGE 10: ... The bandit strategies have been tested in two configurations: 130 rounds and 1300 rounds (corresponding respectively to 1/10th of the dataset and to the full dataset). Table1 (columns N-130 and N-1.3k) shows the results which correspond to the average retrieval latencies per round in milliseconds.... ..."
Cited by 1
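
To make the setup concrete: below is a minimal sketch of one such tuned strategy, ε-greedy, run against random Gaussian levers. The parameter values, seeds, and helper names are illustrative assumptions, not Vermorel and Mohri's exact protocol; ε plays the role of the tuning parameter appended to the strategy name in the table.

```python
import random

def epsilon_greedy(means, rounds=130, epsilon=0.1, seed=0):
    """Play a K-armed bandit with Gaussian levers for a fixed number of
    rounds; epsilon is the tuning parameter (exploration rate).
    Returns the average reward per round (maximization task)."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    estimates = [0.0] * k
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                           # explore
        else:
            arm = max(range(k), key=lambda i: estimates[i])  # exploit
        reward = rng.gauss(means[arm], 1.0)                  # Gaussian lever
        counts[arm] += 1
        # incremental mean update of this lever's reward estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / rounds

# e.g. 100 levers with random means, echoing the R-100 dataset description
levers = [random.gauss(0, 1) for _ in range(100)]
print(epsilon_greedy(levers, rounds=1000, epsilon=0.1))
```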

Table 9 for monolingual collections and in the bottom half for bilingual retrieval. We also tuned the parameters of this blind query expansion, as illustrated in the "best performance" row, showing the best average precision that could be achieved using this strategy (the corresponding parameter setting is given in the following row). Average precision (% change)

in Abstract Cross-Language Information Retrieval: Experiments Based on CLEF 2000 Corpora
by Jacques Savoy
"... In PAGE 25: ... Table9... ..."

Table 1: Summary of Parameter Setting Strategy and Results

in Toward a Dynamic Model of Early Algebra Acquisition
by Benjamin A. MacLaren, Kenneth R. Koedinger 1996
"... In PAGE 5: ... For instance, moving to non-integers involved tuning the parameters for buggy arithmetic, moving to start-unknown involved tuning parameters for unwind productions, and moving to symbolic problems involved parameters to interpreting symbols. Table1 shows the central productions in the model that we tuned (in the left-most column) and for each problem type (along the top) it shows what productions apply for that type. For example, for easy story arithmetic (Arth Easy Stry) there are two strategy selection productions, Select*Verbal-Arithmetic and GiveUp-Problem.... In PAGE 5: ... Table1 also shows for each production we tuned, what group of problems we tuned it for (XX) and what group it also applies to (X). Finally, it also shows the resulting parameters: the estimated probability for success if that production fires (R) and the sum of the production cost and estimated cost-to-goal after firing that production, A+B (measured in seconds).... ..."
Cited by 1
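
The parameters R and A+B quoted above match ACT-R's conflict-resolution scheme, under which the production with the highest expected gain fires; assuming that reading (the notation below is ACT-R's, not spelled out in the snippet):

```latex
% ACT-R conflict resolution (assumed reading of R and A+B above):
% the production with the highest expected gain E is selected.
\[
  E = P\,G - C, \qquad P = q\,r, \qquad C = a + b
\]
% r     : probability of eventual success if the production fires (the "R" above)
% a + b : production cost plus estimated cost-to-goal (the "A+B" above, in seconds)
% q     : probability the production itself succeeds;  G : value of the goal
```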

Table 1: Empirical performance comparison of the two sequential strategies Sk and the sequential universal strategy with the default sequential strategy S1 on a set of theorem proving examples. The table is ordered by the last column, i.e. by the relative expected time of the sequential universal strategy. Since the performance of the sequential universal strategy shows only little dependence on t_o, we did no further tuning of this parameter for the sequential measurements. With this setting we computed the expected values of the default sequential strategy S1, the optimal sequential repeating strategy S and the sequential universal strategy for a number of theorem proving problems, which are shown in Table 1. Hereby we used (3) for E(Sk) and, to compute the expected value of the universal strategy, we used the generic formula

in Optimal Parallelization of Las Vegas Algorithms
by Michael Luby, Wolfgang Ertel 1994
"... In PAGE 14: ... Since S1 is the default sequential strategy used by SETHEO (and most other combinatorial search algorithms) we will use its expected value as a reference point to compare the sequential repeating strategy S and the sequential universal strategy . Two of the example theorems listed in Table1 (8-puzzle, queens10) are combinatorial puzzle problems which are easy to formalize in logic. All the other theorems have been selected randomly from a set of several hundred mathematical theorems which SETHEO is able to prove.... In PAGE 14: ... All the other theorems have been selected randomly from a set of several hundred mathematical theorems which SETHEO is able to prove. The o set time to for used for computing the gures in Table1 is 100 inferences, since the shortest running time of SETHEO varies between 10 and 100 inferences in most examples.6 6The time required for SETHEO to perform one inference step was used as time unit for our measurements.... In PAGE 15: ...ith t0 = 0. This formula can be applied to any sequential strategy S = (t1; t2; : : :). For parallel strategies p(t) has to be replaced by pk(t). Of special interest are the fourth and the last numeric columns in Table1 , where the time ratios are given. As expected, S is always at least as good as S1 and often much better.... In PAGE 16: ... For the example \mult3 quot;, even with 1000 processors a speedup of 870 can be achieved. Close to linear speedup has also been observed in all the other examples listed in Table1 . For the last example \s8t1-i10 quot; the speedup Spopt of Sk is clearly sublinear, even for a small number of processors.... ..."
Cited by 11
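
The sequential universal strategy referenced here is, on our reading, the Luby-style universal sequence of restart cutoffs for Las Vegas algorithms; below is a minimal sketch of generating those cutoffs. The scaling by t_o = 100 inferences mirrors the SETHEO setting quoted above; everything else is an illustrative assumption.

```python
def luby(i: int) -> int:
    """i-th term (1-indexed) of the Luby universal restart sequence:
    1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ..."""
    k = 1
    while (1 << k) - 1 < i:            # find k with 2^k - 1 >= i
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)            # i ends a block: emit 2^(k-1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the prefix

# Cutoff times for a restarting Las Vegas solver, scaled by the offset
# t_o (here 100 "inferences", mirroring the SETHEO setting above).
t_o = 100
cutoffs = [t_o * luby(i) for i in range(1, 16)]
print(cutoffs)  # [100, 100, 200, 100, 100, 200, 400, ...]
```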

Table 1: ISCAS benchmark circuits. Algorithm TS-CPP was coded in C. The codes of algorithms asp and cep, also in C, are those kindly given by their authors, M. Davis-Moradkhan and C. Roucairol. Extensive numerical results obtained on a Sun SPARCstation-2 and reported by Andreatta [1] are available by request from the authors. We also note that the weights of the cost function have been fixed throughout all computational experiments at 50 and 20. These values satisfy the conditions established in Section 4.4 and their ratio ensures that no subcircuit will have more than L + 2 inputs and pseudo-inputs. The first part of our computational experiments was devoted to tuning the best parameter values and strategies for algorithm TS-CPP. Three aspects have been evaluated:

in A Graph Partitioning Heuristic for the Parallel Pseudo-Exhaustive Logical Test of VLSI Combinational Circuits
by Alexandre A. Andreatta , Celso C. Ribeiro 1994
"... In PAGE 21: ... The objective was twofold: rst, to tune the pa- rameter values for the tabu search algorithm; second, to compare and evaluate its e ciency with respect to other algorithms proposed in the literature. We give in Table1 the basic description of each circuit: the number of inputs, gates, outputs and links, as well as the maximum in-degree d? max and the maximum out-degree d+ max among all gates in the circuit.... ..."
Cited by 11

Table 1. Parameter Tuning

in EYELASH REMOVAL METHOD FOR HUMAN IRIS RECOGNITION
by D. Zhang, D. M. Monro, S. Rakshit
"... In PAGE 3: ...5 0.3] Table1 shows some EER results by varying each of the parameters in turn around this chosen operating point. It is seen that the performance is quite sensitive to all parameter settings except Var_Grad.... ..."