Results 1 - 10 of 199,089

Table 2: Boosting with dynamic features

in Simple training of dependency parsers via structured boosting
by Qin Iris Wang 2007
"... In PAGE 6: ...tures, we added these additional features to the local predic- tion model and repeated the previous boosting experiment. Table2 shows a significant further improvement in parsing accuracy over just using the static features alone (Table 1). Once again, however, boosting provides further improvement over the base model on both English and Chinese with re- spect to dependency accuracy.... ..."
Cited by 2
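
The excerpt above contrasts a model boosted on static features alone with one boosted after adding dynamic features. As a minimal sketch of that general pattern (not the paper's parser or its structured-boosting procedure; the feature arrays here are hypothetical):

    # Hedged illustration: augment the static feature set with additional dynamic
    # features and boost again, then compare training accuracy of the two models.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)
    X_static = rng.normal(size=(500, 10))     # hypothetical static features
    X_dynamic = rng.normal(size=(500, 4))     # hypothetical dynamic features
    y = (X_static[:, 0] + 2.0 * X_dynamic[:, 0] > 0).astype(int)

    static_only = AdaBoostClassifier(n_estimators=50).fit(X_static, y)
    X_all = np.hstack([X_static, X_dynamic])
    augmented = AdaBoostClassifier(n_estimators=50).fit(X_all, y)
    print("static only     :", static_only.score(X_static, y))
    print("static + dynamic:", augmented.score(X_all, y))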

Table 3: Results with merged trees.

in MiniBoosting Decision Trees
by J. R. Quinlan 1986
"... In PAGE 10: ... For each run, C1, C2, and C3 were found, merged into a single tree C1:2:3, and this tree further reduced to C0 1:2:3 by the removal of unpopulated leaves. Only 24 of the previous 27 datasets could be processed in this way { three of the largest gave rise to merged trees so huge that virtual memory was exhausted! Results for these 24 datasets appear in Table3 . The rst part shows the sizes of these trees as measured by their numbers of leaves.... In PAGE 10: ... The great majority of leaves on C1:2:3 have no corresponding training cases because, when these unpopulated leaves are removed, the resulting tree C0 1:2:3 is much smaller; on average, it is just under four times as big as C1. The second section of Table3 concerns error rates of the merged trees when used to classify unseen cases, expressed as percentages and ratios to the error rates of the single tree C1. Overall, as expected, C1:2:3 performs very similarly to the boosted classi er CB { the geometric mean of the ratios for these datasets, .... ..."
Cited by 2605
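
The key step the excerpt describes is pruning a merged tree by deleting leaves that no training case reaches. A hedged sketch of that step on a toy dict/tuple tree (not Quinlan's implementation or his tree format):

    # Remove "unpopulated" leaves: recursively drop any subtree that no training
    # example is routed into, collapsing the parent onto the surviving child.
    def prune_unpopulated(tree, examples):
        if tree[0] == 'leaf':
            return tree if examples else None
        _, feature, threshold, left, right = tree
        left_ex = [x for x in examples if x[feature] <= threshold]
        right_ex = [x for x in examples if x[feature] > threshold]
        left_p = prune_unpopulated(left, left_ex)
        right_p = prune_unpopulated(right, right_ex)
        if left_p is None:
            return right_p
        if right_p is None:
            return left_p
        return ('node', feature, threshold, left_p, right_p)

    # Hypothetical merged tree over one feature; only two of its three leaves
    # receive a training case, so the pruned tree keeps just those two.
    merged = ('node', 0, 0.5,
              ('node', 0, -1.0, ('leaf', 'A'), ('leaf', 'B')),
              ('leaf', 'C'))
    print(prune_unpopulated(merged, [[-2.0], [1.0]]))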

Table 1. State graphs of the dynamic control

in Verification Experiments on the MASCARA Protocol
by Guoping Jia, Susanne Graf 2001
"... In PAGE 11: ... 3.4 Complexity Table1 gives an overview of a subset of the state graphs we have generated using di erent reduction techniques and allows to compare their sizes. Execution Time With respect to execution time, the following observation can be made: execution times are roughly proportional to the size of the generated graphs, which means that the di erent reduction methods do not introduce any signi cant overhead.... In PAGE 11: ... For partial order reduction it is the case be- cause we use a simple static dependency relation. Table1 shows only minimization results for relatively small graphs (ap4a and mt4a) so that minimization time is small anyway. Nev- ertheless, it can be seen that minimization for observational equivalence is more expensive than for safety equivalence, as the computation of the transitive closure transition relation \ a quot; is required (where represents a non-observable and a an observable transition).... In PAGE 11: ... For the considered system, we get similar reductions when slicing according to the 4 main sub-protocols (1 to 2 addi- tional orders of magnitude), where connection opening is slightly more complicated than the others (it involves more signal exchanges than the others), and thus we get a bit less reduction. It was impossible to generate the state graph of the global system as a whole, thus we started to consider ap and mt in isolation (see rst two parts of the Table1 ). Finally, we were able... In PAGE 13: ... This lead to a number of error traces which we considered to be \probably because of too loose assumptions on the environment quot; and we added corresponding restrictions for subsequent veri cations. The state graphs mentioned in Table1 have been obtained using the most restrictive environment. Property: Association Establishment Req1.... ..."
Cited by 13
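
The excerpt notes that minimization for observational equivalence needs the weak transition relation "τ*a": states reachable through any number of non-observable τ steps followed by one observable step. A hedged sketch of that closure on a toy labelled transition system (the dict encoding is an assumption, not the verification tool's data structure):

    # Compute weak "tau* a" successors: follow tau transitions transitively, then
    # take one observable step labelled `action`.
    from collections import deque

    def tau_closure(lts, state):
        seen, queue = {state}, deque([state])
        while queue:
            s = queue.popleft()
            for label, target in lts.get(s, []):
                if label == 'tau' and target not in seen:
                    seen.add(target)
                    queue.append(target)
        return seen

    def weak_successors(lts, state, action):
        return {t for s in tau_closure(lts, state)
                  for label, t in lts.get(s, [])
                  if label == action}

    # Toy system: s0 -tau-> s1 -a-> s2, and s0 -a-> s3
    lts = {'s0': [('tau', 's1'), ('a', 's3')], 's1': [('a', 's2')]}
    print(weak_successors(lts, 's0', 'a'))   # {'s2', 's3'} (set order may vary)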

Table 1: Threshold values extracted from dynamic programming data, for implementations using Fork on the SB-PRAM and using MPI on the Xeon cluster. The numbers indicate that if a subtask is smaller than this value multiplied by the problem size, serialisation is the optimal strategy.

in Load Balancing of Irregular Parallel Divide-and-Conquer Algorithms in Group-SPMD Programming Environments
by Mattias Eriksson, Christoph Kessler, Mikhail Chalabine 2005
"... In PAGE 7: ... Now we need to find threshold values on n and p. In Table1 we have extracted such threshold values, for a few chosen n and p from dynamic programming data. The table should be read as this: the values represent a ratio, r,such that if one subtask has a size n i with n i n lt;rthen the optimal strategy is to serialise the execution.... ..."

Table II. Symbols and Labels Used in CUA Task Diagrams. Columns: Category, Symbol/Label, Description

in Task Analysis for Groupware Usability Evaluation: Modeling Shared-Workspace Tasks with the Mechanics of Collaboration
by David Pinelle, Carl Gutwin, Saul Greenberg 2003
Cited by 45

Table 1: Time used to execute test case with boosting

in Solving the Priority Inversion Problem in legOS
by Michael Haugaard Pedersen, Morten Klitgaard Christiansen, Thomas Glæsner

Table 4. Eliminating philosophers.

... protocol continues, then the philosopher that is leaving the table asks the neighbors for the names of the forks they use, in order to be able to define the new links. When he receives this information, he eliminates his first fork and communicates to the neighbors that they have to share his second fork (see Figure 3). Moreover, in order to be sure that at least one right and one left philosopher stay at the table, the neighbors will share the same fork as their first fork. The function compState(i; self; state) computes the new state of the neighbor i of the leaving philosopher:

    compState(i; self; state) def=
        rec := (fork1 : state:fork2; phil1 : state:phil_i)
        if (state:stPhil_i:phil1 = self) then
            rec := rec ] (fork2 : state:stPhil_i:fork2; phil2 : state:stPhil_i:phil2)
        else
            rec := rec ] (fork2 : state:stPhil_i:fork1; phil2 : state:stPhil_i:phil1)
        return rec

in An Actor Algebra for Specifying Distributed Systems: the Hurried Philosophers Case Study
by Mauro Gaspari, Gianluigi Zavattaro 1998
"... In PAGE 14: ... 4.4 Eliminating philosophers In Table4 we show how it is possible to extend the speci cation of dining philosophers by introducing also the possibility of eliminating philosophers ac- tually around the table. Also the process of elimination can start only when a philosopher is thinking.... ..."
Cited by 7
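
The compState definition quoted in the caption above can be read as building an updated record for the leaving philosopher's neighbor. A hedged, runnable rendering with records as Python dicts (the field layout and the reading of "]" as record extension are assumptions about the paper's actor-algebra notation, not its semantics):

    # comp_state(i, self_name, state): new state for neighbour i of the leaving
    # philosopher. The neighbour takes the leaving philosopher's second fork as its
    # first fork; its second fork/neighbour are the ones on its other side, away
    # from the leaving philosopher.
    def comp_state(i, self_name, state):
        st_phil_i = state['stPhil'][i]        # stored state of neighbour i (assumed encoding)
        rec = {'fork1': state['fork2'], 'phil1': state['phil'][i]}
        if st_phil_i['phil1'] == self_name:
            rec.update({'fork2': st_phil_i['fork2'], 'phil2': st_phil_i['phil2']})
        else:
            rec.update({'fork2': st_phil_i['fork1'], 'phil2': st_phil_i['phil1']})
        return rec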

Table 5. Execution Performance

in Online Testing of Real-time Systems
by Kim G. Larsen, Marius Mikucionis, Brian Nielsen
"... In PAGE 14: ...PPAAL tool to record the size of the symbolic state-set (i.e. the number of symbolic states in the state-set) as the test was executed, and to record the amount of CPU time used to compute the next state-set after a delay and an observable action. Table5 summarizes the results. The state-set size is in average only 2-3 symbolic states per state-set, but it varies a lot, up to 44 states.... ..."