### Table 6: Qualitative video analysis of the behaviour of S6 by a human classifier. The confidence factors ★, ★★ and ★★★ correspond to a vague observation, a good observation and a confident observation of a behaviour, respectively.

2005

### Table V. Results with the best MSEtst found

As can be observed, using the granularity that produces the best MSEtst for one specific learning method in another learning method generally yields good behaviour. In some cases there is a large performance improvement with respect to the best MSEtst found when considering the same number of labels for all the variables. In other cases the improvement is very small, and in some cases the accuracy even decreases. We now present a similar study, but considering the fuzzy partition granularity that produces the FRBS with the best MSEtra:

2000

Cited by 5

### Table 2 shows the CPU usage. For high loads, HA Replication is almost saturated, with a usage of 99%. In summary, our replication algorithm shows good behaviour for long client interactions. The overhead of replicating ONTs is not perceptible in terms of throughput. Since the main motivation of our algorithm precisely targets long transactions, the attained results are very promising.

2006

"... In PAGE 8: ...replication algorithm are very close (Fig. 6). Both are able [Figure 5. J2EEAS benchmark] to reach the target till an Ir of 7....

| Configuration | CPU Utilization, Ir=3 | CPU Utilization, Ir=8 |
| --- | --- | --- |
| JBoss Non-Replicated | 27% | 60% |
| HA Replication | 42% | 99% |

Table 2. CPU usage in J2EEAS benchmark ..."

Cited by 3

### Table 1. Results for the Enum example

The numbers show that all provers benefited from the reduction of axioms, but there were enormous differences: very significant improvements were made by Spass and 3TAP, while the other three provers benefited less. The time necessary to prove theorems was reduced by a factor of three on average. A possible explanation for the good behaviour of Protein, Otter and Setheo is that their proof search concentrates on the distinguished goal clause. E.g., if we intentionally take a wrong goal clause, the time Setheo needs to find a proof

1998

Cited by 17

### Table 1 (resp. Table 2) gives the quadratic errors of the estimators of the first (resp. second) parameter for each sample size and different dimensions kn.

In each case, one can notice that R(kn) looks like a convex function of the dimension kn, and too large a kn gives bad estimates by increasing the variance of the estimate. It also appears that, for the first example, the best dimension selected by the estimation procedure is reasonably close to the theoretical "optimal" dimension. This last point illustrates the good behaviour of our estimator. In a real-life study, this quadratic error criterion cannot be computed; on the other hand, it is clear from the above simulation that the quality of the estimator depends considerably
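The convex shape of R(kn) in kn, with bias dominating for small kn and variance for large kn, can be illustrated with a small sketch. The polynomial basis, sample sizes and target function below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: the basis dimension k_n trades bias
# (k_n too small) against variance (k_n too large), so the test
# error R(k_n) is roughly convex in k_n.
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)
x_test = rng.uniform(-1.0, 1.0, 1000)
y_test_true = np.sin(2 * np.pi * x_test)

def quadratic_error(k):
    # Least-squares fit on a polynomial basis of dimension k
    # (degree k - 1), evaluated against the noiseless truth.
    coef = np.polyfit(x, y, deg=k - 1)
    return float(np.mean((np.polyval(coef, x_test) - y_test_true) ** 2))

errors = {k: quadratic_error(k) for k in range(2, 16)}
best_k = min(errors, key=errors.get)  # dimension minimising R(k_n)
```

A dimension that is far too small leaves a large approximation bias, while the well-chosen one drives the error close to the noise floor, which is the pattern the simulation study reports.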

### Table 4.13: Stability Margin 1/‖Tzw‖∞

The behaviour of the control system is characterised in terms of the closed-loop norms in Table 4.13 and Table 4.14 on the next page, in fixed operating points only. It would be substantially more complicated to analyse a gain-scheduled control scheme as a linear parameter-varying system, and the conditions for robust stability and robust performance would only be sufficient in this framework, so in practice one often resorts to nonlinear simulation studies. Yet the simple numbers 1/‖Tzw‖∞ and ‖Ted‖∞ already give a good indication of the control system's behaviour. The system exhibits good stability properties over a wide range of operating conditions; at low manifold pressure, however, and especially during idle operation, the stability is compromised.
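For reference, the norm underlying this stability margin is the standard H∞ norm (textbook definition, not specific to this work):

$$\|T_{zw}\|_\infty \;=\; \sup_{\omega}\,\bar\sigma\bigl(T_{zw}(j\omega)\bigr),$$

i.e. the worst-case gain over all frequencies, with $\bar\sigma$ the largest singular value. A small $\|T_{zw}\|_\infty$, and hence a large margin $1/\|T_{zw}\|_\infty$, means the closed-loop transfer from disturbance $w$ to output $z$ stays small at every frequency.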

### Table 1: Behavior of Lyapunov spectra in the 3DFPU model. k and E/N are the coupling constant and the total energy density, respectively. The sign L means linear behaviour of the Lyapunov spectrum, while the sign C means curved behaviour. The curved Lyapunov spectra are in good agreement with those of the other four models.

1998

Cited by 2

### Table 1: Pearson correlation matrix for behavioural model constructs

"... In PAGE 8: ...forward contracts to sell wool. About 34% of respondents had no experience using forward contracts. Analysis of correlations is a good method of testing the strength of association between two variables. Table 1 is the correlation matrix for the variables tested in our behavioural model; it also shows the abbreviations of the variable items used herein. The highest correlation coefficient is between relative advantage and intention to adopt forward contracts (r = 0.... ..."
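A Pearson correlation matrix of the kind reported in Table 1 can be computed in a few lines. The construct names and simulated responses below are purely hypothetical stand-ins for the survey data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # hypothetical number of survey respondents

# Simulated Likert-style scores for three illustrative constructs;
# the names are stand-ins, not the survey's actual items.
relative_advantage = rng.normal(4.0, 0.8, n)
# Build "intention to adopt" to correlate strongly with relative
# advantage, mirroring the pattern reported in the text.
intention_to_adopt = 0.7 * relative_advantage + rng.normal(0.0, 0.5, n)
perceived_risk = rng.normal(3.0, 1.0, n)

data = np.column_stack(
    [relative_advantage, intention_to_adopt, perceived_risk]
)
corr = np.corrcoef(data, rowvar=False)  # 3x3 Pearson correlation matrix
```

Each off-diagonal entry `corr[i, j]` is the Pearson r between two constructs; the matrix is symmetric with ones on the diagonal, which is why papers usually print only its lower triangle.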

### Table 8: Goodness of Fit (Actual vs. Simulated)

"... In PAGE 26: ...Goodness of fit. As an exercise to examine the behaviour of the estimated model relative to the actual data, I simulated a dataset of the same size off the estimated parameters. Table 8 displays the means and standard deviations of the simulated and real data by Wave, calculated for susceptibles only, since the behavioral response of susceptibles is the key phenomenon of interest here. Table 8 suggests that the estimated model does a mediocre job of mimicking the first two moments of the real data, but also suggests why any such model will have difficulties. Beyond Wave 4, the real and simulated data match very well, but the simulated data significantly underpredict partners, both mean and standard deviation, in the first four Waves.... ..."

### Table 2: Influence of branching criterion on the performance of the algorithm

A second series of tests was designed to assess the role of the initial upper bound on the behaviour of the algorithm. It is to be expected that a good upper bound will allow more branches of the enumeration tree to be pruned at earlier levels, and therefore speed up running times. In Table 3, we compare the speedup achieved when using the optimal value z as an upper bound versus using +∞. It appears that a priori knowledge of a good initial upper bound has little effect on computing times. Clearly, the sooner the optimal solution is found by the algorithm, the smaller the advantage of starting with a good upper bound. However, in some situations an efficient heuristic could be useful in improving the efficiency of the algorithm.

"... In PAGE 12: ... Explore first the branch corresponding to fixing that variable at its upper bound. As can be observed in Table 2, the number of nodes that have to be explored before obtaining an optimal solution and proving its optimality is very sensitive to the selected branching criterion. The best results have been obtained under criteria BR2 and BR3, which consistently outperformed criteria BR1 and BR4.... ..."
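The pruning role of an initial upper bound described in this snippet can be sketched with a minimal branch-and-bound for a toy assignment problem (an illustration of the principle, not the paper's algorithm). Supplying the optimal value as the starting upper bound lets the search prune earlier, so fewer nodes are explored:

```python
# Minimal branch-and-bound sketch: assign one worker per task at
# minimum total cost, pruning any node whose lower bound cannot
# improve on the incumbent upper bound.
def assignment_bnb(cost, initial_ub=float("inf")):
    n = len(cost)
    best = [initial_ub]   # incumbent upper bound
    explored = [0]        # nodes visited in the enumeration tree

    def lower_bound(task, used, acc):
        # Cost so far plus, for each remaining task, its cheapest
        # still-unused worker (an optimistic completion).
        lb = acc
        for t in range(task, n):
            lb += min(cost[t][w] for w in range(n) if w not in used)
        return lb

    def recurse(task, used, acc):
        explored[0] += 1
        if lower_bound(task, used, acc) >= best[0]:
            return  # prune: this subtree cannot beat the incumbent
        if task == n:
            best[0] = acc  # new incumbent
            return
        # Branch on the cheapest options first.
        for w in sorted(range(n), key=lambda w: cost[task][w]):
            if w not in used:
                recurse(task + 1, used | {w}, acc + cost[task][w])

    recurse(0, frozenset(), 0)
    return best[0], explored[0]

cost = [[9, 2, 7], [6, 4, 3], [5, 8, 1]]
opt, nodes_cold = assignment_bnb(cost)                # start with +inf
_, nodes_warm = assignment_bnb(cost, initial_ub=opt)  # start with z
```

With the optimal value as the starting bound, the run only has to certify optimality, which mirrors the z-versus-+∞ comparison described for Table 3; as in the snippet, how large the saving is depends on how quickly the cold-start run finds the optimum on its own.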