### Table 1 The theoretical reasons for this result will be presented later; at this stage, it is sufficient to highlight that any attempt to model or forecast the stock market must be based on successive variations of price rather than on the prices themselves. The classical measure of successive variations is the return, calculated either in discrete

2001

Cited by 3
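The distinction drawn above between prices and successive price variations can be sketched in a few lines; the price series below is illustrative, not from the paper:

```python
import numpy as np

# Hypothetical closing prices (illustrative values only).
prices = np.array([100.0, 102.0, 101.0, 105.0])

# Discrete (simple) return: r_t = (P_t - P_{t-1}) / P_{t-1}
discrete_returns = np.diff(prices) / prices[:-1]

# Continuously compounded (log) return: r_t = ln(P_t / P_{t-1})
log_returns = np.diff(np.log(prices))
```

Both definitions turn a non-stationary price level into a series of variations, which is the form the snippet argues any model or forecast should work with.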

### Table 34 lists the coefficients for the correctness model for the Towns2 data set. The residual deviance of this model (3493.9 on 2634 degrees of freedom) is large, suggesting that the model does not fit the data well. The histograms in Figure 38 confirm this. The addition of higher-order terms may improve the fit of the model, although initial attempts at this failed to do so. Smoothing techniques may be more appropriate for the analysis of this data.

"... In PAGE 97: ...90 -2.4812 Table 34: Coefficients for the correctness model for group-number GCAs on Towns2. The fit of the time model for the Towns2 data is also questionable due to a high residual deviance (22985 on 2073 degrees of freedom).... ..."
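The rule of thumb applied in this snippet — a residual deviance far above its degrees of freedom signals lack of fit — can be checked with a normal approximation to the chi-squared distribution. A minimal sketch, using the figures quoted above:

```python
import math

# Residual deviance and degrees of freedom quoted in the snippet.
deviance, df = 3493.9, 2634

# Under a well-fitting model the deviance is approximately chi-squared
# on df degrees of freedom, i.e. mean df and variance 2*df. A large
# standardized excess flags lack of fit.
z = (deviance - df) / math.sqrt(2 * df)
poor_fit = z > 2.0
```

Here z is roughly 12 standard deviations above what a well-fitting model would produce, consistent with the snippet's conclusion.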

### Table 1 contains some comparisons between fatality rates estimated using the Deterministic and Uncertainty Modes in our model and estimates from the Graham (1999) Method. While the Deterministic Mode estimate and the mean and median estimates of the Uncertainty Mode for the sudden failure case under the existing system are reasonably similar to the Graham suggested value, other estimates differ quite significantly. This is to be expected when comparing estimates from a simulation model, which attempts to take into account various site-specific considerations and processes, and a lumped empirical technique with the limitations summarised in Section 1.2.

2003

"... In PAGE 10: ...5 lives at T = -3 hours. An 80% confidence bound on fatality rates estimated using the Uncertainty Mode is presented in Table 1 for all failure and warning and response system cases. The relative width of these confidence intervals varies significantly, from less than 20% of the median estimate for the sudden failure cases to more than 200% for the delayed failure cases.... In PAGE 18: ...ANCOLD 2003 Conference on Dams Page 6 Table 1. Comparison of Simulation Model Estimates for Deterministic and Uncertainty Modes and Graham Method Estimates (columns: Lower, Upper, Mean, Median, Lower Bound (10%), Upper Bound (90%), Relative Confidence Interval Width) b) High No warning Not applicable 0.... ..."

Cited by 4
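The relative confidence-interval width used in this comparison is simply the interval span divided by the median estimate. A minimal sketch with hypothetical quantiles (not the paper's values):

```python
# Hypothetical 10th/50th/90th percentile fatality-rate estimates,
# chosen only to echo the narrow-vs-wide contrast described above.
cases = {
    "sudden failure":  (0.90, 1.00, 1.08),   # (lower, median, upper)
    "delayed failure": (0.10, 0.40, 1.00),
}

# Relative width of the 80% confidence interval as a fraction of the median.
relative_width = {
    name: (upper - lower) / median
    for name, (lower, median, upper) in cases.items()
}
```

With these illustrative numbers the sudden-failure interval is under 20% of the median while the delayed-failure interval exceeds 200%, matching the pattern reported in the snippet.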

### Table A.1. Plots of the distortion are also shown in Figures A-1 and A-2. In order to gain some insight into the effect of radial distortion on our method we will attempt to quantify the difference between the radial distortion model

1995

Cited by 26

### Table 7: Average number of scheduling attempts per instruction and average size of the ready and hold lists for each scheduling attempt during BUcsr code reorganization.

"... In PAGE 65: ... So we measure these two values for four benchmarks. Table 7 shows the data measured during BUcsr code reorganization for the 8-issue model. First we can see that AVAT has always been less than 2.... In PAGE 66: ...for floating-point instructions: it allows only 4 floating-point instructions per cycle, and floating-point instructions compete with load/store instructions for issuing slots. The third column of Table 7 shows the average size of ready_list plus hold_list during scheduling, and the fourth column of Table 7 shows the maximum size of ready_list plus hold_list that ever occurs during code reorganization. We can see not only the average list size but also the range in which the list size fluctuates.... In PAGE 66: ... According to the analysis in Section 3.3 and the data shown in Table 7, we expect that the average behavior of both top-down cycle scheduling and bottom-up code reorganization is quite linear; therefore bottom-up code reorganization should only increase the total prepass scheduling time by a constant. In Figure 18, we show the average time spent on each instruction.... In PAGE 67: ... Also we can see that the numbers for all the benchmarks except fpppp are in one scale, and the number for fpppp is in a totally different scale. These results also correlate to the results in Table 7, in which fpppp has a much longer average list size than other benchmarks.
[Figure 18: Scheduling time per instruction (usec) for TDgreedy, TDips, TDgreedy-BUsimple, and TDgreedy-BUcsr across the benchmarks wc, compress, eqntott, espresso, ijpeg, m88ksim, mpeg_play, go, li, perl, gcc, and fpppp.]... ..."

### Table 1: A posteriori equal error rates (EER) for different model sets and operational modes on a total of 4370 attempts for users and impostors, with 1 trial per attempt. The amount of speech data used for verification is specified: 3s corresponds to the first 3s of each utterance; EOS means the entire sentence is used, with an average duration of 7.1s per sentence.

1995

"... In PAGE 2: ...ng session the a posteriori EER was 5.5%. Therefore this second set of test sentences was used in all the remaining experiments in order to have more realistic conditions. The experimental results on the BREF data are summarized in Table 1. This approach was evaluated in both text-independent and text-dependent modes, for one and two trials per validation attempt.... ..."

Cited by 6
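The equal error rate reported in these experiments is the operating point at which the false-accept and false-reject rates coincide. A minimal threshold-sweep sketch on toy scores (the scores and function name are illustrative, not from the paper):

```python
import numpy as np

def equal_error_rate(user_scores, impostor_scores):
    """Sweep candidate thresholds over all observed scores and return the
    point where the false-accept and false-reject rates are closest."""
    thresholds = np.sort(np.concatenate([user_scores, impostor_scores]))
    best = None
    for t in thresholds:
        frr = np.mean(user_scores < t)       # genuine users rejected
        far = np.mean(impostor_scores >= t)  # impostors accepted
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Toy scores: genuine users tend to score higher than impostors.
users = np.array([0.9, 0.8, 0.7, 0.6, 0.75])
impostors = np.array([0.2, 0.3, 0.4, 0.65, 0.1])
eer = equal_error_rate(users, impostors)
```

On these toy scores the FAR/FRR curves cross at an EER of 20%; production systems would interpolate between thresholds rather than pick the nearest crossing.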

### Table 1 shows the number of binary branching nodes for each of the two decision tree models for both English and German. The complexity of these decision trees validates the data-driven approach, and makes clear how daunting it would be to attempt to account for the facts of comma insertion in a declarative framework.

2002

"... In PAGE 4: ... Table 1: Complexity of the decision tree models in Amalgam. At generation time, a simple algorithm is used to decide where to insert punctuation marks. Pseudo-code for the algorithm is presented in Figure 1.... ..."

Cited by 1

### Table 1: Speaker identification rate (single trial) and equal error rates (EER) for different test data types with multistyle training (left) and type-specific training (right), based on 21775 user attempts and 10908 impostor attempts. The text is known.

2000

"... In PAGE 2: ... MULTISTYLE VS TYPE-SPECIFIC TRAINING Experiments were carried out to assess the influence of the amount and type of data used for training speaker-specific models and for the authorization attempts.3 The first row in Table 1 compares text-dependent speaker identification rates as a function of the utterance type and the training condition (multi-style or type-specific). Multi-style training makes use of all types of read-speech training data for the 10 training calls.... In PAGE 2: ... When type-specific training is used, and testing is carried out on the same type of data, the speaker identification rates are slightly higher for the digits and the SEPT sentences, and slightly lower for the Le Monde sentences. The lower part of Table 1 gives the known-text equal error rates (EER) for the different data types for multistyle and type-specific training. Results are given for 1 and 2 user attempts, with and without a minimal duration constraint.... ..."

Cited by 6

### Table 1 -- MLE Results for the Single Period Binomial Logit Model

1998

"... In PAGE 3: ... Maximum likelihood estimation (MLE) of the binomial logit model yielded parameter estimates and statistics for separation likelihood. (See Table 1.) Interpreting the signs and significance levels of parameter estimates is similar to linear regression.... ..."
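Maximum likelihood estimation of a binomial logit model can be sketched with a few Newton-Raphson steps. This is a generic illustration on synthetic data, not the paper's estimation code:

```python
import numpy as np

def logit_mle(X, y, iters=25):
    """Fit a binomial logit model by Newton-Raphson (IRLS).
    A minimal sketch: no convergence checks or regularization."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        w = p * (1 - p)                        # IRLS weights
        gradient = X.T @ (y - p)
        hessian = X.T @ (X * w[:, None])
        beta += np.linalg.solve(hessian, gradient)
    return beta

# Synthetic data: intercept 0.5, slope 2.0. As with linear regression,
# the sign of the fitted slope gives the direction of the effect on the
# modeled probability.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x)))).astype(float)
beta_hat = logit_mle(X, y)
```

The fitted coefficients land near the true values, and their signs can be read off directly, mirroring the snippet's point that interpreting logit estimates resembles interpreting linear-regression slopes.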