Results 11 - 20 of 47,282
Table 1. Response times from a simple detection experiment (after Townsend and Nozawa, 1995). Small (s) and large (l) circles were presented at left and right locations on a computer monitor.
"... In PAGE 20: ... Typically the midpoint of [np] and [np] + 1 is used, but other weighting schemes can be used as well (Davis amp; Steinberg, 1983). As an example, consider the data in conditions Left(s) in Table1 . We will estimate the median, the p = :50 quantile.... In PAGE 20: ...5th, 25th . . . percentiles { the midpoints of the decile ranges. Thus, the vincentiles separate the sample into q + 1 groups, and the relative frequencies of the middle groups are 1=q, and the relative frequencies of the slowest and fastest groups are 1=2q. Consider the RTs for condition Left(s) shown in Table1 . There are n = 45 observations... In PAGE 25: ... Even with smaller sample sizes, the di erences between the Gaussian kernel estimator and other, more complex estimators are quite small. To illustrate how the Gaussian kernel estimate is computed, consider again the RT data from condition Left(s) given in Table1 . We rst need to determine an appropriate bandwidth parameter hn.... In PAGE 29: ... For density estimates this kernel is actually more e cient than the Gaussian kernel (although, in my experience, there is very little di erence between them, but see Silverman, 1986). The integral of the kernel, necessary for computing ^ F (t), is Z x ?p5 K(u)du = 3 4p5(x ? x3=15) + 1 2: To illustrate how the estimate ^ h(t) is computed, we will work again with the Left(s) data from Table1 . We rst must determine the appropriate smoothing parameter hn, and to do this we will again rely on Silverman apos;s method, described above.... In PAGE 33: ... Prepackaged model- tting applications, such as RTSYS (Heathcote, 1996) or those found in larger applications such as SAS, SPSS or MATLAB, if they do not provide a choice of the search algorithm will typically explain the algorithm that will be used. To illustrate parameter estimation using maximum likelihood, I will step through how the ex-Gaussian distribution was t to the Left(s) data in Table1 . The Gaussian kernel estimate of the density for this sample is shown in Figure 3.... In PAGE 35: ..., 2000). To illustrate the technique, we will t the ex-Gaussian CDF to the Left(s) data from Table1 . The ex-Gaussian CDF is given by F (t) = t ? ? e? t + + 2 2 2 t ? ? 2= ! : (4) We will begin by selecting the same starting values as for the MLE procedure above: ^ = f200; 15; 100g for = f ; ; g.... In PAGE 36: ... In the next half of the chapter, I will present a number of applications of RT distributional analysis that allow for testing of speci c hypotheses concerning the nature and arrangement of mental processes measured by RT. I will illustrate how these techniques are used by way of the data set presented in Table1 . However, the reader should be cautioned that these samples are much smaller than the samples that should be... In PAGE 38: ... The only option is to estimate the variability in the hazard functions by bootstrapping as described above. For purposes of illustration, consider the data in Table1... In PAGE 42: ...42 We can apply this analysis to the data in Table1 . Consider the RTs collected in conditions Both(ii) (LR), Left(i) (LR) and Right(i) (LR), where i is either s or l.... In PAGE 45: ... Using the functions IC(t) and C(t), Townsend and Nozawa (1995) showed that processing in the redundant targets paradigm seemed to occur over parallel channels, and that the response is likely to be based on a coactivation mechanism, although a race mechanism may still be plausible. We can perform the same sort of analysis of the data in Table1 . 
First, let's examine the capacity function.... In PAGE 51: ... We can then construct the samples $\{T^{1}_{11}T^{1}_{22}, T^{2}_{11}T^{2}_{22}, \ldots, T^{m_1}_{11}T^{m_1}_{22}\}$ and $\{T^{1}_{12}T^{1}_{21}, T^{2}_{12}T^{2}_{21}, \ldots, T^{m_2}_{12}T^{m_2}_{21}\}$, where $m_1 = \min\{n_{11}, n_{22}\}$ and $m_2 = \min\{n_{12}, n_{21}\}$. For example, the unordered samples for $T_{11}$ and $T_{22}$ for the data shown in Table 1 are $T_{11} = \{484, 720, 485, \ldots\}$ and $T_{22} = \{536, 369, 430, \ldots\}$. If I wish to test the "minimum" decomposition, that is, that the RTs are of the form $T_{ij} = \min[A(i), B(j)]$, then I need to construct a sample of $\min(T_{11}, T_{22})$: $\{\min(484, 536), \min(720, 369), \min(485, 430), \ldots\} = \{484, 369, 430, \ldots\}$. I also need to do the same for the samples $T_{12}$ and $T_{21}$.... In PAGE 52: ... Dzhafarov and Cortese (1996) showed that sample sizes of at least several hundred were necessary to obtain reasonable power – sample sizes easily obtained in most experiments but considerably larger than ours. Under the assumption of perfect positive interdependence the test proceeds in almost exactly the same way, except that the samples are now formed by $\{T^{(1)}_{11}T^{(1)}_{22}, T^{(2)}_{11}T^{(2)}_{22}, \ldots, T^{(m_1)}_{11}T^{(m_1)}_{22}\}$ and $\{T^{(1)}_{12}T^{(1)}_{21}, T^{(2)}_{12}T^{(2)}_{21}, \ldots, T^{(m_2)}_{12}T^{(m_2)}_{21}\}$, where $m_1$ and $m_2$ are as before, but $T^{(1)}_{11}T^{(1)}_{22}$ (for example) is computed from the ordered samples as given in Table 1. For example, $\min(T_{11}, T_{22}) = \{\min(T^{(1)}_{11}, T^{(1)}_{22}), \min(T^{(2)}_{11}, T^{(2)}_{22}), \ldots\} = \{\min(374, 350), \min(381, 350), \ldots\} = \{350, 350, \ldots\}$.... In PAGE 52: ... The samples for the two random variables are thus constructed, the EDFs for each sample computed, and the Smirnov distance subjected to the decomposition test. Although the decomposition test can't tell us much about the process that produced the data in Table 1, that is only because the sample sizes are too small to provide any power. The decomposition test is a very important method for testing hypotheses about mental architecture, far more powerful than traditional additive factors logic as applied to mean RT data, and should be considered whenever issues of processing stage arrangement are of concern.... In PAGE 56: ... Smith (1990) examined a number of options for filtering noise from the data, and showed how filtering the data after computing the Fourier transform is mathematically equivalent to using a kernel estimator of the density functions. Unfortunately, Smith also showed that recovery of reasonably accurate deconvolved density estimates requires several thousand data points per density, at least, which prevents me from demonstrating the procedure with the data from Table 1. Even with theoretically exact density functions, Smith demonstrated that a number of problems can arise if, e.... ..."
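The excerpt above steps through three concrete computations on the Left(s) sample: a Gaussian kernel density estimate with a Silverman-style bandwidth, a maximum-likelihood fit of the ex-Gaussian distribution, and a fit of the ex-Gaussian CDF in equation (4). The Python sketch below illustrates those steps under stated assumptions: the rt values are placeholders (the Table 1 data are not reproduced on this page), the bandwidth is Silverman's plain rule of thumb rather than whatever refinement the chapter uses, the Nelder-Mead simplex search and the unweighted least-squares criterion for the CDF fit are illustrative choices, and only the starting values {200, 15, 100} come from the excerpt.

import numpy as np
from scipy import optimize, stats

# Placeholder RTs (ms) standing in for the Left(s) column of Table 1.
rt = np.array([484.0, 720.0, 485.0, 536.0, 369.0, 430.0, 374.0, 381.0])
n = len(rt)

# --- Gaussian kernel density estimate with a Silverman rule-of-thumb bandwidth ---
iqr = np.subtract(*np.percentile(rt, [75, 25]))
h_n = 0.9 * min(rt.std(ddof=1), iqr / 1.34) * n ** (-1 / 5)

def kernel_density(t, data, h):
    """Gaussian kernel estimate of the RT density evaluated at the points t."""
    return stats.norm.pdf((t - data[:, None]) / h).sum(axis=0) / (len(data) * h)

grid = np.linspace(rt.min() - 3 * h_n, rt.max() + 3 * h_n, 200)
f_hat = kernel_density(grid, rt, h_n)

# --- Maximum-likelihood fit of the ex-Gaussian distribution ---
def ex_gaussian_logpdf(t, mu, sigma, tau):
    """Log density of the ex-Gaussian (a Gaussian convolved with an exponential)."""
    return (-np.log(tau) + (mu - t) / tau + sigma**2 / (2 * tau**2)
            + stats.norm.logcdf((t - mu) / sigma - sigma / tau))

def neg_loglik(theta, data):
    mu, sigma, tau = theta
    if sigma <= 0 or tau <= 0:
        return np.inf
    return -ex_gaussian_logpdf(data, mu, sigma, tau).sum()

# Starting values {200, 15, 100} for {mu, sigma, tau}, as in the excerpt.
mle_fit = optimize.minimize(neg_loglik, x0=[200.0, 15.0, 100.0], args=(rt,),
                            method="Nelder-Mead")

# --- Least-squares fit of the ex-Gaussian CDF (equation 4) to the empirical CDF ---
def ex_gaussian_cdf(t, mu, sigma, tau):
    return (stats.norm.cdf((t - mu) / sigma)
            - np.exp(-(t - mu) / tau + sigma**2 / (2 * tau**2))
            * stats.norm.cdf((t - mu - sigma**2 / tau) / sigma))

rt_sorted = np.sort(rt)
edf = np.arange(1, n + 1) / n  # empirical CDF evaluated at the sorted RTs

def sse(theta, data, target):
    mu, sigma, tau = theta
    if sigma <= 0 or tau <= 0:
        return np.inf
    return np.sum((ex_gaussian_cdf(data, mu, sigma, tau) - target) ** 2)

cdf_fit = optimize.minimize(sse, x0=[200.0, 15.0, 100.0], args=(rt_sorted, edf),
                            method="Nelder-Mead")

print("MLE estimates:", mle_fit.x, "CDF-fit estimates:", cdf_fit.x)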
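For the decomposition-test construction in the same excerpt, the sketch below builds the min-composed samples under both pairing schemes and computes the Smirnov distance between their empirical distribution functions. The four condition samples are placeholders (the first few T11 and T22 values echo the excerpt; the rest, and all of T12 and T21, are invented), and scipy's two-sample Kolmogorov-Smirnov statistic is used only as a convenient way to obtain the Smirnov distance; how that distance is then evaluated by the decomposition test is a separate matter, described by Dzhafarov and Cortese (1996).

import numpy as np
from scipy.stats import ks_2samp

# Placeholder RTs (ms); the first T11/T22 values follow the excerpt's example.
t11 = np.array([484.0, 720.0, 485.0, 374.0, 381.0])
t22 = np.array([536.0, 369.0, 430.0, 350.0, 352.0])
t12 = np.array([512.0, 455.0, 601.0, 498.0])
t21 = np.array([478.0, 530.0, 410.0, 505.0])

def min_sample(a, b, ordered=False):
    """Pair two condition samples trial by trial and take the elementwise minimum.

    ordered=False pairs the samples as collected (the case worked first in the
    excerpt); ordered=True pairs the sorted order statistics (the perfect
    positive interdependence case). The longer sample is truncated to the
    shorter one, matching m = min(n, n') in the excerpt.
    """
    m = min(len(a), len(b))
    if ordered:
        a, b = np.sort(a), np.sort(b)
    return np.minimum(a[:m], b[:m])

# Samples of min(T11, T22) and min(T12, T21) under each pairing scheme, and the
# Smirnov (two-sample Kolmogorov-Smirnov) distance between their EDFs.
for ordered in (False, True):
    s1 = min_sample(t11, t22, ordered)
    s2 = min_sample(t12, t21, ordered)
    d = ks_2samp(s1, s2).statistic
    label = "ordered pairing" if ordered else "unordered pairing"
    print(label, "Smirnov distance =", round(float(d), 3))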
Table 3 Applying Execution Monitoring
"... In PAGE 4: ... The patient receives care in return for fees. The corresponding process model is relatively simple: the patient pays only after receiving the care, according to the execution monitoring pattern, applied from the perspective of the patient ( Table3 ). So there is direct quality feedback.... ..."
Table 8. Applicability of the EEE procedures generated in the study.
2001
"... In PAGE 57: ... The different methods used in the experiment were developed into the EEE procedures. Table8 shows collectively the strengths and weaknesses of the EEE procedures and indicates which procedure is useful in each instance. The relative suitability of various methods to incorporating usability issues at the different stages of the design process has been discussed by van Vianen et al.... In PAGE 58: ... The separate issues specific to the different EEE procedures used in the papers are represented in the small boxes. In Table8 , the EEE procedures are presented as a more common approach applicable to industrial usability testing. With the EEE procedures, the customer perceptions are obtained on a quantitative scale and in numerical terms as perceived by the subjects (customers, users).... ..."
Table 4: Simple modelling of some NACCH signalling procedures
"... In PAGE 3: ... PERFORMANCE TRIALS The interference induced by the NACCH and the resulting data rate on the NACCH was investigated by dynamic dis- crete-event simulations of a complex radio network model [5]. Table4 shows simple assumptions on the packet sizes of some NACCH signalling procedures that were used. Notice that the broadcast channel is modelled to be perma- nently active in downlink with the same constant power as a normal traffic channel.... In PAGE 4: ... Below this threshold, one channel is sufficient in conjunction with a congestion res- olution mechanism. As a reference, simulations were done with signalling phases 10 times as long as specified in Table4 (e.... ..."
Table 4 A simple example of the microsimulation procedure for the modelling of migration and survival
"... In PAGE 7: ...Table4 depicts the steps that need to be followed in the procedure for modelling survival and migration. It should be noted, however, that the example depicted in Table 4 is simplified in order to illustrate the process.... ..."
Table 1. The recursive plan generation procedure
Table 8 Likelihood of Opposition Modeled by Procedural and Text Indicators (Simple Probit)
"... In PAGE 28: ... Overall, it can be concluded that the data provide some preliminary empirical evidence for the validity of hypotheses H2. Finally, Table8 shows two further regressions. In column 8A the likelihood of an opposition is modeled using both procedural indicators and text indicators at the same time.... In PAGE 29: ...Insert Table8 about here Surprisingly, in this joint model of procedural and text indicators, only the number of application claims turns out to have a significant coefficient among all the text indicators. Again, the explanation of this result may only be preliminary and was not necessarily to be expected according to Table 3.... ..."
TABLE I OLSwB MODELING PROCEDURE FOR THE SIMPLE FUNCTION EXAMPLE.
Table 1. Synthetic data generation
"... In PAGE 5: ... Each group has an associated type T. We used the generative process described in Table1 to generate a dataset with NO objects and GS average group size, using the settings specified below. The procedure uses a simple model where X1 has an autocorrelation level of 0.... ..."