### Table 2 Facts about continuous random variables discussed in this paper

"... In PAGE 15: ... C(20, K) (0.6)^K (0.4)^(20−K). The required probability can also be approximated by using the normal distribution, as shown later in this paper. Continuous random variables. Summarized in Table 2 are facts about some of the most common theoretical continuous random variables that are used in computer system design and analysis. If X has a uniform distribution, its values are restricted to a finite interval, and the probability that the value of X falls in any particular subinterval is the ratio of the length of that subinterval to the length of the whole interval.... In PAGE 15: ...5. Also, by Table 2, E[X] = 15 and Var[X] = 10²/12 = 8.33.... In PAGE 18: ... = 1/0.05. Hence the probability that X does not exceed 75 milliseconds is given by F_X(0.075), which by Table 2 has the value 1 − e^{−1.5} ≈ 0.78. Similarly, the probability that the value of X does not exceed 50 milliseconds is 1 − e^{−1} ≈ 0.63.... ..."
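The quantities quoted in the excerpt are easy to reproduce; a minimal sketch (not the paper's code), reading the interval endpoints [10, 20] off E[X] = 15 with length 10, and the rate 1/0.05 = 20 per second off the "= 1/0.05" fragment:

```python
import math

# Sketch reproducing the excerpt's numbers: a binomial term C(20, K) 0.6^K
# 0.4^(20-K), the uniform facts E[X] = 15 and Var[X] = 10^2/12, and
# exponential CDF values for a service time with rate 20 per second.
def binom_pmf(n, k, p):
    """P(K = k) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Uniform on an interval of length 10 centered at 15 (i.e. [10, 20]):
var_uniform = 10 ** 2 / 12           # = 8.33, matching the excerpt

# Exponential with rate lam: F(t) = 1 - exp(-lam * t).
lam = 1 / 0.05                       # 20 per second
F_75ms = 1 - math.exp(-lam * 0.075)  # P(X <= 75 ms) = 1 - e^{-1.5} ~ 0.78
F_50ms = 1 - math.exp(-lam * 0.050)  # P(X <= 50 ms) = 1 - e^{-1}  ~ 0.63
```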

### TABLE VII. RANDOM VARIABLES USED FOR PROBABILISTIC

1992

### TABLE 1: Comparison of Exact and DSD-Determined Parameters for a Synthesized Time Signal Consisting of a Sum of Exponentially Decaying Oscillations, with Known Frequencies, Amplitudes, Phases, and Damping Constants
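The kind of synthesized test signal the title describes can be sketched in a few lines; the component parameters below are illustrative inventions, not the cited table's values:

```python
import math

# Illustrative sketch of a sum of exponentially decaying oscillations with
# known amplitudes, frequencies, phases, and damping constants.
def damped_sum(t, components):
    """components: iterable of (amplitude, freq_hz, phase_rad, damping) tuples."""
    return sum(a * math.exp(-d * t) * math.cos(2 * math.pi * f * t + ph)
               for a, f, ph, d in components)

components = [(1.0, 5.0, 0.0, 0.3), (0.5, 12.0, math.pi / 4, 0.8)]
signal = [damped_sum(n * 0.01, components) for n in range(1024)]  # 10 ms steps
```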

### Table 4b. Continued

2007

"... In PAGE 22: ... Assumptions. Our evaluation is based on a number of assumptions, mostly drawn from the available literature on Bt cotton experiences in other countries. In this subsection we explain the assumptions and outline the variations considered in each of the five scenarios (Tables 4a and 4b). Adoption Curve. A critical parameter for simulating an adoption curve is the maximum level of adoption or adoption rate for the technology.... In PAGE 24: ... In other words, the cost advantage is net of the technology premium charged for the use of Bt seed as compared to conventional varieties with conventional control. We consulted the literature in order to set the values of the triangular distribution (as shown in Table 4b), emphasizing the lower range of values reported. We set the minimum cost difference at zero.... In PAGE 38: ... Table 4a. Assumptions used in the estimation of the economic surplus model for the adoption of Bt cotton in West Africa. Scenario 1: no adoption in West Africa, adoption in the rest of the world; Scenario 2: WA adopts available private sector varieties; Scenario 3: WA uses West African varieties backcrossed with private sector lines; Scenario 4: WA uses West African varieties backcrossed with private sector lines plus a negotiated premium; Scenario 5: WA uses West African varieties backcrossed with private sector lines and irregular adoption. Maximum adoption rates (%): 0% in WA, 20% in ROW (Scenario 1); 30% in WA, 20% in ROW (Scenarios 2 and 3); 50% in WA, 20% in ROW (Scenario 4); fluctuating adoption in Benin and Mali, 30% in rest of WA, 20% in ROW (Scenario 5). Source(s) of assumptions: based on Cabanilla, et al.... In PAGE 39: ... Table 4b. Continued. Technology fee (US$/ha): Triangular(15, 32, 56) for ROW (Scenario 1); Triangular(15, 32, 56) for WA and ROW (Scenarios 2, 3 and 5); Triangular(9, 19, 34) for WA and Triangular(15, 32, 56) for ROW (Scenario 4). Source(s) of assumptions: Falck-Zepeda et al.... 
..."

### Table 2: Gibbs Sampler timings for a binary (G = 2) image (execution time in seconds per iteration on a CM-5 with vector units)

3.2.1 Iterative Gaussian Markov Random Field Sampler. The Iterative Gaussian Markov Random Field Sampler is similar to the Gibbs Sampler, but instead of the binomial distribution, as shown in step 3.2 of Algorithm 1, we use the continuous Gaussian distribution as the probability function. For a neighborhood model N, the conditional probability function for a GMRF is:

in Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields

1995

"... In PAGE 14: ...omplexity, and steps 2.3 and 2.4 in O(Gn/p) computational time, yielding a computation complexity of T_comp(n, p) = O(n(G + |N_s|)/p), and a communication complexity per iteration of T_comm(n, p) = O(|N_s| √(n/p)) on the CM-2, and T_comm(n, p) = O(|N_s| (τ(p) + √(n/p))) on the CM-5, for a problem size of n = I × J. Table 1 shows the timings of a binary Gibbs sampler for model orders 1, 2, and 4 on the CM-2, and Table 2 shows the corresponding timings for the CM-5. Table 3 presents the timings on the CM-2 for a Gibbs sampler with fixed model order 4, but varying the number of gray levels, G.... ..."
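The per-iteration sweep that these complexities count can be sketched for the binary (G = 2) case; this is a hedged illustration with an Ising-type conditional and invented parameters, not the paper's exact clique model or its data-parallel implementation:

```python
import numpy as np

# One Gibbs-sampler sweep for a binary random field with a first-order
# (4-neighbor) model: each site is resampled from its local conditional.
def gibbs_sweep(field, beta, rng):
    """Resample every site given its 4 toroidal neighbors."""
    rows, cols = field.shape
    for i in range(rows):
        for j in range(cols):
            # Number of the four neighbors currently equal to 1.
            s = (field[(i - 1) % rows, j] + field[(i + 1) % rows, j] +
                 field[i, (j - 1) % cols] + field[i, (j + 1) % cols])
            # P(x_ij = 1 | neighbors): favor agreement with the neighbors.
            p1 = np.exp(beta * s) / (np.exp(beta * s) + np.exp(beta * (4 - s)))
            field[i, j] = 1 if rng.random() < p1 else 0
    return field

rng = np.random.default_rng(0)
img = rng.integers(0, 2, size=(16, 16))
for _ in range(5):
    img = gibbs_sweep(img, beta=0.8, rng=rng)
```

The serial sweep here is O(n) per iteration; the paper's O(n/p)-type terms come from distributing the image over p processors.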

Cited by 7

### Table 1. Pdf's, Moments, and Equivalent Density Functions for Some Continuous Random Variables

1989

"... In PAGE 3: ... 3. DOUBLY STOCHASTIC REPRESENTATIONS FOR THE IRRADIANCE DISTRIBUTIONS. A number of continuous probability-density functions (pdf's) of interest in our study, and their direct moments, are presented in Table 1. Various equivalent representations for the pdf's are also shown.... In PAGE 4: ...4) As indicated above, the K′ distribution, G(x, β, U) ⊛ G(U, a, N), is symmetric in the two degrees-of-freedom parameters, a and β. The distribution denoted G_I in Table 1, a compounding of the gamma and the noncentral χ² distributions, is also a compounding of two gamma distributions with a Poisson distribution. This can model the smearing of laser light that has a coherent component and an interfering chaotic component.... In PAGE 5: ... This operation leads to the distributions presented in Table 2, in which the corresponding factorial moments are also given. Note that the factorial moments in Table 2 are identical to the direct moments of the corresponding continuous distributions in Table 1, as noted in Section 2. The exception is the noncentral negative-binomial distribution [34–37], which includes a gain factor in the Poisson distribution.... In PAGE 5: ... Some well-known density functions [27] are included in Table 2 for ease of comparison. The three K distributions in Table 1, K₀, K, and K′, transform to three discrete distributions denoted P_K0, P_K, and P_K′. By the associative property of the smearing operation, calculation of these discrete distributions may proceed in any order desired.... ..."
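The property the excerpt notes, that the factorial moments of the smeared (discrete) distribution equal the direct moments of the continuous one, can be checked by Monte Carlo. A sketch (assumed, not the paper's code) using a gamma mixing density with invented parameters:

```python
import math
import random

# "Smearing" (Poisson transform): draw a continuous intensity X from a gamma
# density, then a Poisson count N with rate X. The second factorial moment
# E[N(N-1)] should equal the second direct moment E[X^2].
def poisson_draw(lam, rng):
    """Knuth's multiplication method; adequate for the small rates used here."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k - 1

rng = random.Random(42)
shape, scale = 2.0, 1.5                 # gamma mixing density (illustrative)
n_samples = 100_000
fact_moment = 0.0
for _ in range(n_samples):
    x = rng.gammavariate(shape, scale)  # continuous intensity
    n = poisson_draw(x, rng)            # smeared (discrete) count
    fact_moment += n * (n - 1)
fact_moment /= n_samples

exact = shape * (shape + 1) * scale ** 2  # E[X^2] = 13.5 for these parameters
```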

Cited by 4

### Table 1 continued

"... In PAGE 7: ...conditions (i)-(iv) is equivalent to Y having density y⁻¹H(y) for y > 0: (i) E(Y^s) = 2ξ(s) (s ∈ ℂ); (21) (ii) for y > 0, P(Y ≤ y) = G(y) + yG′(y) = −y⁻²G′(y⁻¹), (22) that is P(Y ≤ y) = Σ_{n=−∞}^{∞} (1 − 2πn²y²) e^{−πn²y²} = 4π y⁻³ Σ_{n=1}^{∞} n² e^{−πn²/y²}; (23) (iii) with Σ₂ defined by (6), Y =ᵈ √(πΣ₂/2); (24) (iv) E[e^{−λY²}] = (√(πλ)/sinh √(πλ))². (25) 3 Two infinitely divisible families. This section presents an array of results regarding the probability laws on (0, ∞) of the random variables Σ_h and Σ̂_h defined by (6), with special emphasis on results for h = 1 and h = 2, which are summarized by Table 1. Each column of the table presents features of the law of one of the four sums Σ₁, Σ₂, Σ̂₁ or Σ̂₂.... In PAGE 7: ... Those in the Σ₂ column can be read from Proposition 1, while the formulae in the other columns provide analogous results for Σ₁, Σ̂₁ and Σ̂₂ instead of Σ₂. While the Mellin transforms of Σ₁, Σ₂ and Σ̂₂ all involve the ξ function associated with the Riemann zeta function, the Mellin transform of Σ̂₁ involves instead the Dirichlet L-function associated with the quadratic character modulo 4, that is L₄(s) := Σ_{n=0}^{∞} (−1)ⁿ/(2n+1)^s (ℜs > 0). (26) We now discuss the entries of Table 1 row by row.... In PAGE 11: ...x)]. The formulae for the densities of Σ_h and Σ̂_h displayed in Table 1 for h = 1 and h = 2 can be obtained using the reciprocal relations of Row 5. The self-reciprocal relation involving Σ₂ is a variant of (18), while that involving Σ̂₁, which was observed... In PAGE 12: ....g. Lemma 4). The formulae for the Mellin transforms E(Σ^s) can all be obtained by term-by-term integration of the densities for suitable s, followed by analytic continuation. According to the self-reciprocal relation for Σ̂₁, for all s ∈ ℂ, E[(2Σ̂₁)^s] = E[(2Σ̂₁)^{−1/2−s}]. (39) Using the formula for E((Σ̂₁)^s) in terms of L₄ defined by (26), given in Table 1, we see that if we define Λ₄(t) := E[(2Σ̂₁)^{(t−1)/2}] = Γ((t+1)/2) (4/π)^{(t+1)/2} L₄(t), (40) then (39) amounts to the functional equation Λ₄(t) = Λ₄(1 − t) (t ∈ ℂ). (41) This is an instance of the general functional equation for a Dirichlet L-function, which is recalled as (95) in Section 6. Positive integer moments E(Σⁿ).... In PAGE 13: ...Table 1 reveals the following remarkably simple relation: E(Σ₁^s) = [(2^{1−2s} − 1)/(1 − 2s)] E(Σ₂^s), (43) where the first factor on the right side is evaluated by continuity for s = 1/2. By elementary integration, this factor can be interpreted as follows: (2^{1−2s} − 1)/(1 − 2s) = E(W⁻²ˢ) (44) for a random variable W with uniform distribution on [1, 2].... In PAGE 29: ...3 Proof of Theorem 3. This is obtained by applying the preceding results in the particular case aₙ = n⁻². By application of Lemmas 4 and 5, and the formula for E(Σ₁^s) in Table 1, found in [58, (86)], the conclusions of Theorem 3 hold for −2a_{n,N} = 1/λ_{n,N} = ∏_{j≠n} j² / ∏_{j≠n} (j² − n²), (89) where both products are over j with 1 ≤ j ≤ N and j ≠ n. The product in the numerator is (N!/n)², while writing j² − n² = (j − n)(j + n) allows the product in the denominator to be simplified to (−1)ⁿ⁻¹(N + n)!(N − n)!/(2n²).... In PAGE 30: ...The case of the L₄ function. The following result can be obtained similarly, with the help of the formula for E((Σ̂₁)^s) in terms of L₄ defined by (26), given in Table 1 of Section 3. Theorem 6. Let L₄⁽ᴺ⁾(s) := Σ_{n=0}^{N−1} (−1)ⁿ [C(2N−1, N−n−1)/C(2N−1, N−1)] (2n+1)⁻ˢ. Then C(2N−1, N−n−1)/C(2N−1, N) → 1 as N → ∞; for each N, one has L₄⁽ᴺ⁾(1 − 2k) = 0 for k = 1, 2, ..., N − 1, and L₄⁽ᴺ⁾(s) → L₄(s) uniformly on every compact subset of ℂ.... ..."
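The factor relating the two Mellin transforms in (43), identified in (44) as E[W^(−2s)] for W uniform on [1, 2], is easy to verify numerically; a sketch (not from the paper), with E[W^(−2s)] computed by the midpoint rule:

```python
import math

# Check (2^(1-2s) - 1)/(1 - 2s) = E[W^(-2s)], W ~ Uniform[1, 2],
# with the s = 1/2 value taken by continuity (where it equals log 2).
def factor(s):
    """Closed form; equals log 2 at s = 1/2 by continuity."""
    if abs(1 - 2 * s) < 1e-12:
        return math.log(2.0)
    return (2.0 ** (1 - 2 * s) - 1.0) / (1.0 - 2.0 * s)

def mean_W_pow(s, steps=200_000):
    """E[W^(-2s)] as the integral of w^(-2s) over [1, 2], midpoint rule."""
    h = 1.0 / steps
    return sum((1.0 + (k + 0.5) * h) ** (-2.0 * s) for k in range(steps)) * h

for s in (0.0, 0.5, 1.0, 2.5):
    assert abs(factor(s) - mean_W_pow(s)) < 1e-6
```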

t + 1 2 4 t+12 L 4(t) (40) then (39) amounts to the functional equation 4(t) = 4(1 ? t) (t 2 C ): (41) This is an instance of the general functional equation for a Dirichlet L?function, which is recalled as (95) in Section 6. Positive integer moments E( n).... In PAGE 13: ...Table1 reveals the following remarkably simple relation: E( s 1) = 21?2s ? 1 1 ? 2s E( s 2) (43) where the rst factor on the right side is evaluated by continuity for s = 1=2. By elementary integration, this factor can be interpreted as follows: 21?2s ? 1 1 ? 2s = E(W ?2s) (44) for a random variable W with uniform distribution on [1; 2].... In PAGE 29: ...3 Proof of Theorem 3 This is obtained by applying the preceding results in the particular case an = n?2. By application of Lemmas 4 and 5, and the formula for E( s 1) in Table1 , found in [58, (86)], the conclusions of Theorem 3 hold for ?2an;N = 1 n;N = Q j6 =n(j2) Q j6 =n(j2 ? n2) (89) where both products are over j with 1 j N and j 6 = n. The product in the numerator is (N!=n)2 while writing j2 ? n2 = (j ? n)(j + n) allows the product in the denominator to be simpli ed to (?1)n?1(N + n)!(N ? n)!=(2n2).... In PAGE 30: ...The case of the L 4 function The following result can be obtained similarly, with the help of the formula for E(( # 1 )s) in terms of L 4 de ned by (26), given in Table1 of Section 3. Theorem 6 Let L(N) 4 (s) := N?1 X n=0 (?1)n 2N ? 1 N ? n ? 1 2N ? 1 N ? 1 (2n + 1)?s: Then 2N ? 1 N ? n ? 1 ! 2N ? 1 N ! ! 1 as N ! 1; for each N, one has L(N) 4 (1 ? 2k) = 0 for k = 1; 2; : : : N ? 1, and L(N) 4 (s) ! L 4(s) uniformly on every compact of C .... ..."