### Table 2: Hyperkernels by Power Series Construction.

"... In PAGE 12: ... It is straightforward to find other hyperkernels of this sort, simply by consulting tables on power series of functions. Table 2 contains a short list of suitable expansions.... ..."

### Table 13: The Laplace expansion for the function f = x (columns: the coefficient and the Laplace expansion)

1994

"... In PAGE 59: ... If the given boundary function f has the Laplace series expansion $f = \sum_{n=0}^{\infty} a_n S_n$, then the solution of equation (3.2), $\frac{\partial u}{\partial n}$, will have the expansion $\frac{\partial u}{\partial n} = \sum_{n=0}^{\infty} b_n S_n$, where $b_n = -n a_n$. The following Table 13 gives the Laplace expansion for the given boundary function f = x, and Table 14 is the Laplace expansion for the solution.... ..."

Cited by 1
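The mapping described in the snippet above can be sketched in a few lines. Reading the snippet's relation as $b_n = -n a_n$, the coefficients of the normal derivative follow directly from those of the boundary function; the coefficient values below are illustrative, not the entries of Table 13.

```python
# Sketch: mapping the Laplace-series coefficients a_n of a boundary
# function f = sum_n a_n S_n to the coefficients b_n of its normal
# derivative du/dn, using the relation b_n = -n * a_n from the snippet.
# The a_n values here are hypothetical examples.

def normal_derivative_coeffs(a):
    """Given coefficients a_n of f = sum_n a_n S_n, return b_n = -n * a_n."""
    return [-n * a_n for n, a_n in enumerate(a)]

a = [0.5, 0.25, 0.125, 0.0625]      # hypothetical a_n
b = normal_derivative_coeffs(a)
print(b)                            # [0.0, -0.25, -0.25, -0.1875]
```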

### Table 7. WCET analysis of data sets using an exponential kernel

"... In PAGE 2: ... The reason is that it allows the design and constraints of other parameters such as battery lifetime, real-time schedulability, or HW/SW codesign issues to prescribe the requirements of the sparse kernel learning algorithm. We will see in Table 7 that even a 90% reduction in support vectors generates modest increases of the... In PAGE 8: ... Table 7 shows the WCET analysis applied to the sparse kernel learning algorithm (SKLA) results for a sparse large margin classifier (SLMC). We display the number of training data m, the dimensionality of the data n, the number of support (expansion) vectors $m_{NSV}$ for a traditional SVM, the difference in error from using the traditional SVM versus a SKLA SVM with 10% expansion vectors, and finally we show the cycle count for an SVM decision function, i.e.... ..."
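The snippet's central point, that the cost of evaluating an SVM decision function scales with the number of expansion vectors, can be illustrated with a simple operation count. This is a rough multiply-add model standing in for the paper's WCET cycle counts; the sizes and the per-dimension kernel cost are assumptions, not figures from Table 7.

```python
# Rough operation-count model for an SVM decision function
# f(x) = sum_i alpha_i * K(x_i, x) + b with m_sv expansion vectors of
# dimension n, assuming one kernel evaluation costs about
# kernel_cost_per_dim * n multiply-adds. A stand-in for WCET cycles.

def decision_cost(m_sv, n, kernel_cost_per_dim=2):
    """Approximate multiply-add count for one decision-function evaluation."""
    per_vector = kernel_cost_per_dim * n + 2   # kernel eval + weight + accumulate
    return m_sv * per_vector

full = decision_cost(m_sv=1000, n=20)       # traditional SVM
sparse = decision_cost(m_sv=100, n=20)      # 90% fewer expansion vectors
print(full, sparse, full / sparse)          # 42000 4200 10.0
```

The linear drop is why a 90% reduction in expansion vectors translates almost directly into a 10x cheaper decision function.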

### Table 2: Hyperkernels by Power Series Construction.

2005

"... In PAGE 14: ... It is straightforward to find other hyperkernels of this sort, simply by consulting tables on power series of functions. Table 2 contains a short list of suitable expansions. However, if we want the kernel to adapt automatically to different widths for each dimension, we need to perform the summation that led to (12) for each dimension in its arguments separately.... ..."

Cited by 27
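The power-series construction the snippet refers to builds a hyperkernel from a base kernel $k$ as $K((x,x'),(y,y')) = \sum_i c_i\,(k(x,x')\,k(y,y'))^i$ with nonnegative coefficients $c_i$. A minimal sketch, assuming a Gaussian base kernel and the exponential-series coefficients $c_i = 1/i!$ as illustrative choices (not necessarily the entries of Table 2):

```python
import math

# Sketch of the power-series hyperkernel construction: given a base
# kernel k, define K((x, xp), (y, yp)) = sum_i c_i * (k(x, xp) * k(y, yp))**i,
# valid when c_i >= 0 and the series converges. Here the base kernel is
# Gaussian and c_i = 1/i!, both illustrative assumptions.

def k_gauss(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def hyperkernel(x, xp, y, yp, n_terms=30):
    t = k_gauss(x, xp) * k_gauss(y, yp)
    return sum(t ** i / math.factorial(i) for i in range(n_terms))

# With c_i = 1/i! the series sums to exp(k(x,xp) * k(y,yp)).
val = hyperkernel(0.0, 0.1, 0.2, 0.3)
print(val)  # close to math.exp(k_gauss(0.0, 0.1) * k_gauss(0.2, 0.3))
```

This is exactly the sense in which consulting a table of power series yields new hyperkernels: any expansion with nonnegative coefficients and a suitable radius of convergence gives a closed form for the sum.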

### Table 4: Estimation results for d using 50 independent copies of FARIMA(0, d, 0) time series of length 10,000. The innovations are exponential and lognormal.

1998

"... In PAGE 20: ... The next best estimator is the Local Whittle. Using exponential and lognormal innovations instead of Gaussian ones does not affect most of the estimators (see Table 4 and the boxplots in Figure 8). When p and/or q are 1, the notation FARIMA(1, d, 1) is used in the tables in order to... ..."

Cited by 39
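A FARIMA(0, d, 0) series with non-Gaussian innovations of the kind used in this experiment can be simulated from its truncated MA(∞) representation. A minimal sketch, assuming the standard fractional-integration coefficient recursion and centered exponential innovations; the truncation lag and d = 0.3 are illustrative choices, not the paper's setup.

```python
import random

# Sketch: simulating FARIMA(0, d, 0) via the truncated MA(inf) form
# X_t = sum_k psi_k * eps_{t-k}, with psi_0 = 1 and the standard
# recursion psi_k = psi_{k-1} * (k - 1 + d) / k (so psi_1 = d,
# psi_2 = d(1+d)/2, ...). Exponential innovations are shifted to mean
# zero, as in the snippet's non-Gaussian experiments.

def farima_coeffs(d, n):
    psi = [1.0]
    for k in range(1, n):
        psi.append(psi[-1] * (k - 1 + d) / k)
    return psi

def simulate_farima(d, length, trunc=500, seed=0):
    rng = random.Random(seed)
    eps = [rng.expovariate(1.0) - 1.0 for _ in range(length + trunc)]  # mean 0
    psi = farima_coeffs(d, trunc)
    return [sum(p * eps[t + trunc - k] for k, p in enumerate(psi))
            for t in range(length)]

x = simulate_farima(d=0.3, length=100)
```

Estimators of d (such as the Local Whittle mentioned in the snippet) would then be run on series like `x`; the point of Table 4 is that swapping Gaussian innovations for exponential or lognormal ones leaves most of them largely unaffected.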

### Table 2: A comparison of the approximate exponential-series ccdf $G_a^c(t)$ in (7.32) with the exact ccdf and the asymptote in (7.31) for Example 7.3.

1998

"... In PAGE 33: ... Moreover, if the total probability mass is less than 1, we can make the approximating ccdf proper by adding an atom at 0. Numerical results for the case = 0.6 are displayed in Table 2. The exact values are taken from Table 3 of Guillemin and Pinchon (1998).... In PAGE 33: ... The exact values are taken from Table 3 of Guillemin and Pinchon (1998). From Table 2, we see that the complicated exact ccdf can be reasonably approximated by the relatively simple exponential-series ccdf... ..."

Cited by 2
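The approximation scheme in the snippet, a ccdf represented as a finite mixture of exponential terms, made proper by an atom at 0 when the total mass falls short of 1, can be sketched directly. The weights and rates below are illustrative assumptions, not the terms of equation (7.32).

```python
import math

# Sketch of an exponential-series ccdf: G_c(t) ~= sum_i w_i * exp(-lam_i * t).
# If sum_i w_i < 1, the distribution is made proper by placing an atom of
# size 1 - sum_i w_i at 0, which lifts the ccdf only at t = 0.
# The weights and rates here are hypothetical examples.

def exp_series_ccdf(t, weights, rates):
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))

weights, rates = [0.5, 0.25], [1.0, 5.0]
atom_at_zero = 1.0 - sum(weights)          # 0.25, completes the distribution
print(exp_series_ccdf(0.0, weights, rates) + atom_at_zero)  # 1.0
```

The appeal noted in the snippet is exactly this trade: a handful of exponential terms is far simpler to evaluate than the complicated exact ccdf, at modest cost in accuracy.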

### Table 1: The expansion coefficients $b_k^{g,(0)}$

"... In PAGE 3: ... [13]. Here we list for brevity only the numerical values of the first 15 coefficients $b_k^{g,(0)}$ and $b_k^{g,(1)}$ for the Lx and NLx series for $N_f = 4$, see Table 1. Note that the new terms $b_{0,1}^{g,(1)}$ agree with the corresponding results from fixed-order perturbation theory already taking into account the 'irreducible' part of $\gamma_{gg}^{(1)}$ only.... ..."