### Table 3 below displays the discretized data set. Positive (resp., negative) patterns are logical rules which impose upper and lower bounds on the values of a subset of the variables, such that a high proportion of the positive (resp., negative) experiments

2007

"... In PAGE 6: ...Table3 : Discretized data set. Experiment Variables Outcome j x1 x2 x3 z 1 3 2 1 1 2 2 0 3 1 3 0 1 2 1 4 3 0 2 0 5 1 1 0 0 in the discretized data set satisfy the conditions imposed by the pattern, and a high proportion of the negative (resp.... ..."

### TABLE VII. COMPARISON OF TRAINING TIME FOR DIFFERENT CLASSIFICATION ALGORITHMS. THE VALUES ARE THE TRAINING TIMES FOR THE DISCRETIZED DATA SETS, SHOWN IN SECONDS. ALGORITHMS COMPARED: DFL, C4.5, NB, 1NN, kNN, SVM, RIP.

### Table 1: f and m values for two computation clusters.

"... In PAGE 3: ... A simple solution to this problem has an e ciency1 model that is a function of the ratio mf , where m is the time to communicate with a peer processor, and f is the time to perform computations on a single discrete data point on the bar. Table1 shows f and m values for a parallel machine and a workstation cluster. Figure 1 shows a plot of e ciency versus the ratio mf , as well as reference points for the above two con gurations.... ..."

### Table 1: Typical discretized data for the e-CLV model. Recency is recorded in months with a maximum of 6. Frequency represents the aggregated number of purchases, and has a maximum of 6. Monetary value is the aggregated spending for that customer, discretized into categories $100 wide (Monetary 1 represents spending between $0 and $100, 2 represents spending between $100 and $200, and so on).
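The caption fully specifies the discretization rules, so they can be sketched directly; the function name and record layout below are illustrative, not from the paper:

```python
import math

def discretize_rfm(recency_months, n_purchases, spending_dollars):
    """Discretize one customer record as described in the Table 1 caption:
    recency in months capped at 6, frequency (purchase count) capped at 6,
    and monetary value in $100-wide categories (1 = $0-$100, 2 = $100-$200, ...).
    """
    r = min(recency_months, 6)
    f = min(n_purchases, 6)
    m = max(1, math.ceil(spending_dollars / 100))  # $250 falls in category 3
    return r, f, m

print(discretize_rfm(2, 9, 250))  # -> (2, 6, 3)
```

For example, a customer last seen 2 months ago with 9 purchases and $250 of aggregated spending maps to recency 2, frequency capped at 6, and monetary category 3 ($200-$300).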

### Table 1: A generic time series data mining approach

2007

"... In PAGE 4: ... Each can be visualized as an attempt to approximate the signal with a linear combination of basis functions While there are literally hundreds of papers on discretizing (symbolizing, tokenizing, quantizing) time series [3, 27] (see [15] for an extensive survey), none of the techniques allows a distance measure that lower bounds a distance measure defined on the original time series. For this reason, the generic time series data mining approach illustrated in Table1 is of little utility, since the approximate solution to problem created in main memory may be arbitrarily dissimilar to the true solution that would have been obtained on the original data. If, however, one had a symbolic approach that allowed lower bounding of the true distance, one could take advantage of the generic time series data mining model, and of a host of other algorithms, definitions and data structures which are only defined for discrete data, including hashing, Markov models, and suffix trees.... ..."

Cited by 1

### Table 1. Two Sets of Linear Phase, Biorthogonal Wavelet Filter Coefficients.

"... In PAGE 5: ... The fact that biorthogonal wavelets are not energy preserving does not turn out to be a big problem, since there are linear phase biorthogonal filter coefficients which are close to being orthogonal. One example of such a wavelet filter set is the 9/7 filter given in Table1 . This filter set can be plugged into the orthogonality constraints of (1) to show that they are nearly orthogonal.... ..."

### Table 2: Linear model estimation

2006

"... In PAGE 9: ...Computational experience (MIPLIB instances) 3 OUR METHOD Table2 compares the size of the measurement tree obtained by the linear model with the actual number of nodes in T. The last column shows the ratio between the two.... ..."

Cited by 2

### Table 4: Average number of Newton iterations for solving the linearized model in non-linear Fair-estimation.

in Solution of Linear Programming and Non-Linear Regression Problems Using Linear M-Estimation Methods

1999

"... In PAGE 109: ...Table4 : Results for the updating routine of the software package when used as a tool for nding L from scratch. Times are given in seconds.... ..."

### Table 3 Comparison of the 2nd order filters in the Volterra and the MMD structure for loudspeaker identification and linearization

"... In PAGE 18: ... In this case, longer adaption time is not a serious problem, because identification is done only once without any time constraints. Table3 summarizes the memory length and the required filter operations for the nonlinear filter part for both realizations. Even though the memory length of the MMD filter is twice as long, the number of filter operations in the identification phase is similar to that of the general Volterra case.... ..."

### Table 1: MC filters: linear Gaussian model

2000

"... In PAGE 25: ...79. For the different MC filters, the results are presented in Table1 and Table 2. With N = 500 trajectories, the estimates obtained using MC methods are similar to those obtained by Kalman.... ..."

Cited by 293