### Table 7: Results of adding an orthonormalization to the POD basis of 3 modes

### TABLE III PSEUDO-CODE FOR AN EXEMPLARY SIMILARITY SEARCH ALGORITHM BASED ON THE GRAM-SCHMIDT ORTHONORMALIZATION.
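The Gram-Schmidt orthonormalization that such an algorithm builds on can be sketched as follows (a minimal illustration, not the paper's pseudo-code; the modified variant is used here for numerical stability, and the function name `gram_schmidt` is my own):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a sequence of vectors (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w -= np.dot(q, w) * q      # subtract the component along q
        norm = np.linalg.norm(w)
        if norm > 1e-12:               # skip (near-)linearly-dependent vectors
            basis.append(w / norm)
    return np.array(basis)             # rows form an orthonormal set
```

In a similarity-search setting, the resulting orthonormal basis lets inner products against the query be computed independently per basis vector.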

### Table II: Coding of the control signals of the generalized PS section for each transform.

The FCT will be computed in $2\log_2 N$ recirculation stages. Table II shows the coding of the control signals appearing in Figure 10.

VII. FAST HAAR TRANSFORM: FHrT

The Haar functions constitute a complete set of orthonormal and rectangular bases [57]. They are defined on the closed interval [0, 1], but can be extended periodically outside this interval. These functions are usually grouped into ordered subsets called degrees or families:

$$\mathrm{HAAR}(2^p + s,\, m) = \begin{cases} 2^{p/2}, & s/2^p \le m < (s + 1/2)/2^p, \\ -2^{p/2}, & (s + 1/2)/2^p \le m < (s + 1)/2^p, \\ 0, & \text{otherwise.} \end{cases}$$
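The recirculation idea behind a fast Haar transform can be sketched as an averaging/differencing recursion (a minimal illustration assuming a power-of-two signal length and orthonormal scaling; the function name `fht` is my own, not from the cited paper):

```python
import numpy as np

def fht(x):
    """Fast Haar transform of a length-2^k signal (Mallat ordering)."""
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        evens = out[:n:2].copy()
        odds = out[1:n:2].copy()
        out[:half] = (evens + odds) / np.sqrt(2)   # scaling coefficients
        out[half:n] = (evens - odds) / np.sqrt(2)  # Haar detail coefficients
        n = half                                    # recirculate on the front half
    return out
```

Each pass halves the working length, giving the $\log_2 N$-stage structure the text describes; the $\sqrt{2}$ scaling keeps the transform orthonormal, so signal energy is preserved.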

### TABLE I The four orthonormal µ-rotations

### Table 16: Eigenvectors after TRANSBAK1 (orthonormalized)

"... In PAGE 28: ... After the backtransformation, the eigenvectors of (7) are overwritten by those of (5). Table 16 demonstrates the result for the matrices treated in our example.... ..."

### Table 1: Simulation results for DWPT-based approximate MLEs $\hat{\delta}$ and $\hat{f}_G$ using the MB(8), LA(16) and MB(16) wavelet filters. An initial parameter estimate was obtained by least-squares estimation, where $f_G$ was chosen to be the Fourier frequency with the largest contribution to the periodogram. The portmanteau test ($\alpha = 0.05$) was applied to the squared wavelet coefficients in order to select the orthonormal basis $B_T$.

"... In PAGE 13: ...Simulations To assess the performance of this approximate ML methodology, we simulate the four time series in Figure 1, using numeric integration to compute their autocovariance sequences. Table 1 summarizes the results of this simulation study for 500 iterations. The average MLEs $\hat{\delta}$ and $\hat{f}_G$ are given along with their empirical bias, standard deviation and empirical mean squared error (MSE).... ..."

### Table 1: Results of recognition. The model is represented by a set of control points of a B-spline surface, and consists of a mean shape and an orthonormal basis defining the shape space. With a global constraint, the model can provide a sensible solution space which we believe supports robust model-based applications. Applications utilising this model for 3-D shape recovery, tracking and recognition have been demonstrated. Experimental results have shown that the method gives very encouraging results.

1995

Cited by 3

### Table 11.1 Subdivision orthonormal mask $g_n$

1992

Cited by 62

### Table 8. Performance comparison of N-way analysis techniques for a time window of 240 seconds and a 10×100×60 tensor. Tucker1, Tucker3 and PARAFAC are compared on explained variation, the number of parameters used in each model, and the success ratio in capturing the structure. The comparison is presented for two different noise levels, NR=0 and NR=3.

"... In PAGE 11: ... Besides, while the Tucker3 model enables us to decompose a tensor into orthogonal component matrices X, Y, and Z and estimate orthonormal bases, in PARAFAC we can only do that if the tensor is diagonalizable. In Table 8 we present the results of the performance comparison for multiway techniques. Note that even if we extract the same number of components in each mode, the Tucker3 model is more robust.... In PAGE 11: ... When data are noisy, we observe performance degradation in terms of interpretability in Tucker1, while Tucker3 can still capture the structure successfully if the right number of components is determined for each mode. Table 8 demonstrates the importance of the right component numbers in the success ratio of data interpretation. Similarly, it gives an example of a case where selecting component numbers taking only the fit of the model into account does not necessarily imply better interpretation of the data.... ..."
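As a rough illustration of the Tucker1 idea the excerpt compares against (factoring only one mode of the tensor), a mode-1 unfolding followed by a truncated SVD yields an orthonormal factor matrix and an explained-variation figure. This is a generic sketch, not the paper's implementation; the function name `tucker1` is my own:

```python
import numpy as np

def tucker1(tensor, rank):
    """Tucker1 decomposition: factor only mode 1 via SVD of the mode-1 unfolding."""
    I, J, K = tensor.shape
    unfolded = tensor.reshape(I, J * K)               # mode-1 unfolding (I x JK)
    U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
    A = U[:, :rank]                                   # orthonormal factor matrix
    core = A.T @ unfolded                             # mode-1-compressed core
    explained = (s[:rank] ** 2).sum() / (s ** 2).sum()  # fraction of variation kept
    return A, core.reshape(rank, J, K), explained
```

Because only one mode is compressed, Tucker1 uses more core parameters than Tucker3 at the same fit, which is consistent with the robustness comparison reported in Table 8.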
