### Table 6.1: Empirical translation-invariant characteristics of particles.

### Table 1. Results using the 139 protein unfolding trajectories (translation invariant with standard Apriori / not translation invariant with standard Apriori)

"... In PAGE 10: ... successful at detecting translated subtrajectories. Table 1 presents some quantitative results with and without the translation invariant mining algorithm. It shows the running time in seconds (T), the length of the longest frequent trajectory discovered (L) and the total number of frequent trajectories discovered (F). ..."

Table values spilled into the snippet, reconstructed below. Columns are support levels of 1%, 2% and 3% of the trajectories; each cell gives two values, apparently with / without the translation invariant algorithm:

| D | Metric | 1% support | 2% support | 3% support |
|------|-------|------------|------------|------------|
| 1000 | T (s) | 33351/13 | 12843/11 | 6565/11 |
| 1000 | L | 43/21 | 32/21 | 25/21 |
| 1000 | F | 35117/338 | 11267/312 | 3205/288 |
| 2000 | T (s) | 53071/24 | 24984/22 | 12026/18 |
| 2000 | L | 33/21 | 30/21 | 12/19 |
| 2000 | F | 27007/338 | 10705/312 | 2515/200 |
| 4000 | T (s) | 110824/44 | 47301/41 | 23934/35 |
| 4000 | L | 32/21 | 30/21 | 11/19 |
| 4000 | F | 20128/312 | 9613/288 | 1832/200 |

### Table 2. Results using the translation invariant/rotation invariant mining on a synthetic dataset

"... In PAGE 10: ... We first applied our mining method to mine these derivative cell representations. See Table 2 for the results using the translation invariant algorithm. The algorithm accurately detected the frequent translated trajectories. ..."
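The "derivative cell representation" idea behind translation invariant trajectory mining can be illustrated with a minimal sketch: representing a trajectory by its successive displacements (a discrete derivative) makes translated copies identical, so any frequent-pattern miner run on that representation becomes translation invariant. The function name and example trajectories below are illustrative, not taken from the paper.

```python
import numpy as np

def derivative_representation(trajectory):
    """Return the successive displacements (discrete derivative) of a
    2-D trajectory.  Two trajectories that differ only by a constant
    translation map to the same derivative sequence, so mining frequent
    patterns on this representation is translation invariant."""
    traj = np.asarray(trajectory, dtype=float)
    return np.diff(traj, axis=0)

# A trajectory and a translated copy share the same derivative cells.
base = [(0, 0), (1, 2), (3, 3), (4, 5)]
shifted = [(10, -7), (11, -5), (13, -4), (14, -2)]
assert np.array_equal(derivative_representation(base),
                      derivative_representation(shifted))
```

In practice the continuous displacements would be quantized into a finite alphabet of cells before mining, which the sketch omits.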

### Table 3. Generalization performance of the locally translation invariant logistic linear classifier

"... In PAGE 7: ... the classification. Instead, only gross dimensions of the objects appear to play a significant role in classification. In order to compensate for variations in position, we have developed a locally translation invariant logistic linear classifier that classifies a series of horizontal translations of a given image chip and selects the class label corresponding to the translation which yields the largest differential above the rejection threshold. Table 3 shows the generalization performance of this classifier. Although the probability of correct classification has fallen slightly, due primarily to more errors on examples of people, we have obtained a large increase in false alarm rejection. ..."
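The selection rule described in that snippet — score every horizontal translation of the chip and keep the label from the shift with the largest differential above the rejection threshold — can be sketched as follows. The weight matrix `W`, bias `b`, `threshold` and `max_shift` are illustrative parameters, and circular shifting via `np.roll` stands in for whatever padding the authors actually used.

```python
import numpy as np

def classify_translation_invariant(chip, W, b, threshold, max_shift=4):
    """Locally translation-invariant logistic linear classifier (sketch):
    evaluate every horizontal shift of the image chip, keep the label of
    the shift whose top class probability exceeds the rejection
    threshold by the largest margin, and reject if no shift clears it."""
    best_label, best_margin = None, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        x = np.roll(chip, shift, axis=1).ravel()   # horizontally shifted chip
        scores = W @ x + b                          # linear scores per class
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                        # softmax (logistic model)
        margin = probs.max() - threshold            # differential above rejection
        if margin > best_margin:
            best_margin, best_label = margin, int(probs.argmax())
    return best_label if best_margin > 0 else None  # None means "reject"

rng = np.random.default_rng(0)
chip = rng.standard_normal((8, 8))
W = rng.standard_normal((2, 64))
label = classify_translation_invariant(chip, W, np.zeros(2), threshold=0.5)
```

The margin-over-threshold rule is what trades a slight drop in correct-classification probability for the large gain in false-alarm rejection the snippet reports.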

### Table 1 Average over 100 replications of summed squared errors over 1024 points for various models and methods. All the wavelet-based estimators use the translation-invariant wavelet transform. The standard error of each of the entries is at most 2% of the value reported

2005

"... In PAGE 18: ... More detailed investigation of this issue would be an interesting topic for further research. Because the same noise values are used for each model, there is correlation between the various values in Table 1. Comparisons of methods with the Laplace (median) method on a paired-sample basis are given in Table 2. ... In PAGE 19: ... models and methods. In each case a standard wavelet transform was used. The two nonwavelet methods are not included, because they give the same results as in Table 1. For comparison, the results for the Laplace prior using the translation-invariant transform are repeated from Table 1, in italics. ..."

Table values spilled into the snippet, reconstructed below. The first four columns are the high-noise results and the last four the low-noise results, for the test signals bmp, blk, dop and hea:

| Method | bmp | blk | dop | hea | bmp | blk | dop | hea |
|--------|-----|-----|-----|-----|-----|-----|-----|-----|
| Laplace (median) translation-invariant | 171 | 176 | 93 | 41 | 212 | 164 | 109 | 57 |
| Laplace (median) | 278 | 245 | 147 | 53 | 338 | 311 | 204 | 76 |
| Quasi-Cauchy (median) | 277 | 252 | 150 | 54 | 324 | 301 | 200 | 73 |
| Gaussian (median) | 328 | 252 | 158 | 56 | 400 | 361 | 241 | 87 |
| Laplace (mean) | 257 | 228 | 140 | 57 | 304 | 278 | 190 | 79 |
| NeighBlock | 462 | 406 | 148 | 67 | 436 | 485 | 207 | 125 |
| NeighCoeff | 324 | 320 | 145 | 60 | 316 | 345 | 207 | 91 |
| QL | 359 | 310 | 175 | 58 | 411 | 366 | 243 | 82 |
| SURE (4 levels) | 317 | 248 | 183 | 97 | 393 | 331 | 247 | 117 |
| SURE (6 levels) | 312 | 247 | 167 | 69 | 399 | 339 | 235 | 94 |
| Univ soft (6 levels) | 937 | 484 | 277 | 76 | 1444 | 931 | 534 | 121 |
| FDR (q = 0.01) | 331 | 307 | 169 | 60 | 387 | 382 | 231 | 83 |

A second FDR row is truncated in the snippet ("FDR (q = 0. ...").

Cited by 29
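The translation-invariant wavelet transform used by these estimators is usually implemented by cycle spinning: shift the signal, denoise, unshift, and average over all shifts. The following is a minimal one-level Haar soft-thresholding sketch of that idea, not the paper's Laplace-prior posterior-median estimator; the function names and the threshold `lam` are illustrative.

```python
import numpy as np

def haar_denoise(x, lam):
    """One-level Haar soft-threshold denoiser (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)   # soft threshold
    y = np.empty(len(x), dtype=float)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar step
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin_denoise(x, lam):
    """Translation-invariant estimate: average the shift-denoise-unshift
    results over every circular shift of the signal (cycle spinning)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    acc = np.zeros(n)
    for s in range(n):
        acc += np.roll(haar_denoise(np.roll(x, s), lam), -s)
    return acc / n
```

Averaging over shifts removes the dependence of the estimate on where the signal sits relative to the dyadic wavelet grid, which is why the translation-invariant rows in the tables above dominate their fixed-basis counterparts.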


### Table 4: Mean square errors of the translation-invariant marginal maximum likelihood procedure, for each of the test functions of Donoho & Johnstone (1994) sampled at 1024 points, with various values of root signal to noise. The simulations are based on the same 100 replications as in Table 3, with standard errors given in brackets.

1998

"... In PAGE 10: ... A simulation study was carried out using exactly the same random realizations as for the simulation above. The results are shown in Table 4. The improvements in mean integrated square error over the fixed basis MML method are substantial, typically around 40%. ..."

Cited by 14