### Table 4: Recognition rates for three smoothness constraint weights of the FFD registration.

2005

Cited by 7

### Table 1: AAE improvement when using our smoothness constraint. For d = 0, our cost function degrades to standard SAD. Other parameters are B = 16, sw = 16, 1/32 pixel MV resolution, linear interpolation. Computation times are given for the whole sequence.

### Table 1. Recognition rates for three atlas-based segmentations and their combinations using label voting (COI) and deformation averaging (COD) in seven subjects. The different individual segmentations were produced using different smoothness constraint weights of the nonrigid registration between subject image and atlas

"... In PAGE 3: ... There is, however, no substantial difference in terms of computational performance, since the rate-limiting step in atlas-based segmentation remains the nonrigid registration. Table 1 shows the recognition rates (i.e.... In PAGE 4: ... For details of the segmentation and evaluation methods, we refer the interested reader to [10]. The results in Table 1 show that both information fusion approaches achieved improvements over the individual segmentations. However, while label voting (COI) achieved recognition rates better than the best individual segmentation, averaging of the deformation fields (COD) produced recognition rates slightly better than the mean recognition rate of the individual segmentations.... ..."
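The COI label voting described in this snippet combines several atlas-based segmentations by a per-voxel majority vote. A minimal sketch of that idea, using hypothetical 1-D label arrays (the paper operates on full 3-D label volumes; ties here are broken by first occurrence):

```python
from collections import Counter

def label_vote(segmentations):
    """Combine several label maps by per-voxel majority vote (COI-style).

    segmentations: list of equal-length sequences of integer labels,
    one per individual atlas-based segmentation.
    Returns the fused label sequence.
    """
    fused = []
    for voxel_labels in zip(*segmentations):
        # most_common(1) returns the label with the highest count;
        # ties are broken by first occurrence in the voxel's label tuple.
        fused.append(Counter(voxel_labels).most_common(1)[0][0])
    return fused

# Three segmentations of the same five "voxels" (hypothetical data):
seg_a = [1, 1, 2, 0, 3]
seg_b = [1, 2, 2, 0, 3]
seg_c = [1, 1, 2, 1, 0]
print(label_vote([seg_a, seg_b, seg_c]))  # [1, 1, 2, 0, 3]
```

Voxels where a single segmentation disagrees (e.g. the fourth and fifth above) are outvoted, which is why voting can beat the best individual segmentation.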

### Table 1: Parameters in online smoothing model: This table summarizes the key parameters in the smoothing model.

"... In PAGE 7: ... Intuitively, after any invocation to the smoothing algorithm, the server slides over time units before it invokes the online smoothing algorithm with a smoothing window starting time units past the beginning of the smoothing window for the first invocation. Table 1 summarizes the key parameters, which guide our derivation of the smoothing constraints, as well as our performance evaluation in Section 4. 3.... In PAGE 9: ... Similarly, modest values of w, BC, BS, and P allow online smoothing to achieve most of the performance gains of the optimal offline algorithm. 4 Performance Evaluation The performance evaluation in this section studies the interaction between the parameters in Table 1 to help in determining how to provision server, client, and network resources for online smoothing. The study focuses on bandwidth requirements, in terms of the peak rate, coefficient of variation (standard deviation normalized by the mean rate), and the effective bandwidth of the smoothed video stream.... ..."
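The window size w and buffer parameters in the snippet come from the paper's model; the sketch below only illustrates the core workahead-smoothing idea under simplifying assumptions (no client buffer limit, one frame consumed per time slot): within each window, the minimal constant transmission rate is the largest prefix average of the frame sizes, and smoothing is re-invoked window by window.

```python
def min_constant_rate(frame_bits):
    """Smallest constant rate (bits per slot) that delivers frame k
    by its deadline k, for every prefix of the window."""
    total, peak = 0, 0.0
    for k, bits in enumerate(frame_bits, start=1):
        total += bits
        peak = max(peak, total / k)  # prefix average = rate needed so far
    return peak

def windowed_peaks(frame_bits, w):
    """Invoke the smoothing step on consecutive windows of w frames
    (a crude stand-in for the paper's sliding-window invocation)."""
    return [min_constant_rate(frame_bits[i:i + w])
            for i in range(0, len(frame_bits), w)]

# A bursty hypothetical trace: smoothing replaces the raw peak with the
# worst prefix average inside any window.
trace = [1, 9, 2, 8, 1, 9, 2, 8]
print(max(trace))                      # unsmoothed peak: 9
print(max(windowed_peaks(trace, 4)))   # smoothed peak: 5.0
```

Larger w lets the server average over more frames and approaches the optimal offline schedule, at the cost of more workahead buffering, which matches the snippet's observation about modest values of w already capturing most of the gain.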

Cited by 25

### Table 1: Assumptions in computer vision: Natural constraints (N), physical constraints (P) and synthetic constraints (S).

1997

"... In PAGE 4: ...Table 1: Assumptions in computer vision: Natural constraints (N), physical constraints (P) and synthetic constraints (S). surface smoothness) which are used to convert the ill-posed problem of vision to a well-posed problem (see Table 1 (7)). Even well-posed problems may turn out to be computationally intractable because of the iterative algorithms used today to solve them (e.... ..."

Cited by 3

### Table 1. Bias for Three Smoothing Strategies (Whole Brain). Columns: GCV-Spline, SPM-HRF, No Smoothing

2003

"... In PAGE 18: ...s the SPM-HRF was a computational constraint (Friston et al., 2000). The results from the comparison of GCV-spline smoothing with the SPM-HRF and no smoothing of the simulated data show that optimal spline smoothing of each time series is, on average, significantly less biased than smoothing all time series with an identical SPM-HRF kernel or ignoring residual autocorrelations. The mean bias reported in Table 1 for the SPM-HRF is deceptive in the context of fMRI studies since the majority of voxels with negative bias are located in regions other than grey matter. These negative bias voxels shift the mean bias closer to zero.... ..."

Cited by 1

### Table 7 Number of regions for segmentation of color Guitar

2005

"... In PAGE 16: ... and filing cabinet have not been well detected and are mixed irreversibly with the background. The numbers of regions are shown in Table 7. Finally, the proposed scalable multiresolution segmentation with the smoothness constraint at three different resolutions is shown in Figs.... ..."

### Table 4. Overall score for BaySpell using different smoothing methods. The last method, interpolative smoothing, is the one presented here. Training was on 80% of Brown and testing on the other 20%. When using MLE likelihoods, we broke ties by choosing the word with the largest prior (ties arose when all words had probability 0.0). For Katz smoothing, we used absolute discounting (Ney et al., 1994), as Good-Turing discounting resulted in invalid discounts for our task. For Kneser-Ney smoothing, we used absolute discounting and the backoff distribution based on the "marginal constraint". For interpolation with a fixed λ, Katz, and Kneser-Ney, we set the necessary parameters separately for each word Wi using deleted estimation. Columns: Smoothing method, Reference, Overall

1999

"... In PAGE 18: ... However, we investigated this briefly by comparing the performance of BaySpell with interpolative smoothing to its performance with MLE likelihoods (the naive method), as well as a number of alternative smoothing methods. Table 4 gives the overall scores. While the overall score for BaySpell with interpolative smoothing was 93.... ..."
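The caption above contrasts MLE likelihoods with several smoothed estimators. A minimal Jelinek-Mercer-style sketch with a fixed mixing weight and toy counts (not the paper's interpolative method, which sets its weights per word by deleted estimation; `lam` and all counts here are hypothetical):

```python
def mle(count, context_total):
    """Maximum-likelihood estimate; zero when the context is unseen."""
    return count / context_total if context_total else 0.0

def interpolate(bigram_count, context_total,
                unigram_count, corpus_total, lam=0.7):
    """Fixed-weight interpolation of a bigram MLE with a unigram backoff.

    lam is a hypothetical fixed mixing weight; the paper instead tunes
    the weights separately for each word via deleted estimation.
    """
    return lam * mle(bigram_count, context_total) + \
           (1 - lam) * mle(unigram_count, corpus_total)

# An unseen bigram gets mass from the unigram distribution instead of 0,
# which is exactly why MLE likelihoods need tie-breaking in the table above:
p_unseen = interpolate(bigram_count=0, context_total=10,
                       unigram_count=50, corpus_total=1000)
print(p_unseen)  # (1 - lam) * 50/1000, about 0.015
```

With pure MLE, every candidate word whose context was unseen scores 0.0 and ties must be broken by the prior; interpolation removes those zeros.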

Cited by 57


### Table 2. The uniform controlled-scale extensions of Tikhonov stabilizers

"... Solutions of the differential equation (5) typically use finite differences or finite elements methods with iterative schemes such as Gauss-Seidel relaxation. Controlled-scale stabilizers involve inverting a banded positive definite matrix whose bandwidth depends on the scale parameter r(u). The computational complexity for solving those systems is the same as for regular Tikhonov stabilizers, but the rate of convergence is significantly increased since constraints propagate faster along the curve. For sparse data approximation, smoothness should not be evaluated over the discontinuity entailed by each data constraint. For appropriate approximation over data points Pi, the scale parameters ri should be picked such that smoothing does not occur across discontinuities (see Figure (1)). ..."

1994

Cited by 10
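The Table 2 entry above mentions Gauss-Seidel relaxation on the banded systems arising from Tikhonov stabilizers. A minimal 1-D sketch (hypothetical data and a single uniform weight `lam`, not the paper's controlled-scale r(u)) that minimizes sum_i (x_i - d_i)^2 + lam * sum_i (x_{i+1} - x_i)^2 by sweeping the pointwise optimality conditions of the tridiagonal normal equations:

```python
def tikhonov_smooth(d, lam=1.0, sweeps=200):
    """Gauss-Seidel relaxation for the discrete 1-D Tikhonov problem
    min_x sum (x_i - d_i)^2 + lam * sum (x_{i+1} - x_i)^2.

    The normal equations are tridiagonal (banded, positive definite),
    so each sweep updates x_i from its current neighbours:
    x_i = (d_i + lam * (x_{i-1} + x_{i+1})) / (1 + 2*lam) in the interior.
    """
    x = list(d)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            num, den = d[i], 1.0
            if i > 0:                 # left neighbour term
                num += lam * x[i - 1]
                den += lam
            if i < n - 1:             # right neighbour term
                num += lam * x[i + 1]
                den += lam
            x[i] = num / den
    return x

# A step-like hypothetical signal: the smoother pulls each value toward
# its neighbours while staying close to the data.
data = [0.0, 0.0, 1.0, 1.0]
print(tikhonov_smooth(data, lam=1.0))  # about [1/7, 2/7, 5/7, 6/7]
```

The snippet's point about discontinuities shows up here too: a uniform `lam` blurs the step, whereas the paper's per-point scales ri would let smoothing stop at known data discontinuities.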