Results 1 - 10 of 12,532
Table 1. Results of the evaluation of image adaptation based on manual attention modeling.
2003
Cited by 4
Table 8: Results for dynamically adjusted weights, where the weights are updated every hundred images to adapt to changing conditions.
Table 3.1. Comparison of operations on grid-based maps and intensity images. Adapted from [26].
Table 2: Maximum difference (max), average difference (e) and variance (s) of an adaptive and a binary tree using representatives with respect to the exact error table (maximum value found in the error table being e = 4.74 × 10^-4). Images for the adaptive tree are shown in Figure 8.
"... In PAGE 5: ...Figure 4: Edge-based error calculation by following tree edges only three different approaches to evaluate an error term within the tree: edge-based approximation: traversing only the edges found in the underlying tree structure intra-level approximation: traversing the edges found in the underlying tree structure plus edges between nodes on the same level inter-level approximation: traversing the edges found in the underlying tree structure plus edges between nodes on the same level and edges between nodes on consecutive levels We were especially interested in the relationship between evaluation mode and approximation error. (See Table2 and Figure 8). 3.... In PAGE 6: ...8% when compared to the exact look-up matrix (37673 en- tries). The maximum error between the look-up matrix and the adaptive tree is 1:01 10 4, with a maximum value of 4:74 10 4 in the look-up matrix itself ( Table2 ). Thus, the number of entries for the trees is much smaller than the num- ber of entries for a matrix.... In PAGE 6: ... Inter-level and intra-level evaluation generate nearly identical results, with inter-level results being slightly better. Table2 shows a c... ..."
Table 3. The percentages of correct classification for the colorful images, for each cluster and for the whole dataset, using the binary measure and the weighted measure. Columns: cluster, binary measure, weighted measure.
"... In PAGE 8: ... The average consensus between human and automatic clustering using only color information was 45%, using only texture information it was 46%, and using both color and texture information it was 47%. In Table3 , the results from the binary and weighted measures of agreement, between human and automatic clustering are given. It is possible that no images are assigned to a particular human cluster because we adapted the same approach for the calculation of the consensus as described in Section 4: non-unique mapping of the clusters.... In PAGE 8: ... It is possible that no images are assigned to a particular human cluster because we adapted the same approach for the calculation of the consensus as described in Section 4: non-unique mapping of the clusters. The percentages marked with a * in Table3 are the result of the fact that no images were assigned to the particular cluster by the speci c automatic clustering. For the binary measure, there are two clusters on which one of the feature vectors had a percentage of more than 50%.... ..."
Table 1: Control interfaces
1998
"... In PAGE 5: ... These aspects are such things as the level of lossy compression in a JPEG image, the frame rate of video, or the colour depth and size of an image adapted to a specific display device. We use the obvious media hierarchy to represent the media, as in Figure 2, and provide control interfaces to the media, as in Table1 . These control interfaces can be remotely called across the network, transforming the media before it is downloaded.... ..."
Cited by 32
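Table 1 itself is not reproduced in the excerpt, so the control interfaces below are hypothetical stand-ins that only mirror the adaptation aspects the text names (JPEG quality, video frame rate, colour depth and image size); the class and method names are assumptions, not the paper's actual interface:

```python
from abc import ABC, abstractmethod

class MediaControl(ABC):
    """Assumed base control interface: a media object can be transformed
    (server side) before it is downloaded to the client."""
    @abstractmethod
    def transform_for(self, device_profile: dict) -> bytes: ...

class ImageControl(MediaControl):
    def __init__(self, data: bytes):
        self.data = data
    def set_jpeg_quality(self, quality: int) -> None:
        self.quality = quality                      # lossy-compression level
    def set_size(self, width: int, height: int) -> None:
        self.size = (width, height)                 # target display size
    def set_colour_depth(self, bits: int) -> None:
        self.depth = bits                           # e.g. 8 bits for a small mono display
    def transform_for(self, device_profile: dict) -> bytes:
        self.set_jpeg_quality(device_profile.get("jpeg_quality", 75))
        self.set_size(*device_profile.get("screen", (640, 480)))
        self.set_colour_depth(device_profile.get("colour_depth", 24))
        return self.data                            # actual re-encoding omitted in this sketch

class VideoControl(MediaControl):
    def __init__(self, data: bytes):
        self.data = data
    def set_frame_rate(self, fps: int) -> None:
        self.fps = fps                              # frames per second for the target device
    def transform_for(self, device_profile: dict) -> bytes:
        self.set_frame_rate(device_profile.get("fps", 15))
        return self.data
```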
Table 1: Standard deviation (SD), edge strength and edge spread for both images after each filtering. The edge strength and edge spread are taken from the histograms in Figure 3.
2004
"... In PAGE 4: ... The standard deviation for each of the filtered images is then taken at the same positions. The results are presented in Table1 . For the laboratory image, Adaptive smoothing gives the best results followed by the two other non-linear filters.... In PAGE 5: ... Two measurements are taken from these histograms which indicate edge strength and spread. These results are compiled in Table1 . While Savitzky-Golay and Gaussian filters spread the edge, the other three maintain and even enhance the edge characteristics.... ..."
Cited by 2
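The excerpt states that the standard deviation is measured on the filtered images and that edge strength and spread are read from histograms (the paper's Figure 3, not shown here). A simplified sketch of such measurements, with the patch location, line-profile direction and threshold all assumed rather than taken from the paper:

```python
import numpy as np

def noise_sd(image, region):
    """Standard deviation inside a nominally uniform patch (noise level)."""
    r0, r1, c0, c1 = region
    return float(np.std(image[r0:r1, c0:c1]))

def edge_profile_stats(image, row, c0, c1):
    """Edge strength and spread from one line profile across an edge:
    strength = total intensity step, spread = number of pixels carrying most
    of that step. These are simplified stand-ins for the histogram-based
    measures in the paper."""
    profile = image[row, c0:c1].astype(float)
    diffs = np.abs(np.diff(profile))
    strength = float(profile.max() - profile.min())
    spread = int(np.count_nonzero(diffs > 0.1 * diffs.max())) if diffs.size and diffs.max() > 0 else 0
    return strength, spread
```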
Table 7: Average rating for individual images for fixed and adapted parameters. 2. Is there interaction between fixed vs. adaptive parameter values and the edge detectors? The answer is yes. The fourth row of Table 5 lists the variance due to this interaction. This can be interpreted as saying that for some edge detectors the difference between ratings for adapted and fixed parameters is greater than for others. This is also clearly seen in Table 6, which shows that the difference between fixed versus adapted parameters is greatest for the Sarkar-Boyer (the performance was much better with adapted parameters than with fixed) and least for the Nalwa-Binford (performance is identical between fixed and adaptive parameters). In fact, for the Nalwa-Binford, the best fixed parameter choice was also the best adapted parameter choice for seven of the eight images. (Consistent with this conclusion, recall that Table 4 shows that the image and parameter interaction is weakest for the Nalwa-Binford.)
1996
"... In PAGE 25: ... We conducted further analysis within the data divided according to xed or adapted parameters. Table7 lists the average ratings of each edge detector for each image. Table 8 lists the ANOVA results within each parameter choice type.... In PAGE 28: ... That is, the ratings of edge detectors vary with the images. Table7 lists the mean ratings of the edge detectors on each image for adapted parameter choices. Notice that the ranking of the edge detectors does vary with image.... In PAGE 29: ... 2-4. (Please refer to the second chart in Table7 for the average ratings.) Fig.... ..."
Cited by 18
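The interaction the caption describes can be seen directly by tabulating mean ratings per detector for fixed versus adapted parameters. The sketch below uses made-up placeholder ratings, not the paper's data from Tables 5-7:

```python
import pandas as pd

# Placeholder ratings chosen only to mimic the pattern described in the text:
# a large fixed/adapted gap for Sarkar-Boyer, almost none for Nalwa-Binford.
ratings = pd.DataFrame({
    "detector": ["Sarkar-Boyer"] * 4 + ["Nalwa-Binford"] * 4,
    "params":   ["fixed", "fixed", "adapted", "adapted"] * 2,
    "rating":   [3.1, 2.9, 4.6, 4.4,
                 4.0, 4.1, 4.0, 4.2],
})

means = ratings.groupby(["detector", "params"])["rating"].mean().unstack()
means["adapted_minus_fixed"] = means["adapted"] - means["fixed"]
print(means)
# A large per-detector difference shows up in the ANOVA as a
# detector-by-parameter-mode interaction; a near-zero difference means the
# same parameter choice works equally well either way.
```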
Table 1. The watermark data decoding performance (%).
"... In PAGE 5: ...presented in Table1 . To better evaluate the efficiency of the proposed method, the tests are performed on images which are watermarked by a) the method in [1] (column 1), b) the introduced adaptive method, including only standard deviation (SD) term in Eq.... In PAGE 5: ...As can be seen from Table1 , especially the decoding performance for busy images, New York and Baboon, increases considerably by using image adaptive and controlled watermark embedding. For relatively smooth images, Lena and Sailboat, this increase is small and the performance of the method in [1] is nearly same as that of modified method.... ..."