Results 11 - 20 of 13,692
Table 3.2: Common inter-process synchronization and communication mechanisms, and their confinement approaches under FVM.
2007
Table 2: Matching set confinement for 12 real filter sets; number of unique match conditions for each filter field is given in parentheses.
2005
"... In PAGE 8: ... Ta- ble 2 reports matching set confinement results from 12 real filter sets provided by Internet Service Providers (ISPs), a network equipment vendor, and research colleagues. Match condition redundancy can also be ob- served in Table2 , as the number of unique match conditions for each filter field is significantly less than the number of filters in each filter set. The label encoding technique employed by the Distributed Crossproducting of Field Labels (DCFL) algorithm leverages match condition redundancy and matching set confinement to construct fast, efficient aggregation data structures [1].... ..."
Table 1. Computations of confinement time Tcf for different diffusion coefficient values, and the theoretical value of this confinement time. Note that the agreement is good as long as the confinement time is large. Indeed, the time step Δt = 5 × 10⁻³ used to compute them is the same for the three runs, which leads to different ratios Tcf/Δt. If this ratio is too small, the time step is not appropriate to accurately model the particle transport.
"... In PAGE 6: ... Once all particles have reached the jet surface, we calculate the average value of the confine- ment time. In Table1 , we present the result of the di erent computations. The good agreement between the numerical and the estimated confinement times is a clue indicating that the spatial transport of the particles in the jet is well treated as far as the time step is small enough to mimic the Brownian mo- tion of particles.... ..."
Table 2: Percent classification error. Numbers in brackets represent standard deviations. Wavelet1 uses the first 15 wavelet coefficients of the signal, while Wavelet2 stands for a discrimination based on the L2 distance of the distributions of the two classes. QDB1 means that first the quadratic discrimination best basis is found (see text for details) and then the first 12 or 15 coefficients of this basis are used. In Quadratic2, the discriminating coefficients based on the L2 distance of the distributions of the two classes are found from the same best basis. In all cases the discrimination is done on each dimension separately.
"... In PAGE 8: ... We have achieved far better results on this frequency band, and thus provide more results to demonstrate the effect of different mother wavelets, different discrimination methods and different number of features. Table2 provides results of the experiments with the wide-band data. The top panel indicates that vast dimensionality reduction is essential for improved performance as the discriminating information is confined in a much smaller dimensional space.... ..."
Table 6. Comparative advantages of the three machine learning methods. It should be mentioned that this evaluation is relative and confined to the problem of cancer classification with gene expression levels that we addressed. (Columns: Bayesian networks, Neural trees, RBF networks.)
"... In PAGE 14: ...Methods of Microarray Data Analysis tree learning seems the best in finding out a small set of interesting genes for effective classification. Table6 summarizes the comparative characteristics of machine learning techniques we used in the experiments. Table 6.... ..."
Table 7. Size of constraint set from Garvan ES1 data: 332 rules with one antecedent.
"... In PAGE 14: ... The cases generated are confined to a permitted region less than 10-6 of the volume of the attribute space. Table7 shows the gross consistency check suggested in the previous section applied to the generated cases. A row in the table corresponding to an attribute is an analysis of those cases for which that attribute was the first attribute chosen.... ..."
Table 2. The recognition error rates (%) of the classifiers and the false alarm rates (%) with respect to the confusers.
"... In PAGE 5: ... Two confusers, D7 and 2S1, were added to the testing set. The recognition results are listed in Table2 . It is shown that the three classification methods gave different recognition performance.... In PAGE 6: ... However for this to work, class discriminants should be confined to the local area of the pattern space where the samples are located, that is, the discriminants should be local. The confuser rejection results of Table2 showed that the SVM, when Gaussian kernel functions are employed, provides a bounded or local decision region in the input space, and thus obtains a better confuser rejection result. Furthermore, when a confuser is far away from the local decision region, it would be mapped by the SVM to a location close to the origin of the feature space, which promises a reliable rejection to the confuser.... ..."
Table 8: Confuser rejection. (Columns: Classifier, Rejection.)
"... In PAGE 22: ... The size of both confuser sets is 274. The rejection results are listed in Table8 . From the table, we conclude that our three classifiers give better results of confuser rejection than the template matcher.... In PAGE 24: ... However, for meaningful results, class discriminants should be confined to the local area of the pattern space where the samples are located, that is, the discriminants should be local. The confuser rejection results of Table8 showed that the SVM with Gaussian kernel, which imple- ments a bounded local decision region in the input space, in fact obtains the best confuser rejec- tion. The SVM maps a confuser far away from the local decision region onto a location close to the origin of the feature space, which promises a reliable rejection.... ..."