Results 11 - 20 of 68,611
Table 1. Comparison of different discriminative trackers (columns: Tracker, no pre-train, online learning, collaborative training, easy to generalize, # of features).
Tables 8 and 9 present the results for a tournament between the best player from each set of experiments starting with random initial weights, R6,0, R6,20, ..., R50,20, and the best player from each set of experiments starting with pre-trained networks, P6,0, P6,20, ..., P50,20. Each of these players plays 19 games as black against each other player...
2006
Cited by 1
Table 4. Effects of training on end-exercise responses to HCL testing
"... In PAGE 56: ... Although this concept continues to be a subject of great debate. Table 4 shows the effect of training for each group from the pre-training HCL to the testing performed after wk 6. For each variable, namely end-exercise VO2, VE, heart rate and lactate, the training groups experienced a significant reduction from the pre-training HCL test.... ..."
Table 2. The number of input, hidden and output nodes per data set for each of the constituent networks used for the single network and ensemble systems (hidden nodes selected as in [13]).
"... In PAGE 3: ... In order to understand the generalisation performance of the SLE and SLM systems, we compare the percentage test responses against those generated for single MLPs trained with and without early stopping, as well as simple ensembles formed from 2 to 20 MLPs pre-trained with early stopping. The architecture used for the various systems is shown in Table 2... ..."
Table 3. The different architectures used for the SLM system, shown as the topology of the SOM and the single layer network. For the SOM this is the number of inputs and nodes in the map. For the single layer network this is the number of input and output nodes.
"... In PAGE 3: ... [table fragment] System: MLP, MLP (ES), SE (ES), SLE (ES); data sets: MONK 1, MONK 2, MONK 3, WBCD; topologies: 6-3-1, 6-2-1, 6-4-1, 9-5-2. In order to understand the generalisation performance of the SLE and SLM systems, we compare the percentage test responses against those generated for single MLPs trained with and without early stopping, as well as simple ensembles formed from 2 to 20 MLPs pre-trained with early stopping. The architecture used for the various systems is shown in Table 2 and Table 3... ..."
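The simple-ensemble baseline this snippet describes (combining several MLPs pre-trained with early stopping) reduces at prediction time to averaging member outputs. A minimal sketch, where the member outputs are hypothetical stand-ins rather than results from trained networks:

```python
import numpy as np

def ensemble_predict(member_outputs):
    """Average the outputs of pre-trained ensemble members (simple ensemble)."""
    return np.mean(member_outputs, axis=0)

# Hypothetical outputs of three members on two test inputs (rows = members)
outs = np.array([[0.9, 0.2],
                 [0.8, 0.4],
                 [0.7, 0.3]])
avg = ensemble_predict(outs)       # averaged scores per test input
pred = (avg > 0.5).astype(int)     # threshold into class labels
```

Averaging reduces the variance component of the members' errors, which is why simple ensembles of independently trained MLPs often generalise better than any single member.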
Table 9. The posture transition path (PTP) and posture path (PP) in experiment 2.
"... In PAGE 27: ... Fig. 14. The image frames of experiment 2. The pre-trained HMM and Viterbi algorithm were applied to the observation model-state sequences to generate the posture transition path, which was further simplified as the posture path (illustrated in Table 9). The posture path of the sequence was {24* 25* 26* 27* 28* 30* 23* 24* 25* 26*}.... ..."
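The decoding step this snippet mentions (a pre-trained HMM plus the Viterbi algorithm, followed by simplifying the state sequence into a posture path) can be sketched as below. The two-state toy model and its probabilities are illustrative assumptions, not the paper's posture HMM:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most-likely hidden-state path for an observation sequence (log-space Viterbi)."""
    n_states = trans_p.shape[0]
    T = len(obs)
    logd = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(trans_p)  # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posture_path(transition_path):
    """Collapse consecutive repeats, analogous to simplifying a transition path to a posture path."""
    out = []
    for s in transition_path:
        if not out or out[-1] != s:
            out.append(s)
    return out

# Toy 2-state HMM with 3 observation symbols (all numbers are assumptions)
obs = [0, 1, 2]
start_p = np.array([0.6, 0.4])
trans_p = np.array([[0.7, 0.3], [0.4, 0.6]])
emit_p = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
state_path = viterbi(obs, start_p, trans_p, emit_p)  # [0, 0, 1]
```

The same two-step pattern applies regardless of model size: Viterbi yields the per-frame state sequence, and collapsing repeats yields the compact path like the {24* 25* ...} sequence quoted above.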
Table 3 Feature subsets per neural network
2007
"... In PAGE 17: ...ock. Each piece has a duration of 30 s. The evaluation process consisted of three stages: At first, each participant was asked to complete a questionnaire regarding user background information, which was collected for statistical purposes and is summarized in Table 2. Secondly, each participant was given a pre-defined set of 11 pre-trained neural networks with corresponding feature subsets as in Table 3. The process of selecting the feature subsets was not arbitrary.... ..."
TABLE 2. Network Algorithm - In the first box, the table illustrates the general method for creating belief networks based on expert knowledge. In the bottom box, the table outlines the heuristic algorithm employed by the program.
1997
Cited by 10
Table 1. Performance of Bayesian Belief Network
2004
"... In PAGE 4: ... Further, a Bayesian network classifier is constructed using the training data, and then the classifier is used on the test data set to classify the data as an attack or normal. Table 1 depicts the performance of the Bayesian belief network using the original 41-variable data set and the 17-variable reduced data set. The training and testing times for each classifier are decreased when the 17-variable data set is used.... ..."
Cited by 2
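The comparison behind this hit's Table 1 (training on the original 41 variables versus a reduced 17-variable subset) can be illustrated with a Gaussian naive Bayes classifier standing in for the paper's Bayesian belief network; the synthetic data, labels, and kept feature indices below are all assumptions, not the intrusion-detection data used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 41-feature connection records (labels depend on a few features)
X_full = rng.normal(size=(200, 41))
y = (X_full[:, :3].sum(axis=1) > 0).astype(int)
keep = np.arange(17)  # hypothetical indices of a reduced 17-feature subset

def fit_gaussian_nb(X, y):
    """Per-class feature means, variances, and priors (Gaussian naive Bayes)."""
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-6, len(Xc) / len(X))
    return params

def predict(params, X):
    """Pick the class with the highest Gaussian log-likelihood plus log-prior."""
    scores = []
    for c, (mu, var, prior) in params.items():
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        scores.append(ll.sum(axis=1) + np.log(prior))
    return np.argmax(np.stack(scores, axis=1), axis=1)

model = fit_gaussian_nb(X_full[:, keep], y)
acc = (predict(model, X_full[:, keep]) == y).mean()
```

Reducing from 41 to 17 features shrinks the per-class parameter count proportionally, which is consistent with the shorter training and testing times the snippet reports for the reduced data set.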
Table 1. Performance of Bayesian Belief Network
2005
"... In PAGE 15: ...3.1. Modeling IDS Using Bayesian Classifier. Furthermore, a Bayesian network classifier is constructed using the training data, and then the classifier is used on the test data set to classify the data as an attack or normal. Table 1 depicts the performance of the Bayesian belief network using the original 41-variable data set and the 17-variable reduced data set. The training and testing times for each classifier are decreased when the 17-variable data set is used.... ..."
Cited by 2