CiteSeerX
Results 11 - 20 of 68,611

Table 1. Comparison of different discriminative trackers. Columns: Tracker; no pre-train; online learning; collaborative training; easy to generalize; # of features.

in Santa Cruz, CA, USA
by Feng Tang, Shane Brennan, Qi Zhao, Hai Tao

Tables 8 and 9 present the results for a tournament between the best player from each set of experiments starting with random initial weights, R6,0, R6,20, ..., R50,20, and the best player from each set of experiments starting with pre-trained networks, P6,0, P6,20, ..., P50,20. Each of these players plays 19 games as black against each other player,

in A coevolutionary model for the Virus game
by P. I. Cowling, M. H. Naveed, M. A. Hossain 2006
Cited by 1
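The all-play-all format described in the excerpt (every player meeting every other player, taking black in turn) amounts to ordered round-robin pairings. A minimal sketch, using placeholder player labels rather than the paper's trained networks:

```python
# Hedged sketch of the tournament pairing scheme described above:
# every player meets every other player, once as black and once as white.
# Player labels here are placeholders, not the paper's trained networks.
from itertools import permutations

def round_robin_pairings(players):
    """Return all ordered (black, white) pairings of distinct players."""
    return [(black, white) for black, white in permutations(players, 2)]
```

With n players this produces n(n-1) ordered pairings; in the tournament above, each pairing would then host the 19 games mentioned in the excerpt.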

Table 4. Effects of training on end-exercise responses to HCL testing

in Adaptation of the Slow Component of VO2 Following 6 wk of High or Low Intensity Exercise Training by Jeffrey Vincent Ocel
by Virginia Polytechnic Institute, Shala E. Davis, Jeffrey Vincent Ocel, Committee Chair, William G. Herbert
"... In PAGE 56: ... Although this concept continues to be a subject of great debate. Table 4 shows the effect of training for each group from the pre-training HCL to the testing performed after wk 6. For each variable, namely end-exercise VO2, VE, heart rate and lactate, the training groups experienced a significant reduction from the pre-training HCL test.... ..."

Table 2. The number of input, hidden and output nodes per data set for each of the constituent networks used for the single network and ensemble systems (hidden nodes selected as in [13]).

in In-situ Learning in Multi-net Systems
by Matthew Casey , Khurshid Ahmad
"... In PAGE 3: ... In order to understand the generalisation performance of the SLE and SLM systems, we compare the percentage test responses against those generated for single MLPs trained with and without early stopping, as well as simple ensembles formed from 2 to 20 MLPs pre-trained with early stopping. The architecture used for the various systems is shown in Table 2... ..."

Table 3. The different architectures used for the SLM system, shown as the topology of the SOM and the single layer network. For the SOM this is the number of inputs and nodes in the map. For the single layer network this is the number of input and output nodes.

in In-situ Learning in Multi-net Systems
by Matthew Casey , Khurshid Ahmad
"... In PAGE 3: ... System MONK 1 MONK 2 MONK 3 WBCD MLP MLP (ES) SE (ES) SLE (ES) 6-3-1 6-2-1 6-4-1 9-5-2 In order to understand the generalisation performance of the SLE and SLM systems, we compare the percentage test responses against those generated for single MLPs trained with and without early stopping, as well as simple ensembles formed from 2 to 20 MLPs pre-trained with early stopping. The architecture used for the various systems is shown in Table 2 and Table 3... ..."
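The excerpt compares single MLPs (with and without early stopping) against simple ensembles of pre-trained MLPs. As an illustration only, with stand-in callables rather than the paper's networks, majority voting over constituent networks and a generic patience-based early-stopping loop can be sketched as:

```python
# Hedged sketch of the two ingredients named in the excerpt: a simple
# ensemble combined by majority vote, and an early-stopping training loop.
# The "networks" are stand-in callables, not the paper's MLPs.
from collections import Counter

def ensemble_predict(nets, x):
    """Majority vote over the predictions of the constituent networks."""
    votes = Counter(net(x) for net in nets)
    return votes.most_common(1)[0][0]

def train_with_early_stopping(train_step, val_error, max_epochs=100, patience=5):
    """Generic early stopping: halt once validation error has not improved
    for `patience` consecutive epochs.  `train_step` and `val_error` are
    stand-ins for one epoch of training and a validation-set evaluation."""
    best_err, best_epoch, since_best = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        train_step(epoch)            # one epoch of training (stub here)
        err = val_error(epoch)
        if err < best_err:
            best_err, best_epoch, since_best = err, epoch, 0
        else:
            since_best += 1
            if since_best >= patience:
                break                # validation error stopped improving
    return best_epoch, best_err
```

The vote is over class labels, matching the ensemble-of-classifiers comparison in the excerpt; averaging continuous outputs would be an equally common variant.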

Table 9. The posture transition path (PTP) and posture path (PP) in experiment 2.

in Skeleton-based Walking Motion Analysis Using Hidden Markov Model and Active Shape Models
by I-cheng Chang, Chung-lin Huang
"... In PAGE 27: ... Fig. 14. The image frames of experiment 2. The pre-trained HMM and Viterbi algorithm were applied to the observation model-state sequences to generate the posture transition path, which was further simplified as the posture path (illustrated in Table 9). The posture path of the sequence was {24* 25* 26* 27* 28* 30* 23* 24* 25* 26*}.... ..."
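The excerpt applies a pre-trained HMM with the Viterbi algorithm to recover a posture transition path from an observation sequence. A generic, self-contained Viterbi decoder (dictionary-based toy parameters, not the paper's posture model) might look like:

```python
# Hedged sketch: generic Viterbi decoding of the most likely hidden-state
# path for an observation sequence, given HMM parameters as dicts.
# This is a textbook decoder, not the posture model from the paper.
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for `obs`."""
    # V[t][s] = (best probability of any path ending in s at time t,
    #            predecessor state on that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

For long sequences one would work in log probabilities to avoid underflow; the multiplicative form above keeps the sketch close to the textbook recurrence.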

Table 3 Feature subsets per neural network

in User Modeling and User-Adapted Interaction, DOI 10.1007/s11257-007-9035-8 (original paper)
by Lampropoulos, George A. Tsihrintzis 2007
"... In PAGE 17: ...ock. Each piece has a duration of 30 s. The evaluation process consisted of three stages: At first, each participant was asked to complete a questionnaire regarding user background information, which was collected for statistical purposes and is summarized in Table 2. Secondly, each participant was given a pre-defined set of 11 pre-trained neural networks with corresponding feature subsets as in Table 3. The process of selecting the feature subsets was not arbitrary.... ..."

TABLE 2. Network Algorithm - In the first box, the table illustrates the general method for creating belief networks based on expert knowledge. In the bottom box, the table outlines the heuristic algorithm employed by the program.

in Text Analysis for Constructing Design Representations
by Andy Dong, Alice M Agogino 1997
Cited by 10

Table 1. Performance of Bayesian Belief Network

in Hybrid Feature Selection for Modeling Intrusion Detection Systems
by Srilatha Chebrolu, Ajith Abraham, Johnson P Thomas 2004
"... In PAGE 4: ... Further, a Bayesian network classifier is constructed using the training data, and then the classifier is used on the test data set to classify the data as an attack or normal. Table 1 depicts the performance of the Bayesian belief network using the original 41-variable data set and the reduced 17-variable data set. The training and testing times for each classifier are decreased when the 17-variable data set is used.... ..."
Cited by 2
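The excerpt trains a Bayesian network classifier on labeled data and then applies it to label test records as attack or normal. A full belief network is beyond a short snippet; as a simplified stand-in that follows the same train-then-classify pattern, a categorical naive Bayes classifier with add-one smoothing can be sketched as (feature values and labels here are illustrative, not the KDD intrusion data):

```python
# Hedged sketch: categorical naive Bayes as a simplified stand-in for the
# Bayesian classifier workflow in the excerpt (fit on training data, then
# label test records).  Data below is illustrative, not the KDD data set.
from collections import defaultdict
import math

def train_nb(rows, labels):
    """Count class priors and per-feature value frequencies per class."""
    prior = defaultdict(int)                       # class -> count
    cond = defaultdict(lambda: defaultdict(int))   # class -> (feat, val) -> count
    for x, y in zip(rows, labels):
        prior[y] += 1
        for i, v in enumerate(x):
            cond[y][(i, v)] += 1
    return prior, cond, len(rows)

def classify_nb(model, x):
    """Return the class with the highest smoothed log-posterior."""
    prior, cond, n = model
    best_score, best_class = -math.inf, None
    for y, count in prior.items():
        score = math.log(count / n)
        for i, v in enumerate(x):
            # Add-one (Laplace) smoothing so unseen values score nonzero.
            score += math.log((cond[y][(i, v)] + 1) / (count + 2))
        if score > best_score:
            best_score, best_class = score, y
    return best_class
```

The naive independence assumption is what a real belief network relaxes; the reduced 17-variable data set in the excerpt would correspond here to shorter feature tuples and fewer conditional counts.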

Table 1. Performance of Bayesian Belief Network

in Distributed Intrusion Detection Systems: A Computational Intelligence Approach
by Ajith Abraham, Johnson Thomas 2005
"... In PAGE 15: ....3.1. Modeling IDS Using Bayesian Classifier. Furthermore, a Bayesian network classifier is constructed using the training data, and then the classifier is used on the test data set to classify the data as an attack or normal. Table 1 depicts the performance of the Bayesian belief network using the original 41-variable data set and the reduced 17-variable data set. The training and testing times for each classifier are decreased when the 17-variable data set is used.... ..."
Cited by 2

© 2007-2019 The Pennsylvania State University