
CiteSeerX

Results 1 - 10 of 152,086

Table 5. Regression on Pumadyn with 32 inputs, non-linear, with high noise. Training sizes: 64, 128, 256, 512, 1024.

in On different model selection criteria in a Forward and Backward, International Journal of Pattern Recognition and Artificial Intelligence, World Scientific Publishing Company, 2003
by Shimon Cohen, Nathan Intrator

Table 3. Average results for small data sets

in Incorporating Knowledge in Evolutionary Prototype Selection
by Salvador García, José Ramón Cano, Francisco Herrera

Table 1. Parameter estimates by approximate method I for the small data set

in Maximum Likelihood Estimation on Large Phylogenies and Analysis of Adaptive Evolution in Human Influenza Virus A
by Ziheng Yang 2000
"... In PAGE 7: ...013), v 4 5.069 See notes for Table 1. Lists of positively selected sites are the same as in Table 1 Table 3.... In PAGE 7: ...059), v 4 3.142 157 159 186 193 194 219 226 See notes for Table 1. Estimates of k are around 3.... ..."
Cited by 10

Table 2. Parameter estimates by approximate method II for the small data set

in Maximum Likelihood Estimation on Large Phylogenies and Analysis of Adaptive Evolution in Human Influenza Virus A
by Ziheng Yang 2000
Cited by 10

Table 4: Features selected by the GA for the small data set.

in Choosing the Best Set of Bankruptcy Predictors
by Barbro Back, Kaisa Sere, Michiel C. Van Wezel 1995
"... In PAGE 10: ... The string 111000000101001111 occurred most frequently in a converged population. Table 4 shows all the available features, marked by a 1 if they were selected by the GA in the best string, and marked by a 0 if they were left out. This shows which features have been selected by the GA as being the most influential for bankruptcy.... ..."
Cited by 4
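The bit-string encoding described in the snippet above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: each position in the GA's best string corresponds to one candidate predictor, and a 1 marks a feature the GA selected.

```python
# Hypothetical sketch of decoding a GA "best string" into a feature subset.
# Each bit position maps to one candidate bankruptcy predictor:
# 1 = selected by the GA, 0 = left out (feature names are not in the snippet).
best_string = "111000000101001111"  # most frequent string in the converged population

selected_indices = [i for i, bit in enumerate(best_string) if bit == "1"]
print(selected_indices)  # positions of the features the GA kept
```

Decoding this particular string keeps 9 of the 18 candidate features.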

Table 1: Deviation from the best model inferred (small data set)

in On Structural Inference for XML Data
by Raymond K. Wong, Jason Sankey 2003
Cited by 3

Table 1. Performance of the FAST-MCD and FSA algorithms on some small data sets.

in A Fast Algorithm for the Minimum Covariance Determinant Estimator
by Peter J. Rousseeuw, Katrien Van Driessen 1999
"... In PAGE 18: ...ere all regression data sets but we ran FAST-MCD only on the explanatory variables, i.e. not using the response variable. The first column of Table 1 lists the name of each data set, followed by n and p. We stayed with the default value of h = [(n + p + 1)/2].... In PAGE 18: ...ollowed by n and p. We stayed with the default value of h = [(n + p + 1)/2]. The next column shows the number of starting (p + 1)-subsets used in FAST-MCD, which is usually 500 except for two data sets where the number of possible (p + 1)-subsets out of n was fairly small, namely (12 choose 3) = 220 and (18 choose 3) = 816, so we used all of them. The next entry in Table 1 is the result of FAST-MCD, given here as the final h-subset. By comparing these with the exact MCD algorithm of Agulló (personal communication) it turns out that these h-subsets do yield the exact global minimum of the objective function.... In PAGE 21: ... In Tables 1 and 2 we have applied the FSA algorithm to the same data sets as FAST-MCD, using the same number of starts. For the small data sets in Table 1 the FSA and FAST-MCD yielded identical results. This is no longer true in Table 2, where the FSA begins to find nonrobust solutions.... ..."
Cited by 67
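The arithmetic quoted in the snippet above is easy to reproduce. A minimal sketch (not the paper's code) of the default subset size h = [(n + p + 1)/2] and of the two binomial counts that made full enumeration of the starting (p + 1)-subsets feasible:

```python
from math import comb

def default_h(n: int, p: int) -> int:
    # Default MCD subset size from the snippet: h = [(n + p + 1)/2],
    # i.e. the integer part of (n + p + 1) / 2.
    return (n + p + 1) // 2

# The two small data sets where all starting subsets were enumerated:
print(comb(12, 3))  # 220 possible (p+1)-subsets when n = 12, p + 1 = 3
print(comb(18, 3))  # 816 possible (p+1)-subsets when n = 18, p + 1 = 3
print(default_h(12, 2))  # e.g. n = 12, p = 2 gives h = 7
```

Whenever the subset count falls at or below the usual 500 random starts, enumerating every subset is cheaper and exact.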

Table 1. Performance of the FAST-LTS and FSA algorithms on some small data sets.

in Computing LTS Regression for Large Data Sets
by Peter J. Rousseeuw, Katrien Van Driessen 1999
"... In PAGE 13: ... 5 Performance of FAST-LTS To get an idea of the performance of the overall algorithm, we start by applying FAST-LTS to some small regression data sets taken from (Rousseeuw and Leroy 1987). The first column of Table 1 lists the name of each data set, followed by n and p, where n is the number of observations and p stands for the number of coefficients including the intercept term. We stayed with the default value of h = [(n + p + 1)/2].... In PAGE 13: ... The next column shows the number of starting p-subsets used in FAST-LTS, which is usually 500 except for two data sets where the number of possible p-subsets out of n was fairly small, namely (12 choose 3) = 220 and (18 choose 3) = 816, so we used all of them. The next entry in Table 1 is the result of FAST-LTS, given here as the final h-subset. By comparing these with the exact LTS algorithm of Agulló (personal communication) it turns out that these h-subsets do yield the exact global minimum of the objective function.... In PAGE 16: ... In Tables 1 and 2 we have applied the FSA algorithm to the same data sets as FAST-LTS, using the same number of starts. For the small data sets in Table 1 the FSA and FAST-LTS yielded identical results, but for the larger data sets in Table 2 the FSA obtains nonrobust solutions. This is because (1) The FSA starts from randomly drawn h-subsets H_1.... ..."
Cited by 19
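The LTS criterion both snippets refer to is the sum of the h smallest squared residuals, so points outside the h best-fitted observations do not influence the objective at all. A minimal illustration of just that criterion (the random p-subset starts and C-step refinement of FAST-LTS are not shown, and the regression fit itself is omitted):

```python
def lts_objective(residuals, h):
    # Sum of the h smallest squared residuals: large residuals from
    # outliers beyond the h best-fitted points are simply trimmed away.
    r2 = sorted(r * r for r in residuals)
    return sum(r2[:h])

# A gross outlier (residual 100) leaves the trimmed objective unchanged:
print(lts_objective([1.0, -2.0, 0.5, 100.0], h=3))  # 0.25 + 1 + 4 = 5.25
```

This trimming is what makes the estimator robust, and it is also why a poor search strategy (like the FSA runs mentioned above) can get stuck on a nonrobust h-subset: the objective only scores the subset it is handed.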

Table 6: Prediction of best net found by the GA using the small data set

in Choosing the Best Set of Bankruptcy Predictors
by Barbro Back, Kaisa Sere, Michiel C. Van Wezel 1995
Cited by 4

Table 5.3: Execution times in seconds on the KSR1 for the small data set.

in Evaluation of Numerical Applications Running With Shared Virtual Memory
by Rudolf Berrendorf, Michael Gerndt, Zakaria Lahjomri, Thierry Priol

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University