Results 1 - 10 of 81,451

Table 3. Distribution of Garbage/Noise Model

in Pronunciation Variants Modeling in Korean Spontaneous Speech
"... In PAGE 2: ... Table 2. Classification of Korean spontaneous speech characteristics about disfluencies [8] Classification Example Noise Human garbage, pause, Environment noise Filled pause ye/ ne/ jeo/ eo/ mwo/ Disfluencies Repetition / Repair mat/ matsseumnikka yeyak/ yeyakasyeotsseumnida Based on the data gathered, 1,710 filled pauses occurred among the 36,142 spacing units (see Table3 for the distribution of noise). Mostly, the acceptance and affirmation expressions (ye) and (ne) covered 72.... ..."

Table 2.1: Label Noise Models for the Likelihood in Gaussian Process Classification.

in Learning Discriminative Models with Incomplete Data
by Ashish Kapoor 2006
Cited by 2

Table 3: Classification Accuracy shown as a percentage (%) correct per class on the Independence and Dependence Classification models.

in ABSTRACT Query Intention Acquisition: A Case Study on Automatically Inferring Structured Queries
by Leif Azzopardi
"... In PAGE 5: ...8% of the query terms were essentially noise in the query. The classification accuracy performance is reported in Table3 for statistics on a class by class basis. As a base- line, we assumed a naive model that assigns query terms to the most probable class (i.... ..."

Table 3. Confirmation of the Results

in A Procedure For Robust Design: Minimizing Variations Caused By Noise Factors And Control Factors
by Wei Chen, Janet K. Allen, Kwok-Leung Tsui, Farrokh Mistree 1996
"... In PAGE 17: ... As explicit analytical equations are not available for this problem, to confirm the adequacy of the response model in predicting the mean and variance of system performance, random simulations are used. In Table3 , the results are compared to those from 100 random simulations and 500 random simulations. For these simulations, the values of the control factors are fixed at their solution point and the values of the noise factors vary within the given range.... In PAGE 17: ... It can be noted that the estimations for the mean of power, mean of efficiency and mean of savings are quite accurate. For variance, in column 2 of Table3 , the estimated values are provided when assuming the noise factors are normally distributed or uniformly distributed. Random simulations yield values which are close to those obtained when assuming the noise factors are uniformly distributed.... In PAGE 17: ... Note that the accuracy is satisfactory. - INSERT TABLE 3 HERE - Table3 . Confirmation of the Results When considering multiple aspects of quality, designers may have different preferences... ..."
Cited by 36
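The confirmation step this excerpt describes (control factors held fixed at the solution point while noise factors vary randomly, comparing 100- and 500-run estimates of mean and variance) can be sketched as a small Monte Carlo check. The response function and factor names below are illustrative assumptions, not taken from the paper:

```python
import random
import statistics

# Hypothetical response model: performance as a function of fixed
# control factors and varying noise factors (names are assumptions).
def response(control, noise):
    return control["speed"] * 2.0 + noise["temp"] * 0.5 - noise["load"] ** 2

def monte_carlo(n_runs, dist="uniform", seed=0):
    """Estimate mean and variance of the response with control factors
    fixed at the design solution point while noise factors vary."""
    rng = random.Random(seed)
    control = {"speed": 1.0}  # fixed at the solution point
    samples = []
    for _ in range(n_runs):
        if dist == "uniform":  # noise uniform within its given range
            noise = {"temp": rng.uniform(-1, 1), "load": rng.uniform(-1, 1)}
        else:                  # or assumed normally distributed
            noise = {"temp": rng.gauss(0, 0.5), "load": rng.gauss(0, 0.5)}
        samples.append(response(control, noise))
    return statistics.mean(samples), statistics.variance(samples)

# Compare 100- and 500-run estimates, as in the confirmation step.
m100, v100 = monte_carlo(100)
m500, v500 = monte_carlo(500)
```

Comparing the two run counts (and the two noise distributions) against the response-model predictions is what the excerpt calls confirming the adequacy of the model.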

Table 2. Noise classification results and confusion matrix

in Environmental Noise Classification for Context-Aware Applications
by Ling Ma, Dan Smith, Ben Milner 2003
Cited by 2

Table 1: Classification Accuracies and Standard Deviations for all Models

in Detecting Bright Band using AI Techniques in Radar Hydrology
by D. R. Mcculloch, J. Lawry, M. A Rico-ramirez, I. D Cluckie
"... In PAGE 7: ...can see from Table1 that the continuous version of Naive Bayes certainly lacked performance compared to the three discretisation methods considered. This supports our analysis of the dataset and demonstrates the downfall of models that often assume a Gaussian distribution.... In PAGE 7: ... This supports our analysis of the dataset and demonstrates the downfall of models that often assume a Gaussian distribution. We can see from Table1 , that in general Fuzzy Naive Bayes outperforms Naive Bayes. This is because Fuzzy Naive Bayes is less susceptible to misclassification due to crisp partitions, and is more robust to noise and heavily overlapping attribute spaces.... ..."

Table 7: Effect of noise on classification performance on the SpamAssassin dataset. An order-6 adaptive PPM model and BogoFilter are compared. The best results in the 1-AUC statistic are in bold.

in Spam filtering using compression models
by Andrej Bratko, Bogdan Filipič 2005
"... In PAGE 14: ... We then stripped the messages of all headers except the subject header and re-ran the experiment. The results in Table7 show that compression models are very robust to noise. Even after 20% of all characters are distorted, rendering messages practically illegible, the PPM model retains a respectable performance.... ..."
Cited by 8
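The noise experiment summarized here (distorting a fraction of a message's characters before classification) can be reproduced with a small corruption helper. The replacement scheme and sample message below are assumptions for illustration, not the authors' exact procedure:

```python
import random
import string

def corrupt(text, rate, seed=0):
    """Replace roughly `rate` of the characters with random letters,
    simulating the character-level distortion described above."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars)):
        if rng.random() < rate:
            chars[i] = rng.choice(string.ascii_letters)
    return "".join(chars)

msg = "Congratulations, you have won a free prize!"
noisy = corrupt(msg, 0.20)  # distort ~20% of characters
```

Feeding such distorted messages through a filter and re-measuring 1-AUC is the robustness test the excerpt reports.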

TABLE 1: Description of Economic Sectors in the Model

in Extending the Random-Utility-Based Multiregional Input-Output Model: Incorporating Land-Use Constraints, Domestic Demand and Network Congestion in a Model of Texas
by Kara M. Kockelman (Clare Boothe Luce Assistant Professor, Civil Engineering) 2004
Cited by 1

Table 1. Classification of Models

in unknown title
by unknown authors 1990
"... In PAGE 2: ... Here, we consider a classification based on three factors: (a) whether the model assumes dyad independence; (b) whether it is suitable for symmetric arrays (undirected graphs), with yij = yj,, or for asymmetric arrays (digraphs); (c) whether it is a block model, with parameters corre- sponding to an a priori grouping of individuals. This leads to the eightfold classification shown in Table1 . Models for most of the categories are already familiar, and we introduce some new models for the other cases.... In PAGE 2: ... Models for most of the categories are already familiar, and we introduce some new models for the other cases. All of the models in Table1 can conveniently be fitted with the pseu- dolikelihood method. In most of this article we restrict attention to models for a single relationship, represented by a binary array {yij).... In PAGE 3: ... Thus each logit model is a dyad model, and vice versa. We now consider the model classification of Table1 and express various cases in logit form. Holland and Leinhardt (1981) defined their p, model by Here rn = 4 2,2, yi1y,,, the number of mutual arcs, y++ is the total number of arcs, and so on.... ..."
Cited by 28

Table 2. Number of LOOCV errors for different noise identification methods.

in Probabilistic Noise Identification and Data Cleaning
by Jeremy Kubica, Andrew Moore 2002
"... In PAGE 5: ... Each test con- sisted of: removing a record from the training set, fully re- learning the models and corruption matrix from their default values, and classifying the removed record. The results are shown in Table2 . In addition to the randomized LENS al- gorithm, we also tested the following algorithms: Table 2.... In PAGE 5: ... Thus Simple Cell is similar to the approach pre- sented in [16] for a single iteration probabilistic noise iden- tification and Simple Record is similar to some approaches used for full point noise identification. The results in Table2 indicate that accounting for cor- ruptions and learning a model of the corruption process, as with the randomized greedy algorithm, can lead to an im- provement in classification accuracy. Further, the random-... ..."
Cited by 8
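The leave-one-out protocol in this excerpt (remove one record, relearn the model from scratch on the rest, classify the held-out record, count errors) can be sketched generically. The nearest-mean classifier below is a toy stand-in, not the LENS algorithm from the paper:

```python
def loocv_errors(records, labels, fit, predict):
    """Leave-one-out cross-validation: for each record, refit the model
    on all other records and classify the held-out one; return the
    number of misclassifications."""
    errors = 0
    for i in range(len(records)):
        train_x = records[:i] + records[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = fit(train_x, train_y)
        if predict(model, records[i]) != labels[i]:
            errors += 1
    return errors

# Toy nearest-mean classifier for illustration only.
def fit(xs, ys):
    means = {}
    for c in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == c]
        means[c] = sum(pts) / len(pts)
    return means

def predict(means, x):
    return min(means, key=lambda c: abs(x - means[c]))

errs = loocv_errors([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], [0, 0, 0, 1, 1, 1], fit, predict)
# errs == 0 on this cleanly separated toy set
```

Swapping in the actual model-and-corruption-matrix learner as `fit`/`predict` gives the error counts the excerpt tabulates.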
© 2007-2019 The Pennsylvania State University