Estimation of the Misclassification Rate of Self-reported Visual Disability

by F. Djafari, H. M. Boisjoly, J. F. Boivin, P. Labelle MD, M. C. Boucher MD, M. Amyot MD, L. Cliche RN, M. Charest RT
"... Purpose: To estimate the misclassification rate of self-reported visual disabilities in a hospital-based population with known visual impairment. Methods: Subjects (N=570) were recruited among patients aged 50 years and more and classified to three categories of visual impairment level. The question ..."

K-Norm Misclassification Rate Estimation for Decision Trees

by Mingyu Zhong, Michael Georgiopoulos, Georgios C. Anagnostopoulos
"... The decision tree classifier is a well-known methodology for classification. It is widely accepted that a fully grown tree is usually over-fit to the training data and thus should be pruned back. In this paper, we analyze the overtraining issue theoretically using an the k-norm risk estimation appro ..."
Abstract - Add to MetaCart
approach with Lidstone’s Estimate. Our analysis allows the deeper understanding of decision tree classifiers, especially on how to estimate their misclassification rates using our equations. We propose a simple pruning algorithm based on our analysis and prove its superior properties, including its
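
To make the smoothing idea concrete, here is a minimal sketch of a Lidstone (add-λ) estimate of the misclassification rate at a single tree leaf; the counts, λ values, and function name are illustrative assumptions, and this is only a simple stand-in for the paper's k-norm risk analysis, not its algorithm.

```python
# Illustrative sketch: Lidstone-smoothed misclassification-rate estimate at a
# single decision-tree leaf (a stand-in, not the paper's k-norm method).

def lidstone_leaf_error(class_counts, lam=1.0):
    """class_counts: per-class training counts at the leaf; lam: smoothing parameter."""
    n, k = sum(class_counts), len(class_counts)
    # Smoothed probability of the majority class predicted at this leaf.
    p_majority = (max(class_counts) + lam) / (n + lam * k)
    # Predicting the majority class, the expected error is 1 - p_majority.
    return 1.0 - p_majority

# A leaf with 18 samples of class A and 2 of class B.
print(lidstone_leaf_error([18, 2]))       # ~0.136, vs. the raw estimate 2/20 = 0.10
print(lidstone_leaf_error([18, 2], 0.5))  # lighter smoothing -> ~0.119
```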

Misclassification Rates in Hypertension Diagnosis due to Measurement Errors

by Camila Friedman-Gerlicz, Claremont McKenna College
"... Abstract. Using a mixture of two normal distributions, we estimate the false positive and false negative errors in the diagnosis of hypertension. Parameters in the mixture are estimated by the expectation-maximization (EM) algorithm. It is shown that both errors depend on cutoff points. Repeated mea ..."
Abstract - Add to MetaCart
Abstract. Using a mixture of two normal distributions, we estimate the false positive and false negative errors in the diagnosis of hypertension. Parameters in the mixture are estimated by the expectation-maximization (EM) algorithm. It is shown that both errors depend on cutoff points. Repeated measurements reduce both errors dramatically. The number of repeated measurements is recommended through a simulation study. 1
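
The core calculation described here can be sketched in a few lines: fit a two-component normal mixture with EM and read off the false positive and false negative rates implied by a diagnostic cutoff. The synthetic readings, component parameters, and the 140 mmHg cutoff below are assumptions for illustration, not the paper's data or exact procedure.

```python
# Hedged sketch: EM for a two-component 1-D normal mixture and the
# false positive / false negative rates implied by a diagnostic cutoff.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic systolic readings: a normotensive and a hypertensive component
# (means, SDs, mixing weight, and cutoff below are illustrative assumptions).
x = np.concatenate([rng.normal(120, 10, 700), rng.normal(155, 12, 300)])

# EM for parameters (w, mu1, s1, mu2, s2) of the two-component mixture.
w, mu1, s1, mu2, s2 = 0.5, x.mean() - 10, x.std(), x.mean() + 10, x.std()
for _ in range(200):
    # E-step: responsibility of component 2 (hypertensive) for each reading.
    p1 = (1 - w) * norm.pdf(x, mu1, s1)
    p2 = w * norm.pdf(x, mu2, s2)
    r = p2 / (p1 + p2)
    # M-step: update the mixing weight, means, and standard deviations.
    w = r.mean()
    mu1, mu2 = np.average(x, weights=1 - r), np.average(x, weights=r)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=1 - r))
    s2 = np.sqrt(np.average((x - mu2) ** 2, weights=r))

cutoff = 140.0
# False positive: a normotensive reading falls above the cutoff.
fp = norm.sf(cutoff, mu1, s1)
# False negative: a hypertensive reading falls below the cutoff.
fn = norm.cdf(cutoff, mu2, s2)
print(f"w={w:.2f}  FP={fp:.3f}  FN={fn:.3f}")
```

Averaging repeated readings shrinks the measurement-error part of each component's spread, which is the mechanism behind the abstract's finding that repeated measurements reduce both error rates.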

On the optimality of the simple Bayesian classifier under zero-one loss

by Pedro Domingos, Michael Pazzani - Machine Learning, 1997
"... The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containin ..."
Abstract - Cited by 818 (27 self) - Add to MetaCart
-one loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater
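
The gap between probability estimation and zero-one loss that this abstract emphasizes can be seen with a tiny made-up example: even when naive Bayes badly misestimates the posterior, it is still optimal under zero-one loss as long as it ranks the true most-probable class first. The numbers below are assumptions chosen only to illustrate that point.

```python
# Toy illustration (assumed numbers): naive Bayes can misestimate the
# posterior yet still be optimal under zero-one loss (misclassification rate).
true_posterior = {"c1": 0.6, "c2": 0.4}   # what a perfect model would output
nb_posterior   = {"c1": 0.9, "c2": 0.1}   # what an overconfident naive Bayes outputs

predict = max(nb_posterior, key=nb_posterior.get)
bayes_optimal = max(true_posterior, key=true_posterior.get)

# Squared-error loss of the probability estimate is large...
sq_loss = sum((true_posterior[c] - nb_posterior[c]) ** 2 for c in true_posterior)
# ...but the decision matches the Bayes-optimal one, so zero-one loss is unchanged.
print(predict == bayes_optimal)   # True: same decision, no extra misclassification
print(round(sq_loss, 2))          # 0.18: poor probability estimate
```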

Balancing misclassification rates in classification-tree models of software quality

by Xiaojing Yuan, Edward B. Allen - Empirical Software Engineering 5(4):313, 2000
"... Abstract. Software product and process metrics can be useful predictors of which modules are likely to have faults during operations. Developers and managers can use such predictions by software quality models to focus enhancement efforts before release. However, in practice, software quality modeli ..."
Abstract - Cited by 4 (1 self) - Add to MetaCart
modeling methods in the literature may not produce a useful balance between the two kinds of misclassification rates, especially when there are few faulty modules. This paper presents a practical classification rule in the context of classification tree models that allows appro-priate emphasis on each type
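
As a rough sketch of the balancing problem this abstract raises (not the paper's classification rule), the snippet below fits a decision tree to synthetic, imbalanced "module metrics" with and without class weighting and reports the two misclassification rates separately; the data-generating process, tree depth, and weighting scheme are all assumed.

```python
# Hedged sketch: balancing the two misclassification rates of a fault-proneness
# tree via class weights (a stand-in for the paper's classification rule).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
# Synthetic "module metrics"; only about one module in ten is faulty (imbalanced).
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.5, size=n) > 2.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for weight in (None, "balanced"):
    tree = DecisionTreeClassifier(max_depth=4, class_weight=weight,
                                  random_state=0).fit(X_tr, y_tr)
    pred = tree.predict(X_te)
    # Type I: not-faulty modules flagged as faulty; Type II: faulty modules missed.
    type1 = np.mean(pred[y_te == 0] == 1)
    type2 = np.mean(pred[y_te == 1] == 0)
    print(f"class_weight={weight}: Type I={type1:.2f}  Type II={type2:.2f}")
```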

Effect Size Estimation and Misclassification Rate Based Variable Selection in Linear Discriminant Analysis

by Bernd Klaus , 2013
"... Abstract: Supervised classifying of biological samples based on genetic information, (e.g., gene expression profiles) is an important problem in biostatistics. In order to find both accurate and interpretable classification rules variable selection is indispensable. This article explores how an ass ..."
Abstract - Add to MetaCart
is at the same time computationally efficient. I then show how to use effect sizes to perform variable selection based on the misclassification rate, which is the data independent expectation of the prediction error. Simulation studies and real data analyses illustrate that the proposed effect size estimation
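
Under strong simplifying assumptions (two classes, equal priors, independent standardized features), per-feature effect sizes d_j combine into a separation Δ = sqrt(Σ d_j²), and the theoretical misclassification rate of linear discriminant analysis is Φ(−Δ/2). The sketch below ranks features by estimated effect size and tracks that plug-in rate; it illustrates the general idea only and is not the estimator proposed in the article.

```python
# Illustrative sketch (not the article's estimator): rank features by effect
# size and track the theoretical LDA misclassification rate Phi(-Delta/2),
# assuming independent standardized features and equal class priors.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, p = 100, 50
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, p))
X[y == 1, :5] += 0.9          # only the first 5 features carry signal

# Per-feature effect sizes (standardized mean differences).
d = (X[y == 1].mean(0) - X[y == 0].mean(0)) / X.std(0, ddof=1)

order = np.argsort(-np.abs(d))          # strongest estimated effects first
for k in (1, 5, 10, 50):
    delta = np.sqrt(np.sum(d[order[:k]] ** 2))
    err = norm.cdf(-delta / 2)          # plug-in misclassification rate
    print(f"top {k:2d} features: estimated error = {err:.3f}")
```

Because sample effect sizes of pure-noise features only ever inflate Δ, the raw plug-in rate becomes optimistic as more features are added, which is one reason careful effect size estimation matters for this kind of selection.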

Bayesian Inference for Genomic Data Integration Reduces Misclassification Rate in Predicting Protein-Protein Interactions

by Chuanhua Xing, David B. Dunson
"... Protein-protein interactions (PPIs) are essential to most fundamental cellular processes. There has been increasing interest in reconstructing PPIs networks. However, several critical difficulties exist in obtaining reliable predictions. Noticeably, false positive rates can be as high as.80%. Error ..."
Abstract - Add to MetaCart
the misclassification rate (both false positives and negatives) through automatically up-weighting data sources that are most informative, while down-weighting less informative and biased sources. Extensive studies indicate that NBEL is significantly more robust than the classic naïve Bayes to unreliable, error

Fast kernel classifier construction using orthogonal forward selection to minimize leave-one-out misclassification rate

by S. Chen, X. X. Wang, X. Hong, C. J. Harris - in Proc. Int. Conf. Intell. Comput
"... Abstract—A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting based on Fisher ratio for class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and ..."
Abstract - Cited by 15 (10 self) - Add to MetaCart
Abstract—A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting based on Fisher ratio for class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and diagonal covariance matrix of individual kernel by incrementally maximizing Fisher ratio for class separability measure. An efficient weighted optimization method is developed based on boosting to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to the existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well. Index Terms—Boosting, classification, Fisher ratio of class separability, forward selection, kernel classifier, orthogonal least square, radial basis function network. I.
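
A heavily simplified sketch of the flavor of this construction: candidate Gaussian kernels centered at training points are appended greedily, keeping at each step the one whose least-squares classifier score maximizes a Fisher ratio of class separability. The paper's method additionally tunes each kernel's mean and diagonal covariance, uses boosting for the weighted optimization, orthogonalizes the regressors, and targets the leave-one-out misclassification rate; none of that is reproduced here, and the data, kernel width, and model size below are assumptions.

```python
# Simplified sketch (not the paper's algorithm): greedy forward selection of
# RBF kernels, choosing at each step the candidate that maximizes the Fisher
# ratio of the resulting classifier scores.
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])      # labels in {-1, +1}

width = 1.0
# Gaussian kernel matrix between all training points (candidate centers).
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * width ** 2))

def fisher_ratio(score, y):
    s1, s2 = score[y > 0], score[y < 0]
    return (s1.mean() - s2.mean()) ** 2 / (s1.var() + s2.var() + 1e-12)

selected = []
for _ in range(5):                        # grow a 5-kernel classifier
    best, best_fr = None, -np.inf
    for j in range(n):
        if j in selected:
            continue
        cols = K[:, selected + [j]]
        # Least-squares weights for the current kernel set against the labels.
        w, *_ = np.linalg.lstsq(cols, y, rcond=None)
        fr = fisher_ratio(cols @ w, y)
        if fr > best_fr:
            best, best_fr = j, fr
    selected.append(best)
    print(f"kernels={len(selected)}  Fisher ratio={best_fr:.2f}")

score = K[:, selected] @ np.linalg.lstsq(K[:, selected], y, rcond=None)[0]
print("training misclassification rate:", np.mean(np.sign(score) != y))
```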

World Health Organization Guidelines Is Associated With High Misclassification Rates and Drug Resistance Among HIV-Infected Cambodian Children

by Leeann Schreier, Joseph I. Harwell, Rami Kantor
"... Background. Antiretroviral therapy (ART) in resource-limited settings (RLSs) is monitored clinically and im-munologically, according to World Health Organization (WHO) or national guidelines. Revised WHO pediatric guidelines were published in 2010, but their ability to accurately identify virologica ..."
Abstract - Add to MetaCart
), 20 % had>400 copies/mL. For children with WHO stage 1/2 HIV, misclassification as failure (met CD4 failure criteria, but VL undetectable) was 64 % for WHO 2006

Misclassification Rates for Four Methods of Group Classification: Impact of Predictor Distribution, Covariance Inequality, Effect Size ...

by W. Holmes Finch, Mercedes K. Schneider - Educational and Psychological Measurement, 2006 (doi:10.1177/0013164405278579)
"... This study compares the classification accuracy of linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), logistic regression (LR), and classification and regression trees (CART) under a variety of data conditions. Past research has generally found comparable performance of LDA a ..."
Abstract - Add to MetaCart
This study compares the classification accuracy of linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), logistic regression (LR), and classification and regression trees (CART) under a variety of data conditions. Past research has generally found comparable performance of LDA and LR, with relatively less research on QDA and virtually none on CART. This study uses Monte Carlo simulations to assess the cross-validated predictive accuracy of these methods, while manipulating such factors as pre-dictor distribution, sample size, covariance matrix inequality, group separation, and group size ratio. The results indicate that QDA performs as well as or better than the other alternatives in virtually all conditions. Suggestions for practitioners are provided.
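
A single cell of this kind of Monte Carlo design can be sketched directly with scikit-learn: draw two groups from multivariate normals with unequal covariance matrices and compare the cross-validated accuracy of the four classifiers. The sample sizes, group separation, and degree of covariance inequality below are assumed values, not the study's conditions.

```python
# Hedged sketch of one Monte Carlo condition: unequal covariance matrices,
# cross-validated accuracy of LDA, QDA, logistic regression, and CART.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 300
# Group 0: identity covariance; group 1: inflated covariance (inequality).
X0 = rng.multivariate_normal([0, 0, 0], np.eye(3), n)
X1 = rng.multivariate_normal([1, 1, 1], 3 * np.eye(3), n)
X = np.vstack([X0, X1])
y = np.hstack([np.zeros(n), np.ones(n)])

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "LR": LogisticRegression(),
    "CART": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: cross-validated accuracy = {acc:.3f}")
```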