
CiteSeerX

Results 1 - 10 of 13,202

Imagenet classification with deep convolutional neural networks.

by Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton - In Advances in Neural Information Processing Systems, 2012
"... Abstract We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the pr ..."
Cited by 1010 (11 self)

Distance metric learning for large margin nearest neighbor classification

by Kilian Q. Weinberger, John Blitzer, Lawrence K. Saul - In NIPS, 2006
"... We show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. On seven ..."
Cited by 695 (14 self)
. On seven data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification—for example, achieving a test error rate of 1.3% on the MNIST handwritten digits. As in support vector machines (SVMs), the learning problem reduces to a
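The snippet above describes learning a Mahalanobis metric so that same-class neighbors are pulled together. The semidefinite program itself is beyond a short example, but the following minimal Python sketch shows where a learned matrix L enters the kNN distance computation; the function name `mahalanobis_knn` and the majority-vote rule are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mahalanobis_knn(L, X_train, y_train, x, k=3):
    """k-NN classification under the metric d(a, b) = ||L(a - b)||^2.
    The paper learns L (equivalently M = L^T L) by semidefinite
    programming so that each point's k nearest neighbors share its
    label; here L is taken as given and only the distance step is shown."""
    diffs = (X_train - x) @ L.T                  # rows are L(x_i - x)
    dists = np.einsum("ij,ij->i", diffs, diffs)  # squared metric distances
    nearest = np.argsort(dists)[:k]              # indices of k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]             # majority vote
```

With L set to the identity this reduces to ordinary Euclidean kNN; a learned L rescales and rotates the space so that margin violations become rarer.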

Improved Boosting Algorithms Using Confidence-rated Predictions

by Robert E. Schapire, Yoram Singer - Machine Learning, 1999
"... We describe several improvements to Freund and Schapire’s AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find impr ..."
Cited by 940 (26 self)
out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem, plus
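As a rough illustration of the confidence-rated predictions described above, the sketch below implements a simplified Schapire and Singer style booster over one-dimensional threshold stumps: each branch outputs a real-valued confidence c = 0.5*ln(W+/W-), the stump minimizing the normalizer Z is chosen each round, and weights are updated by exp(-y*h(x)). The function names and the restriction to a single feature are assumptions made for brevity, not the paper's implementation.

```python
import numpy as np

def train_confidence_stumps(X, y, n_rounds=10, eps=1e-6):
    """Confidence-rated boosting with threshold stumps (1-D X, y in {-1,+1})."""
    w = np.ones(len(y)) / len(y)
    stumps = []  # each entry: (threshold, c_left, c_right)
    for _ in range(n_rounds):
        best = None
        for t in np.unique(X):
            left = X < t
            def conf(mask):
                wp = w[mask & (y == 1)].sum()   # weight of positives in branch
                wm = w[mask & (y == -1)].sum()  # weight of negatives in branch
                return 0.5 * np.log((wp + eps) / (wm + eps))
            cl, cr = conf(left), conf(~left)
            h = np.where(left, cl, cr)
            Z = np.sum(w * np.exp(-y * h))      # normalizer to minimize
            if best is None or Z < best[0]:
                best = (Z, t, cl, cr)
        _, t, cl, cr = best
        stumps.append((t, cl, cr))
        h = np.where(X < t, cl, cr)
        w = w * np.exp(-y * h)                  # reweight, then renormalize
        w /= w.sum()
    return stumps

def predict(stumps, x):
    """Sign of the summed confidences."""
    score = sum(cl if x < t else cr for t, cl, cr in stumps)
    return 1 if score >= 0 else -1
```

The magnitude of the summed score acts as a confidence in the final prediction, which is what distinguishes this setting from plain binary-output AdaBoost.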

Instance-based learning algorithms

by David W. Aha, Dennis Kibler, Marc K. Albert - Machine Learning, 1991
"... Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to ..."
Cited by 1389 (18 self)
. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world

Specification Analysis of Affine Term Structure Models

by Qiang Dai, Kenneth J. Singleton - Journal of Finance, 2000
"... This paper explores the structural differences and relative goodness-of-fits of affine term structure models (ATSMs). Within the family of ATSMs there is a trade-off between flexibility in modeling the conditional correlations and volatilities of the risk factors. This trade-off is formalized by ou ..."
Cited by 596 (36 self)
by our classification of the N-factor affine family into N + 1 non-nested subfamilies of models. Specializing to three-factor ATSMs, our analysis suggests, based on theoretical considerations and empirical evidence, that some subfamilies of ATSMs are better suited than others to explaining historical

Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods

by John C. Platt - Advances in Large Margin Classifiers, 1999
"... The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. Howev ..."
Cited by 1051 (0 self)
. However, training with a maximum likelihood score will produce non-sparse kernel machines. Instead, we train an SVM, then train the parameters of an additional sigmoid function to map the SVM outputs into probabilities. This chapter compares classification error rate and likelihood scores for an SVM plus
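The calibration step described above, fitting a sigmoid that maps SVM outputs to probabilities, can be sketched as follows. Platt's chapter uses a Newton-style optimizer with regularized target probabilities; this minimal version uses plain gradient descent on the negative log-likelihood, and the function name `fit_platt_sigmoid` is an assumption.

```python
import numpy as np

def fit_platt_sigmoid(scores, labels, lr=0.05, n_iter=2000):
    """Fit P(y=1 | f) = 1 / (1 + exp(A*f + B)) to decision values f,
    by gradient descent on the negative log-likelihood.
    scores: raw SVM decision values; labels: 0/1 class labels."""
    f = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    A, B = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(A * f + B))  # current P(y=1 | f)
        # NLL gradients: dNLL/dA = sum((y - p) * f), dNLL/dB = sum(y - p)
        A -= lr * np.sum((y - p) * f)
        B -= lr * np.sum(y - p)
    return A, B
```

The key property is that the SVM itself stays sparse: only the two scalar sigmoid parameters are trained on top of its fixed outputs.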

An inventory for measuring depression

by A. T. Beck, C. H. Ward, J. Mock, M.D. - Archives of General Psychiatry, 1961
"... The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry ..."
Cited by 1195 (0 self)
and called for "the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, but on behavioral and other objectively measurable manifestations." Attempts by other investigators to subject clinical observations and judgments

N-gram-based text categorization

by William B. Cavnar, John M. Trenkle - In Proc. of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval, 1994
"... Text categorization is a fundamental task in document processing, allowing the automated handling of enormous streams of documents in electronic form. One difficulty in handling some classes of documents is the presence of different kinds of textual errors, such as spelling and grammatical errors in ..."
Cited by 445 (0 self)
is small, fast and robust. This system worked very well for language classification, achieving in one test a 99.8% correct classification rate on Usenet newsgroup articles written in different languages. The system also worked reasonably well for classifying articles from a number of different computer
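A minimal sketch of the n-gram method behind these results: build ranked character n-gram frequency profiles per category and compare a document's profile to each using the "out-of-place" rank distance. The parameter choices (up to trigrams, top 300 grams) and helper names are illustrative assumptions, not the paper's exact configuration.

```python
from collections import Counter

def ngram_profile(text, n_max=3, top_k=300):
    """Ranked character n-gram profile (1..n_max grams), in the spirit of
    Cavnar & Trenkle: keep the top_k most frequent grams, indexed by rank."""
    cleaned = " " + "".join(c.lower() if c.isalpha() else " " for c in text) + " "
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(cleaned) - n + 1):
            gram = cleaned[i:i + n]
            if not gram.isspace():
                counts[gram] += 1
    return {g: rank for rank, (g, _) in enumerate(counts.most_common(top_k))}

def out_of_place(doc_profile, cat_profile):
    """'Out-of-place' distance: sum of rank displacements, with a
    maximum penalty for grams absent from the category profile."""
    max_penalty = len(cat_profile) or 1
    return sum(abs(rank - cat_profile.get(g, max_penalty))
               for g, rank in doc_profile.items())

def classify(text, category_profiles):
    """Pick the category whose ranked profile is closest to the document's."""
    doc = ngram_profile(text)
    return min(category_profiles,
               key=lambda c: out_of_place(doc, category_profiles[c]))
```

Because the comparison uses ranks rather than raw counts, the method tolerates the spelling and OCR errors the abstract mentions: a few corrupted grams shift ranks only slightly.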

Shape matching and object recognition using low distortion correspondence

by Alexander C. Berg, Tamara L. Berg, Jitendra Malik - In CVPR, 2005
"... We approach recognition in the framework of deformable shape matching, relying on a new algorithm for finding correspondences between feature points. This algorithm sets up correspondence as an integer quadratic programming problem, where the cost function has terms based on similarity of correspond ..."
Cited by 419 (15 self)
datasets. One is the Caltech 101 dataset (Fei-Fei, Fergus and Perona), an extremely challenging dataset with large intraclass variation. Our approach yields a 48% correct classification rate, compared to Fei-Fei et al.'s 16%. We also show results for localizing frontal and profile faces that are comparable

Svm-knn: Discriminative nearest neighbor classification for visual category recognition

by Hao Zhang, Alexander C. Berg, Michael Maire, Jitendra Malik - In CVPR, 2006
"... We consider visual category recognition in the framework of measuring similarities, or equivalently perceptual distances, to prototype examples of categories. This approach is quite flexible, and permits recognition based on color, texture, and particularly shape, in a homogeneous framework. While n ..."
Cited by 342 (10 self)
variety of distance functions can be used and our experiments show state-of-the-art performance on a number of benchmark data sets for shape and texture classification (MNIST, USPS, CUReT) and object recognition (Caltech-101). On Caltech-101 we achieved a correct classification rate of 59

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University