
Results 1 - 10 of 8,798

Information-theoretic metric learning

by Jason Davis, Brian Kulis, Suvrit Sra, Inderjit Dhillon - in NIPS 2006 Workshop on Learning to Compare Examples, 2007
"... We formulate the metric learning problem as that of minimizing the differential relative entropy between two multivariate Gaussians under constraints on the Mahalanobis distance function. Via a surprising equivalence, we show that this problem can be solved as a low-rank kernel learning problem. Specifically, we minimize the Burg divergence of a low-rank kernel to an input kernel, subject to pairwise distance constraints. Our approach has several advantages over existing methods. First, we present a natural information-theoretic formulation for the problem. Second, the algorithm utilizes the methods ..."
Abstract - Cited by 359 (15 self)
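The Mahalanobis distance at the core of this line of work is easy to state in code. The following minimal NumPy sketch (an illustration of the parameterized distance such methods learn, not the authors' algorithm; the matrix A here is made up) shows the basic object being fit:

```python
import numpy as np

def mahalanobis_sq(x, y, A):
    """Squared Mahalanobis distance d_A(x, y) = (x - y)^T A (x - y).
    A is a positive semidefinite matrix; metric-learning methods of
    this kind fit A subject to pairwise distance constraints."""
    d = x - y
    return float(d @ A @ d)

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])

# With A = I the distance reduces to the squared Euclidean distance.
print(mahalanobis_sq(x, y, np.eye(2)))   # → 5.0

# A non-identity A (illustrative values) reweights and couples coordinates.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
print(mahalanobis_sq(x, y, A))           # → 7.0
```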

Information-theoretic metric learning: 2-D linear projections of neural data for visualization

by Austin J. Brockmeier, Luis G. Sanchez Giraldo, Matthew S. Emigh, Jihye Bae, John S. Choi, Joseph T. Francis, Jose C. Principe - in Engineering in Medicine and Biology Society (EMBC), 2013 35th Annual International Conference of the IEEE, 2013
"... Abstract—Intracortical neural recordings are typically high-dimensional due to many electrodes, channels, or units and high sampling rates, making it very difficult to visually inspect differences among responses to various conditions. By representing the neural response in a low-dimensional space ..."
Abstract - Cited by 3 (3 self)
"... pertaining to different behavior or stimuli. We find the projection as a solution to the information-theoretic optimization problem of maximizing the information between the projected data and the class labels. The method is applied to two datasets using different types of neural responses: motor cortex ..."
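As a rough illustration of what a label-aware 2-D projection of high-dimensional recordings looks like in code, here is a sketch that uses a Fisher-discriminant-style criterion (between-class over within-class scatter) as a simpler stand-in for the paper's information-theoretic objective:

```python
import numpy as np

def two_d_projection(X, y):
    """Find a 2-D linear projection of data X (n x d) that separates
    the class labels y.  Stand-in sketch: maximizes a Fisher-style
    ratio of between- to within-class scatter, not the paper's
    mutual-information objective."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Top-2 eigenvectors of pinv(Sw) @ Sb span the projection plane.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    W = vecs.real[:, order[:2]]               # d x 2 projection matrix
    return X @ W                              # n x 2 embedding

rng = np.random.RandomState(0)
X = rng.randn(30, 5)                          # toy "neural" features
y = np.array([0] * 10 + [1] * 10 + [2] * 10) # toy condition labels
Z = two_d_projection(X, y)                    # 2-D points for plotting
```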

Distance metric learning, with application to clustering with side-information

by Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, Stuart Russell - in Advances in Neural Information Processing Systems 15, 2002
"... Abstract Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many "plausible" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for ..."
Abstract - Cited by 818 (13 self)
"... to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in ℝⁿ, learns a distance metric over ℝⁿ that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows ..."
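The idea of learning a metric from similar/dissimilar pairs can be sketched with a toy diagonal-metric variant. This is a simplified projected-gradient illustration, not Xing et al.'s exact convex program:

```python
import numpy as np

def learn_diag_metric(X, similar, dissimilar, lam=1.0, lr=0.1, steps=200):
    """Toy sketch of metric learning from pairwise side-information:
    learn nonnegative diagonal weights w so that the squared distance
    d_w(x, y) = sum_k w_k (x_k - y_k)^2 is small on similar pairs and
    large on dissimilar ones."""
    w = np.ones(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i, j in similar:
            grad += (X[i] - X[j]) ** 2          # pull similar pairs together
        for i, j in dissimilar:
            grad -= lam * (X[i] - X[j]) ** 2    # push dissimilar pairs apart
        w = np.maximum(w - lr * grad, 0.0)      # project onto w >= 0
    return w

# Toy data: points 0 and 1 differ mostly in feature 1, but are labeled
# similar, so that feature gets down-weighted relative to feature 0.
X = np.array([[0.0, 0.0], [0.1, 5.0], [4.0, 0.2]])
w = learn_diag_metric(X, similar=[(0, 1)], dissimilar=[(0, 2)])
print(w)
```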

Distance metric learning for large margin nearest neighbor classification

by Kilian Q. Weinberger, John Blitzer, Lawrence K. Saul - in NIPS, 2006
"... We show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. On seven ..."
Abstract - Cited by 695 (14 self)
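The large-margin objective described here can be written down directly. The sketch below evaluates a hinge-style loss of this kind for a given linear map L; it is illustrative only, since the paper optimizes over M = LᵀL by semidefinite programming:

```python
import numpy as np

def lmnn_loss(L, X, y, target_neighbors, mu=0.5):
    """Sketch of a large-margin nearest-neighbor objective: under the
    map L (so d(x, z) = ||L(x - z)||^2), pull each point toward its
    designated same-class target neighbors, and hinge-penalize
    differently labeled points ("impostors") that come within a unit
    margin of a target-neighbor distance."""
    pull, push = 0.0, 0.0
    for i, j in target_neighbors:              # same-class target pairs
        d_ij = np.sum((L @ (X[i] - X[j])) ** 2)
        pull += d_ij
        for k in range(len(X)):                # scan for impostors
            if y[k] != y[i]:
                d_ik = np.sum((L @ (X[i] - X[k])) ** 2)
                push += max(0.0, 1.0 + d_ij - d_ik)
    return (1 - mu) * pull + mu * push

# Toy example: the impostor (class 1) is far away, so the push term is
# zero and only the pull term contributes.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 0.0]])
y = np.array([0, 0, 1])
print(lmnn_loss(np.eye(2), X, y, target_neighbors=[(0, 1)]))  # → 0.5
```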

A Metrics Suite for Object Oriented Design

by Shyam R. Chidamber, Chris F. Kemerer, 1994
"... Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software ..."
Abstract - Cited by 1108 (3 self)
"... development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber, the theoretical base chosen for the metrics was the ontology of Bunge. Six design metrics are developed, and then analytically evaluated against Weyuker's proposed set ..."
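One of the six metrics, Depth of Inheritance Tree (DIT), is simple enough to sketch. The Python version below counts inheritance edges down from `object`; this is an illustrative adaptation, since the metrics themselves are language-independent:

```python
def depth_of_inheritance(cls):
    """Depth of Inheritance Tree (DIT), one of the six Chidamber-Kemerer
    metrics: the length of the longest path from a class to the root of
    the inheritance hierarchy (here, Python's `object`)."""
    if cls is object:
        return 0
    return 1 + max(depth_of_inheritance(base) for base in cls.__bases__)

# A three-level toy hierarchy.
class A: pass
class B(A): pass
class C(B): pass

print(depth_of_inheritance(C))   # → 3
```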

An extensive empirical study of feature selection metrics for text classification

by George Forman - J. of Machine Learning Research, 2003
"... Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison ..."
Abstract - Cited by 496 (15 self)
"... choice for all goals except precision, for which Information Gain yielded the best result most often. This analysis also revealed, for example, that Information Gain and Chi-Squared have correlated failures, and so they work poorly together. When choosing optimal pairs of metrics for each of the four ..."
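Information Gain, the metric singled out here, can be sketched in a few lines for a binary feature and a class label (a minimal stdlib illustration, not the paper's evaluation code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(labels) in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Information gain of a feature for a class label:
    IG = H(labels) - sum_v P(feature = v) * H(labels | feature = v)."""
    n = len(labels)
    ig = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        ig -= (len(subset) / n) * entropy(subset)
    return ig

# A feature that perfectly predicts the label has IG = H(labels) = 1 bit.
print(information_gain([1, 1, 0, 0], ["pos", "pos", "neg", "neg"]))  # → 1.0
```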

Learning Bayesian networks: The combination of knowledge and statistical data

by David Heckerman, Dan Geiger, David M. Chickering - Machine Learning, 1995
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify ..."
Abstract - Cited by 1158 (35 self)

Selection of relevant features and examples in machine learning

by Avrim L. Blum, Pat Langley - Artificial Intelligence, 1997
"... In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made ..."
Abstract - Cited by 606 (2 self)

Optimal Brain Damage

by Yann Le Cun, John S. Denker, Sara A. Solla, 1990
"... We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed ..."
Abstract - Cited by 510 (5 self)
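The pruning rule derived in this paper (under a diagonal Hessian approximation, deleting weight w_i changes the loss by roughly s_i = h_ii w_i^2 / 2) can be sketched as follows; the `frac` parameter and the toy values are illustrative, not from the paper:

```python
import numpy as np

def obd_prune(weights, hessian_diag, frac=0.5):
    """Optimal-Brain-Damage-style pruning sketch: score each weight by
    its saliency s_i = h_ii * w_i^2 / 2 (estimated loss increase if the
    weight is deleted), then zero out the fraction `frac` of weights
    with the smallest saliencies."""
    saliency = 0.5 * hessian_diag * weights ** 2
    k = int(len(weights) * frac)
    cut = np.argsort(saliency)[:k]            # least-salient weight indices
    pruned = weights.copy()
    pruned[cut] = 0.0                         # delete them
    return pruned

w = np.array([0.1, -2.0, 0.5, 1.5])
h = np.ones(4)                                # toy Hessian diagonal
print(obd_prune(w, h))                        # zeros the two least-salient weights
```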

Example-based learning for view-based human face detection

by Kah-kay Sung, Tomaso Poggio - IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998
"... Abstract—We present an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based “face” and “nonface” model clusters. At each image location, a difference feature vector ..."
Abstract - Cited by 690 (24 self)
