Discrimination on the Grassmann Manifold: Fundamental Limits of Subspace Classifiers
"... IEEE Repurposing tools and intuitions from Shannon theory, we present fundamental limits on the reliable classification of linear and affine subspaces from noisy, linear features. Recognizing a syntactic equivalence between discrimination among subspaces and communication over vector wireless channe ..."
Abstract
Repurposing tools and intuitions from Shannon theory, we present fundamental limits on the reliable classification of linear and affine subspaces from noisy, linear features. Recognizing a syntactic equivalence between discrimination among subspaces and communication over vector wireless channels, we propose two Shannon-inspired measures to characterize asymptotic classifier performance. First, we define the classification capacity, which characterizes necessary and sufficient relationships between the signal dimension, the number of features, and the number of classes to be discerned as all three quantities approach infinity. Second, we define the diversity-discrimination tradeoff which, by analogy with the diversity-multiplexing tradeoff of fading vector channels, characterizes relationships between the number of discernible classes and the misclassification probability as the signal-to-noise ratio approaches infinity. We derive inner and outer bounds on these measures, which are tight in many regimes. We further study the impact of feature design on the error performance. Numerical results, including a face recognition application, validate these findings in practice.
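For concreteness, below is a minimal Python sketch of the setup the abstract describes: deciding which of several subspaces a signal belongs to from noisy, linear features, using a simple nearest-subspace rule. The dimensions, SNR, feature map, and the classifier itself are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem dimensions (assumed, not from the paper):
N, k, M, L = 64, 4, 16, 32   # ambient dim, subspace dim, #features, #classes
snr = 100.0                   # linear signal-to-noise ratio

# Draw L random k-dimensional subspaces of R^N (orthonormal bases).
subspaces = [np.linalg.qr(rng.standard_normal((N, k)))[0] for _ in range(L)]

# Random linear feature map, giving "noisy, linear features" y = Phi @ x + z.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

def classify(y, Phi, subspaces):
    """Nearest-subspace rule: choose the class whose projected basis
    Phi @ U_i best explains the feature vector y (smallest residual)."""
    best, best_res = None, np.inf
    for i, U in enumerate(subspaces):
        B = Phi @ U                                 # projected basis, M x k
        coeff, *_ = np.linalg.lstsq(B, y, rcond=None)
        res = np.linalg.norm(y - B @ coeff)
        if res < best_res:
            best, best_res = i, res
    return best

# Monte-Carlo estimate of the misclassification probability.
trials, errors = 500, 0
for _ in range(trials):
    true = rng.integers(L)
    x = subspaces[true] @ rng.standard_normal(k)    # signal in the true subspace
    x *= np.sqrt(snr) / np.linalg.norm(x)
    y = Phi @ x + rng.standard_normal(M)            # noisy linear features
    errors += classify(y, Phi, subspaces) != true

print(f"empirical error rate: {errors / trials:.3f}")
```

Sweeping the SNR or the number of classes L in this sketch gives an empirical feel for the asymptotic tradeoffs (classification capacity, diversity-discrimination) that the paper characterizes analytically.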