Information theory and statistics (1959)

by S Kullback
Results 1 - 10 of 1,809

On limits of wireless communications in a fading environment when using multiple antennas

by G. J. Foschini, M. J. Gans - Wireless Personal Communications , 1998
"... Abstract. This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (M ..."
Abstract - Cited by 2426 (14 self) - Add to MetaCart
Abstract. This paper is motivated by the need for fundamental understanding of ultimate limits of bandwidth efficient delivery of higher bit-rates in digital wireless communications and to also begin to look into how these limits might be approached. We examine exploitation of multi-element array (MEA) technology, that is processing the spatial dimension (not just the time dimension) to improve wireless capacities in certain applications. Specifically, we present some basic information theory results that promise great advantages of using MEAs in wireless LANs and building to building wireless communication links. We explore the important case when the channel characteristic is not available at the transmitter but the receiver knows (tracks) the characteristic which is subject to Rayleigh fading. Fixing the overall transmitted power, we express the capacity offered by MEA technology and we see how the capacity scales with increasing SNR for a large but practical number, n, of antenna elements at both transmitter and receiver. We investigate the case of independent Rayleigh faded paths between antenna elements and find that with high probability extraordinary capacity is available. Compared to the baseline n = 1 case, which by Shannon’s classical formula scales as one more bit/cycle for every 3 dB of signal-to-noise ratio (SNR) increase, remarkably with MEAs, the scaling is almost like n more bits/cycle for each 3 dB increase in SNR. To illustrate how great this capacity is, even for small n, take the cases n = 2, 4 and 16 at an average received SNR of 21 dB. For over 99%
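
The capacity scaling described above (roughly n more bits/cycle per 3 dB of SNR, versus one for a single antenna) can be checked numerically. A minimal sketch, assuming the standard log-det capacity expression C = log2 det(I + (rho/n) H H*) for an n x n i.i.d. Rayleigh channel known only at the receiver, with total power split evenly across transmit antennas; the function name and Monte Carlo setup are illustrative, not taken from the paper:

    import numpy as np

    def mimo_capacity_samples(n, snr_db, trials=2000, rng=np.random.default_rng(0)):
        """Monte Carlo samples of capacity (bits/cycle) for an n x n i.i.d. Rayleigh channel."""
        rho = 10 ** (snr_db / 10.0)                  # average received SNR (linear)
        caps = np.empty(trials)
        for t in range(trials):
            # i.i.d. complex Gaussian entries, unit average power per path
            h = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            gram = np.eye(n) + (rho / n) * h @ h.conj().T
            _, logdet = np.linalg.slogdet(gram)
            caps[t] = logdet / np.log(2)
        return caps

    for n in (1, 2, 4, 16):
        c = mimo_capacity_samples(n, snr_db=21)
        print(f"n={n:2d}: capacity exceeded with 99% probability ~ {np.percentile(c, 1):.1f} bits/cycle")

The 1st percentile is reported because the abstract quotes capacities available "with high probability" (over 99% of channel realizations) at 21 dB average SNR.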

Multiresolution grayscale and rotation invariant texture classification with local binary patterns

by Timo Ojala, Matti Pietikäinen, Topi Mäenpää - IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2002
"... This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain ..."
Abstract - Cited by 1299 (39 self) - Add to MetaCart
This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed "uniform," are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in true problems of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis.
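
A minimal sketch of the rotation-invariant "uniform" operator the abstract describes (the LBP with P = 8, R = 1, uniformity measured by the number of 0/1 transitions around the circle, and the occurrence histogram as the texture feature). The 8 immediate pixel neighbours are used here instead of interpolated circular samples, so this only approximates the operator defined in the paper:

    import numpy as np

    def lbp_riu2(img):
        # circularly ordered offsets of the 8 immediate neighbours
        offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        h, w = img.shape
        center = img[1:-1, 1:-1]
        bits = np.stack([(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center).astype(np.int64)
                         for dy, dx in offs])             # shape (8, h-2, w-2)
        u = np.sum(bits != np.roll(bits, 1, axis=0), axis=0)   # 0/1 transitions around the circle
        ones = bits.sum(axis=0)
        codes = np.where(u <= 2, ones, 9)                  # "uniform" codes 0..8, all others -> 9
        hist = np.bincount(codes.ravel(), minlength=10).astype(float)
        return hist / hist.sum()                           # occurrence histogram = texture feature

Because the code depends only on which neighbours exceed the centre and on the transition count, it is unchanged by any monotonic gray-scale transformation and by rotations of the neighbour set, which is the invariance claimed above.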

Sparse coding with an overcomplete basis set: a strategy employed by V1

by Bruno A. Olshausen, David J. Field - Vision Research , 1997
"... The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and ban@ass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive f ..."
Abstract - Cited by 958 (9 self) - Add to MetaCart
The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in
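
A minimal sketch of sparse coding with an overcomplete dictionary: a patch x is encoded by minimizing 0.5*||x - Phi a||^2 + lam*||a||_1, so that only the basis functions needed for that input become active. The L1 penalty and the ISTA solver below are a common modern stand-in, not the specific sparseness cost or gradient scheme used in the paper:

    import numpy as np

    def sparse_code(x, Phi, lam=0.1, iters=200):
        # Phi: (pixels, n_basis) dictionary with n_basis > pixels (overcomplete)
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1 / Lipschitz constant of the gradient
        a = np.zeros(Phi.shape[1])
        for _ in range(iters):
            grad = Phi.T @ (Phi @ a - x)
            a = a - step * grad
            a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)   # soft threshold
        return a

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 128))           # 2x overcomplete dictionary for 8x8 patches
    Phi /= np.linalg.norm(Phi, axis=0)
    x = 1.5 * Phi[:, 3] - 0.7 * Phi[:, 40]         # patch built from two basis functions
    a = sparse_code(x, Phi)
    print("active coefficients:", np.flatnonzero(np.abs(a) > 1e-3))

Because the coefficients of an overcomplete code are not determined by a linear pseudo-inverse, the mapping from input to active coefficients is non-linear, which is the point made in the abstract.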

Citation Context

...between the two distributions: KL = ∫ P*(I) log[P*(I)/P(I|φ)] dI. (9) This measures the average information gain, per image drawn from P*(I), for judging in favor of the image being drawn from P*(I) as opposed to P(I|φ) (Kullback, 1959). The greater the difference between the two distributions, the greater will be KL, with KL = 0 if and only if the two distributions are equal. Because P*(I) is fixed, minimizing KL amounts to maximi...
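
The quantity in this snippet is the Kullback-Leibler divergence developed in the cited book. A minimal numeric check of the properties stated above (KL is nonnegative, and zero exactly when the two distributions coincide), written for discrete distributions rather than the densities in the snippet:

    import numpy as np

    def kl(p, q):
        """Discrete KL divergence sum_i p_i log(p_i / q_i), with 0 log 0 taken as 0."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    p = [0.5, 0.3, 0.2]
    print(kl(p, p))                 # 0.0
    print(kl(p, [0.4, 0.4, 0.2]))   # > 0, and it grows as the mismatch grows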

The Earth Mover's Distance as a Metric for Image Retrieval

by Yossi Rubner, et al. , 2000
"... ..."
Abstract - Cited by 719 (5 self) - Add to MetaCart
Abstract not found

New results in linear filtering and prediction theory

by R. E. Kalman, R. S. Bucy - TRANS. ASME, SER. D, J. BASIC ENG , 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation " completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ..."
Abstract - Cited by 607 (0 self) - Add to MetaCart
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side-by-side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
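
For reference, the "variance equation" takes the following form in the notation now standard for the Kalman-Bucy filter, for a model dx = F x dt + G dw, dz = H x dt + dv with process and measurement noise covariances Q and R; the symbols are the conventional modern ones, not necessarily the paper's own:

    \dot{P}(t) = F(t)\,P(t) + P(t)\,F(t)^{\mathsf T} + G(t)\,Q(t)\,G(t)^{\mathsf T}
                 - P(t)\,H(t)^{\mathsf T} R(t)^{-1} H(t)\,P(t),
    \qquad K(t) = P(t)\,H(t)^{\mathsf T} R(t)^{-1},

where P is the error covariance whose solution specifies the optimal filter and K is the resulting filter gain.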

Shape Distributions

by Robert Osada, Thomas Funkhouser, Bernard Chazelle, David Dobkin - ACM Transactions on Graphics , 2002
"... this paper, we propose and analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a shape distribution sampled from a shape function measuring global geometric properties of an object. The pr ..."
Abstract - Cited by 295 (2 self) - Add to MetaCart
this paper, we propose and analyze a method for computing shape signatures for arbitrary (possibly degenerate) 3D polygonal models. The key idea is to represent the signature of an object as a shape distribution sampled from a shape function measuring global geometric properties of an object. The primary motivation for this approach is to reduce the shape matching problem to the comparison of probability distributions, which is simpler than traditional shape matching methods that require pose registration, feature correspondence, or model fitting
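
A minimal sketch of one such shape distribution, in the style of the paper's D2 shape function (the distance between two random points on the surface). Points are drawn here from a given point cloud rather than as area-weighted samples of the polygonal surface, so this is only an approximation of the paper's setup; the function names are illustrative:

    import numpy as np

    def shape_distribution(points, n_pairs=10000, bins=64, rng=np.random.default_rng(0)):
        # points: (N, 3) array of surface samples
        i = rng.integers(0, len(points), n_pairs)
        j = rng.integers(0, len(points), n_pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))  # scale-normalized
        return hist / hist.sum()

    def signature_distance(h1, h2):
        return float(np.abs(h1 - h2).sum())    # L1 distance between two shape distributions

Matching then reduces to comparing these histograms, which is the simplification over pose registration or feature correspondence that the abstract argues for.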

Citation Context

...comparing two functions f and g representing probability distributions [Puzicha et al. 1999]. Examples include the Minkowski LN norms, Kolmogorov-Smirnov distance, Kullback-Leibler divergence distances [Kullback 1968], Match distances [Shen and Wong 1983; Werman et al. 1985], Earth Mover’s distance [Rubner et al. 1998], and Bhattacharyya distance [Bhattacharyya 1943]. Other methods, perhaps based on 2D curve matc...
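
A minimal sketch of a few of the bin-by-bin measures listed above, computed for two normalized histograms f and g. The Match and Earth Mover's distances are omitted because they also require a ground distance between bins, and the -log form used for the Bhattacharyya distance is one common convention:

    import numpy as np

    def minkowski(f, g, p=1):
        return float(np.sum(np.abs(f - g) ** p) ** (1.0 / p))

    def kolmogorov_smirnov(f, g):
        return float(np.max(np.abs(np.cumsum(f) - np.cumsum(g))))

    def bhattacharyya(f, g):
        return float(-np.log(np.sum(np.sqrt(f * g))))

    f = np.array([0.1, 0.4, 0.5])
    g = np.array([0.2, 0.3, 0.5])
    print(minkowski(f, g, 1), kolmogorov_smirnov(f, g), bhattacharyya(f, g))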

A Maximum Entropy Approach to Adaptive Statistical Language Modeling

by Ronald Rosenfeld - Computer, Speech and Language , 1996
"... An adaptive statistical languagemodel is described, which successfullyintegrates long distancelinguistic information with other knowledge sources. Most existing statistical language models exploit only the immediate history of a text. To extract information from further back in the document's h ..."
Abstract - Cited by 293 (12 self) - Add to MetaCart
An adaptive statistical language model is described, which successfully integrates long distance linguistic information with other knowledge sources. Most existing statistical language models exploit only the immediate history of a text. To extract information from further back in the document's history, we propose and use trigger pairs as the basic information bearing elements. This allows the model to adapt its expectations to the topic of discourse. Next, statistical evidence from multiple sources must be combined. Traditionally, linear interpolation and its variants have been used, but these are shown here to be seriously deficient. Instead, we apply the principle of Maximum Entropy (ME). Each information source gives rise to a set of constraints, to be imposed on the combined estimate. The intersection of these constraints is the set of probability functions which are consistent with all the information sources. The function with the highest entropy within that set is the ME solution...
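
A minimal sketch of the ME combination scheme on a toy problem: each knowledge source contributes indicator features, the ME solution has the exponential form p ∝ exp(sum_i lambda_i f_i), and the lambdas are fit so that the model's feature expectations match the empirical ones. The events, features, counts, and the plain gradient-ascent fit below are all illustrative stand-ins for the trigger-pair system and GIS-style training described in the paper:

    import numpy as np

    def fit_maxent(F, counts, lr=0.5, iters=2000):
        # F: (n_events, n_features) feature matrix; counts: empirical event counts
        emp = counts / counts.sum()
        target = F.T @ emp                      # empirical feature expectations (the constraints)
        lam = np.zeros(F.shape[1])
        for _ in range(iters):
            logits = F @ lam
            p = np.exp(logits - logits.max())
            p /= p.sum()
            lam += lr * (target - F.T @ p)      # move model expectations toward the targets
        return lam, p

    # four hypothetical (trigger present?, next-word class) events with toy counts
    F = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 0.0], [0.0, 1.0]])
    counts = np.array([10.0, 30.0, 25.0, 5.0])
    lam, p = fit_maxent(F, counts)
    print(np.round(p, 3))

With only these two marginal constraints, the fitted distribution matches each source's marginal while being otherwise maximally noncommittal (here, the independent combination of the two sources), which is the sense in which ME avoids the deficiencies of linear interpolation.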

Bayesian surprise attracts human attention

by Laurent Itti, et al. , 2009
"... ..."
Abstract - Cited by 286 (8 self) - Add to MetaCart
Abstract not found

Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross entropy

by John E. Shore, Rodney W. Johnson - IEEE Trans. Information Theory , 1980
"... dple of min imum cromentropy (mhlmum dire&d dfvergenoe) are shown tobeunfquelycomxtmethodsforhductiveinf~whennewinformn-t ionlsghninthefomlofexpe&edvalues.ReviousjILstit icatioaslLve ..."
Abstract - Cited by 280 (3 self) - Add to MetaCart
The principle of maximum entropy and the principle of minimum cross-entropy (minimum directed divergence) are shown to be uniquely correct methods for inductive inference when new information is given in the form of expected values. Previous justifications ...
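
In modern notation (the symbols here are chosen for illustration, not taken from the paper), the principle of minimum cross-entropy selects, among all densities p that satisfy the new expected-value constraints, the one closest to the prior q in directed divergence, and the constrained minimizer has exponential form:

    \min_{p} \; D(p \,\|\, q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx
    \quad \text{s.t.} \quad \int p(x)\,g_k(x)\,dx = a_k, \qquad \int p(x)\,dx = 1,

    p(x) = q(x)\,\exp\!\Big(\lambda_0 + \sum_k \lambda_k\, g_k(x)\Big).

The principle of maximum entropy is the special case of a uniform prior q.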

Speaker recognition: A tutorial

by Joseph P. Campbell, Jr.
"... A tutorial on the design and development of automatic speaker-recognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person’s claimed id ..."
Abstract - Cited by 269 (2 self) - Add to MetaCart
A tutorial on the design and development of automatic speaker-recognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person’s claimed identity. Speech processing and the basic components of automatic speaker-recognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.
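
A minimal sketch of the two operating modes described above, built on a generic per-speaker score. Cosine similarity between fixed-length feature vectors is an illustrative stand-in, not the recognizer described in the tutorial, and the function names and threshold are hypothetical:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(test_vec, enrolled):
        # identification: pick the enrolled speaker whose model scores highest
        return max(enrolled, key=lambda s: cosine(test_vec, enrolled[s]))

    def verify(test_vec, claimed_id, enrolled, threshold=0.8):
        # verification: accept or reject a claimed identity against a threshold
        return cosine(test_vec, enrolled[claimed_id]) >= threshold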