Results 1 – 10 of 965
Rényi Divergence and Kullback-Leibler Divergence
"... Abstract—Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon’s entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a ..."
Cited by 1 (0 self)
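The relationship the abstract describes can be made concrete with a small numerical sketch. This is an illustration only: the function names are hypothetical, and the formulas are the standard definitions for finite distributions with strictly positive entries.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    # Renyi divergence of order alpha (alpha > 0, alpha != 1) between
    # finite distributions with strictly positive entries:
    #   D_alpha(p || q) = (1 / (alpha - 1)) * log( sum_i p_i^alpha * q_i^(1 - alpha) )
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

def kl_divergence(p, q):
    # Kullback-Leibler divergence, recovered from D_alpha as alpha -> 1.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.5]
q = [0.9, 0.1]
# D_alpha approaches the KL divergence as alpha approaches 1,
# and is non-decreasing in alpha.
```

Evaluating `renyi_divergence(p, q, alpha)` for alpha near 1 reproduces `kl_divergence(p, q)`, which is the axiomatic connection the entry refers to.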
The Kullback-Leibler Divergence Rate between Markov Sources
IEEE Trans. Information Theory, 2004
"... Abstract—In this work, we provide a computable expression for the Kullback–Leibler divergence rate lim_{n→∞} (1/n) D(p^(n) ‖ q^(n)) between two time-invariant finite-alphabet Markov sources of arbitrary order and arbitrary initial distributions described by the probability distributions p^(n) and q^(n), respectively. We illustrate it n ..."
Cited by 26 (0 self)
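For first-order chains, the computable expression reduces to a stationary-weighted sum of row-wise KL divergences. The sketch below assumes irreducible chains with strictly positive transition matrices; the function names are illustrative, not taken from the paper.

```python
import numpy as np

def stationary_distribution(P, iters=2000):
    # Power iteration on the row-stochastic transition matrix P.
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def kl_divergence_rate(P, Q):
    # For irreducible first-order Markov sources with transition matrices
    # P and Q (all entries positive), the KL divergence rate equals
    #   sum_i pi_P(i) * sum_j P(i, j) * log( P(i, j) / Q(i, j) ),
    # where pi_P is the stationary distribution of P.
    pi = stationary_distribution(P)
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])
rate = kl_divergence_rate(P, Q)  # positive, since P differs from Q
```

Here the stationary distribution of `P` is (2/3, 1/3), so the rate is a 2:1 weighted average of the two row divergences.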
Kullback-Leibler Divergence and the Central Limit Theorem
"... Abstract—This paper investigates the asymptotics of Kullback-Leibler divergence between two probability distributions satisfying a Central Limit Theorem property. The basic problem is as follows. Let X_i, i ∈ N, be a sequence of independent random variables such that the sum S_n = ∑_{i=1}^n X_i has the sa ..."
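As a concrete reference point for such asymptotics: in the Gaussian limit regime of the CLT, the KL divergence between two normal laws has a closed form, so one can see directly how it shrinks as the parameters approach each other. A minimal sketch of the standard univariate formula (not taken from this paper):

```python
import math

def kl_gaussians(mu1, sigma1, mu2, sigma2):
    # KL divergence D( N(mu1, sigma1^2) || N(mu2, sigma2^2) ):
    #   log(sigma2 / sigma1) + (sigma1^2 + (mu1 - mu2)^2) / (2 * sigma2^2) - 1/2
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2.0 * sigma2**2)
            - 0.5)

# The divergence vanishes only when the two normals coincide and grows
# smoothly as the parameters drift apart.
```

For example, `kl_gaussians(0, 1, 3, 1)` gives (3^2)/2 = 4.5, while a small variance perturbation like `kl_gaussians(0, 1.1, 0, 1)` yields a value close to zero.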
KULLBACK-LEIBLER DIVERGENCE IN MIXTURE MODEL
"... Multiresolution data arise when an object or a phenomenon is described at several levels of detail. Multiresolution data is prevalent in many application areas; examples include biology and computer vision. Faster growth of multiresolution data is expected in the future. Over the years, data accumulates in multiple resolutions because ..."
Maximal Kullback-Leibler Divergence Cluster Analysis
1987
"... In this paper we introduce a new procedure for performing a cluster analysis and prove a consistency result for the procedure. The method seems to perform well on data, and a number of examples are presented. We will formulate the "clustering problem" in the following way. Suppose we observe X ..."
"... of the space X. The partition which best describes the clustering structure of the data is defined to be the one which maximises a certain criterion function. This criterion function is a weighted sum of Kullback-Leibler divergences."
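A toy version of such a criterion, offered as an illustration in the spirit of the entry rather than the paper's exact definition: weight each cluster's KL divergence (cluster-conditional distribution of a discrete feature against the overall marginal) by the cluster's relative size, and prefer partitions that maximise the total. All names here are hypothetical.

```python
import numpy as np

def kl(p, q):
    # KL divergence between finite distributions; terms with p_i = 0
    # contribute nothing.
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def clustering_criterion(labels, features, n_values):
    # Hypothetical criterion: a weighted sum of KL divergences between
    # each cluster's feature distribution and the overall marginal.
    # Larger values mean the partition separates the data better.
    labels = np.asarray(labels)
    features = np.asarray(features)
    marginal = np.bincount(features, minlength=n_values) / len(features)
    total = 0.0
    for c in np.unique(labels):
        in_c = features[labels == c]
        p_c = np.bincount(in_c, minlength=n_values) / len(in_c)
        total += (len(in_c) / len(features)) * kl(p_c, marginal)
    return total

features = [0, 0, 0, 1, 1, 1]
pure = clustering_criterion([0, 0, 0, 1, 1, 1], features, 2)   # aligned with the values
mixed = clustering_criterion([0, 1, 0, 1, 0, 1], features, 2)  # ignores the values
```

A partition aligned with the data's structure scores strictly higher than one that cuts across it, which is the sense in which maximising the criterion recovers clusters.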
Kullback-Leibler Divergence Estimation of Continuous Distributions
Proceedings of IEEE International Symposium on Information Theory, 2008
"... Abstract—We present a method for estimating the KL divergence between continuous densities and we prove that it converges almost surely. Divergence estimation is typically solved by estimating the densities first. Our main result shows that this intermediate step is unnecessary and that the divergence can be eit ..."
Cited by 23 (0 self)
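The direct, density-free estimation the abstract describes can be illustrated with a one-dimensional k-nearest-neighbour sketch, a common construction in this literature. This is an illustration under stated assumptions (continuous samples, no ties), not necessarily the paper's exact estimator, and the function names are hypothetical.

```python
import numpy as np

def kth_nn_distance(point, sample, k):
    # Distance from `point` to its k-th nearest neighbour in `sample`.
    return np.sort(np.abs(np.asarray(sample, dtype=float) - point))[k - 1]

def knn_kl_estimate(x, y, k=1):
    # Estimate D(p || q) from samples x ~ p and y ~ q without estimating
    # densities: compare k-NN distances within x (rho) to k-NN distances
    # from x into y (nu). In one dimension:
    #   D_hat = (1/n) * sum_i log(nu_i / rho_i) + log(m / (n - 1))
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n, m = len(x), len(y)
    total = 0.0
    for i, xi in enumerate(x):
        rho = kth_nn_distance(xi, np.delete(x, i), k)  # exclude xi itself
        nu = kth_nn_distance(xi, y, k)
        total += np.log(nu / rho)
    return total / n + np.log(m / (n - 1))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)
y = rng.normal(3.0, 1.0, 500)
est = knn_kl_estimate(x, y)  # true KL for these Gaussians is 4.5
```

The estimate is noisy for finite samples but needs no density model, which is the point of the entry.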
Fault tolerant learning using Kullback-Leibler Divergence
In Proc. TENCON’2007, 2007
"... Abstract — In this paper, an objective function for training a fault-tolerant neural network is derived based on the idea of Kullback-Leibler (KL) divergence. The new objective function is then applied to a radial basis function (RBF) network with multiplicative weight noise. Simulation resu ..."
Cited by 3 (3 self)
Notes on Kullback-Leibler divergence and likelihood theory
System Neurobiology Laboratory, Salk Institute for Biological Studies, 2007
A Kullback-Leibler divergence for Bayesian model diagnostics
Open Journal of Statistics, 2011
"... This paper considers a Kullback-Leibler distance (KLD) which is asymptotically equivalent to the KLD by Goutis and Robert [1] when the reference model (in comparison to a competing fitted model) is correctly specified and certain regularity conditions hold (cf. Akaike [2]). We derive the ..."
Cited by 1 (1 self)
Optimism in Reinforcement Learning and Kullback-Leibler Divergence
"... Abstract. We consider model-based reinforcement learning in finite Markov Decision Processes (MDPs), focussing on so-called optimistic strategies. In MDPs, optimism can be implemented by carrying out extended value iterations under a constraint of consistency with the estimated model transition probabilities. The UCRL2 algorithm by Auer, Jaksch and Ortner (2009), which follows this strategy, has recently been shown to guarantee near-optimal regret bounds. In this paper, we strongly argue in favor of using the Kullback-Leibler (KL) divergence for this purpose. By studying the linear maximization ..."