Citations
10905 |
A mathematical theory of communication
- Shannon
- 1948
Citation Context ... $-\frac{1}{n}\sum_{i=1}^{n}\log_D p_i \ge \log_D n \ge -\sum_{i=1}^{n} p_i \log_D p_i$ or $-\sum_{i=1}^{n} p_i \log_D p_i + \frac{1}{n}\sum_{i=1}^{n}\log_D p_i \le 0$. Thus, we have $nH(P) + \varphi_1(p) \le 0$, which gives the relation between Shannon's [13] measure of entropy and Burg's [4] entropy. C. Kapur's [9] measure of directed divergence is given by $D_\alpha(P,Q) = \frac{1}{1-\alpha}\log_D\left(\frac{\left(\sum_{i=1}^{n} p_i q_i^{\alpha-1}\right)^{\alpha}}{\left(\sum_{i=1}^{n} q_i^{\alpha}\right)^{\alpha-1}\sum_{i=1}^{n} p_i^{\alpha}}\right) \ge 0$ (25) ... |
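As a quick numeric illustration of the relation quoted above, the following sketch (Python; the helper names and the sample distribution are illustrative, not from the paper) checks $nH(P) + \varphi_1(p) \le 0$ with both quantities taken to base $D = 2$:

```python
import math

def shannon_entropy(p, D=2):
    """H(P) = -sum_i p_i * log_D p_i."""
    return -sum(pi * math.log(pi, D) for pi in p if pi > 0)

def burg_entropy(p, D=2):
    """phi_1(p) = sum_i log_D p_i."""
    return sum(math.log(pi, D) for pi in p)

P = [0.5, 0.25, 0.125, 0.125]
n = len(P)
lhs = n * shannon_entropy(P) + burg_entropy(P)
print(lhs)       # -2.0 for this P; the relation n*H(P) + phi_1(p) <= 0 holds
assert lhs <= 0
```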
653 |
On measures of entropy and information
- Rényi
- 1961
Citation Context ... and showed that it lies between $R_\alpha(P)$ and $R_\alpha(P) + 1$, where $R_\alpha(P)$ is expressed as $R_\alpha(P) = \frac{1}{1-\alpha}\log_D\left[\sum_{i=1}^{n} p_i^{\alpha}\right]$, $\alpha > 0$, $\alpha \neq 1$ (5). The above is Rényi's [12] measure of entropy of order $\alpha$. As $\alpha \to 1$, it is easily shown that $L_\alpha \to L$ and $R_\alpha(P)$ approaches $H(P)$. Guiasu and Picard [6] defined the weighted average length for a uniquely decipherable code as ... |
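A minimal sketch of Rényi's entropy (5) (Python; the function names and the sample distribution are mine, not the paper's), showing numerically that $R_\alpha(P) \to H(P)$ as $\alpha \to 1$:

```python
import math

def renyi_entropy(p, alpha, D=2):
    """R_alpha(P) = (1/(1-alpha)) * log_D(sum_i p_i^alpha), alpha > 0, alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p), D) / (1 - alpha)

def shannon_entropy(p, D=2):
    return -sum(pi * math.log(pi, D) for pi in p if pi > 0)

P = [0.5, 0.25, 0.25]
for alpha in (0.5, 0.9, 0.99, 0.999):
    print(alpha, renyi_entropy(P, alpha))   # approaches 1.5 as alpha -> 1
print(shannon_entropy(P))                   # H(P) = 1.5 bits
```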
142 | Quantification Method of Classification Processes: Concept of Structural α-Entropy
- Havrda, Charvát
- 1967
Citation Context ...been proved that Shannon's [13] entropy, Rényi's [12] entropy of order $\alpha$, and Kapur's [8] entropy of order $\alpha$ and type $\beta$ all provide lower bounds for different mean codeword lengths, while Havrda and Charvát's [7], Arimoto's [1] and Behara and Chawla's [3] measures of entropy provide lower bounds for some monotonic increasing functions of mean codeword lengths but not for mean codeword lengths themselves. Bel... |
77 |
Measures of Information and their Applications
- Kapur
- 1994
Citation Context ... or $-\sum_{i=1}^{n} p_i \log_D p_i + \frac{1}{n}\sum_{i=1}^{n}\log_D p_i \le 0$. Thus, we have $nH(P) + \varphi_1(p) \le 0$, which gives the relation between Shannon's [13] measure of entropy and Burg's [4] entropy. C. Kapur's [9] measure of directed divergence is given by $D_\alpha(P,Q) = \frac{1}{1-\alpha}\log_D\left(\frac{\left(\sum_{i=1}^{n} p_i q_i^{\alpha-1}\right)^{\alpha}}{\left(\sum_{i=1}^{n} q_i^{\alpha}\right)^{\alpha-1}\sum_{i=1}^{n} p_i^{\alpha}}\right) \ge 0$ (25). Putting $q_i = D^{-l_i} \dots$ |
73 |
A device for quantizing, grouping, coding amplitude modulated pulses
- Kraft
- 1949
Citation Context ...t is called the code alphabet or set of code characters, and the sequence assigned to each $x_i$, $i = 1, 2, \dots, n$ is called a code word. Let $l_i$ be the length of the code word associated with $x_i$, satisfying Kraft's [10] inequality, given by the following mathematical expression: $D^{-l_1} + D^{-l_2} + \dots + D^{-l_n} \le 1$ (1), where $D$ is the size of the code alphabet. In calculating the long-run efficiency of communications, we c... |
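A small sketch of the check that equation (1) describes (Python; the example lengths are illustrative): given candidate codeword lengths $l_i$ and alphabet size $D$, verify Kraft's inequality.

```python
def satisfies_kraft(lengths, D=2):
    """Check Kraft's inequality (1): sum_i D^{-l_i} <= 1."""
    return sum(D ** -l for l in lengths) <= 1

# Lengths of the binary prefix code {0, 10, 110, 111}: the sum is exactly 1
print(satisfies_kraft([1, 2, 3, 3]))   # True
# Lengths [1, 1, 2] cannot belong to any uniquely decipherable binary code
print(satisfies_kraft([1, 1, 2]))      # False (sum = 1.25 > 1)
```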
56 |
A coding theorem and Rényi's entropy
- Campbell
- 1965
Citation Context ... codes, Shannon's [13] noiseless coding theorem, which states that $\frac{H(P)}{\log D} \le L < \frac{H(P)}{\log D} + 1$ (3), determines the lower and upper bounds on $L$ in terms of Shannon's [13] entropy $H(P)$. Campbell [5] for the first time introduced the idea of exponentiated mean codeword length for uniquely dec... |
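The bound (3) can be checked numerically. A sketch (Python; the distribution is illustrative), using the Shannon code lengths $l_i = \lceil -\log_D p_i \rceil$ and computing $H(P)$ directly to base $D$ so the bound reads $H(P) \le L < H(P) + 1$:

```python
import math

D = 2
P = [0.4, 0.3, 0.2, 0.1]

# Shannon code lengths satisfy Kraft's inequality by construction
lengths = [math.ceil(-math.log(pi, D)) for pi in P]
L = sum(pi * li for pi, li in zip(P, lengths))    # mean codeword length
H = -sum(pi * math.log(pi, D) for pi in P)        # entropy, base D
print(H, L, H + 1)        # 1.846... <= 2.4 < 2.846...
assert H <= L < H + 1     # Shannon's noiseless coding bound (3)
```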
48 |
The Relationship between Maximum Entropy Spectra and Maximum Likelihood Spectra
- Burg
- 1972
Citation Context ...HEORY APPROACH In this section, we generate the following inequalities by using well-known measures of directed divergence: A. Burg's [4] measure of entropy is given by $\varphi_1(p) = \sum_{i=1}^{n} \log_D p_i$ (20). Also, Burg's [4] measure of directed divergence is given by $D(P,Q) = \sum_{i=1}^{n}\left(\frac{p_i}{q_i} - \log_D\frac{p_i}{q_i} - 1\right) \ge 0$ (21). Putting $q_i = \dots$ |
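A minimal numeric check of the non-negativity claimed in (21) (Python; natural logarithms are used here so that each term $x - \ln x - 1 \ge 0$ holds exactly; names and data are illustrative):

```python
import math

def burg_divergence(p, q):
    """Burg-type directed divergence, eq. (21) form:
    sum_i (p_i/q_i - ln(p_i/q_i) - 1); each term x - ln x - 1 >= 0."""
    return sum(pi / qi - math.log(pi / qi) - 1 for pi, qi in zip(p, q))

P = [0.5, 0.3, 0.2]
Q = [1/3, 1/3, 1/3]
print(burg_divergence(P, Q))   # ~0.21 > 0 for P != Q
print(burg_divergence(P, P))   # 0.0 when P == Q
```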
19 |
On a measure of divergence between two statistical populations defined by their probability distributions
- Bhattacharyya
- 1943
Citation Context ...ntroduced in equation is a genuine mean codeword length, as it satisfies the essential properties of being a mean codeword length. B. For the development of the second mean, we consider Bhattacharyya's [2] measure of directed divergence, given by $D(P,Q) = \sum_{i=1}^{n}\left(\sqrt{p_i} - \sqrt{q_i}\right)^2$ or $D(P,Q) = 1 - \sum_{i=1}^{n}\sqrt{p_i q_i} \ge 0$ (10). Putting $q_i = \frac{D^{-l_i}}{\sum_{i=1}^{n} D^{-l_i}}$ in equation (10), we get ... |
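A short sketch of (10) (Python; illustrative names and data): the coefficient $\sum_i \sqrt{p_i q_i}$ is at most 1 by the Cauchy-Schwarz inequality, so the divergence is non-negative and vanishes when $P = Q$.

```python
import math

def bhattacharyya_divergence(p, q):
    """D(P,Q) = 1 - sum_i sqrt(p_i * q_i), eq. (10)."""
    return 1 - sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

P = [0.5, 0.25, 0.25]
Q = [0.25, 0.5, 0.25]
print(bhattacharyya_divergence(P, Q))   # ~0.043 > 0
print(bhattacharyya_divergence(P, P))   # 0.0
```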
17 |
New nonadditive measures of entropy for discrete probability distributions
- Sharma, Mittal
- 1975
Citation Context ...quation (17) is Kapur's [8] measure of entropy of order $\alpha$ and type $\beta$, but the R.H.S. is neither a mean codeword length nor a monotonic increasing function of mean codeword length. B. Sharma and Mittal's [14] measure of entropy as a possible lower bound: Sharma and Mittal's [14] measure of entropy is given by $\phi(P) = \frac{1}{2^{1-\beta}-1}\left[\left(\sum_{i=1}^{n} p_i^{\alpha}\right)^{\frac{1-\beta}{1-\alpha}} - 1\right]$, $\alpha \neq 1$, $\beta \neq 1$, $\alpha > \dots$ |
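A sketch of the Sharma-Mittal measure as reconstructed above (Python; the function name, the parameter values, and the check against Shannon entropy are illustrative):

```python
import math

def sharma_mittal_entropy(p, alpha, beta):
    """phi(P) = ((sum_i p_i^alpha)^((1-beta)/(1-alpha)) - 1) / (2^(1-beta) - 1),
    alpha != 1, beta != 1."""
    s = sum(pi ** alpha for pi in p)
    return (s ** ((1 - beta) / (1 - alpha)) - 1) / (2 ** (1 - beta) - 1)

P = [0.5, 0.25, 0.25]
print(sharma_mittal_entropy(P, alpha=2.0, beta=2.0))      # 1.25
# As alpha, beta -> 1 the measure approaches Shannon entropy (in bits):
print(sharma_mittal_entropy(P, alpha=1.001, beta=1.001))  # ~1.5
print(-sum(pi * math.log2(pi) for pi in P))               # 1.5
```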
9 |
New non-additive measures of relative information
- Sharma, Mittal
- 1977
Citation Context ...ade in section IV. II. DEVELOPMENT OF TWO NEW MEAN CODEWORD LENGTHS A. For the development of the first mean, we consider Sharma and Mittal's [15] measure of directed divergence, given by $D_r^s(P,Q) = \frac{1}{s-1}\left[\left(\sum_{i=1}^{n} p_i^{r} q_i^{1-r}\right)^{\frac{s-1}{r-1}} - 1\right] \ge 0$ (7). Putting $q_i = \frac{D^{-l_i}}{\sum_{i=1}^{n} D^{-l_i}}$ in equation (7), we get ... |
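A minimal numeric sketch of (7) (Python; names, data, and the parameter choices are illustrative), checking non-negativity and that the measure vanishes when $P = Q$:

```python
def sharma_mittal_divergence(p, q, r, s):
    """D_r^s(P,Q) = (1/(s-1)) * [ (sum_i p_i^r q_i^(1-r))^((s-1)/(r-1)) - 1 ], eq. (7)."""
    inner = sum(pi ** r * qi ** (1 - r) for pi, qi in zip(p, q))
    return (inner ** ((s - 1) / (r - 1)) - 1) / (s - 1)

P = [0.5, 0.3, 0.2]
Q = [1/3, 1/3, 1/3]
print(sharma_mittal_divergence(P, Q, r=2.0, s=1.5))   # ~0.135 > 0
print(sharma_mittal_divergence(P, P, r=2.0, s=1.5))   # ~0.0 when P == Q
```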
5 |
Quantitative-Qualitative Measures of Information
- Longo
- 1972
Citation Context ... $L_\alpha \to L$ and $R_\alpha(P)$ approaches $H(P)$. Guiasu and Picard [6] defined the weighted average length for a uniquely decipherable code as $L = \sum_{i=1}^{n}\frac{u_i n_i p_i}{\sum_{i=1}^{n} u_i p_i}$ (6). Longo [11] interpreted (6) as the average cost of transmitting letters $x_i$ with probability $p_i$ and utility $u_i$, provided some practical interpretation of this length, and also derived the lower and upper bounds ... |
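A sketch of the weighted average length (6) with Longo's cost reading (Python; the utilities and lengths below are illustrative):

```python
def weighted_mean_length(lengths, p, u):
    """Guiasu-Picard weighted average length, eq. (6):
    L = sum_i (u_i * n_i * p_i) / sum_i (u_i * p_i)."""
    num = sum(ui * ni * pi for ui, ni, pi in zip(u, lengths, p))
    den = sum(ui * pi for ui, pi in zip(u, p))
    return num / den

p = [0.5, 0.25, 0.25]   # probabilities of the letters x_i
u = [3.0, 1.0, 1.0]     # utilities: letter x_1 weighs more in the cost
n = [1, 2, 2]           # codeword lengths n_i
print(weighted_mean_length(n, p, u))   # 1.25; the unweighted mean is 1.5
```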
4 |
Information Theoretical Considerations on Estimation Problems
- Arimoto
- 1971
Citation Context ... Shannon’s [13] entropy, Renyi’s [12] entropy of order , Kapur’s [8] entropy of order and type all provide lower bounds for different mean codeword lengths, while Havrada and Charvat’s [7], Arimoto’s =-=[1]-=- and Behara and Chawla’s [3] measures of entropy provide lower bounds for some monotonic increasing functions of mean codeword lengths but not for mean codeword length’s themselves. Below, we discuss ... |
4 |
Four families of measures of entropy
- Kapur
- 1986
Citation Context ... R.H.S. is a genuine mean codeword length, as it satisfies the essential properties of being a mean codeword length. It has been proved that Shannon's [13] entropy, Rényi's [12] entropy of order $\alpha$, and Kapur's [8] entropy of order $\alpha$ and type $\beta$ all provide lower bounds for different mean codeword lengths, while Havrda and Charvát's [7], Arimoto's [1] and Behara and Chawla's [3] measures of entropy provide lower bo... |
2 |
Generalized γ-entropy
- Behara, Chawla
- 1974
Citation Context ...yi’s [12] entropy of order , Kapur’s [8] entropy of order and type all provide lower bounds for different mean codeword lengths, while Havrada and Charvat’s [7], Arimoto’s [1] and Behara and Chawla’s =-=[3]-=- measures of entropy provide lower bounds for some monotonic increasing functions of mean codeword lengths but not for mean codeword length’s themselves. Below, we discuss the correspondence between s... |
2 |
Borne inférieure de la longueur utile de certains codes
- Guiasu, Picard
- 1971
Citation Context ... $R_\alpha(P) = \frac{1}{1-\alpha}\log_D\left[\sum_{i=1}^{n} p_i^{\alpha}\right]$, $\alpha > 0$, $\alpha \neq 1$ (5). The above is Rényi's [12] measure of entropy of order $\alpha$. As $\alpha \to 1$, it is easily shown that $L_\alpha \to L$ and $R_\alpha(P)$ approaches $H(P)$. Guiasu and Picard [6] defined the weighted average length for a uniquely decipherable code as $L = \sum_{i=1}^{n}\frac{u_i n_i p_i}{\sum_{i=1}^{n} u_i p_i}$ (6). Longo [11] interpreted (6) as the average cost of transmitting... |
2 |
Generalization of Rényi's entropy of order α
- Varma
- 1966
Citation Context ... lies between $\frac{1}{\alpha-\beta}\log_D\left(\sum_{i=1}^{n} p_i^{\beta-\alpha+1}\right)$ and $\frac{1}{\alpha-\beta}\log_D\left(\sum_{i=1}^{n} p_i^{\beta-\alpha+1}\right) + 1$, $\alpha - 1 < \beta < \alpha$, $\alpha \ge 1$, where $\frac{1}{\alpha-\beta}\log_D\left(\sum_{i=1}^{n} p_i^{\beta-\alpha+1}\right)$ is Varma's [16] measure of entropy. Proof: Here we use Hölder's inequality $\sum_{i=1}^{n} x_i y_i \ge \left(\sum_{i=1}^{n} x_i^{p}\right)^{1/p}\left(\sum_{i=1}^{n} y_i^{q}\right)^{1/q}$, $\frac{1}{p} + \frac{1}{q} = 1$, $p < 1$ or $q < 1$ (13). Substituting $-\left(\frac{\beta-\alpha+1}{\alpha-\beta}\right) \dots$ |
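A sketch of Varma's measure as reconstructed above (Python; the function name, parameter values, and distribution are illustrative), evaluating the two bounds quoted in the snippet:

```python
import math

def varma_entropy(p, alpha, beta, D=2):
    """(1/(alpha-beta)) * log_D(sum_i p_i^(beta-alpha+1)),
    with alpha - 1 < beta < alpha and alpha >= 1."""
    s = sum(pi ** (beta - alpha + 1) for pi in p)
    return math.log(s, D) / (alpha - beta)

P = [0.5, 0.25, 0.125, 0.125]
alpha, beta = 1.5, 1.0        # satisfies alpha - 1 < beta < alpha, alpha >= 1
v = varma_entropy(P, alpha, beta)
print(v, v + 1)               # the interval [v, v + 1) quoted in the snippet
```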