Results 1–10 of 92,426
On the discontinuity of the Shannon information measures
 IEEE Trans. Inform. Theory
, 2009
"... Abstract — It is well known that the Shannon information measures are continuous functions of the probability distribution when the support is finite. This, however, does not hold when the support is countably infinite. In this paper, we investigate the continuity of the Shannon information measures ..."
Cited by 8 (3 self)
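The discontinuity this abstract refers to can be made concrete with a standard counterexample (a sketch of my own, not taken from the paper): a sequence of distributions P_n that converges to a point mass in total variation while the entropies H(P_n) diverge, which is only possible on a countably infinite support.

```python
import math

# P_n puts mass 1 - 1/n on a single point and spreads the remaining 1/n
# uniformly over 2**(n*n) other points. As n grows, P_n converges to the
# point mass (TV distance 1/n -> 0), yet H(P_n) -> infinity.
def entropy_bits(n):
    eps = 1.0 / n
    # H(P_n) = h_b(eps) + eps * log2(2**(n*n)), evaluated in closed form
    # so the huge support is never materialized.
    h_b = -(1 - eps) * math.log2(1 - eps) - eps * math.log2(eps)
    return h_b + eps * (n * n)

for n in (2, 10, 100, 1000):
    tv = 1.0 / n  # total variation distance to the limiting point mass
    print(f"n={n:5d}  TV={tv:.4f}  H(P_n)={entropy_bits(n):.2f} bits")
```

The entropy grows roughly like n even though the distributions become indistinguishable from a deterministic one, illustrating why continuity fails on countably infinite supports.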
Shannon information in complete genomes
"... Shannon information in the genomes of all completely sequenced prokaryotes and eukaryotes is measured in word lengths of two to ten letters. It is found that, in a scale-dependent way, the Shannon information in complete genomes is much greater than that in matching random sequences thousands of t ..."
Cited by 2 (0 self)
Shannon Information in Genomes
"... Shannon information (SI), are defined for a DNA sequence in terms of probabilities of chemical words in the sequence and are computed for a set of complete genomes highly diverse in length and composition. We find the following: SI is inversely proportional to sequence length for a random sequence b ..."
Cited by 3 (0 self)
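The quantity these two genome papers describe — Shannon information computed from the frequencies of k-letter words in a sequence — can be sketched as follows (a minimal illustration with my own function names; the papers' exact definitions and normalizations may differ):

```python
import math
import random
from collections import Counter

def word_entropy_bits(seq, k):
    """Shannon entropy (bits) of the empirical distribution of
    overlapping k-letter words in seq."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(0)
# A random sequence over the four-letter DNA alphabet...
random_dna = "".join(random.choice("ACGT") for _ in range(100_000))
# ...and a maximally biased "genome" for contrast.
repetitive = "A" * 100_000

for k in (2, 4, 6):
    print(k, word_entropy_bits(random_dna, k), word_entropy_bits(repetitive, k))
```

For the random sequence the k-word entropy is close to its maximum of 2k bits, while any statistical structure (as in a real genome) lowers it; the papers' claims concern precisely this gap between a genome and matching random sequences.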
Shannon Information and Kolmogorov Complexity
, 2010
"... The elementary theories of Shannon information and Kolmogorov complexity are compared: the extent to which they have a common purpose, and where they are fundamentally different. The focus is on: Shannon entropy versus Kolmogorov complexity, the relation of both to universal coding, Shannon mutual in ..."
Cited by 1 (1 self)
Shannon information increase and rescue in friction
, 2008
"... On the standard microscopic model of friction we confirm the common belief that the irreversible entropy production originates from the increase of Shannon information. We reveal that the reversible microscopic dynamics would continuously violate the Gibbsian interchangeability of molecules. The spo ..."
Shannon Information Theory and Molecular Biology
"... The role and the contribution of Shannon Information Theory to the development of Molecular Biology has been the object of stimulating debates during the last thirty years. This seems to be connected with some semantic charms associated with the use of the word “information” in the biological conte ..."
Bayesian Experimental Design and Shannon Information
 In 1997 Proceedings of the Section on Bayesian Statistical Science
, 1997
"... The information theoretic approach to optimal design of experiments yields a simple design criterion: the optimal design minimizes the expected posterior entropy of the parameters. Unfortunately, this strategy is often computationally infeasible for nonlinear problems and numerical approximations are ..."
Cited by 8 (0 self)
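The criterion stated in this abstract — choose the design that minimizes the expected posterior entropy of the parameters — can be sketched on a discrete toy problem (all model choices below are my own, purely for illustration):

```python
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

# Toy setup: a binary parameter theta with a uniform prior, a binary
# outcome y, and two candidate designs that differ in how sharply the
# outcome distinguishes the two parameter values.
prior = {0: 0.5, 1: 0.5}
likelihood = {  # p(y = 1 | theta, design)
    "weak":   {0: 0.45, 1: 0.55},  # outcome barely depends on theta
    "strong": {0: 0.10, 1: 0.90},  # outcome strongly depends on theta
}

def expected_posterior_entropy(design):
    """E_y[ H(theta | y, design) ], the quantity to be minimized."""
    total = 0.0
    for y in (0, 1):
        # Joint p(theta, y) and posterior p(theta | y) via Bayes' rule.
        joint = {th: prior[th] * (likelihood[design][th] if y == 1
                                  else 1 - likelihood[design][th])
                 for th in prior}
        p_y = sum(joint.values())
        posterior = [joint[th] / p_y for th in prior]
        total += p_y * entropy(posterior)
    return total

for d in likelihood:
    print(d, expected_posterior_entropy(d))
# The criterion picks the design leaving the least residual uncertainty.
best = min(likelihood, key=expected_posterior_entropy)
```

Here every sum is over a handful of discrete values; the computational infeasibility the abstract mentions arises because, for continuous nonlinear models, both the posterior and the outer expectation over outcomes must instead be approximated numerically.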
Six new non-Shannon information inequalities
 IN PROC. IEEE INT. SYMP. INF. THEORY
, 2006
"... All unconstrained information inequalities in three or fewer random variables are known to be “Shannon-type”, in that they are nonnegative linear combinations of instances of the inequality I(A; B|C) ≥ 0. In 1998, Zhang and Yeung gave the first example of an information inequality in four variables that is not “ ..."
Cited by 35 (1 self)
Is fine structure constant related to Shannon information entropy?
"... Here is an attempt to derive the fine structure constant based on Shannon information entropy. This derivation is inspired by a lecture by Prof. Anosov back in December 2008. He presented his ideas in a seminar held at Moscow State University, Moscow, but my note on his lecture is lost. So this is my si ..."
Secret sharing and non-Shannon information inequalities
 in TCC 2009, LNCS
, 2011
"... Abstract. The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is 2^{O(n)}, where n is the number of participants in the access structure. It is a long-standing open problem to improve these schemes or prove t ..."
Cited by 10 (1 self)
... that they cannot be improved. The best known lower bound is by Csirmaz (J. Cryptology 1997), who proved that there exist access structures with n participants such that the size of the share of at least one party is n/log n times the secret size. Csirmaz’s proof uses Shannon information inequalities, which were ...