In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions. The Rényi entropy is important in ecology and statistics as an index of diversity. It is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of \alpha can be calculated explicitly because it is an automorphic function with respect to a particular subgroup of the modular group. In theoretical computer science, the min-entropy is used in the context of randomness extractors.


Definition

The Rényi entropy of order \alpha, where \alpha \geq 0 and \alpha \neq 1, is defined as
:\Eta_\alpha(X) = \frac{1}{1-\alpha}\log\Bigg(\sum_{i=1}^n p_i^\alpha\Bigg) .
Here, X is a discrete random variable with possible outcomes in the set \mathcal{A} = \{x_1, x_2, \dots, x_n\} and corresponding probabilities p_i \doteq \Pr(X=x_i) for i=1,\dots,n. The logarithm is conventionally taken to be base 2, especially in the context of information theory where bits are used. If the probabilities are p_i=1/n for all i=1,\dots,n, then all the Rényi entropies of the distribution are equal: \Eta_\alpha(X)=\log n. In general, for all discrete random variables X, \Eta_\alpha(X) is a non-increasing function in \alpha.

Applications often exploit the following relation between the Rényi entropy and the ''p''-norm of the vector of probabilities:
:\Eta_\alpha(X)=\frac{\alpha}{1-\alpha} \log \left(\|P\|_\alpha\right) .
Here, the discrete probability distribution P=(p_1,\dots,p_n) is interpreted as a vector in \R^n with p_i\geq 0 and \sum_{i=1}^{n} p_i =1. The Rényi entropy for any \alpha \geq 0 is Schur concave.
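Below is a minimal numerical sketch of the definition above, in Python with NumPy; the function name and the handling of the limiting orders are illustrative choices, not part of the original text.

<syntaxhighlight lang="python">
import numpy as np

def renyi_entropy(p, alpha, base=2):
    """Rényi entropy H_alpha of a discrete distribution p (bits by default)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # outcomes with zero probability contribute nothing
    if np.isinf(alpha):                # min-entropy limit
        return -np.log(p.max()) / np.log(base)
    if alpha == 1:                     # Shannon entropy, the alpha -> 1 limit
        return -np.sum(p * np.log(p)) / np.log(base)
    return np.log(np.sum(p ** alpha)) / ((1 - alpha) * np.log(base))

# For a uniform distribution every order gives the same value, log2(4) = 2 bits here.
print(renyi_entropy([0.25] * 4, 0.5), renyi_entropy([0.25] * 4, 2.0))
</syntaxhighlight>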


Special cases

As \alpha approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for \alpha \to 0, the Rényi entropy is just the logarithm of the size of the support of X. The limit for \alpha \to 1 is the Shannon entropy. As \alpha approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.


Hartley or max-entropy

Provided the probabilities are nonzero, \Eta_0 is the logarithm of the cardinality of the alphabet (\mathcal{A}) of X, sometimes called the Hartley entropy of X,
:\Eta_0 (X) = \log n = \log |\mathcal{A}| \,.


Shannon entropy

The limiting value of \Eta_\alpha as \alpha \to 1 is the Shannon entropy:
:\Eta_1 (X) \equiv \lim_{\alpha \to 1} \Eta_{\alpha} (X) = - \sum_{i=1}^n p_i \log p_i .
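A quick numerical check of this limit (a standalone sketch; the distribution is just an example):

<syntaxhighlight lang="python">
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])
shannon = -np.sum(p * np.log2(p))                 # 1.75 bits

def H(p, a):                                      # Rényi entropy in bits, a != 1
    return np.log2(np.sum(p ** a)) / (1 - a)

for a in (1.1, 1.01, 1.001):
    print(a, H(p, a))                             # approaches 1.75 as a -> 1
</syntaxhighlight>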


Collision entropy

Collision entropy, sometimes just called "Rényi entropy", refers to the case \alpha = 2,
:\Eta_2 (X) = - \log \sum_{i=1}^n p_i^2 = - \log P(X = Y) ,
where X and Y are independent and identically distributed. The collision entropy is related to the index of coincidence.
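The identity \Eta_2(X) = -\log P(X=Y) can be checked by simulation; the sketch below (distribution and sample size chosen arbitrarily for illustration) compares the closed form with a Monte Carlo estimate of the collision probability:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])

h2 = -np.log2(np.sum(p ** 2))                     # collision entropy in bits

# Estimate P(X = Y) for independent X, Y drawn from p.
x = rng.choice(3, size=200_000, p=p)
y = rng.choice(3, size=200_000, p=p)
print(h2, -np.log2(np.mean(x == y)))              # the two values agree closely
</syntaxhighlight>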


Min-entropy

In the limit as \alpha \rightarrow \infty, the Rényi entropy \Eta_\alpha converges to the min-entropy \Eta_\infty:
:\Eta_\infty(X) \doteq \min_i (-\log p_i) = -(\max_i \log p_i) = -\log \max_i p_i\,.
Equivalently, the min-entropy \Eta_\infty(X) is the largest real number b such that all events occur with probability at most 2^{-b}.

The name ''min-entropy'' stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.

The min-entropy has important applications for randomness extractors in theoretical computer science: extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.
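The last point can be made concrete with a small sketch (the specific source below is invented for illustration): a distribution can carry many bits of Shannon entropy while its min-entropy stays at a single bit.

<syntaxhighlight lang="python">
import numpy as np

# A "weak" source over 2**20 outcomes: one outcome takes probability 1/2,
# the remaining 2**20 - 1 outcomes share the other half uniformly.
n = 2 ** 20
p = np.full(n, 0.5 / (n - 1))
p[0] = 0.5

min_entropy = -np.log2(p.max())                   # exactly 1 bit
shannon = -np.sum(p * np.log2(p))                 # about 11 bits

print(min_entropy, shannon)
</syntaxhighlight>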


Inequalities for different orders ''α''

The fact that \Eta_\alpha is non-increasing in \alpha for any given distribution of probabilities p_i can be proven by differentiation, as
:-\frac{d\Eta_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^n z_i \log(z_i / p_i),
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where z_i = p_i^\alpha / \sum_{j=1}^n p_j^\alpha. In particular cases inequalities can be proven also by Jensen's inequality:
:\log n=\Eta_0\geq \Eta_1 \geq \Eta_2 \geq \Eta_\infty .
For values of \alpha>1, inequalities in the other direction also hold. In particular, we have
: \Eta_2 \le 2 \Eta_\infty .
On the other hand, the Shannon entropy \Eta_1 can be arbitrarily high for a random variable X that has a given min-entropy. An example of this is given by the sequence of random variables X_n \sim \{0, 1, \dots, n\} for n \geq 1 such that P(X_n = 0) = 1/2 and P(X_n = x) = 1/(2n) for x = 1, \dots, n, since \Eta_\infty(X_n) = \log 2 but \Eta_1(X_n) = (\log 2 + \log 2n)/2.
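The ordering and the X_n example can be verified numerically; the small helper below is repeated so the sketch runs on its own (values in bits, so \Eta_\infty(X_n) = 1):

<syntaxhighlight lang="python">
import numpy as np

def H(p, a):                                      # Rényi entropy in bits
    p = np.asarray(p, float); p = p[p > 0]
    if np.isinf(a):
        return -np.log2(p.max())
    if a == 1:
        return -np.sum(p * np.log2(p))
    return np.log2(np.sum(p ** a)) / (1 - a)

# X_n: one atom of mass 1/2 plus n atoms of mass 1/(2n) each.
for n in (4, 64, 1024):
    p = np.r_[0.5, np.full(n, 0.5 / n)]
    print(n, [round(H(p, a), 3) for a in (0, 1, 2, np.inf)])
    # H_0 >= H_1 >= H_2 >= H_inf, and H_inf stays at 1 bit while H_1 grows.
</syntaxhighlight>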


Rényi divergence

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence. The Rényi divergence of order \alpha or alpha-divergence of a distribution P from a distribution Q is defined to be
:D_\alpha (P \| Q) = \frac{1}{\alpha-1}\log\Bigg(\sum_{i=1}^n \frac{p_i^\alpha}{q_i^{\alpha-1}}\Bigg) \,
when 0 < \alpha < \infty and \alpha \neq 1. We can define the Rényi divergence for the special values \alpha = 0, 1, \infty by taking a limit, and in particular the limit \alpha \to 1 gives the Kullback–Leibler divergence.

Some special cases:
:D_0(P \| Q) = - \log Q(\{i : p_i > 0\}) : minus the log probability under Q that p_i > 0;
:D_{1/2}(P \| Q) = -2 \log \sum_{i=1}^n \sqrt{p_i q_i} : minus twice the logarithm of the Bhattacharyya coefficient;
:D_1(P \| Q) = \sum_{i=1}^n p_i \log \frac{p_i}{q_i} : the Kullback–Leibler divergence;
:D_2(P \| Q) = \log \Big\langle \frac{p_i}{q_i} \Big\rangle : the log of the expected ratio of the probabilities;
:D_\infty(P \| Q) = \log \sup_i \frac{p_i}{q_i} : the log of the maximum ratio of the probabilities.

The Rényi divergence is indeed a divergence, meaning simply that D_\alpha (P \| Q) is greater than or equal to zero, and zero only when P = Q. It can be described, for the sake of brevity, as the information of order \alpha obtained if the distribution P is replaced by the distribution Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order \alpha, and it is continuous on the set of \alpha for which it is finite.
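A minimal sketch of this definition (the function name and test distributions are illustrative), including the Kullback–Leibler and maximum-ratio limits, and showing the monotonicity in \alpha:

<syntaxhighlight lang="python">
import numpy as np

def renyi_divergence(p, q, a):
    """Rényi divergence D_a(P || Q) in bits for finite distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isinf(a):
        return np.log2(np.max(p / q))
    if a == 1:                                    # Kullback-Leibler limit
        m = p > 0
        return np.sum(p[m] * np.log2(p[m] / q[m]))
    return np.log2(np.sum(p ** a * q ** (1 - a))) / (a - 1)

p = [0.4, 0.4, 0.2]
q = [0.2, 0.3, 0.5]
for a in (0.5, 0.999, 1, 2, np.inf):
    print(a, renyi_divergence(p, q, a))           # nondecreasing in a
</syntaxhighlight>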


Financial interpretation

A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows
:{\rm ExpectedRate} = \frac{1}{R}\, D_1(b\|m) + \frac{R-1}{R}\, D_{1/R}(b\|m) \,,
where m is the distribution defining the official odds (i.e. the "market") for the game, b is the investor-believed distribution and R is the investor's risk aversion (the Arrow–Pratt relative risk aversion). If the true distribution is p (not necessarily coinciding with the investor's belief b), the long-term realized rate converges to the true expectation, which has a similar mathematical structure
:{\rm RealizedRate} = \frac{1}{R}\,\Big( D_1(p\|m) - D_1(p\|b) \Big) + \frac{R-1}{R}\, D_{1/R}(b\|m) \,.
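Assuming the expected-rate formula above, the computation is a one-liner on top of a Rényi-divergence routine; the market odds, beliefs and risk aversion below are made-up illustration values:

<syntaxhighlight lang="python">
import numpy as np

def D(p, q, a):                                   # Rényi divergence in nats
    p, q = np.asarray(p, float), np.asarray(q, float)
    if a == 1:
        return np.sum(p * np.log(p / q))
    return np.log(np.sum(p ** a * q ** (1 - a))) / (a - 1)

m = np.array([0.5, 0.3, 0.2])                     # market-implied ("official") odds
b = np.array([0.4, 0.4, 0.2])                     # investor's believed probabilities
R = 2.0                                           # relative risk aversion

expected_rate = D(b, m, 1) / R + (R - 1) / R * D(b, m, 1 / R)
print(expected_rate)
</syntaxhighlight>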


Why ''α'' = 1 is special

The value \alpha = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is special because it is only at \alpha = 1 that the chain rule of conditional probability holds exactly:
:\Eta(A,X) = \Eta(A) + \mathbb{E}_{a \sim A} \big[ \Eta(X \mid A=a) \big]
for the absolute entropies, and
:D_\mathrm{KL}(p(x \mid a)p(a)\| m(x,a)) = D_\mathrm{KL}(p(a)\| m(a)) + \mathbb{E}_{p(a)}\big\{D_\mathrm{KL}(p(x \mid a)\| m(x \mid a))\big\}\,
for the relative entropies.

The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the distribution of p(x \mid a) remains m(x \mid a), unchanged.

The other Rényi divergences satisfy the criteria of being positive and continuous; being invariant under 1-to-1 co-ordinate transformations; and of combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then
:\Eta_\alpha(A,X) = \Eta_\alpha(A) + \Eta_\alpha(X)\;
and
:D_\alpha(P(A)P(X)\| Q(A)Q(X)) = D_\alpha(P(A)\| Q(A)) + D_\alpha(P(X)\| Q(X)).
The stronger properties of the \alpha = 1 quantities, which allow the definition of conditional information and mutual information from communication theory, may be very important in other applications, or entirely unimportant, depending on those applications' requirements.
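The additivity under independence is easy to verify numerically (a self-contained sketch with arbitrary example distributions):

<syntaxhighlight lang="python">
import numpy as np

def H(p, a):                                      # Rényi entropy in bits, a != 1
    p = np.asarray(p, float)
    return np.log2(np.sum(p ** a)) / (1 - a)

pA = np.array([0.7, 0.3])
pX = np.array([0.5, 0.25, 0.25])
joint = np.outer(pA, pX).ravel()                  # joint law of independent (A, X)

for a in (0.5, 2, 3):
    print(a, H(joint, a), H(pA, a) + H(pX, a))    # the two columns coincide
</syntaxhighlight>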


Exponential families

The Rényi entropies and divergences for an exponential family admit simple expressions:
: \Eta_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha} \left(F(\alpha\theta)-\alpha F(\theta)+\log E_p\left[e^{(\alpha-1)k(x)}\right]\right)
and
: D_\alpha(p:q) = \frac{J_{F,\alpha}(\theta:\theta')}{1-\alpha}
where
: J_{F,\alpha}(\theta:\theta')= \alpha F(\theta)+(1-\alpha) F(\theta')- F(\alpha\theta+(1-\alpha)\theta')
is a Jensen difference divergence.
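As a consistency check of the divergence expression, consider the Bernoulli family with log-normalizer F(\theta) = \log(1+e^\theta); the sketch below (parameter values are illustrative) compares the closed form with the divergence computed directly from the definition:

<syntaxhighlight lang="python">
import numpy as np

def F(theta):                                     # log-normalizer of the Bernoulli family
    return np.log1p(np.exp(theta))

def D_closed_form(theta, theta_p, a):             # J_{F,a}(theta : theta') / (1 - a)
    J = a * F(theta) + (1 - a) * F(theta_p) - F(a * theta + (1 - a) * theta_p)
    return J / (1 - a)

def D_direct(p, q, a):                            # Rényi divergence from the definition (nats)
    p, q = np.array([1 - p, p]), np.array([1 - q, q])
    return np.log(np.sum(p ** a * q ** (1 - a))) / (a - 1)

theta, theta_p = 0.8, -0.3                        # natural parameters; success prob = sigmoid
p, q = 1 / (1 + np.exp(-theta)), 1 / (1 + np.exp(-theta_p))
for a in (0.3, 0.7, 2.0):
    print(a, D_closed_form(theta, theta_p, a), D_direct(p, q, a))  # should match
</syntaxhighlight>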


Physical meaning

The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers. The limit of the quantum mechanical Rényi entropy as \alpha\to 1 is the von Neumann entropy.


See also

* Diversity indices
* Tsallis entropy
* Generalized entropy index

