In information theory, joint entropy is a measure of the uncertainty associated with a set of variables.


Definition

The joint Shannon entropy (in bits) of two discrete random variables X and Y with images \mathcal X and \mathcal Y is defined as
:\Eta(X,Y) = -\sum_{x\in\mathcal X} \sum_{y\in\mathcal Y} P(x,y) \log_2[P(x,y)]
where x and y are particular values of X and Y, respectively, P(x,y) is the joint probability of these values occurring together, and P(x,y) \log_2[P(x,y)] is defined to be 0 if P(x,y)=0.

For more than two random variables X_1, \ldots, X_n this expands to
:\Eta(X_1, \ldots, X_n) = -\sum_{x_1\in\mathcal X_1} \cdots \sum_{x_n\in\mathcal X_n} P(x_1, \ldots, x_n) \log_2[P(x_1, \ldots, x_n)]
where x_1,\ldots,x_n are particular values of X_1,\ldots,X_n, respectively, P(x_1, \ldots, x_n) is the joint probability of these values occurring together, and P(x_1, \ldots, x_n) \log_2[P(x_1, \ldots, x_n)] is defined to be 0 if P(x_1, \ldots, x_n)=0.


Properties


Nonnegativity

The joint entropy of a set of random variables is a nonnegative number.
:\Eta(X,Y) \geq 0
:\Eta(X_1,\ldots, X_n) \geq 0


Greater than individual entropies

The joint entropy of a set of variables is greater than or equal to the maximum of all of the individual entropies of the variables in the set.
:\Eta(X,Y) \geq \max\left[\Eta(X),\Eta(Y)\right]
:\Eta\bigl(X_1,\ldots, X_n\bigr) \geq \max_{1 \le i \le n} \Bigl\{\Eta\bigl(X_i\bigr)\Bigr\}


Less than or equal to the sum of individual entropies

The joint entropy of a set of variables is less than or equal to the sum of the individual entropies of the variables in the set. This is an example of subadditivity. This inequality is an equality if and only if X and Y are statistically independent.
:\Eta(X,Y) \leq \Eta(X) + \Eta(Y)
:\Eta(X_1,\ldots, X_n) \leq \Eta(X_1) + \ldots + \Eta(X_n)
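
The bounds in this and the preceding property can be checked numerically. The sketch below reuses the hypothetical joint table from the earlier example and derives the marginal distributions from it; it is an illustration, not a general-purpose library.

import math

def entropy(p):
    """Shannon entropy (in bits) of a probability table given as a dict."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

p_xy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

# Marginal distributions of X and Y obtained by summing the joint table.
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

h_xy, h_x, h_y = entropy(p_xy), entropy(p_x), entropy(p_y)
assert h_xy >= max(h_x, h_y)       # at least the largest individual entropy
assert h_xy <= h_x + h_y + 1e-12   # subadditivity (equality iff independent)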


Relations to other entropy measures

Joint entropy is used in the definition of conditional entropy:
:\Eta(X|Y) = \Eta(X,Y) - \Eta(Y)\,,
and
:\Eta(X_1,\dots,X_n) = \sum_{k=1}^n \Eta(X_k | X_{k-1},\dots, X_1)\,.
It is also used in the definition of mutual information:
:\operatorname{I}(X;Y) = \Eta(X) + \Eta(Y) - \Eta(X,Y)\,.
In quantum information theory, the joint entropy is generalized into the joint quantum entropy.
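
These identities can be evaluated directly from the entropies of the joint table and its marginals. The following sketch reuses the same hypothetical distribution as above; the numerical values are only illustrative.

import math

def entropy(p):
    """Shannon entropy (in bits) of a probability table given as a dict."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

# Same hypothetical joint table as in the earlier sketches, with its marginals.
p_xy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}
p_x = {0: 0.75, 1: 0.25}
p_y = {0: 0.625, 1: 0.375}

h_xy, h_x, h_y = entropy(p_xy), entropy(p_x), entropy(p_y)
h_x_given_y = h_xy - h_y         # conditional entropy H(X|Y)
mutual_info = h_x + h_y - h_xy   # mutual information I(X;Y)
print(h_x_given_y, mutual_info)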


Joint differential entropy


Definition

The above definition is for discrete random variables and is just as valid in the case of continuous random variables. The continuous version of discrete joint entropy is called ''joint differential (or continuous) entropy''. Let X and Y be continuous random variables with a joint probability density function f(x,y). The differential joint entropy h(X,Y) is defined as
:h(X,Y) = -\int_{\mathcal X, \mathcal Y} f(x,y)\log f(x,y)\,dx\,dy
For more than two continuous random variables X_1, \ldots, X_n the definition is generalized to:
:h(X_1, \ldots, X_n) = -\int f(x_1, \ldots, x_n)\log f(x_1, \ldots, x_n)\,dx_1 \ldots dx_n
The integral is taken over the support of f. It is possible that the integral does not exist, in which case we say that the differential entropy is not defined.
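
As a sketch under the assumption of a zero-mean bivariate normal density (whose joint differential entropy has the known closed form \tfrac{1}{2}\ln\bigl[(2\pi e)^2 \det\Sigma\bigr], in nats), the integral above can be approximated numerically. The covariance matrix is hypothetical, and the availability of numpy and scipy is assumed.

import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal

# Hypothetical covariance matrix for a zero-mean bivariate normal.
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])
rv = multivariate_normal(mean=[0.0, 0.0], cov=cov)

def neg_f_log_f(y, x):
    """Integrand -f(x,y) ln f(x,y); zero where the density underflows."""
    f = rv.pdf([x, y])
    return -f * np.log(f) if f > 0 else 0.0

# Truncate the integral to a box carrying nearly all the probability mass.
h_numeric, _ = integrate.dblquad(neg_f_log_f, -10, 10, -10, 10)

# Closed form for a bivariate normal: 1/2 * ln((2*pi*e)^2 * det(cov)), in nats.
h_closed = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))
print(h_numeric, h_closed)  # the two values should agree closely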


Properties

As in the discrete case, the joint differential entropy of a set of random variables is less than or equal to the sum of the entropies of the individual random variables:
:h(X_1,X_2, \ldots,X_n) \le \sum_{i=1}^n h(X_i)
The following chain rule holds for two random variables:
:h(X,Y) = h(X|Y) + h(Y)
In the case of more than two random variables this generalizes to:
:h(X_1,X_2, \ldots,X_n) = \sum_{i=1}^n h(X_i | X_1,X_2, \ldots,X_{i-1})
Joint differential entropy is also used in the definition of the mutual information between continuous random variables:
:\operatorname{I}(X;Y) = h(X) + h(Y) - h(X,Y)
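
For normal variables these relations can be checked with closed-form entropies, since a univariate normal with variance \sigma^2 has differential entropy \tfrac{1}{2}\ln(2\pi e \sigma^2). The sketch below uses the same hypothetical covariance matrix as the previous example and works in nats.

import numpy as np

# Hypothetical zero-mean bivariate normal with correlated components.
cov = np.array([[1.0, 0.5],
                [0.5, 2.0]])

# Closed-form differential entropies (in nats) for normal variables.
h_x = 0.5 * np.log(2 * np.pi * np.e * cov[0, 0])
h_y = 0.5 * np.log(2 * np.pi * np.e * cov[1, 1])
h_xy = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))

assert h_xy <= h_x + h_y      # subadditivity of differential entropy
print(h_x + h_y - h_xy)       # I(X;Y), positive because X and Y are correlated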

