In probability theory, particularly information theory, the conditional mutual information is, in its most basic form, the expected value of the mutual information of two random variables given the value of a third.


Definition

For random variables X, Y, and Z with support sets \mathcal{X}, \mathcal{Y} and \mathcal{Z}, we define the conditional mutual information as
:I(X;Y|Z) = \int_{\mathcal{Z}} D_{\mathrm{KL}}\big( P_{(X,Y)|Z=z} \,\|\, P_{X|Z=z} \otimes P_{Y|Z=z} \big) \, dP_Z(z).
This may be written in terms of the expectation operator: I(X;Y|Z) = \mathbb{E}_Z\big[ D_{\mathrm{KL}}\big( P_{(X,Y)|Z} \,\|\, P_{X|Z} \otimes P_{Y|Z} \big) \big]. Thus I(X;Y|Z) is the expected (with respect to Z) Kullback–Leibler divergence from the conditional joint distribution P_{(X,Y)|Z} to the product of the conditional marginals P_{X|Z} and P_{Y|Z}. Compare with the definition of mutual information.


In terms of PMFs for discrete distributions

For discrete random variables X, Y, and Z with support sets \mathcal{X}, \mathcal{Y} and \mathcal{Z}, the conditional mutual information I(X;Y|Z) is as follows:
:I(X;Y|Z) = \sum_{z\in \mathcal{Z}} p_Z(z) \sum_{y\in \mathcal{Y}} \sum_{x\in \mathcal{X}} p_{X,Y|Z}(x,y|z) \log \frac{p_{X,Y|Z}(x,y|z)}{p_{X|Z}(x|z)\, p_{Y|Z}(y|z)},
where the marginal, joint, and/or conditional probability mass functions are denoted by p with the appropriate subscript. This can be simplified as
:I(X;Y|Z) = \sum_{z\in \mathcal{Z}} \sum_{y\in \mathcal{Y}} \sum_{x\in \mathcal{X}} p_{X,Y,Z}(x,y,z) \log \frac{p_Z(z)\, p_{X,Y,Z}(x,y,z)}{p_{X,Z}(x,z)\, p_{Y,Z}(y,z)}.
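To make the two equivalent forms above concrete, the following short Python sketch (added here for illustration and not part of the original article; the random test distribution and all function names are assumptions) computes I(X;Y|Z) for a small discrete joint pmf both via the triple sum over p_{X,Y,Z} and as the Z-expectation of a Kullback–Leibler divergence, as in the Definition section; the two values agree.

import numpy as np

def conditional_mutual_information(p_xyz):
    """I(X;Y|Z) in bits from a joint pmf array indexed as p[x, y, z]."""
    p_z = p_xyz.sum(axis=(0, 1))                      # p_Z(z)
    p_xz = p_xyz.sum(axis=1)                          # p_{X,Z}(x,z)
    p_yz = p_xyz.sum(axis=0)                          # p_{Y,Z}(y,z)
    cmi = 0.0
    for x, y, z in np.ndindex(p_xyz.shape):
        p = p_xyz[x, y, z]
        if p > 0:
            cmi += p * np.log2(p * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return cmi

# A small, arbitrary joint pmf over binary X, Y, Z (an assumed example).
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()

# Same quantity as an expectation over Z of KL(p(x,y|z) || p(x|z) p(y|z)).
p_z = p.sum(axis=(0, 1))
expected_kl = 0.0
for z in range(p.shape[2]):
    pxy_z = p[:, :, z] / p_z[z]                       # p_{X,Y|Z}(x,y|z)
    px_z = pxy_z.sum(axis=1, keepdims=True)           # p_{X|Z}(x|z)
    py_z = pxy_z.sum(axis=0, keepdims=True)           # p_{Y|Z}(y|z)
    expected_kl += p_z[z] * np.sum(pxy_z * np.log2(pxy_z / (px_z * py_z)))

print(conditional_mutual_information(p), expected_kl)  # the two values match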


In terms of PDFs for continuous distributions

For (absolutely) continuous random variables X, Y, and Z with support sets \mathcal{X}, \mathcal{Y} and \mathcal{Z}, the conditional mutual information I(X;Y|Z) is as follows:
:I(X;Y|Z) = \int_{\mathcal{Z}} \bigg( \int_{\mathcal{Y}} \int_{\mathcal{X}} \log \left(\frac{p_{X,Y|Z}(x,y|z)}{p_{X|Z}(x|z)\, p_{Y|Z}(y|z)}\right) p_{X,Y|Z}(x,y|z) \, dx \, dy \bigg) p_Z(z) \, dz,
where the marginal, joint, and/or conditional probability density functions are denoted by p with the appropriate subscript. This can be simplified as
:I(X;Y|Z) = \int_{\mathcal{Z}} \int_{\mathcal{Y}} \int_{\mathcal{X}} \log \left(\frac{p_Z(z)\, p_{X,Y,Z}(x,y,z)}{p_{X,Z}(x,z)\, p_{Y,Z}(y,z)}\right) p_{X,Y,Z}(x,y,z) \, dx \, dy \, dz.
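In practice the integral is often estimated rather than evaluated directly. As a hedged illustration (added here, not part of the original article; the covariance matrix and all names are assumptions), the Monte-Carlo sketch below estimates I(X;Y|Z) for a jointly Gaussian triple by averaging the log-density ratio above, and compares it with the known Gaussian closed form -\tfrac{1}{2}\log\!\left(1-\rho_{XY\cdot Z}^2\right), where \rho_{XY\cdot Z} is the partial correlation of X and Y given Z.

import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)

# Covariance of (X, Y, Z); the particular correlations are arbitrary assumptions.
Sigma = np.array([[1.0, 0.6, 0.5],
                  [0.6, 1.0, 0.4],
                  [0.5, 0.4, 1.0]])
samples = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
x, y, z = samples[:, 0], samples[:, 1], samples[:, 2]

# Conditional law of (X, Y) given Z = z is Gaussian with mean A*z and covariance C.
A = Sigma[:2, 2:] / Sigma[2, 2]                      # shape (2, 1)
C = Sigma[:2, :2] - A @ Sigma[2:, :2]                # 2x2 conditional covariance
xy_centered = samples[:, :2] - (A @ samples[:, 2:].T).T
log_p_xy_z = multivariate_normal(mean=np.zeros(2), cov=C).logpdf(xy_centered)

def log_p_cond(values, idx):
    """log p(value | Z = z) for the 1-D conditional of coordinate idx given Z."""
    a = Sigma[idx, 2] / Sigma[2, 2]
    var = Sigma[idx, idx] - a * Sigma[idx, 2]
    return norm(loc=0.0, scale=np.sqrt(var)).logpdf(values - a * z)

# Monte-Carlo estimate of E[ log p(x,y|z) - log p(x|z) - log p(y|z) ], in nats.
cmi_mc = np.mean(log_p_xy_z - log_p_cond(x, 0) - log_p_cond(y, 1))

# Closed form via the partial correlation of X and Y given Z.
r_xy, r_xz, r_yz = Sigma[0, 1], Sigma[0, 2], Sigma[1, 2]
r_partial = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))
cmi_exact = -0.5 * np.log(1 - r_partial**2)

print(cmi_mc, cmi_exact)   # the two values should agree to about two decimals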


Some identities

Alternatively, we may write in terms of joint and conditional entropies as
:\begin{align}
I(X;Y|Z) &= H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z) \\
&= H(X|Z) - H(X|Y,Z) \\
&= H(X|Z) + H(Y|Z) - H(X,Y|Z).
\end{align}
This can be rewritten to show its relationship to mutual information
:I(X;Y|Z) = I(X;Y,Z) - I(X;Z),
usually rearranged as the chain rule for mutual information
:I(X;Y,Z) = I(X;Z) + I(X;Y|Z),
or
:I(X;Y|Z) = I(X;Y) - (I(X;Z) - I(X;Z|Y)).
Another equivalent form of the above is
:\begin{align}
I(X;Y|Z) &= H(Z|X) + H(X) + H(Z|Y) + H(Y) - H(Z|X,Y) - H(X,Y) - H(Z) \\
&= I(X;Y) + H(Z|X) + H(Z|Y) - H(Z|X,Y) - H(Z).
\end{align}
Another equivalent form of the conditional mutual information is
:I(X;Y|Z) = I(X,Z;Y,Z) - H(Z).
Like mutual information, conditional mutual information can be expressed as a Kullback–Leibler divergence:
:I(X;Y|Z) = D_{\mathrm{KL}}\big[ p(X,Y,Z) \,\|\, p(X|Z)\,p(Y|Z)\,p(Z) \big],
or as an expected value of simpler Kullback–Leibler divergences:
:I(X;Y|Z) = \sum_{z \in \mathcal{Z}} p(Z=z) \, D_{\mathrm{KL}}\big[ p(X,Y|z) \,\|\, p(X|z)\,p(Y|z) \big],
:I(X;Y|Z) = \sum_{y \in \mathcal{Y}} p(Y=y) \, D_{\mathrm{KL}}\big[ p(X,Z|y) \,\|\, p(X|Z)\,p(Z|y) \big].
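As a quick numerical sanity check (an illustration added here, not part of the original article; the random joint pmf and helper names are assumptions), the Python sketch below verifies the first identity by comparing the triple-sum definition of I(X;Y|Z) with H(X,Z)+H(Y,Z)-H(X,Y,Z)-H(Z).

import numpy as np

rng = np.random.default_rng(1)
p = rng.random((3, 4, 2))          # joint pmf p[x, y, z] over small alphabets
p /= p.sum()

def H(q):
    """Shannon entropy in bits of a pmf given as an array of probabilities."""
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

p_xz = p.sum(axis=1)
p_yz = p.sum(axis=0)
p_z = p.sum(axis=(0, 1))

# I(X;Y|Z) from the simplified triple-sum definition.
cmi_direct = sum(
    p[x, y, z] * np.log2(p[x, y, z] * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    for x, y, z in np.ndindex(p.shape) if p[x, y, z] > 0)

# The same quantity from the entropy identity H(X,Z)+H(Y,Z)-H(X,Y,Z)-H(Z).
cmi_entropy = H(p_xz) + H(p_yz) - H(p) - H(p_z)

print(np.isclose(cmi_direct, cmi_entropy))   # True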


More general definition

A more general definition of conditional mutual information, applicable to random variables with continuous or other arbitrary distributions, depends on the concept of regular conditional probability. Let (\Omega, \mathcal{F}, \mathfrak{P}) be a probability space, and let the random variables X, Y, and Z each be defined as a Borel-measurable function from \Omega to some state space endowed with a topological structure. Consider the Borel measure (on the σ-algebra generated by the open sets) in the state space of each random variable defined by assigning each Borel set the \mathfrak{P}-measure of its preimage in \mathcal{F}. This is called the pushforward measure X_* \mathfrak{P} = \mathfrak{P}\big(X^{-1}(\cdot)\big). The support of a random variable is defined to be the topological support of this measure, i.e. \mathrm{supp}\,X = \mathrm{supp}\,X_* \mathfrak{P}.

Now we can formally define the conditional probability measure given the value of one (or, via the product topology, more) of the random variables. Let M be a measurable subset of \Omega (i.e. M \in \mathcal{F}), and let x \in \mathrm{supp}\,X. Then, using the disintegration theorem:
:\mathfrak{P}(M \mid X=x) = \lim_{U \ni x} \frac{\mathfrak{P}(M \cap X^{-1}(U))}{\mathfrak{P}(X^{-1}(U))} \qquad \text{and} \qquad \mathfrak{P}(M \mid X) = \int_M d\mathfrak{P}\big(\omega \mid X=X(\omega)\big),
where the limit is taken over the open neighborhoods U of x, as they are allowed to become arbitrarily smaller with respect to set inclusion. Finally we can define the conditional mutual information via Lebesgue integration:
:I(X;Y|Z) = \int_\Omega \log \Bigl( \frac{d\mathfrak{P}(\omega \mid X,Z)\; d\mathfrak{P}(\omega \mid Y,Z)}{d\mathfrak{P}(\omega \mid Z)\; d\mathfrak{P}(\omega \mid X,Y,Z)} \Bigr) d\mathfrak{P}(\omega),
where the integrand is the logarithm of a Radon–Nikodym derivative involving some of the conditional probability measures we have just defined.


Note on notation

In an expression such as I(A;B|C), A, B, and C need not necessarily be restricted to representing individual random variables, but could also represent the joint distribution of any collection of random variables defined on the same probability space. As is common in probability theory, we may use the comma to denote such a joint distribution, e.g. I(A_0,A_1;B_1,B_2,B_3|C_0,C_1). Hence the use of the semicolon (or occasionally a colon or even a wedge \wedge) to separate the principal arguments of the mutual information symbol. (No such distinction is necessary in the symbol for joint entropy, since the joint entropy of any number of random variables is the same as the entropy of their joint distribution.)


Properties


Nonnegativity

It is always true that
:I(X;Y|Z) \ge 0
for discrete, jointly distributed random variables X, Y and Z. This result has been used as a basic building block for proving other inequalities in information theory, in particular, those known as Shannon-type inequalities. Conditional mutual information is also non-negative for continuous random variables under certain regularity conditions.


Interaction information

Conditioning on a third random variable may either increase or decrease the mutual information: that is, the difference I(X;Y) - I(X;Y|Z), called the interaction information, may be positive, negative, or zero. This is the case even when the random variables are pairwise independent. For example, let
:X \sim \mathrm{Bernoulli}(0.5), \quad Z \sim \mathrm{Bernoulli}(0.5), \quad Y=\left\{\begin{array}{ll} X & \text{if }Z=0\\ 1-X & \text{if }Z=1 \end{array}\right.
in which case X, Y and Z are pairwise independent and in particular I(X;Y)=0, but I(X;Y|Z)=1.
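The example can be confirmed numerically; the short Python sketch below (an illustration added here, not part of the original article) builds the joint pmf for independent fair X and Z with Y = X \oplus Z and checks that I(X;Y)=0 while I(X;Y|Z)=1 bit.

import numpy as np

# Joint pmf p[x, y, z]: X, Z uniform and independent, Y determined as X XOR Z.
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for z in (0, 1):
        y = x ^ z                    # Y = X if Z = 0, Y = 1 - X if Z = 1
        p[x, y, z] = 0.25

p_xy = p.sum(axis=2); p_x = p.sum(axis=(1, 2)); p_y = p.sum(axis=(0, 2))
p_xz = p.sum(axis=1); p_yz = p.sum(axis=0); p_z = p.sum(axis=(0, 1))

# I(X;Y) from its definition.
I_xy = sum(p_xy[i, j] * np.log2(p_xy[i, j] / (p_x[i] * p_y[j]))
           for i in range(2) for j in range(2) if p_xy[i, j] > 0)

# I(X;Y|Z) from the simplified triple-sum formula.
cmi = sum(p[x, y, z] * np.log2(p[x, y, z] * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
          for x, y, z in np.ndindex(p.shape) if p[x, y, z] > 0)

print(I_xy, cmi)                     # prints 0.0 and 1.0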


Chain rule for mutual information

The chain rule (as derived above) provides two ways to decompose I(X;Y,Z):
:\begin{align} I(X;Y,Z) &= I(X;Z) + I(X;Y|Z) \\ &= I(X;Y) + I(X;Z|Y) \end{align}
The data processing inequality is closely related to conditional mutual information and can be proven using the chain rule.
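As a hedged illustration of that connection (added here, not part of the original article; the input distribution and channel matrices are assumptions), the sketch below builds a Markov chain X \to Y \to Z from two stochastic matrices, checks that I(X;Z|Y)=0, and confirms the data processing inequality I(X;Z) \le I(X;Y).

import numpy as np

p_x = np.array([0.3, 0.7])                    # assumed input distribution p(x)
ch_xy = np.array([[0.9, 0.1],                 # assumed channel p(y|x)
                  [0.2, 0.8]])
ch_yz = np.array([[0.7, 0.3],                 # assumed channel p(z|y)
                  [0.1, 0.9]])

# Joint pmf p[x, y, z] = p(x) p(y|x) p(z|y): Z depends on X only through Y.
p = p_x[:, None, None] * ch_xy[:, :, None] * ch_yz[None, :, :]

def H(q):
    """Shannon entropy in bits."""
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

p_xy = p.sum(axis=2)
p_xz = p.sum(axis=1)
p_yz = p.sum(axis=0)
p_y = p.sum(axis=(0, 2))
p_z = p.sum(axis=(0, 1))

I_xy = H(p_x) + H(p_y) - H(p_xy)                    # I(X;Y)
I_xz = H(p_x) + H(p_z) - H(p_xz)                    # I(X;Z)
I_xz_given_y = H(p_xy) + H(p_yz) - H(p) - H(p_y)    # I(X;Z|Y)

print(np.isclose(I_xz_given_y, 0.0))                # True: Markov property
print(I_xz <= I_xy + 1e-12)                         # True: data processing inequality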


Interaction information

Main article: Interaction information

The conditional mutual information is used to inductively define the interaction information, a generalization of mutual information, as follows:
:I(X_1;\ldots;X_{n+1}) = I(X_1;\ldots;X_n) - I(X_1;\ldots;X_n|X_{n+1}),
where
:I(X_1;\ldots;X_n|X_{n+1}) = \mathbb{E}_{X_{n+1}}\big[ D_{\mathrm{KL}}\big( P_{(X_1,\ldots,X_n)|X_{n+1}} \,\|\, P_{X_1|X_{n+1}} \otimes\cdots\otimes P_{X_n|X_{n+1}} \big) \big].
Because the conditional mutual information can be greater than or less than its unconditional counterpart, the interaction information can be positive, negative, or zero, which makes it hard to interpret.
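As a brief worked instance (added here for illustration), the pairwise-independent example from the Properties section gives, under this convention, I(X;Y;Z) = I(X;Y) - I(X;Y|Z) = 0 - 1 = -1 bit: conditioning on Z increases the mutual information between X and Y.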

