Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.
When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes, independence without further qualification usually refers to mutual independence.
Definition
For events
Two events
Two events A and B are independent (often written as A ⊥ B or A ⫫ B, where the latter symbol is often also used for conditional independence) if and only if their joint probability equals the product of their probabilities:

: P(A ∩ B) = P(A)P(B)
P(A ∩ B) ≠ 0 indicates that two independent events A and B have common elements in their sample space, so that they are not mutually exclusive (mutually exclusive iff P(A ∩ B) = 0). Why this defines independence is made clear by rewriting with conditional probabilities, P(A | B) = P(A ∩ B) / P(B) being the probability at which the event A occurs provided that the event B has or is assumed to have occurred:

: P(A | B) = P(A ∩ B) / P(B) = P(A)

and similarly

: P(B | A) = P(A ∩ B) / P(A) = P(B)

Thus, the occurrence of B does not affect the probability of A, and vice versa. In other words, A and B are independent of each other. Although the derived expressions may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A.
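The product rule and the conditional-probability view can be checked on a small concrete case. The following Python sketch is illustrative only (the die, the events A and B, and the helper `pr` are not from the article); it verifies P(A ∩ B) = P(A)P(B) and P(A | B) = P(A) exactly using rational arithmetic:

```python
from fractions import Fraction

# Sample space: one roll of a fair die; every outcome has probability 1/6.
omega = {1, 2, 3, 4, 5, 6}

def pr(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return Fraction(len(event), len(omega))

A = {2, 4, 6}      # "roll is even",      P(A) = 1/2
B = {1, 2, 3, 4}   # "roll is at most 4", P(B) = 2/3

# A and B are independent: P(A ∩ B) = P(A)P(B) = 1/3.
print(pr(A & B))          # 1/3
print(pr(A) * pr(B))      # 1/3

# The conditional probability P(A | B) equals P(A), as derived above.
print(pr(A & B) / pr(B))  # 1/2
```

Note that the events overlap (A ∩ B = {2, 4}), so they are independent but not mutually exclusive.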
Log probability and information content
Stated in terms of log probability, two events are independent if and only if the log probability of the joint event is the sum of the log probabilities of the individual events:

: log P(A ∩ B) = log P(A) + log P(B)

In information theory, negative log probability is interpreted as information content, and thus two events are independent if and only if the information content of the combined event equals the sum of the information content of the individual events:

: I(A ∩ B) = I(A) + I(B)

See the article on information content for details.
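The additivity of log probability and of information content can be illustrated numerically. In this sketch the probabilities P(A) = 0.5 and P(B) = 0.25 are arbitrary illustrative values, chosen so that the information contents come out as whole bits:

```python
import math

# Two independent events with illustrative probabilities.
p_a, p_b = 0.5, 0.25
p_ab = p_a * p_b  # joint probability under independence

# Independence in log form: log P(A ∩ B) = log P(A) + log P(B).
assert math.isclose(math.log(p_ab), math.log(p_a) + math.log(p_b))

# Information content I(E) = -log2 P(E); contents add for independent events.
i_a = -math.log2(p_a)    # 1 bit
i_b = -math.log2(p_b)    # 2 bits
i_ab = -math.log2(p_ab)  # 3 bits
print(i_a, i_b, i_ab)    # 1.0 2.0 3.0
```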
Odds
Stated in terms of odds, two events are independent if and only if the odds ratio of A and B is unity (1). Analogously with probability, this is equivalent to the conditional odds being equal to the unconditional odds:

: O(A | B) = O(A) and O(B | A) = O(B)

or to the odds of one event, given the other event, being the same as the odds of the event, given the other event not occurring:

: O(A | B) = O(A | ¬B) and O(B | A) = O(B | ¬A)

The odds ratio can be defined as

: OR(A, B) = O(A | B) / O(A | ¬B)

or symmetrically for the odds of B given A, and thus is 1 if and only if the events are independent.
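Reusing the illustrative die example from above (events A = "even roll" and B = "roll at most 4", both hypothetical), the conditional odds and the odds ratio can be computed exactly; for independent events both conditional odds agree and the odds ratio is 1:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def pr(event):
    return Fraction(len(event), len(omega))

def odds(event, given):
    """Conditional odds O(event | given) = P(event | given) / P(¬event | given)."""
    inside = pr(event & given)
    outside = pr(given) - inside
    return inside / outside

A = {2, 4, 6}      # even roll
B = {1, 2, 3, 4}   # roll at most 4
not_B = omega - B

# For independent events the odds ratio O(A|B) / O(A|¬B) is unity.
print(odds(A, B))                   # 1
print(odds(A, not_B))               # 1
print(odds(A, B) / odds(A, not_B))  # 1
```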
More than two events
A finite set of events {A_1, ..., A_n} is pairwise independent if every pair of events is independent; that is, if and only if for all distinct pairs of indices m, k:

: P(A_m ∩ A_k) = P(A_m)P(A_k)

A finite set of events is mutually independent if every event is independent of any intersection of the other events; that is, if and only if for every k ≤ n and for every k indices 1 ≤ i_1 < ... < i_k ≤ n:

: P(A_{i_1} ∩ ... ∩ A_{i_k}) = P(A_{i_1}) ⋯ P(A_{i_k})

This is called the ''multiplication rule'' for independent events. Note that it is not a single condition involving only the product of all the probabilities of all single events; it must hold true for all subsets of events.

For more than two events, a mutually independent set of events is (by definition) pairwise independent; but the converse is not necessarily true.
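The gap between pairwise and mutual independence can be seen in the classic two-coin counterexample (a standard illustration, sketched here in Python): the three events below satisfy the product rule in pairs, but not as a triple.

```python
from fractions import Fraction
from itertools import combinations

# Two fair coin tosses; the four outcomes are equally likely.
omega = {"HH", "HT", "TH", "TT"}

def pr(event):
    return Fraction(len(event), len(omega))

A = {"HH", "HT"}  # first coin is heads,  P(A) = 1/2
B = {"HH", "TH"}  # second coin is heads, P(B) = 1/2
C = {"HH", "TT"}  # both coins agree,     P(C) = 1/2

# Every pair satisfies the product rule (each intersection has probability 1/4) ...
for E, F in combinations([A, B, C], 2):
    assert pr(E & F) == pr(E) * pr(F)

# ... but the triple does not, so the events are not mutually independent.
print(pr(A & B & C))          # 1/4
print(pr(A) * pr(B) * pr(C))  # 1/8
```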
For real-valued random variables
Two random variables
Two random variables X and Y are independent if and only if (iff) the elements of the π-system generated by them are independent; that is to say, for every x and y, the events {X ≤ x} and {Y ≤ y} are independent events (as defined above for two events). That is, X and Y with cumulative distribution functions F_X(x) and F_Y(y) are independent iff the combined random variable (X, Y) has a joint cumulative distribution function

: F_{X,Y}(x, y) = F_X(x) F_Y(y) for all x, y

or equivalently, if the probability densities f_X(x) and f_Y(y) and the joint probability density f_{X,Y}(x, y) exist,

: f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x, y.
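The CDF factorization can be checked empirically by simulation. The sketch below draws two independent uniform variables on [0, 1] and compares the empirical joint CDF at one point with the product of the marginal CDFs; the distributions, the sample size, and the evaluation point (x, y) = (0.3, 0.7) are all arbitrary illustrative choices:

```python
import random

# Monte Carlo check of F_{X,Y}(x, y) ≈ F_X(x) · F_Y(y) for independent X, Y.
random.seed(0)
n = 100_000
samples = [(random.random(), random.random()) for _ in range(n)]

x, y = 0.3, 0.7
f_x = sum(sx <= x for sx, _ in samples) / n    # empirical marginal CDF of X at x
f_y = sum(sy <= y for _, sy in samples) / n    # empirical marginal CDF of Y at y
f_xy = sum(sx <= x and sy <= y for sx, sy in samples) / n  # empirical joint CDF

print(f_xy)       # ≈ 0.21 up to sampling noise
print(f_x * f_y)  # ≈ 0.21 = 0.3 · 0.7
```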
More than two random variables
A finite set of n random variables {X_1, ..., X_n} is pairwise independent if and only if every pair of random variables is independent. Even if the set of random variables is pairwise independent, it is not necessarily mutually independent as defined next.

A finite set of n random variables {X_1, ..., X_n} is mutually independent if and only if for any sequence of numbers {x_1, ..., x_n}, the events {X_1 ≤ x_1}, ..., {X_n ≤ x_n} are mutually independent events (as defined above for events). This is equivalent to the following condition on the joint cumulative distribution function: a finite set of n random variables {X_1, ..., X_n} is mutually independent if and only if

: F_{X_1,...,X_n}(x_1, ..., x_n) = F_{X_1}(x_1) ⋯ F_{X_n}(x_n) for all x_1, ..., x_n.

Notice that it is not necessary here to require that the probability distribution factorizes for all possible k-element subsets as in the case for n events. This is not required because, e.g., F_{X_1,X_2,X_3}(x_1, x_2, x_3) = F_{X_1}(x_1) F_{X_2}(x_2) F_{X_3}(x_3) implies F_{X_1,X_3}(x_1, x_3) = F_{X_1}(x_1) F_{X_3}(x_3) (by letting x_2 → ∞).

The measure-theoretically inclined may prefer to substitute events {X ∈ A} for events {X ≤ x} in the above definition, where A is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space (which includes topological spaces endowed by appropriate σ-algebras).
For real-valued random vectors
Two random vectors X = (X_1, ..., X_m)ᵀ and Y = (Y_1, ..., Y_n)ᵀ are called independent if

: F_{X,Y}(x, y) = F_X(x) F_Y(y) for all x, y

where F_X(x) and F_Y(y) denote the cumulative distribution functions of X and Y and F_{X,Y}(x, y) denotes their joint cumulative distribution function. Independence of X and Y is often denoted by X ⫫ Y.

Written component-wise, X and Y are called independent if

: F_{X_1,...,X_m,Y_1,...,Y_n}(x_1, ..., x_m, y_1, ..., y_n) = F_{X_1,...,X_m}(x_1, ..., x_m) F_{Y_1,...,Y_n}(y_1, ..., y_n) for all x_1, ..., x_m, y_1, ..., y_n.
For stochastic processes
For one stochastic process
The definition of independence may be extended from random vectors to a stochastic process. Therefore, it is required for an independent stochastic process that the random variables obtained by sampling the process at any n times t_1, ..., t_n are independent random variables for any n.

Formally, a stochastic process {X_t}_{t∈T} is called independent, if and only if for all n ∈ ℕ and for all t_1, ..., t_n ∈ T

: F_{X_{t_1},...,X_{t_n}}(x_1, ..., x_n) = F_{X_{t_1}}(x_1) ⋯ F_{X_{t_n}}(x_n) for all x_1, ..., x_n

where F_{X_{t_1},...,X_{t_n}}(x_1, ..., x_n) = P(X_{t_1} ≤ x_1, ..., X_{t_n} ≤ x_n). Independence of a stochastic process is a property ''within'' a stochastic process, not between two stochastic processes.
For two stochastic processes
Independence of two stochastic processes is a property between two stochastic processes {X_t}_{t∈T} and {Y_t}_{t∈T} that are defined on the same probability space (Ω, ℱ, P). Formally, two stochastic processes {X_t}_{t∈T} and {Y_t}_{t∈T} are said to be independent if for all n ∈ ℕ and for all t_1, ..., t_n ∈ T, the random vectors (X_{t_1}, ..., X_{t_n}) and (Y_{t_1}, ..., Y_{t_n}) are independent, i.e. if

: F_{X_{t_1},...,X_{t_n},Y_{t_1},...,Y_{t_n}}(x_1, ..., x_n, y_1, ..., y_n) = F_{X_{t_1},...,X_{t_n}}(x_1, ..., x_n) F_{Y_{t_1},...,Y_{t_n}}(y_1, ..., y_n) for all x_1, ..., x_n, y_1, ..., y_n.
Independent σ-algebras
The definitions above (for events and for random variables) are both generalized by the following definition of independence for σ-algebras. Let (Ω, Σ, P) be a probability space and let 𝒜 and ℬ be two sub-σ-algebras of Σ. 𝒜 and ℬ are said to be independent if, whenever A ∈ 𝒜 and B ∈ ℬ,

: P(A ∩ B) = P(A)P(B)

Likewise, a finite family of σ-algebras (τ_i)_{i∈I}, where I is an index set, is said to be independent if and only if

: ∀(A_i)_{i∈I} ∈ ∏_{i∈I} τ_i : P(⋂_{i∈I} A_i) = ∏_{i∈I} P(A_i)

and an infinite family of σ-algebras is said to be independent if all its finite subfamilies are independent.
The new definition relates to the previous ones very directly:
* Two events A and B are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by an event E ∈ Σ is, by definition,

:: σ({E}) = {∅, E, Ω ∖ E, Ω}.
* Two random variables X and Y defined over Ω are independent (in the old sense) if and only if the σ-algebras that they generate are independent (in the new sense). The σ-algebra generated by a random variable X taking values in some measurable space S consists, by definition, of all subsets of Ω of the form X⁻¹(U), where U is any measurable subset of S.

Using this definition, it is easy to show that if X and Y are random variables and Y is constant, then X and Y are independent, since the σ-algebra generated by a constant random variable is the trivial σ-algebra {∅, Ω}. Probability zero events cannot affect independence, so independence also holds if Y is only Pr-almost surely constant.
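On a finite probability space the σ-algebra definition can be checked by brute force. The sketch below (a four-point uniform space; the events E, F and all helper names are illustrative) builds the σ-algebra {∅, E, Eᶜ, Ω} generated by an event and verifies independence by testing the product rule for every pair, including the trivial σ-algebra of a constant random variable:

```python
from fractions import Fraction
from itertools import product

# Four equally likely outcomes.
omega = frozenset({1, 2, 3, 4})

def pr(event):
    return Fraction(len(event), len(omega))

def sigma_of_event(E):
    """σ-algebra generated by a single event: {∅, E, Eᶜ, Ω}."""
    return {frozenset(), frozenset(E), omega - frozenset(E), omega}

def independent(alg_a, alg_b):
    """Two σ-algebras are independent iff P(A ∩ B) = P(A)P(B) for all A, B."""
    return all(pr(A & B) == pr(A) * pr(B) for A, B in product(alg_a, alg_b))

E = {1, 2}
F = {1, 3}
print(independent(sigma_of_event(E), sigma_of_event(F)))  # True

# A constant random variable generates the trivial σ-algebra {∅, Ω},
# which is independent of every sub-σ-algebra.
trivial = {frozenset(), omega}
print(independent(trivial, sigma_of_event(E)))            # True
```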
Properties
Self-independence
Note that an event A is independent of itself if and only if

: P(A) = P(A ∩ A) = P(A)P(A), i.e. P(A) = 0 or P(A) = 1.

Thus an event is independent of itself if and only if it almost surely occurs or its complement almost surely occurs; this fact is useful when proving zero–one laws.
Expectation and covariance
If X and Y are independent random variables, then the expectation operator E has the property

: E[XY] = E[X]E[Y]

and the covariance cov[X, Y] is zero, as follows from

: cov[X, Y] = E[XY] − E[X]E[Y].
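The product rule for expectations can also be checked by simulation. In this sketch the two distributions (a Gaussian with mean 1 and a uniform on [0, 2]) are arbitrary illustrative choices; for independent draws the sample covariance should be close to zero:

```python
import random

# Monte Carlo check: for independent X, Y we expect E[XY] ≈ E[X]·E[Y],
# equivalently cov[X, Y] ≈ 0.
random.seed(1)
n = 200_000
xs = [random.gauss(1.0, 1.0) for _ in range(n)]    # E[X] = 1
ys = [random.uniform(0.0, 2.0) for _ in range(n)]  # E[Y] = 1

e_x = sum(xs) / n
e_y = sum(ys) / n
e_xy = sum(x * y for x, y in zip(xs, ys)) / n

cov = e_xy - e_x * e_y
print(cov)  # ≈ 0, up to sampling noise
```

Note the converse fails in general: zero covariance does not imply independence.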