In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' of a distribution that models ''X''. Formally, it is the variance of the score, or the expected value of the observed information.
In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior (according to the Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.
The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.
Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.
Definition
The Fisher information is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' upon which the probability of ''X'' depends. Let ''f''(''X''; ''θ'') be the probability density function (or probability mass function) for ''X'' conditioned on the value of ''θ''. It describes the probability that we observe a given outcome of ''X'', ''given'' a known value of ''θ''. If ''f'' is sharply peaked with respect to changes in ''θ'', it is easy to indicate the "correct" value of ''θ'' from the data, or equivalently, the data ''X'' provides a lot of information about the parameter ''θ''. If ''f'' is flat and spread out, then it would take many samples of ''X'' to estimate the actual "true" value of ''θ'' that ''would'' be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to ''θ''.
Formally, the partial derivative with respect to ''θ'' of the natural logarithm of the likelihood function is called the ''score''. Under certain regularity conditions, if ''θ'' is the true parameter (i.e. ''X'' is actually distributed as ''f''(''X''; ''θ'')), it can be shown that the expected value (the first moment) of the score, evaluated at the true parameter value ''θ'', is 0:
:<math>\operatorname{E}\left[\left.\frac{\partial}{\partial\theta} \log f(X;\theta)\,\right|\,\theta\right] = 0.</math>
The Fisher information is defined to be the variance of the score:
:<math>\mathcal{I}(\theta) = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^{\!2}\,\right|\,\theta\right].</math>
Note that <math>\mathcal{I}(\theta) \geq 0</math>. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable ''X'' has been averaged out.
If log ''f''(''X''; ''θ'') is twice differentiable with respect to ''θ'', and under certain regularity conditions, then the Fisher information may also be written as
:<math>\mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\,\right|\,\theta\right],</math>
since
:<math>\frac{\partial^2}{\partial\theta^2} \log f(X;\theta) = \frac{\dfrac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)} - \left(\frac{\dfrac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)}\right)^{\!2}</math>
and
:<math>\operatorname{E}\left[\left.\frac{\dfrac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)}\,\right|\,\theta\right] = \frac{\partial^2}{\partial\theta^2}\int f(x;\theta)\,dx = 0.</math>
Thus, the Fisher information may be seen as the curvature of the support curve (the graph of the log-likelihood). Near the maximum likelihood estimate, low Fisher information therefore indicates that the maximum appears "blunt", that is, the maximum is shallow and there are many nearby values with a similar log-likelihood. Conversely, high Fisher information indicates that the maximum is sharp.
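The agreement between the variance-of-the-score form and the negative-curvature form can be checked numerically. The following sketch (not from the original text; the helper names are illustrative) estimates both quantities by Monte Carlo and finite differences for a Normal(''θ'', 1) observation, whose Fisher information is known to equal 1:

```python
import math
import random

def logpdf(x, theta):
    # log-density of a Normal(theta, 1) observation
    return -0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)

def fisher_two_ways(theta, n=100_000, h=1e-4, seed=0):
    # Estimate the Fisher information two ways: (a) the mean squared score
    # (the score has mean zero, so this is its variance), and (b) the
    # negative expected second derivative of the log-likelihood.
    rng = random.Random(seed)
    score_sq = 0.0
    neg_curv = 0.0
    for _ in range(n):
        x = rng.gauss(theta, 1.0)
        d1 = (logpdf(x, theta + h) - logpdf(x, theta - h)) / (2 * h)
        d2 = (logpdf(x, theta + h) - 2 * logpdf(x, theta)
              + logpdf(x, theta - h)) / h ** 2
        score_sq += d1 * d1
        neg_curv -= d2
    return score_sq / n, neg_curv / n

var_score, exp_curv = fisher_two_ways(0.7)
# both estimates should be close to the true value I(theta) = 1
```

Finite differences stand in for the analytic derivatives here so that only the log-density needs to be supplied; for this model the second derivative is constant in the data, so the curvature estimate has essentially no Monte Carlo noise.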
Regularity conditions
The regularity conditions are as follows:
# The partial derivative of ''f''(''X''; ''θ'') with respect to ''θ'' exists
almost everywhere. (It can fail to exist on a null set, as long as this set does not depend on ''θ''.)
# The integral of ''f''(''X''; ''θ'') can be differentiated under the integral sign with respect to ''θ''.
# The support of ''f''(''X''; ''θ'') does not depend on ''θ''.
If ''θ'' is a vector then the regularity conditions must hold for every component of ''θ''. It is easy to find an example of a density that does not satisfy the regularity conditions: The density of a Uniform(0, ''θ'') variable fails to satisfy conditions 1 and 3. In this case, even though the Fisher information can be computed from the definition, it will not have the properties it is typically assumed to have.
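The Uniform(0, ''θ'') failure can be made concrete with a small simulation (a sketch added here for illustration, not part of the original text). Plugging the uniform density into the definition gives an "information" of 1/''θ''² per sample, which would suggest a Cramér–Rao bound of ''θ''²/''n'' — yet a standard unbiased estimator beats that bound, confirming that the bound's conclusion does not hold when the regularity conditions fail:

```python
import random

# Uniform(0, theta): the support depends on theta, so conditions 1 and 3 fail.
# A naive Cramer-Rao bound computed from the definition would be theta^2 / n,
# but the unbiased estimator (n+1)/n * max(X) has far smaller variance.
rng = random.Random(7)
theta, n, reps = 2.0, 10, 50_000
estimates = []
for _ in range(reps):
    xs = [rng.uniform(0, theta) for _ in range(n)]
    estimates.append((n + 1) / n * max(xs))
mean_est = sum(estimates) / reps
var_est = sum((e - mean_est) ** 2 for e in estimates) / reps
naive_bound = theta ** 2 / n   # 0.4, the would-be Cramer-Rao bound
# var_est is near theta^2 / (n * (n + 2)) = 1/30, far below the "bound"
```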
In terms of likelihood
Because the likelihood of ''θ'' given ''X'' is always proportional to the probability ''f''(''X''; ''θ''), their logarithms necessarily differ by a constant that is independent of ''θ'', and the derivatives of these logarithms with respect to ''θ'' are necessarily equal. Thus one can substitute a log-likelihood ''l''(''θ''; ''X'') for log ''f''(''X''; ''θ'') in the definitions of Fisher information.
Samples of any size
The value ''X'' can represent a single sample drawn from a single distribution or can represent a collection of samples drawn from a collection of distributions. If there are ''n'' samples and the corresponding ''n'' distributions are
statistically independent then the Fisher information will necessarily be the sum of the single-sample Fisher information values, one for each single sample from its distribution. In particular, if the ''n'' distributions are
independent and identically distributed then the Fisher information will necessarily be ''n'' times the Fisher information of a single sample from the common distribution.
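The additivity over independent samples can be seen directly from the curvature form of the information: the joint log-likelihood is a sum of per-sample terms, so its second derivative adds. The sketch below (illustrative, not from the original text) checks this for iid Normal(''θ'', 1) samples, where the single-sample information is 1 and the ''n''-sample information is exactly ''n'':

```python
import math
import random

def loglik(xs, theta):
    # joint log-likelihood of iid Normal(theta, 1) observations
    return sum(-0.5 * (x - theta) ** 2 - 0.5 * math.log(2 * math.pi)
               for x in xs)

rng = random.Random(0)
theta, h = 1.5, 1e-4
info = {}
for n in (1, 10, 50):
    xs = [rng.gauss(theta, 1.0) for _ in range(n)]
    # negative second derivative of the joint log-likelihood; for this model
    # it is constant in the data, so it equals the Fisher information n
    # without any averaging
    info[n] = -(loglik(xs, theta + h) - 2 * loglik(xs, theta)
                + loglik(xs, theta - h)) / h ** 2
```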
Informal derivation of the Cramér–Rao bound
The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of ''θ''. H.L. Van Trees (1968) and B. Roy Frieden (2004) provide the following method of deriving the Cramér–Rao bound, a result which describes use of the Fisher information.
Informally, we begin by considering an unbiased estimator <math>\hat\theta(X)</math>. Mathematically, "unbiased" means that
:<math>\operatorname{E}\left[\left.\hat\theta(X) - \theta\,\right|\,\theta\right] = \int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = 0 \quad\text{regardless of the value of } \theta.</math>
This expression is zero independent of ''θ'', so its partial derivative with respect to ''θ'' must also be zero. By the product rule, this partial derivative is also equal to
:<math>0 = \frac{\partial}{\partial\theta}\int \left(\hat\theta(x) - \theta\right) f(x;\theta)\,dx = \int \left(\hat\theta(x) - \theta\right) \frac{\partial f}{\partial\theta}\,dx - \int f\,dx.</math>
For each ''θ'', the likelihood function is a probability density function, and therefore <math>\int f\,dx = 1</math>. By using the chain rule on the partial derivative of <math>\log f</math> and then dividing and multiplying by <math>f(x;\theta)</math>, one can verify that
:<math>\frac{\partial f}{\partial\theta} = f\,\frac{\partial \log f}{\partial\theta}.</math>
Using these two facts in the above, we get
:<math>\int \left(\hat\theta - \theta\right) f\,\frac{\partial \log f}{\partial\theta}\,dx = 1.</math>
Factoring the integrand gives
:<math>\int \left(\left(\hat\theta - \theta\right)\sqrt{f}\right)\left(\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right) dx = 1.</math>
Squaring the expression in the integral, the Cauchy–Schwarz inequality yields
:<math>1 = \left(\int \left(\left(\hat\theta - \theta\right)\sqrt{f}\right)\cdot\left(\sqrt{f}\,\frac{\partial \log f}{\partial\theta}\right) dx\right)^{\!2} \le \left[\int \left(\hat\theta - \theta\right)^2 f\,dx\right]\cdot\left[\int \left(\frac{\partial \log f}{\partial\theta}\right)^{\!2} f\,dx\right].</math>
The second bracketed factor is defined to be the Fisher information, while the first bracketed factor is the expected mean-squared error of the estimator <math>\hat\theta</math>. By rearranging, the inequality tells us that
:<math>\operatorname{Var}\left(\hat\theta\right) \ge \frac{1}{\mathcal{I}(\theta)}.</math>
In other words, the precision to which we can estimate ''θ'' is fundamentally limited by the Fisher information of the likelihood function.
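The bound can be illustrated numerically (a sketch added for illustration, not from the original text). For ''n'' iid Normal(''θ'', 1) samples the Fisher information is ''n'', so any unbiased estimator must have variance at least 1/''n''. The sample mean attains the bound; the sample median, also unbiased by symmetry, does not — its variance approaches π/(2''n''):

```python
import random
import statistics

rng = random.Random(42)
theta, n, reps = 0.0, 25, 20_000
means, medians = [], []
for _ in range(reps):
    xs = [rng.gauss(theta, 1.0) for _ in range(n)]
    means.append(sum(xs) / n)
    medians.append(statistics.median(xs))
var_mean = statistics.pvariance(means)      # attains the bound: ~ 1/n = 0.04
var_median = statistics.pvariance(medians)  # ~ pi/(2n) > 1/n: strictly above
```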
Single-parameter Bernoulli experiment
A Bernoulli trial is a random variable with two possible outcomes, "success" and "failure", with success having a probability of ''θ''. The outcome can be thought of as determined by a coin toss, with the probability of heads being ''θ'' and the probability of tails being 1 − ''θ''.
Let ''X'' be a Bernoulli trial. The Fisher information contained in ''X'' may be calculated to be
:<math>\mathcal{I}(\theta) = \frac{1}{\theta(1-\theta)}.</math>
Because Fisher information is additive, the Fisher information contained in ''n'' independent Bernoulli trials is therefore
:<math>\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}.</math>
This is the reciprocal of the variance of the mean number of successes in ''n'' Bernoulli trials, so in this case, the Cramér–Rao bound is an equality.
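As a minimal arithmetic check (illustrative code, not from the original text): the variance of the sample mean of ''n'' Bernoulli(''θ'') trials is ''θ''(1 − ''θ'')/''n'', exactly the reciprocal of the ''n''-trial Fisher information, so the Cramér–Rao bound is attained:

```python
def bernoulli_fisher(theta, n=1):
    # Fisher information of n independent Bernoulli(theta) trials
    return n / (theta * (1 - theta))

theta, n = 0.25, 40
crb = 1 / bernoulli_fisher(theta, n)          # Cramer-Rao lower bound
var_sample_mean = theta * (1 - theta) / n     # variance of the sample mean
# the two agree exactly: the bound holds with equality for this estimator
```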
Matrix form
When there are ''N'' parameters, so that ''θ'' is an ''N'' × 1 vector <math>\theta = [\theta_1, \theta_2, \dots, \theta_N]^\mathsf{T}</math>, then the Fisher information takes the form of an ''N'' × ''N'' matrix. This matrix is called the Fisher information matrix (FIM) and has typical element
:<math>\left[\mathcal{I}(\theta)\right]_{i,j} = \operatorname{E}\left[\left.\left(\frac{\partial}{\partial\theta_i} \log f(X;\theta)\right)\left(\frac{\partial}{\partial\theta_j} \log f(X;\theta)\right)\right|\theta\right].</math>
The FIM is a positive semidefinite matrix. If it is positive definite, then it defines a Riemannian metric on the ''N''-dimensional parameter space. The topic of information geometry uses this to connect Fisher information to differential geometry, and in that context, this metric is known as the Fisher information metric.
Under certain regularity conditions, the Fisher information matrix may also be written as
:<math>\left[\mathcal{I}(\theta)\right]_{i,j} = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta_i\,\partial\theta_j} \log f(X;\theta)\right|\theta\right].</math>
The result is interesting in several ways:
*It can be derived as the Hessian of the relative entropy.
*It can be used as a Riemannian metric for defining Fisher–Rao geometry when it is positive-definite.
*It can be understood as a metric induced from the Euclidean metric, after an appropriate change of variable.
*In its complex-valued form, it is the Fubini–Study metric.
*It is the key part of the proof of Wilks' theorem, which allows confidence region estimates for maximum likelihood estimation (for those conditions for which it applies) without needing the likelihood principle.
*In cases where the analytical calculations of the FIM above are difficult, it is possible to form an average of easy Monte Carlo estimates of the Hessian of the negative log-likelihood function as an estimate of the FIM. The estimates may be based on values of the negative log-likelihood function or the gradient of the negative log-likelihood function; no analytical calculation of the Hessian of the negative log-likelihood function is needed.
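A Monte Carlo FIM estimate of the gradient-based kind mentioned in the last point can be sketched as follows (illustrative code with hypothetical helper names, not from the original text): average the outer product of the score over samples drawn at the parameter point. For Normal(''μ'', ''σ'') the exact FIM is diag(1/''σ''², 2/''σ''²), giving a reference to check against:

```python
import math
import random

def logpdf(x, mu, sigma):
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi))

def score(x, mu, sigma, h=1e-5):
    # finite-difference gradient of the log-density in (mu, sigma)
    dmu = (logpdf(x, mu + h, sigma) - logpdf(x, mu - h, sigma)) / (2 * h)
    dsg = (logpdf(x, mu, sigma + h) - logpdf(x, mu, sigma - h)) / (2 * h)
    return dmu, dsg

def mc_fim(mu, sigma, n=200_000, seed=0):
    # average the outer product of the score over samples drawn at (mu, sigma)
    rng = random.Random(seed)
    fim = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(n):
        g = score(rng.gauss(mu, sigma), mu, sigma)
        for i in range(2):
            for j in range(2):
                fim[i][j] += g[i] * g[j]
    return [[v / n for v in row] for row in fim]

fim = mc_fim(0.0, 2.0)
# exact FIM is diag(1/sigma^2, 2/sigma^2) = diag(0.25, 0.5); the vanishing
# off-diagonal entries mean mu and sigma are orthogonal parameters
```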
Orthogonal parameters
We say that two parameters ''θ''<sub>''i''</sub> and ''θ''<sub>''j''</sub> are orthogonal if the element of the ''i''th row and ''j''th column of the Fisher information matrix is zero. Orthogonal parameters are easy to deal with in the sense that their maximum likelihood estimates are independent and can be calculated separately. When dealing with research problems, it is very common for the researcher to invest some time searching for an orthogonal parametrization of the densities involved in the problem.
Singular statistical model
If the Fisher information matrix is positive definite for all ''θ'', then the corresponding statistical model is said to be ''regular''; otherwise, the statistical model is said to be ''singular''. Examples of singular statistical models include the following: normal mixtures, binomial mixtures, multinomial mixtures, Bayesian networks, neural networks, radial basis functions, hidden Markov models, stochastic context-free grammars, reduced rank regressions, Boltzmann machines.
In machine learning, if a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular.
Multivariate normal distribution
The FIM for an ''N''-variate multivariate normal distribution, <math>X \sim N(\mu(\theta), \Sigma(\theta))</math>, has a special form. Let the ''K''-dimensional vector of parameters be <math>\theta = [\theta_1, \dots, \theta_K]^\mathsf{T}</math> and the vector of random normal variables be <math>X = [X_1, \dots, X_N]^\mathsf{T}</math>. Assume that the mean values of these random variables are <math>\mu(\theta) = [\mu_1(\theta), \dots, \mu_N(\theta)]^\mathsf{T}</math>, and let <math>\Sigma(\theta)</math> be the covariance matrix. Then, for <math>1 \le m, n \le K</math>, the (''m'', ''n'') entry of the FIM is:
:<math>\mathcal{I}_{m,n} = \frac{\partial\mu^\mathsf{T}}{\partial\theta_m} \Sigma^{-1} \frac{\partial\mu}{\partial\theta_n} + \frac{1}{2}\operatorname{tr}\left(\Sigma^{-1} \frac{\partial\Sigma}{\partial\theta_m} \Sigma^{-1} \frac{\partial\Sigma}{\partial\theta_n}\right),</math>
where <math>(\cdot)^\mathsf{T}</math> denotes the transpose of a vector, <math>\operatorname{tr}(\cdot)</math> denotes the trace of a square matrix, and:
:<math>\frac{\partial\mu}{\partial\theta_m} = \begin{bmatrix} \dfrac{\partial\mu_1}{\partial\theta_m} & \dfrac{\partial\mu_2}{\partial\theta_m} & \cdots & \dfrac{\partial\mu_N}{\partial\theta_m} \end{bmatrix}^\mathsf{T}, \qquad
\frac{\partial\Sigma}{\partial\theta_m} = \begin{bmatrix} \dfrac{\partial\Sigma_{1,1}}{\partial\theta_m} & \cdots & \dfrac{\partial\Sigma_{1,N}}{\partial\theta_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial\Sigma_{N,1}}{\partial\theta_m} & \cdots & \dfrac{\partial\Sigma_{N,N}}{\partial\theta_m} \end{bmatrix}.</math>
Note that a special, but very common, case is the one where <math>\Sigma(\theta) = \Sigma</math>, a constant. Then
:<math>\mathcal{I}_{m,n} = \frac{\partial\mu^\mathsf{T}}{\partial\theta_m} \Sigma^{-1} \frac{\partial\mu}{\partial\theta_n}.</math>
In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of least squares estimation theory.
Another special case occurs when the mean and covariance depend on two different vector parameters, say, ''β'' and ''θ''. This is especially popular in the analysis of spatial data, which often uses a linear model with correlated residuals. In this case, the FIM is block diagonal,
:<math>\mathcal{I}(\beta, \theta) = \operatorname{diag}\left(\mathcal{I}(\beta), \mathcal{I}(\theta)\right),</math>
where
:<math>\mathcal{I}(\beta)_{m,n} = \frac{\partial\mu^\mathsf{T}}{\partial\beta_m} \Sigma^{-1} \frac{\partial\mu}{\partial\beta_n}, \qquad \mathcal{I}(\theta)_{m,n} = \frac{1}{2}\operatorname{tr}\left(\Sigma^{-1} \frac{\partial\Sigma}{\partial\theta_m} \Sigma^{-1} \frac{\partial\Sigma}{\partial\theta_n}\right).</math>
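The Gaussian FIM formula is straightforward to implement. The sketch below (illustrative code, not from the original text; `gaussian_fim` is a hypothetical helper) evaluates the entry formula from the derivative vectors and matrices, and checks the constant-covariance linear model ''μ'' = ''Aθ'', where the FIM reduces to <math>A^\mathsf{T}\Sigma^{-1}A</math>, the coefficient matrix of the least-squares normal equations:

```python
import numpy as np

def gaussian_fim(dmu, sigma, dsigma=None):
    # FIM for X ~ N(mu(theta), Sigma(theta)):
    #   dmu[m]    -- the vector d(mu)/d(theta_m)
    #   dsigma[m] -- the matrix d(Sigma)/d(theta_m), or None if constant
    k = len(dmu)
    sinv = np.linalg.inv(sigma)
    fim = np.zeros((k, k))
    for m in range(k):
        for n in range(k):
            fim[m, n] = dmu[m] @ sinv @ dmu[n]
            if dsigma is not None:
                fim[m, n] += 0.5 * np.trace(sinv @ dsigma[m] @ sinv @ dsigma[n])
    return fim

# linear model mu = A @ theta with constant covariance:
# d(mu)/d(theta_m) is the m-th column of A, so the FIM is A.T @ inv(Sigma) @ A
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
sigma = 0.5 * np.eye(3)
fim = gaussian_fim([A[:, 0], A[:, 1]], sigma)
```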
Properties
Chain rule
Similar to the entropy or mutual information, the Fisher information also possesses a chain rule decomposition. In particular, if ''X'' and ''Y'' are jointly distributed random variables, it follows that:
:<math>\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_{Y\mid X}(\theta),</math>
where <math>\mathcal{I}_{Y\mid X}(\theta) = \operatorname{E}_X\left[\mathcal{I}_{Y\mid X=x}(\theta)\right]</math> and <math>\mathcal{I}_{Y\mid X=x}(\theta)</math> is the Fisher information of ''Y'' relative to <math>\theta</math> calculated with respect to the conditional density of ''Y'' given a specific value ''X'' = ''x''.
As a special case, if the two random variables are independent, the information yielded by the two random variables is the sum of the information from each random variable separately:
:<math>\mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta).</math>
Consequently, the information in a random sample of ''n'' independent and identically distributed observations is ''n'' times the information in a sample of size 1.
F-divergence
Given a convex function