Observed Information

In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the "log-likelihood" (the logarithm of the likelihood function). It is a sample-based version of the Fisher information.


Definition

Suppose we observe random variables X_1,\ldots,X_n, independent and identically distributed with density ''f''(''X''; θ), where θ is a (possibly unknown) vector. Then the log-likelihood of the parameters \theta given the data X_1,\ldots,X_n is

:\ell(\theta \mid X_1,\ldots,X_n) = \sum_{i=1}^n \log f(X_i \mid \theta) .

We define the observed information matrix at \theta^* as

:\mathcal{J}(\theta^*) = - \left. \nabla \nabla^{\mathsf{T}} \ell(\theta) \right|_{\theta=\theta^*}

::= - \left. \begin{pmatrix} \tfrac{\partial^2}{\partial \theta_1^2} & \tfrac{\partial^2}{\partial \theta_1 \partial \theta_2} & \cdots & \tfrac{\partial^2}{\partial \theta_1 \partial \theta_p} \\ \tfrac{\partial^2}{\partial \theta_2 \partial \theta_1} & \tfrac{\partial^2}{\partial \theta_2^2} & \cdots & \tfrac{\partial^2}{\partial \theta_2 \partial \theta_p} \\ \vdots & \vdots & \ddots & \vdots \\ \tfrac{\partial^2}{\partial \theta_p \partial \theta_1} & \tfrac{\partial^2}{\partial \theta_p \partial \theta_2} & \cdots & \tfrac{\partial^2}{\partial \theta_p^2} \end{pmatrix} \ell(\theta) \right|_{\theta = \theta^*}

In many instances, the observed information is evaluated at the maximum-likelihood estimate.
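As a concrete illustration, the following minimal sketch computes the observed information for a Bernoulli model, where \ell(\theta) = k \log\theta + (n-k)\log(1-\theta) with ''k'' successes in ''n'' trials. The model choice, simulated data, and finite-difference step are assumptions made for this example, not part of the original article.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=100)   # simulated Bernoulli data (assumed model)
n, k = x.size, int(x.sum())

def log_lik(theta):
    # Bernoulli log-likelihood: k*log(theta) + (n - k)*log(1 - theta)
    return k * np.log(theta) + (n - k) * np.log(1 - theta)

theta_hat = k / n                     # maximum-likelihood estimate

# Observed information: negative second derivative of the log-likelihood,
# approximated here by a central finite difference.
h = 1e-4
obs_info_fd = -(log_lik(theta_hat + h) - 2 * log_lik(theta_hat)
                + log_lik(theta_hat - h)) / h**2

# Analytic check: at the MLE, J(theta_hat) = n / (theta_hat * (1 - theta_hat)).
obs_info_exact = n / (theta_hat * (1 - theta_hat))
print(obs_info_fd, obs_info_exact)   # the two values should agree closely
</syntaxhighlight>

At the maximum-likelihood estimate \hat\theta = k/n the analytic value simplifies to n/(\hat\theta(1-\hat\theta)), which the finite-difference approximation should match to several digits.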


Alternative definition

Andrew Gelman, David Dunson and Donald Rubin define observed information instead in terms of the parameters' posterior probability, p(\theta \mid y):

:I(\theta) = - \frac{d^2}{d\theta^2} \log p(\theta \mid y)
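Under this reading the same machinery applies with the log posterior in place of the log-likelihood. The sketch below assumes, purely for illustration, a conjugate Beta(a, b) prior with Bernoulli data, so the posterior curvature can be checked against a closed form; none of these choices come from the article itself.

<syntaxhighlight lang="python">
import numpy as np

a, b = 2.0, 2.0          # Beta(a, b) prior hyperparameters (assumed)
n, k = 100, 30           # n Bernoulli trials, k successes (assumed data summary)

def log_posterior(theta):
    # Log posterior up to an additive constant: theta | y ~ Beta(a + k, b + n - k)
    return (a + k - 1) * np.log(theta) + (b + n - k - 1) * np.log(1 - theta)

theta, h = 0.3, 1e-4
# I(theta) = -d^2/dtheta^2 log p(theta | y), by central finite difference
info_fd = -(log_posterior(theta + h) - 2 * log_posterior(theta)
            + log_posterior(theta - h)) / h**2

# Analytic curvature of the Beta log-density for comparison
info_exact = (a + k - 1) / theta**2 + (b + n - k - 1) / (1 - theta)**2
print(info_fd, info_exact)
</syntaxhighlight>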


Fisher information

The Fisher information \mathcal{I}(\theta) is the expected value of the observed information given a single observation X distributed according to the hypothetical model with parameter \theta:

:\mathcal{I}(\theta) = \mathrm{E}(\mathcal{J}(\theta)).
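A quick Monte Carlo check of this identity, again with an assumed Bernoulli(\theta) model rather than anything from the article: for a single observation the observed information is J(\theta; x) = x/\theta^2 + (1-x)/(1-\theta)^2, and averaging it over simulated draws should recover the Fisher information 1/(\theta(1-\theta)).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3
x = rng.binomial(1, theta, size=200_000)       # many single observations

# Per-observation observed information for the Bernoulli model
observed = x / theta**2 + (1 - x) / (1 - theta)**2

print(observed.mean())                 # Monte Carlo estimate of E[J(theta)]
print(1 / (theta * (1 - theta)))       # Fisher information, about 4.76 here
</syntaxhighlight>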


Applications

In a notable article, Bradley Efron and David V. Hinkley argued that the observed information should be used in preference to the expected information when employing normal approximations for the distribution of maximum-likelihood estimates.
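A sketch of the phenomenon behind their argument, using an assumed Cauchy location model chosen because its observed and expected information genuinely differ (the code is illustrative only, not Efron and Hinkley's own computation): the expected information for ''n'' observations is the constant n/2, while the observed information J(\hat\theta) varies from sample to sample, so the competing standard errors 1/\sqrt{J(\hat\theta)} and 1/\sqrt{n/2} disagree.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.standard_cauchy(n)             # Cauchy(0, 1) data (assumed model)

def neg_log_lik(theta):
    # Negative Cauchy log-likelihood up to an additive constant
    return np.sum(np.log1p((x - theta) ** 2))

# Crude MLE by grid search around the sample median (adequate for a sketch)
grid = np.median(x) + np.linspace(-2.0, 2.0, 4001)
theta_hat = grid[np.argmin([neg_log_lik(t) for t in grid])]

# Observed information: J(theta) = sum 2*(1 - u^2)/(1 + u^2)^2 with u = x - theta
u = x - theta_hat
obs_info = np.sum(2 * (1 - u**2) / (1 + u**2) ** 2)

print(obs_info, n / 2)                            # observed vs expected information
print(1 / np.sqrt(obs_info), np.sqrt(2 / n))      # the two competing standard errors
</syntaxhighlight>

Rerunning with different seeds shows J(\hat\theta) fluctuating around n/2; Efron and Hinkley's point is that the sample-specific value is the more relevant measure of precision for the realized data.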


See also

* Fisher information matrix
* Fisher information metric


References

{{reflist}}