In probability theory, an ''empirical measure'' is a random measure arising from a particular realization of a (usually finite) sequence of random variables. The precise definition is given below. Empirical measures are relevant to mathematical statistics.

The motivation for studying empirical measures is that it is often impossible to know the true underlying probability measure P. We collect observations X_1, X_2, \dots, X_n and compute relative frequencies. We can estimate P, or a related distribution function F, by means of the empirical measure or the empirical distribution function, respectively. These are uniformly good estimates under certain conditions. Theorems in the area of empirical processes provide rates of this convergence.


Definition

Let X_1, X_2, \dots be a sequence of independent identically distributed random variables with values in the state space S and with probability distribution P.

Definition
:The ''empirical measure'' P_n is defined for measurable subsets of S and given by
::P_n(A) = \frac{1}{n} \sum_{i=1}^n I_A(X_i) = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}(A)
:where I_A is the indicator function and \delta_X is the Dirac measure.

Properties
*For a fixed measurable set A, nP_n(A) is a binomial random variable with mean nP(A) and variance nP(A)(1 − P(A)).
**In particular, P_n(A) is an unbiased estimator of P(A).
*For a fixed partition A_i of S, the random variables Y_i = nP_n(A_i) form a multinomial distribution with ''event probabilities'' P(A_i).
**The covariance matrix of this multinomial distribution is \operatorname{Cov}(Y_i, Y_j) = nP(A_i)(\delta_{ij} - P(A_j)).

Definition
:\bigl(P_n(c)\bigr)_{c\in\mathcal{C}} is the ''empirical measure'' indexed by \mathcal{C}, a collection of measurable subsets of S.

To generalize this notion further, observe that the empirical measure P_n maps measurable functions f : S \to \mathbb{R} to their ''empirical mean'',
:f \mapsto P_n f = \int_S f \, dP_n = \frac{1}{n} \sum_{i=1}^n f(X_i)
In particular, the empirical measure of A is simply the empirical mean of the indicator function, P_n(A) = P_n I_A.

For a fixed measurable function f, P_n f is a random variable with mean \mathbb{E}f and variance \frac{1}{n} \mathbb{E}(f - \mathbb{E}f)^2.

By the strong law of large numbers, P_n(A) converges to P(A) almost surely for fixed A. Similarly, P_n f converges to \mathbb{E}f almost surely for a fixed measurable function f. The problem of uniform convergence of P_n to P was open until Vapnik and Chervonenkis solved it in 1968.

If the class \mathcal{C} (or \mathcal{F}) is Glivenko–Cantelli with respect to P, then P_n converges to P uniformly over c \in \mathcal{C} (or f \in \mathcal{F}). In other words, with probability 1 we have
:\|P_n - P\|_{\mathcal{C}} = \sup_{c\in\mathcal{C}} |P_n(c) - P(c)| \to 0,
:\|P_n - P\|_{\mathcal{F}} = \sup_{f\in\mathcal{F}} |P_n f - \mathbb{E}f| \to 0.
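As a concrete sketch (not part of the original article), the empirical measure of a set and the empirical mean of a function can be computed directly from a sample. The uniform distribution on [0, 1], the set A = [0, 0.3], and the test function f(x) = x² below are arbitrary illustrative choices.

```python
import random

def empirical_measure(sample, A):
    """P_n(A): the fraction of sample points falling in A (given as a predicate)."""
    return sum(1 for x in sample if A(x)) / len(sample)

def empirical_mean(sample, f):
    """P_n f: the average of f over the sample, i.e. the integral of f against P_n."""
    return sum(f(x) for x in sample) / len(sample)

random.seed(0)
n = 100_000
sample = [random.random() for _ in range(n)]  # X_i iid uniform on [0, 1]

A = lambda x: x <= 0.3       # the set A = [0, 0.3], so P(A) = 0.3
p_n_A = empirical_measure(sample, A)

f = lambda x: x * x          # E f = 1/3 under the uniform distribution
p_n_f = empirical_mean(sample, f)

print(abs(p_n_A - 0.3) < 0.01)    # P_n(A) is close to P(A)
print(abs(p_n_f - 1 / 3) < 0.01)  # P_n f is close to E f
```

Since nP_n(A) is binomial with variance nP(A)(1 − P(A)), the deviation |P_n(A) − P(A)| here is of order \sqrt{0.3 \cdot 0.7 / n} \approx 0.0014, well inside the 0.01 tolerance.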


Empirical distribution function

The ''empirical distribution function'' provides an example of empirical measures. For real-valued iid random variables X_1, \dots, X_n it is given by
:F_n(x) = P_n((-\infty, x]) = P_n I_{(-\infty, x]}.
In this case, empirical measures are indexed by the class \mathcal{C} = \{(-\infty, x] : x \in \mathbb{R}\}. It has been shown that \mathcal{C} is a uniform Glivenko–Cantelli class; in particular,
:\sup_F \|F_n(x) - F(x)\|_\infty \to 0
with probability 1.
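A minimal sketch (assuming, for illustration, iid uniform samples on [0, 1], whose true CDF is F(x) = x): the empirical distribution function F_n is a step function, and the sup distance \sup_x |F_n(x) - F(x)|, the Kolmogorov–Smirnov statistic, can be evaluated at the jump points of F_n.

```python
import bisect
import random

def ecdf(sample):
    """Return F_n as a function, F_n(x) = P_n((-inf, x])."""
    s = sorted(sample)
    n = len(s)
    return lambda x: bisect.bisect_right(s, x) / n

random.seed(0)
sup_dists = []
for n in (100, 10_000):
    s = sorted(random.random() for _ in range(n))
    F_n = ecdf(s)
    # sup_x |F_n(x) - x| is attained at a jump of F_n; check both sides of each jump.
    d = max(max(F_n(x) - x, x - i / n) for i, x in enumerate(s))
    sup_dists.append(d)
    print(n, round(d, 4))

# The sup distance shrinks as n grows, as the Glivenko–Cantelli theorem predicts.
```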


See also

* Empirical risk minimization
* Poisson random measure




Further reading

*{{cite journal |first=J. |last=Wolfowitz |title=Generalization of the theorem of Glivenko–Cantelli |journal=Annals of Mathematical Statistics |volume=25 |issue=1 |pages=131–138 |year=1954 |jstor=2236518 |doi=10.1214/aoms/1177728852 |doi-access=free}}