In statistics, a consistent estimator or asymptotically consistent estimator is an estimator (a rule for computing estimates of a parameter ''θ''0) having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to ''θ''0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to ''θ''0 converges to one.

In practice one constructs an estimator as a function of an available sample of size ''n'', and then imagines being able to keep collecting data and expanding the sample ''ad infinitum''. In this way one obtains a sequence of estimates indexed by ''n'', and consistency is a property of what occurs as the sample size "grows to infinity". If the sequence of estimates can be mathematically shown to converge in probability to the true value ''θ''0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent.

Consistency as defined here is sometimes referred to as weak consistency. When convergence in probability is replaced with almost sure convergence, the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency.


Definition

Formally speaking, an estimator ''Tn'' of parameter ''θ'' is said to be weakly consistent if it converges in probability to the true value of the parameter:
: \underset{n\to\infty}{\operatorname{plim}}\;T_n = \theta,
i.e. if, for all ''ε'' > 0,
: \lim_{n\to\infty}\Pr\big(|T_n-\theta| > \varepsilon\big) = 0.

An estimator ''Tn'' of parameter ''θ'' is said to be strongly consistent if it converges almost surely to the true value of the parameter:
: \Pr\big(\lim_{n\to\infty} T_n = \theta\big) = 1.

A more rigorous definition takes into account the fact that ''θ'' is actually unknown, and thus the convergence in probability must take place for every possible value of this parameter. Suppose \{p_\theta : \theta\in\Theta\} is a family of distributions (the parametric model), and X^\theta = \{X_1, X_2, \ldots : X_i \sim p_\theta\} is an infinite sample from the distribution ''pθ''. Let \{T_n(X^\theta)\} be a sequence of estimators for some parameter ''g''(''θ''). Usually, ''Tn'' will be based on the first ''n'' observations of a sample. Then this sequence is said to be (weakly) consistent if
: \underset{n\to\infty}{\operatorname{plim}}\;T_n(X^\theta) = g(\theta),\ \ \text{for all}\ \theta\in\Theta.

This definition uses ''g''(''θ'') instead of simply ''θ'' because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example we estimate the location parameter of the model, but not the scale.
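The definition can also be checked numerically for a concrete estimator. The following is a minimal Python sketch, not part of the original article; the choice of the sample mean as ''Tn'' and the values ''θ'' = 2, ''ε'' = 0.1 are illustrative assumptions. It estimates Pr(|''Tn'' − ''θ''| > ''ε'') by simulation and shows the probability shrinking toward zero as ''n'' grows, which is exactly weak consistency.

```python
# Minimal sketch (illustrative values, not from the article): estimate
# Pr(|T_n - theta| > eps) by Monte Carlo for T_n = sample mean of N(theta, 1) data.
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 2.0, 0.1, 10_000   # assumed illustrative values

for n in (10, 100, 1_000, 10_000):
    samples = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    t_n = samples.mean(axis=1)                  # the estimator T_n for each replication
    miss = np.mean(np.abs(t_n - theta) > eps)   # Monte Carlo estimate of Pr(|T_n - theta| > eps)
    print(f"n = {n:6d}:  Pr(|T_n - theta| > {eps}) ~ {miss:.4f}")
```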


Examples


Sample mean of a normal random variable

Suppose one has a sequence of statistically independent observations \{X_1, X_2, \ldots\} from a normal ''N''(''μ'', ''σ''2) distribution. To estimate ''μ'' based on the first ''n'' observations, one can use the sample mean: ''Tn'' = (''X''1 + ... + ''Xn'')/''n''. This defines a sequence of estimators, indexed by the sample size ''n''.

From the properties of the normal distribution, we know the sampling distribution of this statistic: ''Tn'' is itself normally distributed, with mean ''μ'' and variance ''σ''2/''n''. Equivalently, (T_n-\mu)/(\sigma/\sqrt{n}) has a standard normal distribution:
: \Pr\!\left[\,|T_n-\mu| \geq \varepsilon\,\right] = \Pr\!\left[\frac{\sqrt{n}\,|T_n-\mu|}{\sigma} \geq \frac{\sqrt{n}\,\varepsilon}{\sigma}\right] = 2\left(1-\Phi\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right) \to 0
as ''n'' tends to infinity, for any fixed ''ε'' > 0. Therefore, the sequence ''Tn'' of sample means is consistent for the population mean ''μ'' (recalling that \Phi is the cumulative distribution function of the standard normal distribution).
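The closed-form tail probability above can be checked by simulation. The sketch below assumes SciPy is available and uses illustrative values ''μ'' = 0, ''σ'' = 2, ''ε'' = 0.25 (none of which come from the article); it compares the Monte Carlo frequency of |''Tn'' − ''μ''| ≥ ''ε'' with 2(1 − Φ(√''n'' ''ε''/''σ'')).

```python
# Sketch comparing the simulated tail probability with the exact normal formula.
# Assumes NumPy and SciPy; mu, sigma, eps are illustrative, not from the article.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, eps, reps = 0.0, 2.0, 0.25, 20_000

for n in (25, 100, 400, 1_600):
    t_n = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)   # sample means T_n
    simulated = np.mean(np.abs(t_n - mu) >= eps)               # Monte Carlo tail frequency
    exact = 2.0 * (1.0 - norm.cdf(np.sqrt(n) * eps / sigma))   # 2(1 - Phi(sqrt(n) eps / sigma))
    print(f"n = {n:5d}:  simulated {simulated:.4f}   exact {exact:.4f}")
```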


Establishing consistency

The notion of asymptotic consistency is very close, almost synonymous, to the notion of convergence in probability. As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove consistency. Many such tools exist:

* In order to demonstrate consistency directly from the definition one can use the inequality
:: \Pr\!\big[h(T_n-\theta)\geq\varepsilon\big] \leq \frac{\operatorname{E}\big[h(T_n-\theta)\big]}{\varepsilon},
: the most common choices for the function ''h'' being either the absolute value (in which case it is known as Markov's inequality) or the quadratic function (respectively Chebyshev's inequality).

* Another useful result is the continuous mapping theorem: if ''Tn'' is consistent for ''θ'' and ''g''(·) is a real-valued function continuous at the point ''θ'', then ''g''(''Tn'') will be consistent for ''g''(''θ''):
:: T_n\ \xrightarrow{p}\ \theta \quad\Rightarrow\quad g(T_n)\ \xrightarrow{p}\ g(\theta)

* Slutsky's theorem can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If ''Tn'' →''d'' ''α'' and ''Sn'' →''p'' ''β'', then
:: \begin{align} & T_n + S_n \ \xrightarrow{d}\ \alpha+\beta, \\ & T_n S_n \ \xrightarrow{d}\ \alpha\beta, \\ & T_n / S_n \ \xrightarrow{d}\ \alpha/\beta, \ \text{provided that}\ \beta\neq0 \end{align}

* If estimator ''Tn'' is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the law of large numbers can be used (see the sketch after this list): for a sequence \{X_i\} of random variables and under suitable conditions,
:: \frac{1}{n}\sum_{i=1}^n g(X_i) \ \xrightarrow{p}\ \operatorname{E}[\,g(X)\,]

* If estimator ''Tn'' is defined implicitly, for example as a value that maximizes a certain objective function (see extremum estimator), then a more complicated argument involving stochastic equicontinuity has to be used.
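As a rough illustration of two of these tools, the sketch below (not from the article; the normal model and the function ''g''(''t'') = exp(''t'') are assumptions chosen for convenience) shows the law of large numbers driving the sample mean to ''μ'' and the continuous mapping theorem carrying that consistency over to exp(''Tn'') as an estimator of exp(''μ'').

```python
# Sketch (illustrative assumptions): LLN makes the sample mean consistent for mu,
# and the continuous mapping theorem makes exp(T_n) consistent for exp(mu).
import numpy as np

rng = np.random.default_rng(2)
mu = 1.5   # assumed true location parameter

for n in (10, 1_000, 100_000):
    x = rng.normal(mu, 1.0, size=n)
    t_n = x.mean()              # consistent for mu (law of large numbers)
    g_t_n = np.exp(t_n)         # continuous mapping: consistent for exp(mu)
    lln_g = np.exp(x).mean()    # LLN applied to g(X_i): consistent for E[exp(X)]
    print(f"n = {n:6d}:  T_n = {t_n:.4f}   exp(T_n) = {g_t_n:.4f}   mean of exp(X_i) = {lln_g:.4f}")

# Targets: mu = 1.5, exp(mu) ~ 4.48, and E[exp(X)] = exp(mu + 1/2) ~ 7.39 for X ~ N(mu, 1).
print("targets:", mu, np.exp(mu), np.exp(mu + 0.5))
```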


Bias versus consistency


Unbiased but not consistent

An estimator can be unbiased but not consistent. For example, for an iid sample \{x_1, \ldots, x_n\} one can use ''Tn''(''X'') = ''xn'' as the estimator of the mean E[''X'']. Note that here the sampling distribution of ''Tn'' is the same as the underlying distribution (for any ''n'', as it ignores all points but the last), so E[''Tn''(''X'')] = E[''X''] for any ''n''; hence it is unbiased, but it does not converge to any value.

However, if a sequence of estimators is unbiased ''and'' converges to a value, then it is consistent, as it must converge to the correct value.
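A quick numerical sketch of this example (illustrative values only, not from the article): both the last-observation estimator ''Tn''(''X'') = ''xn'' and the sample mean are centred on E[''X''], but only the sample mean's spread shrinks with ''n''.

```python
# Sketch: the "last observation" estimator is unbiased for E[X] but its sampling
# standard deviation never shrinks, so it is not consistent; the sample mean is both.
import numpy as np

rng = np.random.default_rng(3)
mu, reps = 5.0, 10_000   # assumed illustrative values

for n in (10, 100, 10_000):
    x = rng.normal(mu, 1.0, size=(reps, n))
    last = x[:, -1]          # T_n = x_n
    xbar = x.mean(axis=1)    # sample mean
    print(f"n = {n:6d}:  X_n mean/sd = {last.mean():.3f}/{last.std():.3f}   "
          f"sample-mean mean/sd = {xbar.mean():.3f}/{xbar.std():.3f}")
```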


Biased but consistent

Alternatively, an estimator can be biased but consistent. For example, if the mean is estimated by \frac{1}{n}\sum x_i + \frac{1}{n}, it is biased, but as n \rightarrow \infty it approaches the correct value, and so it is consistent.

Important examples include the sample variance and sample standard deviation. Without Bessel's correction (that is, when using the sample size n instead of the degrees of freedom n-1), these are both negatively biased but consistent estimators. With the correction, the corrected sample variance is unbiased, while the corrected sample standard deviation is still biased, but less so; both are still consistent, since the correction factor converges to 1 as the sample size grows.

Here is another example. Let T_n be a sequence of estimators for \theta.
: \Pr(T_n) = \begin{cases} 1 - 1/n, & \text{if}\ T_n = \theta \\ 1/n, & \text{if}\ T_n = n\delta + \theta \end{cases}
We can see that T_n \xrightarrow{p} \theta, \operatorname{E}[T_n] = \theta + \delta, and the bias does not converge to zero.
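The variance example can be illustrated numerically. The sketch below (illustrative values, not from the article) compares the uncorrected sample variance (divisor ''n'') with the Bessel-corrected one (divisor ''n'' − 1): the former is biased downward by the factor (''n'' − 1)/''n'', but both converge to the true variance as ''n'' grows.

```python
# Sketch: variance with divisor n is biased but consistent; with Bessel's
# correction (divisor n - 1) it is unbiased; both are consistent.
import numpy as np

rng = np.random.default_rng(4)
true_var, reps = 4.0, 20_000   # assumed illustrative values

for n in (5, 50, 500):
    x = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
    v_plain = x.var(axis=1, ddof=0)    # divisor n   (biased, consistent)
    v_bessel = x.var(axis=1, ddof=1)   # divisor n-1 (unbiased, consistent)
    print(f"n = {n:4d}:  mean of v_plain = {v_plain.mean():.3f}   "
          f"mean of v_bessel = {v_bessel.mean():.3f}   true = {true_var}")
```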


See also

* Efficient estimator
* Fisher consistency, an alternative, although rarely used, concept of consistency for estimators
* Regression dilution
* Statistical hypothesis testing
* Instrumental variables estimation




External links

* by Mark Thoma