Large sample theory

In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is often assumed that the sample size ''n'' may grow indefinitely; the properties of estimators and tests are then evaluated under the limit of ''n'' → ∞. In practice, a limit evaluation is considered to be approximately valid for large finite sample sizes too.Höpfner, R. (2014), ''Asymptotic Statistics'', Walter de Gruyter.


Overview

Most statistical problems begin with a dataset of size ''n''. The asymptotic theory proceeds by assuming that it is possible (in principle) to keep collecting additional data, so that the sample size grows infinitely, i.e. ''n'' → ∞. Under this assumption, many results can be obtained that are unavailable for samples of finite size. An example is the weak law of large numbers. The law states that for a sequence of independent and identically distributed (IID) random variables X_1, X_2, …, if one value is drawn from each random variable and the average of the first ''n'' values is computed as \bar X_n, then \bar X_n converges in probability to the population mean E[X_i] as ''n'' → ∞.A. DasGupta (2008), ''Asymptotic Theory of Statistics and Probability'', Springer.

In asymptotic theory, the standard approach is ''n'' → ∞. For some statistical models, slightly different approaches of asymptotics may be used. For example, with panel data, it is commonly assumed that one dimension in the data remains fixed while the other dimension grows: ''T'' fixed and ''N'' → ∞, or vice versa. Besides the standard approach to asymptotics, other alternative approaches exist:
* Within the local asymptotic normality framework, it is assumed that the value of the "true parameter" in the model varies slightly with ''n'', such that the ''n''-th model corresponds to θ_n = θ + h/√''n''. This approach lets us study the regularity of estimators.
* When statistical tests are studied for their power to distinguish against alternatives that are close to the null hypothesis, it is done within the so-called "local alternatives" framework: the null hypothesis is H₀: θ = θ₀ and the alternative is H₁: θ = θ₀ + h/√''n''. This approach is especially popular for unit root tests.
* There are models where the dimension of the parameter space slowly expands with ''n'', reflecting the fact that the more observations there are, the more structural effects can be feasibly incorporated into the model.
* In kernel density estimation and kernel regression, an additional parameter, the bandwidth ''h'', is assumed. In those models, it is typically taken that ''h'' → 0 as ''n'' → ∞; the rate of convergence must be chosen carefully, usually ''h'' ∝ ''n''^(−1/5).

In many cases, highly accurate results for finite samples can be obtained via numerical methods (i.e. computers); even in such cases, though, asymptotic analysis can be useful.
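The weak law of large numbers described above can be checked empirically. The following is an illustrative sketch (not part of the original article), using Uniform(0, 1) draws whose population mean is 0.5:

```python
# Monte Carlo check of the weak law of large numbers, standard library only.
import random

random.seed(0)

def sample_mean(n):
    """Average of n IID Uniform(0, 1) draws; the population mean is 0.5."""
    return sum(random.random() for _ in range(n)) / n

# The absolute error of the sample mean tends to shrink as n grows,
# illustrating convergence in probability to the population mean.
errors = {n: abs(sample_mean(n) - 0.5) for n in (100, 10_000, 1_000_000)}
```

For large ''n'' the error is small with high probability; any single small-sample error can still be large, which is exactly what "convergence in probability" (rather than a deterministic guarantee) expresses.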


Modes of convergence of random variables


Asymptotic properties


Estimators


''Consistency''

A sequence of estimates \hat\theta_n is said to be ''consistent'' if it converges in probability to the true value of the parameter being estimated: : \hat\theta_n\ \xrightarrow{p}\ \theta_0. That is, roughly speaking, with an infinite amount of data the estimator (the formula for generating the estimates) would give the correct result for the parameter being estimated.
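As a concrete sketch (the example is chosen for illustration and is not from the article): for data from Uniform(0, θ), the sample maximum is a consistent estimator of θ, since the largest of ''n'' draws approaches θ as ''n'' grows.

```python
# Hypothetical example: the sample maximum is a consistent estimator of
# theta for Uniform(0, theta) data; its error vanishes in probability.
import random

random.seed(1)
THETA = 2.0  # true parameter value, chosen for this illustration

def estimate(n):
    """Sample maximum of n draws from Uniform(0, THETA)."""
    return max(random.uniform(0, THETA) for _ in range(n))

# Estimates approach THETA from below as the sample size increases.
estimates = [estimate(n) for n in (10, 1_000, 100_000)]
```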


''Asymptotic distribution''

If it is possible to find sequences of non-random constants a_n, b_n (possibly depending on the value of θ₀), and a non-degenerate distribution ''G'' such that : b_n(\hat\theta_n - a_n)\ \xrightarrow{d}\ G , then the sequence of estimators \hat\theta_n is said to have the ''asymptotic distribution'' ''G''. Most often, the estimators encountered in practice are asymptotically normal, meaning their asymptotic distribution is the normal distribution, with a_n = θ₀, b_n = √''n'', and G = \mathcal{N}(0, V): : \sqrt{n}(\hat\theta_n - \theta_0)\ \xrightarrow{d}\ \mathcal{N}(0, V).
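Asymptotic normality of the sample mean can be probed by simulation. This is a sketch with illustrative values (sample size, replication count, and the Uniform(0, 1) population are all chosen for the example): the scaled errors √''n''(\bar X_n − μ) should have variance close to the population variance ''V''.

```python
# Check that sqrt(n) * (Xbar - mu) has variance close to V = Var(X),
# as asymptotic normality of the sample mean predicts.
import random

random.seed(2)
n, reps = 500, 4_000
mu, V = 0.5, 1.0 / 12.0  # mean and variance of Uniform(0, 1)

draws = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    draws.append(n ** 0.5 * (xbar - mu))

# Empirical variance of the scaled errors; should be near V = 1/12.
empirical_var = sum(d * d for d in draws) / reps
```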


''Asymptotic confidence regions''


Asymptotic theorems

* Central limit theorem
* Continuous mapping theorem
* Glivenko–Cantelli theorem
* Law of large numbers
* Law of the iterated logarithm
* Slutsky's theorem
* Delta method
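As one worked example from the list above, the delta method states that if √''n''(\bar X_n − μ) → \mathcal{N}(0, V), then for a smooth function ''g'', √''n''(g(\bar X_n) − g(μ)) → \mathcal{N}(0, g′(μ)² V). The following sketch (with parameters chosen for illustration) checks this by simulation for g(x) = eˣ:

```python
# Simulation check of the delta method with g(x) = exp(x):
# the scaled errors of g(Xbar) should have variance ~ g'(mu)^2 * V.
import math
import random

random.seed(3)
n, reps = 500, 4_000
mu, V = 0.5, 1.0 / 12.0            # Uniform(0, 1) mean and variance
delta_var = math.exp(mu) ** 2 * V  # g'(mu)^2 * V, since g' = exp

draws = []
for _ in range(reps):
    xbar = sum(random.random() for _ in range(n)) / n
    draws.append(n ** 0.5 * (math.exp(xbar) - math.exp(mu)))

# Empirical variance should be close to the delta-method prediction.
empirical_var = sum(d * d for d in draws) / reps
```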


See also

* Asymptotic analysis
* Exact statistics
* Large deviations theory


References

