Neyman construction, named after Jerzy Neyman, is a frequentist method to construct an interval at a confidence level C such that, if we repeat the experiment many times, the interval will contain the true value of some parameter a fraction C of the time.


Theory

Assume X_1, X_2, \ldots, X_n are random variables with joint pdf f(x_1, x_2, \ldots, x_n \mid \theta_1, \theta_2, \ldots, \theta_k), which depends on k unknown parameters. For convenience, let \Theta be the sample space defined by the n random variables, and define a sample point in the sample space as X = (X_1, X_2, \ldots, X_n).
Neyman originally proposed defining two functions L(x) and U(x) such that, for any sample point X,
* L(X) \leq U(X) for all X \in \Theta,
* L and U are single-valued and defined for all X \in \Theta.
Given an observation X', the probability that \theta_1 lies between L(X') and U(X') is P(L(X') \leq \theta_1 \leq U(X') \mid X'), which is either 0 or 1. Such statements fail to draw meaningful inference about \theta_1, since the probability is simply zero or unity: under the frequentist construct the model parameters are unknown constants and are not permitted to be random variables. For example, if \theta_1 = 5, then P(2 \leq 5 \leq 10) = 1; likewise, if \theta_1 = 11, then P(2 \leq 11 \leq 10) = 0. As Neyman describes in his 1937 paper, suppose instead that we consider all points in the sample space, that is, all X \in \Theta, as random variables governed by the joint pdf above. Since L and U are functions of X, they too are random variables, and one can examine the meaning of the following probability statement:
If \theta_1' is the true value of \theta_1, we can choose L and U such that the probability that L(X) \leq \theta_1' and that \theta_1' \leq U(X) equals a pre-specified confidence level C. That is,
: P(L(X) \leq \theta_1' \leq U(X) \mid \theta_1') = C
where 0 \leq C \leq 1, and L(X) and U(X) are the lower and upper confidence limits for \theta_1.
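To make these definitions concrete, the following sketch builds L(X) and U(X) for a single observation X \sim N(\theta, 1) by constructing, for each candidate \theta, a central region of probability C in the sample space and then inverting it at the observed value. This is illustrative code, not from Neyman's paper; the Gaussian toy model and the helper names acceptance_interval and neyman_interval are assumptions.

```python
# Minimal sketch of the Neyman construction for one observation X ~ N(theta, 1).
# The helper names and the Gaussian toy model are illustrative assumptions.
import numpy as np
from scipy.stats import norm

C = 0.95  # pre-specified confidence level

def acceptance_interval(theta, C=C):
    """Central interval in x that has probability C when theta is the true value."""
    z = norm.ppf(0.5 + C / 2.0)
    return theta - z, theta + z

def neyman_interval(x_obs, theta_grid):
    """Invert the belt: keep every theta whose acceptance interval contains x_obs."""
    accepted = [t for t in theta_grid
                if acceptance_interval(t)[0] <= x_obs <= acceptance_interval(t)[1]]
    return min(accepted), max(accepted)  # L(x), U(x)

theta_grid = np.linspace(-10.0, 10.0, 2001)
L, U = neyman_interval(x_obs=1.3, theta_grid=theta_grid)
print(f"L = {L:.2f}, U = {U:.2f}")  # roughly -0.66 and 3.26 for this toy model
```

By construction, whatever the true \theta is, the random interval [L(X), U(X)] contains it with probability C.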


Coverage probability

The coverage probability, C, for the Neyman construction is the fraction of experiments in which the resulting confidence interval contains the true value of interest. The coverage probability is commonly set to 95%, but in general it can be any value C with 0 < C < 1; it expresses how confident we are that the true value will be contained in the interval.
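A short Monte Carlo check of this frequency interpretation, again assuming the single-observation Gaussian model X \sim N(\theta, 1) used in the sketch above (the model, the true value, and the seed are assumptions made for illustration):

```python
# Empirical coverage check: the interval [x - z, x + z] should contain the true
# theta in about a fraction C of repeated experiments (toy Gaussian model).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
C, theta_true, n_experiments = 0.95, 2.0, 100_000
z = norm.ppf(0.5 + C / 2.0)

x = rng.normal(theta_true, 1.0, size=n_experiments)   # one observation per experiment
covered = (x - z <= theta_true) & (theta_true <= x + z)
print(covered.mean())  # expected to be close to 0.95
```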


Implementation

A Neyman construction can be carried out by performing multiple pseudo-experiments that generate data sets corresponding to each given value of the parameter. The experiments are fitted with conventional methods, and the space of fitted parameter values constitutes the band from which the confidence interval can be selected, as in the sketch below.
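A rough sketch of this recipe for a toy Poisson counting experiment. The Poisson model, the parameter grid, and the helper names are assumptions; the "conventional fit" here is simply the maximum-likelihood estimate of the mean, which for a single observed count is the count itself.

```python
# Build the confidence belt by simulating pseudo-experiments at each parameter value,
# then invert it at the observed count (toy Poisson model, illustrative only).
import numpy as np

rng = np.random.default_rng(1)
C, n_toys = 0.90, 20_000
mu_grid = np.arange(0.0, 20.0, 0.05)

belt = []
for mu in mu_grid:
    toys = rng.poisson(mu, size=n_toys)                       # pseudo-experiments at this mu
    fits = toys.astype(float)                                 # MLE of mu for a single count is the count itself
    lo, hi = np.quantile(fits, [(1 - C) / 2, (1 + C) / 2])    # central band of fitted values
    belt.append((mu, lo, hi))

def confidence_interval(n_obs):
    """All mu whose band of fitted values contains the observed fit."""
    accepted = [mu for mu, lo, hi in belt if lo <= n_obs <= hi]
    return min(accepted), max(accepted)

print(confidence_interval(n_obs=5))  # approximate 90% CL limits on mu for 5 observed counts
```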


Classic example

Suppose X \sim N(\theta, \sigma^2), where \theta and \sigma^2 are unknown constants and we wish to estimate \theta. We can define two single-valued functions, L and U, by the process above such that, given a pre-specified confidence level C and a random sample X^* = (x_1, x_2, \ldots, x_n),
: L(X^*) = \bar{x} - t \frac{s}{\sqrt{n}}
: U(X^*) = \bar{x} + t \frac{s}{\sqrt{n}}
where s/\sqrt{n} is the standard error, and the sample mean and sample standard deviation are
: \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = \frac{1}{n}(x_1 + x_2 + \cdots + x_n)
: s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}
The factor t is the critical value t_{\alpha/2,\, n-1} of Student's ''t'' distribution with n-1 degrees of freedom, where \alpha = 1 - C.
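A short numerical illustration of this t-based interval; the sample values below are hypothetical.

```python
# Compute the t interval L(X*), U(X*) for a hypothetical sample.
import numpy as np
from scipy import stats

x = np.array([4.1, 5.3, 4.8, 5.9, 5.1, 4.4, 5.6, 5.0])  # hypothetical data
n, C = len(x), 0.95

xbar = x.mean()
s = x.std(ddof=1)                           # sample standard deviation (n - 1 in the denominator)
t = stats.t.ppf(1 - (1 - C) / 2, df=n - 1)  # critical value t_{alpha/2, n-1}

L = xbar - t * s / np.sqrt(n)
U = xbar + t * s / np.sqrt(n)
print(f"{int(C * 100)}% interval for theta: [{L:.2f}, {U:.2f}]")
```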


Another example

Suppose X_1, X_2, \ldots, X_n are iid N(\mu, \sigma^2) random variables with \sigma^2 known, and let T = (X_1, X_2, \ldots, X_n) denote the sample. We wish to construct a confidence interval for \mu with confidence level C, and we write \alpha = 1 - C. The sample mean \bar{X} is sufficient for \mu, with \bar{X} \sim N(\mu, \sigma^2/n). So,
: P\left(-Z_{\alpha/2} \le \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le Z_{\alpha/2}\right) = C
: P\left(-Z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \le \bar{X} - \mu \le Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\right) = C
: P\left(\bar{X} - Z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\right) = C
This produces a 100C\% confidence interval for \mu, where
: L(T) = \bar{X} - Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}
: U(T) = \bar{X} + Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}.
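The same interval computed numerically for a hypothetical sample with known \sigma; the sample values and the value of \sigma are assumptions made for illustration.

```python
# Compute the z interval L(T), U(T) for known sigma.
import numpy as np
from scipy.stats import norm

x = np.array([9.8, 10.4, 10.1, 9.6, 10.7, 10.2])  # hypothetical data
sigma, C = 0.5, 0.95                               # sigma assumed known

xbar = x.mean()
z = norm.ppf(1 - (1 - C) / 2)                      # critical value Z_{alpha/2}, about 1.96 for C = 0.95

L = xbar - z * sigma / np.sqrt(len(x))
U = xbar + z * sigma / np.sqrt(len(x))
print(f"{int(C * 100)}% interval for mu: [{L:.2f}, {U:.2f}]")
```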


See also

* Probability interpretations

