Empirical likelihood (EL) is a nonparametric method that requires fewer assumptions about the error distribution while retaining some of the merits of likelihood-based inference. The estimation method requires that the data be independent and identically distributed (iid). It performs well even when the distribution is asymmetric or censored. EL methods can also handle constraints and prior information on parameters. Art Owen pioneered work in this area with his 1988 paper.


Estimation Procedure

EL estimates are calculated by maximizing the empirical likelihood function subject to constraints based on the estimating function and the trivial assumption that the probability weights of the likelihood function sum to 1. This procedure is represented as:

: \max_{\pi_1,\ldots,\pi_n} \sum_{i=1}^n \ln \pi_i

subject to the constraints

: \sum_{i=1}^n \pi_i = 1, \quad \sum_{i=1}^n \pi_i h(y_i;\theta) = 0, \quad 0 \le \pi_i \ \forall i.

The value of the parameter \theta can be found by solving the Lagrangian function

: \mathcal{L} = \sum_{i=1}^n \ln \pi_i + \mu \left(1 - \sum_{i=1}^n \pi_i\right) - n\tau' \sum_{i=1}^n \pi_i h(y_i;\theta).

There is a clear analogy between this maximization problem and the one solved for maximum entropy.

The empirical-likelihood method can also be employed for discrete distributions:

: F(x_i) = p_i, \quad i = 1,\ldots,n,

where

: p_i \geq 0, \quad \sum_{i=1}^n p_i = 1.

Then the likelihood L(p_1,\ldots,p_n) = \prod_{i=1}^n p_i is referred to as an empirical likelihood.
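In practice the weights are recovered through the convex dual of this program: at the optimum, \pi_i = 1/(n(1+\lambda' h(y_i;\theta))), where the multiplier \lambda solves \sum_i h(y_i;\theta)/(1+\lambda' h(y_i;\theta)) = 0. The following is a minimal numerical sketch of that computation in Python; the function names and the use of Owen's pseudo-logarithm (which keeps the dual objective finite and convex when some 1+\lambda'h_i approaches zero) are implementation choices, not prescribed by the text above.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def el_log_ratio(h):
    """Profile log empirical likelihood ratio sum(ln(n*pi_i)) at a fixed theta.

    h -- (n, d) array holding h(y_i; theta) for each observation i.
    Returns the log ratio and the optimal probability weights pi_i.
    """
    n, d = h.shape
    eps = 1.0 / n  # threshold below which log is replaced by a quadratic

    def log_star(z):
        # Owen's pseudo-logarithm: log(z) for z >= eps, a matching quadratic
        # below, so the dual objective is defined (and convex) for all lambda.
        quad = np.log(eps) - 1.5 + 2.0 * z / eps - z**2 / (2.0 * eps**2)
        return np.where(z >= eps, np.log(np.maximum(z, eps)), quad)

    def d_log_star(z):
        return np.where(z >= eps, 1.0 / np.maximum(z, eps), 2.0 / eps - z / eps**2)

    # Dual problem: minimize -sum log(1 + lambda' h_i) over lambda.
    def objective(lam):
        return -np.sum(log_star(1.0 + h @ lam))

    def gradient(lam):
        return -(h.T @ d_log_star(1.0 + h @ lam))

    lam = minimize(objective, np.zeros(d), jac=gradient, method="BFGS").x
    z = 1.0 + h @ lam
    if np.any(z <= 0):  # hypothesized theta too far outside the data's hull
        return -np.inf, None
    return -np.sum(np.log(z)), 1.0 / (n * z)  # log ratio and weights pi_i
</syntaxhighlight>

For the mean, the estimating function is h(y;\theta) = y - \theta, so el_log_ratio((y - mu0).reshape(-1, 1)) profiles the hypothesis that the mean equals mu0.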


Empirical Likelihood Ratio (ELR)

An empirical likelihood ratio function is defined and used to obtain confidence intervals for the parameter of interest θ, similar to parametric likelihood ratio confidence intervals. Let L(F) be the empirical likelihood of F; then the ELR is R(F) = L(F)/L(F_n), where F_n is the empirical distribution function. Consider sets of the form C = \{ T(F) \mid R(F) \geq r \}. Under such conditions a test of T(F) = t rejects when t does not belong to C, that is, when no distribution F with T(F) = t has likelihood L(F) \geq r L(F_n). The central result is for the mean of X. Clearly, some restrictions on F are needed, or else C = \reals^p whenever r < 1. To see this, let

: F = \epsilon \delta_x + (1-\epsilon) F_n,

where \delta_x is a point mass at x. If \epsilon > 0 is small enough, then R(F) \geq r. But then, as x ranges through \reals^p, so does the mean of F, tracing out C = \reals^p. The problem can be solved by restricting to distributions F that are supported in a bounded set. It turns out to be possible to restrict attention to distributions with support in the sample, in other words, to distributions F \ll F_n. Such a method is convenient since the statistician might not be willing to specify a bounded support for F, and since it converts the construction of C into a finite-dimensional problem.
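As a concrete illustration for the mean: by Owen's empirical likelihood theorem, -2 \ln R(\mu_0) is asymptotically \chi^2_1 at the true mean, so C can be computed by collecting the candidate means that the calibrated ELR test does not reject. The sketch below assumes the el_log_ratio helper from the previous section; the grid search over the sample range reflects the restriction F \ll F_n, under which the mean of F must lie in the convex hull of the sample.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

def el_mean_ci(y, level=0.95, grid_size=400):
    """Confidence set C for the mean, obtained by inverting the ELR test."""
    y = np.asarray(y, dtype=float)
    cutoff = chi2.ppf(level, df=1)  # Wilks-type chi-squared calibration
    # F << F_n confines the mean to (min(y), max(y)); scan that interval.
    grid = np.linspace(y.min(), y.max(), grid_size)[1:-1]
    kept = [mu for mu in grid
            if -2.0 * el_log_ratio((y - mu).reshape(-1, 1))[0] <= cutoff]
    return (min(kept), max(kept)) if kept else None

rng = np.random.default_rng(0)
print(el_mean_ci(rng.exponential(size=50)))  # an interval around the true mean 1
</syntaxhighlight>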


Other Applications

The use of empirical likelihood is not limited to confidence intervals. In quantile estimation, an EL-based categorization procedure helps determine the shape of the true discrete distribution at level p, and also provides a way of formulating a consistent estimator. In addition, EL can be used in place of parametric likelihood to form model selection criteria.
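As a rough sketch of the quantile case (the categorization procedure mentioned above is not reproduced here): the p-th quantile \theta fits the estimating-equation framework with the standard choice h(y;\theta) = 1\{y \le \theta\} - p, so the same machinery yields an ELR for a hypothesized quantile.

<syntaxhighlight lang="python">
def el_quantile_log_ratio(y, theta, p):
    """ELR for the hypothesis that the p-th quantile equals theta, via the
    estimating function h(y; theta) = 1{y <= theta} - p.
    Reuses the el_log_ratio helper sketched in the estimation section."""
    h = ((np.asarray(y) <= theta).astype(float) - p).reshape(-1, 1)
    return el_log_ratio(h)
</syntaxhighlight>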


See also

* Bootstrapping (statistics)
* Jackknife (statistics)

