In statistics, the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood ratio as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts such as errors of the second kind, the power function, and inductive behavior.
[The Fisher, Neyman–Pearson Theories of Testing Hypotheses: One Theory or Two? Journal of the American Statistical Association, Vol. 88, No. 424] [Wald: Chapter II: The Neyman–Pearson Theory of Testing a Statistical Hypothesis] [The Empire of Chance]
/ref> The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest but it does prove that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly proceeded to restrict their attention to the class of all level tests while subsequently minimizing type II error, traditionally denoted by . Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power
Power may refer to:
Common meanings
* Power (physics), meaning "rate of doing work"
** Engine power, the power put out by an engine
** Electric power, a type of energy
* Power (social and political), the ability to influence people or events
Math ...
that retain a prespecified level of type I error (), but also providing a way to construct such tests. The Karlin-Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.
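As a numerical illustration (not from the original paper), the following sketch fixes the type I error $\alpha$ for the simple Gaussian hypotheses $H_0 : \mu = 0$ versus $H_1 : \mu = 1$ with $n$ i.i.d. $N(\mu, 1)$ observations and computes the type II error $\beta$ of the resulting most powerful test, which rejects for large sample means. The values of alpha, n, and mu1 are illustrative choices.

import numpy as np
from scipy.stats import norm

alpha = 0.05   # prespecified type I error (level of the test)
n = 25         # sample size
mu1 = 1.0      # mean under the alternative H1

# Under H0 the sample mean is N(0, 1/n), so the level-alpha cutoff is:
c = norm.ppf(1 - alpha) / np.sqrt(n)

# Type II error: probability that the sample mean falls below c under H1,
# where the sample mean is N(mu1, 1/n).
beta = norm.cdf((c - mu1) * np.sqrt(n))

print(f"cutoff c = {c:.4f}, alpha = {alpha:.3f}, beta = {beta:.6f}, power = {1 - beta:.6f}")

Raising alpha lowers beta and vice versa, which is the trade-off that motivated fixing the level first and then maximizing power.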
Statement
Consider a test with hypotheses $H_0 : \theta = \theta_0$ and $H_1 : \theta = \theta_1$, where the probability density function (or probability mass function) is $\rho(x \mid \theta_i)$ for $i = 0, 1$.
For any hypothesis test with rejection set $R$, and any $\alpha \in [0, 1]$, we say that it satisfies condition $P_\alpha$ if
* $\alpha = \Pr_{\theta_0}(X \in R)$, that is, the test has size $\alpha$, and
* there exists a threshold $\eta \geq 0$ such that, outside a set of probability zero under both hypotheses, $x \in R$ whenever $\rho(x \mid \theta_1) > \eta\,\rho(x \mid \theta_0)$ and $x \notin R$ whenever $\rho(x \mid \theta_1) < \eta\,\rho(x \mid \theta_0)$.
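As a concrete sketch of this setup (an illustration, not part of the original statement), the code below implements the likelihood-ratio test for a single observation under $H_0 : \theta = 0$ versus $H_1 : \theta = 1$ in a $N(\theta, 1)$ model, calibrating the threshold $\eta$ by Monte Carlo so that the rejection set $R = \{x : \rho(x \mid \theta_1)/\rho(x \mid \theta_0) > \eta\}$ has size approximately $\alpha$. All names and parameter values are illustrative.

import numpy as np
from scipy.stats import norm

def likelihood_ratio(x, theta0=0.0, theta1=1.0):
    # rho(x | theta1) / rho(x | theta0) for a N(theta, 1) model
    return norm.pdf(x, loc=theta1) / norm.pdf(x, loc=theta0)

alpha = 0.05
rng = np.random.default_rng(0)

# Calibrate eta as the (1 - alpha)-quantile of the ratio under H0.
x0_cal = rng.normal(loc=0.0, size=200_000)
eta = np.quantile(likelihood_ratio(x0_cal), 1 - alpha)

# Check the empirical size on fresh H0 draws, and the power under H1.
x0_new = rng.normal(loc=0.0, size=200_000)
x1 = rng.normal(loc=1.0, size=200_000)
size = np.mean(likelihood_ratio(x0_new) > eta)
power = np.mean(likelihood_ratio(x1) > eta)
print(f"eta = {eta:.4f}, size = {size:.4f}, power = {power:.4f}")

For this Gaussian pair the ratio equals $\exp(x - 1/2)$, which is increasing in $x$, so thresholding the ratio is equivalent to thresholding $x$ itself; this monotonicity is the property that the Karlin–Rubin extension exploits.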