Kuiper's test is used in statistics to test whether a data sample comes from a given distribution (one-sample Kuiper test), or whether two data samples come from the same unknown distribution (two-sample Kuiper test). It is named after the Dutch mathematician Nicolaas Kuiper.
Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K-S test, as it is often called). As with the K-S test, the discrepancy statistics ''D''<sup>+</sup> and ''D''<sup>−</sup> represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions that are being compared. The trick with Kuiper's test is to use the quantity ''D''<sup>+</sup> + ''D''<sup>−</sup> as the test statistic. This small change makes Kuiper's test as sensitive in the tails as at the median, and also makes it invariant under cyclic transformations of the independent variable. The Anderson–Darling test is another test that provides equal sensitivity at the tails as at the median, but it does not provide the cyclic invariance.
This invariance under cyclic transformations makes Kuiper's test invaluable when testing for cyclic variations by time of year, day of the week, or time of day, and more generally for testing the fit of, and differences between, circular probability distributions.
One-sample Kuiper test

The one-sample test statistic, <math>V_n</math>, for Kuiper's test is defined as follows. Let ''F'' be the continuous cumulative distribution function which is to be the null hypothesis. Denote by ''F''<sub>''n''</sub> the empirical distribution function for ''n'' independent and identically distributed (i.i.d.) observations ''X''<sub>''i''</sub>, which is defined as
:<math>F_n(x) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{[X_i \le x]},</math>
where <math>\mathbf{1}_{[X_i \le x]}</math> is the indicator function, equal to 1 if <math>X_i \le x</math> and equal to 0 otherwise.
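For illustration only (this sketch is not part of the original treatment, and the sample values are hypothetical), the empirical distribution function can be evaluated at a point by averaging the indicator function over the observations:
<syntaxhighlight lang="python">
import numpy as np

def edf(sample, x):
    """Empirical distribution function F_n(x): the fraction of observations <= x."""
    return np.mean(np.asarray(sample) <= x)

# Hypothetical observations; 3 of the 5 are <= 0.5, so F_n(0.5) = 0.6
observations = [0.1, 0.35, 0.4, 0.8, 0.95]
print(edf(observations, 0.5))
</syntaxhighlight>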
Then the one-sided Kolmogorov–Smirnov statistics for the given cumulative distribution function ''F''(''x'') are
:<math>D_n^+ = \sup_x \left[ F_n(x) - F(x) \right],</math>
:<math>D_n^- = \sup_x \left[ F(x) - F_n(x) \right],</math>
where <math>\sup_x</math> is the supremum function. Finally, the one-sample Kuiper test statistic is defined as
:<math>V_n = D_n^+ + D_n^-,</math>
or equivalently
:<math>V_n = \sup_x \left[ F_n(x) - F(x) \right] - \inf_x \left[ F_n(x) - F(x) \right],</math>
where <math>\inf_x</math> is the infimum function.
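As a rough computational sketch (the function name and the uniform example below are assumptions of this illustration, not part of the original article), the statistic can be computed by comparing ''F''<sub>''n''</sub> with ''F'' just before and just after each sorted observation, which is where the extremes of the difference occur:
<syntaxhighlight lang="python">
import numpy as np

def kuiper_statistic(sample, cdf):
    """One-sample Kuiper statistic V_n = D_n^+ + D_n^- against a continuous CDF."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = cdf(x)                          # F evaluated at the sorted observations
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - f)          # sup_x [F_n(x) - F(x)], just after each jump
    d_minus = np.max(f - (i - 1) / n)   # sup_x [F(x) - F_n(x)], just before each jump
    return d_plus + d_minus

# Example: a hypothetical sample tested against the standard uniform CDF on [0, 1]
rng = np.random.default_rng(0)
sample = rng.uniform(size=100)
print(kuiper_statistic(sample, lambda t: np.clip(t, 0.0, 1.0)))
</syntaxhighlight>
Because ''F''<sub>''n''</sub> is a step function and ''F'' is non-decreasing, checking the difference just before and just after each of the ''n'' observations is sufficient to find both suprema exactly.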
Tables for the critical points of the test statistic <math>V_n</math> are available, and these include certain cases where the distribution being tested is not fully known, so that parameters of the family of distributions are estimated.
The asymptotic distribution of the statistic <math>\sqrt{n}\,V_n</math> is given by
:<math>\lim_{n\to\infty} \Pr\left(\sqrt{n}\,V_n > x\right) = 2\sum_{k=1}^{\infty} \left(4k^2x^2 - 1\right)e^{-2k^2x^2}.</math>
For large values of ''x'', a reasonable approximation is obtained from the first term of the series:
:<math>\Pr\left(\sqrt{n}\,V_n > x\right) \approx 2\left(4x^2 - 1\right)e^{-2x^2}.</math>
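As a numerical sketch (an assumption of this presentation rather than a prescribed procedure, and ignoring finite-sample corrections to the asymptotic formula), the tail probability can be approximated by truncating the series after a modest number of terms; the one-term version corresponds to the approximation above:
<syntaxhighlight lang="python">
import numpy as np

def kuiper_asymptotic_pvalue(v, n, terms=100):
    """Approximate Pr(sqrt(n) * V_n > x) from the asymptotic series, with x = sqrt(n) * v."""
    x = np.sqrt(n) * v
    k = np.arange(1, terms + 1)
    return 2.0 * np.sum((4.0 * k**2 * x**2 - 1.0) * np.exp(-2.0 * k**2 * x**2))

def kuiper_pvalue_one_term(v, n):
    """One-term approximation, adequate when sqrt(n) * V_n is large."""
    x = np.sqrt(n) * v
    return 2.0 * (4.0 * x**2 - 1.0) * np.exp(-2.0 * x**2)

# Example with a hypothetical observed statistic V_n = 0.2 and n = 100
print(kuiper_asymptotic_pvalue(0.2, 100), kuiper_pvalue_one_term(0.2, 100))
</syntaxhighlight>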
Two-sample Kuiper test
The Kuiper test may also be used to test whether a pair of random samples, either on the real line or the circle, come from a common but unknown distribution. In this case, the Kuiper statistic is
:<math>V_{n,m} = \sup_x \left[ F_{1,n}(x) - F_{2,m}(x) \right] - \inf_x \left[ F_{1,n}(x) - F_{2,m}(x) \right],</math>
where <math>F_{1,n}</math> and <math>F_{2,m}</math> are the empirical distribution functions of the first and the second sample respectively, <math>\sup_x</math> is the supremum function, and <math>\inf_x</math> is the infimum function.
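A minimal sketch of the two-sample statistic (illustrative only; the function name and the input samples are hypothetical): the difference of the two empirical distribution functions is a step function, so its supremum and infimum can be found by evaluating it at every observed point.
<syntaxhighlight lang="python">
import numpy as np

def kuiper_two_sample(sample1, sample2):
    """Two-sample Kuiper statistic: sup_x [F1(x) - F2(x)] - inf_x [F1(x) - F2(x)]."""
    sample1 = np.sort(np.asarray(sample1, dtype=float))
    sample2 = np.sort(np.asarray(sample2, dtype=float))
    # The difference of the two EDFs is a step function whose extremes are
    # attained at the observed points of either sample (or equal 0 elsewhere).
    points = np.concatenate([sample1, sample2])
    f1 = np.searchsorted(sample1, points, side='right') / len(sample1)
    f2 = np.searchsorted(sample2, points, side='right') / len(sample2)
    diff = f1 - f2
    return diff.max() - diff.min()

# Hypothetical example: two samples drawn from the same distribution
rng = np.random.default_rng(2)
print(kuiper_two_sample(rng.normal(size=80), rng.normal(size=120)))
</syntaxhighlight>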
Example
We could test the hypothesis that computers fail more during some times of the year than others. To test this, we would collect the dates on which the test set of computers had failed and build an empirical distribution function. The null hypothesis is that the failures are uniformly distributed. Kuiper's statistic does not change if we change the beginning of the year, and it does not require that we bin failures into months or the like.<ref>Watson, G.S. (1961) "Goodness-of-Fit Tests on a Circle", ''Biometrika'', 48 (1/2), 109–114.</ref> Another test statistic having this property is the Watson statistic,<ref>Pearson, E.S., Hartley, H.O. (1972) ''Biometrika Tables for Statisticians, Volume 2'', CUP. (Page 118)</ref> which is related to the Cramér–von Mises test.
However, if failures occur mostly on weekends, many uniform-distribution tests such as the K-S test and Kuiper's test would miss this, since weekends are spread throughout the year. This inability to distinguish distributions with a comb-like shape from continuous uniform distributions is a key problem with all statistics based on a variant of the K-S test. Kuiper's test, applied to the event times modulo one week, is able to detect such a pattern. Applying the K-S test to event times reduced modulo one week can give different results depending on how the data are phased: in this example, the K-S test may detect the non-uniformity if the data are set to start the week on Saturday, but fail to detect it if the week starts on Wednesday.
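As an illustrative sketch of this idea (the failure times below are synthetic, not data from the article), the event times can be reduced modulo one week, rescaled to [0, 1), and tested against the uniform distribution; shifting the origin of the week leaves the Kuiper statistic unchanged.
<syntaxhighlight lang="python">
import numpy as np

def kuiper_statistic(sample, cdf):
    """One-sample Kuiper statistic V_n = D_n^+ + D_n^- (same definition as above)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = cdf(x)
    i = np.arange(1, n + 1)
    return np.max(i / n - f) + np.max(f - (i - 1) / n)

# Synthetic failure times in days, deliberately clustered near the weekend
rng = np.random.default_rng(1)
failure_days = rng.integers(0, 52, size=200) * 7 + rng.uniform(5.0, 7.0, size=200)

# Reduce modulo one week, rescale to [0, 1), and test against the uniform CDF
phase = (failure_days % 7.0) / 7.0
print(kuiper_statistic(phase, lambda t: t))

# Changing which weekday is taken as the origin does not change the statistic
shifted = ((failure_days + 3.0) % 7.0) / 7.0
print(kuiper_statistic(shifted, lambda t: t))
</syntaxhighlight>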
See also
* Kolmogorov–Smirnov test
References
{{Reflist}}
[[Category:Statistical tests]]
[[Category:Nonparametric statistics]]
[[Category:Directional statistics]]
[[Category:1960 introductions]]