Wilcoxon signed-rank test
The Wilcoxon signed-rank test is a non-parametric rank test for statistical hypothesis testing, used either to test the location of a population based on a sample of data or to compare the locations of two populations using two matched samples (p. 350). The one-sample version serves a purpose similar to that of the one-sample Student's ''t''-test. For two matched samples, it is a paired difference test like the paired Student's ''t''-test (also known as the "''t''-test for matched pairs" or "''t''-test for dependent samples"). The Wilcoxon test is a good alternative to the ''t''-test when the normal distribution of the differences between paired individuals cannot be assumed. Instead, it assumes a weaker hypothesis: that the distribution of the differences is symmetric about some central value, and it tests whether this central value differs significantly from zero. The Wilcoxon test is a more powerful alternative to the sign test because it considers the magnitude of the differences, but it requires this moderately strong assumption of symmetry.


History

The test is named after Frank Wilcoxon (1892–1965) who, in a single paper, proposed both it and the rank-sum test for two independent samples. The test was popularized by Sidney Siegel (1956) in his influential textbook on non-parametric statistics. Siegel used the symbol ''T'' for the test statistic, and consequently, the test is sometimes referred to as the Wilcoxon ''T''-test.


Test procedure

There are two variants of the signed-rank test. From a theoretical point of view, the one-sample test is more fundamental because the paired sample test is performed by converting the data to the situation of the one-sample test. However, most practical applications of the signed-rank test arise from paired data.

For a paired sample test, the data consists of a sample (X_1, Y_1), \dots, (X_n, Y_n). Each data point in the sample is a pair of measurements. In the simplest case, the measurements are on an interval scale. Then they may be converted to real numbers, and the paired sample test is converted to a one-sample test by replacing each pair of numbers (X_i, Y_i) by its difference X_i - Y_i. In general, it must be possible to rank the differences between the pairs. This requires that the data be on an ''ordered metric'' scale, a type of scale that carries more information than an ordinal scale but may carry less than an interval scale (Siegel, p. 76).

The data for a one-sample test is a sample in which each observation is a real number: X_1, \dots, X_n. Assume for simplicity that the observations in the sample have distinct absolute values and that no observation equals zero. (Zeros and ties introduce several complications; see below.) The test is performed as follows (Conover, p. 353):
# Compute |X_1|, \dots, |X_n|.
# Sort |X_1|, \dots, |X_n|, and use this sorted list to assign ranks R_1, \dots, R_n: the rank of the smallest observation is one, the rank of the next smallest is two, and so on.
# Let \sgn denote the sign function: \sgn(x) = 1 if x > 0 and \sgn(x) = -1 if x < 0. The test statistic is the ''signed-rank sum'' T:
#: T = \sum_{i=1}^n \sgn(X_i) R_i.
# Produce a ''p''-value by comparing T to its distribution under the null hypothesis.

The ranks are defined so that R_i is the number of j for which |X_j| \le |X_i|. Additionally, if \sigma : \{1, \dots, n\} \to \{1, \dots, n\} is such that |X_{\sigma(1)}| < \dots < |X_{\sigma(n)}|, then R_{\sigma(i)} = i for all i.

The signed-rank sum T is closely related to two other test statistics. The ''positive-rank sum'' T^+ and the ''negative-rank sum'' T^- are defined by (Pratt and Gibbons, p. 148)
: \begin{align} T^+ &= \sum_{X_i > 0} R_i, \\ T^- &= \sum_{X_i < 0} R_i. \end{align}
Because T^+ + T^- equals the sum of all the ranks, which is 1 + 2 + \dots + n = n(n + 1)/2, these three statistics are related by:
: \begin{align} T^+ &= \frac{n(n + 1)}{2} - T^- = \frac{n(n + 1)}{4} + \frac{T}{2}, \\ T^- &= \frac{n(n + 1)}{2} - T^+ = \frac{n(n + 1)}{4} - \frac{T}{2}, \\ T &= T^+ - T^- = 2T^+ - \frac{n(n + 1)}{2} = \frac{n(n + 1)}{2} - 2T^-. \end{align}
Because T, T^+, and T^- carry the same information, any of them may be used as the test statistic.

The positive-rank sum and negative-rank sum have alternative interpretations that are useful for the theory behind the test. Define the ''Walsh average'' W_{ij} to be \tfrac12(X_i + X_j). Then:
: \begin{align} T^+ &= \#\{ (i, j) : 1 \le i \le j \le n,\ W_{ij} > 0 \}, \\ T^- &= \#\{ (i, j) : 1 \le i \le j \le n,\ W_{ij} < 0 \}. \end{align}
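The procedure above translates directly into code. The following is a minimal Python sketch of the basic one-sample computation under the stated simplifying assumptions (distinct absolute values, no zeros); the function name and example data are illustrative, not part of any standard library.

import numpy as np

def signed_rank_statistics(x):
    """Compute T, T+, and T- for a one-sample signed-rank test.

    Assumes all |x_i| are distinct and nonzero, as in the basic
    procedure described above.
    """
    x = np.asarray(x, dtype=float)
    # Rank the absolute values: rank 1 for the smallest |x_i|, and so on.
    ranks = np.argsort(np.argsort(np.abs(x))) + 1
    T = int(np.sum(np.sign(x) * ranks))
    T_plus = int(np.sum(ranks[x > 0]))
    T_minus = int(np.sum(ranks[x < 0]))
    return T, T_plus, T_minus

T, T_plus, T_minus = signed_rank_statistics([1.1, -2.3, 0.4, 5.6, -3.2])
n = 5
# The identities relating the three statistics:
assert T == T_plus - T_minus
assert T_plus + T_minus == n * (n + 1) // 2
print(T, T_plus, T_minus)  # 1 8 7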


Null and alternative hypotheses


One-sample test

The one-sample Wilcoxon signed-rank test can be used to test whether data comes from a symmetric population with a specified center (which corresponds to the median, mean, and pseudomedian). If the population center is known, then it can be used to test whether data is symmetric about its center (pp. 32, 50).

To explain the null and alternative hypotheses formally, assume that the data consists of independent and identically distributed samples from a distribution F. If F can be assumed symmetric, then the null and alternative hypotheses are the following:
; Null hypothesis ''H''0
: F is symmetric about \mu = 0.
; One-sided alternative hypothesis ''H''1
: F is symmetric about \mu < 0.
; One-sided alternative hypothesis ''H''2
: F is symmetric about \mu > 0.
; Two-sided alternative hypothesis ''H''3
: F is symmetric about \mu \neq 0.
If in addition \Pr(X = \mu) = 0, then \mu is a median of F. If this median is unique, then the Wilcoxon signed-rank test becomes a test for the location of the median. When the mean of F is defined, the mean equals \mu, and the test is also a test for the location of the mean.

The restriction that the alternative distribution is symmetric is highly restrictive, but for one-sided tests it can be weakened. Say that F is ''stochastically smaller than a distribution symmetric about zero'' if an F-distributed random variable X satisfies \Pr(X < -x) \ge \Pr(X > x) for all x \ge 0. Similarly, F is ''stochastically larger than a distribution symmetric about zero'' if \Pr(X < -x) \le \Pr(X > x) for all x \ge 0. Then the Wilcoxon signed-rank test can also be used for the following null and alternative hypotheses (Hettmansperger, pp. 49–50):
; Null hypothesis ''H''0
: F is symmetric about \mu = 0.
; One-sided alternative hypothesis ''H''1
: F is stochastically smaller than a distribution symmetric about zero.
; One-sided alternative hypothesis ''H''2
: F is stochastically larger than a distribution symmetric about zero.

The hypothesis that the data are IID can be weakened. Each data point may be taken from a different distribution, as long as all the distributions are assumed to be continuous and symmetric about a common point \mu_0. The data points are not required to be independent, as long as the conditional distribution of each observation given the others is symmetric about \mu_0.
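These hypotheses map directly onto the alternative parameter of scipy.stats.wilcoxon. The sketch below is illustrative: the data are randomly generated, and a nonzero hypothesized center \mu_0 is handled by subtracting it from the sample first.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=25)  # illustrative sample
mu0 = 0.0                                    # hypothesized center

# H0: the data are symmetric about mu0.
d = x - mu0
print(stats.wilcoxon(d, alternative="two-sided"))  # H3: center != mu0
print(stats.wilcoxon(d, alternative="less"))       # H1: center < mu0
print(stats.wilcoxon(d, alternative="greater"))    # H2: center > mu0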


Paired data test

Because the paired data test arises from taking paired differences, its null and alternative hypotheses can be derived from those of the one-sample test. In each case, they become assertions about the behavior of the differences X_i - Y_i. Let F(x, y) be the joint cumulative distribution of the pairs (X_i, Y_i). In this case, the null and alternative hypotheses are (pp. 39–41):
; Null hypothesis ''H''0
: The observations X_i - Y_i are symmetric about \mu = 0.
; One-sided alternative hypothesis ''H''1
: The observations X_i - Y_i are symmetric about \mu < 0.
; One-sided alternative hypothesis ''H''2
: The observations X_i - Y_i are symmetric about \mu > 0.
; Two-sided alternative hypothesis ''H''3
: The observations X_i - Y_i are symmetric about \mu \neq 0.
These can also be expressed more directly in terms of the original pairs (Pratt and Gibbons, p. 147):
; Null hypothesis ''H''0
: The observations (X_i, Y_i) are ''exchangeable'', meaning that (X_i, Y_i) and (Y_i, X_i) have the same distribution. Equivalently, F(x, y) = F(y, x).
; One-sided alternative hypothesis ''H''1
: For some \mu < 0, the pairs (X_i, Y_i) and (Y_i + \mu, X_i - \mu) have the same distribution.
; One-sided alternative hypothesis ''H''2
: For some \mu > 0, the pairs (X_i, Y_i) and (Y_i + \mu, X_i - \mu) have the same distribution.
; Two-sided alternative hypothesis ''H''3
: For some \mu \neq 0, the pairs (X_i, Y_i) and (Y_i + \mu, X_i - \mu) have the same distribution.
The null hypothesis of exchangeability can arise from a matched pair experiment with a treatment group and a control group. Randomizing the treatment and control within each pair makes the observations exchangeable. For an exchangeable distribution, X_i - Y_i has the same distribution as Y_i - X_i, and therefore, under the null hypothesis, the distribution of the differences is symmetric about zero.

Because the one-sample test can be used as a one-sided test for stochastic dominance, the paired difference Wilcoxon test can be used to compare the following hypotheses:
; Null hypothesis ''H''0
: The observations (X_i, Y_i) are exchangeable.
; One-sided alternative hypothesis ''H''1
: The differences X_i - Y_i are stochastically smaller than a distribution symmetric about zero, that is, for every x \ge 0, \Pr(X_i < Y_i - x) \ge \Pr(X_i > Y_i + x).
; One-sided alternative hypothesis ''H''2
: The differences X_i - Y_i are stochastically larger than a distribution symmetric about zero, that is, for every x \ge 0, \Pr(X_i < Y_i - x) \le \Pr(X_i > Y_i + x).
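In software, the paired test is typically just the one-sample test applied to the differences. A minimal illustration with scipy.stats.wilcoxon (the data here are randomly generated for demonstration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(loc=0.0, scale=1.0, size=20)      # e.g., pre-treatment scores
x = y + rng.normal(loc=0.5, scale=0.5, size=20)  # e.g., post-treatment scores

# Paired two-sample call and the equivalent one-sample call on differences.
res_paired = stats.wilcoxon(x, y)
res_diff = stats.wilcoxon(x - y)
assert np.isclose(res_paired.pvalue, res_diff.pvalue)
print(res_paired)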


Zeros and ties

In real data, it sometimes happens that there is an observation X_i in the sample which equals zero or a pair (X_i, Y_i) with X_i = Y_i. It can also happen that there are tied observations. This means that for some i \neq j, we have X_i = X_j (in the one-sample case) or X_i - Y_i = X_j - Y_j (in the paired sample case). This is particularly common for discrete data. When this happens, the test procedure defined above is usually undefined because there is no way to uniquely rank the data. (The sole exception is if there is a single observation X_i which is zero and no other zeros or ties.) Because of this, the test statistic needs to be modified.


Zeros

Wilcoxon's original paper did not address the question of observations (or, in the paired sample case, differences) that equal zero. However, in later surveys, he recommended removing zeros from the sample. The standard signed-rank test could then be applied to the resulting data, as long as there were no ties. This is now called the ''reduced sample procedure''.

Pratt observed that the reduced sample procedure can lead to paradoxical behavior. He gives the following example. Suppose that we are in the one-sample situation and have the following thirteen observations:
:0, 2, 3, 4, 6, 7, 8, 9, 11, 14, 15, 17, −18.
The reduced sample procedure removes the zero. To the remaining data, it assigns the signed ranks:
:1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, −12.
This has a one-sided ''p''-value of 55/2^{12}, and therefore the sample is not significantly positive at any significance level \alpha < 55/2^{12} \approx 0.0134. Pratt argues that one would expect that decreasing the observations should certainly not make the data appear more positive. However, if the zero observation is decreased by an amount less than 2, or if all observations are decreased by an amount less than 1, then the signed ranks become:
:−1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, −13.
This has a one-sided ''p''-value of 109/2^{13}. Therefore the sample would be judged significantly positive at any significance level \alpha > 109/2^{13} \approx 0.0133. The paradox is that, if \alpha is between 109/2^{13} and 55/2^{12}, then ''decreasing'' an insignificant sample causes it to appear significantly ''positive''.

Pratt therefore proposed the ''signed-rank zero procedure''. This procedure includes the zeros when ranking the observations in the sample, but it excludes them from the test statistic; equivalently, it defines \sgn(0) = 0. Pratt proved that the signed-rank zero procedure has several desirable behaviors not shared by the reduced sample procedure:
# Increasing the observed values does not make a significantly positive sample insignificant, and it does not make an insignificant sample significantly negative.
# If the distribution of the observations is symmetric, then the values of \mu which the test does not reject form an interval.
# A sample is significantly positive, not significant, or significantly negative, if and only if it is so when the zeros are assigned arbitrary non-zero signs, if and only if it is so when the zeros are replaced with non-zero values which are smaller in absolute value than any non-zero observation.
# For a fixed significance threshold \alpha, and for a test which is randomized to have level exactly \alpha, the probability of calling a set of observations significantly positive (respectively, significantly negative) is a non-decreasing (respectively, non-increasing) function of the observations.
Pratt remarks that, when the signed-rank zero procedure is combined with the average rank procedure for resolving ties, the resulting test is a consistent test against the alternative hypothesis that, for all i \neq j, \Pr(X_i + X_j > 0) and \Pr(X_i + X_j < 0) differ by at least a fixed constant that is independent of i and j.

The signed-rank zero procedure has the disadvantage that, when zeros occur, the null distribution of the test statistic changes, so tables of ''p''-values can no longer be used. When the data is on a Likert scale with equally spaced categories, the signed-rank zero procedure is more likely to maintain the Type I error rate than the reduced sample procedure.

From the viewpoint of statistical efficiency, there is no perfect rule for handling zeros. Conover found examples of null and alternative hypotheses showing that neither Wilcoxon's nor Pratt's method is uniformly better than the other. When comparing a discrete uniform distribution to a distribution where probabilities linearly increase from left to right, Pratt's method outperforms Wilcoxon's. When testing a binomial distribution centered at zero to see whether the parameter of each Bernoulli trial is \tfrac12, Wilcoxon's method outperforms Pratt's.
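Both procedures are available in scipy.stats.wilcoxon through its zero_method parameter ("wilcox" for the reduced sample procedure, "pratt" for the signed-rank zero procedure). A sketch using Pratt's thirteen observations from the example above:

from scipy import stats

# Pratt's thirteen observations, including a zero.
x = [0, 2, 3, 4, 6, 7, 8, 9, 11, 14, 15, 17, -18]

# Reduced sample procedure: discard zeros before ranking.
print(stats.wilcoxon(x, zero_method="wilcox", alternative="greater"))

# Signed-rank zero procedure: rank with the zeros included, then drop
# their contribution from the statistic.
print(stats.wilcoxon(x, zero_method="pratt", alternative="greater"))

The reported ''p''-values may not exactly match the exact fractions above, since SciPy may fall back to a normal approximation depending on the method it selects.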


Ties

When the data does not have ties, the ranks R_i are used to calculate the test statistic. In the presence of ties, the ranks are not defined, and there are two main approaches to resolving this.

The most common procedure for handling ties, and the one originally recommended by Wilcoxon, is called the ''average rank'' or ''midrank procedure''. This procedure assigns numbers between 1 and ''n'' to the observations, with two observations getting the same number if and only if they have the same absolute value. These numbers are conventionally called ranks, even though the set of these numbers is not equal to \{1, \dots, n\} (except when there are no ties). The rank assigned to an observation is the average of the possible ranks it would have if the ties were broken in all possible ways. Once the ranks are assigned, the test statistic is computed in the same way as usual; a code sketch follows at the end of this section.

For example, suppose that the observations satisfy
: |X_3| < |X_2| = |X_5| < |X_6| < |X_1| = |X_4| = |X_7|.
In this case, X_3 is assigned rank 1, X_2 and X_5 are assigned rank (2 + 3)/2 = 2.5, X_6 is assigned rank 4, and X_1, X_4, and X_7 are assigned rank (5 + 6 + 7)/3 = 6. Formally, suppose that there is a set of observations all having the same absolute value v, that k - 1 observations have absolute value less than v, and that \ell observations have absolute value less than or equal to v. If the ties among the observations with absolute value v were broken, then these observations would occupy ranks k through \ell. The average rank procedure therefore assigns them the rank (k + \ell)/2.

Under the average rank procedure, the null distribution is different in the presence of ties. The average rank procedure also has some disadvantages that are similar to those of the reduced sample procedure for zeros. It is possible that a sample can be judged significantly positive by the average rank procedure, but increasing some of the values so as to break the ties, or breaking the ties in any way whatsoever, results in a sample that the test judges to be not significant (Pratt, p. 660). However, increasing all the observed values by the same amount cannot turn a significantly positive result into an insignificant one, nor an insignificant one into a significantly negative one. Furthermore, if the observations are distributed symmetrically, then the values of \mu which the test does not reject form an interval.

The other common option for handling ties is a tiebreaking procedure. In a tiebreaking procedure, the observations are assigned distinct ranks in the set \{1, \dots, n\}. The rank assigned to an observation depends on its absolute value and the tiebreaking rule. Observations with smaller absolute values are always given smaller ranks, just as in the standard rank-sum test. The tiebreaking rule is used to assign ranks to observations with the same absolute value. One advantage of tiebreaking rules is that they allow the use of standard tables for computing ''p''-values.

''Random tiebreaking'' breaks the ties at random. Under random tiebreaking, the null distribution is the same as when there are no ties, but the result of the test depends not only on the data but also on additional random choices. Averaging the ranks over the possible random choices results in the average rank procedure. One could also report the probability of rejection over all random choices. Random tiebreaking has the advantage that the probability that a sample is judged significantly positive does not decrease when some observations are increased.

''Conservative tiebreaking'' breaks the ties in favor of the null hypothesis. When performing a one-sided test in which negative values of T tend to be more significant, ties are broken by assigning lower ranks to negative observations and higher ranks to positive ones. When the test makes positive values of T significant, ties are broken the other way, and when large absolute values of T are significant, ties are broken so as to make |T| as small as possible. Pratt observes that when ties are likely, the conservative tiebreaking procedure "presumably has low power, since it amounts to breaking all ties in favor of the null hypothesis."

The average rank procedure can disagree with tiebreaking procedures. Pratt gives the following example. Suppose that the observations are:
:1, 1, 1, 1, 2, 3, −4.
The average rank procedure assigns these the signed ranks
:2.5, 2.5, 2.5, 2.5, 5, 6, −7.
This sample is significantly positive at the one-sided level \alpha = 14/2^7. On the other hand, any tiebreaking rule will assign the ranks
:1, 2, 3, 4, 5, 6, −7.
At the same one-sided level \alpha = 14/2^7, this is not significant.

Two other options for handling ties are based around averaging the results of tiebreaking. In the ''average statistic'' method, the test statistic T is computed for every possible way of breaking ties, and the final statistic is the mean of the tie-broken statistics. In the ''average probability'' method, the ''p''-value is computed for every possible way of breaking ties, and the final ''p''-value is the mean of the tie-broken ''p''-values.
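The average rank procedure is what scipy.stats.rankdata computes by default; the following sketch makes the assignment explicit and checks it on Pratt's example above (the function name is illustrative):

import numpy as np

def average_ranks(abs_values):
    """Assign midranks: tied values share the average of the ranks
    they would occupy if the ties were broken."""
    a = np.asarray(abs_values, dtype=float)
    order = np.argsort(a, kind="stable")
    ranks = np.empty(len(a))
    i = 0
    while i < len(a):
        # Find the block of observations tied with a[order[i]].
        j = i
        while j + 1 < len(a) and a[order[j + 1]] == a[order[i]]:
            j += 1
        # Ranks i+1 .. j+1 would be occupied; assign their average.
        ranks[order[i:j + 1]] = (i + 1 + j + 1) / 2
        i = j + 1
    return ranks

x = np.array([1, 1, 1, 1, 2, 3, -4], dtype=float)
r = average_ranks(np.abs(x))
print(r)                       # [2.5 2.5 2.5 2.5 5. 6. 7.]
print(np.sum(np.sign(x) * r))  # signed-rank sum: 14.0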


Computing the null distribution

Computing ''p''-values requires knowing the distribution of T under the null hypothesis. There is no closed formula for this distribution. However, for small values of n, the distribution may be computed exactly. Under the null hypothesis that the data is symmetric about zero, each X_i is exactly as likely to be positive as it is negative. Therefore the probability that T = t under the null hypothesis equals the number of sign combinations that yield T = t, divided by the number of possible sign combinations, 2^n. This can be used to compute the exact distribution of T under the null hypothesis.

Computing the distribution of T by considering all possibilities requires computing 2^n sums, which is intractable for all but the smallest n. However, there is an efficient recursion for the distribution of T^+. Define u_n(t^+) to be the number of sign combinations for which T^+ = t^+. This is equal to the number of subsets of \{1, \dots, n\} which sum to t^+. The base cases of the recursion are u_0(0) = 1, u_0(t^+) = 0 for all t^+ \neq 0, and u_n(t^+) = 0 for all t^+ < 0 or t^+ > n(n + 1)/2. The recursive formula is
: u_n(t^+) = u_{n-1}(t^+) + u_{n-1}(t^+ - n).
The formula is true because every subset of \{1, \dots, n\} which sums to t^+ either does not contain n, in which case it is also a subset of \{1, \dots, n - 1\}, or it does contain n, in which case removing n from the subset produces a subset of \{1, \dots, n - 1\} which sums to t^+ - n. Under the null hypothesis, the probability mass function of T^+ satisfies \Pr(T^+ = t^+) = u_n(t^+)/2^n. The function u_n is closely related to the integer partition function (Pratt and Gibbons, p. 187).

If p_n(t^+) is the probability that T^+ = t^+ under the null hypothesis when there are n observations in the sample, then p_n(t^+) satisfies a similar recursion,
: 2p_n(t^+) = p_{n-1}(t^+) + p_{n-1}(t^+ - n),
with similar boundary conditions. There is also a recursive formula for the cumulative distribution function \Pr(T^+ \le t^+).

For very large n, even the above recursion is too slow. In this case, the null distribution can be approximated. The null distributions of T, T^+, and T^- are asymptotically normal with means and variances:
: \begin{align} \mathbf{E}[T^+] &= \mathbf{E}[T^-] = \frac{n(n + 1)}{4}, \\ \mathbf{E}[T] &= 0, \\ \operatorname{Var}(T^+) &= \operatorname{Var}(T^-) = \frac{n(n + 1)(2n + 1)}{24}, \\ \operatorname{Var}(T) &= \frac{n(n + 1)(2n + 1)}{6}. \end{align}
Better approximations can be produced using Edgeworth expansions. Using a fourth-order Edgeworth expansion shows that
: \Pr(T^+ \le k) \approx \Phi(t) + \phi(t)\,\frac{3n^2 + 3n - 1}{10n(n + 1)(2n + 1)}\,(t^3 - 3t),
where
: t = \frac{k + \tfrac12 - \tfrac{n(n + 1)}{4}}{\sqrt{\tfrac{n(n + 1)(2n + 1)}{24}}}.
The technical underpinnings of these expansions are rather involved, because conventional Edgeworth expansions apply to sums of IID continuous random variables, while T^+ is a sum of non-identically distributed discrete random variables. The final result, however, is that the above expansion has an error of O(n^{-3/2}), just like a conventional fourth-order Edgeworth expansion.

The moment generating function of T^+ has the exact formula:
: M(t) = \frac{1}{2^n} \prod_{j=1}^n \left(1 + e^{jt}\right).

When zeros are present and the signed-rank zero procedure is used, or when ties are present and the average rank procedure is used, the null distribution of T changes. Cureton derived a normal approximation for this situation. Suppose that the original number of observations was n and the number of zeros was z. The tie correction is
: c = \sum \left(t^3 - t\right),
where the sum is over the sizes t of each group of tied observations. The expectation of T is still zero, while the expectation of T^+ is
: \mathbf{E}[T^+] = \frac{n(n + 1)}{4} - \frac{z(z + 1)}{4}.
If
: \sigma^2 = \frac{n(n + 1)(2n + 1) - z(z + 1)(2z + 1) - \tfrac{c}{2}}{6},
then \operatorname{Var}(T) = \sigma^2 and \operatorname{Var}(T^+) = \sigma^2/4.
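The recursion for u_n is a standard subset-sum dynamic program. A minimal sketch (the function name is illustrative):

def exact_null_pmf(n):
    """Exact null pmf of T+ for sample size n, via the recursion
    u_n(t) = u_{n-1}(t) + u_{n-1}(t - n)."""
    m = n * (n + 1) // 2            # largest possible value of T+
    u = [0] * (m + 1)
    u[0] = 1                        # base case: u_0(0) = 1
    for k in range(1, n + 1):
        # Iterate downward so each rank k enters each subset at most once.
        for t in range(m, k - 1, -1):
            u[t] += u[t - k]
    return [count / 2**n for count in u]

pmf = exact_null_pmf(9)
mean = sum(t * p for t, p in enumerate(pmf))
var = sum((t - mean) ** 2 * p for t, p in enumerate(pmf))
print(sum(pmf))  # 1.0
print(mean)      # n(n+1)/4 = 22.5
print(var)       # n(n+1)(2n+1)/24 = 71.25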


Alternative statistics

Wilcoxon originally defined the signed-rank sum statistic to be \min(T^+, T^-). Early authors such as Siegel followed Wilcoxon. This is appropriate for two-sided hypothesis tests, but it cannot be used for one-sided tests.

Instead of assigning ranks between 1 and ''n'', it is also possible to assign ranks between 0 and n - 1. These are called ''modified ranks''. The modified signed-rank sum T_0, the modified positive-rank sum T_0^+, and the modified negative-rank sum T_0^- are defined analogously to T, T^+, and T^-, but with the modified ranks in place of the ordinary ranks. The probability that the sum of two independent F-distributed random variables is positive can be estimated as 2T_0^+ / (n(n - 1)). When consideration is restricted to continuous distributions, this is a minimum variance unbiased estimator of this probability, p_2 = \Pr(X_1 + X_2 > 0).
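The estimator 2T_0^+/(n(n - 1)) equals the fraction of Walsh averages W_{ij} with i < j that are positive, which gives a direct way to compute it. A sketch (the function name is illustrative):

import numpy as np

def estimate_p2(x):
    """Estimate p2 = Pr(X1 + X2 > 0) as the fraction of positive
    Walsh averages W_ij with i < j; equals 2*T0+/(n(n-1))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    walsh = (x[:, None] + x[None, :]) / 2  # W_ij = (x_i + x_j)/2
    i, j = np.triu_indices(n, k=1)         # strictly upper triangle: i < j
    return float(np.mean(walsh[i, j] > 0))

print(estimate_p2([1.1, -2.3, 0.4, 5.6, -3.2]))  # 0.5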


Example

Suppose that nine pairs of observations (X_i, Y_i) are given and the differences X_i - Y_i are ranked by absolute value. Here \sgn is the sign function, |\cdot| is the absolute value, and R_i is the rank. Pairs 3 and 9 are tied in absolute value: they would be ranked 1 and 2, so each receives the average of those ranks, 1.5. The signed-rank sum is then
:W = 1.5 + 1.5 - 3 - 4 - 5 - 6 + 7 + 8 + 9 = 9.
:|W| < W_{\operatorname{crit}(\alpha = 0.05,\ n = 9,\ \text{two-sided})} = 15
:\therefore we fail to reject H_0 that the median of the pairwise differences is zero.
:The ''p''-value for this result is 0.6113.


Effect size

To compute an effect size for the signed-rank test, one can use the rank-biserial correlation. If the test statistic ''T'' is reported, the rank correlation ''r'' is equal to the test statistic ''T'' divided by the total rank sum ''S'': ''r'' = ''T''/''S''. Using the above example, the test statistic is ''T'' = 9. The sample size of 9 has a total rank sum of ''S'' = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45. Hence, the rank correlation is 9/45, so ''r'' = 0.20.

If the test statistic reported is the smaller of the two rank sums, an equivalent way to compute the rank correlation is with the difference in proportion between the two rank sums, which is the Kerby (2014) simple difference formula. To continue with the current example, the sample size is 9, so the total rank sum is 45. The smaller rank sum is ''T'' = 3 + 4 + 5 + 6 = 18. From this information alone, the remaining rank sum can be computed, because it is the total sum ''S'' minus ''T'', or in this case 45 − 18 = 27. Next, the two rank-sum proportions are 27/45 = 60% and 18/45 = 40%. Finally, the rank correlation is the difference between the two proportions (.60 minus .40), hence ''r'' = .20.
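Both routes to the rank-biserial correlation are a few lines of arithmetic. A sketch using the numbers from the example above (variable names are illustrative):

n = 9
S = n * (n + 1) / 2       # total rank sum: 45

# Route 1: from the signed-rank sum T.
T = 9.0
r = T / S                 # 0.20

# Route 2: Kerby's simple difference formula, from the two rank sums.
T_smaller = 18.0          # 3 + 4 + 5 + 6
T_larger = S - T_smaller  # 27
r_kerby = T_larger / S - T_smaller / S  # 0.60 - 0.40 = 0.20

assert abs(r - r_kerby) < 1e-12
print(r, r_kerby)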


Software implementations

* R includes an implementation of the test as wilcox.test(x, y, paired = TRUE), where x and y are vectors of equal length.
* ALGLIB includes an implementation of the Wilcoxon signed-rank test in C++, C#, Delphi, and Visual Basic, among other languages.
* GNU Octave implements various one-tailed and two-tailed versions of the test in the wilcoxon_test function.
* SciPy includes an implementation of the Wilcoxon signed-rank test in Python as scipy.stats.wilcoxon.
* MATLAB implements the test as signrank; [p, h] = signrank(x, y) also returns a logical value indicating the test decision, where h = 1 indicates rejection of the null hypothesis and h = 0 indicates a failure to reject it at the 5% significance level.
* Julia's HypothesisTests package includes the Wilcoxon signed-rank test as SignedRankTest.
* SAS PROC UNIVARIATE includes the Wilcoxon signed-rank test in the "Tests for Location" output as "Signed Rank". Although this procedure calculates an ''S''-statistic rather than a ''W''-statistic, the resulting ''p''-value can still be used for this test. SAS PROC NPAR1WAY also contains many non-parametric tests and supports exact tests, including Monte Carlo estimation of exact ''p''-values; see the SAS documentation.


See also

* Mann–Whitney–Wilcoxon test
* Sign test


References


External links


* Wilcoxon Signed-Rank Test in R
* A table of critical values for the Wilcoxon signed-rank test
* Brief guide by experimental psychologist Karl L. Wuensch – Nonparametric effect size estimators (Copyright 2015 by Karl L. Wuensch)
* Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. ''Comprehensive Psychology'', volume 3, article 1. doi:10.2466/11.IT.3.1
{{DEFAULTSORT:Wilcoxon Signed-Rank Test}}
Category:Statistical tests
Category:Nonparametric statistics
Category:U-statistics