Fleiss' Kappa

Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items. This contrasts with other kappas such as Cohen's kappa, which only work when assessing the agreement between no more than two raters or the intra-rater reliability (for one appraiser versus themself). The measure calculates the degree of agreement in classification over that which would be expected by chance. Fleiss' kappa can be used with binary or nominal-scale data. It can also be applied to ordinal data (ranked data): the Minitab online documentation gives an example. However, that documentation notes: "When you have ordinal ratings, such as defect severity ratings on a scale of 1–5, Kendall's coefficients, which account for ordering, are usually more appropriate statistics to determine association than kappa alone." Keep in mind, however, that Kendall rank coefficients are only appropriate for rank data.


Introduction

Fleiss' kappa is a generalisation of Scott's pi statistic, a statistical measure of inter-rater reliability. It is also related to Cohen's kappa statistic and Youden's J statistic, which may be more appropriate in certain instances. Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings to a fixed number of items, on the condition that for each item the raters are randomly sampled. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes the same two raters have rated a set of items, Fleiss' kappa specifically allows that although there is a fixed number of raters (e.g., three), different items may be rated by different individuals. That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F. The condition of random sampling among raters makes Fleiss' kappa unsuited for cases where all raters rate all items.

Agreement can be thought of as follows: if a fixed number of people assign numerical ratings to a number of items, then the kappa gives a measure of how consistent the ratings are. The kappa, \kappa, can be defined as

(1)   \kappa = \frac{\bar{P} - \bar{P_e}}{1 - \bar{P_e}}

The factor 1 - \bar{P_e} gives the degree of agreement that is attainable above chance, and \bar{P} - \bar{P_e} gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then \kappa = 1. If there is no agreement among the raters (other than what would be expected by chance) then \kappa \le 0.

An example of using Fleiss' kappa may be the following: consider several psychiatrists who are asked to look at ten patients. For each patient, 14 psychiatrists give one of possibly five diagnoses. These are compiled into a matrix, and Fleiss' kappa can be computed from this matrix (see example below) to show the degree of agreement between the psychiatrists above the level of agreement expected by chance.


Definition

Let N be the total number of elements, let n be the number of ratings per element, and let k be the number of categories into which assignments are made. The elements are indexed by i = 1, ..., N and the categories are indexed by j = 1, ..., k. Let n_{ij} represent the number of raters who assigned the i-th element to the j-th category.

First calculate p_j, the proportion of all assignments which were to the j-th category:

(2)   p_j = \frac{1}{N n} \sum_{i=1}^N n_{ij}, \quad\quad 1 = \sum_{j=1}^k p_j

Now calculate P_i, the extent to which raters agree for the i-th element (i.e., compute how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs):

(3)   P_i = \frac{1}{n(n-1)} \sum_{j=1}^k n_{ij}\bigl(n_{ij} - 1\bigr) = \frac{1}{n(n-1)} \left[ \sum_{j=1}^k n_{ij}^2 - n \right]

Note that P_i is bounded between 0, attained when no two raters agree on the i-th element, and 1, attained when all ratings of that element are assigned to a single category.

Now compute \bar{P}, the mean of the P_i's, and \bar{P_e}, which go into the formula for \kappa:

(4)   \bar{P} = \frac{1}{N} \sum_{i=1}^N P_i = \frac{1}{N n (n-1)} \left( \sum_{i=1}^N \sum_{j=1}^k n_{ij}^2 - N n \right)

(5)   \bar{P_e} = \sum_{j=1}^k p_j^2
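To make the computation concrete, the following short Python sketch (not part of the original article; the function name fleiss_kappa and the NumPy dependency are choices made here for illustration) evaluates equations (1)–(5) from an N-by-k matrix whose entry in row i and column j is n_{ij}:

import numpy as np

def fleiss_kappa(counts):
    # counts[i, j] is the number of raters who assigned element i to category j;
    # every row is assumed to sum to the same number of ratings n.
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]                                    # number of elements
    n = counts[0].sum()                                    # ratings per element
    p_j = counts.sum(axis=0) / (N * n)                     # equation (2): category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # equation (3): per-element agreement
    P_bar = P_i.mean()                                     # equation (4): mean observed agreement
    P_e_bar = (p_j ** 2).sum()                             # equation (5): chance agreement
    return (P_bar - P_e_bar) / (1 - P_e_bar)               # equation (1): Fleiss' kappa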


Worked example

In the following example, for each of ten "subjects" (N), fourteen raters (n), sampled from a larger group, assign one of five categories (k). The categories are presented in the columns, while the subjects are presented in the rows. Each cell lists the number of raters who assigned the indicated (row) subject to the indicated (column) category.

              1      2      3      4      5     P_i
    1         0      0      0      0     14    1.000
    2         0      2      6      4      2    0.253
    3         0      0      3      5      6    0.308
    4         0      3      9      2      0    0.440
    5         2      2      8      1      1    0.330
    6         7      7      0      0      0    0.462
    7         3      2      6      3      0    0.242
    8         2      5      3      2      2    0.176
    9         6      5      2      1      0    0.286
    10        0      2      2      3      7    0.286
    Total    20     28     39     21     32
    p_j    0.143  0.200  0.279  0.150  0.229

In this table, N = 10, n = 14, and k = 5. The value p_j is the proportion of all assignments that were made to the j-th category. For example, taking the first column,

    p_1 = \frac{20}{140} = 0.143,

and taking the second row,

    P_2 = \frac{1}{14(14 - 1)} \left(0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14\right) = 0.253.

In order to calculate \bar{P}, we need the sum of the P_i,

    \sum_{i=1}^N P_i = 1.000 + 0.253 + \cdots + 0.286 + 0.286 = 3.780.

Over the whole sheet,

    \bar{P} = \frac{1}{10}(3.780) = 0.378,
    \bar{P_e} = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213,

and therefore

    \kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210.
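As a cross-check (a sketch, not part of the original article, assuming the hypothetical fleiss_kappa function from the Definition section above has already been defined), the same result can be reproduced programmatically:

ratings = [
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
]
print(round(fleiss_kappa(ratings), 3))  # about 0.21, matching the hand calculation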


Interpretation

Landis and Koch (1977) gave the following table for interpreting \kappa values for a 2-annotator 2-class example:

    \kappa            Interpretation
    < 0               Poor agreement
    0.01 – 0.20       Slight agreement
    0.21 – 0.40       Fair agreement
    0.41 – 0.60       Moderate agreement
    0.61 – 0.80       Substantial agreement
    0.81 – 1.00       Almost perfect agreement

This table is, however, by no means universally accepted. Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful, as the number of categories and subjects will affect the magnitude of the value. For example, the kappa is higher when there are fewer categories.
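Purely as a toy illustration (not from the original article, and subject to the caveats just noted), a lookup of the verbal labels above might look like the following; the function name is chosen here for convenience:

def interpret_kappa(kappa):
    # Verbal labels from Landis and Koch (1977); see the caveats above.
    if kappa < 0:
        return "Poor agreement"
    if kappa <= 0.20:
        return "Slight agreement"
    if kappa <= 0.40:
        return "Fair agreement"
    if kappa <= 0.60:
        return "Moderate agreement"
    if kappa <= 0.80:
        return "Substantial agreement"
    return "Almost perfect agreement"

print(interpret_kappa(0.210))  # "Fair agreement" for the worked example above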


Tests of significance

Statistical packages can calculate a standard score (Z-score) for Cohen's kappa or Fleiss' kappa, which can be converted into a P-value. However, even when the P-value reaches the threshold of statistical significance (typically less than 0.05), it only indicates that the agreement between raters is significantly better than would be expected by chance. The P-value does not tell, by itself, whether the agreement is good enough to have high predictive value.
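The closed-form standard error used by such packages is not given here; as a rough, hedged alternative, the following Monte Carlo sketch (assuming the hypothetical fleiss_kappa function from the Definition section, and assuming a null model in which each rater assigns categories independently with the observed category proportions) estimates how unusual the observed kappa is under chance rating:

import numpy as np

def kappa_p_value(counts, n_sim=10_000, seed=0):
    # Estimate a one-sided p-value for the observed Fleiss' kappa by simulating
    # rating tables under purely random assignment with the observed marginals.
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = int(counts[0].sum())                    # raters per element
    p_j = counts.sum(axis=0) / (N * n)          # observed category proportions
    observed = fleiss_kappa(counts)
    null_kappas = np.empty(n_sim)
    for s in range(n_sim):
        simulated = rng.multinomial(n, p_j, size=N)   # one random rating table
        null_kappas[s] = fleiss_kappa(simulated)
    return (null_kappas >= observed).mean()

A small estimated p-value here plays the same role as the package-reported P-value described above: it indicates only that the agreement exceeds chance, not that the agreement is strong.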


See also

* Pearson product-moment correlation coefficient
* Matthews correlation coefficient
* Krippendorff's alpha


References




External links


* Cloud-based inter-rater reliability analysis, Cohen's kappa, Gwet's AC1/AC2, Krippendorff's alpha, Brennan-Prediger, Fleiss generalized kappa, intraclass correlation coefficients – contains a good bibliography of articles about the coefficient
* Online Kappa Calculator – calculates a variation of Fleiss' kappa