In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φ''c'') is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.
Usage and interpretation
φ''c'' is the intercorrelation of two discrete variables[Sheskin, David J. (1997). Handbook of Parametric and Nonparametric Statistical Procedures. Boca Raton, Fl: CRC Press.] and may be used with variables having two or more levels. φ''c'' is a symmetrical measure: it does not matter which variable we place in the columns and which in the rows. The order of rows/columns also does not matter, so φ''c'' may be used with nominal data types or higher (notably, ordered or numerical).
Cramér's V may also be applied to goodness-of-fit chi-squared models when there is a 1 × ''k'' table (in this case ''r'' = 1). In this case ''k'' is taken as the number of optional outcomes, and ''V'' functions as a measure of tendency towards a single outcome.
Cramér's V varies from 0 (corresponding to
no association between the variables) to 1 (complete association) and can reach 1 only when each variable is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.
φ''c''² is the mean square canonical correlation between the variables.
In the case of a 2 × 2 contingency table, Cramér's V is equal to the absolute value of the phi coefficient.
Note that as chi-squared values tend to increase with the number of cells, the greater the difference between ''r'' (rows) and ''c'' (columns), the more likely φ''c'' will tend to 1 without strong evidence of a meaningful correlation.
[Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203771587 p78-81.]
Calculation
Let a sample of size ''n'' of the simultaneously distributed variables ''A'' and ''B'' for ''i'' = 1, …, ''r''; ''j'' = 1, …, ''k'' be given by the frequencies
: n_{ij} = \text{number of times the values } (A_i, B_j) \text{ were observed.}
The chi-squared statistic then is:
: \chi^2 = \sum_{i,j} \frac{\left(n_{ij} - \frac{n_{i.}\,n_{.j}}{n}\right)^2}{\frac{n_{i.}\,n_{.j}}{n}},
where n_{i.} = \sum_j n_{ij} is the number of times the value A_i is observed and n_{.j} = \sum_i n_{ij} is the number of times the value B_j is observed.
Cramér's V is computed by taking the square root of the chi-squared statistic divided by the sample size and the minimum dimension minus 1:
: V = \sqrt{\frac{\varphi^2}{\min(k - 1,\, r - 1)}} = \sqrt{\frac{\chi^2 / n}{\min(k - 1,\, r - 1)}},
where:
* \varphi = \sqrt{\chi^2 / n} is the phi coefficient,
* \chi^2 is derived from Pearson's chi-squared test,
* n is the grand total of observations,
* k is the number of columns,
* r is the number of rows.
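The formula above can be sketched in Python with NumPy (the function name and the example table are illustrative, not taken from the sources cited):

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for an r x k contingency table of observed counts."""
    obs = np.asarray(table, dtype=float)
    n = obs.sum()
    # Expected counts under independence: n_i. * n_.j / n
    expected = obs.sum(axis=1, keepdims=True) @ obs.sum(axis=0, keepdims=True) / n
    chi2 = ((obs - expected) ** 2 / expected).sum()
    r, k = obs.shape
    return float(np.sqrt((chi2 / n) / min(k - 1, r - 1)))

# For a 2 x 2 table, V equals the absolute value of the phi coefficient.
print(cramers_v([[10, 20], [30, 5]]))   # about 0.537
```

A perfectly determined table, such as `[[10, 0], [0, 10]]`, yields ''V'' = 1.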
The p-value for the significance of ''V'' is the same one that is calculated using the Pearson's chi-squared test.
The formula for the variance of ''V'' = φ''c'' is known.
In R, the function cramerV() from the package rcompanion calculates ''V'' using the chisq.test function from the stats package. In contrast to the function cramersV() from the lsr package, cramerV() also offers an option to correct for bias. It applies the correction described in the following section.
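In Python, SciPy (1.7 and later) provides a comparable convenience function, scipy.stats.contingency.association; note that it computes the uncorrected statistic (the example table here is ours):

```python
# Cramér's V via SciPy; method="cramer" selects Cramér's V
# (other options include "tschuprow" and "pearson").
from scipy.stats.contingency import association

table = [[10, 20], [30, 5]]
print(association(table, method="cramer"))
```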
Bias correction
Cramér's V can be a heavily biased estimator of its population counterpart and will tend to overestimate the strength of association. A bias correction, using the above notation, is given by
: \tilde V = \sqrt{\frac{\tilde\varphi^2}{\min(\tilde k - 1,\, \tilde r - 1)}},
where
: \tilde\varphi^2 = \max\left(0,\; \varphi^2 - \frac{(k-1)(r-1)}{n-1}\right)
and
: \tilde k = k - \frac{(k-1)^2}{n-1},
: \tilde r = r - \frac{(r-1)^2}{n-1}.
Then \tilde V estimates the same population quantity as Cramér's V but with typically much smaller mean squared error. The rationale for the correction is that, under independence, \operatorname{E}[\varphi^2] = \frac{(k-1)(r-1)}{n-1}.
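The correction can be sketched in Python with NumPy as follows (the function name and the example table are ours, not from a published package):

```python
import numpy as np

def cramers_v_corrected(table):
    """Bias-corrected Cramér's V for an r x k contingency table."""
    obs = np.asarray(table, dtype=float)
    n = obs.sum()
    expected = obs.sum(axis=1, keepdims=True) @ obs.sum(axis=0, keepdims=True) / n
    phi2 = ((obs - expected) ** 2 / expected).sum() / n
    r, k = obs.shape
    # Subtract the expected value of phi^2 under independence, floored at 0
    phi2_t = max(0.0, phi2 - (k - 1) * (r - 1) / (n - 1))
    k_t = k - (k - 1) ** 2 / (n - 1)
    r_t = r - (r - 1) ** 2 / (n - 1)
    return float(np.sqrt(phi2_t / min(k_t - 1, r_t - 1)))

print(cramers_v_corrected([[10, 20], [30, 5]]))  # slightly below the uncorrected value
```

For a table whose rows are exactly proportional (no association), the floor at zero makes the corrected estimate exactly 0.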
See also
Other measures of correlation for nominal data:
* The phi coefficient
* Tschuprow's T
* The uncertainty coefficient
* The Lambda coefficient
* The Rand index
* Davies–Bouldin index
* Dunn index
* Jaccard index
* Fowlkes–Mallows index
Other related articles:
* Contingency table
* Effect size
References
External links
A Measure of Association for Nonparametric Statistics (Alan C. Acock and Gordon R. Stavig, page 1381 of 1381–1386), from the homepage of Pat Dattalo.
{{DEFAULTSORT:Cramer's V}}
Statistical ratios
Summary statistics for contingency tables
Covariance and correlation