In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but ''not'' causally related, due to either coincidence or the presence of a certain third, unseen factor (referred to as a "common response variable", "confounding factor", or "lurking variable").
Examples
An example of a spurious relationship can be found in the time-series literature, where a spurious regression is a regression that provides misleading statistical evidence of a linear relationship between independent non-stationary variables. In fact, the non-stationarity may be due to the presence of a unit root in both variables. In particular, any two nominal economic variables are likely to be correlated with each other, even when neither has a causal effect on the other, because each equals a real variable times the price level, and the common presence of the price level in the two data series imparts correlation to them. (See also spurious correlation of ratios.)
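This effect can be illustrated with a short simulation (a minimal sketch using NumPy and statsmodels; the sample size and seed are arbitrary choices, not from any source): two independent random walks, each containing a unit root, will routinely appear strongly related under ordinary least squares.

<syntaxhighlight lang="python">
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 500

# Two independent random walks: each has a unit root, and neither causes the other.
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

# Regressing one on the other routinely yields a "significant" slope anyway.
fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"slope = {fit.params[1]:.3f}, p-value = {fit.pvalues[1]:.2e}")

# An augmented Dickey-Fuller test flags the unit root behind the problem:
# a large p-value means non-stationarity cannot be rejected.
print(f"ADF p-value for x: {adfuller(x)[1]:.3f}")
</syntaxhighlight>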
Another example of a spurious relationship can be seen by examining a city's ice cream sales. The sales might be highest when the rate of drownings in city swimming pools is highest. To allege that ice cream sales cause drowning, or vice versa, would be to imply a spurious relationship between the two. In reality, a heat wave may have caused both. The heat wave is an example of a hidden or unseen variable, also known as a confounding variable.
Another commonly noted example is a series of Dutch statistics showing a positive correlation between the number of storks nesting in a series of springs and the number of human babies born at that time. Of course there was no causal connection; they were correlated with each other only because they were correlated with the weather nine months before the observations.
In rare cases, a spurious relationship can occur between two completely unrelated variables without any confounding variable, as was the case between the success of the Washington Redskins professional football team in a specific game before each presidential election and the success of the incumbent President's political party in said election. For 16 consecutive elections between 1940 and 2000, the Redskins Rule correctly matched whether the incumbent President's political party would retain or lose the Presidency. The rule eventually failed shortly after the Elias Sports Bureau discovered the correlation in 2000; in 2004, 2012 and 2016, the results of the Redskins game and the election did not match.
In a similar spurious relationship involving the National Football League, in the 1970s, Leonard Koppett noted a correlation between the direction of the stock market and the winning conference of that year's Super Bowl, the Super Bowl indicator; the relationship maintained itself for most of the 20th century before reverting to more random behavior in the 21st.
Hypothesis testing
Often one tests a null hypothesis of no correlation between two variables, and chooses in advance to reject the hypothesis if the correlation computed from a data sample would have occurred in less than (say) 5% of data samples if the null hypothesis were true. While a true null hypothesis will be accepted 95% of the time, the other 5% of the time a true null of no correlation will be wrongly rejected, causing acceptance of a correlation which is spurious (an event known as a Type I error). Here the spurious correlation in the sample resulted from random selection of a sample that did not reflect the true properties of the underlying population.
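The 5% figure can be checked directly by simulation (a minimal sketch using SciPy's pearsonr; the trial count, sample size, and seed are arbitrary choices): generating many pairs of independent samples and testing each pair yields a rejection rate close to the chosen significance level, and every one of those rejections is a spurious correlation.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
trials, n, alpha = 10_000, 30, 0.05

# Correlate pairs of independent samples; the null of no correlation
# is true in every trial, so every rejection is spurious (a Type I error).
rejections = sum(
    pearsonr(rng.normal(size=n), rng.normal(size=n))[1] < alpha
    for _ in range(trials)
)
print(f"spurious rejection rate: {rejections / trials:.3f}")  # close to 0.05
</syntaxhighlight>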
Detecting spurious relationships
The term "spurious relationship" is commonly used in
statistics and in particular in
experimental research
An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into Causality, cause-and-effect by demonstrating what outcome oc ...
techniques, both of which attempt to understand and predict direct causal relationships (X → Y). A non-causal correlation can be spuriously created by an antecedent which causes both (W → X and W → Y).
Mediating variables, (X → W → Y), if undetected, estimate a total effect rather than direct effect without adjustment for the mediating variable M. Because of this, experimentally identified
correlation
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statisti ...
s do not represent
causal relationships unless spurious relationships can be ruled out.
Experiments
In experiments, spurious relationships can often be identified by controlling for other factors, including those that have been theoretically identified as possible confounding factors. For example, consider a researcher trying to determine whether a new drug kills bacteria; when the researcher applies the drug to a bacterial culture, the bacteria die. But to help rule out the presence of a confounding variable, a second culture is subjected to conditions that are as nearly identical as possible to those facing the first culture, except that the second culture is not subjected to the drug. If there is an unseen confounding factor in those conditions, this control culture will die as well, so that no conclusion of the drug's efficacy can be drawn from the results of the first culture. On the other hand, if the control culture does not die, then the researcher cannot reject the hypothesis that the drug is efficacious.
Non-experimental statistical analyses
Disciplines whose data are mostly non-experimental, such as economics, usually employ observational data to establish causal relationships. The body of statistical techniques used in economics is called econometrics. The main statistical method in econometrics is multivariable regression analysis.
Typically a linear relationship such as

:<math>y = a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_k x_k + e</math>

is hypothesized, in which <math>y</math> is the dependent variable (hypothesized to be the caused variable), <math>x_j</math> for <math>j = 1, \ldots, k</math> is the <math>j</math>th independent variable (hypothesized to be a causative variable), and <math>e</math> is the error term (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the <math>x_j</math>s is caused by <math>y</math>, then estimates of the coefficients <math>a_j</math> are obtained. If the null hypothesis that <math>a_j = 0</math> is rejected, then the alternative hypothesis that <math>a_j \ne 0</math>, and equivalently that <math>x_j</math> causes <math>y</math>, cannot be rejected. On the other hand, if the null hypothesis that <math>a_j = 0</math> cannot be rejected, then equivalently the hypothesis of no causal effect of <math>x_j</math> on <math>y</math> cannot be rejected. Here the notion of causality is one of contributory causality: if the true value <math>a_j \ne 0</math>, then a change in <math>x_j</math> will result in a change in <math>y</math> ''unless'' some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in <math>x_j</math> is ''not sufficient'' to change <math>y</math>. Likewise, a change in <math>x_j</math> is ''not necessary'' to change <math>y</math>, because a change in <math>y</math> could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model).
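This testing procedure can be sketched in code (an illustrative example, not from the source; the data-generating values are assumptions for demonstration): fit the hypothesized linear model and read off the t-test p-value for each null hypothesis <math>a_j = 0</math>.

<syntaxhighlight lang="python">
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200

# Illustrative data: x1 genuinely affects y; x2 does not (a_2 = 0 by construction).
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

# fit.pvalues holds the t-test of H0: a_j = 0 for each coefficient.
for name, p in zip(["const", "x1", "x2"], fit.pvalues):
    print(f"{name}: p = {p:.3g}")
# Expect: a tiny p-value for x1 (reject a_1 = 0) and a large one for x2.
</syntaxhighlight>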
Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid mistaken inference of causality due to the presence of a third, underlying variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as a spurious effect of the potentially causative variable of interest. In addition, the use of multivariate regression helps to avoid wrongly inferring that an indirect effect of, say, <math>x_1</math> (e.g., <math>x_1 \to x_2 \to y</math>) is a direct effect (<math>x_1 \to y</math>).
Just as an experimenter must be careful to employ an experimental design that controls for every confounding factor, so also must the user of multiple regression be careful to control for all confounding factors by including them among the regressors. If a confounding factor is omitted from the regression, its effect is captured in the error term by default, and if the resulting error term is correlated with one (or more) of the included regressors, then the estimated regression may be biased or inconsistent (see omitted variable bias).
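Omitted-variable bias can be demonstrated with a small simulation (a minimal sketch; the confounder strength and sample size are illustrative assumptions): when the confounder is left out, its influence is spuriously attributed to the included regressor, and including it removes the bias.

<syntaxhighlight lang="python">
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000

# Confounder w drives both x and y; x itself has no effect on y.
w = rng.normal(size=n)
x = w + rng.normal(size=n)
y = 3.0 * w + rng.normal(size=n)

# Omitting w: the regressor x spuriously absorbs w's effect (coef near 1.5).
omitting = sm.OLS(y, sm.add_constant(x)).fit()
# Including w: the coefficient on x collapses toward its true value, zero.
including = sm.OLS(y, sm.add_constant(np.column_stack([x, w]))).fit()

print(f"coef on x, w omitted:  {omitting.params[1]:.2f}")
print(f"coef on x, w included: {including.params[1]:.2f}")
</syntaxhighlight>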
In addition to regression analysis, the data can be examined to determine if Granger causality exists. The presence of Granger causality indicates both that ''x'' precedes ''y'', and that ''x'' contains unique information about ''y''.
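A check for Granger causality can be sketched with statsmodels (an illustrative example; the data-generating process, lag choice, and coefficients below are assumptions for demonstration, not from the source):

<syntaxhighlight lang="python">
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 500

# By construction, x leads y by one period, so past values of x
# carry unique information about y beyond y's own history.
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

# grangercausalitytests asks whether the second column Granger-causes the first.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
p = res[1][0]["ssr_ftest"][1]  # F-test p-value at lag 1
print(f"Granger F-test p-value (lag 1): {p:.2e}")  # tiny: x helps predict y
</syntaxhighlight>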
Other relationships
There are several other relationships defined in statistical analysis as follows.
* Direct relationship
* Mediating relationship
* Moderating relationship
See also
* Causality
* Correlation does not imply causation
* Illusory correlation
* Model specification
* Omitted-variable bias
* Post hoc fallacy
* Statistical model validation
External links
Spurious correlations – a collection of examples