Two-way ANOVA

In statistics, the two-way analysis of variance (ANOVA) is an extension of the one-way ANOVA that examines the influence of two different categorical independent variables on one continuous dependent variable. The two-way ANOVA not only aims at assessing the main effect of each independent variable but also whether there is any interaction between them.


History

In 1925, Ronald Fisher mentions the two-way ANOVA in his celebrated book, ''Statistical Methods for Research Workers'' (chapters 7 and 8). In 1934, Frank Yates published procedures for the unbalanced case. Since then, an extensive literature has been produced. The topic was reviewed in 1993 by Yasunori Fujikoshi. In 2005, Andrew Gelman proposed a different approach to ANOVA, viewed as a multilevel model.


Data set

Let us imagine a data set for which a dependent variable may be influenced by two factors which are potential sources of variation. The first factor has I levels and the second has J levels. Each combination (i,j) defines a treatment, for a total of I \times J treatments. We represent the number of replicates for treatment (i,j) by n_{ij}, and let k be the index of the replicate in this treatment (k = 1, \ldots, n_{ij}). From these data, we can build a contingency table, where n_{i+} = \sum_{j=1}^J n_{ij} and n_{+j} = \sum_{i=1}^I n_{ij}, and the total number of replicates is equal to n = \sum_{i,j} n_{ij} = \sum_i n_{i+} = \sum_j n_{+j}. The experimental design is balanced if each treatment has the same number of replicates, K. In such a case, the design is also said to be orthogonal, allowing one to fully distinguish the effects of both factors. We hence can write \forall i,j \; n_{ij} = K, and \forall i,j \; n_{ij} = \frac{n_{i+} \, n_{+j}}{n}.
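As a concrete illustration, here is a minimal sketch (assuming pandas; the factor labels and values are made up for this purpose) that builds the table of replicate counts n_{ij} from long-format data and checks whether the design is balanced:

```python
import pandas as pd

# Hypothetical long-format data: one row per replicate (labels are illustrative).
df = pd.DataFrame({
    "factor_a": ["a1", "a1", "a1", "a1", "a2", "a2", "a2", "a2"],
    "factor_b": ["b1", "b1", "b2", "b2", "b1", "b1", "b2", "b2"],
    "y":        [4.1, 3.9, 4.5, 5.0, 4.8, 5.2, 4.4, 5.1],
})

# Contingency table of replicate counts n_ij (I rows, J columns).
counts = pd.crosstab(df["factor_a"], df["factor_b"])
print(counts)

# Balanced (hence orthogonal) design: every cell holds the same number K of replicates.
print("balanced:", counts.values.min() == counts.values.max())
```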


Model

Upon observing variation among all n data points, for instance via a histogram, "probability may be used to describe such variation". Let us hence denote by Y_{ijk} the random variable whose observed value y_{ijk} is the k-th measure for treatment (i,j). The two-way ANOVA models all these variables as varying independently and normally around a mean, \mu_{ij}, with a constant variance, \sigma^2 (homoscedasticity):

Y_{ijk} \mid \mu_{ij}, \sigma^2 \; \overset{\text{i.i.d.}}{\sim} \; \mathcal{N}(\mu_{ij}, \sigma^2).

Specifically, the mean of the response variable is modeled as a linear combination of the explanatory variables:

\mu_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij},

where \mu is the grand mean, \alpha_i is the additive main effect of level i from the first factor (''i''-th row in the contingency table), \beta_j is the additive main effect of level j from the second factor (''j''-th column in the contingency table) and \gamma_{ij} is the non-additive interaction effect of treatment (i,j) from both factors (cell at row ''i'' and column ''j'' in the contingency table).

Another equivalent way of describing the two-way ANOVA is by mentioning that, besides the variation explained by the factors, there remains some statistical noise. This amount of unexplained variation is handled via the introduction of one random variable per data point, \epsilon_{ijk}, called error. These n random variables are seen as deviations from the means, and are assumed to be independent and normally distributed:

Y_{ijk} = \mu_{ij} + \epsilon_{ijk}, \text{ with } \epsilon_{ijk} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2).
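To make the generative model concrete, the following minimal sketch (assuming NumPy; the values of \mu, \alpha_i, \beta_j, \gamma_{ij}, \sigma and K are invented for illustration) simulates y_{ijk} from the equations above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not from the article), chosen to sum to zero.
mu = 10.0                              # grand mean
alpha = np.array([-1.0, 1.0])          # main effects of factor A (I = 2)
beta = np.array([-0.5, 0.0, 0.5])      # main effects of factor B (J = 3)
gamma = np.array([[0.2, -0.1, -0.1],   # interaction effects (rows and columns sum to zero)
                  [-0.2, 0.1, 0.1]])
sigma = 1.0                            # common error standard deviation
K = 4                                  # replicates per treatment (balanced design)

# Cell means mu_ij = mu + alpha_i + beta_j + gamma_ij.
cell_means = mu + alpha[:, None] + beta[None, :] + gamma

# Draw y_ijk = mu_ij + eps_ijk with eps_ijk ~ N(0, sigma^2), independently.
y = cell_means[:, :, None] + rng.normal(0.0, sigma, size=(2, 3, K))
print(y.shape)  # (I, J, K)
```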


Assumptions

Following Gelman and Hill, the assumptions of the ANOVA, and more generally the general linear model, are, in decreasing order of importance:
# the data points are relevant with respect to the scientific question under investigation;
# the mean of the response variable is influenced additively (if no interaction term) and linearly by the factors;
# the errors are independent;
# the errors have the same variance;
# the errors are normally distributed.
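Assumptions 3 to 5 concern the errors and are usually checked on the residuals of a fitted model. A minimal sketch, assuming SciPy and residuals already grouped by treatment cell (the arrays below are placeholders, not real residuals):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder residuals for six treatment cells (replace with fitted residuals).
residuals_by_cell = [rng.normal(0.0, 1.0, size=4) for _ in range(6)]

# Equal-variance check across cells (Levene's test is robust to non-normality).
print(stats.levene(*residuals_by_cell))

# Normality check on the pooled residuals (Shapiro-Wilk).
pooled = np.concatenate(residuals_by_cell)
print(stats.shapiro(pooled))
```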


Parameter estimation

To ensure identifiability of parameters, we can add the following "sum-to-zero" constraints:

\sum_i \alpha_i = \sum_j \beta_j = \sum_i \gamma_{ij} = \sum_j \gamma_{ij} = 0.
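Under these constraints, and for a balanced design, the least-squares estimates reduce to contrasts of sample means; this is a standard textbook result rather than something specific to this article:

```latex
\begin{aligned}
\hat{\mu} &= \bar{y}_{\cdot\cdot\cdot},\\
\hat{\alpha}_i &= \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot\cdot\cdot},\\
\hat{\beta}_j &= \bar{y}_{\cdot j\cdot} - \bar{y}_{\cdot\cdot\cdot},\\
\hat{\gamma}_{ij} &= \bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y}_{\cdot\cdot\cdot},
\end{aligned}
```

where a bar with dots denotes averaging over the dotted indices (for instance, \bar{y}_{ij\cdot} is the sample mean of cell (i,j)).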


Hypothesis testing

In the classical approach, testing null hypotheses (that the factors have no effect) is achieved via their significance, which requires calculating sums of squares. Testing whether the interaction term is significant can be difficult because of the potentially large number of degrees of freedom.
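In practice these computations are delegated to software. A minimal sketch, assuming statsmodels and a long-format DataFrame with placeholder columns y, A and B, fits the model with interaction and prints the classical ANOVA table (sums of squares, degrees of freedom, F statistics and p-values):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder observations: replace with real data.
df = pd.DataFrame({
    "y": [4.1, 3.9, 4.5, 5.0, 4.8, 5.2, 4.4, 5.1],
    "A": ["a1", "a1", "a1", "a1", "a2", "a2", "a2", "a2"],
    "B": ["b1", "b1", "b2", "b2", "b1", "b1", "b2", "b2"],
})

# Fit y ~ A + B + A:B (both main effects plus their interaction).
model = smf.ols("y ~ C(A) * C(B)", data=df).fit()

# Classical ANOVA table of sums of squares and F tests.
print(sm.stats.anova_lm(model, typ=2))
```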


Example

The following hypothetical example gives the yields of 15 plants subject to two different environmental variations and three different fertilisers. Five sums of squares are calculated, from which the sums of squared deviations required for the analysis of variance follow.
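For a balanced design with I \times J treatments and K replicates per cell, these sums of squared deviations decompose as follows (a textbook identity, stated with the notation of the previous sections rather than with this example's data):

```latex
\begin{aligned}
SS_A &= JK \sum_i \left(\bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot\cdot\cdot}\right)^2,\\
SS_B &= IK \sum_j \left(\bar{y}_{\cdot j\cdot} - \bar{y}_{\cdot\cdot\cdot}\right)^2,\\
SS_{AB} &= K \sum_{i,j} \left(\bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y}_{\cdot\cdot\cdot}\right)^2,\\
SS_{\text{error}} &= \sum_{i,j,k} \left(y_{ijk} - \bar{y}_{ij\cdot}\right)^2,\\
SS_{\text{total}} &= \sum_{i,j,k} \left(y_{ijk} - \bar{y}_{\cdot\cdot\cdot}\right)^2 = SS_A + SS_B + SS_{AB} + SS_{\text{error}}.
\end{aligned}
```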


See also

* Analysis of variance
* F test (''Includes a one-way ANOVA example'')
* Mixed model
* Multivariate analysis of variance (MANOVA)
* One-way ANOVA
* Repeated measures ANOVA
* Tukey's test of additivity

