The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with
experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of
quasi-experiments, in which
natural conditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more
independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more
dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify
control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but also planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering. Other applications include marketing and policy making. The study of the design of experiments is an important topic in
metascience.
History
Statistical experiments, following Charles S. Peirce
A theory of
statistical inference was developed by
Charles S. Peirce in "
Illustrations of the Logic of Science" (1877–1878) and "
A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
Randomized experiments
Charles S. Peirce randomly assigned volunteers to a
blinded,
repeated-measures design to evaluate their ability to discriminate weights.
Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.
Optimal designs for regression models
Charles S. Peirce also contributed the first English-language publication on an
optimal design for
regression
models in 1876. A pioneering
optimal design for
polynomial regression was suggested by
Gergonne in 1815. In 1918,
Kirstine Smith published optimal designs for polynomials of degree six (and less).
Sequences of experiments
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of
sequential analysis, a field that was pioneered by
Abraham Wald in the context of sequential tests of statistical hypotheses.
Herman Chernoff wrote an overview of optimal sequential designs,
while
adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the
multi-armed bandit, on which early work was done by
Herbert Robbins in 1952.
Fisher's principles
A methodology for designing experiments was proposed by
Ronald Fisher, in his innovative publications: the paper ''The Arrangement of Field Experiments'' (1926) and the book ''
The Design of Experiments'' (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the
lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.
[ Miller, Geoffrey (2000). ''The Mating Mind: how sexual choice shaped the evolution of human nature'', London: Heineman, (also Doubleday, ) "To biologists, he was an architect of the 'modern synthesis' that used mathematical models to integrate Mendelian genetics with Darwin's selection theories. To psychologists, Fisher was the inventor of various statistical tests that are still supposed to be used whenever possible in psychology journals. To farmers, Fisher was the founder of experimental agricultural research, saving millions from starvation through rational crop breeding programs." p.54.]
;Comparison
:In some fields of study it is not possible to have independent measurements to a traceable
metrology standard. In such cases, comparisons between treatments are much more valuable and usually preferable, and are often made against a scientific control or a traditional treatment that acts as a baseline.
;
Randomization
:Random assignment is the process of assigning individuals at random to groups, or to different conditions within a group, in an experiment, so that each individual of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which makes effects due to factors other than the treatment appear to result from the treatment.
:The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger
statistical population of units only if the experimental units are a
random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
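:For illustration, a minimal Python sketch of complete random assignment (the unit labels, group names, and seed below are hypothetical, not from any standard experimental-design library):
<syntaxhighlight lang="python">
import random

def randomize(units, treatments, seed=None):
    """Randomly assign units to treatment groups of (near-)equal size."""
    rng = random.Random(seed)
    shuffled = list(units)       # copy so the caller's sequence is untouched
    rng.shuffle(shuffled)
    # deal the shuffled units round-robin into the treatment groups
    return {t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)}

# hypothetical example: eight subjects, two arms
print(randomize(range(1, 9), ["treatment", "control"], seed=42))
</syntaxhighlight>
:Because every permutation of the units is equally likely, each unit has the same chance of ending up in each group.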
;
Statistical replication
:Measurements are usually subject to variation and
measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a
peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.
;
Blocking
:Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
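:A sketch of the corresponding procedure for blocked designs (the block and treatment names here are hypothetical): randomization is performed separately within each block of similar units.
<syntaxhighlight lang="python">
import random

def randomize_within_blocks(blocks, treatments, seed=None):
    """Assign treatments at random within each block of similar units."""
    rng = random.Random(seed)
    assignment = {}
    for block, units in blocks.items():
        shuffled = list(units)
        rng.shuffle(shuffled)
        assignment[block] = {t: shuffled[i::len(treatments)]
                             for i, t in enumerate(treatments)}
    return assignment

# hypothetical blocks: plots grouped by field, two treatments per block
blocks = {"field_A": [1, 2, 3, 4], "field_B": [5, 6, 7, 8]}
print(randomize_within_blocks(blocks, ["T1", "T2"], seed=7))
</syntaxhighlight>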
;
Orthogonality
:Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal contrast provides different information from the others. If there are ''T'' treatments and ''T'' − 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
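:The orthogonality condition is easy to check numerically. A minimal sketch (the contrast vectors below are one standard choice for ''T'' = 4 treatments, shown purely as an illustration):
<syntaxhighlight lang="python">
# For T = 4 treatments there are T - 1 = 3 mutually orthogonal contrasts.
contrasts = [
    (1,  1, -1, -1),   # treatments 1,2 versus 3,4
    (1, -1,  1, -1),   # treatments 1,3 versus 2,4
    (1, -1, -1,  1),   # treatments 1,4 versus 2,3
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# each contrast's coefficients sum to zero ...
assert all(sum(c) == 0 for c in contrasts)
# ... and every pair of contrasts has zero dot product (orthogonality)
assert all(dot(contrasts[i], contrasts[j]) == 0
           for i in range(len(contrasts))
           for j in range(i + 1, len(contrasts)))
</syntaxhighlight>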
;
Factorial experiments
:Use of factorial experiments instead of the one-factor-at-a-time method. These are efficient at evaluating the effects and possible
interactions of several factors (independent variables). Analysis of the design of experiments is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.
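:For illustration, a full factorial design simply enumerates every combination of factor levels; the factor names and levels below are hypothetical.
<syntaxhighlight lang="python">
from itertools import product

factors = {
    "temperature": [150, 180],   # two levels each, so 2**3 = 8 runs
    "pressure":    [1.0, 2.0],
    "catalyst":    ["A", "B"],
}

# one design point per combination of factor levels
design_points = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for run, point in enumerate(design_points, start=1):
    print(run, point)
</syntaxhighlight>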
Example
This example of design experiments is attributed to
Harold Hotelling, building on examples from
Frank Yates.
[ Herman Chernoff, ''Sequential Analysis and Optimal Design'', SIAM Monograph, 1972.] The experiments designed in this example involve
combinatorial designs.
Weights of eight objects are measured using a
pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a
random error. The average error is zero; the
standard deviation of the
probability distribution of the errors is the same number σ on different weighings; errors on different weighings are
independent. Denote the true weights by
:<math>\theta_1, \theta_2, \dots, \theta_8.</math>
We consider two different experiments:
# Weigh each object in one pan, with the other pan empty. Let <math>X_i</math> be the measured weight of the object, for ''i'' = 1, ..., 8.
# Do the eight weighings according to the following schedule—a weighing matrix—and let <math>Y_i</math> be the measured difference for ''i'' = 1, ..., 8:
::{| class="wikitable"
! Weighing !! Left pan !! Right pan
|-
| 1 || 1 2 3 4 5 6 7 8 || (empty)
|-
| 2 || 1 2 3 8 || 4 5 6 7
|-
| 3 || 1 4 5 8 || 2 3 6 7
|-
| 4 || 1 6 7 8 || 2 3 4 5
|-
| 5 || 2 4 6 8 || 1 3 5 7
|-
| 6 || 2 5 7 8 || 1 3 4 6
|-
| 7 || 3 4 7 8 || 1 2 5 6
|-
| 8 || 3 5 6 8 || 1 2 4 7
|}
: Then the estimated value of the weight <math>\theta_1</math> is
::<math>\widehat{\theta}_1 = \frac{Y_1 + Y_2 + Y_3 + Y_4 - Y_5 - Y_6 - Y_7 - Y_8}{8}.</math>
:Similar estimates can be found for the weights of the other items. For example
::<math>\widehat{\theta}_2 = \frac{Y_1 + Y_2 - Y_3 - Y_4 + Y_5 + Y_6 - Y_7 - Y_8}{8}.</math>
The question of design of experiments is: which experiment is better?
The variance of the estimate <math>X_1</math> of <math>\theta_1</math> is <math>\sigma^2</math> if we use the first experiment, but if we use the second experiment, the variance of the estimate given above is <math>\sigma^2/8</math>. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. Note also that although the eight estimates share the same eight measurements, with the orthogonal schedule above their errors are uncorrelated.
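The precision claim can be checked by simulation. The following sketch is an illustration added here, not part of the original example: the true weights, σ, and repetition counts are made-up values, and the sign matrix encodes the schedule reconstructed above (+1 for the left pan, −1 for the right pan).
<syntaxhighlight lang="python">
import random

TRUE = [5.0, 3.0, 9.0, 1.0, 7.0, 4.0, 6.0, 2.0]   # hypothetical theta_1..theta_8
SIGMA = 1.0
rng = random.Random(0)

H = [[ 1,  1,  1,  1,  1,  1,  1,  1],
     [ 1,  1,  1, -1, -1, -1, -1,  1],
     [ 1, -1, -1,  1,  1, -1, -1,  1],
     [ 1, -1, -1, -1, -1,  1,  1,  1],
     [-1,  1, -1,  1, -1,  1, -1,  1],
     [-1,  1, -1, -1,  1, -1,  1,  1],
     [-1, -1,  1,  1, -1, -1,  1,  1],
     [-1, -1,  1, -1,  1,  1, -1,  1]]

def estimate_theta1_separate():
    # experiment 1: weigh object 1 alone; X_1 estimates theta_1 directly
    return TRUE[0] + rng.gauss(0, SIGMA)

def estimate_theta1_schedule():
    # experiment 2: eight weighings Y_i, then theta_1-hat uses column 1 of H
    Y = [sum(h * t for h, t in zip(row, TRUE)) + rng.gauss(0, SIGMA) for row in H]
    return sum(row[0] * y for row, y in zip(H, Y)) / 8

def var(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

v1 = var([estimate_theta1_separate() for _ in range(20000)])
v2 = var([estimate_theta1_schedule() for _ in range(20000)])
print(f"separate weighings: var ~ {v1:.3f} (theory: {SIGMA**2:.3f})")
print(f"schedule:           var ~ {v2:.3f} (theory: {SIGMA**2 / 8:.3f})")
</syntaxhighlight>
The two simulated variances should come out near <math>\sigma^2</math> and <math>\sigma^2/8</math> respectively.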
Many problems of the design of experiments involve
combinatorial designs, as in this example and others.
Avoiding false positives
False positive conclusions, often resulting from the
pressure to publish or the author's own
confirmation bias, are an inherent hazard in many fields. A good way to prevent biases potentially leading to false positives in the data collection phase is to use a double-blind design. When a double-blind design is used, participants are randomly assigned to experimental groups but the researcher is unaware of what participants belong to which group. Therefore, the researcher can not affect the participants' response to the intervention.
Experimental designs with undisclosed degrees of freedom are a problem. This can lead to conscious or unconscious "
p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of
statistical analysis and the degrees of freedom until they return a figure below the p < .05 level of statistical significance. The design of the experiment should therefore include a clear statement proposing the analyses to be undertaken. P-hacking can be prevented by preregistering studies, in which researchers have to submit their data analysis plan to the journal in which they wish to publish their paper before they even start their data collection, so that no data manipulation is possible (https://osf.io). Another way to prevent this is taking the double-blind design to the data-analysis phase, where the data are sent to a data analyst unrelated to the research, who scrambles the data so there is no way of knowing which participants belong to which group before outliers are potentially removed.
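A small simulation illustrates the hazard (an added sketch, not from the original text; the group sizes and number of outcomes are arbitrary): if ten independent outcome measures are tested and only the smallest p-value is reported, the chance of a "significant" finding under the null hypothesis is roughly 1 − 0.95<sup>10</sup> ≈ 40%, not the nominal 5%.
<syntaxhighlight lang="python">
import random
from statistics import NormalDist, mean

rng = random.Random(0)
N, OUTCOMES, TRIALS = 30, 10, 2000
z = NormalDist()

def p_value(group_a, group_b):
    # two-sided z-test assuming known unit variance (a simplification)
    diff = mean(group_a) - mean(group_b)
    se = (2 / N) ** 0.5
    return 2 * (1 - z.cdf(abs(diff) / se))

hits = 0
for _ in range(TRIALS):
    # every outcome is pure noise: both groups come from the same distribution
    ps = [p_value([rng.gauss(0, 1) for _ in range(N)],
                  [rng.gauss(0, 1) for _ in range(N)])
          for _ in range(OUTCOMES)]
    hits += min(ps) < 0.05   # "p-hacked" result: report only the best outcome

print(f"false-positive rate with {OUTCOMES} outcomes: {hits / TRIALS:.2%}")
</syntaxhighlight>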
Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
Discussion topics when setting up an experimental design
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
# How many factors does the design have, and are the levels of these factors fixed or random?
# Are control conditions needed, and what should they be?
# Manipulation checks: did the manipulation really work?
# What are the background variables?
# What is the sample size? How many units must be collected for the experiment to be generalisable and have enough
power?
# What is the relevance of interactions between factors?
# What is the influence of delayed effects of substantive factors on outcomes?
# How do response shifts affect self-report measures?
# How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests?
# What about using a proxy pretest?
# Are there
lurking variables?
# Should the client/patient, researcher or even the analyst of the data be blind to conditions?
# What is the feasibility of subsequent application of different conditions to the same units?
# How many of each control and noise factors should be taken into account?
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, in which the intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can certify with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved by using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
Causal attributions
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher; that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to claim causal attribution when their design does not allow for it. For example, in observational designs, participants are not assigned randomly to conditions, so if differences in outcome variables are found between conditions, it is likely that something other than the differences between the conditions caused the differences in outcomes, that is, a third variable. The same goes for studies with a correlational design (Adér & Mellenbergh, 2008).
Statistical control
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.
To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A
manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects of
spurious, intervening, and
antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for
intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and antecedent variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a
zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
Experimental designs after Fisher
Some efficient designs for estimating several main effects were found independently and in near succession by
Raj Chandra Bose and
K. Kishen
in 1940 at the
Indian Statistical Institute, but remained little known until the
Plackett–Burman designs were published in ''
Biometrika'' in 1946. About the same time,
C. R. Rao introduced the concepts of
orthogonal arrays as experimental designs. This concept played a central role in the development of
Taguchi methods by Genichi Taguchi, which took place during his visits to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries, and subsequently were also embraced by US industry, albeit with some reservations.
In 1950,
Gertrude Mary Cox and
William Gemmell Cochran published the book ''Experimental Designs,'' which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of
linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in
linear algebra,
algebra, and combinatorics.
As with other branches of statistics, experimental design is pursued using both
frequentist and
Bayesian approaches: In evaluating statistical procedures like experimental designs,
frequentist statistics studies the
sampling distribution while
Bayesian statistics
updates a
probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, S. S. Shrikhande, J. N. Srivastava, William J. Studden, G. Taguchi, and H. P. Wynn.
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners.
Some discussion of experimental design in the context of system identification (model building for static or dynamic models) is given in the system identification literature.
Human participant constraints
Laws and ethical considerations preclude some carefully designed
experiments with human subjects. Legal constraints are dependent on
jurisdiction. Constraints may involve
institutional review boards,
informed consent
and
confidentiality affecting both clinical (medical) trials and
behavioral and social science experiments.
In the field of toxicology, for example, experimentation is performed
on laboratory ''animals'' with the goal of defining safe exposure limits
for ''humans''. Balancing
these constraints are views from the medical field.
Regarding the randomization of patients,
"... if no one knows which therapy is better, there is no ethical
imperative to use one therapy or another." (p 380) Regarding
experimental design, "...it is clearly not ethical to place subjects
at risk to collect data in a poorly designed study when this situation
can be easily avoided...". (p 393)
See also
* Adversarial collaboration
* Bayesian experimental design
* Block design
* Box–Behnken design
* Central composite design
* Clinical trial
* Clinical study design
* Computer experiment
* Control variable
* Controlling for a variable
* Experimetrics (econometrics-related experiments)
* Factor analysis
* Fractional factorial design
* Glossary of experimental design
* Grey box model
* Industrial engineering
* Instrument effect
* Law of large numbers
* Manipulation checks
* Multifactor design of experiments software
* One-factor-at-a-time method
* Optimal design
* Plackett–Burman design
* Probabilistic design
* Protocol (natural sciences)
* Quasi-experimental design
* Randomized block design
* Randomized controlled trial
* Research design
* Robust parameter design
* Royal Commission on Animal Magnetism
* Sample size determination
* Supersaturated design
* Survey sampling
* System identification
* Taguchi methods
References
Sources
* Peirce, C. S. (1877–1878), "Illustrations of the Logic of Science" (series), ''Popular Science Monthly'', vols. 12–13. Relevant individual papers:
** (1878 March), "The Doctrine of Chances", ''Popular Science Monthly'', v. 12, pp. 604–615. ''Internet Archive'' eprint.
** (1878 April), "The Probability of Induction", ''Popular Science Monthly'', v. 12, pp. 705–718. ''Internet Archive'' eprint.
** (1878 June), "The Order of Nature", ''Popular Science Monthly'', v. 13, pp. 203–217. ''Internet Archive'' eprint.
** (1878 August), "Deduction, Induction, and Hypothesis", ''Popular Science Monthly'', v. 13, pp. 470–482. ''Internet Archive'' eprint.
** (1883), "A Theory of Probable Inference", ''Studies in Logic'', pp. 126–181, Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing Company.)
External links
* "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
* Box–Behnken designs, from the "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
* Detailed mathematical developments of most common DoE in the Opera Magistris v3.6 online reference, Chapter 15, section 7.4