A glossary of terms used in experimental research.


Related fields

* Statistics
* Experimental design
* Estimation theory


Glossary

* Alias: When the estimate of an effect also includes the influence of one or more other effects (usually high-order interactions), the effects are said to be aliased (see confounding). For example, if the estimate of effect ''D'' in a four-factor experiment actually estimates (''D'' + ''ABC''), then the main effect ''D'' is aliased with the 3-way interaction ''ABC''. Note: this causes no difficulty when the higher-order interaction is either non-existent or insignificant.
* Analysis of variance (ANOVA): A mathematical process for separating the variability of a group of observations into assignable causes and setting up various significance tests.
* Balanced design: An experimental design where all cells (i.e. treatment combinations) have the same number of observations.
* Blocking: A schedule for conducting treatment combinations in an experimental study such that any effects on the experimental results due to a known change in raw materials, operators, machines, etc., become concentrated in the levels of the blocking variable. Note: the reason for blocking is to isolate a systematic effect and prevent it from obscuring the main effects. Blocking is achieved by restricting randomization.
* Center points: Points at the center value of all factor ranges.
* Coding factor levels: Transforming the scale of measurement for a factor so that the high value becomes +1 and the low value becomes −1 (see scaling). After coding all factors in a 2-level full factorial experiment, the design matrix has all orthogonal columns. Coding is a simple linear transformation of the original measurement scale. If the "high" value is ''X''H and the "low" value is ''X''L (in the original scale), then the scaling transformation takes any original ''X'' value and converts it to (''X'' − ''a'')/''b'', where ''a'' = (''X''H + ''X''L)/2 and ''b'' = (''X''H − ''X''L)/2. To go back to the original measurement scale, multiply the coded value by ''b'' and add ''a'': ''X'' = ''b'' × (coded value) + ''a''. For example, if the factor is temperature, the high setting is 65°C, and the low setting is 55°C, then ''a'' = (65 + 55)/2 = 60 and ''b'' = (65 − 55)/2 = 5. The center point (where the coded value is 0) has a temperature of 5(0) + 60 = 60°C.
* Comparative design: A design that allows the (typically mean-unbiased) estimation of the difference in factor effects, especially the difference in treatment effects. Differences between treatment effects can be estimated with greater reliability than absolute treatment effects.
* Confounding: A confounded design is one where some treatment effects (main or interactions) are estimated by the same linear combination of the experimental observations as some blocking effects. In this case, the treatment effect and the blocking effect are said to be confounded. Confounding is also used as a general term to indicate that the value of a main-effect estimate comes from both the main effect itself and contamination or bias from higher-order interactions. Note: confounded designs naturally arise when full factorial designs have to be run in blocks and the block size is smaller than the number of different treatment combinations. They also occur whenever a fractional factorial design is chosen instead of a full factorial design.
* Control group: A set of experimental units to which incidental treatments are applied but not main treatments. For example, in applying a herbicide as one treatment, plots receiving that treatment might be driven over by a machine applying the herbicide, but plots not receiving the herbicide would not normally be driven over. The machine traffic is an incidental treatment. If there were a concern that the machine traffic might have an effect on the variable being measured (e.g. death of strawberry plants), then a control treatment would receive the machine traffic but no herbicide. Control groups are a way of eliminating the possibility of incidental treatments being the cause of measured effects: the incidental treatments are controlled for. Compare treatment groups. A treatment that is only the absence of the manipulation being studied is simply one of the treatments and not a control, though it is now common to refer to a non-manipulated treatment as a control.
* Crossed factors: See factors below.
* Design: A set of experimental runs which allows you to fit a particular model and estimate your desired effects.
* Design matrix: A matrix description of an experiment that is useful for constructing and analyzing experiments.
* Design of experiments: A systematic, rigorous approach to engineering problem-solving that applies principles and techniques at the data-collection stage so as to ensure the generation of valid, defensible, and supportable engineering conclusions.
* Design point: A single combination of settings for the independent variables of an experiment. A design of experiments results in a set of design points, and each design point is intended to be executed one or more times, with the number of iterations based on the required statistical significance for the experiment.
* Effect (of a factor): How changing the settings of a factor changes the response. The effect of a single factor is also called a main effect. A treatment effect may be assumed to be the same for each experimental unit, by the assumption of treatment-unit additivity; more generally, the treatment effect may be the average effect. Other effects may be block effects. (For a factor A with two levels, scaled so that low = −1 and high = +1, the effect of A has a mean-unbiased estimator evaluated by subtracting the average observed response when A = −1 from the average observed response when A = +1 and dividing the result by 2; division by 2 is needed because the −1 level is 2 scaled units away from the +1 level.)
* Error: Unexplained variation in a collection of observations. See errors and residuals in statistics. Note: experimental designs typically require understanding of both random error and lack-of-fit error.
* Experimental unit: The entity to which a specific treatment combination is applied. For example, an experimental unit can be a:
** PC board
** silicon wafer
** tray of components simultaneously treated
** individual agricultural plant
** plot of land
** automotive transmission
** living organism or part of one
* Factors: Process inputs that an investigator manipulates to cause a corresponding change in the output. Some factors cannot be controlled by the experimenter but may affect the responses. These uncontrolled factors should be measured and used in the data analysis if their effect is significant. Note: the inputs can be discrete or continuous.
** Crossed factors: Two factors are crossed if every level of one occurs with every level of the other in the experiment.
** Nested factors: A factor "A" is nested within another factor "B" if the levels or values of "A" are different for every level or value of "B". Note: nested factors or effects have a hierarchical relationship.
* Fixed effect: An effect associated with an input variable that has a limited number of levels, or in which only a limited number of levels are of interest to the experimenter.
* Interaction: Occurs when the effect of one factor on a response depends on the level of one or more other factors.
* Lack-of-fit error: Error that occurs when the analysis omits one or more important terms or factors from the process model. Note: including replication in a designed experiment allows separation of experimental error into its components: lack of fit and random (pure) error.
* Model: A mathematical relationship which relates changes in a given response to changes in one or more factors.
* Nested factors: See factors above.
* Orthogonality: Two vectors of the same length are orthogonal if the sum of the products of their corresponding elements is 0. Note: an experimental design is orthogonal if the effects of any factor balance out (sum to zero) across the effects of the other factors.
* Paradigm: A model created given the basic design, the hypothesis, and the particular conditions for the experiment.
* Random effect: An effect associated with input variables chosen at random from a population having a large or infinite number of possible values.
* Random error: Error that occurs due to natural variation in the process. Note: random error is typically assumed to be normally distributed with zero mean and a constant variance. Random error is also called experimental error.
* Randomization: A schedule for allocating treatment material and for conducting treatment combinations in a designed experiment such that the conditions in one run neither depend on the conditions of the previous run nor predict the conditions in subsequent runs. Note: the importance of randomization cannot be overstressed. Randomization is necessary for conclusions drawn from the experiment to be correct, unambiguous, and defensible.
* Regression discontinuity design: A design in which assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold.
* Replication: Performing the same treatment combination more than once. Note: including replication allows an estimate of the random error independent of any lack-of-fit error.
* Resolution: In fractional factorial designs, "resolution" describes the degree to which estimated main effects are aliased (or confounded) with estimated higher-order interactions (two-factor interactions, three-factor interactions, etc.). In general, the resolution of a design is one more than the smallest-order interaction that is aliased with some main effect. If some main effects are confounded with some two-factor interactions, the resolution is 3. Note: full factorial designs have no confounding and are said to have resolution "infinity". For most practical purposes, a resolution 5 design is excellent and a resolution 4 design may be adequate. Resolution 3 designs are useful as economical screening designs.
* Response(s): The output(s) of a process. Sometimes called dependent variable(s).
* Response surface: A designed experiment that models the quantitative response, especially for the short-term goal of improving a process and the longer-term goal of finding optimum factor values. Traditionally, response surfaces have been modeled with quadratic polynomials, whose estimation requires that every factor have three levels.
* Rotatability: A design is rotatable if the variance of the predicted response at any point ''x'' depends only on the distance of ''x'' from the design center point. A design with this property can be rotated around its center point without changing the prediction variance at ''x''. Note: rotatability is a desirable property for response surface designs (i.e. quadratic model designs).
* Scaling factor levels: Transforming factor levels so that the high value becomes +1 and the low value becomes −1.
* Screening design: A designed experiment that identifies which of many factors have a significant effect on the response. Note: typically screening designs have more than 5 factors.
* Test plan: A written document that gives a specific listing of the test procedures and sequence to be followed.
* Treatment: A specific combination of factor levels whose effect is to be compared with other treatments.
* Treatment combination: The combination of the settings of several factors in a given experimental trial. Also known as a run.
* Treatment group: See control group.
* Variance components: Partitioning of the overall variation into assignable components.
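The coding, orthogonality, and main-effect definitions above can be made concrete with a short sketch (plain Python, no external libraries; the response values are invented for illustration, and `code` and `main_effect` are our own helper names, not standard API):

```python
from itertools import product

def code(x, x_low, x_high):
    """Map an original-scale value x onto the coded [-1, +1] scale."""
    a = (x_high + x_low) / 2  # midpoint of the factor range
    b = (x_high - x_low) / 2  # half-width of the factor range
    return (x - a) / b

# Coding the temperature factor from the glossary: low = 55 degC, high = 65 degC.
print(code(65, 55, 65))  # high setting  -> 1.0
print(code(60, 55, 65))  # center point  -> 0.0

# Coded 2^2 full factorial: every combination of factors A and B in {-1, +1}.
design = [dict(A=a, B=b) for a, b in product((-1, +1), repeat=2)]

# Orthogonality: the sum of element-wise products of the A and B columns is 0.
print(sum(run["A"] * run["B"] for run in design))  # -> 0

# Hypothetical responses for the four runs (invented numbers for illustration).
responses = [10.0, 14.0, 11.0, 15.0]  # run order matches `design`

def main_effect(factor):
    """Half the difference between the mean response at +1 and at -1."""
    hi = [y for run, y in zip(design, responses) if run[factor] == +1]
    lo = [y for run, y in zip(design, responses) if run[factor] == -1]
    return (sum(hi) / len(hi) - sum(lo) / len(lo)) / 2

print(main_effect("A"))  # -> 0.5
print(main_effect("B"))  # -> 2.0
```

Because the coded columns are orthogonal, each main effect here is estimated independently of the other; in a fractional (rather than full) factorial, some of these estimates would instead be aliased as described under resolution.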


See also

* Glossary of probability and statistics
* Notation in probability and statistics
* Glossary of clinical research
* List of statistical topics


References


External links

* {{NIST-PD}}