TOLERANCE INTERVAL

A TOLERANCE INTERVAL is a statistical interval within which, with some confidence level, a specified proportion of a sampled population falls. "More specifically, a 100×p%/100×(1−α) tolerance interval provides limits within which at least a certain proportion (p) of the population falls with a given level of confidence (1−α)." "A (p, 1−α) tolerance interval (TI) based on a sample is constructed so that it would include at least a proportion p of the sampled population with confidence 1−α; such a TI is usually referred to as a p-content, (1−α)-coverage TI." "A (p, 1−α) upper TOLERANCE LIMIT (TL) is simply a 1−α upper confidence limit for the 100p percentile of the population."

A tolerance interval can be seen as a statistical version of a probability interval. "In the parameters-known case, a 95% tolerance interval and a 95% prediction interval are the same." If we knew a population's exact parameters, we would be able to compute a range within which a certain proportion of the population falls. For example, if we know a population is normally distributed with mean $\mu$ and standard deviation $\sigma$, then the interval $\mu \pm 1.96\sigma$ includes 95% of the population (1.96 is the z-score for 95% coverage of a normally distributed population).

However, if we have only a sample from the population, we know only the sample mean $\hat{\mu}$ and sample standard deviation $\hat{\sigma}$, which are only estimates of the true parameters. In that case, $\hat{\mu} \pm 1.96\hat{\sigma}$ will not necessarily include 95% of the population, due to variance in these estimates. A tolerance interval bounds this variance by introducing a confidence level $\gamma$, which is the confidence with which this interval actually includes the specified proportion of the population. For a normally distributed population, a z-score can be transformed into a "k factor" or TOLERANCE FACTOR for a given $\gamma$ via lookup tables or several approximation formulas. "As the degrees of freedom approach infinity, the prediction and tolerance intervals become equal."
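As a rough illustration of how a coverage proportion and a confidence level are turned into a k factor, the following sketch uses Howe's approximation (one of the approximation formulas alluded to above), assuming SciPy; the function name and the check value are illustrative only.

```python
# Sketch: turning coverage p and confidence gamma into a two-sided "k factor"
# using Howe's approximation (one of several approximation formulas).
import numpy as np
from scipy import stats

def k_factor_two_sided(n, p=0.95, gamma=0.95):
    """Approximate k such that xbar +/- k*s contains at least a proportion p
    of a normal population with confidence gamma (sample of size n)."""
    z = stats.norm.ppf((1 + p) / 2)            # z-score for central coverage p
    chi2 = stats.chi2.ppf(1 - gamma, n - 1)    # chi-squared value exceeded w.p. gamma
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

# For n = 20, p = 0.95, gamma = 0.95 the factor is about 2.75, noticeably larger
# than the parameters-known z-score of 1.96.
print(k_factor_two_sided(20))
```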

CONTENTS

* 1 Formulas

* 1.1 Normal case

* 2 Relation to other intervals

* 2.1 Examples

* 3 Calculation

* 4 See also

* 5 References

* 6 Further reading

FORMULAS

NORMAL CASE
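For a normally distributed population, with $\bar{x}$ and $s$ the mean and standard deviation of a sample of size $n$, the following is a sketch of the standard results: the exact one-sided factor based on the noncentral t-distribution (as described under "Calculation" below) and Howe's approximation for the two-sided factor, one of several approximation formulas (see the NIST/Sematech handbook cited in the references). The particular notation and the choice of approximation here are assumptions, not taken verbatim from this article.

The one-sided upper tolerance limit with content $p$ and confidence $\gamma = 1-\alpha$ is

$\bar{x} + k_1 s$, where $k_1 = \dfrac{t_{n-1,\,\gamma}(\delta)}{\sqrt{n}}$ and $\delta = z_p \sqrt{n}$,

with $t_{n-1,\gamma}(\delta)$ the $\gamma$ quantile of the noncentral t-distribution with $n-1$ degrees of freedom and noncentrality parameter $\delta$, and $z_p$ the $p$ quantile of the standard normal distribution. The lower limit is $\bar{x} - k_1 s$ with the same factor.

An approximate two-sided tolerance interval is $\bar{x} \pm k_2 s$ with, under Howe's approximation,

$k_2 \approx z_{(1+p)/2} \sqrt{\dfrac{(n-1)\left(1 + 1/n\right)}{\chi^2_{1-\gamma,\,n-1}}}$,

where $\chi^2_{1-\gamma,\,n-1}$ is the value that a chi-squared variable with $n-1$ degrees of freedom exceeds with probability $\gamma$.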

RELATION TO OTHER INTERVALS

Main article: Interval estimation

The tolerance interval is less widely known than the confidence interval and prediction interval, a situation some educators have lamented, as it can lead to misuse of the other intervals where a tolerance interval is more appropriate.

The tolerance interval differs from a confidence interval in that the confidence interval bounds a single-valued population parameter (the mean or the variance, for example) with some confidence, while the tolerance interval bounds the range of data values that includes a specific proportion of the population. Whereas a confidence interval's size is entirely due to sampling error, and will approach a zero-width interval at the true population parameter as sample size increases, a tolerance interval's size is due partly to sampling error and partly to actual variance in the population, and will approach the population's probability interval as sample size increases.
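This contrast can be made concrete with a small simulation. The sketch below (arbitrary parameters and sample sizes, Howe's approximate two-sided factor, SciPy assumed) shows the 95% confidence interval for the mean shrinking as n grows, while the 95%/95% tolerance interval settles near the population's 95% probability interval, $\mu \pm 1.96\sigma$.

```python
# Sketch: the CI for the mean shrinks with n; the tolerance interval does not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma = 10.0, 2.0   # arbitrary "true" population parameters

def howe_k(n, p=0.95, gamma=0.95):
    # Approximate two-sided tolerance factor (Howe's method, see above).
    z = stats.norm.ppf((1 + p) / 2)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / stats.chi2.ppf(1 - gamma, n - 1))

for n in (10, 100, 10_000):
    x = rng.normal(mu, sigma, size=n)
    s = x.std(ddof=1)
    ci_half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)   # 95% CI for the mean
    ti_half = howe_k(n) * s                                 # 95%/95% tolerance interval
    print(f"n={n:>6}  CI half-width={ci_half:.3f}  TI half-width={ti_half:.3f}")
# The CI half-width tends to 0; the TI half-width approaches ~1.96*sigma = 3.92.
```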

The tolerance interval is related to a prediction interval in that both put bounds on variation in future samples. The prediction interval only bounds a single future sample, however, whereas a tolerance interval bounds the entire population (equivalently, an arbitrary sequence of future samples). In other words, a prediction interval covers a specified proportion of a population on average, whereas a tolerance interval covers it with a certain confidence level, making the tolerance interval more appropriate if a single interval is intended to bound multiple future samples.
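The "on average" versus "with confidence" distinction can likewise be checked by simulation. In the sketch below (illustrative settings only, SciPy assumed), the plug-in 95% prediction interval captures about 95% of the population on average but falls short of 95% content in a large fraction of individual samples, whereas the 95%/95% tolerance interval achieves at least 95% content in roughly 95% of samples.

```python
# Sketch: a prediction interval covers ~95% of the population only on average,
# while a 95%/95% tolerance interval covers at least 95% in ~95% of samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 1.0, 20, 5000

def howe_k(n, p=0.95, gamma=0.95):
    # Approximate two-sided tolerance factor (Howe's method).
    z = stats.norm.ppf((1 + p) / 2)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / stats.chi2.ppf(1 - gamma, n - 1))

def content(center, half):
    # True proportion of the N(mu, sigma^2) population inside center +/- half.
    return stats.norm.cdf(center + half, mu, sigma) - stats.norm.cdf(center - half, mu, sigma)

pi_content, ti_content = [], []
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    xbar, s = x.mean(), x.std(ddof=1)
    pi_content.append(content(xbar, stats.t.ppf(0.975, n - 1) * s * np.sqrt(1 + 1 / n)))
    ti_content.append(content(xbar, howe_k(n) * s))

pi_content, ti_content = np.array(pi_content), np.array(ti_content)
print("mean content of prediction interval:", pi_content.mean())               # ~0.95
print("P(prediction interval content >= 0.95):", (pi_content >= 0.95).mean())  # well below 0.95
print("P(tolerance interval content >= 0.95):", (ti_content >= 0.95).mean())   # ~0.95
```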

EXAMPLES

Vardeman (1992) gives the following example:

So consider once again a proverbial EPA mileage test scenario, in which several nominally identical autos of a particular model are tested to produce mileage figures $y_1, y_2, \ldots, y_n$. If such data are processed to produce a 95% confidence interval for the mean mileage of the model, it is, for example, possible to use it to project the mean or total gasoline consumption for the manufactured fleet of such autos over their first 5,000 miles of use. Such an interval would, however, not be of much help to a person renting one of these cars and wondering whether the (full) 10-gallon tank of gas will suffice to carry him the 350 miles to his destination. For that job, a prediction interval would be much more useful. (Consider the differing implications of being "95% sure" that $\mu \geq 35$ as opposed to being "95% sure" that $y_{n+1} \geq 35$.) But neither a confidence interval for $\mu$ nor a prediction interval for a single additional mileage is exactly what is needed by a design engineer charged with determining how large a gas tank the model really needs to guarantee that 99% of the autos produced will have a 400-mile cruising range. What the engineer really needs is a tolerance interval for a fraction $p = 0.99$ of mileages of such autos.
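A hedged sketch of the engineer's calculation, using entirely made-up mileage figures (the article does not give the data): a lower tolerance limit with content p = 0.99 and 95% confidence, computed from the exact one-sided factor based on the noncentral t-distribution described under "Calculation" below, assuming SciPy.

```python
# Sketch of the engineer's tolerance limit, on made-up mileage data (mpg).
import numpy as np
from scipy import stats

mileage = np.array([41.2, 39.8, 40.5, 42.1, 38.9,
                    40.0, 41.7, 39.5, 40.9, 40.3])   # hypothetical test results
n = len(mileage)
xbar, s = mileage.mean(), mileage.std(ddof=1)

p, conf = 0.99, 0.95
# Exact one-sided tolerance factor: noncentral t quantile with noncentrality
# z_p * sqrt(n), divided by sqrt(n).
k1 = stats.nct.ppf(conf, df=n - 1, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
lower_limit = xbar - k1 * s

print(f"With 95% confidence, at least 99% of autos get more than {lower_limit:.1f} mpg")
# A tank sized as 400 / lower_limit gallons would then give 99% of autos a
# 400-mile range, with that confidence.
```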

Another example is given by Krishnamoorthy (2009):

The air lead levels were collected from $n = 15$ different areas within the facility. It was noted that the log-transformed lead levels fitted a normal distribution well (that is, the data are from a lognormal distribution). Let $\mu$ and $\sigma^2$, respectively, denote the population mean and variance for the log-transformed data. If $X$ denotes the corresponding random variable, we thus have $X \sim \mathcal{N}(\mu, \sigma^2)$. We note that $\exp(\mu)$ is the median air lead level. A confidence interval for $\mu$ can be constructed the usual way, based on the t-distribution; this in turn will provide a confidence interval for the median air lead level. If $\bar{X}$ and $S$ denote the sample mean and standard deviation of the log-transformed data for a sample of size $n$, a 95% confidence interval for $\mu$ is given by $\bar{X} \pm t_{n-1,0.975}\, S/\sqrt{n}$, where $t_{m,1-\alpha}$ denotes the $1-\alpha$ quantile of a t-distribution with $m$ degrees of freedom. It may also be of interest to derive a 95% upper confidence bound for the median air lead level. Such a bound for $\mu$ is given by $\bar{X} + t_{n-1,0.95}\, S/\sqrt{n}$. Consequently, a 95% upper confidence bound for the median air lead level is given by $\exp\left(\bar{X} + t_{n-1,0.95}\, S/\sqrt{n}\right)$.

Now suppose we want to predict the air lead level at a particular area within the laboratory. A 95% upper prediction limit for the log-transformed lead level is given by $\bar{X} + t_{n-1,0.95}\, S\sqrt{1 + 1/n}$. A two-sided prediction interval can be similarly computed. The meaning and interpretation of these intervals are well known. For example, if the confidence interval $\bar{X} \pm t_{n-1,0.975}\, S/\sqrt{n}$ is computed repeatedly from independent samples, 95% of the intervals so computed will include the true value of $\mu$, in the long run. In other words, the interval is meant to provide information concerning the parameter $\mu$ only. A prediction interval has a similar interpretation, and is meant to provide information concerning a single lead level only.

Now suppose we want to use the sample to conclude whether or not at least 95% of the population lead levels are below a threshold. The confidence interval and prediction interval cannot answer this question, since the confidence interval is only for the median lead level, and the prediction interval is only for a single lead level. What is required is a tolerance interval; more specifically, an upper tolerance limit. The upper tolerance limit is to be computed subject to the condition that at least 95% of the population lead levels are below the limit, with a certain confidence level, say 99%.
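The quantities described in this passage can be sketched as follows, with hypothetical measurements since the original n = 15 values are not reproduced here; the tolerance factor uses the exact noncentral-t construction from the "Calculation" section, assuming SciPy.

```python
# Sketch of the three limits discussed above, on hypothetical measurements
# (the original n = 15 air lead levels are not reproduced in the article).
import numpy as np
from scipy import stats

lead = np.array([7.2, 12.5, 9.8, 15.1, 8.4, 11.0, 6.9, 13.3,
                 10.2, 9.1, 14.0, 7.8, 12.1, 8.9, 10.7])    # made-up lead levels
x = np.log(lead)                    # log-transformed data, assumed ~ N(mu, sigma^2)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

t95 = stats.t.ppf(0.95, n - 1)
ucb_median = np.exp(xbar + t95 * s / np.sqrt(n))       # 95% UCB for the median exp(mu)
upl = np.exp(xbar + t95 * s * np.sqrt(1 + 1 / n))      # 95% upper prediction limit

# Upper tolerance limit: at least 95% of lead levels below it, with 99% confidence.
k1 = stats.nct.ppf(0.99, df=n - 1, nc=stats.norm.ppf(0.95) * np.sqrt(n)) / np.sqrt(n)
utl = np.exp(xbar + k1 * s)

print(f"UCB(median)={ucb_median:.1f}  prediction limit={upl:.1f}  tolerance limit={utl:.1f}")
```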

CALCULATION

One-sided normal tolerance intervals have an exact solution in terms of the sample mean and sample variance based on the noncentral t-distribution. Two-sided normal tolerance intervals can be obtained based on the noncentral chi-squared distribution.
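A minimal sketch of the one-sided case, assuming SciPy's noncentral t-distribution, together with a quick Monte Carlo check that the resulting limit has the stated confidence property (the check settings are arbitrary); the R package "tolerance" cited below provides full implementations of both cases.

```python
# Sketch: exact one-sided normal tolerance factor via the noncentral
# t-distribution, plus a Monte Carlo check of its confidence property.
import numpy as np
from scipy import stats

def k_one_sided(n, p=0.95, conf=0.95):
    """k such that xbar + k*s is an upper (p, conf) tolerance limit."""
    return stats.nct.ppf(conf, df=n - 1, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)

# Check: the limit should exceed the population's p-quantile in ~conf of samples.
rng = np.random.default_rng(2)
n, p, conf, reps = 15, 0.95, 0.95, 20_000
k = k_one_sided(n, p, conf)
hits = 0
for _ in range(reps):
    x = rng.normal(size=n)                      # standard normal population
    if x.mean() + k * x.std(ddof=1) >= stats.norm.ppf(p):
        hits += 1
print(hits / reps)   # should be close to 0.95
```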

SEE ALSO

* Engineering tolerance

REFERENCES

* D. S. Young (2010). Review of "Statistical Tolerance Regions: Theory, Applications, and Computation". Technometrics, 52 (1), pp. 143–144.
* Krishnamoorthy, K. and Lian, Xiaodong (2011). "Closed-form approximate tolerance intervals for some general linear models and comparison studies". Journal of Statistical Computation and Simulation. First published 13 June 2011. doi:10.1080/00949655.2010.545061
* Thomas P. Ryan (22 June 2007). Modern Engineering Statistics. John Wiley & Sons. pp. 222–. ISBN 978-0-470-12843-5. Retrieved 22 February 2013.
* "Statistical interpretation of data — Part 6: Determination of statistical tolerance intervals". ISO 16269-6. 2005. p. 64.
* "Tolerance intervals for a normal distribution". Engineering Statistics Handbook. NIST/Sematech. 2010. Retrieved 2011-08-26.
* De Gryze, S.; Langhans, I.; Vandebroek, M. (2007). "Using the correct intervals for prediction: A tutorial on tolerance intervals for ordinary least-squares regression". Chemometrics and Intelligent Laboratory Systems. 87 (2): 147. doi:10.1016/j.chemolab.2007.03.002
* Stephen B. Vardeman (1992). "What about the Other Intervals?". The American Statistician. 46 (3): 193–197. JSTOR 2685212. doi:10.2307/2685212
* Mark J. Nelson (2011-08-14). "You might want a tolerance interval". Retrieved 2011-08-26.
* K. Krishnamoorthy (2009). Statistical Tolerance Regions: Theory, Applications, and Computation. John Wiley and Sons. pp. 1–6. ISBN 0-470-38026-8.
* Derek S. Young (August 2010). "tolerance: An R Package for Estimating Tolerance Intervals". Journal of Statistical Software. 36 (5): 1–39, p. 23. ISSN 1548-7660. Retrieved 19 February 2013.

FURTHER READING

* K. Krishnamoorthy (2009). Statistical Tolerance Regions: Theory, Applications, and Computation. John Wiley and Sons. ISBN 0-470-38026-8. Chapter 1, "Preliminaries", is available at http://media.wiley.com/product_data/excerpt/68/04703802/0470380268.pdf
* Derek S. Young (August 2010). "tolerance: An R Package for Estimating Tolerance Intervals". Journal of Statistical Software. 36 (5): 1–39. ISSN 1548-7660. Retrieved 19 February 2013.
* ISO 16269-6, Statistical interpretation of data, Part 6: Determination of statistical tolerance intervals, Technical Committee ISO/TC 69, Applications of statistical methods. Available at http://standardsproposals.bsigroup.com/home/getpdf/458
