Distribution-free Control Chart
Distribution-free Control Chart
Distribution-free (nonparametric) control charts are among the most important tools of statistical process monitoring and control. Implementation of distribution-free control charts does not require any knowledge of the underlying process distribution or its parameters. The main advantage of distribution-free control charts is their in-control robustness: irrespective of the nature of the underlying process distribution, the properties of these charts remain the same when the process operates smoothly, without the presence of any assignable cause. Early research on nonparametric control charts dates back to 1981, when P.K. Bhattacharya and D. Frierson introduced a nonparametric control chart for detecting small disorders. However, major growth of nonparametric control charting schemes has taken place only in recent years. Popular distribution-free control charts: There are distribution-free control charts for both Phase-I analysis and ...
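Schemes of this kind can be illustrated with a short sketch. The following Phase-II "sign chart" is one standard distribution-free design (the function names and the 0.01 false-alarm rate here are illustrative choices, not from the article): in each subgroup it counts the observations above a known in-control median, and under any continuous in-control distribution that count is Binomial(n, 1/2), so the limits hold regardless of the distribution's shape.

```python
# Sketch of a Phase-II distribution-free "sign chart".
# Assumptions: continuous in-control distribution with known median;
# names and alpha are illustrative.
from math import comb

def binom_sign_limits(n, alpha=0.01):
    """Largest lcl with P(C <= lcl) <= alpha/2 and smallest ucl with
    P(C >= ucl) <= alpha/2, for C ~ Binomial(n, 1/2)."""
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    lcl = -1
    while lcl + 1 <= n and sum(pmf[: lcl + 2]) <= alpha / 2:
        lcl += 1
    ucl = n + 1
    while ucl - 1 >= 0 and sum(pmf[ucl - 1:]) <= alpha / 2:
        ucl -= 1
    return lcl, ucl

def sign_chart(subgroups, target_median, alpha=0.01):
    """For each subgroup, return (count above median, signal?)."""
    n = len(subgroups[0])
    lcl, ucl = binom_sign_limits(n, alpha)
    counts = [sum(v > target_median for v in sg) for sg in subgroups]
    return [(c, c <= lcl or c >= ucl) for c in counts]
```

Because the plotted statistic is a count of signs, the in-control false-alarm rate is the same for every continuous process distribution — this is exactly the in-control robustness described above.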


Distribution-free
Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution with the distribution's parameters unspecified. It includes both descriptive statistics and statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are violated. Definitions: The term "nonparametric statistics" has been imprecisely defined in the following two ways, among others. Applications and purpose: Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of ...


Control Chart
A control chart is a graph used in production control to determine whether quality and manufacturing processes are being controlled under stable conditions (ISO 7870-1). The hourly status is arranged on the graph, and the occurrence of abnormalities is judged based on the presence of data that differ from the conventional trend or deviate from the control limit line. Control charts are classified into Shewhart individuals control charts (ISO 7870-2) and cumulative sum (CUSUM) control charts (ISO 7870-4). Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts, are a statistical process control tool used to determine if a manufacturing or business process is in a state of control. It is more appropriate to say that control charts are the graphical device for Statistical Process Monitoring (SPM). Traditional control charts are mostly designed to monitor process parameters when the underlying form of the process distributio ...
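As a minimal sketch of a Shewhart individuals chart in the spirit of ISO 7870-2 (the helper name and sample data are illustrative), the control limits are placed at the mean plus or minus 2.66 times the average moving range, where 2.66 = 3/d2 with d2 ≈ 1.128 for moving ranges of size 2:

```python
# Sketch of a Shewhart individuals (I) chart with moving-range limits.
# The 2.66 constant (3/d2, d2 = 1.128 for n = 2) is the standard choice.
def individuals_chart(x):
    mr = [abs(b - a) for a, b in zip(x, x[1:])]  # moving ranges of size 2
    mr_bar = sum(mr) / len(mr)                   # average moving range
    center = sum(x) / len(x)                     # center line
    ucl = center + 2.66 * mr_bar                 # upper control limit
    lcl = center - 2.66 * mr_bar                 # lower control limit
    out_of_control = [v for v in x if v > ucl or v < lcl]
    return center, lcl, ucl, out_of_control
```

A point outside (lcl, ucl) is the signal that the process may no longer be in a state of control.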


Statistical Process Control
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (rework or scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is manufacturing lines. SPC must be practiced in two phases: The first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision of the period to be examined must be made, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of ...



Probability Distribution
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if ''X'' is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of ''X'' would take the value 0.5 (1 in 2 or 1/2) for ''X'' = heads, and 0.5 for ''X'' = tails (assuming that the coin is fair). Examples of random phenomena include the weather conditions at some future date, the height of a randomly selected person, the fraction of male students in a school, the results of a survey to be conducted, etc. Introduction: A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often denoted by Ω, is the set of all possible outcomes of a rando ...
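The fair-coin example can be written out directly: a discrete probability distribution is an assignment of probabilities, summing to 1, to the outcomes in the sample space, and the probability of an event is the sum over the outcomes it contains (the names below are illustrative):

```python
# A discrete probability distribution as a mapping from outcomes to
# probabilities; the probabilities over the sample space must sum to 1.
fair_coin = {"heads": 0.5, "tails": 0.5}
assert abs(sum(fair_coin.values()) - 1.0) < 1e-12

def prob_event(dist, event):
    """P(event) for an event given as a subset of the sample space."""
    return sum(p for outcome, p in dist.items() if outcome in event)
```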




Common Cause And Special Cause (statistics)
Common and special causes are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while "special causes" are unusual, not previously observed, non-quantifiable variation. The distinction is fundamental in the philosophy of statistics and the philosophy of probability; the different treatment of these issues is a classic question of probability interpretations, recognised and discussed as early as 1703 by Gottfried Leibniz, and various alternative names have been used over the years. The distinction has been particularly important in the thinking of economists Frank Knight, John Maynard Keynes and G. L. S. Shackle. Origins and concepts: In 1703, Jacob Bernoulli wrote to Gottfried Leibniz to discuss their shared interest in applying mathematics and probability to game ...



Univariate (statistics)
Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. Like other types of data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed. Univariate data types: Some univariate data consist of numbers (such as a height of 65 inches or a weight of 100 pounds), while others are non-numerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types. Categorical univariate data: Categorical univariate data consists of non-numerical observations that may be placed in categories. It includes labels or names used to identify an attribute of each element. Categorical univariate data usually uses either nominal or ordin ...
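The two data types can be illustrated with a small sketch (the variable names and sample values are illustrative): numerical univariate data is summarized by location statistics, categorical univariate data by frequency counts.

```python
# Numerical vs. categorical univariate data, each with its natural summary.
from collections import Counter
from statistics import mean, median

heights_in = [65, 70, 62, 68, 65]          # numerical univariate data
eye_colors = ["brown", "blue", "brown"]    # categorical univariate data

num_summary = {"mean": mean(heights_in), "median": median(heights_in)}
cat_summary = Counter(eye_colors)          # frequency of each category
```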


Sign Test
The sign test is a statistical method to test for consistent differences between pairs of observations, such as the weight of subjects before and after treatment. Given pairs of observations (such as weight pre- and post-treatment) for each subject, the sign test determines if one member of the pair (such as pre-treatment) tends to be greater than (or less than) the other member of the pair (such as post-treatment). The paired observations may be designated ''x'' and ''y''. For comparisons of paired observations (''x'', ''y''), the sign test is most useful if comparisons can only be expressed as ''x'' > ''y'', ''x'' = ''y'', or ''x'' < ''y''. Let ''W'' be the number of pairs for which ''y'' − ''x'' > 0. Assuming that H0 is true, ''W'' follows a binomial distribution, ''W'' ~ b(''m'', 0.5), where ''m'' is the number of untied pairs. Assumptions: Let ''Z''i = ''Y''i − ''X''i for ''i'' = 1, ..., ''n''. # The differences ''Z''i are assumed to be independent. # Each ''Z''i comes from the same continuous population. # The values ''X''i ...
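The test can be sketched directly (the function name is illustrative; tied pairs are dropped, as is conventional): ''W'' counts the positive differences among the ''m'' untied pairs, and the exact two-sided p-value doubles the smaller binomial tail.

```python
# Sketch of the exact two-sided sign test: under H0, the number of
# positive differences W among the m untied pairs is Binomial(m, 1/2).
from math import comb

def sign_test(x, y):
    diffs = [b - a for a, b in zip(x, y) if b != a]   # drop tied pairs
    m = len(diffs)
    w = sum(d > 0 for d in diffs)
    pmf = [comb(m, k) * 0.5 ** m for k in range(m + 1)]
    tail = min(sum(pmf[: w + 1]), sum(pmf[w:]))       # smaller tail
    return w, min(1.0, 2 * tail)                      # two-sided p-value
```

For example, if all eight post-treatment values exceed their pre-treatment pairs, W = 8 and the exact two-sided p-value is 2/256 ≈ 0.0078.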


Wilcoxon Rank-sum
Wilcoxon is a surname, and may refer to: * Charles Wilcoxon, drum educator * Henry Wilcoxon, an actor * Frank Wilcoxon, chemist and statistician, inventor of two non-parametric tests for statistical significance: ** The Wilcoxon signed-rank test (also known as the Wilcoxon T test) ** The Wilcoxon rank-sum test (also known as the Mann–Whitney U test). See also: *Wilcox (surname). Notable people with the surname Wilcox include: * Adam Wilcox (other), multiple people * Albert Spencer Wilcox (1844–1919), businessman and politician in the Kingdom of Hawaii * Alex Wilcox, airline executive and entrepren ...


Lepage Test
In statistics, the Lepage test is an exactly distribution-free test (nonparametric test) for jointly monitoring the location (central tendency) and scale (variability) in two-sample treatment versus control comparisons. It is one of the most famous rank tests for the two-sample location-scale problem. The Lepage test statistic is the squared Euclidean distance of the standardized Wilcoxon rank-sum statistic for location and the standardized Ansari–Bradley statistic for scale. The Lepage test was first introduced by Yves Lepage in 1971 in a paper in ''Biometrika''. A large number of Lepage-type tests exist in the statistical literature for simultaneously testing location and scale shifts in case-control studies. The details may be found in the book ''Nonparametric Statistical Tests: A Computational Approach''. In 2006, W. Kössler also introduced various Lepage-type tests using some alternative score functions optimal for various distributions ...
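A sketch of the statistic, assuming no ties (the function name is illustrative): the Wilcoxon rank-sum and Ansari–Bradley statistics of one sample are standardized by their standard null means and variances, and the Lepage statistic is the sum of the two squared terms, asymptotically chi-squared with 2 degrees of freedom under the null.

```python
# Sketch of the Lepage statistic (no ties assumed): squared standardized
# Wilcoxon rank-sum (location) plus squared standardized Ansari-Bradley
# (scale), both computed on the first sample.
def lepage(x, y):
    n, m = len(x), len(y)
    N = n + m
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0] * N
    for r, (_, i) in enumerate(pooled, start=1):
        ranks[i] = r
    W = sum(ranks[:n])                              # Wilcoxon rank-sum for x
    C = sum(min(r, N + 1 - r) for r in ranks[:n])   # Ansari-Bradley for x
    ew, vw = n * (N + 1) / 2, n * m * (N + 1) / 12  # null moments of W
    if N % 2 == 0:                                  # null moments of C
        ec = n * (N + 2) / 4
        vc = n * m * (N + 2) * (N - 2) / (48 * (N - 1))
    else:
        ec = n * (N + 1) ** 2 / (4 * N)
        vc = n * m * (N + 1) * (3 + N ** 2) / (48 * N ** 2)
    return (W - ew) ** 2 / vw + (C - ec) ** 2 / vc
```

Being a sum of two squared standardized terms, the statistic is always nonnegative, and large values indicate a location shift, a scale shift, or both.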




Cucconi Test
In statistics, the Cucconi test is a nonparametric test for jointly comparing central tendency and variability (detecting location and scale changes) in two samples. Many rank tests have been proposed for the two-sample location-scale problem. Nearly all of them are Lepage-type tests, that is, a combination of a location test and a scale test. The Cucconi test was first proposed by Odoardo Cucconi in 1968. The Cucconi test is not as familiar as other location-scale tests, but it is of interest for several reasons. First, from a historical point of view, it was proposed some years before the Lepage test, the standard rank test for the two-sample location-scale problem. Secondly, as opposed to other location-scale tests, the Cucconi test is not a combination of a location test and a scale test. Thirdly, it compares favorably with Lepage-type tests in terms of power and type-one error probability and, very importantly, it is easier to compute because it requires only the ranks of one samp ...
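A sketch of the Cucconi statistic under the no-ties assumption (the function name is illustrative), using the standard formulas: U and V standardize the sums of squared ranks and squared contrary ranks of the second sample, and are combined using their null correlation ρ.

```python
# Sketch of the Cucconi statistic C (no ties assumed). Only the ranks of
# the second sample in the pooled sample are needed.
from math import sqrt

def cucconi(x, y):
    m, n = len(x), len(y)
    N = m + n
    pooled = sorted(x + y)
    S = [pooled.index(v) + 1 for v in y]     # ranks of the second sample
    denom = sqrt(m * n * (N + 1) * (2 * N + 1) * (8 * N + 11) / 5)
    U = (6 * sum(s ** 2 for s in S) - n * (N + 1) * (2 * N + 1)) / denom
    V = (6 * sum((N + 1 - s) ** 2 for s in S) - n * (N + 1) * (2 * N + 1)) / denom
    rho = 2 * (N ** 2 - 4) / ((2 * N + 1) * (8 * N + 11)) - 1  # corr(U, V)
    return (U ** 2 + V ** 2 - 2 * rho * U * V) / (2 * (1 - rho ** 2))
```

Large values of C indicate a location change, a scale change, or both; under the null, C is asymptotically distributed so that P(C ≥ c) tends to exp(−c).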



Statistical Charts And Diagrams
Statistics (from German: '' Statistik'', "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.Dodge, Y. (2006) ''The Oxford Dictionary of Statistical Terms'', Oxford University Press. When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An ex ...