Checking if a coin is fair
In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of statistical inference, including decision theory. The practical problem of checking whether a coin is fair might be considered as easily solved by performing a sufficiently large number of trials, but statistics and probability theory can provide guidance on two types of question: how many trials to undertake, and the accuracy of an estimate of the probability of turning up heads derived from a given sample of trials.

A fair coin is an idealized randomizing device with two states (usually named "heads" and "tails") which are equally likely to occur. It is based on the coin flip used widely in sports and other situations where it is required to give two parties the same chance of winning. Either a specially designed chip or, more usually, a simple currency coin is used, although the latter might be slightly "unfair" due to an asymmetrical weight distribution, which might cause one state to occur more frequently than the other, giving one party an unfair advantage. So it might be necessary to test experimentally whether the coin is in fact "fair" – that is, whether the probability of the coin's falling on either side when it is tossed is exactly 50%. It is of course impossible to rule out arbitrarily small deviations from fairness, such as might be expected to affect only one flip in a lifetime of flipping; also, it is always possible for an unfair (or "biased") coin to happen to turn up exactly 10 heads in 20 flips. Therefore, any fairness test can only establish a certain degree of confidence in a certain degree of fairness (a certain maximum bias). In more rigorous terminology, the problem is that of determining the parameters of a Bernoulli process, given only a limited sample of Bernoulli trials.


Preamble

This article describes experimental procedures for determining whether a coin is fair or unfair. There are many statistical methods for analyzing such an experimental procedure; this article illustrates two of them. Both methods prescribe an experiment (or trial) in which the coin is tossed many times and the result of each toss is recorded. The results can then be analysed statistically to decide whether the coin is "fair" or "probably not fair".

* Posterior probability density function, or PDF (Bayesian approach). Initially, the true probability of obtaining a particular side when a coin is tossed is unknown, but the uncertainty is represented by the "prior distribution". The theory of Bayesian inference is used to derive the posterior distribution by combining the prior distribution and the likelihood function, which represents the information obtained from the experiment. The probability that this particular coin is a "fair coin" can then be obtained by integrating the PDF of the posterior distribution over the interval that represents all the probabilities that can be counted as "fair" in a practical sense.
* Estimator of true probability (frequentist approach). This method assumes that the experimenter can decide to toss the coin any number of times. The experimenter first decides on the level of confidence required and the tolerable margin of error. These parameters determine the minimum number of tosses that must be performed to complete the experiment.

An important difference between these two approaches is that the first gives some weight to one's prior experience of tossing coins, while the second does not. The question of how much weight to give to prior experience, depending on its quality (credibility), is discussed under credibility theory.
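As a minimal sketch (not part of the original article), the experiment that both methods analyse can be simulated. The function name `toss_experiment` and the bias and seed values below are illustrative assumptions:

```python
import random

def toss_experiment(n_tosses, true_prob_heads=0.5, seed=0):
    """Simulate n_tosses flips of a coin whose (in practice unknown)
    probability of heads is true_prob_heads; return (heads, tails)."""
    rng = random.Random(seed)
    heads = sum(rng.random() < true_prob_heads for _ in range(n_tosses))
    return heads, n_tosses - heads

h, t = toss_experiment(100)
print(h, t)  # h + t == 100; the ratio h/100 estimates the true probability
```

Both methods below take the recorded counts ''h'' and ''t'' as their input.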


Posterior probability density function

One method is to calculate the posterior probability density function using Bayesian probability theory.

A test is performed by tossing the coin ''N'' times and noting the observed numbers of heads, ''h'', and tails, ''t''. The symbols ''H'' and ''T'' represent more generalised variables expressing the numbers of heads and tails respectively that ''might'' have been observed in the experiment. Thus ''N'' = ''H'' + ''T'' = ''h'' + ''t''. Next, let ''r'' be the actual probability of obtaining heads in a single toss of the coin. This is the property of the coin which is being investigated. Using Bayes' theorem, the posterior probability density of ''r'' conditional on ''h'' and ''t'' is expressed as follows:

: f(r \mid H = h, T = t) = \frac{\Pr(H = h \mid r, N = h + t) \, g(r)}{\int_0^1 \Pr(H = h \mid r, N = h + t) \, g(r) \, dr},

where ''g''(''r'') represents the prior probability density distribution of ''r'', which lies in the range 0 to 1. The prior probability density distribution summarizes what is known about the distribution of ''r'' in the absence of any observation. We will assume that the prior distribution of ''r'' is uniform over the interval [0, 1]; that is, ''g''(''r'') = 1. (In practice, it would be more appropriate to assume a prior distribution which is much more heavily weighted in the region around 0.5, to reflect our experience with real coins.)

The probability of obtaining ''h'' heads in ''N'' tosses of a coin with a probability of heads equal to ''r'' is given by the binomial distribution:

: \Pr(H = h \mid r, N = h + t) = \binom{N}{h} r^h (1 - r)^t.

Substituting this into the previous formula (the binomial coefficients appear in both numerator and denominator, so they cancel):

: f(r \mid H = h, T = t) = \frac{r^h (1 - r)^t}{\int_0^1 r^h (1 - r)^t \, dr}.

This is in fact a beta distribution (the conjugate prior for the binomial distribution), whose denominator can be expressed in terms of the beta function:

: f(r \mid H = h, T = t) = \frac{1}{\Beta(h + 1, t + 1)} r^h (1 - r)^t.

As a uniform prior distribution has been assumed, and because ''h'' and ''t'' are integers, this can also be written in terms of factorials:

: f(r \mid H = h, T = t) = \frac{(h + t + 1)!}{h! \, t!} r^h (1 - r)^t.
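The factorial form of the posterior can be evaluated directly. The following sketch (function names are illustrative assumptions) computes it and checks numerically that the density integrates to 1 over [0, 1]:

```python
from math import factorial

def posterior_pdf(r, h, t):
    """Posterior density of the heads probability r after observing h heads
    and t tails, assuming a uniform prior on [0, 1]: the factorial form
    (h + t + 1)! / (h! t!) * r^h * (1 - r)^t derived above."""
    coeff = factorial(h + t + 1) // (factorial(h) * factorial(t))
    return coeff * r**h * (1 - r)**t

# Sanity check with a simple midpoint rule: a density must integrate to 1.
steps = 50_000
total = sum(posterior_pdf((i + 0.5) / steps, 7, 3) for i in range(steps)) / steps
print(round(total, 4))  # 1.0
```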


Example

For example, let ''N'' = 10, ''h'' = 7, i.e. the coin is tossed 10 times and 7 heads are obtained:

: f(r \mid H = 7, T = 3) = \frac{(7 + 3 + 1)!}{7! \, 3!} r^7 (1 - r)^3 = 1320 \, r^7 (1 - r)^3.

The graph on the right shows the probability density function of ''r'' given that 7 heads were obtained in 10 tosses. (Note: ''r'' is the probability of obtaining heads when tossing the same coin once.) The probability for an unbiased coin (defined for this purpose as one whose probability of coming down heads is somewhere between 45% and 55%),

: \Pr(0.45 < r < 0.55) = \int_{0.45}^{0.55} f(p \mid H = 7, T = 3) \, dp \approx 13\%,

is small when compared with the alternative hypothesis (a biased coin). However, it is not small enough to cause us to believe that the coin has a significant bias. This probability is slightly ''higher'' than our presupposition of the probability that the coin was fair corresponding to the uniform prior distribution, which was 10%. Using a prior distribution that reflects our prior knowledge of what a coin is and how it acts, the posterior distribution would not favor the hypothesis of bias. However, the number of trials in this example (10 tosses) is very small, and with more trials the choice of prior distribution would be somewhat less relevant.

With the uniform prior, the posterior probability distribution ''f''(''r'' | ''H'' = 7, ''T'' = 3) achieves its peak at ''r'' = ''h'' / (''h'' + ''t'') = 0.7; this value is called the maximum ''a posteriori'' (MAP) estimate of ''r''. Also with the uniform prior, the expected value of ''r'' under the posterior distribution is

: \operatorname{E}[r] = \int_0^1 r \, f(r \mid H = 7, T = 3) \, \mathrm{d}r = \frac{h + 1}{h + t + 2} = \frac{8}{12} = \frac{2}{3}.
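The 13% figure can be checked numerically. A minimal sketch (illustrative, using a simple midpoint rule rather than any particular library):

```python
def posterior_pdf(r):
    """Posterior density for h = 7, t = 3 with a uniform prior."""
    return 1320 * r**7 * (1 - r)**3

def integrate(f, a, b, steps=50_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    width = (b - a) / steps
    return sum(f(a + (i + 0.5) * width) for i in range(steps)) * width

# Probability that the coin is "practically fair" (0.45 < r < 0.55).
p_fair = integrate(posterior_pdf, 0.45, 0.55)
print(round(p_fair, 2))  # 0.13, matching the ~13% quoted above
```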


Estimator of true probability

Using this approach, to decide the number of times the coin should be tossed, two parameters are required:

# The confidence level, expressed through the corresponding Z-value (''Z'')
# The maximum (acceptable) error (''E'')

* The confidence level is denoted by ''Z'' and is given by the Z-value of a standard normal distribution. This value can be read off a standard score statistics table for the normal distribution; for example, ''Z'' ≈ 1.6449 corresponds to 90% confidence and ''Z'' ≈ 1.9600 to 95% confidence.
* The maximum error ''E'' is defined by |''p'' − ''r''| < ''E'', where ''p'' is the estimated probability of obtaining heads. Note: ''r'' is the same actual probability (of obtaining heads) as in the previous section of this article.
* In statistics, the estimate of a proportion of a sample (denoted by ''p'') has a standard error given by:

: s_p = \sqrt{\frac{p(1 - p)}{n}},

where ''n'' is the number of trials (which was denoted by ''N'' in the previous section). This standard error, as a function of ''p'', has a maximum at ''p'' = (1 − ''p'') = 0.5. Further, in the case of a coin being tossed, it is likely that ''p'' will not be far from 0.5, so it is reasonable to take ''p'' = 0.5 in the following:

: s_p = \sqrt{\frac{0.5 \times 0.5}{n}} = \frac{1}{2\sqrt{n}}.

Hence the value of the maximum error (''E'') is given by

: E = Z s_p = \frac{Z}{2\sqrt{n}}.

Solving for the required number of coin tosses, ''n'',

: n = \frac{Z^2}{4 E^2}. \!


Examples
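As an illustrative worked example (the parameter values here are assumptions, not taken from the article): at 95% confidence, ''Z'' ≈ 1.96, so demanding a maximum error ''E'' = 0.01 requires ''n'' = 1.96² / (4 × 0.01²) ≈ 9604 tosses. A minimal sketch of the formula:

```python
def tosses_required(z, max_error):
    """Number of tosses n = Z^2 / (4 E^2) for confidence Z-value z and
    maximum acceptable error max_error; round up to a whole toss in practice."""
    return z**2 / (4 * max_error**2)

# Assumed inputs: Z = 1.96 (95% confidence), E = 0.01.
print(round(tosses_required(1.96, 0.01)))  # 9604
```

Note how the required sample size grows with the inverse square of the error bound: halving ''E'' quadruples ''n''.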


Other approaches

Other approaches to the question of checking whether a coin is fair are available using decision theory, whose application would require the formulation of a loss function or utility function which describes the consequences of making a given decision. An approach that avoids requiring either a loss function or a prior probability (as in the Bayesian approach) is that of "acceptance sampling" (Cox, D.R., Hinkley, D.V. (1974) ''Theoretical Statistics'', Example 11.7, Chapman & Hall).


Other applications

The above mathematical analysis for determining if a coin is fair can also be applied to other uses. For example:

* Determining the proportion of defective items for a product subjected to a particular (but well-defined) condition. Sometimes a product can be very difficult or expensive to produce; furthermore, if testing such products results in their destruction, a minimum number of items should be tested. Using a similar analysis, the probability density function of the product defect rate can be found.
* Two-party polling. If a small random sample poll is taken where there are only two mutually exclusive choices, then this is similar to tossing a single coin multiple times, using a possibly biased coin. A similar analysis can therefore be applied to determine the confidence to be ascribed to the actual ratio of votes cast. (If people are allowed to abstain, then the analysis must take account of that, and the coin-flip analogy doesn't quite hold.)
* Determining the sex ratio in a large group of an animal species. Provided that a small random sample (i.e. small in comparison with the total population) is taken when performing the random sampling of the population, the analysis is similar to determining the probability of obtaining heads in a coin toss.


See also

* Binomial test
* Coin flipping
* Confidence interval
* Estimation theory
* Inferential statistics
* Loaded dice
* Margin of error
* Point estimation
* Statistical randomness


References

* Guttman, Wilks, and Hunter: ''Introductory Engineering Statistics'', John Wiley & Sons, Inc. (1971)
* Devinder Sivia: ''Data Analysis, a Bayesian Tutorial'', Oxford University Press (1996) ISBN 0-19-851889-7