Geometric distribution

In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

* The probability distribution of the number X of Bernoulli trials needed to get one success, supported on \mathbb{N} = \{1, 2, 3, \ldots\};
* The probability distribution of the number Y = X - 1 of failures before the first success, supported on \mathbb{N}_0 = \{0, 1, 2, \ldots\}.

These two different geometric distributions should not be confused with each other. Often, the name ''shifted'' geometric distribution is adopted for the former (the distribution of X); however, to avoid ambiguity, it is considered wise to indicate which is intended by mentioning the support explicitly.

The geometric distribution gives the probability that the first occurrence of success requires k independent trials, each with success probability p. If the probability of success on each trial is p, then the probability that the k-th trial is the first success is

:\Pr(X = k) = (1-p)^{k-1} p

for k = 1, 2, 3, 4, \dots

This form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form is used for modeling the number of failures until the first success:

:\Pr(Y = k) = \Pr(X = k+1) = (1 - p)^k p

for k = 0, 1, 2, 3, \dots

The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.


Definition

The geometric distribution is the discrete probability distribution that describes when the first success occurs in an infinite sequence of independent and identically distributed Bernoulli trials. Its probability mass function depends on its parameterization and support. When supported on \mathbb{N}, the probability mass function is

:P(X = k) = (1 - p)^{k-1} p

where k = 1, 2, 3, \dotsc is the number of trials and p is the probability of success in each trial.

The support may also be \mathbb{N}_0, defining Y = X - 1. This alters the probability mass function into

:P(Y = k) = (1 - p)^k p

where k = 0, 1, 2, \dotsc is the number of failures before the first success.

An alternative parameterization of the distribution gives the probability mass function

:P(Y = k) = \left(\frac{P}{Q}\right)^k \left(1 - \frac{P}{Q}\right)

where P = \frac{1-p}{p} and Q = \frac{1}{p}.

An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a 1/6 chance of success. The number of rolls needed follows a geometric distribution with p = 1/6.
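
As a concrete illustration, a minimal Python sketch comparing the simulated die-rolling experiment against the probability mass function above; the helper names are illustrative choices:

```python
import random

def geom_pmf_trials(k: int, p: float) -> float:
    """P(X = k): probability that the k-th trial is the first success (k >= 1)."""
    return (1 - p) ** (k - 1) * p

def rolls_until_one() -> int:
    """Roll a fair six-sided die until a 1 appears; return the number of rolls."""
    rolls = 1
    while random.randint(1, 6) != 1:
        rolls += 1
    return rolls

# The simulated frequency of "first 1 on roll 3" should approach the PMF value.
p = 1 / 6
samples = [rolls_until_one() for _ in range(100_000)]
print(sum(1 for s in samples if s == 3) / len(samples))  # approximately 0.1157
print(geom_pmf_trials(3, p))                             # (5/6)^2 * (1/6) = 0.1157...
```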


Properties


Memorylessness

The geometric distribution is the only memoryless discrete probability distribution. It is the discrete version of the same property found in the exponential distribution. The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success.

Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables. Expressed in terms of conditional probability, the two definitions are

:\Pr(X > m + n \mid X > n) = \Pr(X > m),

and

:\Pr(Y > m + n \mid Y \geq n) = \Pr(Y > m),

where m and n are natural numbers, X is a geometrically distributed random variable defined over \mathbb{N}, and Y is a geometrically distributed random variable defined over \mathbb{N}_0. Note that these definitions are not equivalent for discrete random variables; Y does not satisfy the first equation and X does not satisfy the second.
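
A minimal numerical check of the first identity, using the closed-form survival function \Pr(X > n) = (1-p)^n, which follows from the PMF; the function name is an illustrative choice:

```python
def sf_trials(n: int, p: float) -> float:
    """P(X > n) for the trials form: all of the first n trials must fail."""
    return (1 - p) ** n

p, m, n = 0.3, 4, 7
lhs = sf_trials(m + n, p) / sf_trials(n, p)  # P(X > m+n | X > n)
rhs = sf_trials(m, p)                        # P(X > m)
print(lhs, rhs)  # equal up to floating-point rounding
```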


Moments and cumulants

The expected value and variance of a geometrically distributed random variable X defined over \mathbb{N} are

:\operatorname{E}(X) = \frac{1}{p}, \qquad \operatorname{Var}(X) = \frac{1-p}{p^2}.

With a geometrically distributed random variable Y defined over \mathbb{N}_0, the expected value changes into

:\operatorname{E}(Y) = \frac{1-p}{p},

while the variance stays the same. For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is \frac{1}{1/6} = 6 and the average number of failures is \frac{1 - 1/6}{1/6} = 5.

The moment generating function of the geometric distribution when defined over \mathbb{N} and \mathbb{N}_0 respectively is

:\begin{align} M_X(t) &= \frac{p e^t}{1 - (1-p) e^t}, \\ M_Y(t) &= \frac{p}{1 - (1-p) e^t}, \quad t < -\ln(1-p). \end{align}

The moments for the number of failures before the first success are given by

:\begin{align} \operatorname{E}(Y^n) & = \sum_{k=0}^\infty (1-p)^k p \cdot k^n \\ & = p \operatorname{Li}_{-n}(1-p) \quad (\text{for } n \neq 0), \end{align}

where \operatorname{Li}_{-n}(1-p) is the polylogarithm function.

The cumulant generating function of the geometric distribution defined over \mathbb{N}_0 is

:K(t) = \ln p - \ln\left(1 - (1-p) e^t\right).

The cumulants \kappa_r satisfy the recursion

:\kappa_{r+1} = q \frac{d\kappa_r}{dq}, \quad r = 1, 2, \dotsc,

where q = 1 - p, when defined over \mathbb{N}_0.
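
The mean and variance formulas can be checked by simulation. A short Python sketch with illustrative helper names:

```python
import random
import statistics

def sample_geom_trials(p: float) -> int:
    """Draw X: Bernoulli(p) trials up to and including the first success."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

p = 1 / 6
samples = [sample_geom_trials(p) for _ in range(200_000)]
print(statistics.mean(samples))      # close to 1/p = 6
print(statistics.variance(samples))  # close to (1-p)/p^2 = 30
```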


Proof of expected value

Consider the expected value \operatorname{E}(X) of X as above, i.e. the average number of trials until a success. The first trial either succeeds with probability p, or fails with probability 1-p. If it fails, the remaining mean number of trials until a success is identical to the original mean; this follows from the fact that all trials are independent. From this we get the formula

:\operatorname{E}(X) = p + (1-p)(1 + \operatorname{E}(X)),

which, when solved for \operatorname{E}(X), gives

:\operatorname{E}(X) = \frac{1}{p}.

The expected number of failures Y can be found from the linearity of expectation, \operatorname{E}(Y) = \operatorname{E}(X - 1) = \operatorname{E}(X) - 1 = \frac{1}{p} - 1 = \frac{1-p}{p}. It can also be shown in the following way:

:\begin{align} \operatorname{E}(Y) & = p \sum_{k=0}^\infty (1-p)^k k \\ & = p (1-p) \sum_{k=1}^\infty (1-p)^{k-1} k \\ & = p (1-p) \left(-\sum_{k=1}^\infty \frac{d}{dp}\left[(1-p)^k\right]\right) \\ & = p (1-p) \left[\frac{d}{dp}\left(-\sum_{k=1}^\infty (1-p)^k\right)\right] \\ & = p(1-p)\frac{d}{dp}\left(-\frac{1-p}{p}\right) \\ & = \frac{1-p}{p}. \end{align}

The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.


Summary statistics

The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, \frac{1}{p} or \frac{1-p}{p} when defined over \mathbb{N} or \mathbb{N}_0 respectively.

The median of the geometric distribution is \left\lceil -\frac{1}{\log_2(1-p)} \right\rceil when defined over \mathbb{N} and \left\lfloor -\frac{1}{\log_2(1-p)} \right\rfloor when defined over \mathbb{N}_0.

The mode of the geometric distribution is the first value in the support set. This is 1 when defined over \mathbb{N} and 0 when defined over \mathbb{N}_0.

The skewness of the geometric distribution is \frac{2-p}{\sqrt{1-p}}.

The kurtosis of the geometric distribution is 9 + \frac{p^2}{1-p}. The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, 3. Therefore, the excess kurtosis of the geometric distribution is 6 + \frac{p^2}{1-p}. Since \frac{p^2}{1-p} \geq 0, the excess kurtosis is always positive, so the distribution is leptokurtic. In other words, the tail of a geometric distribution decays more slowly than a Gaussian tail.
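
These closed forms translate directly into code. A small Python sketch collecting the summary statistics for the \mathbb{N}-supported distribution; the function name is an illustrative choice:

```python
import math

def geometric_summary(p: float) -> dict:
    """Closed-form summary statistics for X supported on {1, 2, 3, ...}."""
    return {
        "mean": 1 / p,
        "variance": (1 - p) / p**2,
        "median": math.ceil(-1 / math.log2(1 - p)),
        "mode": 1,
        "skewness": (2 - p) / math.sqrt(1 - p),
        "excess_kurtosis": 6 + p**2 / (1 - p),
    }

print(geometric_summary(1 / 6))  # e.g. mean 6.0, median 4, variance 30.0
```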


Entropy and Fisher information


Entropy (failures before the first success)

Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:

:P(Y = k) = (1 - p)^k p, \quad k = 0, 1, 2, \dots

The entropy H(Y) for this distribution is defined as:

:\begin{align} H(Y) &= -\sum_{k=0}^{\infty} P(Y = k) \ln P(Y = k) \\ &= -\sum_{k=0}^{\infty} (1 - p)^k p \ln\left((1 - p)^k p\right) \\ &= -\sum_{k=0}^{\infty} (1 - p)^k p \left[k \ln(1 - p) + \ln p\right] \\ &= -\ln p - \frac{1-p}{p} \ln(1 - p) \end{align}

The entropy increases as the probability p decreases, reflecting greater uncertainty as success becomes rarer.
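
A quick numerical sanity check of the closed form against a truncated direct summation (a sketch; the truncation at 500 terms is an assumed cutoff, negligible for moderate p):

```python
import math

def entropy_closed_form(p: float) -> float:
    """Entropy in nats of Y supported on {0, 1, 2, ...}."""
    return -math.log(p) - (1 - p) / p * math.log(1 - p)

def entropy_partial_sum(p: float, terms: int = 500) -> float:
    """Truncated direct evaluation of -sum_k P(Y=k) ln P(Y=k)."""
    total = 0.0
    for k in range(terms):
        pk = (1 - p) ** k * p
        total -= pk * math.log(pk)
    return total

p = 0.25
print(entropy_closed_form(p), entropy_partial_sum(p))  # agree to many decimals
```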


Fisher information (failures before the first success)

Fisher information measures the amount of information that an observable random variable Y carries about an unknown parameter p. For the geometric distribution (failures before the first success), the Fisher information with respect to p is given by:

:I(p) = \frac{1}{p^2 (1-p)}

Proof:
*The likelihood function for a geometric random variable Y is:
:L(p; Y) = (1 - p)^Y p
*The log-likelihood function is:
:\ln L(p; Y) = Y \ln(1 - p) + \ln p
*The score function (first derivative of the log-likelihood w.r.t. p) is:
:\frac{\partial}{\partial p} \ln L(p; Y) = \frac{1}{p} - \frac{Y}{1 - p}
*The second derivative of the log-likelihood function is:
:\frac{\partial^2}{\partial p^2} \ln L(p; Y) = -\frac{1}{p^2} - \frac{Y}{(1 - p)^2}
*Fisher information is calculated as the negative expected value of the second derivative; using \operatorname{E}(Y) = \frac{1-p}{p}:
:\begin{align} I(p) &= -\operatorname{E}\left[\frac{\partial^2}{\partial p^2} \ln L(p; Y)\right] \\ &= \frac{1}{p^2} + \frac{\operatorname{E}(Y)}{(1 - p)^2} = \frac{1}{p^2} + \frac{1}{p(1-p)} \\ &= \frac{1}{p^2 (1-p)} \end{align}

Fisher information increases as p decreases, indicating that rarer successes provide more information about the parameter p.
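
Since the Fisher information equals the variance of the score (which has mean zero), it can also be estimated by Monte Carlo. A sketch with illustrative helper names:

```python
import random

def score(y: int, p: float) -> float:
    """Score function: d/dp ln L(p; y) = 1/p - y/(1-p)."""
    return 1 / p - y / (1 - p)

def sample_failures(p: float) -> int:
    """Draw Y: number of failures before the first success."""
    k = 0
    while random.random() >= p:
        k += 1
    return k

p = 0.3
scores = [score(sample_failures(p), p) for _ in range(200_000)]
print(sum(s * s for s in scores) / len(scores))  # Monte Carlo estimate of I(p)
print(1 / (p**2 * (1 - p)))                      # closed form: about 15.87
```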


Entropy (trials until the first success)

For the geometric distribution modeling the number of trials until the first success, the probability mass function is:

:P(X = k) = (1 - p)^{k-1} p, \quad k = 1, 2, 3, \dots

The entropy H(X) for this distribution is given by:

:\begin{align} H(X) &= -\sum_{k=1}^{\infty} P(X = k) \ln P(X = k) \\ &= -\sum_{k=1}^{\infty} (1 - p)^{k-1} p \ln\left((1 - p)^{k-1} p\right) \\ &= -\sum_{k=1}^{\infty} (1 - p)^{k-1} p \left[(k - 1) \ln(1 - p) + \ln p\right] \\ &= -\ln p - \frac{1-p}{p} \ln(1 - p) \end{align}

This equals the entropy of the failures form, since entropy is invariant under the shift Y = X - 1. Entropy increases as p decreases, reflecting greater uncertainty as the probability of success in each trial becomes smaller.


Fisher information (trials until the first success)

Fisher information for the geometric distribution modeling the number of trials until the first success is given by:

:I(p) = \frac{1}{p^2 (1-p)}

Proof:
*The likelihood function for a geometric random variable X is:
:L(p; X) = (1 - p)^{X - 1} p
*The log-likelihood function is:
:\ln L(p; X) = (X - 1) \ln(1 - p) + \ln p
*The score function (first derivative of the log-likelihood w.r.t. p) is:
:\frac{\partial}{\partial p} \ln L(p; X) = \frac{1}{p} - \frac{X - 1}{1 - p}
*The second derivative of the log-likelihood function is:
:\frac{\partial^2}{\partial p^2} \ln L(p; X) = -\frac{1}{p^2} - \frac{X - 1}{(1 - p)^2}
*Fisher information is calculated as the negative expected value of the second derivative; using \operatorname{E}(X - 1) = \frac{1-p}{p}:
:\begin{align} I(p) &= -\operatorname{E}\left[\frac{\partial^2}{\partial p^2} \ln L(p; X)\right] \\ &= \frac{1}{p^2} + \frac{\operatorname{E}(X - 1)}{(1 - p)^2} = \frac{1}{p^2} + \frac{1}{p(1-p)} \\ &= \frac{1}{p^2 (1-p)} \end{align}


General properties

* The probability generating functions of geometric random variables X and Y defined over \mathbb{N} and \mathbb{N}_0 are, respectively,
::\begin{align} G_X(s) & = \frac{sp}{1 - s(1-p)}, \\ G_Y(s) & = \frac{p}{1 - s(1-p)}, \quad |s| < (1-p)^{-1}. \end{align}
:A numerical check of G_X appears after this list.
* The characteristic function \varphi(t) is equal to G(e^{it}), so the geometric distribution's characteristic function, when defined over \mathbb{N} and \mathbb{N}_0 respectively, is
::\begin{align} \varphi_X(t) &= \frac{p e^{it}}{1 - (1-p) e^{it}}, \\ \varphi_Y(t) &= \frac{p}{1 - (1-p) e^{it}}. \end{align}
* The entropy of a geometric distribution with parameter p is
::-\frac{p \ln p + (1-p) \ln (1-p)}{p}.
* Given a mean, the geometric distribution is the maximum entropy probability distribution among all discrete probability distributions with that mean. The corresponding continuous distribution is the exponential distribution.
* The geometric distribution defined on \mathbb{N}_0 is infinitely divisible, that is, for any positive integer n, there exist n independent identically distributed random variables whose sum is also geometrically distributed. This is because the negative binomial distribution can be derived from a Poisson-stopped sum of logarithmic random variables.
* The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and ''not'' identically distributed) random variables. For example, the hundreds digit D has this probability distribution:
::\Pr(D = d) = \frac{q^{100d}}{1 + q^{100} + q^{200} + \cdots + q^{900}},
:where q = 1 − p, and similarly for the other digits, and, more generally, similarly for numeral systems with other bases than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
* Golomb coding is the optimal prefix code for the geometric discrete distribution.
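
As referenced in the first bullet, a quick numerical check that the probability generating function G_X encodes the mean through its derivative at s = 1, using a one-sided finite difference (a sketch; all names are illustrative):

```python
p = 0.3

def pgf_X(s: float) -> float:
    """Probability generating function of X on N: G_X(s) = ps / (1 - s(1-p))."""
    return p * s / (1 - s * (1 - p))

h = 1e-7
mean_from_pgf = (pgf_X(1.0) - pgf_X(1.0 - h)) / h  # G_X'(1) = E(X)
print(mean_from_pgf, 1 / p)  # both approximately 3.333...
```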


Related distributions

* The sum of r independent geometric random variables with parameter p is a negative binomial random variable with parameters r and p. The geometric distribution is a special case of the negative binomial distribution, with r = 1.
* The geometric distribution is a special case of the discrete compound Poisson distribution.
* The minimum of n geometric random variables with parameters p_1, \dotsc, p_n is also geometrically distributed, with parameter 1 - \prod_{i=1}^n (1 - p_i) (see the simulation check after this list).
* Suppose 0 < r < 1, and for k = 1, 2, 3, \dots the random variable X_k has a Poisson distribution with expected value r^k/k. Then
::\sum_{k=1}^\infty k\,X_k
:has a geometric distribution taking values in \mathbb{N}_0, with expected value r/(1 − r).
* The exponential distribution is the continuous analogue of the geometric distribution. Applying the floor function to the exponential distribution with parameter \lambda creates a geometric distribution with parameter p = 1 - e^{-\lambda} defined over \mathbb{N}_0. This can be used to generate geometrically distributed random numbers as detailed in § Random variate generation.
* If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since
::\begin{align} \Pr(X/n > a) = \Pr(X > na) & = (1-p)^{na} = \left(1 - \frac{1}{n}\right)^{na} = \left[\left(1 - \frac{1}{n}\right)^n\right]^a \\ & \to \left(e^{-1}\right)^a = e^{-a} \text{ as } n \to \infty. \end{align}
:More generally, if p = \lambda/n, where \lambda is a parameter, then as n → ∞ the distribution of X/n approaches an exponential distribution with rate \lambda:
::\Pr(X > nx) = \lim_{n \to \infty} (1 - \lambda/n)^{nx} = e^{-\lambda x},
:therefore the distribution function of X/n converges to 1 - e^{-\lambda x}, which is that of an exponential random variable.
* The index of dispersion of the geometric distribution defined over \mathbb{N}_0 is \frac{1}{p} and its coefficient of variation is \frac{1}{\sqrt{1-p}}. The distribution is overdispersed.
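
As referenced in the minimum-of-geometrics bullet above, a short simulation check (a sketch with illustrative helper names and arbitrarily chosen parameters):

```python
import random
import statistics

def sample_geom_trials(p: float) -> int:
    """Trials-until-success draw on {1, 2, 3, ...}."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

ps = [0.2, 0.3, 0.5]
p_min = 1 - (1 - 0.2) * (1 - 0.3) * (1 - 0.5)  # = 0.72
mins = [min(sample_geom_trials(p) for p in ps) for _ in range(100_000)]
print(statistics.mean(mins), 1 / p_min)  # sample mean vs theoretical mean ~1.389
```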


Statistical inference

The true parameter p of an unknown geometric distribution can be inferred through estimators and conjugate distributions.


Method of moments

Provided they exist, the first l moments of a probability distribution can be estimated from a sample x_1, \dotsc, x_n using the formula

:m_i = \frac{1}{n} \sum_{j=1}^n x_j^i

where m_i is the ith sample moment and 1 \leq i \leq l. Estimating \operatorname{E}(X) with m_1 gives the sample mean, denoted \bar{x}. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for p gives the estimators \hat{p} = \frac{1}{\bar{x}} and \hat{p} = \frac{1}{\bar{x} + 1} when supported on \mathbb{N} and \mathbb{N}_0 respectively. These estimators are biased since \operatorname{E}\left(\frac{1}{\bar{x}}\right) > \frac{1}{\operatorname{E}(\bar{x})} = p as a result of Jensen's inequality.
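
A minimal method-of-moments sketch for a sample supported on \mathbb{N} (helper names are illustrative):

```python
import random

def sample_geom_trials(p: float) -> int:
    """Trials-until-success draw on {1, 2, 3, ...}."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

p_true = 0.25
xs = [sample_geom_trials(p_true) for _ in range(10_000)]
p_hat = 1 / (sum(xs) / len(xs))  # method-of-moments estimator: 1 / sample mean
print(p_hat)  # close to 0.25, with a slight upward bias (Jensen's inequality)
```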


Maximum likelihood estimation

The maximum likelihood estimator of p is the value that maximizes the likelihood function given a sample. By finding the zero of the derivative of the log-likelihood function when the distribution is defined over \mathbb{N}, the maximum likelihood estimator can be found to be \hat{p} = \frac{1}{\bar{x}}, where \bar{x} is the sample mean. If the domain is \mathbb{N}_0, then the estimator shifts to \hat{p} = \frac{1}{\bar{x} + 1}. As previously discussed in § Method of moments, these estimators are biased. Regardless of the domain, the bias is equal to

:b \equiv \operatorname{E}\left[\,\hat{p}_\mathrm{mle} - p\,\right] = \frac{p(1-p)}{n}

which yields the bias-corrected maximum likelihood estimator

:\hat{p}^*_\mathrm{mle} = \hat{p}_\mathrm{mle} - \hat{b}.
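
The bias correction can be applied by plugging the MLE itself into the bias formula. A sketch for \mathbb{N}-supported samples (names are illustrative):

```python
import random

def sample_geom_trials(p: float) -> int:
    """Trials-until-success draw on {1, 2, 3, ...}."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

p_true, n = 0.25, 500
xs = [sample_geom_trials(p_true) for _ in range(n)]
p_mle = 1 / (sum(xs) / n)        # MLE: reciprocal of the sample mean
b_hat = p_mle * (1 - p_mle) / n  # plug-in estimate of the bias p(1-p)/n
print(p_mle, p_mle - b_hat)      # raw and bias-corrected estimates
```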


Bayesian inference

In Bayesian inference, the parameter p is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples. If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution, and it is called the conjugate distribution. In particular, if a \mathrm{Beta}(\alpha, \beta) prior is selected, then the posterior, after observing samples k_1, \dotsc, k_n \in \mathbb{N}, is

:p \sim \mathrm{Beta}\left(\alpha + n,\ \beta + \sum_{i=1}^n (k_i - 1)\right).

Alternatively, if the samples are in \mathbb{N}_0, the posterior distribution is

:p \sim \mathrm{Beta}\left(\alpha + n,\ \beta + \sum_{i=1}^n k_i\right).

Since the expected value of a \mathrm{Beta}(\alpha, \beta) distribution is \frac{\alpha}{\alpha + \beta}, as \alpha and \beta approach zero, the posterior mean approaches its maximum likelihood estimate.
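
The conjugate update amounts to two additions. A minimal sketch for samples on \mathbb{N}, starting from a uniform \mathrm{Beta}(1, 1) prior (an assumed choice for illustration):

```python
import random

def sample_geom_trials(p: float) -> int:
    """Trials-until-success draw on {1, 2, 3, ...}."""
    k = 1
    while random.random() >= p:
        k += 1
    return k

alpha, beta = 1.0, 1.0  # Beta(1, 1) = uniform prior over p
p_true = 0.25
ks = [sample_geom_trials(p_true) for _ in range(200)]
alpha_post = alpha + len(ks)              # alpha + n
beta_post = beta + sum(k - 1 for k in ks)  # beta + sum(k_i - 1)
print(alpha_post / (alpha_post + beta_post))  # posterior mean, close to 0.25
```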


Random variate generation

The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable that is less than or equal to p. However, the number of random variables needed is also geometrically distributed, and the algorithm slows as p decreases. Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable E can become geometrically distributed with parameter p (supported on \mathbb{N}) through \lceil -E/\log(1-p) \rceil. In turn, E can be generated from a standard uniform random variable U, altering the formula into \lceil \log(U) / \log(1-p) \rceil.
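
A direct Python implementation of the constant-time formula; drawing U from (0, 1] to guard against \log(0) is an implementation detail, not from the text:

```python
import math
import random

def geometric_variate(p: float) -> int:
    """Constant-time draw of X on {1, 2, 3, ...} via a truncated exponential."""
    u = 1.0 - random.random()  # uniform on (0, 1], avoids log(0)
    return math.ceil(math.log(u) / math.log(1 - p))

samples = [geometric_variate(1 / 6) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1/p = 6
```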


Applications

The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution. In stochastic processes, the Yule–Furry process is geometrically distributed. The distribution also arises when modeling the lifetime of a device in discrete contexts. It has also been used to fit data, including modeling patients spreading COVID-19.


See also

* Hypergeometric distribution
* Coupon collector's problem
* Compound Poisson distribution
* Negative binomial distribution

