Conway–Maxwell–Poisson Distribution

In probability theory and statistics, the Conway–Maxwell–Poisson (CMP or COM–Poisson) distribution is a discrete probability distribution named after Richard W. Conway, William L. Maxwell, and Siméon Denis Poisson that generalizes the Poisson distribution by adding a parameter to model overdispersion and underdispersion. It is a member of the exponential family, has the Poisson distribution and geometric distribution as special cases, and has the Bernoulli distribution as a limiting case.


Background

The CMP distribution was originally proposed by Conway and Maxwell in 1962 as a solution for handling queueing systems with state-dependent service rates. It was introduced into the statistics literature by Boatwright et al. (2003) (Boatwright, P., Borle, S. and Kadane, J.B. "A model of the joint distribution of purchase quantity and timing." Journal of the American Statistical Association 98 (2003): 564–572) and Shmueli et al. (2005) (Shmueli G., Minka T., Kadane J.B., Borle S., and Boatwright, P.B. "A useful distribution for fitting discrete data: revival of the Conway–Maxwell–Poisson distribution." Journal of the Royal Statistical Society: Series C (Applied Statistics) 54.1 (2005): 127–142). The first detailed investigation into the probabilistic and statistical properties of the distribution was published by Shmueli et al. (2005). Some theoretical probability results for the COM–Poisson distribution, especially characterizations of the distribution, are studied and reviewed by Li et al. (2019) (Li B., Zhang H., Jiao H. "Some Characterizations and Properties of COM-Poisson Random Variables." Communications in Statistics – Theory and Methods (2019)).


Probability mass function and basic properties

The CMP distribution is defined to be the distribution with probability mass function

: P(X = x) = f(x; \lambda, \nu) = \frac{\lambda^x}{(x!)^\nu}\frac{1}{Z(\lambda,\nu)},

where

: Z(\lambda,\nu) = \sum_{j=0}^\infty \frac{\lambda^j}{(j!)^\nu}.

The function Z(\lambda,\nu) serves as a normalization constant so that the probability mass function sums to one. Note that Z(\lambda,\nu) does not have a closed form. The domain of admissible parameters is \lambda,\nu>0, or 0<\lambda<1, \nu=0. The additional parameter \nu, which does not appear in the Poisson distribution, allows for adjustment of the rate of decay. This rate of decay is a non-linear decrease in ratios of successive probabilities, specifically

: \frac{P(X = x-1)}{P(X = x)} = \frac{x^\nu}{\lambda}.

When \nu = 1, the CMP distribution becomes the standard Poisson distribution, and as \nu \to \infty, the distribution approaches a Bernoulli distribution with parameter \lambda/(1+\lambda). When \nu=0, the CMP distribution reduces to a geometric distribution with probability of success 1-\lambda, provided \lambda<1. For the CMP distribution, moments can be found through the recursive formula

: \operatorname{E}[X^{r+1}] = \begin{cases} \lambda \, \operatorname{E}\left[(X+1)^{1-\nu}\right] & \text{if } r = 0, \\ \lambda \, \frac{\partial}{\partial\lambda}\operatorname{E}[X^r] + \operatorname{E}[X]\operatorname{E}[X^r] & \text{if } r > 0. \end{cases}
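A minimal numerical sketch of the definitions above (the function names are mine, and the infinite series for Z(\lambda,\nu) is simply truncated, which is accurate for moderate \lambda and \nu>0):

```python
import math

def cmp_Z(lam, nu, terms=200):
    """Truncated series for the normalizing constant Z(lambda, nu)."""
    z, term = 0.0, 1.0
    for j in range(terms):
        z += term                  # term == lam**j / (j!)**nu
        term *= lam / (j + 1)**nu  # ratio of successive series terms
    return z

def cmp_pmf(x, lam, nu, terms=200):
    """P(X = x) for X ~ CMP(lambda, nu), computed on the log scale."""
    log_num = x * math.log(lam) - nu * math.lgamma(x + 1)
    return math.exp(log_num) / cmp_Z(lam, nu, terms)

# With nu = 1 the pmf reduces to the Poisson(lambda) pmf:
poisson = math.exp(-2.0) * 2.0**3 / math.factorial(3)
print(abs(cmp_pmf(3, 2.0, 1.0) - poisson) < 1e-9)  # True
```

Building the weights iteratively via the successive-probability ratio avoids overflowing the large factorials that appear in a naive evaluation of the series.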


Cumulative distribution function

For general \nu, there does not exist a closed form formula for the cumulative distribution function of X\sim\mathrm{CMP}(\lambda,\nu). If \nu\geq1 is an integer, we can, however, obtain the following formula in terms of the generalized hypergeometric function:Nadarajah, S. "Useful moment and CDF formulations for the COM–Poisson distribution." Statistical Papers 50 (2009): 617–622.

: F(n)=P(X\leq n)=1-\frac{\lambda^{n+1}\,{}_{1}F_{\nu}(1;n+2,\ldots,n+2;\lambda)}{((n+1)!)^{\nu}\,Z(\lambda,\nu)}.
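Whatever the value of \nu, the CDF can always be evaluated by direct summation of the pmf. As a sanity check, at \nu=1 it agrees with the Poisson CDF (a sketch with an arbitrary truncation point; the function name is mine):

```python
import math

def cmp_cdf(n, lam, nu, terms=200):
    """F(n) = P(X <= n) for X ~ CMP(lambda, nu), by direct summation."""
    # Build unnormalized weights w_j = lambda^j / (j!)^nu iteratively
    # to avoid overflowing large factorials.
    w, total, partial = 1.0, 0.0, 0.0
    for j in range(terms):
        total += w
        if j <= n:
            partial += w
        w *= lam / (j + 1)**nu
    return partial / total

# Sanity check: with nu = 1 this matches the Poisson(lambda) CDF.
lam = 3.0
poisson_cdf = math.exp(-lam) * sum(lam**k / math.factorial(k) for k in range(5))
print(abs(cmp_cdf(4, lam, 1.0) - poisson_cdf) < 1e-9)  # True
```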


The normalizing constant

Many important summary statistics, such as moments and cumulants, of the CMP distribution can be expressed in terms of the normalizing constant Z(\lambda,\nu). Indeed, the probability generating function is \operatorname{E}s^X=Z(s\lambda,\nu)/Z(\lambda,\nu), and the mean and variance are given by

: \operatorname{E}X=\lambda\frac{\mathrm{d}}{\mathrm{d}\lambda}\big\{\ln(Z(\lambda,\nu))\big\},
: \operatorname{Var}(X)=\lambda\frac{\mathrm{d}}{\mathrm{d}\lambda}\operatorname{E}X.

The cumulant generating function is

: g(t)=\ln(\operatorname{E}e^{tX})=\ln(Z(\lambda e^{t},\nu))-\ln(Z(\lambda,\nu)),

and the cumulants are given by

: \kappa_n=g^{(n)}(0)=\frac{\partial^n}{\partial t^n}\ln(Z(\lambda e^t,\nu)) \bigg|_{t=0}, \quad n\geq1.

Whilst the normalizing constant Z(\lambda,\nu)=\sum_{j=0}^\infty\frac{\lambda^j}{(j!)^\nu} does not in general have a closed form, there are some noteworthy special cases:
* Z(\lambda,1)=\mathrm{e}^{\lambda}
* Z(\lambda,0)=(1-\lambda)^{-1}, for 0<\lambda<1
* \lim_{\nu\rightarrow\infty}Z(\lambda,\nu)=1+\lambda
* Z(\lambda,2)=I_0(2\sqrt{\lambda}), where I_0(x)=\sum_{k=0}^\infty\frac{1}{(k!)^2}\big(\frac{x}{2}\big)^{2k} is a modified Bessel function of the first kind.
* For integer \nu, the normalizing constant can be expressed as a generalized hypergeometric function: Z(\lambda,\nu)={}_0F_{\nu-1}(;1,\ldots,1;\lambda).

Because the normalizing constant does not in general have a closed form, the following asymptotic expansion is of interest. Fix \nu>0. Then, as \lambda\rightarrow\infty,Gaunt, R.E., Iyengar, S., Olde Daalhuis, A.B. and Simsek, B. "An asymptotic expansion for the normalizing constant of the Conway–Maxwell–Poisson distribution." To appear in Annals of the Institute of Statistical Mathematics (2017+) DOI 10.1007/s10463-017-0629-6

: Z(\lambda,\nu)=\frac{\exp(\nu\lambda^{1/\nu})}{\lambda^{(\nu-1)/(2\nu)}(2\pi)^{(\nu-1)/2}\sqrt{\nu}}\sum_{k=0}^\infty c_k\big(\nu\lambda^{1/\nu}\big)^{-k},

where the c_j are uniquely determined by the expansion

: \left(\Gamma(t+1)\right)^{-\nu}=\frac{\nu^{\nu(t+1/2)}}{(2\pi)^{(\nu-1)/2}}\sum_{j=0}^\infty\frac{c_j}{\Gamma\left(\nu t+\frac{1+\nu}{2}+j\right)}.

In particular, c_0=1, c_1=\frac{\nu^2-1}{24}, c_2=\frac{\nu^2-1}{1152}\left(\nu^2+23\right). Further coefficients are given in the above reference.
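The closed-form special cases can be verified against a direct truncated summation of the series (a sketch; `cmp_Z` is my own helper, and I_0 is evaluated from its own series so only the standard library is needed):

```python
import math

def cmp_Z(lam, nu, terms=400):
    """Truncated series for the normalizing constant Z(lambda, nu)."""
    z, term = 0.0, 1.0
    for j in range(terms):
        z += term
        term *= lam / (j + 1)**nu
    return z

lam = 1.5
# Z(lambda, 1) = e^lambda
print(abs(cmp_Z(lam, 1.0) - math.exp(lam)) < 1e-9)   # True
# Z(lambda, 0) = 1/(1 - lambda) for 0 < lambda < 1
print(abs(cmp_Z(0.5, 0.0) - 2.0) < 1e-9)             # True
# Z(lambda, 2) = I_0(2 sqrt(lambda)); note I_0(2 sqrt(lam)) = sum lam^k/(k!)^2
I0 = sum(lam**k / math.factorial(k)**2 for k in range(50))
print(abs(cmp_Z(lam, 2.0) - I0) < 1e-9)              # True
```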


Moments, cumulants and related results

For general values of \nu, there do not exist closed form formulas for the mean, variance and moments of the CMP distribution. We do, however, have the following neat formula. Let (j)_r=j(j-1)\cdots(j-r+1) denote the falling factorial. Let X\sim\mathrm{CMP}(\lambda,\nu), \lambda,\nu>0. Then

: \operatorname{E}\big[((X)_r)^\nu\big]=\lambda^r,

for r\in\mathbb{N}. Since in general closed form formulas are not available for moments and cumulants of the CMP distribution, the following asymptotic formulas are of interest. Let X\sim \mathrm{CMP}(\lambda,\nu), where \nu>0. Denote the skewness \gamma_1=\frac{\kappa_3}{\sigma^3} and excess kurtosis \gamma_2=\frac{\kappa_4}{\sigma^4}, where \sigma^2=\operatorname{Var}(X). Then, as \lambda\rightarrow\infty,

: \operatorname{E}X= \lambda^{1/\nu}\left(1-\frac{\nu-1}{2\nu}\lambda^{-1/\nu}+\mathcal{O}(\lambda^{-2/\nu})\right),
: \operatorname{Var}(X)= \frac{\lambda^{1/\nu}}{\nu}\left(1+\mathcal{O}(\lambda^{-2/\nu})\right),
: \kappa_n= \frac{\lambda^{1/\nu}}{\nu^{n-1}}\left(1+\mathcal{O}(\lambda^{-2/\nu})\right),
: \gamma_1= \frac{\lambda^{-1/(2\nu)}}{\sqrt{\nu}}\left(1+\mathcal{O}(\lambda^{-2/\nu})\right),
: \gamma_2= \frac{\lambda^{-1/\nu}}{\nu}\left(1+\mathcal{O}(\lambda^{-2/\nu})\right),
: \operatorname{E}X^n= \lambda^{n/\nu}\left(1+\frac{n(n-\nu)}{2\nu}\lambda^{-1/\nu}+a_2\lambda^{-2/\nu}+\mathcal{O}(\lambda^{-3/\nu})\right),

where the higher-order coefficients in these expansions, including an explicit expression for a_2, are given by Gaunt et al. The asymptotic series for \kappa_n holds for all n\geq2, and \kappa_1=\operatorname{E}X.
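The identity \operatorname{E}[((X)_r)^\nu]=\lambda^r is easy to verify numerically (a sketch with truncated sums; the helper names are mine):

```python
import math

def falling(x, r):
    """Falling factorial (x)_r = x(x-1)...(x-r+1)."""
    out = 1
    for i in range(r):
        out *= x - i
    return out

def cmp_weight(j, lam, nu):
    """Unnormalized CMP weight lambda^j / (j!)^nu, on the log scale."""
    return math.exp(j * math.log(lam) - nu * math.lgamma(j + 1))

lam, nu, r = 2.0, 1.7, 2
terms = 400
Z = sum(cmp_weight(j, lam, nu) for j in range(terms))
moment = sum(falling(j, r)**nu * cmp_weight(j, lam, nu)
             for j in range(terms)) / Z
print(abs(moment - lam**r) < 1e-8)  # True: E[((X)_r)^nu] = lambda^r
```

The identity holds exactly because ((x)_r)^\nu \lambda^x/(x!)^\nu = \lambda^x/((x-r)!)^\nu, so the sum over x\geq r telescopes back to \lambda^r Z(\lambda,\nu).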


Moments for the case of integer \nu

When \nu is an integer, explicit formulas for moments can be obtained. The case \nu=1 corresponds to the Poisson distribution. Suppose now that \nu=2. For m\in\mathbb{N},

: \operatorname{E}[(X)_m]=\frac{\lambda^{m/2}I_m(2\sqrt{\lambda})}{I_0(2\sqrt{\lambda})},

where I_r(x) is the modified Bessel function of the first kind. Using the connecting formula for moments and factorial moments gives

: \operatorname{E}X^m=\sum_{k=0}^m\left\{{m\atop k}\right\}\frac{\lambda^{k/2}I_k(2\sqrt{\lambda})}{I_0(2\sqrt{\lambda})},

where \left\{{m\atop k}\right\} denotes the Stirling numbers of the second kind. In particular, the mean of X is given by

: \operatorname{E}X=\frac{\sqrt{\lambda}\,I_1(2\sqrt{\lambda})}{I_0(2\sqrt{\lambda})}.

Also, since \operatorname{E}X^2=\lambda, the variance is given by

: \operatorname{Var}(X)=\lambda\left(1-\frac{I_1(2\sqrt{\lambda})^2}{I_0(2\sqrt{\lambda})^2}\right).

Suppose now that \nu\geq1 is an integer. Then

: \operatorname{E}[(X)_m]=\frac{\lambda^m}{(m!)^{\nu-1}}\, \frac{{}_0F_{\nu-1}(;m+1,\ldots,m+1;\lambda)}{{}_0F_{\nu-1}(;1,\ldots,1;\lambda)}.

In particular,

: \operatorname{E}X = \lambda\, \frac{{}_0F_{\nu-1}(;2,\ldots,2;\lambda)}{{}_0F_{\nu-1}(;1,\ldots,1;\lambda)},

and

: \operatorname{Var}(X)=\frac{\lambda^2}{2^{\nu-1}}\, \frac{{}_0F_{\nu-1}(;3,\ldots,3;\lambda)}{{}_0F_{\nu-1}(;1,\ldots,1;\lambda)}+\operatorname{E}X-(\operatorname{E}X)^2.
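For \nu=2, the Bessel-function formula for the mean can be checked against direct summation of the series. In this sketch I_m is evaluated from its own series, so only the standard library is needed:

```python
import math

def bessel_I(m, x, terms=60):
    """Series for the modified Bessel function I_m(x) of the first kind."""
    return sum((x / 2)**(2*k + m) / (math.factorial(k) * math.factorial(k + m))
               for k in range(terms))

lam = 2.5
# Direct mean of CMP(lambda, 2) from the defining series ...
Z = sum(lam**j / math.factorial(j)**2 for j in range(60))
mean_direct = sum(j * lam**j / math.factorial(j)**2 for j in range(60)) / Z
# ... versus the closed form sqrt(lambda) I_1(2 sqrt(lambda)) / I_0(2 sqrt(lambda))
mean_bessel = (math.sqrt(lam) * bessel_I(1, 2 * math.sqrt(lam))
               / bessel_I(0, 2 * math.sqrt(lam)))
print(abs(mean_direct - mean_bessel) < 1e-9)  # True
```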


Median, mode and mean deviation

Let X\sim\mathrm{CMP}(\lambda,\nu). Then the mode of X is \lfloor\lambda^{1/\nu}\rfloor if \lambda^{1/\nu} is not an integer. Otherwise, the modes of X are \lambda^{1/\nu} and \lambda^{1/\nu}-1. The mean deviation of X^\nu about its mean \lambda is given byDaly, F. and Gaunt, R.E. "The Conway–Maxwell–Poisson distribution: distributional theory and approximation." ALEA Latin American Journal of Probability and Mathematical Statistics 13 (2016): 635–658.

: \operatorname{E}|X^\nu-\lambda| = \frac{2\lambda^{\lfloor\lambda^{1/\nu}\rfloor+1}}{\left(\lfloor\lambda^{1/\nu}\rfloor!\right)^{\nu}Z(\lambda,\nu)}.

No explicit formula is known for the median of X, but the following asymptotic result is available. Let m be the median of X\sim\mathrm{CMP}(\lambda,\nu). Then

: m=\lambda^{1/\nu}+\mathcal{O}\left(\lambda^{1/(2\nu)}\right),

as \lambda\rightarrow\infty.
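The mode formula can be checked by locating the largest weight \lambda^j/(j!)^\nu directly (a sketch; for the parameters chosen here \lambda^{1/\nu} is not an integer):

```python
import math

def cmp_weights(lam, nu, terms=100):
    """Unnormalized CMP weights lambda^j / (j!)^nu, built iteratively."""
    w, ws = 1.0, []
    for j in range(terms):
        ws.append(w)
        w *= lam / (j + 1)**nu
    return ws

lam, nu = 4.0, 1.3
ws = cmp_weights(lam, nu)
mode = max(range(len(ws)), key=lambda j: ws[j])
# lambda^(1/nu) ~ 2.9, which is not an integer, so the mode is its floor:
print(mode == math.floor(lam**(1 / nu)))  # True
```

The weights increase while the successive-probability ratio \lambda/x^\nu exceeds one, i.e. while x<\lambda^{1/\nu}, which is exactly why the mode sits at \lfloor\lambda^{1/\nu}\rfloor.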


Stein characterisation

Let X\sim\mathrm{CMP}(\lambda,\nu), and suppose that f:\mathbb{Z}^+\mapsto\mathbb{R} is such that \operatorname{E}|f(X+1)|<\infty and \operatorname{E}|X^\nu f(X)|<\infty. Then

: \operatorname{E}[\lambda f(X+1)-X^\nu f(X)]=0.

Conversely, suppose now that W is a real-valued random variable supported on \mathbb{Z}^+ such that \operatorname{E}[\lambda f(W+1)-W^\nu f(W)]=0 for all bounded f:\mathbb{Z}^+\mapsto\mathbb{R}. Then W\sim \mathrm{CMP}(\lambda,\nu).
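The forward direction of the Stein characterisation is straightforward to verify numerically for a bounded test function (a sketch using truncated sums; the particular f below is an arbitrary choice of mine):

```python
import math

lam, nu = 2.0, 1.5
terms = 200

# Unnormalized weights lambda^j / (j!)^nu, built iteratively.
w, ws = 1.0, []
for j in range(terms):
    ws.append(w)
    w *= lam / (j + 1)**nu
Z = sum(ws)

def expect(g):
    """E g(X) for X ~ CMP(lambda, nu), by truncated summation."""
    return sum(g(j) * ws[j] for j in range(terms)) / Z

# Stein identity with the bounded test function f(x) = 1/(x + 1):
f = lambda x: 1.0 / (x + 1)
lhs = expect(lambda x: lam * f(x + 1) - x**nu * f(x))
print(abs(lhs) < 1e-9)  # True: E[lambda f(X+1) - X^nu f(X)] = 0
```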


Use as a limiting distribution

Let Y_n have the Conway–Maxwell–binomial distribution with parameters n, p=\lambda/n^\nu and \nu. Fix \lambda>0 and \nu>0. Then Y_n converges in distribution to the \mathrm{CMP}(\lambda,\nu) distribution as n\rightarrow\infty. This result generalises the classical Poisson approximation of the binomial distribution. More generally, the CMP distribution arises as a limiting distribution of the Conway–Maxwell–Poisson binomial distribution. Beyond the fact that the COM–binomial distribution approximates the COM–Poisson, Zhang et al. (2018) (Zhang H., Tan K., Li B. "COM-negative binomial distribution: modeling overdispersion and ultrahigh zero-inflated count data." Frontiers of Mathematics in China, 2018, 13(4): 967–998) show that the COM–negative binomial distribution with probability mass function

: P(X = k) = \frac{\binom{r+k-1}{k}^{\nu}\,p^{k}}{\sum_{j=0}^{\infty}\binom{r+j-1}{j}^{\nu}\,p^{j}},\quad k = 0,1,2,\ldots,

converges to a limiting distribution, which is the COM–Poisson, as r\rightarrow\infty.


Related distributions

* If X\sim\mathrm{CMP}(\lambda,1), then X follows the Poisson distribution with parameter \lambda.
* Suppose \lambda<1. Then if X\sim\mathrm{CMP}(\lambda,0), we have that X follows the geometric distribution with probability mass function P(X=k)=\lambda^k(1-\lambda), k\geq0.
* The sequence of random variables X_\nu\sim\mathrm{CMP}(\lambda,\nu) converges in distribution as \nu\rightarrow\infty to the Bernoulli distribution with mean \lambda(1+\lambda)^{-1}.


Parameter estimation

There are a few methods of estimating the parameters of the CMP distribution from the data. Two methods will be discussed: weighted least squares and maximum likelihood. The weighted least squares approach is simple and efficient but lacks precision. Maximum likelihood, on the other hand, is precise, but is more complex and computationally intensive.


Weighted least squares

Weighted least squares provides a simple, efficient method to derive rough estimates of the parameters of the CMP distribution and to determine whether the distribution would be an appropriate model. Following the use of this method, an alternative method should be employed to compute more accurate estimates of the parameters if the model is deemed appropriate. This method uses the relationship of successive probabilities discussed above. Taking logarithms of both sides of that equation gives the linear relationship

: \log \frac{p_{x-1}}{p_x} = - \log \lambda + \nu \log x,

where p_x denotes \Pr(X = x). When estimating the parameters, the probabilities can be replaced by the relative frequencies of x and x-1. To determine whether the CMP distribution is an appropriate model, these values should be plotted against \log x for all ratios without zero counts. If the data appear to be linear, then the model is likely to be a good fit. Once the appropriateness of the model is determined, the parameters can be estimated by fitting a regression of \log (\hat p_{x-1} / \hat p_x) on \log x. However, the basic assumption of homoscedasticity is violated, so a weighted least squares regression must be used. The inverse weight matrix has the variances of each ratio on the diagonal and the one-step covariances on the first off-diagonal, both given below:

: \operatorname{Var}\left[\log \frac{\hat p_{x-1}}{\hat p_x}\right] \approx \frac{1}{np_x} + \frac{1}{np_{x-1}},
: \operatorname{cov}\left(\log \frac{\hat p_{x-1}}{\hat p_x}, \log \frac{\hat p_x}{\hat p_{x+1}} \right) \approx - \frac{1}{np_x}.
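The procedure above can be sketched as follows. For brevity this illustration uses only the diagonal of the weight matrix (the inverse variances), ignoring the one-step covariances, so it is a simplified version of the full weighted least squares fit; all function names are mine:

```python
import math

def cmp_wls_estimate(counts):
    """Rough (lambda, nu) estimates from observed counts {value: frequency}.

    Simplified sketch: weights are the inverse of 1/(n p_x) + 1/(n p_{x-1})
    only; the off-diagonal covariances of the full weight matrix are ignored.
    """
    n = sum(counts.values())
    xs, ys, ws = [], [], []
    for x in sorted(counts):
        if x >= 1 and (x - 1) in counts:  # need both adjacent counts non-zero
            p_x, p_xm1 = counts[x] / n, counts[x - 1] / n
            xs.append(math.log(x))
            ys.append(math.log(p_xm1 / p_x))
            ws.append(1.0 / (1.0 / (n * p_x) + 1.0 / (n * p_xm1)))
    # Weighted simple linear regression y = a + b*t: slope b estimates nu,
    # intercept a estimates -log(lambda).
    W = sum(ws)
    tbar = sum(w * t for w, t in zip(ws, xs)) / W
    ybar = sum(w * y for w, y in zip(ws, ys)) / W
    b = (sum(w * (t - tbar) * (y - ybar) for w, t, y in zip(ws, xs, ys))
         / sum(w * (t - tbar)**2 for w, t in zip(ws, xs)))
    a = ybar - b * tbar
    return math.exp(-a), b  # (lambda_hat, nu_hat)

# Demo on exact Poisson(2) frequencies (a CMP with nu = 1): the log-ratio
# relationship is exactly linear, so the fit recovers the true parameters.
freqs = {x: math.exp(-2.0) * 2.0**x / math.factorial(x) for x in range(10)}
print(cmp_wls_estimate(freqs))  # ~ (2.0, 1.0)
```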


Maximum likelihood

The CMP likelihood function is

: \mathcal{L}(\lambda,\nu\mid x_1,\dots,x_n) = \lambda^{S_1} \exp(-\nu S_2)\, Z^{-n}(\lambda, \nu),

where S_1 = \sum_{i=1}^n x_i and S_2 = \sum_{i=1}^n \log x_i!. Maximizing the likelihood yields the following two equations

: \operatorname{E}X = \bar X,
: \operatorname{E}[\log X!] = \overline{\log X!},

which do not have an analytic solution. Instead, the maximum likelihood estimates are approximated numerically by the Newton–Raphson method. In each iteration, the expectations, variances, and covariance of X and \log X! are approximated by using the estimates for \lambda and \nu from the previous iteration in the expression

: \operatorname{E}f(X) = \sum_{j=0}^\infty f(j) \frac{\lambda^j}{(j!)^\nu Z(\lambda, \nu)}.

This is continued until convergence of \hat\lambda and \hat\nu.
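A simplified sketch of the first likelihood equation: with \nu held fixed, \operatorname{E}X=\bar X can be solved for \lambda by bisection. The full scheme iterates Newton–Raphson in both parameters simultaneously; this fragment only illustrates the moment-matching step, and the helper names are mine:

```python
import math

def cmp_expectations(lam, nu, terms=200):
    """E X and E log X! for X ~ CMP(lambda, nu), by truncated summation."""
    w, Z, ex, elf = 1.0, 0.0, 0.0, 0.0
    for j in range(terms):
        Z += w
        ex += j * w
        elf += math.lgamma(j + 1) * w  # log j! = lgamma(j + 1)
        w *= lam / (j + 1)**nu
    return ex / Z, elf / Z

def solve_lambda(target_mean, nu, lo=1e-6, hi=50.0, iters=80):
    """Solve E X = sample mean in lambda (nu held fixed) by bisection.

    E X is increasing in lambda, so bisection is safe on a bracketing
    interval.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if cmp_expectations(mid, nu)[0] < target_mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# In the Poisson case nu = 1 we have E X = lambda, so the solution of the
# first likelihood equation is simply the sample mean:
data = [0, 1, 1, 2, 3, 5]
lam_hat = solve_lambda(sum(data) / len(data), 1.0)
print(abs(lam_hat - 2.0) < 1e-6)  # True
```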


Generalized linear model

The basic CMP distribution discussed above has also been used as the basis for a generalized linear model (GLM) using a Bayesian formulation. A dual-link GLM based on the CMP distribution has been developed (Guikema, S.D. and J.P. Coffelt (2008) "A Flexible Count Data Regression Model for Risk Analysis", ''Risk Analysis'', 28 (1), 213–223), and this model has been used to evaluate traffic accident data (Lord, D., S.D. Guikema, and S.R. Geedipally (2008) "Application of the Conway–Maxwell–Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes", ''Accident Analysis & Prevention'', 40 (3), 1123–1134; Lord, D., S.R. Geedipally, and S.D. Guikema (2010) "Extension of the Application of Conway–Maxwell–Poisson Models: Analyzing Traffic Crash Data Exhibiting Under-Dispersion", ''Risk Analysis'', 30 (8), 1268–1276). The CMP GLM developed by Guikema and Coffelt (2008) is based on a reformulation of the CMP distribution above, replacing \lambda with \mu=\lambda^{1/\nu}. The integral part of \mu is then the mode of the distribution. A full Bayesian estimation approach has been used, with MCMC sampling implemented in WinBUGS and non-informative priors for the regression parameters. This approach is computationally expensive, but it yields the full posterior distributions for the regression parameters and allows expert knowledge to be incorporated through the use of informative priors.

A classical GLM formulation for a CMP regression has also been developed which generalizes Poisson regression and logistic regression (Sellers, K. S. and Shmueli, G. (2010) "A Flexible Regression Model for Count Data", ''Annals of Applied Statistics'', 4 (2), 943–961). This takes advantage of the exponential family properties of the CMP distribution to obtain elegant model estimation (via maximum likelihood), inference, diagnostics, and interpretation. This approach requires substantially less computational time than the Bayesian approach, at the cost of not allowing expert knowledge to be incorporated into the model. In addition, it yields standard errors for the regression parameters (via the Fisher information matrix), as opposed to the full posterior distributions obtainable via the Bayesian formulation. It also provides a statistical test for the level of dispersion compared to a Poisson model. Code for fitting a CMP regression, testing for dispersion, and evaluating fit is available (code for COM–Poisson modelling, Georgetown Univ.).

The two GLM frameworks developed for the CMP distribution significantly extend the usefulness of this distribution for data analysis problems.


References


External links



* Conway–Maxwell–Poisson distribution package for R (compoisson) by Tom Minka, third-party package: http://alumni.media.mit.edu/~tpminka/software/compoisson