Von Mises distribution
In probability theory and directional statistics, the von Mises distribution (also known as the circular normal distribution or Tikhonov distribution) is a continuous probability distribution on the circle. It is a close approximation to the wrapped normal distribution, which is the circular analogue of the normal distribution. A freely diffusing angle \theta on a circle is a wrapped normally distributed random variable with an unwrapped variance that grows linearly in time. The von Mises distribution, by contrast, is the stationary distribution of a drift-and-diffusion process on the circle in a harmonic potential, i.e. with a preferred orientation. The von Mises distribution is the maximum entropy distribution for circular data when the real and imaginary parts of the first circular moment are specified. It is a special case of the von Mises–Fisher distribution on the ''N''-dimensional sphere.


Definition

The von Mises probability density function for the angle ''x'' is given by:

:f(x\mid\mu,\kappa)=\frac{e^{\kappa\cos(x-\mu)}}{2\pi I_0(\kappa)}

where I_0(\kappa) is the modified Bessel function of the first kind of order 0, with this scaling constant chosen so that the distribution integrates to unity:

:\int_{-\pi}^{\pi} \exp(\kappa\cos x)\,dx = 2\pi I_0(\kappa).

The parameters ''μ'' and 1/\kappa are analogous to ''μ'' and ''σ''² (the mean and variance) in the normal distribution:
* ''μ'' is a measure of location (the distribution is clustered around ''μ''), and
* \kappa is a measure of concentration (a reciprocal measure of dispersion, so 1/\kappa is analogous to ''σ''²).
** If \kappa is zero, the distribution is uniform, and for small \kappa, it is close to uniform.
** If \kappa is large, the distribution becomes very concentrated about the angle ''μ''. In fact, as \kappa increases, the distribution approaches a normal distribution in ''x'' with mean ''μ'' and variance 1/\kappa.

The probability density can be expressed as a series of Bessel functions:

:f(x\mid\mu,\kappa) = \frac{1}{2\pi}\left(1+\frac{2}{I_0(\kappa)} \sum_{j=1}^\infty I_j(\kappa) \cos(j(x-\mu))\right)

where I_j(\kappa) is the modified Bessel function of the first kind of order ''j''. The cumulative distribution function is not analytic and is best found by integrating the above series. The indefinite integral of the probability density is:

:\Phi(x\mid\mu,\kappa)=\int f(t\mid\mu,\kappa)\,dt =\frac{1}{2\pi}\left(x + \frac{2}{I_0(\kappa)} \sum_{j=1}^\infty I_j(\kappa) \frac{\sin(j(x-\mu))}{j}\right).

The cumulative distribution function is then a function of the lower limit of integration ''x''_0:

:F(x\mid\mu,\kappa)=\Phi(x\mid\mu,\kappa)-\Phi(x_0\mid\mu,\kappa).
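The density and its normalization can be checked numerically. The sketch below is illustrative and uses only the Python standard library; `bessel_i` (a truncated power series for the modified Bessel function) and `vonmises_pdf` are hand-rolled helpers introduced here for self-containment, not part of any named API.

```python
import math

def bessel_i(n, x, terms=60):
    # Modified Bessel function of the first kind, I_n(x),
    # from its power series: sum_k (x/2)^(2k+n) / (k! (k+n)!).
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def vonmises_pdf(x, mu, kappa):
    # f(x | mu, kappa) = exp(kappa cos(x - mu)) / (2 pi I_0(kappa))
    return math.exp(kappa * math.cos(x - mu)) / (2 * math.pi * bessel_i(0, kappa))

# Midpoint-rule check that the density integrates to 1 over one period.
n = 2000
h = 2 * math.pi / n
total = h * sum(vonmises_pdf(-math.pi + (i + 0.5) * h, mu=0.5, kappa=2.0)
                for i in range(n))
```

Because the integrand is smooth and periodic, the midpoint rule over a full period converges extremely fast, so `total` agrees with 1 to near machine precision.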


Moments

The moments of the von Mises distribution are usually calculated as the moments of the complex exponential ''z'' = ''e''^{ix} rather than the angle ''x'' itself. These moments are referred to as ''circular moments''. The variance calculated from these moments is referred to as the ''circular variance''. The one exception is that the "mean" usually refers to the argument of the complex mean.

The ''n''th raw moment of ''z'' is:

:m_n=\langle z^n\rangle=\int_\Gamma z^n\,f(x\mid\mu,\kappa)\,dx = \frac{I_n(\kappa)}{I_0(\kappa)}e^{in\mu}

where the integral is over any interval \Gamma of length 2\pi. In calculating this integral, we use the fact that ''z''^n = \cos(nx) + i\sin(nx) and the Bessel function identity (see Abramowitz and Stegun, §9.6.19):

:I_n(\kappa)=\frac{1}{\pi}\int_0^\pi e^{\kappa\cos x}\cos(nx)\,dx.

The mean of the complex exponential ''z'' is then just

:m_1= \frac{I_1(\kappa)}{I_0(\kappa)}e^{i\mu}

and the ''circular mean'' value of the angle ''x'' is then taken to be the argument ''μ''. This is the expected or preferred direction of the angular random variable. The variance of ''z'', or the circular variance of ''x'', is:

:\textrm{var}(x)= 1-E[\cos(x-\mu)] = 1-\frac{I_1(\kappa)}{I_0(\kappa)}.
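The moment formula can be verified by quadrature: integrating e^{ix} against the density should reproduce (I_1(\kappa)/I_0(\kappa))e^{i\mu}. A standard-library Python sketch, where `bessel_i` is again a hand-rolled power-series helper and the parameter values are arbitrary test inputs:

```python
import cmath
import math

def bessel_i(n, x, terms=60):
    # Power series for the modified Bessel function I_n(x).
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

mu, kappa = 0.8, 3.0
norm = 2 * math.pi * bessel_i(0, kappa)

# First circular moment m_1 = <e^{ix}> by midpoint quadrature over one period.
n = 4000
h = 2 * math.pi / n
m1 = h * sum(cmath.exp(1j * t) * math.exp(kappa * math.cos(t - mu)) / norm
             for t in (-math.pi + (i + 0.5) * h for i in range(n)))

# Closed form m_1 = (I_1/I_0) e^{i mu}, circular mean = arg(m_1), and
# circular variance = 1 - |m_1|.
expected = bessel_i(1, kappa) / bessel_i(0, kappa) * cmath.exp(1j * mu)
circ_mean = cmath.phase(m1)
circ_var = 1 - abs(m1)
```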


Limiting behavior

When \kappa is large, the distribution resembles a normal distribution. More specifically, for large positive real numbers \kappa,

:f(x\mid\mu,\kappa) \approx \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]

where \sigma^2 = 1/\kappa, and the difference between the left-hand side and the right-hand side of the approximation converges uniformly to zero as \kappa goes to infinity. Also, when \kappa is small, the probability density function resembles a uniform distribution:

:\lim_{\kappa\to 0}f(x\mid\mu,\kappa)=\mathrm{U}(x)

where the interval for the uniform distribution \mathrm{U}(x) is the chosen interval of length 2\pi, i.e. \mathrm{U}(x) = 1/(2\pi) when x is in the interval and \mathrm{U}(x)=0 otherwise.
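Both limits are easy to see numerically. The sketch below (standard library only; `bessel_i` is our own series helper, and \kappa = 50 is an arbitrary "large" value) compares the von Mises density against its Gaussian approximation near the mode, and checks the flat small-\kappa limit:

```python
import math

def bessel_i(n, x, terms=80):
    # Power series for I_n(x); 80 terms is ample for x up to ~60.
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def vonmises_pdf(x, mu, kappa):
    return math.exp(kappa * math.cos(x - mu)) / (2 * math.pi * bessel_i(0, kappa))

kappa = 50.0
sigma = 1.0 / math.sqrt(kappa)

def normal_pdf(x):
    # N(0, sigma^2) density with sigma^2 = 1/kappa.
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

# Worst absolute gap between the two densities on [-0.5, 0.5] around the mode,
# compared with the peak height (about sqrt(kappa / (2 pi))).
grid = [i / 1000.0 - 0.5 for i in range(1001)]
worst = max(abs(vonmises_pdf(x, 0.0, kappa) - normal_pdf(x)) for x in grid)
peak = normal_pdf(0.0)

# In the opposite limit the density flattens to the circular uniform 1/(2 pi).
near_uniform = vonmises_pdf(1.0, 0.0, 1e-6)
```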


Estimation of parameters

A series of ''N'' measurements z_n=e^{i\theta_n} drawn from a von Mises distribution may be used to estimate certain parameters of the distribution. The average of the series \overline{z} is defined as

:\overline{z}=\frac{1}{N}\sum_{n=1}^N z_n

and its expectation value is just the first moment:

:\langle\overline{z}\rangle=\frac{I_1(\kappa)}{I_0(\kappa)}e^{i\mu}.

In other words, \overline{z} is an unbiased estimator of the first moment. If we assume that the mean \mu lies in the interval [-\pi,\pi], then \mathrm{Arg}(\overline{z}) will be a (biased) estimator of the mean \mu.

Viewing the z_n as a set of vectors in the complex plane, the \bar{R}^2 statistic is the square of the length of the averaged vector:

:\bar{R}^2=\overline{z}\,\overline{z}^*=\left(\frac{1}{N}\sum_{n=1}^N \cos\theta_n\right)^2+\left(\frac{1}{N}\sum_{n=1}^N \sin\theta_n\right)^2

and its expectation value is:

:\langle \bar{R}^2\rangle=\frac{1}{N}+\frac{N-1}{N}\,\frac{I_1(\kappa)^2}{I_0(\kappa)^2}.

In other words, the statistic

:R_e^2=\frac{N}{N-1}\left(\bar{R}^2-\frac{1}{N}\right)

will be an unbiased estimator of \frac{I_1(\kappa)^2}{I_0(\kappa)^2}, and solving the equation R_e=\frac{I_1(\kappa)}{I_0(\kappa)} for \kappa will yield a (biased) estimator of \kappa. In analogy to the linear case, the solution to the equation \bar{R}=\frac{I_1(\kappa)}{I_0(\kappa)} will yield the maximum likelihood estimate of \kappa, and the two will be equal in the limit of large ''N''. For an approximate solution for \kappa, refer to the von Mises–Fisher distribution.
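The estimation recipe above can be sketched end to end: draw angles with the standard library's random.vonmisesvariate, take \mathrm{Arg}(\overline{z}) as the mean estimate, and invert \bar{R} = I_1(\kappa)/I_0(\kappa) by bisection, since that ratio is monotone increasing in \kappa. The sample size, seed, and bisection bracket below are arbitrary choices for the illustration, and `bessel_i` is again our own helper.

```python
import cmath
import math
import random

def bessel_i(n, x, terms=80):
    # Power series for the modified Bessel function I_n(x).
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def a_ratio(kappa):
    # A(kappa) = I_1(kappa) / I_0(kappa): monotone, rising from 0 toward 1.
    return bessel_i(1, kappa) / bessel_i(0, kappa)

random.seed(0)
mu_true, kappa_true = 1.0, 4.0
sample = [random.vonmisesvariate(mu_true, kappa_true) for _ in range(20000)]

z_bar = sum(cmath.exp(1j * t) for t in sample) / len(sample)
mu_hat = cmath.phase(z_bar)   # Arg of the averaged vector: circular-mean estimate
r_bar = abs(z_bar)            # mean resultant length

# Solve r_bar = A(kappa) for kappa by bisection on a generous bracket.
lo, hi = 1e-9, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if a_ratio(mid) < r_bar:
        lo = mid
    else:
        hi = mid
kappa_hat = 0.5 * (lo + hi)
```

With 20000 samples, `mu_hat` and `kappa_hat` land close to the true values 1.0 and 4.0; with small samples both estimators remain noticeably biased, as the text notes.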


Distribution of the mean

The distribution of the sample mean \overline{z} = \bar{R}e^{i\overline{\theta}} for the von Mises distribution is given by:

:P(\bar{R},\overline{\theta})\,d\bar{R}\,d\overline{\theta}=\frac{1}{(2\pi I_0(\kappa))^N}\int_\Gamma \prod_{n=1}^N \left( e^{\kappa\cos(\theta_n-\mu)}\, d\theta_n\right) = \frac{e^{\kappa N\bar{R}\cos(\overline{\theta}-\mu)}}{I_0(\kappa)^N}\left(\frac{1}{(2\pi)^N}\int_\Gamma \prod_{n=1}^N d\theta_n\right)

where ''N'' is the number of measurements and \Gamma consists of intervals of 2\pi in the variables, subject to the constraint that \bar{R} and \overline{\theta} are constant, where \bar{R} is the mean resultant:

:\bar{R}^2=|\overline{z}|^2= \left(\frac{1}{N}\sum_{n=1}^N \cos\theta_n \right)^2 + \left(\frac{1}{N}\sum_{n=1}^N \sin\theta_n \right)^2

and \overline{\theta} is the mean angle:

:\overline{\theta}=\mathrm{Arg}(\overline{z}).

Note that the product term in parentheses is just the distribution of the mean for a circular uniform distribution. This means that the distribution of the mean direction \mu of a von Mises distribution VM(\mu, \kappa) is a von Mises distribution VM(\mu, \bar{R}N\kappa), or, equivalently, VM(\mu, R\kappa), where R=N\bar{R} is the resultant length.
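A Monte Carlo sanity check of the last statement (a sketch with arbitrary batch counts and seed): mean directions of batches drawn from VM(\mu, \kappa) should themselves be von Mises distributed with concentration about \bar{R}N\kappa, which we probe by comparing mean resultant lengths.

```python
import cmath
import math
import random

def bessel_i(n, x, terms=80):
    # Power series for the modified Bessel function I_n(x).
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def a_ratio(kappa):
    # Mean resultant length of a VM(mu, kappa) variable: I_1(kappa)/I_0(kappa).
    return bessel_i(1, kappa) / bessel_i(0, kappa)

random.seed(1)
mu, kappa, n_obs, n_batches = 0.0, 2.0, 10, 4000

mean_dirs, r_bars = [], []
for _ in range(n_batches):
    z = sum(cmath.exp(1j * random.vonmisesvariate(mu, kappa))
            for _ in range(n_obs)) / n_obs
    mean_dirs.append(cmath.phase(z))   # batch mean direction
    r_bars.append(abs(z))              # batch mean resultant R_bar

# Empirical resultant length of the batch mean directions...
r_empirical = abs(sum(cmath.exp(1j * t) for t in mean_dirs) / n_batches)
# ...versus the length implied by the predicted concentration R_bar * N * kappa.
r_predicted = a_ratio((sum(r_bars) / n_batches) * n_obs * kappa)
```

Using the average \bar{R} across batches in `r_predicted` is itself an approximation (the exact statement conditions on \bar{R}), so the two numbers agree only to a couple of decimal places.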


Entropy

By definition, the information entropy of the von Mises distribution is

:H = -\int_\Gamma f(\theta;\mu,\kappa)\,\ln(f(\theta;\mu,\kappa))\,d\theta

where \Gamma is any interval of length 2\pi. Since the entropy is invariant under rotation, we may take \mu=0 without loss of generality. The logarithm of the density of the von Mises distribution is then straightforward:

:\ln(f(\theta;0,\kappa))=-\ln(2\pi I_0(\kappa))+ \kappa \cos\theta.

The characteristic function representation for the von Mises distribution is:

:f(\theta;0,\kappa) =\frac{1}{2\pi}\left(1+2\sum_{n=1}^\infty\phi_n\cos(n\theta)\right)

where \phi_n= I_n(\kappa)/I_0(\kappa). Substituting these expressions into the entropy integral, exchanging the order of integration and summation, and using the orthogonality of the cosines, the entropy may be written:

:H = \ln(2\pi I_0(\kappa))-\kappa\phi_1 = \ln(2\pi I_0(\kappa))-\kappa\frac{I_1(\kappa)}{I_0(\kappa)}.

For \kappa=0, the von Mises distribution becomes the circular uniform distribution and the entropy attains its maximum value of \ln(2\pi). Notice that the von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified or, equivalently, when the circular mean and circular variance are specified.
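The closed form for the entropy can be confirmed by direct quadrature of -f\ln f. A standard-library sketch (`bessel_i` is a hand-rolled series helper and \kappa = 2 an arbitrary test value):

```python
import math

def bessel_i(n, x, terms=60):
    # Power series for the modified Bessel function I_n(x).
    return sum((x / 2.0) ** (2 * k + n) / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

kappa = 2.0
i0, i1 = bessel_i(0, kappa), bessel_i(1, kappa)

def pdf(t):
    # Von Mises density with mu = 0 (entropy is rotation-invariant).
    return math.exp(kappa * math.cos(t)) / (2 * math.pi * i0)

# Entropy by midpoint quadrature of -f ln f over one period.
n = 4000
h = 2 * math.pi / n
H_num = -h * sum(pdf(t) * math.log(pdf(t))
                 for t in (-math.pi + (i + 0.5) * h for i in range(n)))

# Closed form: H = ln(2 pi I_0(kappa)) - kappa I_1(kappa)/I_0(kappa),
# which is below the kappa = 0 maximum ln(2 pi).
H_formula = math.log(2 * math.pi * i0) - kappa * i1 / i0
H_uniform = math.log(2 * math.pi)
```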


See also

* Bivariate von Mises distribution
* Directional statistics
* Von Mises–Fisher distribution
* Kent distribution


References


Further reading

* Abramowitz, M. and Stegun, I. A. (eds.), ''Handbook of Mathematical Functions'', National Bureau of Standards, 1964; reprinted Dover Publications, 1965.
* Mardia, "Algorithm AS 86: The von Mises Distribution Function", ''Applied Statistics'', 24, 1975, pp. 268–272.
* Hill, "Algorithm 518: Incomplete Bessel Function I0: The von Mises Distribution", ''ACM Transactions on Mathematical Software'', Vol. 3, No. 3, September 1977, pp. 279–284.
* Best, D. and Fisher, N., "Efficient simulation of the von Mises distribution", ''Applied Statistics'', 28, 1979, pp. 152–157.
* Evans, M., Hastings, N., and Peacock, B., "von Mises Distribution", Ch. 41 in ''Statistical Distributions'', 3rd ed., New York: Wiley, 2000.
* Fisher, Nicholas I., ''Statistical Analysis of Circular Data'', New York: Cambridge University Press, 1993.
* Evans, M., Hastings, N., and Peacock, B., ''Statistical Distributions'', 2nd ed., John Wiley and Sons, 1993 (chapter 39).