Lorentz Distribution

The Cauchy distribution, named after Augustin-Louis Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution f(x; x_0,\gamma) is the distribution of the x-intercept of a ray issuing from (x_0,\gamma) with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero.

The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist. The Cauchy distribution has no moment generating function.

In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane.

It is one of the few distributions that is stable and has a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution.


History

A function with the form of the density function of the Cauchy distribution was studied geometrically by Fermat in 1659, and later was known as the witch of Agnesi, after Agnesi included it as an example in her 1748 calculus textbook. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number. As such, Laplace's use of the central limit theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter.


Characterization


Probability density function

The Cauchy distribution has the probability density function (PDF)

:f(x; x_0,\gamma) = \frac{1}{\pi\gamma \left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]} = \frac{1}{\pi} \left[\frac{\gamma}{(x - x_0)^2 + \gamma^2}\right],

where x_0 is the location parameter, specifying the location of the peak of the distribution, and \gamma is the scale parameter which specifies the half-width at half-maximum (HWHM); alternatively, 2\gamma is the full width at half maximum (FWHM). \gamma is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining what would now be called a Dirac delta function.

The maximum value or amplitude of the Cauchy PDF is \frac{1}{\pi\gamma}, located at x=x_0.

It is sometimes convenient to express the PDF in terms of the complex parameter \psi = x_0 + i\gamma:

:f(x;\psi)=\frac{1}{\pi}\,\textrm{Im}\left(\frac{1}{x-\psi}\right)=\frac{1}{\pi}\,\textrm{Re}\left(\frac{-i}{x-\psi}\right)

The special case when x_0 = 0 and \gamma = 1 is called the standard Cauchy distribution with the probability density function

:f(x; 0,1) = \frac{1}{\pi(1+x^2)}. \!

In physics, a three-parameter Lorentzian function is often used:

:f(x; x_0,\gamma,I) = \frac{I\,\gamma^2}{(x - x_0)^2 + \gamma^2} = I \left[\frac{\gamma^2}{(x - x_0)^2 + \gamma^2}\right],

where I is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where I = \frac{1}{\pi\gamma}.\!
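
The formulas above lend themselves to a quick numerical check. The following is a minimal sketch, assuming NumPy and SciPy are available; the helper name cauchy_pdf and the parameter values are illustrative choices. It evaluates the density, compares it with scipy.stats.cauchy (which uses loc = x_0 and scale = \gamma), and confirms that the density integrates to 1 with peak value 1/(\pi\gamma) at x = x_0.

import numpy as np
from scipy import stats
from scipy.integrate import quad

def cauchy_pdf(x, x0=0.0, gamma=1.0):
    # f(x; x0, gamma) = 1 / (pi * gamma * [1 + ((x - x0) / gamma)^2])
    return 1.0 / (np.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

x0, gamma = 2.0, 0.5
xs = np.linspace(-5.0, 9.0, 15)

# Agreement with the SciPy implementation of the same density.
assert np.allclose(cauchy_pdf(xs, x0, gamma), stats.cauchy.pdf(xs, loc=x0, scale=gamma))

# Total probability is 1; the peak value at x = x0 equals 1 / (pi * gamma).
area, _ = quad(cauchy_pdf, -np.inf, np.inf, args=(x0, gamma))
print(area, cauchy_pdf(x0, x0, gamma), 1.0 / (np.pi * gamma))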


Cumulative distribution function

The cumulative distribution function (CDF) of the Cauchy distribution is:

:F(x; x_0,\gamma)=\frac{1}{\pi} \arctan\left(\frac{x-x_0}{\gamma}\right)+\frac{1}{2}

and the quantile function (inverse CDF) of the Cauchy distribution is

:Q(p; x_0,\gamma) = x_0 + \gamma\,\tan\left[\pi\left(p-\tfrac{1}{2}\right)\right].

It follows that the first and third quartiles are (x_0 - \gamma, x_0 + \gamma), and hence the interquartile range is 2\gamma. For the standard distribution, the cumulative distribution function simplifies to the arctangent function \arctan(x):

:F(x; 0,1)=\frac{1}{\pi} \arctan\left(x\right)+\frac{1}{2}
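
Because the quantile function has the simple closed form above, Cauchy variates can be generated by inverse-CDF sampling. A minimal sketch, assuming NumPy is available (the function name cauchy_quantile, the parameter values, and the seed are illustrative): the sample median should sit near x_0 and half the sample interquartile range near \gamma.

import numpy as np

def cauchy_quantile(p, x0=0.0, gamma=1.0):
    # Q(p; x0, gamma) = x0 + gamma * tan(pi * (p - 1/2))
    return x0 + gamma * np.tan(np.pi * (p - 0.5))

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
samples = cauchy_quantile(u, x0=1.0, gamma=2.0)

q1, med, q3 = np.quantile(samples, [0.25, 0.5, 0.75])
print(med, (q3 - q1) / 2)   # roughly 1.0 and 2.0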


Entropy

The entropy of the Cauchy distribution is given by:

: \begin{align} H(\gamma) & =-\int_{-\infty}^\infty f(x;x_0,\gamma) \log(f(x;x_0,\gamma)) \, dx \\ & =\log(4\pi\gamma) \end{align}

The derivative of the quantile function, the quantile density function, for the Cauchy distribution is:

:Q'(p; \gamma) = \gamma\,\pi\,\sec^2\left[\pi\left(p-\tfrac{1}{2}\right)\right] \!

The differential entropy of a distribution can be defined in terms of its quantile density, specifically:

:H(\gamma) = \int_0^1 \log\,(Q'(p; \gamma))\,\mathrm dp = \log(4\pi\gamma)

The Cauchy distribution is the maximum entropy probability distribution for a random variate X for which

:\operatorname{E}\left[\log\left(1+(X-x_0)^2/\gamma^2\right)\right]=\log 4

or, alternatively, for a random variate X for which

:\operatorname{E}\left[\log\left(1+(X-x_0)^2\right)\right]=2\log(1+\gamma).

In its standard form, it is the maximum entropy probability distribution for a random variate X for which

:\operatorname{E}\!\left[\ln(1+X^2)\right]=\ln 4.
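
The closed form H(\gamma) = \log(4\pi\gamma) can be verified by direct quadrature of -\int f\log f\,dx; a small sketch assuming SciPy is available (the scale values are arbitrary illustrative choices):

import numpy as np
from scipy.integrate import quad

def entropy_by_quadrature(gamma):
    # H(gamma) = -integral of f(x) * log(f(x)) dx for the Cauchy density with scale gamma.
    f = lambda x: 1.0 / (np.pi * gamma * (1.0 + (x / gamma) ** 2))
    val, _ = quad(lambda x: -f(x) * np.log(f(x)), -np.inf, np.inf)
    return val

for gamma in (0.5, 1.0, 3.0):
    print(entropy_by_quadrature(gamma), np.log(4 * np.pi * gamma))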


Kullback–Leibler divergence

The Kullback–Leibler divergence between two Cauchy distributions has the following symmetric closed-form formula:

: \mathrm{KL}\left(p_{x_{0,1},\gamma_1} : p_{x_{0,2},\gamma_2}\right)=\log \frac{\left(\gamma_1+\gamma_2\right)^2 + \left(x_{0,1}-x_{0,2}\right)^2}{4\gamma_1\gamma_2}.

Any f-divergence between two Cauchy distributions is symmetric and can be expressed as a function of the chi-squared divergence. Closed-form expressions for the total variation, Jensen–Shannon divergence, Hellinger distance, etc. are available.
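
The closed form quoted above (as reconstructed here) can be checked against a direct quadrature of the divergence integral; the sketch below assumes SciPy is available, and the parameter values are arbitrary illustrative choices. Swapping the two distributions should give the same value, illustrating the symmetry.

import numpy as np
from scipy.integrate import quad
from scipy.stats import cauchy

def kl_numeric(x1, g1, x2, g2):
    # D_KL(p1 || p2) = integral of p1(x) * log(p1(x) / p2(x)) dx, by quadrature.
    p1 = lambda x: cauchy.pdf(x, loc=x1, scale=g1)
    p2 = lambda x: cauchy.pdf(x, loc=x2, scale=g2)
    val, _ = quad(lambda x: p1(x) * np.log(p1(x) / p2(x)), -np.inf, np.inf)
    return val

def kl_closed_form(x1, g1, x2, g2):
    # log( ((g1 + g2)^2 + (x1 - x2)^2) / (4 * g1 * g2) )
    return np.log(((g1 + g2) ** 2 + (x1 - x2) ** 2) / (4 * g1 * g2))

print(kl_numeric(0.0, 1.0, 2.0, 3.0), kl_closed_form(0.0, 1.0, 2.0, 3.0))
print(kl_numeric(2.0, 3.0, 0.0, 1.0), kl_closed_form(2.0, 3.0, 0.0, 1.0))  # symmetric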


Properties

The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to x_0.

When U and V are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio U/V has the standard Cauchy distribution.

If \Sigma is a p\times p positive-semidefinite covariance matrix with strictly positive diagonal entries, then for independent and identically distributed X,Y\sim N(0,\Sigma) and any random p-vector w independent of X and Y such that w_1+\cdots+w_p=1 and w_i\geq 0, i=1,\ldots,p, (defining a categorical distribution) it holds that

:\sum_{j=1}^p w_j\frac{X_j}{Y_j}\sim\mathrm{Cauchy}(0,1).

If X_1, \ldots, X_n are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean (X_1 + \cdots + X_n)/n has the same standard Cauchy distribution. To see that this is true, compute the characteristic function of the sample mean:

:\varphi_{\overline{X}}(t) = \mathrm{E}\left[e^{i\,\overline{X}\,t}\right] = \left[\varphi_X(t/n)\right]^n = e^{-|t|},

where \overline{X} is the sample mean; this is again the characteristic function of the standard Cauchy distribution. This example serves to show that the condition of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case.

The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution. The standard Cauchy distribution coincides with the Student's ''t''-distribution with one degree of freedom.

Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the Cauchy distribution is closed under linear fractional transformations with real coefficients. In this connection, see also McCullagh's parametrization of the Cauchy distributions.
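
Two of the statements above (the ratio of two independent standard normal variables is standard Cauchy, and the sample mean of i.i.d. standard Cauchy variables is again standard Cauchy) are easy to illustrate by simulation. A sketch assuming NumPy and SciPy are available; the sample sizes and the seed are arbitrary illustrative choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000

# Ratio of two independent standard normal variables.
ratio = rng.standard_normal(n) / rng.standard_normal(n)

# Sample means of 20 i.i.d. standard Cauchy variables each.
means = rng.standard_cauchy((n, 20)).mean(axis=1)

probs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(ratio, probs))
print(np.quantile(means, probs))
print(stats.cauchy.ppf(probs))   # both empirical quantile sets should be close to these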


Characteristic function

Let X denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by

:\varphi_X(t) = \operatorname{E}\left[e^{iXt}\right] =\int_{-\infty}^\infty f(x;x_0,\gamma)e^{ixt}\,dx = e^{ix_0t-\gamma |t|},

which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform:

:f(x; x_0,\gamma) = \frac{1}{2\pi}\int_{-\infty}^\infty \varphi_X(t;x_0,\gamma)e^{-ixt} \, dt \!

The ''n''th moment of a distribution is the ''n''th derivative of the characteristic function evaluated at t=0. Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment.
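
The characteristic function e^{ix_0t-\gamma|t|} can be compared with the empirical characteristic function of a simulated sample; a sketch assuming NumPy is available, with illustrative parameters, seed, and sample size:

import numpy as np

rng = np.random.default_rng(2)
x0, gamma = 1.0, 2.0
x = x0 + gamma * rng.standard_cauchy(500_000)

for t in (-1.5, 0.3, 2.0):
    ecf = np.mean(np.exp(1j * t * x))                   # empirical characteristic function
    print(ecf, np.exp(1j * x0 * t - gamma * abs(t)))    # should agree to a few decimals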


Comparison with the normal distribution

When fitted to the same data, the Cauchy density function typically shows a higher, sharper peak than the fitted normal density, while its far tails are much heavier. An example is shown in the two figures described here.

The figure to the left shows the ''Cauchy probability density function'' fitted to an observed histogram. The peak of the function is higher than the peak of the histogram, while the tails are lower than those of the histogram.

The figure to the right shows the ''normal probability density function'' fitted to ''the same'' observed histogram. The peak of the function is lower than the peak of the histogram.

This illustrates the above statement.


Explanation of undefined moments


Mean

If a probability distribution has a density function f(x), then the mean, if it exists, is given by

:\int_{-\infty}^\infty x f(x)\,dx. \qquad (1)

We may evaluate this two-sided improper integral by computing the sum of two one-sided improper integrals. That is,

:\int_{-\infty}^a x f(x)\,dx + \int_a^\infty x f(x)\,dx \qquad (2)

for an arbitrary real number a. For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both the terms in the sum (2) are infinite and have opposite sign. Hence (1) is undefined, and thus so is the mean.

Note that the Cauchy principal value of the mean of the Cauchy distribution is

:\lim_{a\to\infty}\int_{-a}^a x f(x)\,dx,

which is zero. On the other hand, the related integral

:\lim_{a\to\infty}\int_{-2a}^a x f(x)\,dx

is ''not'' zero, as can be seen by computing the integral. This again shows that the mean (1) cannot exist.

Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold for the Cauchy distribution.
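
The contrast between the symmetric (principal-value) limit and an asymmetric limit such as the one reconstructed above can be seen numerically; the sketch below assumes SciPy is available, uses the standard Cauchy density, and the truncation points are illustrative. The symmetric integrals stay near zero while the asymmetric ones approach -\ln(2)/\pi.

import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (np.pi * (1.0 + x ** 2))   # standard Cauchy density

for a in (10.0, 1e3, 1e5):
    sym, _ = quad(lambda x: x * f(x), -a, a, limit=200)        # symmetric truncation
    asym, _ = quad(lambda x: x * f(x), -2 * a, a, limit=200)   # asymmetric truncation
    print(a, sym, asym)

print(-np.log(2) / np.pi)   # limit of the asymmetric truncation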


Smaller moments

The absolute moments for p\in(-1,1) are defined. For X\sim\mathrm{Cauchy}(0,\gamma) we have

:\operatorname{E}\left[|X|^p\right] = \gamma^p \sec(\pi p/2).


Higher moments

The Cauchy distribution does not have finite moments of any order. Some of the higher raw moments do exist and have a value of infinity, for example, the raw second moment:

: \begin{align} \operatorname{E}[X^2] & \propto \int_{-\infty}^\infty \frac{x^2}{1+x^2}\,dx = \int_{-\infty}^\infty 1 - \frac{1}{1+x^2}\,dx \\ & = \int_{-\infty}^\infty dx - \int_{-\infty}^\infty \frac{1}{1+x^2}\,dx = \int_{-\infty}^\infty dx-\pi = \infty. \end{align}

By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to \infty - \infty since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the central moments and standardized moments are undefined since they are all based on the mean. The variance, which is the second central moment, is likewise non-existent (despite the fact that the raw second moment exists with the value infinity).

The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do.
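
Both claims, that the fractional absolute moments are finite and equal \gamma^p\sec(\pi p/2) (as reconstructed above), and that the raw second moment diverges, can be illustrated by quadrature. A sketch assuming SciPy is available, with illustrative values of p, \gamma, and the truncation points:

import numpy as np
from scipy.integrate import quad
from scipy.stats import cauchy

gamma = 2.0

# Fractional absolute moments E|X|^p for 0 < p < 1 (by symmetry, twice the integral on [0, inf)).
for p in (0.25, 0.5, 0.75):
    val, _ = quad(lambda x: 2 * x ** p * cauchy.pdf(x, scale=gamma), 0, np.inf)
    print(p, val, gamma ** p / np.cos(np.pi * p / 2))

# The truncated raw second moment grows roughly linearly in the truncation point.
for b in (1e2, 1e4, 1e6):
    m2, _ = quad(lambda x: 2 * x ** 2 * cauchy.pdf(x, scale=gamma), 0, b, limit=200)
    print(b, m2)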


Moments of truncated distributions

Consider the truncated distribution defined by restricting the standard Cauchy distribution to a very wide but bounded interval. Such a truncated distribution has all moments (and the central limit theorem applies for i.i.d. observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution.
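
A small simulation makes the "behaves like a Cauchy distribution" point concrete; the truncation bounds of plus or minus 10^6 below are an arbitrary illustrative choice (the source does not specify the interval), and sampling uses the inverse CDF restricted to that interval. Even though all moments of the truncated distribution exist, the running sample mean is still dominated by rare, very large observations.

import numpy as np

rng = np.random.default_rng(3)
lo, hi = -1e6, 1e6   # illustrative truncation interval

# Inverse-CDF sampling of the standard Cauchy restricted to [lo, hi].
p_lo = np.arctan(lo) / np.pi + 0.5
p_hi = np.arctan(hi) / np.pi + 0.5
u = rng.uniform(p_lo, p_hi, size=100_000)
x = np.tan(np.pi * (u - 0.5))

for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, x[:n].mean())   # the running mean keeps jumping around instead of settling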


Estimation of parameters

Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if an i.i.d. sample of size ''n'' is taken from a Cauchy distribution, one may calculate the sample mean as:

:\bar{x}=\frac{1}{n} \sum_{i=1}^n x_i

Although the sample values x_i will be concentrated about the central value x_0, the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x_0 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken.

Therefore, more robust means of estimating the central value x_0 and the scaling parameter \gamma are needed. One simple method is to take the median value of the sample as an estimator of x_0 and half the sample interquartile range as an estimator of \gamma. Other, more precise and robust methods have been developed. For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for x_0 that is more efficient than using either the sample median or the full sample mean. However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.

Maximum likelihood can also be used to estimate the parameters x_0 and \gamma. However, this tends to be complicated by the fact that this requires finding the roots of a high degree polynomial, and there can be multiple roots that represent local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples. The log-likelihood function for the Cauchy distribution for sample size n is:

:\hat\ell(x_1,\dotsc,x_n \mid \!x_0,\gamma ) = - n \log (\gamma \pi) - \sum_{i=1}^n \log \left(1 + \left(\frac{x_i - x_0}{\gamma}\right)^2\right)

Maximizing the log likelihood function with respect to x_0 and \gamma by taking the first derivative produces the following system of equations:

: \frac{d \ell}{d x_0} = \sum_{i=1}^n \frac{2(x_i - x_0)}{\gamma^2 + (x_i - x_0)^2} = 0

: \frac{d \ell}{d \gamma} = \sum_{i=1}^n \frac{2(x_i - x_0)^2}{\gamma\left(\gamma^2 + (x_i - x_0)^2\right)} - \frac{n}{\gamma} = 0

Note that

: \sum_{i=1}^n \frac{(x_i - x_0)^2}{\gamma^2 + (x_i - x_0)^2}

is a monotone function in \gamma and that the solution \gamma must satisfy

: \min_i |x_i - x_0| \le \gamma \le \max_i |x_i - x_0|.

Solving just for x_0 requires solving a polynomial of degree 2n-1, and solving just for \,\!\gamma requires solving a polynomial of degree 2n. Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x_0 using the sample median is only about 81% as asymptotically efficient as estimating x_0 by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x_0 as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x_0.

The shape can be estimated using the median of absolute values, since for location 0 Cauchy variables X\sim\mathrm{Cauchy}(0,\gamma), the \operatorname{median}(|X|) = \gamma, the shape parameter.
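
The estimators discussed above are easy to compare on simulated data; the sketch below assumes SciPy is available (scipy.stats.cauchy.fit performs the numerical maximum likelihood fit, returning loc and scale), and the true parameter values, sample size, and seed are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x0_true, gamma_true = 3.0, 1.5
sample = stats.cauchy.rvs(loc=x0_true, scale=gamma_true, size=5_000, random_state=rng)

# Simple robust estimators: sample median for x0, half the interquartile range for gamma.
q1, med, q3 = np.quantile(sample, [0.25, 0.5, 0.75])
print("median / half-IQR:", med, (q3 - q1) / 2)

# Numerical maximum likelihood estimates of (x0, gamma).
x0_mle, gamma_mle = stats.cauchy.fit(sample)
print("maximum likelihood:", x0_mle, gamma_mle)

# The sample mean and variance, by contrast, are erratic and keep growing with sample size.
print("sample mean / variance:", sample.mean(), sample.var())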


Multivariate Cauchy distribution

A random vector X=(X_1, \ldots, X_k)^T is said to have the multivariate Cauchy distribution if every linear combination of its components Y=a_1X_1+ \cdots + a_kX_k has a Cauchy distribution. That is, for any constant vector a\in \mathbb{R}^k, the random variable Y=a^TX should have a univariate Cauchy distribution. The characteristic function of a multivariate Cauchy distribution is given by:

:\varphi_X(t) = e^{ix_0(t)-\gamma(t)}, \!

where x_0(t) and \gamma(t) are real functions with x_0(t) a homogeneous function of degree one and \gamma(t) a positive homogeneous function of degree one. More formally:

:x_0(at) = ax_0(t),
:\gamma(at) = |a|\gamma(t),

for all t.

An example of a bivariate Cauchy distribution can be given by:

:f(x, y; x_0,y_0,\gamma)= \frac{1}{2\pi}\left[\frac{\gamma}{\left((x-x_0)^2+(y-y_0)^2+\gamma^2\right)^{3/2}}\right].

Note that in this example, even though the covariance between x and y is 0, x and y are not statistically independent.

We can also write this formula for a complex variable. Then the probability density function of the complex Cauchy distribution is:

:f(z; z_0,\gamma)= \frac{1}{2\pi}\left[\frac{\gamma}{\left(|z-z_0|^2+\gamma^2\right)^{3/2}}\right].

Analogous to the univariate density, the multidimensional Cauchy density also relates to the multivariate Student distribution. They are equivalent when the degrees of freedom parameter is equal to one. The density of a k-dimensional Student distribution with one degree of freedom becomes:

:f(\mathbf{x}; \boldsymbol\mu, \boldsymbol\Sigma, k)= \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\pi^{k/2}\left|\boldsymbol\Sigma\right|^{1/2}\left[1+(\mathbf{x}-\boldsymbol\mu)^T\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right]^{\frac{1+k}{2}}}.

Properties and details for this density can be obtained by taking it as a particular case of the multivariate Student density.
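
Since the multivariate Cauchy coincides with the multivariate Student t distribution with one degree of freedom, it can be simulated as X = \mu + Z/|W| with Z \sim N(0,\Sigma) and W \sim N(0,1) independent; any linear combination a^TX is then univariate Cauchy with scale \sqrt{a^T\Sigma a}. The sketch below assumes NumPy and SciPy are available; \Sigma, a, the sample size, and the seed are illustrative choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
k, n = 3, 200_000
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# Multivariate Cauchy via the Student-t (one degree of freedom) construction.
Z = rng.multivariate_normal(np.zeros(k), Sigma, size=n)
W = np.abs(rng.standard_normal(n))
X = Z / W[:, None]

# A linear combination a^T X should be univariate Cauchy(0, sqrt(a^T Sigma a)).
a = np.array([1.0, -2.0, 0.5])
y = X @ a
scale = np.sqrt(a @ Sigma @ a)
probs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(y, probs))
print(stats.cauchy.ppf(probs, scale=scale))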


Transformation properties

*If X \sim \operatorname{Cauchy}(x_0,\gamma) then kX + \ell \sim \operatorname{Cauchy}(x_0 k+\ell, \gamma |k|).
*If X \sim \operatorname{Cauchy}(x_0, \gamma_0) and Y \sim \operatorname{Cauchy}(x_1,\gamma_1) are independent, then X+Y \sim \operatorname{Cauchy}(x_0+x_1,\gamma_0+\gamma_1) and X-Y \sim \operatorname{Cauchy}(x_0-x_1, \gamma_0+\gamma_1) (a simulation check of this additivity is sketched after this list).
*If X \sim \operatorname{Cauchy}(0,\gamma) then \tfrac{1}{X} \sim \operatorname{Cauchy}\left(0, \tfrac{1}{\gamma}\right).
*McCullagh's parametrization of the Cauchy distributions (McCullagh, P., "Conditional inference and Cauchy models", ''Biometrika'', volume 79 (1992), pages 247–259): Expressing a Cauchy distribution in terms of one complex parameter \psi = x_0+i\gamma, define X \sim \operatorname{Cauchy}(\psi) to mean X \sim \operatorname{Cauchy}(x_0,|\gamma|). If X \sim \operatorname{Cauchy}(\psi) then: \frac{aX+b}{cX+d} \sim \operatorname{Cauchy}\left(\frac{a\psi+b}{c\psi+d}\right), where a, b, c and d are real numbers.
* Using the same convention as above, if X \sim \operatorname{Cauchy}(\psi) then: \frac{X-i}{X+i} \sim \operatorname{CCauchy}\left(\frac{\psi-i}{\psi+i}\right), where \operatorname{CCauchy} is the circular Cauchy distribution.
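
The additivity and linear-transformation properties referenced in the list above can be checked by simulation; a sketch assuming NumPy and SciPy are available, with illustrative parameters and seed:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 300_000
X = stats.cauchy.rvs(loc=1.0, scale=0.5, size=n, random_state=rng)
Y = stats.cauchy.rvs(loc=-2.0, scale=2.0, size=n, random_state=rng)

probs = [0.1, 0.25, 0.5, 0.75, 0.9]

# X + Y should be Cauchy(1 + (-2), 0.5 + 2.0) = Cauchy(-1, 2.5).
print(np.quantile(X + Y, probs))
print(stats.cauchy.ppf(probs, loc=-1.0, scale=2.5))

# 3X - 4 should be Cauchy(3*1 - 4, |3|*0.5) = Cauchy(-1, 1.5).
print(np.quantile(3 * X - 4, probs))
print(stats.cauchy.ppf(probs, loc=-1.0, scale=1.5))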


Lévy measure

The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter \gamma is given, for X \sim \operatorname{Stable}(\gamma, 0, 0)\,, by:

: \operatorname{E}\left( e^{ixX} \right) = \exp\left( \int (e^{ixy} - 1) \Pi_\gamma(dy) \right)

where

:\Pi_\gamma(dy) = \left( c_{1,+} \frac{1}{y^{1+\gamma}} 1_{\{y>0\}} + c_{1,-} \frac{1}{|y|^{1+\gamma}} 1_{\{y<0\}} \right) \, dy

and c_{1,+}, c_{1,-} can be expressed explicitly. In the case \gamma = 1 of the Cauchy distribution, one has c_{1,+} = c_{1,-}.

This last representation is a consequence of the formula

: \pi |x| = \operatorname{PV}\int_{\mathbb{R}\smallsetminus\{0\}} (1 - e^{ixy}) \, \frac{dy}{y^2}


Related distributions

*\operatorname{Cauchy}(0,1) \sim \textrm{t}(\mathrm{df}=1)\,, the Student's ''t'' distribution with one degree of freedom.
*\operatorname{Cauchy}(\mu,\sigma) \sim \textrm{t}_{(\mathrm{df}=1)}(\mu,\sigma)\,, the non-standardized Student's ''t'' distribution.
*If X, Y \sim \textrm{N}(0,1)\,, with X, Y independent, then \tfrac{X}{Y}\sim \textrm{Cauchy}(0,1)\,.
*If X \sim \textrm{U}(0,1)\, then \tan\left(\pi\left(X-\tfrac{1}{2}\right)\right) \sim \textrm{Cauchy}(0,1)\,.
*If X \sim \operatorname{Log-Cauchy}(0, 1) then \ln(X) \sim \textrm{Cauchy}(0, 1).
*If X \sim \operatorname{Cauchy}(x_0,\gamma) then \tfrac{1}{X} \sim \operatorname{Cauchy}\left(\tfrac{x_0}{x_0^2+\gamma^2},\tfrac{\gamma}{x_0^2+\gamma^2}\right).
*The Cauchy distribution is a limiting case of a Pearson distribution of type 4.
*The Cauchy distribution is a special case of a Pearson distribution of type 7.
*The Cauchy distribution is a stable distribution: if X \sim \textrm{Stable}(1, 0, \gamma, \mu), then X \sim \operatorname{Cauchy}(\mu, \gamma).
*The Cauchy distribution is a singular limit of a hyperbolic distribution.
*The wrapped Cauchy distribution, taking values on a circle, is derived from the Cauchy distribution by wrapping it around the circle.
*If X \sim \textrm{N}(0,1) and Z \sim \operatorname{Inverse-Gamma}(1/2, s^2/2), then Y = \mu + X \sqrt{Z} \sim \operatorname{Cauchy}(\mu,s) (a simulation check of this representation is sketched after this list). For half-Cauchy distributions, the relation holds by setting X \sim \textrm{N}(0,1) restricted to [0,\infty).
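
The normal scale-mixture representation in the last item of the list above can be illustrated by simulation; the sketch assumes SciPy is available (scipy.stats.invgamma with shape a = 1/2 and scale s^2/2 plays the role of the inverse-gamma variance), and \mu, s, the sample size, and the seed are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
mu, s = 2.0, 1.5
n = 300_000

# Y = mu + X * sqrt(Z), with X ~ N(0, 1) and Z ~ Inverse-Gamma(1/2, s^2/2) independent.
X = rng.standard_normal(n)
Z = stats.invgamma.rvs(a=0.5, scale=s ** 2 / 2, size=n, random_state=rng)
Y = mu + X * np.sqrt(Z)

probs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(Y, probs))
print(stats.cauchy.ppf(probs, loc=mu, scale=s))   # Y should look Cauchy(mu, s)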


Relativistic Breit–Wigner distribution

In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.


Occurrence and applications

*In spectroscopy, the Cauchy distribution describes the shape of spectral lines which are subject to homogeneous broadening, in which all atoms interact in the same way with the frequency range contained in the line shape. Many mechanisms cause homogeneous broadening, most notably collision broadening. Lifetime or natural broadening also gives rise to a line shape described by the Cauchy distribution.
*Applications of the Cauchy distribution or its transformation can be found in fields working with exponential growth. A 1958 paper by White derived the test statistic for estimators of \hat\beta for the equation x_{t+1}=\beta x_t+\varepsilon_{t+1},\ \beta>1, where the maximum likelihood estimator is found using ordinary least squares, and showed that the sampling distribution of the statistic is the Cauchy distribution.
*The Cauchy distribution is often the distribution of observations for objects that are spinning. The classic reference for this is Gull's lighthouse problem; see also the Breit–Wigner distribution in particle physics, discussed above. (A simulation of the lighthouse setting is sketched after this list.)
*In hydrology the Cauchy distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges. The blue picture illustrates an example of fitting the Cauchy distribution to ranked monthly maximum one-day rainfalls, showing also the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.
*The expression for the imaginary part of the complex electrical permittivity according to the Lorentz model is a Cauchy (Lorentzian) function.
*In finance, the Cauchy distribution can be used to model VAR (value at risk), producing a much larger probability of extreme risk than the Gaussian distribution. Tong Liu (2012), An intermediate distribution between Gaussian and Cauchy distributions. https://arxiv.org/pdf/1208.5109.pdf
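
The lighthouse setting mentioned in the list above is the same geometric construction as in the lead: a beam emitted at a uniformly distributed angle from a point at distance \gamma from a line hits that line at a Cauchy-distributed position. A simulation sketch assuming NumPy and SciPy are available; the position of the lighthouse, the sample size, and the seed are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x0, gamma = 0.0, 1.0   # lighthouse at shore position x0, at distance gamma from the shoreline

# A beam at a uniform angle theta hits the shoreline at x = x0 + gamma * tan(theta).
theta = rng.uniform(-np.pi / 2, np.pi / 2, size=200_000)
hits = x0 + gamma * np.tan(theta)

probs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(hits, probs))
print(stats.cauchy.ppf(probs, loc=x0, scale=gamma))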


See also

* Lévy flight and Lévy process
* Laplace distribution, the Fourier transform of the Cauchy distribution
* Cauchy process
* Stable process
* Slash distribution


References


External links

* Earliest Uses: The entry on Cauchy distribution has some historical information.
* Ratios of Normal Variables, by George Marsaglia


Higher moments

The Cauchy distribution does not have finite moments of any order. Some of the higher
raw moment In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total ma ...
s do exist and have a value of infinity, for example, the raw second moment: : \begin \operatorname ^2& \propto \int_^\infty \frac\,dx = \int_^\infty 1 - \frac\,dx \\ pt& = \int_^\infty dx - \int_^\infty \frac\,dx = \int_^\infty dx-\pi = \infty. \end By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to \infty - \infty since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the
central moment In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random ...
s and
standardized moment In probability theory and statistics, a standardized moment of a probability distribution is a moment (often a higher degree central moment) that is normalized, typically by a power of the standard deviation, rendering the moment scale invariant. ...
s are undefined since they are all based on the mean. The variance—which is the second central moment—is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do.


Moments of truncated distributions

Consider the
truncated distribution In statistics, a truncated distribution is a conditional distribution that results from restricting the domain of some other probability distribution. Truncated distributions arise in practical statistics in cases where the ability to record, or e ...
defined by restricting the standard Cauchy distribution to the interval . Such a truncated distribution has all moments (and the central limit theorem applies for
i.i.d. In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent. This property is us ...
observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution.


Estimation of parameters

Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if an i.i.d. sample of size ''n'' is taken from a Cauchy distribution, one may calculate the sample mean as: :\bar=\frac 1 n \sum_^n x_i Although the sample values x_i will be concentrated about the central value x_0, the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x_0 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken. Therefore, more robust means of estimating the central value x_0 and the scaling parameter \gamma are needed. One simple method is to take the median value of the sample as an estimator of x_0 and half the sample
interquartile range In descriptive statistics, the interquartile range (IQR) is a measure of statistical dispersion, which is the spread of the data. The IQR may also be called the midspread, middle 50%, fourth spread, or H‑spread. It is defined as the difference ...
as an estimator of \gamma. Other, more precise and robust methods have been developed For example, the
truncated mean A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. It involves the calculation of the mean after discarding given parts of a probability distribution or sample at the high and low end, an ...
of the middle 24% of the sample
order statistics In statistics, the ''k''th order statistic of a statistical sample is equal to its ''k''th-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference. Importa ...
produces an estimate for x_0 that is more efficient than using either the sample median or the full sample mean. However, because of the
fat tails A fat-tailed distribution is a probability distribution that exhibits a large skewness or kurtosis, relative to that of either a normal distribution or an exponential distribution. In common usage, the terms fat-tailed and heavy-tailed are someti ...
of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.
Maximum likelihood In statistics, maximum likelihood estimation (MLE) is a method of estimation theory, estimating the Statistical parameter, parameters of an assumed probability distribution, given some observed data. This is achieved by Mathematical optimization, ...
can also be used to estimate the parameters x_0 and \gamma. However, this tends to be complicated by the fact that it requires finding the roots of a high-degree polynomial, and there can be multiple roots that represent local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples. The log-likelihood function for the Cauchy distribution for sample size n is: :\hat\ell(x_1,\dotsc,x_n \mid x_0,\gamma ) = - n \log (\gamma \pi) - \sum_{i=1}^n \log \left(1 + \left(\frac{x_i - x_0}{\gamma}\right)^2\right) Maximizing the log-likelihood function with respect to x_0 and \gamma by taking the first derivatives produces the following system of equations: : \frac{\partial \ell}{\partial x_0} = \sum_{i=1}^n \frac{2(x_i - x_0)}{\gamma^2 + (x_i - x_0)^2} = 0 : \frac{\partial \ell}{\partial \gamma} = \sum_{i=1}^n \frac{2(x_i - x_0)^2}{\gamma\left(\gamma^2 + (x_i - x_0)^2\right)} - \frac{n}{\gamma} = 0 Note that : \sum_{i=1}^n \frac{(x_i - x_0)^2}{\gamma^2 + (x_i - x_0)^2} is a monotone function of \gamma and that the solution \gamma must satisfy : \min_i |x_i-x_0| \le \gamma \le \max_i |x_i-x_0| . Solving just for x_0 requires solving a polynomial of degree 2n-1, and solving just for \gamma requires solving a polynomial of degree 2n. Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x_0 using the sample median is only about 81% as asymptotically efficient as estimating x_0 by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x_0 as the maximum likelihood estimate. When
Newton's method In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valu ...
is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x_0. The shape parameter \gamma can be estimated using the median of absolute values, since for location-0 Cauchy variables X\sim\mathrm{Cauchy}(0,\gamma), the median of |X| equals the shape parameter: \mathrm{median}(|X|) = \gamma.
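A minimal numerical maximum-likelihood sketch is given below. It assumes the simulated data array `x` and the trimmed-mean estimate `x0_trimmed` from the previous sketch, and it uses scipy's general-purpose Nelder-Mead optimizer on the negative log-likelihood rather than the Newton iteration mentioned above; \gamma is optimized on the log scale so that it stays positive.

```python
# Minimal MLE sketch, assuming `x` and `x0_trimmed` from the previous snippet.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, data):
    # Negative of the Cauchy log-likelihood written above.
    x0, log_gamma = params
    gamma = np.exp(log_gamma)
    z = (data - x0) / gamma
    return len(data) * np.log(np.pi * gamma) + np.sum(np.log1p(z**2))

start = np.array([x0_trimmed, np.log(np.median(np.abs(x - x0_trimmed)))])
result = minimize(neg_log_likelihood, start, args=(x,), method="Nelder-Mead")
x0_hat, gamma_hat = result.x[0], np.exp(result.x[1])
print(x0_hat, gamma_hat)   # should be close to the true (2, 3)
```

In line with the text, the robust estimates serve only as a starting point; the optimizer then refines them toward the maximum likelihood values.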


Multivariate Cauchy distribution

A
random vector In probability, and statistics, a multivariate random variable or random vector is a list of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. ...
X=(X_1, \ldots, X_k)^T is said to have the multivariate Cauchy distribution if every linear combination of its components Y=a_1X_1+ \cdots + a_kX_k has a Cauchy distribution. That is, for any constant vector a\in \mathbb R^k, the random variable Y=a^TX should have a univariate Cauchy distribution. The characteristic function of a multivariate Cauchy distribution is given by: :\varphi_X(t) = e^{ix_0(t)-\gamma(t)}, \! where x_0(t) and \gamma(t) are real functions with x_0(t) a
homogeneous function In mathematics, a homogeneous function is a function of several variables such that, if all its arguments are multiplied by a scalar, then its value is multiplied by some power of this scalar, called the degree of homogeneity, or simply the ''deg ...
of degree one and \gamma(t) a positive homogeneous function of degree one. More formally: :x_0(at) = a x_0(t), :\gamma(at) = |a| \gamma(t), for all t. An example of a bivariate Cauchy distribution can be given by: :f(x, y; x_0,y_0,\gamma)= \frac{\gamma}{2 \pi \left[(x-x_0)^2 + (y-y_0)^2 + \gamma^2\right]^{3/2}}. Note that in this example, even though the covariance between x and y is 0, x and y are not
statistically independent Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of o ...
. This formula can also be written for a complex variable; the probability density function of a complex Cauchy variable is then: :f(z; z_0,\gamma)= \frac{\gamma}{2 \pi \left(|z-z_0|^2 + \gamma^2\right)^{3/2}}. Analogous to the univariate density, the multidimensional Cauchy density also relates to the
multivariate Student distribution In statistics, the multivariate ''t''-distribution (or multivariate Student distribution) is a multivariate probability distribution. It is a generalization to random vectors of the Student's ''t''-distribution, which is a distribution applicab ...
. They are equivalent when the degrees of freedom parameter is equal to one. The density of a k-dimensional Student distribution with one degree of freedom is: :f(\mathbf{x}; \boldsymbol\mu, \boldsymbol\Sigma, k)= \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\pi^{k/2}\left|\boldsymbol\Sigma\right|^{1/2}\left[1+(\mathbf{x}-\boldsymbol\mu)^T\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right]^{(1+k)/2}} . Properties and details for this density can be obtained by taking it as a particular case of the multivariate Student density.
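Because the multivariate Cauchy coincides with the multivariate Student t with one degree of freedom, it can be sampled as a correlated Gaussian vector divided by an independent |N(0,1)| variate. The sketch below is illustrative; the location vector and scale matrix are arbitrary choices, and the check uses quantiles because moments do not exist.

```python
# Illustrative sampling sketch: multivariate Cauchy as multivariate t with 1 df.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])              # positive-definite scale matrix
L = np.linalg.cholesky(Sigma)

n = 200_000
Z = rng.standard_normal((n, 2)) @ L.T       # N(0, Sigma) rows
W = np.abs(rng.standard_normal(n))          # |N(0,1)| = sqrt(chi-square with 1 df)
X = mu + Z / W[:, None]                     # multivariate Cauchy sample

a = np.array([0.7, -1.3])                   # any linear combination is Cauchy
y = X @ a
q25, q50, q75 = np.quantile(y, [0.25, 0.5, 0.75])
print(q50, 0.5 * (q75 - q25))               # approx. a.mu and sqrt(a.Sigma.a)
```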


Transformation properties

*If X \sim \operatorname{Cauchy}(x_0,\gamma) then kX + \ell \sim \operatorname{Cauchy}(kx_0+\ell, \gamma |k|) *If X \sim \operatorname{Cauchy}(x_0, \gamma_0) and Y \sim \operatorname{Cauchy}(x_1,\gamma_1) are independent, then X+Y \sim \operatorname{Cauchy}(x_0+x_1,\gamma_0 +\gamma_1) and X-Y \sim \operatorname{Cauchy}(x_0-x_1, \gamma_0+\gamma_1) *If X \sim \operatorname{Cauchy}(0,\gamma) then \tfrac{1}{X} \sim \operatorname{Cauchy}(0, \tfrac{1}{\gamma}) *
McCullagh's parametrization of the Cauchy distributions In probability theory, the "standard" Cauchy distribution is the probability distribution whose probability density function (pdf) is :f(x) = for ''x'' real. This has median 0, and first and third quartiles respectively −1 and +1. General ...
: McCullagh, P.
"Conditional inference and Cauchy models"
''
Biometrika ''Biometrika'' is a peer-reviewed scientific journal published by Oxford University Press for thBiometrika Trust The editor-in-chief is Paul Fearnhead (Lancaster University). The principal focus of this journal is theoretical statistics. It was es ...
'', volume 79 (1992), pages 247–259
PDF
from McCullagh's homepage.
Expressing a Cauchy distribution in terms of one complex parameter \psi = x_0+i\gamma, define X \sim \operatorname{Cauchy}(\psi) to mean X \sim \operatorname{Cauchy}(x_0,|\gamma|). If X \sim \operatorname{Cauchy}(\psi) then: \frac{aX+b}{cX+d} \sim \operatorname{Cauchy}\left(\frac{a\psi+b}{c\psi+d}\right) where a, b, c and d are real numbers. * Using the same convention as above, if X \sim \operatorname{Cauchy}(\psi) then: \frac{X-i}{X+i} \sim \operatorname{CCauchy}\left(\frac{\psi-i}{\psi+i}\right) where \operatorname{CCauchy} is the
circular Cauchy distribution In probability theory and directional statistics, a wrapped Cauchy distribution is a wrapped probability distribution that results from the "wrapping" of the Cauchy distribution around the unit circle. The Cauchy distribution is sometimes known as ...
.
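The algebraic rules in the list above lend themselves to a quick empirical spot-check. The following sketch is illustrative only; it uses the sample median and half the interquartile range as location and scale summaries, since moments of the Cauchy distribution are unavailable.

```python
# Illustrative spot-check of the affine and addition rules listed above.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

def loc_scale(sample):
    q25, q50, q75 = np.quantile(sample, [0.25, 0.5, 0.75])
    return q50, 0.5 * (q75 - q25)           # (approx. x0, approx. gamma)

X = 1.0 + 2.0 * rng.standard_cauchy(n)      # Cauchy(1, 2)
Y = -3.0 + 0.5 * rng.standard_cauchy(n)     # Cauchy(-3, 0.5)

print(loc_scale(4 * X + 1))   # expect roughly (5, 8):    Cauchy(4*1 + 1, 2*|4|)
print(loc_scale(X + Y))       # expect roughly (-2, 2.5): Cauchy(1 - 3, 2 + 0.5)
```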


Lévy measure

The Cauchy distribution is the
stable distribution In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stab ...
of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter \gamma is given, for X \sim \operatorname{Stable}(\gamma, 0, 0)\, by: : \operatorname{E}\left( e^{ixX} \right) = \exp\left( \int_{\mathbb{R}} (e^{ixy} - 1) \Pi_\gamma(dy) \right) where :\Pi_\gamma(dy) = \left( c_{1,\gamma} \frac{1}{|y|^{1+\gamma}} 1_{\{y>0\}} + c_{2,\gamma} \frac{1}{|y|^{1+\gamma}} 1_{\{y<0\}} \right) \, dy and c_{1,\gamma}, c_{2,\gamma} can be expressed explicitly. In the case \gamma = 1 of the Cauchy distribution, one has c_{1,\gamma} = c_{2,\gamma}. This last representation is a consequence of the formula : \pi |x| = \operatorname{p.v.}\int_{\mathbb{R}\smallsetminus\{0\}} (1 - e^{ixy}) \, \frac{dy}{y^2}


Related distributions

*\operatorname{Cauchy}(0,1) \sim \operatorname{t}(\mathrm{df}=1)\, Student's ''t'' distribution *\operatorname{Cauchy}(\mu,\sigma) \sim \operatorname{t}_{(\mathrm{df}=1)}(\mu,\sigma)\, non-standardized Student's ''t'' distribution *If X, Y \sim \operatorname{N}(0,1)\, X, Y independent, then \tfrac X Y \sim \operatorname{Cauchy}(0,1)\, *If X \sim \operatorname{U}(0,1)\, then \tan \left( \pi \left(X-\tfrac{1}{2}\right) \right) \sim \operatorname{Cauchy}(0,1)\, *If X \sim \operatorname{Log-Cauchy}(0, 1) then \ln(X) \sim \operatorname{Cauchy}(0, 1) *If X \sim \operatorname{Cauchy}(x_0,\gamma) then \tfrac1X \sim \operatorname{Cauchy}\left(\tfrac{x_0}{x_0^2+\gamma^2},\tfrac{\gamma}{x_0^2+\gamma^2}\right) *The Cauchy distribution is a limiting case of a
Pearson distribution The Pearson distribution is a family of continuous probability distribution, continuous probability distributions. It was first published by Karl Pearson in 1895 and subsequently extended by him in 1901 and 1916 in a series of articles on biostat ...
of type 4 *The Cauchy distribution is a special case of a
Pearson distribution The Pearson distribution is a family of continuous probability distribution, continuous probability distributions. It was first published by Karl Pearson in 1895 and subsequently extended by him in 1901 and 1916 in a series of articles on biostat ...
of type 7. *The Cauchy distribution is a
stable distribution In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stab ...
: if X \sim \operatorname{Stable}(1, 0, \gamma, \mu), then X \sim \operatorname{Cauchy}(\mu, \gamma). *The Cauchy distribution is a singular limit of a
hyperbolic distribution The hyperbolic distribution is a continuous probability distribution characterized by the logarithm of the probability density function being a hyperbola. Thus the distribution decreases exponentially, which is more slowly than the normal distribu ...
*The
wrapped Cauchy distribution In probability theory and directional statistics, a wrapped Cauchy distribution is a wrapped probability distribution that results from the "wrapping" of the Cauchy distribution around the unit circle. The Cauchy distribution is sometimes known a ...
, taking values on a circle, is derived from the Cauchy distribution by wrapping it around the circle. *If X \sim \operatorname{N}(0,1), Z \sim \operatorname{Inverse-Gamma}(1/2, s^2/2), then Y = \mu + X \sqrt{Z} \sim \operatorname{Cauchy}(\mu,s). For half-Cauchy distributions, the relation holds by setting X \sim \operatorname{N}(0,1) I\{X \ge 0\}.
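Two of the relations listed above are easy to check by simulation: the ratio of two independent standard normals and the tangent transform of a uniform variate both behave like a standard Cauchy, whose quartiles are -1, 0 and +1. A minimal, purely illustrative sketch:

```python
# Illustrative sanity check of two relations above against the standard
# Cauchy quartiles (-1, 0, +1).
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

ratio = rng.standard_normal(n) / rng.standard_normal(n)    # N(0,1) / N(0,1)
tan_uniform = np.tan(np.pi * (rng.uniform(size=n) - 0.5))  # tan(pi*(U - 1/2))

for sample in (ratio, tan_uniform):
    print(np.quantile(sample, [0.25, 0.5, 0.75]))
```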


Relativistic Breit–Wigner distribution

In
nuclear Nuclear may refer to: Physics Relating to the nucleus of the atom: * Nuclear engineering *Nuclear physics *Nuclear power *Nuclear reactor *Nuclear weapon *Nuclear medicine *Radiation therapy *Nuclear warfare Mathematics *Nuclear space *Nuclear ...
and
particle physics Particle physics or high energy physics is the study of fundamental particles and forces that constitute matter and radiation. The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) an ...
, the energy profile of a
resonance Resonance describes the phenomenon of increased amplitude that occurs when the frequency of an applied periodic force (or a Fourier component of it) is equal or close to a natural frequency of the system on which it acts. When an oscillatin ...
is described by the
relativistic Breit–Wigner distribution The relativistic Breit–Wigner distribution (after the 1936 nuclear resonance formula of Gregory Breit and Eugene Wigner) is a continuous probability distribution with the following probability density function, SePythia 6.4 Physics and Manual(pa ...
, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.


Occurrence and applications

*In
spectroscopy Spectroscopy is the field of study that measures and interprets the electromagnetic spectra that result from the interaction between electromagnetic radiation and matter as a function of the wavelength or frequency of the radiation. Matter wa ...
, the Cauchy distribution describes the shape of
spectral line A spectral line is a dark or bright line in an otherwise uniform and continuous spectrum, resulting from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to iden ...
s which are subject to
homogeneous broadening Homogeneous broadening is a type of emission spectrum broadening in which all atoms radiating from a specific level under consideration radiate with equal opportunity. If an optical emitter (e.g. an atom) shows homogeneous broadening, its spectra ...
in which all atoms interact in the same way with the frequency range contained in the line shape. Many mechanisms cause homogeneous broadening, most notably collision broadening. Lifetime or natural broadening also gives rise to a line shape described by the Cauchy distribution. *Applications of the Cauchy distribution or its transformation can be found in fields working with exponential growth. A 1958 paper by White derived the test statistic for estimators of \hat\beta for the equation x_{t+1}=\beta x_t+\varepsilon_{t+1}, \beta>1, where the maximum likelihood estimator is found using ordinary least squares, and showed that the sampling distribution of the statistic is the Cauchy distribution. *The Cauchy distribution is often the distribution of observations for objects that are spinning. The classic reference for this is Gull's lighthouse problem; it also appears, as in the section above, as the Breit–Wigner distribution in particle physics. *In
hydrology Hydrology () is the scientific study of the movement, distribution, and management of water on Earth and other planets, including the water cycle, water resources, and environmental watershed sustainability. A practitioner of hydrology is calle ...
the Cauchy distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges. The blue picture illustrates an example of fitting the Cauchy distribution to ranked monthly maximum one-day rainfalls showing also the 90%
confidence belt In frequentist statistics, a confidence interval (CI) is a range of estimates for an unknown parameter. A confidence interval is computed at a designated ''confidence level''; the 95% confidence level is most common, but other levels, such as 9 ...
based on the
binomial distribution In probability theory and statistics, the binomial distribution with parameters ''n'' and ''p'' is the discrete probability distribution of the number of successes in a sequence of ''n'' independent experiments, each asking a yes–no quest ...
. The rainfall data are represented by
plotting position Plot or Plotting may refer to: Art, media and entertainment * Plot (narrative), the story of a piece of fiction Music * ''The Plot'' (album), a 1976 album by jazz trumpeter Enrico Rava * The Plot (band), a band formed in 2003 Other * ''Plot'' ...
s as part of the
cumulative frequency analysis Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The phenomenon may be time- or space-dependent. Cumulative frequency is also called ''frequency of non-exceedance ...
. *The expression for imaginary part of complex
electrical permittivity In electromagnetism, the absolute permittivity, often simply called permittivity and denoted by the Greek letter ''ε'' (epsilon), is a measure of the electric polarizability of a dielectric. A material with high permittivity polarizes more in r ...
according to the Lorentz model is a Cauchy (Lorentzian) distribution. *As an additional distribution to model fat tails in computational finance, Cauchy distributions can be used to model VaR (
value at risk Value at risk (VaR) is a measure of the risk of loss for investments. It estimates how much a set of investments might lose (with a given probability), given normal market conditions, in a set time period such as a day. VaR is typically used by ...
) producing a much larger probability of extreme risk than the Gaussian distribution. Tong Liu (2012), An intermediate distribution between Gaussian and Cauchy distributions. https://arxiv.org/pdf/1208.5109.pdf


See also

*
Lévy flight A Lévy flight is a random walk in which the step-lengths have a Lévy distribution, a probability distribution that is heavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random direct ...
and
Lévy process In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which disp ...
*
Laplace distribution In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponen ...
, the Fourier transform of the Cauchy distribution *
Cauchy process In probability theory, a Cauchy process is a type of stochastic process. There are symmetric and asymmetric forms of the Cauchy process. The unspecified term "Cauchy process" is often used to refer to the symmetric Cauchy process. The Cauchy ...
*
Stable process In probability theory, a stable process is a type of stochastic process. It includes stochastic processes whose associated probability distributions are stable distributions. Examples of stable processes include the Wiener process, or Brownian mo ...
*
Slash distribution In probability theory, the slash distribution is the probability distribution of a standard normal variate divided by an independent standard uniform variate. In other words, if the random variable ''Z'' has a normal distribution with zero mean an ...


References


External links

*
Earliest Uses: The entry on Cauchy distribution has some historical information.
* Ratios of Normal Variables by George Marsaglia
{{DEFAULTSORT:Cauchy Distribution Augustin-Louis Cauchy Continuous distributions Probability distributions with non-finite variance Power laws Stable distributions Location-scale family probability distributions

