A ratio distribution (also known as a quotient distribution) is a


In the absence of correlation, the probability density function of the ratio ''Z'' = ''X''/''Y'' of the two normal variables ''X'' = ''N''(''μ''_{X}, ''σ''_{X}^{2}) and ''Y'' = ''N''(''μ''_{Y}, ''σ''_{Y}^{2}) is given exactly by the following expression, derived in several sources:
: $p_Z(z)= \frac{b(z)\, d(z)}{a^3(z)} \frac{1}{\sqrt{2\pi}\, \sigma_X \sigma_Y} \left[\Phi \left( \frac{b(z)}{a(z)}\right) - \Phi \left(-\frac{b(z)}{a(z)}\right) \right] + \frac{1}{a^2(z)\, \pi\, \sigma_X \sigma_Y} e^{-c/2}$
where
: $a(z)= \sqrt{\frac{z^2}{\sigma_X^2} + \frac{1}{\sigma_Y^2}}$
: $b(z)= \frac{\mu_X}{\sigma_X^2} z + \frac{\mu_Y}{\sigma_Y^2}$
: $c = \frac{\mu_X^2}{\sigma_X^2} + \frac{\mu_Y^2}{\sigma_Y^2}$
: $d(z) = e^{\frac{b^2(z) - c\, a^2(z)}{2 a^2(z)}}$
and $\Phi$ is the normal cumulative distribution function:
: $\Phi(t)= \int_{-\infty}^{t} \frac{1}{\sqrt{2\pi}} e^{-u^2/2}\, du \, .$
Under certain conditions, a normal approximation is possible, with variance:
:$\sigma_z^2 = \frac{\mu_X^2}{\mu_Y^2} \left(\frac{\sigma_X^2}{\mu_X^2} + \frac{\sigma_Y^2}{\mu_Y^2}\right)$
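As a sketch of how the exact expression can be used in practice, the following Python snippet (an illustration, not taken from the cited sources; the parameter values are arbitrary assumptions) implements $p_Z(z)$ with the helper functions above and checks numerically that it integrates to 1, and that it reduces to the standard Cauchy density in the central case.

```python
import math

def phi(t):
    # standard normal CDF, computed via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ratio_pdf(z, mx, my, sx, sy):
    # Hinkley's exact pdf of Z = X/Y for independent X ~ N(mx, sx^2), Y ~ N(my, sy^2)
    a = math.sqrt(z * z / sx**2 + 1.0 / sy**2)
    b = mx * z / sx**2 + my / sy**2
    c = mx**2 / sx**2 + my**2 / sy**2
    d = math.exp((b * b - c * a * a) / (2.0 * a * a))
    term1 = b * d / a**3 / (math.sqrt(2.0 * math.pi) * sx * sy) * (phi(b / a) - phi(-b / a))
    term2 = math.exp(-c / 2.0) / (a * a * math.pi * sx * sy)
    return term1 + term2

# the pdf should integrate to (almost) 1 over a wide interval
mx, my, sx, sy = 2.0, 4.0, 1.0, 1.0   # arbitrary example parameters
step = 0.001
total = sum(ratio_pdf(-20.0 + step * i, mx, my, sx, sy) * step for i in range(40000))
print(round(total, 2))  # close to 1; the heavy tails carry a little mass beyond |z| = 20
```

With zero means the expression collapses to $1/(\pi(1+z^2))$, which the second assertion below exercises.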

Cauchy distribution

with median equal to zero and shape factor $a$
: $p_X(x, a) = \frac{a}{\pi (a^2 + x^2)}$
then the ratio distribution for the random variable $Z\; =\; X/Y$ is
: $p_Z(z, a) = \frac{1}{\pi^2(z^2-1)} \ln(z^2).$
This distribution does not depend on $a$ and the result stated by Springer (p158 Question 4.6) is not correct.
The ratio distribution is similar to but not the same as the product distribution of the random variable $W=XY$:
: $p_W(w, a) = \frac{a^2}{\pi^2(w^2-a^4)} \ln \left(\frac{w^2}{a^4}\right).$
More generally, if two independent random variables ''X'' and ''Y'' each follow a Cauchy distribution

with median equal to zero and shape factor $a$ and $b$ respectively, then:
# The ratio distribution for the random variable $Z = X/Y$ is $$p_Z(z, a,b) = \frac{ab}{\pi^2(b^2z^2 - a^2)} \ln \left(\frac{b^2z^2}{a^2}\right).$$
# The product distribution for the random variable $W = XY$ is $$p_W(w, a,b) = \frac{ab}{\pi^2(w^2 - a^2b^2)} \ln \left(\frac{w^2}{a^2b^2}\right).$$
The result for the ratio distribution can be obtained from the product distribution by replacing $b$ with $\frac{1}{b}.$
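These closed forms are easy to sanity-check numerically. The sketch below (an illustration; the values of $a$ and $b$ are arbitrary) integrates the two-parameter ratio pdf over $[-a/b, a/b]$, where by the symmetry of a ratio of standard Cauchy samples the mass should be exactly 1/2, and cross-checks this against simulated Cauchy draws.

```python
import math, random

def cauchy_ratio_pdf(z, a, b):
    # pdf of Z = X/Y for independent X ~ Cauchy(0, a), Y ~ Cauchy(0, b)
    u = b * b * z * z
    if abs(u - a * a) < 1e-12:
        return a * b / (math.pi**2 * a * a)  # removable singularity at |z| = a/b
    return a * b / (math.pi**2 * (u - a * a)) * math.log(u / (a * a))

a, b = 1.0, 2.0
# midpoint rule over [-a/b, a/b]; this integral should come out as 1/2
n = 100000
h = 2.0 * (a / b) / n
integral = sum(cauchy_ratio_pdf(-a / b + (i + 0.5) * h, a, b) for i in range(n)) * h

# Monte Carlo cross-check, Cauchy samples generated via tan(pi * (U - 1/2))
random.seed(0)
def cauchy(scale):
    return scale * math.tan(math.pi * (random.random() - 0.5))
m = 100000
hits = sum(abs(cauchy(a) / cauchy(b)) <= a / b for _ in range(m)) / m
print(round(integral, 2), round(hits, 2))  # both should be near 0.5
```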

If ''U'' and ''V'' are independent Gamma variables with shape parameters ''α''_{1} and ''α''_{2} and their scale parameters both set to unity, that is, $U \sim \Gamma (\alpha_1, 1), V \sim \Gamma(\alpha_2, 1)$, where $\Gamma(x; \alpha, 1) = \frac{x^{\alpha-1} e^{-x}}{\Gamma(\alpha)}$, then
: $\frac{U}{U+V} \sim \beta(\alpha_1, \alpha_2), \qquad \text{mean} = \frac{\alpha_1}{\alpha_1 + \alpha_2}$
: $\frac{U}{V} \sim \beta'(\alpha_1,\alpha_2), \qquad \qquad \text{mean} = \frac{\alpha_1}{\alpha_2 - 1}, \; \alpha_2 > 1$
: $\frac{V}{U} \sim \beta'(\alpha_2, \alpha_1), \qquad \qquad \text{mean} = \frac{\alpha_2}{\alpha_1 - 1}, \; \alpha_1 > 1$
If $U \sim \Gamma (x;\alpha,1)$, then $\theta U \sim \Gamma (x;\alpha,\theta) = \frac{x^{\alpha-1} e^{-x/\theta}}{\theta^{\alpha}\, \Gamma(\alpha)}$. Note that here ''θ'' is a scale parameter, rather than a rate parameter.
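A quick simulation sketch (the shape parameters are arbitrary illustration values) confirms the beta and beta-prime mean formulas above:

```python
import random

random.seed(1)
a1, a2 = 2.0, 5.0
n = 200000
s_beta = s_bp = 0.0
for _ in range(n):
    u = random.gammavariate(a1, 1.0)  # U ~ Gamma(alpha_1, 1)
    v = random.gammavariate(a2, 1.0)  # V ~ Gamma(alpha_2, 1)
    s_beta += u / (u + v)             # a Beta(alpha_1, alpha_2) sample
    s_bp += u / v                     # a Beta'(alpha_1, alpha_2) sample
print(round(s_beta / n, 2))  # ≈ alpha_1/(alpha_1 + alpha_2) = 2/7
print(round(s_bp / n, 2))    # ≈ alpha_1/(alpha_2 - 1) = 0.5
```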

then $Y = \frac{U}{U+V}$ is distributed as $f_Y(Y, \varphi)$ with $\varphi = \frac{\theta_2}{\theta_1}.$ The distribution of ''Y'' is limited here to the interval [0, 1]. It can be generalized by scaling such that if $Y \sim f_Y(Y,\varphi)$ then
: $\Theta Y \sim f_Y( Y,\varphi, \Theta)$
where
: $f_Y( Y,\varphi, \Theta) = \frac{1}{\Theta}\, \beta \left(\frac{Y}{\Theta} , \alpha_1, \alpha_2 \right), \;\;\; 0 \le Y \le \Theta$
: $\Theta Y$ is then a sample from $\frac{\Theta U}{U + V}$

Let ''X'' ~ Binomial(''n'', ''p''_{1}) and ''Y'' ~ Binomial(''m'', ''p''_{2}), with ''X'', ''Y'' independent. Let ''T'' = (''X''/''n'')/(''Y''/''m'').
Then log(''T'') is approximately normally distributed with mean log(''p''_{1}/''p''_{2}) and variance ((1/''p''_{1}) − 1)/''n'' + ((1/''p''_{2}) − 1)/''m''.
The binomial ratio distribution is of significance in clinical trials: if the distribution of ''T'' is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e., the chance of a false-positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.
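The quality of this log-normal approximation is easy to probe by simulation; the sketch below (the trial counts and success rates are arbitrary assumptions) compares the empirical mean and variance of log(''T'') with the stated formulas.

```python
import math, random

random.seed(2)
n, m = 200, 200          # numbers of trials (arbitrary)
p1, p2 = 0.3, 0.2        # success probabilities (arbitrary)
trials = 20000
logs = []
for _ in range(trials):
    x = sum(random.random() < p1 for _ in range(n))  # X ~ Binomial(n, p1)
    y = sum(random.random() < p2 for _ in range(m))  # Y ~ Binomial(m, p2)
    if x and y:                                      # log undefined at zero counts
        logs.append(math.log((x / n) / (y / m)))
mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
print(round(mean, 2))  # ≈ log(p1/p2) ≈ 0.41
print(round(var, 2))   # ≈ (1/p1 - 1)/n + (1/p2 - 1)/m ≈ 0.03
```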


probability distribution constructed as the distribution of the ratio of random variables having two other known distributions.
Given two (usually independent) random variables ''X'' and ''Y'', the distribution of the random variable ''Z'' that is formed as the ratio ''Z'' = ''X''/''Y'' is a ''ratio distribution''.
An example is the Cauchy distribution (also called the ''normal ratio distribution''), which comes about as the ratio of two normally distributed variables with zero mean.
Two other distributions often used in test-statistics are also ratio distributions:
the ''t''-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable,
while the ''F''-distribution originates from the ratio of two independent chi-squared distributed random variables.
More general ratio distributions have been considered in the literature.
Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test.
A method based on the median has been suggested as a "work-around".
Algebra of random variables

The ratio is one type of algebra for random variables: Related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's book from 1979 ''The Algebra of Random Variables''. The algebraic rules known with ordinary numbers do not apply for the algebra of random variables. For example, if a product is ''C = AB'' and a ratio is ''D = C/A'' it does not necessarily mean that the distributions of ''D'' and ''B'' are the same. Indeed, a peculiar effect is seen for the Cauchy distribution: The product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) will give the same distribution.
This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: Consider two Cauchy random variables, $C_1$ and $C_2$, each constructed from two Gaussian distributions, $C_1 = G_1/G_2$ and $C_2 = G_3/G_4$; then
: $\frac{C_1}{C_2} = \frac{G_1/G_2}{G_3/G_4} = \frac{G_1 G_4}{G_2 G_3} = \frac{G_1}{G_2} \times \frac{G_4}{G_3} = C_1 \times C_3,$
where $C_3 = G_4/G_3$. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions.
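The claimed equality of the product and ratio distributions can be checked by simulation. The sketch below is illustrative: it compares the probability mass inside [−1, 1], which is 1/2 for this common distribution because ''Z'' and 1/''Z'' are identically distributed.

```python
import math, random

random.seed(3)
def std_cauchy():
    # standard Cauchy sample via the tangent of a uniform angle
    u = 0.0
    while u == 0.0:
        u = random.random() - 0.5
    return math.tan(math.pi * u)

n = 100000
prod = sum(abs(std_cauchy() * std_cauchy()) <= 1.0 for _ in range(n)) / n
ratio = sum(abs(std_cauchy() / std_cauchy()) <= 1.0 for _ in range(n)) / n
print(round(prod, 2), round(ratio, 2))  # both ≈ 0.5
```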
Derivation

A way of deriving the ratio distribution of $Z = X/Y$ from the joint distribution of the two other random variables ''X'', ''Y'', with joint pdf $p_{X,Y}(x,y)$, is by integration of the following form
: $p_Z(z) = \int_{-\infty}^{+\infty} |y| \, p_{X,Y}(zy, y) \, dy.$
If the two variables are independent then $p_{X,Y}(x,y) = p_X(x) p_Y(y)$ and this becomes
: $p_Z(z) = \int_{-\infty}^{+\infty} |y| \, p_X(zy) p_Y(y) \, dy.$
This may not be straightforward. By way of example take the classical problem of the ratio of two standard Gaussian samples. The joint pdf is
:$p_{X,Y}(x,y) = \frac{1}{2\pi} \exp\left(-\frac{x^2}{2} \right) \exp \left(-\frac{y^2}{2} \right)$
Defining $Z = X/Y$ we have
:$\begin{align} p_Z(z) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \, |y| \, \exp\left(-\frac{(zy)^2}{2} \right) \, \exp\left(-\frac{y^2}{2} \right) \, dy \\ &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \, |y| \, \exp\left(-\frac{y^2 (z^2+1)}{2} \right) \, dy \end{align}$
Using the known definite integral $\int_0^{\infty} \, x \, \exp\left(-cx^2 \right) \, dx = \frac{1}{2c}$ we get
:$p_Z(z) = \frac{1}{\pi (z^2+1)}$
which is the Cauchy distribution, or Student's ''t'' distribution with ''n'' = 1.
The Mellin transform has also been suggested for derivation of ratio distributions.
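The defining integral can also be evaluated numerically; the sketch below (a simple trapezoidal rule, an illustration only) applies it to the Gaussian example and compares the result with the Cauchy density it is supposed to produce.

```python
import math

def ratio_pdf_numeric(z, px, py, lo=-10.0, hi=10.0, n=20001):
    # trapezoidal approximation of p_Z(z) = integral of |y| p_X(z*y) p_Y(y) dy
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        y = lo + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * abs(y) * px(z * y) * py(y)
    return total * h

std_normal = lambda t: math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
for z in (0.0, 0.5, 2.0):
    print(round(ratio_pdf_numeric(z, std_normal, std_normal), 4),
          round(1.0 / (math.pi * (1.0 + z * z)), 4))  # numeric vs exact Cauchy
```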
In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution $f_{x,y}(x,y)=f_x(x)f_y(y)$ which has support in the positive quadrant $x,y > 0$ and we wish to find the pdf of the ratio $R = X/Y$. The hatched volume above the line $y = x/R$ represents the cumulative distribution of the function $f_{x,y}(x,y)$ multiplied with the logical function $X/Y \le R$. The density is first integrated in horizontal strips; the horizontal strip at height ''y'' extends from ''x'' = 0 to ''x'' = ''Ry'' and has incremental probability $f_y(y)\,dy \int_0^{Ry} f_x(x) \,dx$.
Secondly, integrating the horizontal strips upward over all ''y'' yields the volume of probability above the line
:$F_R(R) = \int_0^\infty f_y(y) \left(\int_0^{Ry} f_x(x)\,dx \right) dy$
Finally, differentiate $F_R(R)$ with respect to $R$ to get the pdf $f_R(R)$:
:$f_R(R) = \frac{d}{dR} \left[ \int_0^\infty f_y(y) \left(\int_0^{Ry} f_x(x)\,dx \right) dy \right]$
Move the differentiation inside the integral:
:$f_R(R) = \int_0^\infty f_y(y) \left(\frac{d}{dR} \int_0^{Ry} f_x(x)\,dx \right) dy$
and since
:$\frac{d}{dR} \int_0^{Ry} f_x(x)\,dx = y f_x(Ry)$
then
:$f_R(R) = \int_0^\infty f_y(y) \; f_x(Ry) \; y \; dy$
As an example, find the pdf of the ratio ''R'' when
: $f_x(x) = \alpha e^{-\alpha x}, \;\;\;\; f_y(y) = \beta e^{-\beta y}, \;\;\; x,y \ge 0$
We have
:$\int_0^{Ry} f_x(x)\,dx = - e^{-\alpha x} \Big\vert_0^{Ry} = 1 - e^{-\alpha R y}$
thus
:$\begin{align} F_R(R) &= \int_0^\infty f_y(y) \left( 1 - e^{-\alpha R y} \right) dy = \int_0^\infty \beta e^{-\beta y} \left( 1 - e^{-\alpha R y} \right) dy \\ & = 1 - \frac{\beta}{\beta + \alpha R} \\ & = \frac{\alpha R}{\beta + \alpha R} \end{align}$
Differentiation with respect to ''R'' yields the pdf of ''R''
:$f_R(R) = \frac{d}{dR} \left( \frac{\alpha R}{\beta + \alpha R} \right) = \frac{\alpha \beta}{(\beta + \alpha R)^2}$
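The closed form for $F_R(R)$ can be cross-checked against simulated exponential samples; the rates in the sketch below are arbitrary illustration values.

```python
import random

random.seed(4)
alpha, beta = 2.0, 3.0   # rates of f_x and f_y (arbitrary)
n = 200000
samples = [random.expovariate(alpha) / random.expovariate(beta) for _ in range(n)]
for r in (0.5, 1.0, 3.0):
    empirical = sum(s <= r for s in samples) / n
    exact = alpha * r / (beta + alpha * r)   # F_R(R) from the derivation above
    print(round(empirical, 2), round(exact, 2))
```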

Moments of random ratios

From Mellin transform theory, for distributions existing only on the positive half-line $x \ge 0$, we have the product identity $\operatorname{E}\left[(UV)^p\right] = \operatorname{E}\left[U^p\right] \operatorname{E}\left[V^p\right]$ provided $U, \; V$ are independent. For the case of a ratio of samples like $\operatorname{E}\left[(X/Y)^p\right]$, in order to make use of this identity it is necessary to use moments of the inverse distribution. Set $1/Y = Z$ such that $\operatorname{E}\left[(XZ)^p\right] = \operatorname{E}\left[X^p\right] \operatorname{E}\left[Y^{-p}\right].$

Means and variances of random ratios

In the Product distribution section, and derived from Mellin transform theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have
:$\operatorname{E}(X/Y) = \operatorname{E}(X)\operatorname{E}(1/Y)$
which, in terms of probability distributions, is equivalent to
: $\operatorname{E}(X/Y) = \int_{-\infty}^\infty x f_x(x) \, dx \times \int_{-\infty}^\infty y^{-1} f_y(y) \, dy$
Note that $\operatorname{E}(1/Y) \neq \frac{1}{\operatorname{E}(Y)}$ i.e., $\int_{-\infty}^\infty y^{-1} f_y(y) \, dy \ne \frac{1}{\int_{-\infty}^\infty y f_y(y) \, dy}$
The variance of a ratio of independent variables is
:$\begin{align} \operatorname{Var}(X/Y) & = \operatorname{E}\left(\left[X/Y\right]^2\right) - \operatorname{E}^2\left(X/Y\right) \\ & = \operatorname{E}\left(X^2\right) \operatorname{E}\left(\frac{1}{Y^2}\right) - \operatorname{E}^2\left(X\right) \operatorname{E}^2\left(\frac{1}{Y}\right) \end{align}$
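For positive variables these moment identities are easy to check by simulation. The sketch below uses Gamma variables (arbitrary illustration shapes), for which $\operatorname{E}[1/Y] = 1/(\alpha - 1)$ is known in closed form.

```python
import random

random.seed(5)
n = 300000
sx = sinv = sratio = 0.0
for _ in range(n):
    x = random.gammavariate(2.0, 1.0)  # E[X] = 2
    y = random.gammavariate(3.0, 1.0)  # E[1/Y] = 1/(3-1) = 0.5, but 1/E[Y] = 1/3
    sx += x
    sinv += 1.0 / y
    sratio += x / y
print(round(sratio / n, 2), round(sx / n * (sinv / n), 2))  # E(X/Y) = E(X) E(1/Y) ≈ 1.0
print(round(sinv / n, 2))  # E(1/Y) ≈ 0.5, clearly not 1/E(Y) ≈ 0.33
```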
Normal ratio distributions

Uncorrelated central normal ratio

When ''X'' and ''Y'' are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution.
This can be derived by setting $Z = X/Y = \tan \theta$ then showing that $\theta$ has circular symmetry. For a bivariate uncorrelated Gaussian distribution we have
: $\begin{align} p(x,y) &= \tfrac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} \times \tfrac{1}{\sqrt{2\pi}} e^{-\frac{y^2}{2}} \\ &= \tfrac{1}{2\pi} e^{-\frac{x^2+y^2}{2}} \\ & = \tfrac{1}{2\pi} e^{-\frac{r^2}{2}} \text{ with } r^2 = x^2 + y^2 \end{align}$
If $p(x,y)$ is a function only of ''r'' then $\theta$ is uniformly distributed on $[0, 2\pi]$ with density $1/(2\pi)$ so the problem reduces to finding the probability distribution of ''Z'' under the mapping
: $Z = X/Y = \tan \theta$
We have, by conservation of probability
: $p_z(z) \, |dz| = p_\theta(\theta) \, |d\theta|$
and since $dz/d\theta = 1/\cos^2 \theta$
: $p_z(z) = \frac{p_\theta(\theta)}{|dz/d\theta|} = \tfrac{1}{2\pi} \cos^2 \theta$
and setting $\cos^2 \theta = \frac{1}{1+\tan^2 \theta} = \frac{1}{1+z^2}$ we get
: $p_z(z) = \frac{1/(2\pi)}{1+z^2}$
There is a spurious factor of 2 here. Actually, two values of $\theta$ spaced by $\pi$ map onto the same value of ''z'', the density is doubled, and the final result is
: $p_z(z) = \frac{1}{\pi (1+z^2)} , \;\; -\infty < z < \infty$
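A short simulation sketch confirms that the ratio of two zero-mean Gaussians follows this Cauchy law, here through the CDF value $1/2 + \arctan(1)/\pi = 3/4$ at ''z'' = 1:

```python
import random

random.seed(6)
n = 200000
below = sum(random.gauss(0.0, 1.0) / random.gauss(0.0, 1.0) <= 1.0
            for _ in range(n)) / n
print(round(below, 2))  # the standard Cauchy CDF at z = 1 is 0.75
```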
When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given in the succinct form presented by David Hinkley. The trigonometric method for a ratio does however extend to radial distributions like bivariate normals or a bivariate Student ''t'' in which the density depends only on radius $r = \sqrt{x^2 + y^2}$. It does not extend to the ratio of two independent Student ''t'' distributions, which give the Cauchy ratio shown in the Cauchy ratio distribution section for one degree of freedom.
Uncorrelated noncentral normal ratio

In the absence of correlation $(\operatorname{cov}(X,Y)=0)$, the probability density function of the two normal variables ''X'' = ''N''(''μ''_{X}, ''σ''_{X}^{2}) and ''Y'' = ''N''(''μ''_{Y}, ''σ''_{Y}^{2}) ratio ''Z'' = ''X''/''Y'' is given exactly by the Hinkley expression shown above.

Correlated central normal ratio

The above expression becomes more complicated when the variables ''X'' and ''Y'' are correlated. If $\mu_x = \mu_y = 0$ but $\sigma_X \neq \sigma_Y$ and $\rho \neq 0$ the more general Cauchy distribution is obtained
: $p_Z(z) = \frac{1}{\pi} \frac{\beta}{(z-\alpha)^2 + \beta^2},$
where ''ρ'' is the correlation coefficient between ''X'' and ''Y'' and
: $\alpha = \rho \frac{\sigma_X}{\sigma_Y},$
: $\beta = \frac{\sigma_X}{\sigma_Y} \sqrt{1-\rho^2}.$
The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.
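The following sketch (illustrative parameters) simulates a correlated central normal pair and checks two quantiles of ''Z'' against the Cauchy(''α'', ''β'') form above: the CDF should be 1/2 at the location parameter and 3/4 one scale unit above it.

```python
import math, random

random.seed(7)
sx, sy, rho = 1.0, 2.0, 0.5       # arbitrary example parameters
alpha = rho * sx / sy              # location of the resulting Cauchy
beta = (sx / sy) * math.sqrt(1.0 - rho * rho)  # scale of the resulting Cauchy
n = 200000
q50 = q75 = 0
for _ in range(n):
    g1, g2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    x = sx * g1                                            # X ~ N(0, sx^2)
    y = sy * (rho * g1 + math.sqrt(1.0 - rho * rho) * g2)  # corr(X, Y) = rho
    z = x / y
    q50 += z <= alpha          # Cauchy CDF at the location parameter: 1/2
    q75 += z <= alpha + beta   # Cauchy CDF one scale above it: 3/4
print(round(q50 / n, 2), round(q75 / n, 2))  # ≈ 0.5 and ≈ 0.75
```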
Correlated noncentral normal ratio

Approximations to correlated noncentral normal ratio

A transformation to the log domain was suggested by Katz (1978) (see binomial section below). Let the ratio be
:$T \sim \frac{\mu_x + \mathbb{N}\left(0, \sigma_x^2\right)}{\mu_y + \mathbb{N}\left(0, \sigma_y^2\right)} = \frac{\mu_x}{\mu_y} \frac{1 + \mathbb{N}\left(0, \sigma_x^2\right)/\mu_x}{1 + \mathbb{N}\left(0, \sigma_y^2\right)/\mu_y}.$
Take logs to get
:$\log_e(T) = \log_e \left(\frac{\mu_x}{\mu_y} \right) + \log_e \left( 1 + \frac{\mathbb{N}\left(0, \sigma_x^2\right)}{\mu_x} \right) - \log_e \left( 1 + \frac{\mathbb{N}\left(0, \sigma_y^2\right)}{\mu_y} \right).$
Since $\log_e(1+\delta) = \delta - \frac{\delta^2}{2} + \frac{\delta^3}{3} + \cdots$ then asymptotically
:$\log_e(T) \approx \log_e \left(\frac{\mu_x}{\mu_y} \right) + \frac{\mathbb{N}\left(0, \sigma_x^2\right)}{\mu_x} - \frac{\mathbb{N}\left(0, \sigma_y^2\right)}{\mu_y} \sim \log_e \left(\frac{\mu_x}{\mu_y} \right) + \mathbb{N} \left( 0, \frac{\sigma_x^2}{\mu_x^2} + \frac{\sigma_y^2}{\mu_y^2} \right).$
Alternatively, Geary (1930) suggested that
: $t \approx \frac{\mu_y z - \mu_x}{\sqrt{\sigma_y^2 z^2 - 2\rho \sigma_x \sigma_y z + \sigma_x^2}}$
has approximately a standard Gaussian distribution: This transformation has been called the ''Geary–Hinkley transformation''; the approximation is good if ''Y'' is unlikely to assume negative values, basically $\mu_y > 3\sigma_y$.

Exact correlated noncentral normal ratio

Geary showed how the correlated ratio $z$ could be transformed into a near-Gaussian form and developed an approximation for $t$ dependent on the probability of negative denominator values $y+\mu_y<0$ being vanishingly small. Fieller's later correlated ratio analysis is exact but care is needed when used with modern math packages and similar problems may occur in some of Marsaglia's equations. Pham-Gia has exhaustively discussed these methods. Hinkley's correlated results are exact but it is shown below that the correlated ratio condition can be transformed simply into an uncorrelated one so only the simplified Hinkley equations above are required, not the full correlated ratio version. Let the ratio be:
:$z=\frac{x+\mu_x}{y+\mu_y}$
in which $x, y$ are zero-mean correlated normal variables with variances $\sigma_x^2, \sigma_y^2$ and $X, Y$ have means $\mu_x, \mu_y.$ Write $x'=x-\rho y \sigma_x/\sigma_y$ such that $x', y$ become uncorrelated and $x'$ has standard deviation
:$\sigma_x' = \sigma_x \sqrt{1-\rho^2} .$
The ratio:
:$z=\frac{x'+\rho y \sigma_x/\sigma_y+\mu_x}{y+\mu_y}$
is invariant under this transformation and retains the same pdf. The $y$ term in the numerator is made separable by expanding:
:$x'+\rho y \sigma_x/\sigma_y+\mu_x = x'+\mu_x -\rho \mu_y \frac{\sigma_x}{\sigma_y} + \rho (y+\mu_y)\frac{\sigma_x}{\sigma_y}$
to get
:$z=\frac{x'+\mu_x'}{y+\mu_y} + \rho \frac{\sigma_x}{\sigma_y}$
in which $\mu_x'=\mu_x - \rho \mu_y \frac{\sigma_x}{\sigma_y}$ and ''z'' has now become a ratio of uncorrelated non-central normal samples with an invariant ''z''-offset.
Finally, to be explicit, the pdf of the ratio $z$ for correlated variables is found by inputting the modified parameters $\sigma_x', \mu_x', \sigma_y, \mu_y$ and $\rho'=0$ into the Hinkley equation above which returns the pdf for the correlated ratio with a constant offset $-\rho \frac{\sigma_x}{\sigma_y}$ on $z$. The figures above show an example of a positively correlated ratio with $\sigma_x= \sigma_y=1, \mu_x=0, \mu_y=0.5, \rho = 0.975$ in which the shaded wedges represent the increment of area selected by given ratio $x/y \in [r, r + \delta]$ which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is easily understood that for a ratio $z=x/y=1$ the wedge almost bypasses the distribution mass altogether and this coincides with a near-zero region in the theoretical pdf. Conversely as $x/y$ reduces toward zero the line collects a higher probability. This transformation will be recognized as being the same as that used by Geary (1932) as a partial result in his ''eqn viii'' but whose derivation and limitations were hardly explained. Thus the first part of Geary's transformation to approximate Gaussianity in the previous section is actually exact and not dependent on the positivity of ''Y''. The offset result is also consistent with the "Cauchy" correlated zero-mean Gaussian ratio distribution in the first section. Marsaglia has applied the same result but using a nonlinear method to achieve it.

Complex normal ratio

The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al. The joint distribution of ''x'', ''y'' is
:$f_{x,y}(x,y) = \frac{1}{\pi^2 |\Sigma|} \exp \left( - \begin{pmatrix} x \\ y \end{pmatrix}^H \Sigma^{-1} \begin{pmatrix} x \\ y \end{pmatrix} \right)$
where
:$\Sigma = \begin{pmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho^* \sigma_x \sigma_y & \sigma_y^2 \end{pmatrix}, \;\; x=x_r+ix_i, \;\; y=y_r+iy_i$
$(\cdot)^H$ is an Hermitian transpose and
:$\rho = \rho_r +i \rho_i = \operatorname{E} \bigg(\frac{xy^*}{\sigma_x \sigma_y} \bigg) \in \mathbb{C}, \;\; \left| \rho \right| \le 1$
The PDF of $Z = X/Y$ is found to be
:$\begin{align} f_{z_r,z_i}(z_r,z_i) & = \frac{1-|\rho|^2}{\pi \sigma_x^2 \sigma_y^2} \Biggr( \frac{|z|^2}{\sigma_x^2} + \frac{1}{\sigma_y^2} - 2\frac{\rho_r z_r - \rho_i z_i}{\sigma_x \sigma_y} \Biggr)^{-2} \\ & = \frac{1-|\rho|^2}{\pi \sigma_x^2 \sigma_y^2} \Biggr( \;\; \Biggr| \frac{z}{\sigma_x} - \frac{\rho^*}{\sigma_y} \Biggr|^2 + \frac{1-|\rho|^2}{\sigma_y^2} \Biggr)^{-2} \end{align}$
In the usual event that $\sigma_x = \sigma_y$ we get
:$f_{z_r,z_i}(z_r,z_i) = \frac{1-|\rho|^2}{\pi \left( \left|z - \rho^* \right|^2 + 1 - |\rho|^2 \right)^2}$
Further closed-form results for the CDF are also given. The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of $\rho = 0.7 \exp (i \pi /4)$.
The pdf peak occurs at roughly the complex conjugate of a scaled down $\rho$.

Ratio of log-normal

The ratio of independent or correlated log-normals is log-normal. This follows, because if $X_1$ and $X_2$ are log-normally distributed, then $\ln(X_1)$ and $\ln(X_2)$ are normally distributed. If they are independent or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed.
Note, however, that $X_1$ and $X_2$ can be individually log-normally distributed without having a bivariate log-normal distribution. As of 2022-06-08 the Wikipedia article on "Copula (probability theory)" includes a density and contour plot of two Normal marginals joint with a Gumbel copula, where the joint distribution is not bivariate normal.
This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of $X_1$ and $X_2$ is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when $X_i$ is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.
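The closure of log-normals under ratios is easy to check numerically. The following sketch (parameter values are purely illustrative, not from the text) draws correlated log-normal pairs and confirms that the log of their ratio is normal with the predicted mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: ln(X1), ln(X2) are bivariate normal.
mu = np.array([1.0, 0.5])
cov = np.array([[0.25, 0.10],
                [0.10, 0.16]])
logs = rng.multivariate_normal(mu, cov, size=200_000)
x1, x2 = np.exp(logs[:, 0]), np.exp(logs[:, 1])

# ln(X1/X2) is a difference of correlated normals, hence normal with
# mean mu1 - mu2 and variance s1^2 + s2^2 - 2*cov12.
log_ratio = np.log(x1 / x2)
mean_theory = mu[0] - mu[1]                         # 0.5
var_theory = cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]  # 0.21
print(log_ratio.mean(), log_ratio.var())
```

With 200,000 samples the empirical mean and variance of the log-ratio agree with the theoretical values to within Monte Carlo error.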
Uniform ratio distribution

With two independent random variables following a uniform distribution, e.g.,
: $p_X(x) = \begin{cases} 1 & 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}$
the ratio distribution becomes
: $p_Z(z) = \begin{cases} 1/2 & 0 < z < 1 \\ \dfrac{1}{2z^2} & z \ge 1 \\ 0 & \text{otherwise} \end{cases}$
Cauchy ratio distribution

If two independent random variables, ''X'' and ''Y'', each follow a Cauchy distribution with median zero and scale factor ''a'', then the ratio ''Z'' = ''X''/''Y'' has density
:$p_Z(z) = \frac{1}{\pi^2}\,\frac{\ln(z^2)}{z^2 - 1}$
(the scale factor cancels, so the ratio of two standard Cauchy variables has the same law).
Ratio of standard normal to standard uniform

If ''X'' has a standard normal distribution and ''Y'' has a standard uniform distribution, then ''Z'' = ''X''/''Y'' has a distribution known as the ''slash distribution'', with probability density function
:$p_Z(z) = \begin{cases} \left[\varphi(0) - \varphi(z)\right]/z^2 & z \ne 0 \\ \varphi(0)/2 & z = 0 \end{cases}$
where $\varphi(z)$ is the probability density function of the standard normal distribution.
Chi-squared, Gamma, Beta distributions

Let ''G'' be a normal(0,1) distribution, and let ''Y'' and ''Z'' be chi-squared distributions with ''m'' and ''n'' degrees of freedom respectively, all independent, with $f_\chi(x, k) = \frac{x^{k/2-1} e^{-x/2}}{2^{k/2}\Gamma(k/2)}$. Then
: $\frac{G}{\sqrt{Y/m}} \sim t_m$, the Student's ''t'' distribution
: $\frac{Y/m}{Z/n} = F_{m,n}$, i.e. Fisher's ''F''-test distribution
: $\frac{Y}{Y+Z} \sim \beta(\tfrac{m}{2}, \tfrac{n}{2})$, the beta distribution
: $\frac{Y}{Z} \sim \beta'(\tfrac{m}{2}, \tfrac{n}{2})$, the ''standard'' beta prime distribution
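These identities can be spot-checked by simulation. A minimal sketch (degrees of freedom chosen arbitrarily), comparing sample means against the known Beta and F means:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
Y = rng.chisquare(m, 200_000)
Z = rng.chisquare(n, 200_000)

# Y/(Y+Z) ~ Beta(m/2, n/2), whose mean is m/(m+n)
b = Y / (Y + Z)
print(b.mean())          # close to 3/8 = 0.375

# (Y/m)/(Z/n) ~ F(m, n), whose mean is n/(n-2) for n > 2
f = (Y / m) / (Z / n)
print(f.mean())          # close to 5/3
```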
If $V_1 \sim {\chi'}^2_m(\lambda)$, a noncentral chi-squared distribution, $V_2 \sim {\chi'}^2_n(0)$ and $V_1$ is independent of $V_2$, then
: $\frac{V_1/m}{V_2/n} \sim F'_{m,n}(\lambda)$, a noncentral F-distribution.
$\frac{m}{n} F'_{m,n} = \beta'(\tfrac{m}{2}, \tfrac{n}{2}) \text{ or } F'_{m,n} = \beta'(\tfrac{m}{2}, \tfrac{n}{2}, 1, \tfrac{n}{m})$
defines $F'_{m,n}$, Fisher's F density distribution, the PDF of the ratio of two chi-squares with ''m'', ''n'' degrees of freedom.
The CDF of the Fisher density, found in ''F''-tables, is defined in the beta prime distribution article.
If we enter an ''F''-test table with ''m'' = 3, ''n'' = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral
: $F_{3,4}(6.59) = \int_{6.59}^\infty \beta'(x; \tfrac{3}{2}, 2, 1, \tfrac{4}{3})\, dx = 0.05$
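The tabulated critical value can be reproduced directly with SciPy's F distribution (assuming `scipy` is available):

```python
from scipy import stats

# 5% right-tail critical value of F with m = 3, n = 4 degrees of freedom
crit = stats.f.ppf(0.95, 3, 4)
print(round(crit, 2))           # the tabulated 6.59
print(stats.f.sf(crit, 3, 4))   # right-tail probability, 0.05 by construction
```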
For gamma distributions ''U'' and ''V'' with arbitrary shape parameters $\alpha_1$ and $\alpha_2$, note that $\theta$ below denotes a scale parameter, rather than a rate parameter.
If $U \sim \Gamma(\alpha_1, \theta), \; V \sim \Gamma(\alpha_2, \theta)$ share the same scale, then by rescaling the $\theta$ parameter to unity we have
: $\frac{U}{U+V} = \frac{U/\theta}{U/\theta + V/\theta} \sim \beta(\alpha_1, \alpha_2)$
: $\frac{U}{V} = \frac{U/\theta}{V/\theta} \sim \beta'(\alpha_1, \alpha_2)$
Thus, if $U \sim \Gamma(\alpha_1, \theta_1)$ and $V \sim \Gamma(\alpha_2, \theta_2)$, then
: $\frac{U}{V} \sim \beta'\!\left(\alpha_1, \alpha_2, 1, \frac{\theta_1}{\theta_2}\right) \quad \text{with} \quad \operatorname{E}\left[\frac{U}{V}\right] = \frac{\theta_1 \alpha_1}{\theta_2(\alpha_2 - 1)}$
in which $\beta'(\alpha, \beta, p, q)$ represents the ''generalised'' beta prime distribution.
In the foregoing it is apparent that if $X \sim \beta'(\alpha_1, \alpha_2, 1, 1) \equiv \beta'(\alpha_1, \alpha_2)$ then $\theta X \sim \beta'(\alpha_1, \alpha_2, 1, \theta)$. More explicitly, since
: $\beta'(x; \alpha_1, \alpha_2, 1, R) = \frac{1}{R}\,\beta'\!\left(\frac{x}{R}; \alpha_1, \alpha_2\right)$
if $U \sim \Gamma(\alpha_1, \theta_1), \; V \sim \Gamma(\alpha_2, \theta_2)$
then
:$\frac{U}{V} \sim \frac{1}{R}\,\beta'\!\left(\frac{x}{R}; \alpha_1, \alpha_2\right) = \frac{1}{R\,B(\alpha_1, \alpha_2)} \cdot \frac{(x/R)^{\alpha_1 - 1}}{(1 + x/R)^{\alpha_1 + \alpha_2}}, \;\; x \ge 0$
where
: $R = \frac{\theta_1}{\theta_2}, \;\;\; B(\alpha_1, \alpha_2) = \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)}{\Gamma(\alpha_1 + \alpha_2)}$
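A quick Monte Carlo check of the unequal-scale gamma ratio; the shape and scale values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2 = 2.0, 4.0        # shapes alpha_1, alpha_2
th1, th2 = 1.5, 0.5      # scales theta_1, theta_2
U = rng.gamma(a1, th1, 500_000)
V = rng.gamma(a2, th2, 500_000)
ratio = U / V

# E[U/V] = th1*a1 / (th2*(a2 - 1)), valid for a2 > 1
print(ratio.mean())                  # close to 1.5*2/(0.5*3) = 2.0

# Dividing out R = th1/th2 recovers a standard beta-prime(a1, a2),
# whose mean is a1/(a2 - 1)
print((ratio / (th1 / th2)).mean())  # close to 2/3
```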
Rayleigh Distributions

If ''X'', ''Y'' are independent samples from the Rayleigh distribution $f_r(r) = (r/\sigma^2)\, e^{-r^2/2\sigma^2}, \;\; r \ge 0$, the ratio ''Z'' = ''X''/''Y'' follows the distribution
:$f_z(z) = \frac{2z}{(1+z^2)^2}, \;\; z \ge 0$
and has cdf
:$F_z(z) = 1 - \frac{1}{1+z^2} = \frac{z^2}{1+z^2}, \;\;\; z \ge 0$
The Rayleigh distribution has scaling as its only parameter. The distribution of $Z = \alpha X/Y$ follows
:$f_z(z, \alpha) = \frac{2\alpha^2 z}{(\alpha^2 + z^2)^2}, \;\; z > 0$
and has cdf
:$F_z(z, \alpha) = \frac{z^2}{\alpha^2 + z^2}, \;\;\; z \ge 0$
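The scaled Rayleigh-ratio cdf is easy to verify by simulation; σ and α below are chosen arbitrarily (the result does not depend on σ):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, alpha = 2.0, 1.5
X = rng.rayleigh(sigma, 400_000)
Y = rng.rayleigh(sigma, 400_000)
Z = alpha * X / Y

# Compare the empirical CDF with F_z(z, alpha) = z^2/(alpha^2 + z^2)
for z in (0.5, 1.5, 3.0):
    emp = (Z <= z).mean()
    theo = z**2 / (alpha**2 + z**2)
    print(round(emp, 3), round(theo, 3))
```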
Fractional gamma distributions (including chi, chi-squared, exponential, Rayleigh and Weibull)

The generalized gamma distribution is
: $f(x; a, d, r) = \frac{r}{a^d\,\Gamma(d/r)}\, x^{d-1} e^{-(x/a)^r}, \; x \ge 0; \;\; a, d, r > 0$
which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here ''a'' is a scale parameter, rather than a rate parameter; ''d'' is a shape parameter.
: If $U \sim f(x; a_1, d_1, r), \;\; V \sim f(x; a_2, d_2, r) \text{ and } W = U/V$
: then $g(w) = \frac{r\left(\frac{a_2}{a_1}\right)^{d_1} w^{d_1 - 1}}{B\!\left(\frac{d_1}{r}, \frac{d_2}{r}\right)\left[1 + \left(\frac{a_2}{a_1}\right)^r w^r\right]^{(d_1 + d_2)/r}}, \;\; w > 0$
: where $B(u, v) = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u + v)}$
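As a concrete special case, taking $d_1 = d_2 = r$ turns both generalized gammas into Weibull distributions, and the ratio density above integrates to the simple cdf $G(w) = \kappa w^r/(1 + \kappa w^r)$ with $\kappa = (a_2/a_1)^r$. A sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
r = 2.5              # common shape exponent (d1 = d2 = r, i.e. Weibull)
a1, a2 = 2.0, 1.0    # scale parameters
U = a1 * rng.weibull(r, 400_000)
V = a2 * rng.weibull(r, 400_000)
W = U / V

# With d1 = d2 = r the ratio cdf is G(w) = k w^r/(1 + k w^r), k = (a2/a1)^r
k = (a2 / a1) ** r
for w in (0.5, 2.0, 5.0):
    print(round((W <= w).mean(), 3), round(k * w**r / (1 + k * w**r), 3))
```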
Modelling a mixture of different scaling factors

In the ratios above, gamma samples ''U'', ''V'' may have differing shape parameters $\alpha_1, \alpha_2$ but must share the same scale $\theta$. In situations where ''U'' and ''V'' are differently scaled, a variables transformation allows the modified random ratio pdf to be determined. Let $X = \frac{U}{U+V} = \frac{1}{1+B}$ where $U \sim \Gamma(\alpha_1, \theta), \; V \sim \Gamma(\alpha_2, \theta)$, $\theta$ arbitrary and, from above, $X \sim \beta(\alpha_1, \alpha_2), \; B = V/U \sim \beta'(\alpha_2, \alpha_1)$.

Rescale ''V'' arbitrarily, defining $Y = \frac{U}{U + \varphi V} = \frac{1}{1 + \varphi B}, \;\; 0 \le \varphi \le \infty$.

We have $B = \frac{1-X}{X}$ and substitution into ''Y'' gives $Y = \frac{X}{\varphi + (1-\varphi)X}, \;\; dY/dX = \frac{\varphi}{\left(\varphi + (1-\varphi)X\right)^2}$

Transforming ''X'' to ''Y'' gives $f_Y(Y) = \frac{f_X(X)}{|dY/dX|} = \frac{\beta(X; \alpha_1, \alpha_2)\left(\varphi + (1-\varphi)X\right)^2}{\varphi}$

Noting $X = \frac{\varphi Y}{1 - (1-\varphi)Y}$ we finally have
: $f_Y(Y, \varphi) = \frac{\varphi}{\left[1 - (1-\varphi)Y\right]^2}\,\beta\!\left(\frac{\varphi Y}{1 - (1-\varphi)Y}, \alpha_1, \alpha_2\right), \;\;\; 0 \le Y \le 1$

Thus, if $U \sim \Gamma(\alpha_1, \theta_1)$ and $V \sim \Gamma(\alpha_2, \theta_2)$, then $Y = \frac{U}{U+V}$ is distributed as $f_Y(Y, \varphi)$ with $\varphi = \frac{\theta_2}{\theta_1}$

The distribution of ''Y'' is limited here to the interval [0, 1]. It can be generalized by scaling such that if $Y \sim f_Y(Y, \varphi)$ then
: $\Theta Y \sim f_Y(Y, \varphi, \Theta)$ where $f_Y(Y, \varphi, \Theta) = \frac{\varphi/\Theta}{\left[1 - (1-\varphi)Y/\Theta\right]^2}\,\beta\!\left(\frac{\varphi Y/\Theta}{1 - (1-\varphi)Y/\Theta}, \alpha_1, \alpha_2\right), \;\;\; 0 \le Y \le \Theta$
: $\Theta Y$ is then a sample from $\frac{\Theta U}{U + \varphi V}$
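The differing-scale result can be verified by inverting the transformation: mapping ''Y'' back through $X = \varphi Y / (1 - (1-\varphi)Y)$ should recover an exact Beta($\alpha_1, \alpha_2$) sample. A sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
a1, a2 = 2.0, 3.0        # shapes
th1, th2 = 1.0, 2.5      # unequal scales
U = rng.gamma(a1, th1, 400_000)
V = rng.gamma(a2, th2, 400_000)
Y = U / (U + V)

# Invert Y = X/(phi + (1-phi)X) with phi = th2/th1:
phi = th2 / th1
X = phi * Y / (1 - (1 - phi) * Y)

# X should be Beta(a1, a2): mean a1/(a1+a2), variance a1*a2/((a1+a2)^2 (a1+a2+1))
print(X.mean())  # close to 0.4
print(X.var())   # close to 0.04
```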

Reciprocals of samples from beta distributions

Though not ratio distributions of two variables, the following identities for one variable are useful:
: If $X \sim \beta(\alpha, \beta)$ then $x = \frac{X}{1-X} \sim \beta'(\alpha, \beta)$
: If $Y \sim \beta'(\alpha, \beta)$ then $y = \frac{1}{Y} \sim \beta'(\beta, \alpha)$
Combining the latter two equations yields
: If $X \sim \beta(\alpha, \beta)$ then $x = \frac{1}{X} - 1 \sim \beta'(\beta, \alpha)$.
: If $Y \sim \beta'(\alpha, \beta)$ then $y = \frac{Y}{1+Y} \sim \beta(\alpha, \beta)$
since $\frac{1}{1+Y} = \frac{1/Y}{1/Y + 1} \sim \beta(\beta, \alpha)$, then
: $1 + Y \sim \left\{\beta(\beta, \alpha)\right\}^{-1}$, the distribution of the reciprocals of $\beta(\beta, \alpha)$ samples.
If $U \sim \Gamma(\alpha, 1), \; V \sim \Gamma(\beta, 1)$ then $\frac{U}{V} \sim \beta'(\alpha, \beta)$ and
: $\frac{U}{U+V} = \frac{U/V}{1 + U/V} \sim \beta(\alpha, \beta)$
Further results can be found in the Inverse distribution article.
* If $X, \; Y$ are independent exponential random variables with mean ''μ'', then ''X'' − ''Y'' is a double exponential random variable with mean 0 and scale ''μ''.
Binomial distribution

This result was first derived by Katz ''et al.'' in 1978.Katz D. ''et al''. (1978) Obtaining confidence intervals for the risk ratio in cohort studies. Biometrics 34:469–474 Suppose ''X'' ~ Binomial(''n'', ''p''<sub>1</sub>) and ''Y'' ~ Binomial(''m'', ''p''<sub>2</sub>) are independent. Then the log of the ratio of proportions, $\ln\left(\frac{X/n}{Y/m}\right)$, is approximately normally distributed with mean $\ln(p_1/p_2)$ and variance $\frac{1-p_1}{n p_1} + \frac{1-p_2}{m p_2}$.
Poisson and truncated Poisson distributions

In the ratio of Poisson variables ''R'' = ''X''/''Y'' there is a problem that ''Y'' is zero with finite probability, so ''R'' is undefined. To counter this, consider the truncated, or censored, ratio ''R′'' = ''X''/''Y′'' where zero samples of ''Y'' are discounted. Moreover, in many medical-type surveys there are systematic problems with the reliability of the zero samples of both ''X'' and ''Y'', and it may be good practice to ignore the zero samples anyway.

The probability of a null Poisson sample being $e^{-\lambda}$, the generic pdf of a left-truncated Poisson distribution is
:$\tilde p_x(x; \lambda) = \frac{1}{1 - e^{-\lambda}}\,\frac{e^{-\lambda}\lambda^x}{x!}, \;\;\; x \in 1, 2, 3, \cdots$
which sums to unity. Following Cohen, for ''n'' independent trials the multidimensional truncated pdf is
:$\tilde p(x_1, x_2, \dots, x_n; \lambda) = \frac{1}{(1 - e^{-\lambda})^n} \prod_{i=1}^n \frac{e^{-\lambda}\lambda^{x_i}}{x_i!}, \;\;\; x_i \in 1, 2, 3, \cdots$
and the log likelihood becomes
:$L = \ln(\tilde p) = -n\ln(1 - e^{-\lambda}) - n\lambda + \ln(\lambda)\sum_1^n x_i - \ln\prod_1^n (x_i!), \;\;\; x_i \in 1, 2, 3, \cdots$
On differentiation we get
:$dL/d\lambda = \frac{-n e^{-\lambda}}{1 - e^{-\lambda}} - n + \frac{1}{\lambda}\sum_{i=1}^n x_i$
and setting to zero gives the maximum likelihood estimate $\hat\lambda_{ML}$
:$\frac{\hat\lambda_{ML}}{1 - e^{-\hat\lambda_{ML}}} = \frac{1}{n}\sum_{i=1}^n x_i = \bar x$
Note that as $\hat\lambda \to 0$ then $\bar x \to 1$, so the truncated maximum likelihood $\lambda$ estimate, though correct for both truncated and untruncated distributions, gives a truncated mean $\bar x$ value which is highly biased relative to the untruncated one. Nevertheless, it appears that $\bar x$ is a sufficient statistic for $\lambda$, since $\hat\lambda_{ML}$ depends on the data only through the sample mean $\bar x = \frac{1}{n}\sum_{i=1}^n x_i$ in the previous equation, which is consistent with the methodology of the conventional Poisson distribution.
Absent any closed-form solution, the maximum likelihood equation must be inverted numerically; an approximate explicit reversion for the truncated $\hat\lambda$, of the form $\bar x$ minus exponentially decaying correction terms with residual error $|\epsilon| < 0.006$, is valid over the whole range $0 \le \lambda \le \infty; \; 1 \le \bar x \le \infty$. This compares with the non-truncated version, which is simply $\hat\lambda = \bar x$. Taking the ratio $R = \hat\lambda_X / \hat\lambda_Y$ is a valid operation even though $\hat\lambda_X$ may use a non-truncated model while $\hat\lambda_Y$ has a left-truncated one.
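In practice the truncated MLE is obtained by solving $\hat\lambda/(1 - e^{-\hat\lambda}) = \bar x$ numerically. A sketch using a root finder (the true λ is chosen arbitrarily for illustration):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
lam_true = 1.3
x = rng.poisson(lam_true, 50_000)
x = x[x > 0]          # left-truncate: discard the zero counts
xbar = x.mean()       # biased upward relative to lam_true

# Solve lambda/(1 - exp(-lambda)) = xbar for the truncated MLE
lam_hat = brentq(lambda lam: lam / (1 - np.exp(-lam)) - xbar, 1e-9, 50.0)
print(xbar, lam_hat)  # xbar inflated by truncation, lam_hat near 1.3
```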
The asymptotic large-$n\lambda$ variance of $\hat\lambda$ (and Cramér–Rao bound) is
:$\mathbb{Var}(\hat\lambda) \ge -\left(\mathbb{E}\left[\frac{\partial^2 L}{\partial\lambda^2}\right]\right)^{-1}$
in which substituting ''L'' gives
:$\frac{\partial^2 L}{\partial\lambda^2} = -n\left[\frac{\bar x}{\lambda^2} - \frac{e^{-\lambda}}{(1 - e^{-\lambda})^2}\right]$
Then substituting $\bar x$ from the equation above, we get Cohen's variance estimate
:$\mathbb{Var}(\hat\lambda) \ge \frac{\lambda}{n}\,\frac{(1 - e^{-\lambda})^2}{1 - (\lambda + 1)e^{-\lambda}}$
The variance of the point estimate of the mean $\lambda$, on the basis of ''n'' trials, decreases asymptotically to zero as ''n'' increases to infinity. For small $\lambda$ it diverges from the truncated pdf variance in Springael, for example, who quotes a variance of
:$\mathbb{Var}(\lambda) = \frac{\lambda}{n(1 - e^{-\lambda})}\left[1 - \frac{\lambda e^{-\lambda}}{1 - e^{-\lambda}}\right]$
for ''n'' samples in the left-truncated pdf shown at the top of this section. Cohen showed that the variance of the estimate relative to the variance of the pdf, $\backslash mathbb\; (\; \backslash hat\; \backslash lambda)\; /\; \backslash mathbb\; (\; \backslash lambda)$, ranges from 1 for large $\backslash lambda$ (100% efficient) up to 2 as $\backslash lambda$ approaches zero (50% efficient).
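Cohen's efficiency claim can be checked directly by comparing the bound above with the untruncated variance $\lambda/n$ (the interpretation of "variance of the pdf" as the untruncated Poisson variance is an assumption here; ''n'' = 1 for simplicity):

```python
import numpy as np

def cohen_var(lam, n=1):
    """Cramer-Rao bound for the left-truncated Poisson MLE (Cohen)."""
    return lam * (1 - np.exp(-lam)) ** 2 / (n * (1 - (1 + lam) * np.exp(-lam)))

# Ratio to the untruncated variance lambda/n:
# about 2 as lambda -> 0, falling toward 1 for large lambda
for lam in (0.01, 1.0, 10.0):
    print(lam, round(cohen_var(lam) / lam, 3))
```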
These mean and variance parameter estimates, together with parallel estimates for ''X'', can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Bohning and there is a Zero-truncated Poisson distribution Wikipedia entry.
Double Lomax distribution

This distribution is the ratio of two Laplace distributions.Bindu P and Sangita K (2015) Double Lomax distribution and its applications. Statistica LXXV (3) 331–342 Let ''X'' and ''Y'' be standard Laplace identically distributed random variables and let ''z'' = ''X''/''Y''. Then the probability distribution of ''z'' is
: $f(x) = \frac{1}{2(1 + |x|)^2}$
Let the mean of the ''X'' and ''Y'' be ''a''. Then the standard double Lomax distribution is symmetric around ''a''.
This distribution has an infinite mean and variance.
If ''Z'' has a standard double Lomax distribution, then 1/''Z'' also has a standard double Lomax distribution.
The standard double Lomax distribution is unimodal and has heavier tails than the Laplace distribution.
For 0 < ''a'' < 1, the ''a''-th absolute moment exists:
: $E(|Z|^a) = \Gamma(1+a)\,\Gamma(1-a) = \frac{\pi a}{\sin(\pi a)}$
where Γ is the gamma function.
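A simulation check of the standard double Lomax as a Laplace ratio, comparing the empirical cdf with $F(z) = 1 - \tfrac{1}{2(1+z)}$ for $z \ge 0$ and $\tfrac{1}{2(1+|z|)}$ for $z < 0$ (obtained by integrating the density above):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400_000
X = rng.laplace(0.0, 1.0, n)
Y = rng.laplace(0.0, 1.0, n)
Z = X / Y

# CDF implied by the double Lomax density f(x) = 1/(2(1+|x|)^2)
def double_lomax_cdf(z):
    return 0.5 / (1 + abs(z)) if z < 0 else 1 - 0.5 / (1 + z)

for z in (-2.0, 0.0, 1.0, 4.0):
    print(round((Z <= z).mean(), 3), round(double_lomax_cdf(z), 3))
```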
Ratio distributions in multivariate analysis

Ratio distributions also appear in multivariate analysis. If the random matrices '''X''' and '''Y''' follow a Wishart distribution then the ratio of the determinants
: $\varphi = |\mathbf{X}| / |\mathbf{Y}|$
is proportional to the product of independent F random variables. In the case where X and Y are from independent standardized Wishart distributions then the ratio
: $\Lambda = \frac{|\mathbf{X}|}{|\mathbf{X} + \mathbf{Y}|}$
has a Wilks' lambda distribution.
Ratios of Quadratic Forms involving Wishart Matrices

Probability distributions can be derived from random quadratic forms
:$r = V^T A V$
where $V$ and/or $A$ are random. If ''A'' is the inverse of another matrix ''B'' then $r = V^T B^{-1} V$ is a random ratio in some sense, frequently arising in least squares estimation problems.

In the Gaussian case, if ''A'' is a matrix drawn from a complex Wishart distribution $A \sim W_C(A_0, k, p)$ of dimensionality ''p'' × ''p'' and ''k'' degrees of freedom with $k \ge p$, while $V$ is an arbitrary complex vector with Hermitian (conjugate) transpose $(\cdot)^H$, the ratio
:$r = k\,\frac{V^H A_0^{-1} V}{V^H A^{-1} V}$
follows the Gamma distribution
: $p_1(r) = \frac{r^{k-p}\, e^{-r}}{(k-p)!}, \;\;\; r \ge 0$
The result arises in least squares adaptive Wiener filtering - see eqn(A13) of the original article. Note that the original article contends that the distribution is $p_1(r) = r^{k-p}\, e^{-r} / \Gamma(k-p)$.

Similarly, for full-rank ($k \ge p$) zero-mean real-valued Wishart matrix samples $W \sim W(\Sigma, k, p)$, and ''V'' a random vector independent of ''W'', the ratio
:$r = \frac{V^T \Sigma^{-1} V}{V^T W^{-1} V} \sim \chi^2_{k-p+1}$
This result is usually attributed to Muirhead (1982).
Given a complex Wishart matrix $A \sim W_C(I, k, p)$, a related ratio $\rho$ follows the Beta distribution (see eqn(47) of the reference)
: $p_2(\rho) = (1-\rho)^{k-p+1}\,\rho^{p-2}\,\frac{\Gamma(k+1)}{\Gamma(p-1)\Gamma(k-p+2)}, \;\;\; 0 \le \rho \le 1$
The result arises in the performance analysis of constrained least squares filtering and derives from a more complex but ultimately equivalent ratio for $A \sim W_C(A_0, n, p)$. In its simplest form, if $A \sim W_C(I, k, p)$, the ratio of the modulus square of the (1,1) element of $A^{-1}$ to the sum of modulus squares of the whole top row of elements has distribution
: $\rho = \frac{\left|\left(A^{-1}\right)_{11}\right|^2}{\sum_{j=1}^p \left|\left(A^{-1}\right)_{1j}\right|^2} \sim \beta(p-1, k-p+2)$
See also

* Relationships among probability distributions
* Inverse distribution (also known as reciprocal distribution)
* Product distribution
* Ratio estimator

* Slash distribution
Notes

References

{{Reflist}}
External links

* Ratio Distribution at MathWorld
* Normal Ratio Distribution at MathWorld
* Ratio Distributions at MathPages
Algebra of random variables
Statistical ratios
Types of probability distributions