Ratio distribution


A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions. Given two (usually independent) random variables ''X'' and ''Y'', the distribution of the random variable ''Z'' that is formed as the ratio ''Z'' = ''X''/''Y'' is a ''ratio distribution''. An example is the Cauchy distribution (also called the ''normal ratio distribution''), which comes about as the ratio of two normally distributed variables with zero mean. Two other distributions often used in test statistics are also ratio distributions: the ''t''-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable, while the ''F''-distribution originates from the ratio of two independent chi-squared distributed random variables. More general ratio distributions have been considered in the literature.

Ratio distributions are often heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test. A method based on the median has been suggested as a "work-around".

# Algebra of random variables

The ratio is one type of algebra for random variables: related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's book from 1979, ''The Algebra of Random Variables''.

The algebraic rules known for ordinary numbers do not apply to the algebra of random variables. For example, if a product is ''C'' = ''AB'' and a ratio is ''D'' = ''C''/''A'', it does not necessarily mean that the distributions of ''D'' and ''B'' are the same. Indeed, a peculiar effect is seen for the Cauchy distribution: the product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) will give the same distribution. This becomes evident when regarding the Cauchy distribution as itself a ratio distribution of two Gaussian distributions of zero means: consider two Cauchy random variables, $C_1$ and $C_2$, each constructed from two Gaussian distributions, $C_1 = G_1/G_2$ and $C_2 = G_3/G_4$; then

: $\frac{C_1}{C_2} = \frac{G_1/G_2}{G_3/G_4} = \frac{G_1 G_4}{G_2 G_3} = \frac{G_1}{G_2} \times \frac{G_4}{G_3} = C_1 \times C_3,$

where $C_3 = G_4/G_3$. The first term is the ratio of two Cauchy distributions while the last term is the product of two such distributions.
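This equality of the product and ratio distributions is easy to probe numerically. Below is a minimal Monte Carlo sketch in Python (sample size and seed are arbitrary choices) comparing the two simulated distributions with a two-sample Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Standard Cauchy variables built as ratios of zero-mean Gaussians
c1 = rng.standard_normal(n) / rng.standard_normal(n)
c2 = rng.standard_normal(n) / rng.standard_normal(n)

# The claim: C1/C2 and C1*C2 have the same distribution.
# A large KS p-value means the two samples are statistically indistinguishable.
print(stats.ks_2samp(c1 / c2, c1 * c2))
```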

# Derivation

A way of deriving the ratio distribution of $Z = X/Y$ from the joint distribution of the two other random variables ''X'', ''Y'', with joint pdf $p_{X,Y}(x,y)$, is by integration of the following form:

: $p_Z(z) = \int_{-\infty}^{+\infty} |y| \, p_{X,Y}(zy, y) \, dy.$

If the two variables are independent then $p_{X,Y}(x,y) = p_X(x)\, p_Y(y)$ and this becomes

: $p_Z(z) = \int_{-\infty}^{+\infty} |y| \, p_X(zy)\, p_Y(y) \, dy.$

This may not be straightforward. By way of example take the classical problem of the ratio of two standard Gaussian samples. The joint pdf is

: $p_{X,Y}(x,y) = \frac{1}{2\pi} \exp\left(-\frac{x^2}{2}\right) \exp\left(-\frac{y^2}{2}\right)$

Defining $Z = X/Y$ we have

: $\begin{align} p_Z(z) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} |y| \, \exp\left(-\frac{(zy)^2}{2}\right) \exp\left(-\frac{y^2}{2}\right) dy \\ &= \frac{1}{2\pi} \int_{-\infty}^{\infty} |y| \, \exp\left(-\frac{y^2(z^2+1)}{2}\right) dy \end{align}$

Using the known definite integral $\int_0^{\infty} x \exp\left(-cx^2\right) dx = \frac{1}{2c}$ we get

: $p_Z(z) = \frac{1}{\pi(z^2+1)}$

which is the Cauchy distribution, or Student's ''t'' distribution with ''n'' = 1.

The Mellin transform has also been suggested for derivation of ratio distributions. In the case of positive independent variables, proceed as follows. The diagram shows a separable bivariate distribution $f_{x,y}(x,y) = f_x(x) f_y(y)$ which has support in the positive quadrant $x, y > 0$, and we wish to find the pdf of the ratio $R = X/Y$. The hatched volume above the line $y = x/R$ represents the cumulative distribution of the function $f_{x,y}(x,y)$ multiplied with the logical function $X/Y \le R$. The density is first integrated in horizontal strips; the horizontal strip at height ''y'' extends from ''x'' = 0 to ''x'' = ''Ry'' and has incremental probability $f_y(y)\,dy \int_0^{Ry} f_x(x) \,dx$.
Secondly, integrating the horizontal strips upward over all ''y'' yields the volume of probability above the line

: $F_R(R) = \int_0^\infty f_y(y) \left(\int_0^{Ry} f_x(x)\,dx \right) dy$

Finally, differentiate $F_R(R)$ with respect to $R$ to get the pdf $f_R(R)$. Move the differentiation inside the integral:

: $f_R(R) = \int_0^\infty f_y(y) \left(\frac{d}{dR} \int_0^{Ry} f_x(x)\,dx \right) dy$

and since

: $\frac{d}{dR} \int_0^{Ry} f_x(x)\,dx = y\, f_x(Ry)$

then

: $f_R(R) = \int_0^\infty f_y(y) \; f_x(Ry) \; y \; dy$

As an example, find the pdf of the ratio ''R'' when

: $f_x(x) = \alpha e^{-\alpha x}, \;\;\;\; f_y(y) = \beta e^{-\beta y}, \;\;\; x, y \ge 0$

We have

: $\int_0^{Ry} f_x(x)\,dx = -e^{-\alpha x} \Big\vert_0^{Ry} = 1 - e^{-\alpha R y}$

thus

: $\begin{align} F_R(R) &= \int_0^\infty f_y(y) \left( 1 - e^{-\alpha R y} \right) dy = \int_0^\infty \beta e^{-\beta y} \left( 1 - e^{-\alpha R y} \right) dy \\ &= 1 - \frac{\beta}{\beta + \alpha R} \\ &= \frac{\alpha R}{\beta + \alpha R} \end{align}$

Differentiating with respect to ''R'' yields the pdf of ''R''

: $f_R(R) = \frac{d}{dR} \left( \frac{\alpha R}{\beta + \alpha R} \right) = \frac{\alpha \beta}{(\beta + \alpha R)^2}$
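As a check on this example, the closed-form cdf $F_R(R) = \alpha R/(\beta + \alpha R)$ can be compared against simulation; a short sketch (the rates α = 2, β = 3 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 3.0
n = 200_000

# X ~ Exp(rate alpha), Y ~ Exp(rate beta); numpy's exponential takes scale = 1/rate
r = rng.exponential(1 / alpha, n) / rng.exponential(1 / beta, n)

# Empirical vs theoretical cdf F_R(R) = alpha R / (beta + alpha R)
for R in (0.5, 1.0, 2.0, 5.0):
    print(f"R={R}: empirical {(r <= R).mean():.4f}  theory {alpha*R/(beta + alpha*R):.4f}")
```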

# Moments of random ratios

From Mellin transform theory, for distributions existing only on the positive half-line $x \ge 0$, we have the product identity $\operatorname{E}\left[(UV)^p\right] = \operatorname{E}\left[U^p\right] \operatorname{E}\left[V^p\right]$ provided $U, \; V$ are independent. For the case of a ratio of samples like $\operatorname{E}\left[(X/Y)^p\right]$, in order to make use of this identity it is necessary to use moments of the inverse distribution. Set $1/Y = Z$ such that $\operatorname{E}\left[(X/Y)^p\right] = \operatorname{E}\left[X^p\right] \operatorname{E}\left[Z^p\right] = \operatorname{E}\left[X^p\right] \operatorname{E}\left[Y^{-p}\right]$.

# Means and variances of random ratios

In the Product distribution section, and derived from Mellin transform
theory (see section above), it is found that the mean of a product of independent variables is equal to the product of their means. In the case of ratios, we have

: $\operatorname{E}(X/Y) = \operatorname{E}(X) \operatorname{E}(1/Y)$

which, in terms of probability distributions, is equivalent to

: $\operatorname{E}(X/Y) = \int_{-\infty}^\infty x f_x(x) \, dx \times \int_{-\infty}^\infty y^{-1} f_y(y) \, dy$

Note that $\operatorname{E}(1/Y) \neq \frac{1}{\operatorname{E}(Y)}$, i.e.,

: $\int_{-\infty}^\infty y^{-1} f_y(y) \, dy \ne \frac{1}{\int_{-\infty}^\infty y f_y(y) \, dy}$

The variance of a ratio of independent variables is

: $\begin{align} \operatorname{Var}(X/Y) &= \operatorname{E}\left([X/Y]^2\right) - \left[\operatorname{E}(X/Y)\right]^2 \\ &= \operatorname{E}(X^2) \operatorname{E}(1/Y^2) - \left[\operatorname{E}(X)\right]^2 \left[\operatorname{E}(1/Y)\right]^2 \end{align}$
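The caveat $\operatorname{E}(1/Y) \neq 1/\operatorname{E}(Y)$ is worth a numerical illustration. A sketch with $Y \sim \Gamma(\alpha, 1)$, for which $\operatorname{E}(1/Y) = 1/(\alpha - 1)$ exactly, while $1/\operatorname{E}(Y) = 1/\alpha$:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 5.0
y = rng.gamma(alpha, 1.0, 500_000)

print("E(1/Y) ~", (1 / y).mean())  # close to 1/(alpha - 1) = 0.25
print("1/E(Y) ~", 1 / y.mean())    # close to 1/alpha       = 0.20
```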

# Normal ratio distributions

## Uncorrelated central normal ratio

When ''X'' and ''Y'' are independent and have a Gaussian distribution with zero mean, the form of their ratio distribution is a Cauchy distribution. This can be derived by setting $Z = X/Y = \tan \theta$ and then showing that $\theta$ has circular symmetry. For a bivariate uncorrelated Gaussian distribution we have

: $\begin{align} p(x,y) &= \tfrac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2} \times \tfrac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2} \\ &= \tfrac{1}{2\pi} e^{-\frac{1}{2}(x^2 + y^2)} \\ &= \tfrac{1}{2\pi} e^{-\frac{1}{2}r^2} \text{ with } r^2 = x^2 + y^2 \end{align}$

If $p(x,y)$ is a function only of ''r'' then $\theta$ is uniformly distributed on $[0, 2\pi]$ with density $1/2\pi$, so the problem reduces to finding the probability distribution of ''Z'' under the mapping

: $Z = X/Y = \tan \theta$

We have, by conservation of probability,

: $p_z(z)\, |dz| = p_\theta(\theta)\, |d\theta|$

and since $dz/d\theta = 1/\cos^2 \theta$,

: $p_z(z) = \frac{p_\theta(\theta)}{|dz/d\theta|} = \tfrac{1}{2\pi} \cos^2 \theta$

and setting $\cos^2 \theta = \frac{1}{1 + \tan^2 \theta} = \frac{1}{1 + z^2}$ we get

: $p_z(z) = \frac{1/2\pi}{1 + z^2}$

There is a spurious factor of 2 here. Actually, two values of $\theta$ spaced by $\pi$ map onto the same value of ''z'', the density is doubled, and the final result is

: $p_z(z) = \frac{1}{\pi (1 + z^2)}, \;\; -\infty < z < \infty$

When either of the two Normal distributions is non-central then the result for the distribution of the ratio is much more complicated and is given below in the succinct form presented by David Hinkley. The trigonometric method for a ratio does however extend to radial distributions like bivariate normals or a bivariate Student ''t'' in which the density depends only on radius $r = \sqrt{x^2 + y^2}$. It does not extend to the ratio of two independent Student ''t'' distributions, which give the Cauchy ratio shown in a section below for one degree of freedom.
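A simulation sketch of this central result, comparing the empirical distribution of a ratio of standard normals with the Cauchy cdf $\tfrac{1}{2} + \arctan(z)/\pi$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
z = rng.standard_normal(n) / rng.standard_normal(n)

# Empirical cdf vs Cauchy cdf 1/2 + arctan(z)/pi
for q in (-2.0, -0.5, 0.0, 1.0, 3.0):
    print(f"q={q}: empirical {(z <= q).mean():.4f}  Cauchy {0.5 + np.arctan(q)/np.pi:.4f}")
```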

## Uncorrelated noncentral normal ratio

In the absence of correlation $(\operatorname{cor}(X,Y) = 0)$, the probability density function of the ratio $Z = X/Y$ of the two normal variables $X = N(\mu_x, \sigma_x^2)$ and $Y = N(\mu_y, \sigma_y^2)$ is given exactly by the following expression, derived in several sources:

: $p_Z(z) = \frac{b(z)\, d(z)}{a^3(z)} \frac{1}{\sqrt{2\pi}\, \sigma_x \sigma_y} \left[\Phi\left(\frac{b(z)}{a(z)}\right) - \Phi\left(-\frac{b(z)}{a(z)}\right)\right] + \frac{1}{a^2(z)\, \pi\, \sigma_x \sigma_y} e^{-\frac{c}{2}}$

where

: $a(z) = \sqrt{\frac{z^2}{\sigma_x^2} + \frac{1}{\sigma_y^2}}$

: $b(z) = \frac{\mu_x}{\sigma_x^2} z + \frac{\mu_y}{\sigma_y^2}$

: $c = \frac{\mu_x^2}{\sigma_x^2} + \frac{\mu_y^2}{\sigma_y^2}$

: $d(z) = e^{\frac{b^2(z) - c\, a^2(z)}{2 a^2(z)}}$

and $\Phi$ is the normal cumulative distribution function:

: $\Phi(t) = \int_{-\infty}^{t} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} u^2} \, du \, .$

Under certain conditions, a normal approximation is possible, with variance:

: $\sigma_z^2 = \frac{\mu_x^2}{\mu_y^2} \left(\frac{\sigma_x^2}{\mu_x^2} + \frac{\sigma_y^2}{\mu_y^2}\right)$
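The expression is straightforward to evaluate numerically. The sketch below implements the pdf exactly as written above (the function name and parameter values are our own choices) and checks it against a histogram of simulated ratios:

```python
import numpy as np
from scipy.stats import norm

def ratio_pdf(z, mux, muy, sx, sy):
    """Hinkley's pdf of Z = X/Y for independent X ~ N(mux, sx^2), Y ~ N(muy, sy^2)."""
    a = np.sqrt(z**2 / sx**2 + 1 / sy**2)
    b = mux * z / sx**2 + muy / sy**2
    c = mux**2 / sx**2 + muy**2 / sy**2
    d = np.exp((b**2 - c * a**2) / (2 * a**2))
    return (b * d / a**3 * (norm.cdf(b / a) - norm.cdf(-b / a))
            / (np.sqrt(2 * np.pi) * sx * sy)
            + np.exp(-c / 2) / (a**2 * np.pi * sx * sy))

rng = np.random.default_rng(4)
mux, muy, sx, sy = 1.0, 4.0, 1.0, 1.0
z = (mux + sx * rng.standard_normal(1_000_000)) / (muy + sy * rng.standard_normal(1_000_000))

# Histogram normalized by the full sample size (the tails outside the range are rare)
counts, edges = np.histogram(z, bins=200, range=(-1.0, 1.0))
mid = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (z.size * (edges[1] - edges[0]))
print("max abs pdf deviation (should be small):",
      np.abs(emp - ratio_pdf(mid, mux, muy, sx, sy)).max())
```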

## Correlated central normal ratio

The above expression becomes more complicated when the variables ''X'' and ''Y'' are correlated. If $\mu_x = \mu_y = 0$ but $\sigma_X \neq \sigma_Y$ and $\rho \neq 0$, the more general Cauchy distribution is obtained

: $p_Z(z) = \frac{1}{\pi} \frac{\beta}{(z - \alpha)^2 + \beta^2},$

where ''ρ'' is the correlation coefficient between ''X'' and ''Y'' and

: $\alpha = \rho \frac{\sigma_x}{\sigma_y},$

: $\beta = \frac{\sigma_x}{\sigma_y} \sqrt{1 - \rho^2}.$

The complex distribution has also been expressed with Kummer's confluent hypergeometric function or the Hermite function.

## Correlated noncentral normal ratio

### Approximations to correlated noncentral normal ratio

A transformation to the log domain was suggested by Katz (1978) (see binomial section below). Let the ratio be

: $T \sim \frac{\mu_x + \mathbb{N}(0, \sigma_x^2)}{\mu_y + \mathbb{N}(0, \sigma_y^2)} = \frac{\mu_x + x}{\mu_y + y} = \frac{\mu_x}{\mu_y} \frac{1 + \frac{x}{\mu_x}}{1 + \frac{y}{\mu_y}}.$

Take logs to get

: $\log_e(T) = \log_e\left(\frac{\mu_x}{\mu_y}\right) + \log_e\left(1 + \frac{x}{\mu_x}\right) - \log_e\left(1 + \frac{y}{\mu_y}\right).$

Since $\log_e(1 + \delta) = \delta - \frac{\delta^2}{2} + \frac{\delta^3}{3} + \cdots$ then asymptotically

: $\log_e(T) \approx \log_e\left(\frac{\mu_x}{\mu_y}\right) + \frac{x}{\mu_x} - \frac{y}{\mu_y} \sim \log_e\left(\frac{\mu_x}{\mu_y}\right) + \mathbb{N}\left(0, \frac{\sigma_x^2}{\mu_x^2} + \frac{\sigma_y^2}{\mu_y^2}\right).$

Alternatively, Geary (1930) suggested that

: $t \approx \frac{\mu_y z - \mu_x}{\sqrt{\sigma_y^2 z^2 - 2\rho \sigma_x \sigma_y z + \sigma_x^2}}$

has approximately a standard Gaussian distribution. This transformation has been called the ''Geary–Hinkley transformation''; the approximation is good if ''Y'' is unlikely to assume negative values, basically $\mu_y > 3\sigma_y$.
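A sketch of the Geary–Hinkley transformation in use, with parameters chosen so that $\mu_y > 3\sigma_y$; the transformed samples should be close to standard normal:

```python
import numpy as np

rng = np.random.default_rng(5)
mux, muy, sx, sy, rho = 2.0, 10.0, 1.0, 2.0, 0.5  # mu_y > 3 sigma_y

cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
x, y = rng.multivariate_normal([mux, muy], cov, size=1_000_000).T
z = x / y

# Geary-Hinkley transform: t should be approximately N(0, 1)
t = (muy * z - mux) / np.sqrt(sy**2 * z**2 - 2 * rho * sx * sy * z + sx**2)
print("mean:", t.mean(), " std:", t.std())
```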

### Exact correlated noncentral normal ratio

Geary showed how the correlated ratio $z$ could be transformed into a near-Gaussian form and developed an approximation for $t$ dependent on the probability of negative denominator values $x + \mu_x < 0$ being vanishingly small. Fieller's later correlated ratio analysis is exact, but care is needed when used with modern math packages, and similar problems may occur in some of Marsaglia's equations. Pham-Gia has exhaustively discussed these methods. Hinkley's correlated results are exact, but it is shown below that the correlated ratio condition can be transformed simply into an uncorrelated one, so only the simplified Hinkley equations above are required, not the full correlated ratio version.

Let the ratio be

: $z = \frac{x + \mu_x}{y + \mu_y}$

in which $x, y$ are zero-mean correlated normal variables with variances $\sigma_x^2, \sigma_y^2$ and $X, Y$ have means $\mu_x, \mu_y.$ Write $x' = x - \rho y \sigma_x/\sigma_y$ such that $x', y$ become uncorrelated and $x'$ has standard deviation

: $\sigma_{x'} = \sigma_x \sqrt{1 - \rho^2} .$

The ratio

: $z = \frac{x' + \rho y \sigma_x/\sigma_y + \mu_x}{y + \mu_y}$

is invariant under this transformation and retains the same pdf. The $y$ term in the numerator is made separable by expanding:

: $x' + \rho y \frac{\sigma_x}{\sigma_y} + \mu_x = x' + \mu_x - \rho \mu_y \frac{\sigma_x}{\sigma_y} + \rho (y + \mu_y) \frac{\sigma_x}{\sigma_y}$

to get

: $z = \frac{x' + \mu'_x}{y + \mu_y} + \rho \frac{\sigma_x}{\sigma_y}$

in which $\mu'_x = \mu_x - \rho \mu_y \frac{\sigma_x}{\sigma_y}$ and ''z'' has now become a ratio of uncorrelated non-central normal samples with an invariant ''z''-offset. Finally, to be explicit, the pdf of the ratio $z$ for correlated variables is found by inputting the modified parameters $\sigma_{x'}, \mu'_x, \sigma_y, \mu_y$ and $\rho' = 0$ into the Hinkley equation above, which returns the pdf for the correlated ratio with a constant offset $-\rho \frac{\sigma_x}{\sigma_y}$ on $z$.

The figures above show an example of a positively correlated ratio with $\sigma_x = \sigma_y = 1, \mu_x = 0, \mu_y = 0.5, \rho = 0.975$, in which the shaded wedges represent the increment of area selected by a given ratio, which accumulates probability where they overlap the distribution. The theoretical distribution, derived from the equations under discussion combined with Hinkley's equations, is highly consistent with a simulation result using 5,000 samples. In the top figure it is easily understood that for a ratio $z = x/y = 1$ the wedge almost bypasses the distribution mass altogether, and this coincides with a near-zero region in the theoretical pdf. Conversely, as $x/y$ reduces toward zero, the line collects a higher probability.

This transformation will be recognized as being the same as that used by Geary (1932) as a partial result in his ''eqn viii'', though its derivation and limitations were hardly explained. Thus the first part of Geary's transformation to approximate Gaussianity in the previous section is actually exact and not dependent on the positivity of ''Y''. The offset result is also consistent with the "Cauchy" correlated zero-mean Gaussian ratio distribution in the first section. Marsaglia has applied the same result but using a nonlinear method to achieve it.

## Complex normal ratio

The ratio of correlated zero-mean circularly symmetric complex normal distributed variables was determined by Baxley et al. The joint distribution of ''x'', ''y'' is

: $f_{x,y}(x,y) = \frac{1}{\pi^2 |\Sigma|} \exp\left(-\begin{pmatrix} x \\ y \end{pmatrix}^H \Sigma^{-1} \begin{pmatrix} x \\ y \end{pmatrix}\right)$

where

: $\Sigma = \begin{pmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho^* \sigma_x \sigma_y & \sigma_y^2 \end{pmatrix}, \;\; x = x_r + i x_i, \;\; y = y_r + i y_i$

$(\cdot)^H$ is the Hermitian transpose and

: $\rho = \rho_r + i \rho_i = \operatorname{E}\left(\frac{x y^*}{\sigma_x \sigma_y}\right) \in \mathbb{C}, \;\;\; |\rho| \le 1$

The PDF of $Z = X/Y$ is found to be

: $\begin{align} f_{z_r, z_i}(z_r, z_i) &= \frac{1 - |\rho|^2}{\pi \sigma_x^2 \sigma_y^2} \left( \frac{|z|^2}{\sigma_x^2} + \frac{1}{\sigma_y^2} - 2 \frac{\rho_r z_r - \rho_i z_i}{\sigma_x \sigma_y} \right)^{-2} \\ &= \frac{1 - |\rho|^2}{\pi \sigma_x^2 \sigma_y^2} \left( \;\; \left| \frac{z}{\sigma_x} - \frac{\rho^*}{\sigma_y} \right|^2 + \frac{1 - |\rho|^2}{\sigma_y^2} \right)^{-2} \end{align}$

In the usual event that $\sigma_x = \sigma_y$ we get

: $f_{z_r, z_i}(z_r, z_i) = \frac{1 - |\rho|^2}{\pi \left( |z - \rho^*|^2 + 1 - |\rho|^2 \right)^2}$

Further closed-form results for the CDF are also given. The graph shows the pdf of the ratio of two complex normal variables with a correlation coefficient of $\rho = 0.7 \exp(i \pi/4)$. The pdf peak occurs at roughly the complex conjugate of a scaled-down $\rho$.

# Ratio of log-normal

The ratio of independent or correlated log-normals is log-normal. This follows because if $X_1$ and $X_2$ are log-normally distributed, then $\ln(X_1)$ and $\ln(X_2)$ are normally distributed. If they are independent, or their logarithms follow a bivariate normal distribution, then the logarithm of their ratio is the difference of independent or correlated normally distributed random variables, which is normally distributed. (Note, however, that $X_1$ and $X_2$ can be individually log-normally distributed without having a bivariate log-normal distribution: as of 2022-06-08, the Wikipedia article on "Copula (probability theory)" includes a density and contour plot of two normal marginals joined with a Gumbel copula, where the joint distribution is not bivariate normal.)

This is important for many applications requiring the ratio of random variables that must be positive, where the joint distribution of $X_1$ and $X_2$ is adequately approximated by a log-normal. This is a common result of the multiplicative central limit theorem, also known as Gibrat's law, when $X_i$ is the result of an accumulation of many small percentage changes and must be positive and approximately log-normally distributed.
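A minimal sketch of this closure property for the independent case: the log of the ratio of two log-normals should be normal with mean $\mu_1 - \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$ (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
mu1, s1, mu2, s2 = 0.5, 0.8, -0.2, 0.3
n = 500_000

log_ratio = np.log(rng.lognormal(mu1, s1, n) / rng.lognormal(mu2, s2, n))
print("mean:", log_ratio.mean(), " expected:", mu1 - mu2)
print("var :", log_ratio.var(),  " expected:", s1**2 + s2**2)
```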

# Uniform ratio distribution

With two independent random variables following a uniform distribution, e.g.,

: $p_X(x) = \begin{cases} 1 & 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}$

the ratio distribution becomes

: $p_Z(z) = \begin{cases} 1/2 \qquad & 0 < z < 1 \\ \frac{1}{2 z^2} \qquad & z \geq 1 \\ 0 \qquad & \text{otherwise} \end{cases}$
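A quick Monte Carlo check of this density, using the implied probabilities $P(Z < 1) = 1/2$ and $P(1 \le Z < 2) = 1/4$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
z = rng.random(n) / rng.random(n)

print("P(Z < 1)      ~", (z < 1).mean(), " (theory 0.5)")
print("P(1 <= Z < 2) ~", ((z >= 1) & (z < 2)).mean(), " (theory 0.25)")
```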

# Cauchy ratio distribution

If two independent random variables, ''X'' and ''Y'', each follow a Cauchy distribution with median equal to zero and shape factor $a$,

: $p_X(x \mid a) = \frac{a}{\pi (a^2 + x^2)},$

then the ratio distribution for the random variable $Z = X/Y$ is

: $p_Z(z \mid a) = \frac{1}{\pi^2 (z^2 - 1)} \ln(z^2).$

This distribution does not depend on $a$, and the result stated by Springer (p. 158, Question 4.6) is not correct. The ratio distribution is similar to but not the same as the product distribution of the random variable $W = XY$:

: $p_W(w \mid a) = \frac{a^2}{\pi^2 (w^2 - a^4)} \ln\left(\frac{w^2}{a^4}\right).$

More generally, if two independent random variables ''X'' and ''Y'' each follow a Cauchy distribution with median equal to zero and shape factors $a$ and $b$ respectively, then:

1. The ratio distribution for the random variable $Z = X/Y$ is

: $p_Z(z \mid a, b) = \frac{ab}{\pi^2 (b^2 z^2 - a^2)} \ln\left(\frac{b^2 z^2}{a^2}\right).$

2. The product distribution for the random variable $W = XY$ is

: $p_W(w \mid a, b) = \frac{ab}{\pi^2 (w^2 - a^2 b^2)} \ln\left(\frac{w^2}{a^2 b^2}\right).$

The result for the ratio distribution can be obtained from the product distribution by replacing $b$ with $\frac{1}{b}.$
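The general ratio density can be checked numerically; a sketch (shape factors a = 2, b = 0.5 arbitrary) comparing quadrature over the stated pdf with simulation:

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(8)
a, b = 2.0, 0.5
n = 1_000_000

# X ~ Cauchy(0, a), Y ~ Cauchy(0, b) via scaling standard Cauchy samples
z = (a * rng.standard_cauchy(n)) / (b * rng.standard_cauchy(n))

def pdf(x):
    # General ratio density p_Z(z | a, b) as stated above (z > 0 branch)
    return a * b / (np.pi**2 * (b**2 * x**2 - a**2)) * np.log(b**2 * x**2 / a**2)

# The integrand is continuous but evaluates as 0/0 at x = a/b, so split there
p = integrate.quad(pdf, 1, a / b)[0] + integrate.quad(pdf, a / b, 8)[0]
print("theory P(1<=Z<=8):", p, " simulated:", ((z >= 1) & (z <= 8)).mean())
```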

# Ratio of standard normal to standard uniform

If ''X'' has a standard normal distribution and ''Y'' has a standard uniform distribution, then ''Z'' = ''X''/''Y'' has a distribution known as the ''slash distribution'', with probability density function

: $p(z) = \begin{cases} \left[\varphi(0) - \varphi(z)\right]/z^2 \qquad & z \neq 0 \\ \varphi(0)/2 \qquad & z = 0, \end{cases}$

where φ(''z'') is the probability density function of the standard normal distribution.
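A sketch verifying the slash density against simulation (bin count and range are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(14)
n = 1_000_000
z = rng.standard_normal(n) / rng.random(n)

def slash_pdf(z):
    # p(z) = (phi(0) - phi(z)) / z^2, with the limiting value phi(0)/2 at z = 0
    return np.where(z == 0, norm.pdf(0) / 2, (norm.pdf(0) - norm.pdf(z)) / z**2)

# Normalize by the full sample size since the heavy tails extend past the range
counts, edges = np.histogram(z, bins=100, range=(-5, 5))
mid = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (z.size * (edges[1] - edges[0]))
print("max abs pdf deviation:", np.abs(emp - slash_pdf(mid)).max())
```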

# Chi-squared, Gamma, Beta distributions

Let ''G'' be a normal(0,1) distribution, and let ''Y'' and ''Z'' be chi-squared distributions with ''m'' and ''n'' degrees of freedom respectively, all independent, with $f_\chi(x, k) = \frac{x^{k/2 - 1} e^{-x/2}}{2^{k/2} \Gamma(k/2)}$. Then

: $\frac{G}{\sqrt{Y/m}} \sim t_m$, the Student's ''t'' distribution

: $\frac{Y/m}{Z/n} = F_{m,n}$, i.e. Fisher's F-test distribution

: $\frac{Y}{Y + Z} \sim \beta\left( \tfrac{m}{2}, \tfrac{n}{2} \right)$, the beta distribution

: $\;\;\frac{Y}{Z} \sim \beta'\left( \tfrac{m}{2}, \tfrac{n}{2} \right)$, the ''standard'' beta prime distribution
If $V_1 \sim {\chi'}_{k_1}^2(\lambda)$, a noncentral chi-squared distribution, and $V_2 \sim {\chi'}_{k_2}^2(0)$, and $V_1$ is independent of $V_2$, then

: $\frac{V_1/k_1}{V_2/k_2} \sim F'_{k_1, k_2}(\lambda)$, a noncentral F-distribution.

$\frac{m}{n} F'_{m,n} = \beta'\left( \tfrac{m}{2}, \tfrac{n}{2} \right) \text{ or } F'_{m,n} = \beta'\left( \tfrac{m}{2}, \tfrac{n}{2}, 1, \tfrac{n}{m} \right)$ defines $F'_{m,n}$, Fisher's F density distribution, the PDF of the ratio of two chi-squares with ''m'', ''n'' degrees of freedom. The CDF of the Fisher density, found in F-tables, is defined in the beta prime distribution article. If we enter an ''F''-test table with ''m'' = 3, ''n'' = 4 and 5% probability in the right tail, the critical value is found to be 6.59. This coincides with the integral

: $F_{3,4}(6.59) = \int_{6.59}^\infty \beta'\left(x; \tfrac{3}{2}, \tfrac{4}{2}, 1, \tfrac{4}{3} \right) dx = 0.05$

For gamma distributions ''U'' and ''V'' with arbitrary shape parameters $\alpha_1$ and $\alpha_2$ and their scale parameters both set to unity, that is, $U \sim \Gamma(\alpha_1, 1), V \sim \Gamma(\alpha_2, 1)$, where $\Gamma(x; \alpha, 1) = \frac{x^{\alpha - 1} e^{-x}}{\Gamma(\alpha)}$, then

: $\frac{U}{U + V} \sim \beta(\alpha_1, \alpha_2), \qquad \text{mean} = \frac{\alpha_1}{\alpha_1 + \alpha_2}$

: $\frac{U}{V} \sim \beta'(\alpha_1, \alpha_2), \qquad \qquad \text{mean} = \frac{\alpha_1}{\alpha_2 - 1}, \; \alpha_2 > 1$

: $\frac{V}{U} \sim \beta'(\alpha_2, \alpha_1), \qquad \qquad \text{mean} = \frac{\alpha_2}{\alpha_1 - 1}, \; \alpha_1 > 1$

If $U \sim \Gamma(x; \alpha, 1)$, then $\theta U \sim \Gamma(x; \alpha, \theta) = \frac{x^{\alpha - 1} e^{-x/\theta}}{\theta^\alpha \Gamma(\alpha)}$. Note that here ''θ'' is a scale parameter, rather than a rate parameter. If $U \sim \Gamma(\alpha_1, \theta_1), \; V \sim \Gamma(\alpha_2, \theta_2)$, then by rescaling the $\theta$ parameter to unity we have

: $\frac{U}{U + \frac{\theta_1}{\theta_2} V} = \frac{U/\theta_1}{U/\theta_1 + V/\theta_2} \sim \beta(\alpha_1, \alpha_2)$

: $\frac{U}{V} \frac{\theta_2}{\theta_1} = \frac{U/\theta_1}{V/\theta_2} \sim \beta'(\alpha_1, \alpha_2)$

Thus

: $\frac{U}{V} \sim \beta'\left(\alpha_1, \alpha_2, 1, \tfrac{\theta_1}{\theta_2}\right)$

in which $\beta'(\alpha, \beta, p, q)$ represents the ''generalised'' beta prime distribution. In the foregoing it is apparent that if $X \sim \beta'(\alpha_1, \alpha_2, 1, 1) \equiv \beta'(\alpha_1, \alpha_2)$ then $\theta X \sim \beta'(\alpha_1, \alpha_2, 1, \theta)$. More explicitly, since

: $\beta'(x; \alpha_1, \alpha_2, 1, R) = \frac{1}{R} \beta'\left(\frac{x}{R}; \alpha_1, \alpha_2\right),$

if $U \sim \Gamma(\alpha_1, \theta_1), V \sim \Gamma(\alpha_2, \theta_2)$ then

: $\frac{U}{V} \sim \frac{1}{R} \beta'\left(\frac{x}{R}; \alpha_1, \alpha_2\right) = \frac{1}{R\, B(\alpha_1, \alpha_2)} \cdot \frac{\left(\frac{x}{R}\right)^{\alpha_1 - 1}}{\left(1 + \frac{x}{R}\right)^{\alpha_1 + \alpha_2}}, \;\; x \ge 0$

where

: $R = \frac{\theta_1}{\theta_2}, \; \;\; B(\alpha_1, \alpha_2) = \frac{\Gamma(\alpha_1)\Gamma(\alpha_2)}{\Gamma(\alpha_1 + \alpha_2)}$
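The scaled beta prime result for differently scaled gammas is easy to test; a sketch using SciPy's betaprime with scale $R = \theta_1/\theta_2$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a1, th1, a2, th2 = 3.0, 2.0, 4.0, 5.0
n = 500_000

w = rng.gamma(a1, th1, n) / rng.gamma(a2, th2, n)  # numpy gamma takes (shape, scale)

# U/V should be beta-prime(a1, a2) scaled by R = theta1/theta2
R = th1 / th2
print(stats.kstest(w, stats.betaprime(a1, a2, scale=R).cdf))
```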

# Rayleigh distributions

If ''X'', ''Y'' are independent samples from the Rayleigh distribution $f_r(r) = (r/\sigma^2) e^{-r^2/2\sigma^2}, \;\; r \ge 0$, the ratio ''Z'' = ''X''/''Y'' follows the distribution

: $f_z(z) = \frac{2z}{(1 + z^2)^2}, \;\; z \ge 0$

and has cdf

: $F_z(z) = 1 - \frac{1}{1 + z^2} = \frac{z^2}{1 + z^2}, \;\;\; z \ge 0$

The Rayleigh distribution has scaling as its only parameter. The distribution of $Z = \alpha X/Y$ follows

: $f_z(z, \alpha) = \frac{2 \alpha^2 z}{(\alpha^2 + z^2)^2}, \;\; z > 0$

and has cdf

: $F_z(z, \alpha) = \frac{z^2}{\alpha^2 + z^2}, \;\;\; z \ge 0$
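A minimal check of the Rayleigh ratio cdf $F_z(z) = z^2/(1 + z^2)$; the scale σ is arbitrary since the ratio is scale-free:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 500_000
sigma = 2.0  # arbitrary: cancels in the ratio

z = rng.rayleigh(sigma, n) / rng.rayleigh(sigma, n)
for q in (0.5, 1.0, 2.0):
    print(f"q={q}: empirical {(z <= q).mean():.4f}  theory {q**2/(1 + q**2):.4f}")
```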

# Fractional gamma distributions (including chi, chi-squared, exponential, Rayleigh and Weibull)

The generalized gamma distribution is

: $f(x; a, d, r) = \frac{r}{\Gamma(d/r)\, a^d} x^{d-1} e^{-(x/a)^r}, \; x \ge 0; \;\; a, \; d, \; r > 0$

which includes the regular gamma, chi, chi-squared, exponential, Rayleigh, Nakagami and Weibull distributions involving fractional powers. Note that here ''a'' is a scale parameter, rather than a rate parameter; ''d'' is a shape parameter.

: If $U \sim f(x; a_1, d_1, r), \; \; V \sim f(x; a_2, d_2, r)$ are independent and $W = U/V$,
: then $g(w) = \frac{r \left(\frac{a_2}{a_1}\right)^{d_1}}{B\left(\frac{d_1}{r}, \frac{d_2}{r}\right)} \frac{w^{d_1 - 1}}{\left(1 + \left(\frac{a_2}{a_1}\right)^r w^r\right)^{\frac{d_1 + d_2}{r}}}, \; \; w > 0$
: where $B(u, v) = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u + v)}$

## Modelling a mixture of different scaling factors

In the ratios above, gamma samples ''U'', ''V'' may have differing sample sizes $\alpha_1, \alpha_2$ but must be drawn from the same distribution $\frac{x^{\alpha - 1} e^{-x/\theta}}{\theta^\alpha \Gamma(\alpha)}$ with equal scaling $\theta$. In situations where ''U'' and ''V'' are differently scaled, a variables transformation allows the modified random ratio pdf to be determined. Let $X = \frac{U}{U + V} = \frac{1}{1 + B}$ where $U \sim \Gamma(\alpha_1, \theta), V \sim \Gamma(\alpha_2, \theta)$, $\theta$ arbitrary and, from above, $X \sim \beta(\alpha_1, \alpha_2)$, $B = V/U \sim \beta'(\alpha_2, \alpha_1)$.

Rescale ''V'' arbitrarily, defining

: $Y \sim \frac{U}{U + \varphi V} = \frac{1}{1 + \varphi B}, \;\; 0 \le \varphi \le \infty$

We have $B = \frac{1 - X}{X}$ and substitution into ''Y'' gives

: $Y = \frac{X}{\varphi + (1 - \varphi)X}, \qquad dY/dX = \frac{\varphi}{(\varphi + (1 - \varphi)X)^2}$

Transforming ''X'' to ''Y'' gives

: $f_Y(Y) = \frac{f_X(X)}{|dY/dX|} = \frac{\beta(X; \alpha_1, \alpha_2)}{\varphi / (\varphi + (1 - \varphi)X)^2}$

Noting $X = \frac{\varphi Y}{1 - (1 - \varphi)Y}$ we finally have

: $f_Y(Y, \varphi) = \frac{\varphi}{\left[1 - (1 - \varphi)Y\right]^2} \beta\left(\frac{\varphi Y}{1 - (1 - \varphi)Y}, \alpha_1, \alpha_2\right), \;\;\; 0 \le Y \le 1$

Thus, if $U \sim \Gamma(\alpha_1, \theta_1)$ and $V \sim \Gamma(\alpha_2, \theta_2)$, then $Y = \frac{U}{U + V}$ is distributed as $f_Y(Y, \varphi)$ with $\varphi = \frac{\theta_2}{\theta_1}.$

The distribution of ''Y'' is limited here to the interval [0,1]. It can be generalized by scaling such that if $Y \sim f_Y(Y, \varphi)$ then

: $\Theta Y \sim f_Y(Y, \varphi, \Theta)$

where

: $f_Y(Y, \varphi, \Theta) = \frac{\varphi / \Theta}{\left[1 - (1 - \varphi)Y/\Theta\right]^2} \beta\left(\frac{\varphi Y/\Theta}{1 - (1 - \varphi)Y/\Theta}, \alpha_1, \alpha_2\right), \;\;\; 0 \le Y \le \Theta$

: $\Theta Y$ is then a sample from $\frac{\Theta U}{U + \varphi V}$

# Reciprocals of samples from beta distributions

Though not ratio distributions of two variables, the following identities for one variable are useful:

: If $X \sim \beta(\alpha, \beta)$ then $x = \frac{X}{1 - X} \sim \beta'(\alpha, \beta)$
: If $Y \sim \beta'(\alpha, \beta)$ then $y = \frac{1}{Y} \sim \beta'(\beta, \alpha)$

Combining the latter two equations yields

: If $X \sim \beta(\alpha, \beta)$ then $x = \frac{1}{X} - 1 \sim \beta'(\beta, \alpha)$.
: If $Y \sim \beta'(\alpha, \beta)$ then $y = \frac{Y}{1 + Y} \sim \beta(\alpha, \beta)$

Since $\frac{1}{1 + Y} = \frac{1/Y}{1/Y + 1} \sim \beta(\beta, \alpha)$, then

: $1 + Y \sim \left\{\beta(\beta, \alpha)\right\}^{-1}$, the distribution of the reciprocals of $\beta(\beta, \alpha)$ samples.

If $U \sim \Gamma(\alpha, 1), V \sim \Gamma(\beta, 1)$ then $\frac{U}{V} \sim \beta'(\alpha, \beta)$ and

: $\frac{U}{U + V} = \frac{U/V}{1 + U/V} \sim \beta(\alpha, \beta)$

Further results can be found in the Inverse distribution article.

* If $X, \; Y$ are independent exponential random variables with mean ''μ'', then ''X'' − ''Y'' is a double exponential random variable with mean 0 and scale ''μ''.
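The first two identities are cheap to verify with SciPy; a sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
a, b = 2.5, 4.0
n = 300_000

x = rng.beta(a, b, n)
print(stats.kstest(x / (1 - x), stats.betaprime(a, b).cdf))   # X/(1-X) ~ beta'(a, b)

y = stats.betaprime.rvs(a, b, size=n, random_state=rng)
print(stats.kstest(1 / y, stats.betaprime(b, a).cdf))         # 1/Y ~ beta'(b, a)
```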

# Binomial distribution

This result was first derived by Katz et al. in 1978 (Katz D. ''et al''. (1978) Obtaining confidence intervals for the risk ratio in cohort studies. Biometrics 34:469–474). Suppose ''X'' ~ Binomial(''n'', ''p''₁) and ''Y'' ~ Binomial(''m'', ''p''₂), with ''X'', ''Y'' independent. Let ''T'' = (''X''/''n'')/(''Y''/''m''). Then log(''T'') is approximately normally distributed with mean log(''p''₁/''p''₂) and variance ((1/''p''₁) − 1)/''n'' + ((1/''p''₂) − 1)/''m''. The binomial ratio distribution is of significance in clinical trials: if the distribution of ''T'' is known as above, the probability of a given ratio arising purely by chance can be estimated, i.e. the probability of a false positive trial. A number of papers compare the robustness of different approximations for the binomial ratio.
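A sketch of the Katz log-ratio approximation (sample sizes and proportions arbitrary, chosen so that ''Y'' = 0 is vanishingly unlikely):

```python
import numpy as np

rng = np.random.default_rng(12)
n, m, p1, p2 = 200, 250, 0.30, 0.20
trials = 200_000

t = (rng.binomial(n, p1, trials) / n) / (rng.binomial(m, p2, trials) / m)
log_t = np.log(t)  # the denominator is zero only with negligible probability here

print("mean:", log_t.mean(), " expected:", np.log(p1 / p2))
print("var :", log_t.var(),  " expected:", (1/p1 - 1)/n + (1/p2 - 1)/m)
```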

# Poisson and truncated Poisson distributions

In the ratio of Poisson variables ''R'' = ''X''/''Y'' there is a problem that ''Y'' is zero with finite probability, so ''R'' is undefined. To counter this, we consider the truncated, or censored, ratio ''R''′ = ''X''/''Y''′ where zero samples of ''Y'' are discounted. Moreover, in many medical-type surveys, there are systematic problems with the reliability of the zero samples of both ''X'' and ''Y'', and it may be good practice to ignore the zero samples anyway.

The probability of a null Poisson sample being $e^{-\lambda}$, the generic pdf of a left truncated Poisson distribution is

: $\tilde p_x(x; \lambda) = \frac{1}{1 - e^{-\lambda}} \frac{e^{-\lambda} \lambda^x}{x!}, \;\;\; x \in 1, 2, 3, \cdots$

which sums to unity. Following Cohen, for ''n'' independent trials, the multidimensional truncated pdf is

: $\tilde p(x_1, x_2, \dots, x_n; \lambda) = \frac{1}{(1 - e^{-\lambda})^n} \prod_{i=1}^n \frac{e^{-\lambda} \lambda^{x_i}}{x_i!}, \;\;\; x_i \in 1, 2, 3, \cdots$

and the log likelihood becomes

: $L = \ln(\tilde p) = -n \ln(1 - e^{-\lambda}) - n\lambda + \ln(\lambda) \sum_1^n x_i - \ln \prod_1^n (x_i!), \;\;\; x_i \in 1, 2, 3, \cdots$

On differentiation we get

: $dL/d\lambda = \frac{-n e^{-\lambda}}{1 - e^{-\lambda}} - n + \frac{1}{\lambda} \sum_{i=1}^n x_i$

and setting to zero gives the maximum likelihood estimate $\hat \lambda_{ML}$

: $\frac{\hat \lambda_{ML}}{1 - e^{-\hat \lambda_{ML}}} = \frac{1}{n} \sum_{i=1}^n x_i = \bar x$

Note that as $\hat \lambda \to 0$ then $\bar x \to 1$, so the truncated maximum likelihood $\lambda$ estimate, though correct for both truncated and untruncated distributions, gives a truncated mean $\bar x$ value which is highly biased relative to the untruncated one. Nevertheless it appears that $\bar x$ is a sufficient statistic for $\lambda$ since $\hat \lambda_{ML}$ depends on the data only through the sample mean $\bar x = \frac{1}{n} \sum_{i=1}^n x_i$ in the previous equation, which is consistent with the methodology of the conventional Poisson distribution. Absent any closed form solutions, the following approximate reversion for truncated $\lambda$ is valid over the whole range $0 \le \lambda \le \infty; \; 1 \le \bar x \le \infty$:

: $\hat \lambda = \bar x - e^{-(\bar x - 1)} - 0.07 (\bar x - 1) e^{-0.666(\bar x - 1)} + \epsilon, \;\;\; |\epsilon| < 0.006$

which compares with the non-truncated version which is simply $\hat \lambda = \bar x$. Taking the ratio $R = \hat \lambda_X / \hat \lambda_Y$ is a valid operation even though $\hat \lambda_X$ may use a non-truncated model while $\hat \lambda_Y$ has a left-truncated one.

The asymptotic large-$n\lambda$ variance of $\hat \lambda$ (and Cramér–Rao bound) is

: $\operatorname{Var}(\hat \lambda) \ge -\left(\operatorname{E}\left[\frac{\delta^2 L}{\delta \lambda^2}\right]\right)^{-1}$

in which substituting ''L'' gives

: $\frac{\delta^2 L}{\delta \lambda^2} = -n \left[\frac{\bar x}{\lambda^2} - \frac{e^{-\lambda}}{(1 - e^{-\lambda})^2}\right]$

Then substituting $\bar x$ from the equation above, we get Cohen's variance estimate

: $\operatorname{Var}(\hat \lambda) \ge \frac{\lambda}{n} \frac{(1 - e^{-\lambda})^2}{1 - (\lambda + 1) e^{-\lambda}}$

The variance of the point estimate of the mean $\lambda$, on the basis of ''n'' trials, decreases asymptotically to zero as ''n'' increases to infinity. For small $\lambda$ it diverges from the truncated pdf variance in Springael, for example, who quotes a variance of

: $\operatorname{Var}(\lambda) = \frac{\lambda/n}{1 - e^{-\lambda}} \left[1 - \frac{\lambda e^{-\lambda}}{1 - e^{-\lambda}}\right]$

for ''n'' samples in the left-truncated pdf shown at the top of this section. Cohen showed that the variance of the estimate relative to the variance of the pdf, $\operatorname{Var}(\hat \lambda) / \operatorname{Var}(\lambda)$, ranges from 1 for large $\lambda$ (100% efficient) up to 2 as $\lambda$ approaches zero (50% efficient).

These mean and variance parameter estimates, together with parallel estimates for ''X'', can be applied to Normal or Binomial approximations for the Poisson ratio. Samples from trials may not be a good fit for the Poisson process; a further discussion of Poisson truncation is by Dietz and Böhning, and there is a Zero-truncated Poisson distribution Wikipedia entry.
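The maximum-likelihood equation $\hat \lambda/(1 - e^{-\hat \lambda}) = \bar x$ has no closed form but is easy to solve numerically; a sketch using a root finder, with the approximate reversion above as a cross-check:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(13)
lam_true = 1.5

# Zero-truncated Poisson sample: draw Poisson and discard the zeros
x = rng.poisson(lam_true, 50_000)
x = x[x > 0]
xbar = x.mean()

# Solve lambda / (1 - exp(-lambda)) = xbar for the truncated MLE
lam_hat = brentq(lambda lam: lam / (1 - np.exp(-lam)) - xbar, 1e-9, 100.0)
approx = xbar - np.exp(-(xbar - 1)) - 0.07 * (xbar - 1) * np.exp(-0.666 * (xbar - 1))
print("MLE:", lam_hat, " approximate reversion:", approx, " true:", lam_true)
```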

# Double Lomax distribution

This distribution is the ratio of two Laplace distributions (Bindu P and Sangita K (2015) Double Lomax distribution and its applications. Statistica LXXV (3) 331–342). Let ''X'' and ''Y'' be standard Laplace identically distributed random variables and let ''z'' = ''X''/''Y''. Then the probability distribution of ''z'' is

: $f(x) = \frac{1}{2(1 + |x|)^2}$

Let the mean of ''X'' and ''Y'' be ''a''. Then the standard double Lomax distribution is symmetric around ''a''. This distribution has an infinite mean and variance. If ''Z'' has a standard double Lomax distribution, then 1/''Z'' also has a standard double Lomax distribution. The standard Lomax distribution is unimodal and has heavier tails than the Laplace distribution. For 0 < ''a'' < 1, the ''a''-th moment exists:

: $E(Z^a) = \Gamma(1 + a)\, \Gamma(1 - a) = \frac{\pi a}{\sin(\pi a)}$

where Γ is the gamma function.

# Ratio distributions in multivariate analysis

Ratio distributions also appear in multivariate analysis. If the random matrices '''X''' and '''Y''' follow a Wishart distribution then the ratio of the determinants

: $\varphi = |\mathbf{X}| / |\mathbf{Y}|$

is proportional to the product of independent F random variables. In the case where '''X''' and '''Y''' are from independent standardized Wishart distributions then the ratio

: $\Lambda = \frac{|\mathbf{X}|}{|\mathbf{X} + \mathbf{Y}|}$

has a Wilks' lambda distribution.

## Ratios of quadratic forms involving Wishart matrices

A probability distribution can be derived from a random quadratic form

: $r = V^T A V$

where $V$ and/or $A$ are random. If ''A'' is the inverse of another matrix ''B'' then $r = V^T B^{-1} V$ is a random ratio in some sense, frequently arising in least squares estimation problems.

In the Gaussian case, if ''A'' is a matrix drawn from a complex Wishart distribution $A \sim W_C(A_0, k, p)$ of dimensionality ''p'' × ''p'' and ''k'' degrees of freedom with $k \ge p$, while $V$ is an arbitrary complex vector with Hermitian (conjugate) transpose $(\cdot)^H$, the ratio

: $r = k \frac{V^H A_0^{-1} V}{V^H A^{-1} V}$

follows the Gamma distribution

: $p_1(r) = \frac{r^{k-p} e^{-r}}{\Gamma(k - p + 1)}, \;\;\; r \ge 0$

The result arises in least squares adaptive Wiener filtering - see eqn (A13) of the original article. Note that the original article contends that the distribution is $p_1(r) = r^{k-p} \; e^{-r} / \Gamma(k - p)$.

Similarly, for full-rank ($k \ge p$) zero-mean real-valued Wishart matrix samples $W \sim W(\Sigma, k, p)$, and ''V'' a random vector independent of ''W'', the ratio

: $r = \frac{V^T \Sigma^{-1} V}{V^T W^{-1} V} \sim \chi^2_{k - p + 1}$

This result is usually attributed to Muirhead (1982).

Given a complex Wishart matrix $A \sim W_C(I, k, p)$ and an arbitrary vector $V$, a related ratio of quadratic forms follows the Beta distribution (see eqn (47) of the reference)

: $p_2(\rho) = (1 - \rho)^{k - p + 1} \rho^{p - 2} \frac{\Gamma(k + 1)}{\Gamma(p - 1)\Gamma(k - p + 2)}, \;\;\; 0 \le \rho \le 1$

The result arises in the performance analysis of constrained least squares filtering and derives from a more complex but ultimately equivalent ratio involving $A \sim W_C(A_0, n, p)$. In its simplest form, if $A \sim W_C(I, k, p)$, the ratio of the squared modulus of the (1,1) element of $A^{-1}$ to the sum of modulus squares of the whole top row of elements has distribution

: $\rho \sim \beta(p - 1, k - p + 2)$

# See also

* Relationships among probability distributions
* Inverse distribution (also known as reciprocal distribution)
* Product distribution
* Ratio estimator
* Slash distribution
