Distribution Of The Product Of Two Random Variables
A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables ''X'' and ''Y'', the distribution of the random variable ''Z'' that is formed as the product Z = XY is a ''product distribution''.


Algebra of random variables

The product is one type of algebra for random variables: related to the product distribution are the ratio distribution, the sum distribution (see List of convolutions of probability distributions) and the difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's book from 1979, ''The Algebra of Random Variables''.


Derivation for independent random variables

If X and Y are two independent, continuous random variables, described by probability density functions f_X and f_Y, then the probability density function of Z = XY is
: f_Z(z) = \int_{-\infty}^\infty f_X(x)\, f_Y(z/x)\, \frac{1}{|x|}\, dx.
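A quick numerical sanity check of this formula can be made by comparing a quadrature evaluation of the integral with a Monte Carlo estimate. The sketch below is illustrative only: it assumes X and Y are independent standard normal variables, and the helper product_pdf, the sample size, the window half-width h and the test points are arbitrary choices.

    import numpy as np
    from scipy import integrate, stats

    def product_pdf(z, f_x, f_y):
        # f_Z(z) = integral of f_X(x) f_Y(z/x) / |x| dx, split around the point x = 0
        integrand = lambda x: f_x(x) * f_y(z / x) / abs(x)
        left, _ = integrate.quad(integrand, -np.inf, -1e-12)
        right, _ = integrate.quad(integrand, 1e-12, np.inf)
        return left + right

    rng = np.random.default_rng(0)
    samples = rng.standard_normal(200_000) * rng.standard_normal(200_000)

    for z in (0.25, 1.0, 2.5):
        pdf_quad = product_pdf(z, stats.norm.pdf, stats.norm.pdf)
        h = 0.05  # half-width of the counting window for the Monte Carlo density estimate
        pdf_mc = np.mean(np.abs(samples - z) < h) / (2 * h)
        print(f"z = {z}: quadrature {pdf_quad:.4f}, Monte Carlo {pdf_mc:.4f}")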


Proof

We first write the cumulative distribution function of Z starting with its definition:
: \begin{align} F_Z(z) & \,\stackrel{\text{def}}{=}\ \mathbb{P}(Z\leq z) \\ & = \mathbb{P}(XY\leq z) \\ & = \mathbb{P}(XY\leq z,\, X \geq 0) + \mathbb{P}(XY\leq z,\, X \leq 0) \\ & = \mathbb{P}(Y\leq z/X,\, X \geq 0) + \mathbb{P}(Y\geq z/X,\, X \leq 0) \\ & = \int_0^\infty f_X(x)\int_{-\infty}^{z/x} f_Y(y)\, dy\, dx + \int_{-\infty}^0 f_X(x)\int_{z/x}^\infty f_Y(y)\, dy\, dx. \end{align}
We find the desired probability density function by taking the derivative of both sides with respect to z. Since on the right hand side z appears only in the integration limits, the derivative is easily performed using the fundamental theorem of calculus and the chain rule. (Note the negative sign that is needed when the variable occurs in the lower limit of the integration.)
: \begin{align} f_Z(z) & = \int_0^\infty f_X(x)\, f_Y(z/x)\,\frac{1}{x}\, dx - \int_{-\infty}^0 f_X(x)\, f_Y(z/x)\,\frac{1}{x}\, dx \\ & = \int_0^\infty f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\, dx + \int_{-\infty}^0 f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\, dx \\ & = \int_{-\infty}^\infty f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\, dx, \end{align}
where the absolute value is used to conveniently combine the two terms.


Alternate proof

A faster, more compact proof begins with the same step of writing the cumulative distribution of Z starting with its definition:
: \begin{align} F_Z(z) & \,\stackrel{\text{def}}{=}\ \mathbb{P}(Z\leq z) \\ & = \mathbb{P}(XY\leq z) \\ & = \int_{-\infty}^\infty \int_{-\infty}^\infty f_X(x)\, f_Y(y)\, u(z-xy)\, dy\, dx, \end{align}
where u(\cdot) is the Heaviside step function and serves to limit the region of integration to values of x and y satisfying xy\leq z. We find the desired probability density function by taking the derivative of both sides with respect to z:
: \begin{align} f_Z(z) & = \int_{-\infty}^\infty \int_{-\infty}^\infty f_X(x)\, f_Y(y)\,\delta(z-xy)\, dy\, dx \\ & = \int_{-\infty}^\infty f_X(x)\, f_Y(z/x)\left[\int_{-\infty}^\infty \delta(z-xy)\, dy\right] dx \\ & = \int_{-\infty}^\infty f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\, dx, \end{align}
where we utilize the translation and scaling properties of the Dirac delta function \delta.

A more intuitive description of the procedure is as follows. The joint pdf f_X(x) f_Y(y) exists in the x-y plane, and an arc of constant z value is given by the curve xy = z. To find the marginal probability f_Z(z) on this arc, integrate over increments of area dx\,dy\, f(x,y) on this contour. Starting with y = \frac{z}{x}, we have dy = -\frac{z}{x^2}\, dx = -\frac{y}{x}\, dx. So the probability increment is \delta p = f(x,y)\,dx\,|dy| = f_X(x)\, f_Y(z/x)\,\frac{y}{|x|}\,dx\,dx. Since z = yx implies dz = y\,dx, we can relate the probability increment to the z-increment, namely \delta p = f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\,dx\,dz. Then integration over x yields f_Z(z) = \int f_X(x)\, f_Y(z/x)\,\frac{1}{|x|}\, dx.


A Bayesian interpretation

Let X \sim f_X(x) be a random sample drawn from probability distribution f_X(x). Scaling X by \theta generates a sample from the scaled distribution \theta X \sim \frac{1}{|\theta|} f_X\left(\frac{x}{\theta}\right), which can be written as a conditional distribution g_X(x \mid \theta) = \frac{1}{|\theta|} f_X\left(\frac{x}{\theta}\right). Letting \theta be a random variable with pdf f_\theta(\theta), the distribution of the scaled sample becomes f_X(\theta x) = g_X(x \mid \theta)\, f_\theta(\theta) and, integrating out \theta, we get h_X(x) = \int_{-\infty}^\infty g_X(x \mid \theta)\, f_\theta(\theta)\, d\theta, so \theta X is drawn from this distribution: \theta X \sim h_X(x). However, substituting the definition of g we also have h_X(x) = \int_{-\infty}^\infty \frac{1}{|\theta|} f_X\left(\frac{x}{\theta}\right) f_\theta(\theta)\, d\theta, which has the same form as the product distribution above. Thus the Bayesian posterior distribution h_X(x) is the distribution of the product of the two independent random samples \theta and X. For the case of one variable being discrete, let \theta have probability P_i at levels \theta_i with \sum_i P_i = 1. The conditional density is f_X(x \mid \theta_i) = \frac{1}{|\theta_i|} f_X\left(\frac{x}{\theta_i}\right). Therefore f_X(\theta x) = \sum_i \frac{P_i}{|\theta_i|} f_X\left(\frac{x}{\theta_i}\right).


Expectation of product of random variables

When two random variables are statistically independent, the expectation of their product is the product of their expectations. This can be proved from the law of total expectation:
: \operatorname{E}(XY) = \operatorname{E}(\operatorname{E}(XY \mid Y))
In the inner expression, ''Y'' is a constant. Hence:
: \operatorname{E}(XY \mid Y) = Y\cdot\operatorname{E}[X \mid Y]
: \operatorname{E}(XY) = \operatorname{E}(Y\cdot\operatorname{E}[X \mid Y])
This is true even if ''X'' and ''Y'' are statistically dependent, in which case \operatorname{E}[X \mid Y] is a function of ''Y''. In the special case in which ''X'' and ''Y'' are statistically independent, it is a constant independent of ''Y''. Hence:
: \operatorname{E}(XY) = \operatorname{E}(Y\cdot\operatorname{E}[X])
: \operatorname{E}(XY) = \operatorname{E}(X)\cdot\operatorname{E}(Y)
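This identity is easy to check by simulation. In the sketch below the exponential and uniform factor distributions, their parameters and the sample size are arbitrary illustration choices, not part of the result.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.exponential(scale=2.0, size=1_000_000)   # E(X) = 2
    y = rng.uniform(0.0, 3.0, size=1_000_000)        # E(Y) = 1.5

    print(np.mean(x * y))            # ~ 3.0
    print(np.mean(x) * np.mean(y))   # ~ 3.0, in agreement with E(XY) = E(X) E(Y)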


Variance of the product of independent random variables

Let X, Y be uncorrelated random variables with means \mu_X, \mu_Y and variances \sigma_X^2, \sigma_Y^2. If, additionally, the random variables X^2 and Y^2 are uncorrelated, then the variance of the product ''XY'' is
: \operatorname{Var}(XY) = (\sigma_X^2 + \mu_X^2)(\sigma_Y^2 + \mu_Y^2) - \mu_X^2\,\mu_Y^2
In the case of the product of more than two variables, if X_1 \cdots X_n, \;\; n>2 are statistically independent then the variance of their product is
: \operatorname{Var}(X_1X_2\cdots X_n) = \prod_{i=1}^n (\sigma_i^2 + \mu_i^2) - \prod_{i=1}^n \mu_i^2
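A quick simulation check of the two-variable formula; the normal factor distributions and the parameter values below are arbitrary illustration choices.

    import numpy as np

    rng = np.random.default_rng(2)
    mu_x, sigma_x = 1.5, 0.7
    mu_y, sigma_y = -2.0, 1.2
    x = rng.normal(mu_x, sigma_x, size=2_000_000)
    y = rng.normal(mu_y, sigma_y, size=2_000_000)

    empirical = np.var(x * y)
    formula = (sigma_x**2 + mu_x**2) * (sigma_y**2 + mu_y**2) - mu_x**2 * mu_y**2
    print(empirical, formula)   # both ~ 5.91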


Characteristic function of product of random variables

Assume ''X'', ''Y'' are independent random variables. The characteristic function of ''X'' is \varphi_X(t), and the distribution of ''Y'' is known. Then from the law of total expectation, we have
: \begin{align} \varphi_Z(t) & = \operatorname{E}(e^{itXY}) \\ & = \operatorname{E}(\operatorname{E}(e^{itXY} \mid Y)) \\ & = \operatorname{E}(\varphi_X(tY)). \end{align}
If the characteristic functions and distributions of both ''X'' and ''Y'' are known, then alternatively, \varphi_Z(t) = \operatorname{E}(\varphi_Y(tX)) also holds.
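As an illustration, the sketch below checks \varphi_Z(t) = \operatorname{E}(\varphi_X(tY)) by Monte Carlo under the assumption that X and Y are standard normal, so that \varphi_X(t) = e^{-t^2/2} and, for this particular example, \varphi_Z(t) = (1+t^2)^{-1/2}; the sample size and the values of t are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000
    x, y = rng.standard_normal(n), rng.standard_normal(n)

    for t in (0.5, 1.0, 2.0):
        via_identity = np.mean(np.exp(-(t * y) ** 2 / 2))   # E[phi_X(tY)] with phi_X(t) = exp(-t^2/2)
        direct = np.mean(np.exp(1j * t * x * y)).real       # empirical E[exp(itXY)]
        closed_form = 1 / np.sqrt(1 + t ** 2)               # known value for this normal example
        print(t, via_identity, direct, closed_form)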


Mellin transform

The Mellin transform of a distribution f(x) with support ''only'' on x \ge 0 and having a random sample X is
: \mathcal{M}f(x) = \varphi(s) = \int_0^\infty x^{s-1} f(x)\, dx = \operatorname{E}[X^{s-1}].
The inverse transform is
: \mathcal{M}^{-1}\varphi(s) = f(x) = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} x^{-s}\,\varphi(s)\, ds.
If X and Y are two independent random samples from different distributions, then the Mellin transform of their product is equal to the product of their Mellin transforms:
: \mathcal{M}_{XY}(s) = \mathcal{M}_X(s)\,\mathcal{M}_Y(s)
If ''s'' is restricted to integer values, a simpler result is
: \operatorname{E}[(XY)^n] = \operatorname{E}[X^n]\;\operatorname{E}[Y^n]
Thus the moments of the random product XY are the product of the corresponding moments of X and Y, and this extends to non-integer moments, for example
: \operatorname{E}[(XY)^p] = \operatorname{E}[X^p]\;\operatorname{E}[Y^p] \text{ for non-integer } p.
The pdf of a function can be reconstructed from its moments using the saddlepoint approximation method. A further result is that for independent ''X'', ''Y''
: \operatorname{E}[X^p Y^q] = \operatorname{E}[X^p]\,\operatorname{E}[Y^q]

Gamma distribution example

To illustrate how the product of moments yields a much simpler result than finding the moments of the distribution of the product, let X, Y be sampled from two Gamma distributions, f_\Gamma(x;\theta,1) = \Gamma(\theta)^{-1} x^{\theta-1} e^{-x}, with parameters \theta = \alpha, \beta, whose moments are
: \operatorname{E}[X^p] = \int_0^\infty x^p f_\Gamma(x;\theta,1)\, dx = \frac{\Gamma(\theta+p)}{\Gamma(\theta)}.
Multiplying the corresponding moments gives the Mellin transform result
: \operatorname{E}[(XY)^p] = \operatorname{E}[X^p]\;\operatorname{E}[Y^p] = \frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}\;\frac{\Gamma(\beta+p)}{\Gamma(\beta)}
Independently, it is known that the product of two independent Gamma-distributed samples (~Gamma(\alpha,1) and Gamma(\beta,1)) has a K-distribution:
: f(z;\alpha,\beta) = 2\,\Gamma(\alpha)^{-1}\Gamma(\beta)^{-1}\, z^{\frac{\alpha+\beta}{2}-1} K_{\alpha-\beta}\!\left(2\sqrt{z}\right) = \frac{1}{\alpha\beta} f_K\!\left(\frac{z}{\alpha\beta};1,\alpha,\beta\right), \; z\ge 0
To find the moments of this, make the change of variable y = 2\sqrt{z}, simplifying similar integrals to
: \int_0^\infty z^p K_\nu(2\sqrt{z})\, dz = 2^{-2p-1}\int_0^\infty y^{2p+1} K_\nu(y)\, dy
thus
: 2\int_0^\infty z^{\frac{\alpha+\beta}{2}-1+p} K_{\alpha-\beta}(2\sqrt{z})\, dz = 2^{-(\alpha+\beta)-2p+2}\int_0^\infty y^{(\alpha+\beta)+2p-1} K_{\alpha-\beta}(y)\, dy
The definite integral
: \int_0^\infty y^\mu K_\nu(y)\, dy = 2^{\mu-1}\,\Gamma\left(\frac{1+\mu+\nu}{2}\right)\Gamma\left(\frac{1+\mu-\nu}{2}\right)
is well documented and we have finally
: \begin{align} \operatorname{E}[Z^p] & = \frac{2^{-(\alpha+\beta)-2p+2}}{\Gamma(\alpha)\,\Gamma(\beta)}\; 2^{(\alpha+\beta)+2p-2}\,\Gamma(\alpha+p)\,\Gamma(\beta+p) \\ \\ & = \frac{\Gamma(\alpha+p)\,\Gamma(\beta+p)}{\Gamma(\alpha)\,\Gamma(\beta)} \end{align}
which, after some difficulty, has agreed with the moment product result above.

If ''X'', ''Y'' are drawn independently from Gamma distributions with shape parameters \alpha, \; \beta then
: \operatorname{E}[X^p Y^q] = \operatorname{E}[X^p]\;\operatorname{E}[Y^q] = \frac{\Gamma(\alpha+p)}{\Gamma(\alpha)}\;\frac{\Gamma(\beta+q)}{\Gamma(\beta)}
This type of result is universally true, since for bivariate independent variables f_{X,Y}(x,y) = f_X(x)\, f_Y(y), thus
: \begin{align} \operatorname{E}[X^p Y^q] & = \int_{-\infty}^\infty \int_{-\infty}^\infty x^p y^q f_{X,Y}(x,y)\, dy\, dx \\ & = \int_{-\infty}^\infty x^p \Big[\int_{-\infty}^\infty y^q f_Y(y)\, dy\Big] f_X(x)\, dx \\ & = \int_{-\infty}^\infty x^p f_X(x)\, dx \int_{-\infty}^\infty y^q f_Y(y)\, dy \\ & = \operatorname{E}[X^p]\;\operatorname{E}[Y^q] \end{align}
or equivalently it is clear that X^p and Y^q are independent variables.
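The moment identity of the Gamma example can be checked numerically. In the sketch below the shape parameters \alpha, \beta, the exponents p and the sample size are arbitrary illustration choices.

    import numpy as np
    from scipy.special import gamma

    rng = np.random.default_rng(4)
    alpha, beta = 2.5, 4.0
    x = rng.gamma(alpha, 1.0, size=1_000_000)   # Gamma(alpha, 1) samples
    y = rng.gamma(beta, 1.0, size=1_000_000)    # Gamma(beta, 1) samples

    for p in (0.5, 1, 2, 3):
        mc = np.mean((x * y) ** p)
        exact = gamma(alpha + p) * gamma(beta + p) / (gamma(alpha) * gamma(beta))
        print(p, mc, exact)   # Monte Carlo moment vs Gamma(alpha+p)Gamma(beta+p)/(Gamma(alpha)Gamma(beta))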


Special cases


Lognormal distributions

The distribution of the product of two random variables which have lognormal distributions is again lognormal. This is itself a special case of a more general set of results where the logarithm of the product can be written as the sum of the logarithms. Thus, in cases where a simple result can be found in the list of convolutions of probability distributions, where the distributions to be convolved are those of the logarithms of the components of the product, the result might be transformed to provide the distribution of the product. However this approach is only useful where the logarithms of the components of the product are in some standard families of distributions.
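A minimal simulation check that the product of two independent lognormal samples is lognormal, with log-domain parameters that simply add; the parameter values and sample size below are arbitrary illustration choices.

    import numpy as np

    rng = np.random.default_rng(5)
    mu1, s1, mu2, s2 = 0.3, 0.8, -0.5, 0.4
    x = rng.lognormal(mu1, s1, size=1_000_000)
    y = rng.lognormal(mu2, s2, size=1_000_000)

    logs = np.log(x * y)                 # should be N(mu1 + mu2, s1^2 + s2^2)
    print(np.mean(logs), mu1 + mu2)      # ~ -0.2
    print(np.var(logs), s1**2 + s2**2)   # ~ 0.80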


Uniformly distributed independent random variables

Let Z be the product of two independent variables Z = X_1 X_2, each uniformly distributed on the interval [0,1], possibly the outcome of a copula transformation. As noted in ''Lognormal distributions'' above, PDF convolution operations in the log domain correspond to the product of sample values in the original domain. Thus, making the transformation u = \ln(x), such that p_U(u)\,|du| = p_X(x)\,|dx|, each variate is distributed independently on ''u'' as
: p_U(u) = \frac{p_X(x)}{|du/dx|} = \frac{1}{1/x} = e^u, \;\; -\infty < u \le 0,
and the convolution of the two distributions is the autoconvolution
: c(y) = \int_y^0 e^u\, e^{y-u}\, du = e^y \int_y^0 du = -y\, e^y, \;\; -\infty < y \le 0.
Next retransform the variable to z = e^y, yielding the distribution
: c_2(z) = c_Y(y)/|dz/dy| = \frac{-y\, e^y}{e^y} = -y = \ln(1/z)
on the interval [0,1].

For the product of multiple (n > 2) independent samples the characteristic function route is favorable. If we define \tilde{y} = -y, then c(\tilde{y}) = \tilde{y}\, e^{-\tilde{y}} above is a Gamma distribution of shape 2 and scale factor 1, with characteristic function (1 - it)^{-2}; equivalently, each \tilde{u} = -u is a Gamma(1,1) (i.e. exponential) sample with CF (1 - it)^{-1}. Note that |d\tilde{y}| = |dy|, so the Jacobian of the transformation is unity. The sum of n independent samples of \tilde{u} therefore has CF (1 - it)^{-n}, which is known to be the CF of a Gamma distribution of shape n:
: c_n(\tilde{y}) = \Gamma(n)^{-1}\,\tilde{y}^{\,n-1} e^{-\tilde{y}} = \Gamma(n)^{-1}\,(-y)^{n-1} e^{y}.
Making the inverse transformation z = e^y we get the PDF of the product of the n samples:
: f_n(z) = \frac{c_n(-\log z)}{|dz/dy|} = \Gamma(n)^{-1}\big(-\log z\big)^{n-1} e^{y}/e^{y} = \frac{(-\log z)^{n-1}}{(n-1)!}, \;\;\; 0 < z \le 1.

The following, more conventional, derivation from Stackexchange is consistent with this result. First of all, letting Z_2 = X_1 X_2, its CDF is
: \begin{align} F_{Z_2}(z) = \Pr\big[Z_2 \le z\big] & = \int_0^1 \Pr\Big[X_2 \le \frac{z}{x}\Big] f_{X_1}(x)\, dx \\ & = \int_0^z 1\, dx + \int_z^1 \frac{z}{x}\, dx \\ & = z - z\log z, \;\; 0 < z \le 1. \end{align}
The density of z_2 is then f(z_2) = -\log(z_2). Multiplying by a third independent sample gives distribution function
: \begin{align} F_{Z_3}(z) = \Pr\big[Z_3 \le z\big] & = \int_0^1 \Pr\Big[X_3 \le \frac{z}{x}\Big] f_{Z_2}(x)\, dx \\ & = -\int_0^z \log(x)\, dx - \int_z^1 \frac{z}{x}\log(x)\, dx \\ & = -z\big(\log(z) - 1\big) + \frac{1}{2} z\log^2(z). \end{align}
Taking the derivative yields f_{Z_3}(z) = \frac{1}{2}\log^2(z), \;\; 0 < z \le 1. The author of the note conjectures that, in general,
: f_{Z_n}(z) = \frac{(-\log z)^{n-1}}{(n-1)!}, \;\; 0 < z \le 1.

Geometrically, the area within the unit square lying below the curve z = xy represents the CDF of z. It divides into two parts: the first is for 0 < x < z, where the increment of area in the vertical slot is just equal to ''dx''; the second part lies below the ''xy'' curve, has ''y''-height ''z/x'', and incremental area ''dx z/x''.
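The density f_{Z_n}(z) = (-\log z)^{n-1}/(n-1)! can be checked by simulation. In the sketch below n, the sample size, the window half-width h and the test points are arbitrary illustration choices.

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(6)
    n = 4
    z = rng.uniform(size=(1_000_000, n)).prod(axis=1)   # products of n Uniform(0,1) samples

    for z0 in (0.1, 0.3, 0.6, 0.9):
        h = 0.01
        mc = np.mean(np.abs(z - z0) < h) / (2 * h)       # crude local density estimate
        exact = (-np.log(z0)) ** (n - 1) / factorial(n - 1)
        print(z0, mc, exact)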


Independent central-normal distributions

The product of two independent Normal samples has a density involving a modified Bessel function. Let x, y be samples from a Normal(0,1) distribution and z = xy. Then
: p_Z(z) = \frac{K_0(|z|)}{\pi}, \;\;\; -\infty < z < +\infty.
The variance of this distribution could be determined, in principle, by a definite integral from Gradshteyn and Ryzhik,
: \int_0^\infty x^\mu K_\nu(ax)\, dx = 2^{\mu-1} a^{-\mu-1}\,\Gamma\Big(\frac{1+\mu+\nu}{2}\Big)\Gamma\Big(\frac{1+\mu-\nu}{2}\Big), \;\; a>0, \;\nu + 1 \pm \mu > 0,
thus
: \operatorname{Var}(z) = \int_{-\infty}^\infty \frac{z^2 K_0(|z|)}{\pi}\, dz = \frac{4}{\pi}\,\Gamma^2\Big(\frac{3}{2}\Big) = 1.
A much simpler result, stated in a section above, is that the variance of the product of zero-mean independent samples is equal to the product of their variances. Since the variance of each Normal sample is one, the variance of the product is also one.
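A short simulation check of p_Z(z) = K_0(|z|)/\pi and of the unit variance; the sample size, window half-width and test points below are arbitrary illustration choices.

    import numpy as np
    from scipy.special import k0

    rng = np.random.default_rng(7)
    z = rng.standard_normal(2_000_000) * rng.standard_normal(2_000_000)
    print(np.var(z))   # ~ 1, as stated above

    for z0 in (0.5, 1.0, 2.0):
        h = 0.02
        mc = np.mean(np.abs(z - z0) < h) / (2 * h)
        print(z0, mc, k0(abs(z0)) / np.pi)   # empirical density vs K_0(|z|)/pi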


Correlated central-normal distributions

The product of correlated Normal samples case was recently addressed by Nadarajah and Pogány. Let X and Y be zero mean, unit variance, normally distributed variates with correlation coefficient \rho, and let Z = XY. Then
: f_Z(z) = \frac{1}{\pi\sqrt{1-\rho^2}}\exp\left(\frac{\rho z}{1-\rho^2}\right) K_0\left(\frac{|z|}{1-\rho^2}\right).

Mean and variance

For the mean we have \operatorname{E}[Z] = \rho from the definition of the correlation coefficient. The variance can be found by transforming from two unit variance zero mean uncorrelated variables ''U, V''. Let
: X = U, \;\; Y = \rho U + \sqrt{1-\rho^2}\, V.
Then ''X, Y'' are unit variance variables with correlation coefficient \rho and
: (XY)^2 = U^2\Big(\rho U + \sqrt{1-\rho^2}\, V\Big)^2 = U^2\Big(\rho^2 U^2 + 2\rho\sqrt{1-\rho^2}\, UV + (1-\rho^2) V^2\Big).
Removing odd-power terms, whose expectations are obviously zero, we get
: \operatorname{E}[(XY)^2] = \rho^2\operatorname{E}[U^4] + (1-\rho^2)\operatorname{E}[U^2]\operatorname{E}[V^2] = 3\rho^2 + (1-\rho^2) = 1 + 2\rho^2.
Since (\operatorname{E}[Z])^2 = \rho^2, we have
: \operatorname{Var}(Z) = \operatorname{E}[Z^2] - (\operatorname{E}[Z])^2 = 1 + 2\rho^2 - \rho^2 = 1 + \rho^2.

High correlation asymptote

In the highly correlated case, \rho \rightarrow 1, the product converges on the square of one sample. In this case the K_0 asymptote is K_0(x) \rightarrow \sqrt{\tfrac{\pi}{2x}}\, e^{-x} \text{ as } x = \frac{|z|}{1-\rho^2} \rightarrow \infty, and
: \begin{align} p(z) & \rightarrow \frac{1}{\pi\sqrt{1-\rho^2}} \exp\left(\frac{\rho z}{1-\rho^2}\right) \sqrt{\frac{\pi(1-\rho^2)}{2|z|}} \exp\left(-\frac{|z|}{1-\rho^2}\right) \\ & = \frac{1}{\sqrt{2\pi |z|}} \exp\Bigg(\frac{\rho z - |z|}{1-\rho^2}\Bigg) \\ & = \frac{1}{\sqrt{2\pi z}} \exp\Bigg(\frac{-z(1-\rho)}{(1-\rho)(1+\rho)}\Bigg), \;\; z>0 \\ & \rightarrow \frac{1}{\sqrt{2\pi z}}\, e^{-z/2}, \;\; \text{as } \rho \rightarrow 1, \\ \end{align}
which is a Chi-squared distribution with one degree of freedom.

Multiple correlated samples

Nadarajah et al. further show that if Z_1, Z_2, \ldots, Z_n are n iid random variables sampled from f_Z(z) and \bar{Z} = \tfrac{1}{n}\sum Z_i is their mean, then
: f_{\bar{Z}}(z) = \frac{n^{\frac{n+1}{2}}\,|z|^{\frac{n-1}{2}}}{2^{\frac{n-1}{2}}\,\sqrt{\pi}\;\Gamma\big(\tfrac{n}{2}\big)\sqrt{1-\rho^2}}\,\exp\left(\frac{n\rho z}{1-\rho^2}\right) K_{\frac{n-1}{2}}\left(\frac{n|z|}{1-\rho^2}\right), \;\; -\infty < z < \infty.
This can equivalently be written in terms of the Whittaker function W_{0,\frac{n-1}{2}} through the identity W_{0,\nu}(x) = \sqrt{\frac{x}{\pi}}\, K_\nu(x/2), \;\; x \ge 0; see, for example, the DLMF compilation, eqn. (13.13.9). The pdf gives the distribution of a sample covariance. The approximate distribution of a correlation coefficient can be found via the Fisher transformation.

Multiple non-central correlated samples

The distribution of the product of correlated non-central normal samples was derived by Cui et al. and takes the form of an infinite series of modified Bessel functions of the first kind.

Moments of product of correlated central normal samples

For a zero-mean normal distribution N(0,\sigma^2) the moments are
: \operatorname{E}[X^p] = \frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^\infty x^p \exp\left(-\tfrac{x^2}{2\sigma^2}\right) dx = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p\,(p-1)!! & \text{if } p \text{ is even,} \end{cases}
where n!! denotes the double factorial. If X, Y \sim \text{Norm}(0,1) are central correlated variables, the simplest bivariate case of the multivariate normal moment problem described by Kan, then
: \operatorname{E}[X^p Y^q] = \begin{cases} 0 & \text{if } p+q \text{ is odd,} \\ \dfrac{p!\,q!}{2^{\frac{p+q}{2}}}\displaystyle\sum_{k=0}^t \frac{(2\rho)^{2k}}{\left(\frac{p}{2}-k\right)!\left(\frac{q}{2}-k\right)!\,(2k)!} & \text{if } p \text{ and } q \text{ are even,} \\ \dfrac{p!\,q!}{2^{\frac{p+q}{2}}}\displaystyle\sum_{k=0}^t \frac{(2\rho)^{2k+1}}{\left(\frac{p-1}{2}-k\right)!\left(\frac{q-1}{2}-k\right)!\,(2k+1)!} & \text{if } p \text{ and } q \text{ are odd,} \end{cases}
where \rho is the correlation coefficient and t = \min\big(\lfloor p/2\rfloor, \lfloor q/2\rfloor\big).


Correlated non-central normal distributions

The distribution of the product of non-central correlated normal samples was derived by Cui et al. and takes the form of an infinite series. These product distributions are somewhat comparable to the Wishart distribution. The latter is the ''joint'' distribution of the four elements (actually only three independent elements) of a sample covariance matrix. If x_t, y_t are samples from a bivariate time series then
: W = \sum_{t=1}^K \dbinom{x_t}{y_t}\dbinom{x_t}{y_t}^T
is a Wishart matrix with ''K'' degrees of freedom. The product distributions above are the unconditional distribution of the aggregate of ''K'' > 1 samples of W_{21}.


Independent complex-valued central-normal distributions

Let u_1, v_1, u_2, v_2 be independent samples from a normal(0,1) distribution.
Setting z_1 = u_1 + i v_1 and z_2 = u_2 + i v_2, then z_1, z_2 are independent zero-mean complex normal samples with circular symmetry. Their complex variances are \operatorname{Var}(z_i) = \operatorname{E}[|z_i|^2] = 2. The density functions of
: r_i \equiv |z_i| = (u_i^2 + v_i^2)^{1/2}, \;\; i = 1,2,
are Rayleigh distributions defined as
: f_r(r_i) = r_i e^{-r_i^2/2} \text{ of mean } \sqrt{\tfrac{\pi}{2}} \text{ and variance } \tfrac{4-\pi}{2}.
The variable y_i \equiv r_i^2 is clearly Chi-squared with two degrees of freedom and has PDF
: f_{y_i}(y_i) = \tfrac{1}{2} e^{-y_i/2} \text{ of mean value } 2.
Wells et al. show that the density function of s \equiv |z_1 z_2| is
: f_s(s) = s\, K_0(s), \;\; s \ge 0,
and the cumulative distribution function of s is
: P(a) = \Pr[s \le a] = \int_0^a s\, K_0(s)\, ds = 1 - a\, K_1(a).
Thus the polar representation of the product of two uncorrelated complex Gaussian samples is
: f_{s,\theta}(s,\theta) = f_s(s)\, p_\theta(\theta), \text{ where } p(\theta) \text{ is uniform on } [0, 2\pi].
The first and second moments of this distribution can be found from the integral in ''Independent central-normal distributions'' above:
: m_1 = \int_0^\infty s^2 K_0(s)\, ds = 2\,\Gamma^2\big(\tfrac{3}{2}\big) = 2\big(\tfrac{\sqrt{\pi}}{2}\big)^2 = \frac{\pi}{2}
: m_2 = \int_0^\infty s^3 K_0(s)\, ds = 2^2\,\Gamma^2(2) = 4
Thus its variance is \operatorname{Var}(s) = m_2 - m_1^2 = 4 - \frac{\pi^2}{4}. Further, the density of z \equiv s^2 = |z_1 z_2|^2 = |z_1|^2\,|z_2|^2 = y_1 y_2 corresponds to the product of two independent Chi-square samples y_i, each with two DoF. Writing these as scaled Gamma distributions f_y(y_i) = \tfrac{1}{\theta} e^{-y_i/\theta} with \theta = 2, then, from the Gamma products below, the density of the product is
: f_Z(z) = \tfrac{1}{2} K_0(\sqrt{z}) \text{ with expectation } \operatorname{E}(z) = 4.
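A short simulation check of the density s K_0(s), the CDF 1 - a K_1(a) and the moments above; the sample size and the evaluation points are arbitrary illustration choices.

    import numpy as np
    from scipy.special import k0, k1

    rng = np.random.default_rng(9)
    n = 2_000_000
    z1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # circular complex Gaussian
    z2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    s = np.abs(z1 * z2)

    print(np.mean(s), np.pi / 2)          # first moment m_1 = pi/2
    print(np.var(s), 4 - np.pi**2 / 4)    # variance 4 - pi^2/4

    for a in (0.5, 1.5, 3.0):
        print(a, np.mean(s <= a), 1 - a * k1(a))   # empirical vs exact CDF

    z0, h = 1.0, 0.02
    print(np.mean(np.abs(s - z0) < h) / (2 * h), z0 * k0(z0))   # density at z0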


Independent complex-valued noncentral normal distributions

The product of non-central independent complex Gaussians is described by O’Donoughue and Moura and forms a double infinite series of modified Bessel functions of the first and second types.


Gamma distributions

The product of two independent Gamma samples, z = x_1 x_2, defining \Gamma(x;k_i,\theta_i) = \frac{x^{k_i - 1} e^{-x/\theta_i}}{\Gamma(k_i)\,\theta_i^{k_i}}, follows
: \begin{align} p_Z(z) & = \frac{2}{\Gamma(k_1)\,\Gamma(k_2)}\,\frac{z^{\frac{k_1+k_2}{2}-1}}{(\theta_1\theta_2)^{\frac{k_1+k_2}{2}}}\, K_{k_1-k_2}\left(2\sqrt{\frac{z}{\theta_1\theta_2}}\right) \\ \\ & = \frac{2}{\theta_1\theta_2\,\Gamma(k_1)\,\Gamma(k_2)}\, y^{\frac{k_1+k_2}{2}-1}\, K_{k_1-k_2}\left(2\sqrt{y}\right) \text{ where } y = \frac{z}{\theta_1\theta_2}. \\ \end{align}
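A histogram-style check of this density against simulated Gamma products; the shape and scale values, sample size and test points below are arbitrary illustration choices.

    import numpy as np
    from scipy.special import gamma, kv

    rng = np.random.default_rng(10)
    k1, t1, k2, t2 = 2.0, 1.5, 3.5, 0.8
    z = rng.gamma(k1, t1, size=2_000_000) * rng.gamma(k2, t2, size=2_000_000)

    def pdf(zv):
        # Bessel-function form of the product density given above
        y = zv / (t1 * t2)
        return (2.0 / (t1 * t2 * gamma(k1) * gamma(k2))
                * y ** ((k1 + k2) / 2 - 1) * kv(k1 - k2, 2 * np.sqrt(y)))

    for z0 in (1.0, 3.0, 8.0):
        h = 0.05
        mc = np.mean(np.abs(z - z0) < h) / (2 * h)
        print(z0, mc, pdf(z0))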


Beta distributions

Nagar et al. define a correlated bivariate beta distribution f(x,y) on 0 < x, y < 1 with normalizing constant
: B(a,b,c) = \frac{\Gamma(a)\,\Gamma(b)\,\Gamma(c)}{\Gamma(a+b+c)}
and show that the pdf of ''Z'' = ''XY'' can be expressed in terms of the Gauss hypergeometric function {}_2F_1(a+c, a+c; a+b+2c; 1-z), \;\; 0 < z < 1, where {}_2F_1 is defined by the Euler integral
: {}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(b)\,\Gamma(c-b)}\int_0^1 v^{b-1}(1-v)^{c-b-1}(1-vz)^{-a}\, dv.
Note that multivariate distributions are not generally unique, apart from the Gaussian case, and there may be alternatives.


Uniform and gamma distributions

The distribution of the product of a random variable having a uniform distribution on (0,1) with a random variable having a gamma distribution with shape parameter equal to 2 is an exponential distribution. A more general case of this concerns the distribution of the product of a random variable having a beta distribution with a random variable having a gamma distribution: for some cases where the parameters of the two component distributions are related in a certain way, the result is again a gamma distribution but with a changed shape parameter. The K-distribution is an example of a non-standard distribution that can be defined as a product distribution (where both components have a gamma distribution).
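The first statement is easy to check by simulation. The sketch below assumes unit scale for the gamma factor (the shape parameter 2 follows the statement above); the sample size and the Kolmogorov-Smirnov comparison are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    z = rng.uniform(size=1_000_000) * rng.gamma(2.0, 1.0, size=1_000_000)

    print(np.mean(z), np.var(z))     # both ~ 1, matching Exponential(1)
    print(stats.kstest(z, "expon"))  # p-value should not be extreme if Z is Exponential(1)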


Gamma and Pareto distributions

The product of ''n'' Gamma and ''m'' Pareto independent samples was derived by Nadarajah.


See also

* Algebra of random variables
* Sum of independent random variables

