In probability theory and statistics, the noncentral chi-squared distribution (or noncentral chi-square distribution, noncentral \chi^2 distribution) is a noncentral generalization of the chi-squared distribution. It often arises in the power analysis of statistical tests in which the null distribution is (perhaps asymptotically) a chi-squared distribution; important examples of such tests are the likelihood-ratio tests.


Definitions


Background

Let (X_1,X_2, \ldots, X_i, \ldots,X_k) be ''k'' independent, normally distributed random variables with means \mu_i and unit variances. Then the random variable

: \sum_{i=1}^k X_i^2

is distributed according to the noncentral chi-squared distribution. It has two parameters: k, which specifies the number of degrees of freedom (i.e. the number of X_i), and \lambda, which is related to the means of the random variables X_i by

: \lambda=\sum_{i=1}^k \mu_i^2.

\lambda is sometimes called the noncentrality parameter. Note that some references define \lambda in other ways, such as half of the above sum, or its square root.

This distribution arises in multivariate statistics as a derivative of the multivariate normal distribution. While the central chi-squared distribution is the squared norm of a random vector with N(0_k,I_k) distribution (i.e., the squared distance from the origin to a point taken at random from that distribution), the non-central \chi^2 is the squared norm of a random vector with N(\mu,I_k) distribution. Here 0_k is a zero vector of length ''k'', \mu = (\mu_1, \ldots, \mu_k) and I_k is the identity matrix of size ''k''.
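As a quick numerical check of this construction, the following Python sketch (with arbitrary illustrative means) simulates the sum of squares and compares its sample mean with k + \lambda, the mean of the distribution (see the Moments section below):

```python
import math
import random

# Simulate the defining construction: a sum of squares of k independent
# normal variables with means mu_i and unit variances.
rng = random.Random(0)
mus = [1.0, 2.0, 0.5]          # arbitrary illustrative means
k = len(mus)                   # degrees of freedom
lam = sum(m * m for m in mus)  # noncentrality parameter: sum of mu_i^2

n = 200_000
total = 0.0
for _ in range(n):
    total += sum(rng.gauss(mu, 1.0) ** 2 for mu in mus)
mean = total / n
# Theory: E[X] = k + lambda
```

With 200,000 draws the sample mean agrees with k + \lambda to within sampling error.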


Density

The probability density function (pdf) is given by

: f_X(x; k,\lambda) = \sum_{i=0}^\infty \frac{e^{-\lambda/2} (\lambda/2)^i}{i!} f_{Y_{k+2i}}(x),

where Y_q is distributed as chi-squared with q degrees of freedom.

From this representation, the noncentral chi-squared distribution is seen to be a Poisson-weighted mixture of central chi-squared distributions. Suppose that a random variable ''J'' has a Poisson distribution with mean \lambda/2, and the conditional distribution of ''Z'' given ''J'' = ''i'' is chi-squared with ''k'' + 2''i'' degrees of freedom. Then the unconditional distribution of ''Z'' is non-central chi-squared with ''k'' degrees of freedom, and non-centrality parameter \lambda.

Alternatively, the pdf can be written as

: f_X(x;k,\lambda)=\frac{1}{2} e^{-(x+\lambda)/2} \left(\frac{x}{\lambda}\right)^{k/4-1/2} I_{k/2-1}(\sqrt{\lambda x})

where I_\nu(y) is a modified Bessel function of the first kind given by

: I_\nu(y) = (y/2)^\nu \sum_{j=0}^\infty \frac{(y^2/4)^j}{j!\, \Gamma(\nu+j+1)}.

Using the relation between Bessel functions and hypergeometric functions, the pdf can also be written as:

: f_X(x;k,\lambda)= {}_0F_1(;k/2;\lambda x/4)\, \frac{1}{2^{k/2}\Gamma(k/2)}\, e^{-(x+\lambda)/2}\, x^{k/2-1}.

Siegel (1979) discusses the case ''k'' = 0 specifically (zero degrees of freedom), in which case the distribution has a discrete component at zero.
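The Poisson-mixture and Bessel-function forms of the density can be cross-checked numerically. The following sketch implements both from scratch using only the standard library; the series truncations are ad hoc but more than sufficient for moderate parameter values:

```python
import math

def chi2_pdf(x, q):
    # Central chi-squared density with q degrees of freedom
    return x ** (q / 2 - 1) * math.exp(-x / 2) / (2 ** (q / 2) * math.gamma(q / 2))

def ncx2_pdf_series(x, k, lam, terms=60):
    # Poisson(lambda/2)-weighted mixture of central chi-squared densities
    return sum(
        math.exp(-lam / 2) * (lam / 2) ** i / math.factorial(i) * chi2_pdf(x, k + 2 * i)
        for i in range(terms)
    )

def bessel_i(nu, y, terms=80):
    # Modified Bessel function of the first kind, from its power series
    return sum(
        (y / 2) ** (nu + 2 * j) / (math.factorial(j) * math.gamma(nu + j + 1))
        for j in range(terms)
    )

def ncx2_pdf_bessel(x, k, lam):
    # Closed form: (1/2) e^{-(x+lam)/2} (x/lam)^{k/4-1/2} I_{k/2-1}(sqrt(lam*x))
    return 0.5 * math.exp(-(x + lam) / 2) * (x / lam) ** (k / 4 - 0.5) \
        * bessel_i(k / 2 - 1, math.sqrt(lam * x))
```

In practice, `scipy.stats.ncx2` provides this density (and the cdf below) directly; the sketch above only illustrates the two equivalent representations.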


Derivation of the pdf

The derivation of the probability density function is most easily done by performing the following steps:
# Since X_1,\ldots,X_k have unit variances, their joint distribution is spherically symmetric, up to a location shift.
# The spherical symmetry then implies that the distribution of X=X_1^2+\cdots+X_k^2 depends on the means only through the squared length, \lambda=\mu_1^2+\cdots+\mu_k^2. Without loss of generality, we can therefore take \mu_1=\sqrt{\lambda} and \mu_2=\cdots=\mu_k=0.
# Now derive the density of X=X_1^2 (i.e. the ''k'' = 1 case). Simple transformation of random variables shows that
:::\begin{align} f_X(x,1,\lambda) &= \frac{1}{2\sqrt{x}}\left( \phi(\sqrt{x}-\sqrt{\lambda}) + \phi(\sqrt{x}+\sqrt{\lambda}) \right)\\ &= \frac{1}{\sqrt{2\pi x}} e^{-(x+\lambda)/2} \cosh(\sqrt{\lambda x}), \end{align}
::where \phi(\cdot) is the standard normal density.
# Expand the cosh term in a Taylor series. This gives the Poisson-weighted mixture representation of the density, still for ''k'' = 1. The indices on the chi-squared random variables in the series above are 1 + 2''i'' in this case.
# Finally, the general case. We've assumed, without loss of generality, that X_2,\ldots,X_k are standard normal, and so X_2^2+\cdots+X_k^2 has a ''central'' chi-squared distribution with (''k'' − 1) degrees of freedom, independent of X_1^2. Using the Poisson-weighted mixture representation for X_1^2, and the fact that the sum of chi-squared random variables is also chi-squared, completes the result. The indices in the series are (1 + 2''i'') + (''k'' − 1) = ''k'' + 2''i'' as required.
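The two expressions for the ''k'' = 1 density in step 3 can be verified to agree numerically:

```python
import math

def phi(t):
    # Standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def pdf_k1_folded(x, lam):
    # Transformation-of-variables form: density of X_1^2 when X_1 ~ N(sqrt(lam), 1)
    r, s = math.sqrt(x), math.sqrt(lam)
    return (phi(r - s) + phi(r + s)) / (2 * r)

def pdf_k1_cosh(x, lam):
    # Equivalent cosh form after combining the two exponentials
    return math.exp(-(x + lam) / 2) * math.cosh(math.sqrt(lam * x)) \
        / math.sqrt(2 * math.pi * x)
```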


Properties


Moment generating function

The moment-generating function is given by

: M(t;k,\lambda)=\frac{\exp\left(\frac{\lambda t}{1-2t}\right)}{(1-2t)^{k/2}}, \qquad t < 1/2.
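A minimal numerical check (with arbitrary parameter values): differentiating the MGF at t = 0 should recover the mean k + \lambda.

```python
import math

def ncx2_mgf(t, k, lam):
    # MGF: exp(lam*t/(1-2t)) / (1-2t)^(k/2), valid for t < 1/2
    return math.exp(lam * t / (1 - 2 * t)) / (1 - 2 * t) ** (k / 2)

k, lam = 5, 3.0
h = 1e-6
# Central difference approximates M'(0) = k + lam
mean_numeric = (ncx2_mgf(h, k, lam) - ncx2_mgf(-h, k, lam)) / (2 * h)
```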


Moments

The first few raw moments are:
:\mu'_1=k+\lambda
:\mu'_2=(k+\lambda)^2 + 2(k + 2\lambda)
:\mu'_3=(k+\lambda)^3 + 6(k+\lambda)(k+2\lambda)+8(k+3\lambda)
:\mu'_4=(k+\lambda)^4+12(k+\lambda)^2(k+2\lambda)+4(11k^2+44k\lambda+36\lambda^2)+48(k+4\lambda)

The first few central moments are:
:\mu_2=2(k+2\lambda)\,
:\mu_3=8(k+3\lambda)\,
:\mu_4=12(k+2\lambda)^2+48(k+4\lambda)\,

The ''n''th cumulant is
:\kappa_n=2^{n-1}(n-1)!(k+n\lambda).\,
Hence
:\mu'_n = 2^{n-1}(n-1)!(k+n\lambda)+\sum_{j=1}^{n-1} \frac{(n-1)!\, 2^{j-1}}{(n-j)!}(k+j\lambda )\mu'_{n-j}.
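The cumulant formula and the moment recursion can be checked against the explicit raw moments listed above. This sketch uses the standard cumulant-to-moment recursion, which is the same recursion with \kappa_j written out:

```python
import math

def kappa(n, k, lam):
    # n-th cumulant: 2^(n-1) * (n-1)! * (k + n*lambda)
    return 2 ** (n - 1) * math.factorial(n - 1) * (k + n * lam)

def raw_moments(n_max, k, lam):
    # mu'_n = kappa_n + sum_{j=1}^{n-1} C(n-1, j-1) * kappa_j * mu'_{n-j}
    mu = [1.0]  # mu'_0 = 1
    for n in range(1, n_max + 1):
        val = kappa(n, k, lam)
        for j in range(1, n):
            val += math.comb(n - 1, j - 1) * kappa(j, k, lam) * mu[n - j]
        mu.append(val)
    return mu

k, lam = 3, 2.0
mu = raw_moments(4, k, lam)

# The explicit formulas for the first four raw moments:
m1 = k + lam
m2 = (k + lam) ** 2 + 2 * (k + 2 * lam)
m3 = (k + lam) ** 3 + 6 * (k + lam) * (k + 2 * lam) + 8 * (k + 3 * lam)
m4 = ((k + lam) ** 4 + 12 * (k + lam) ** 2 * (k + 2 * lam)
      + 4 * (11 * k ** 2 + 44 * k * lam + 36 * lam ** 2) + 48 * (k + 4 * lam))
```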


Cumulative distribution function

Again using the relation between the central and noncentral chi-squared distributions, the cumulative distribution function (cdf) can be written as

:P(x; k, \lambda ) = e^{-\lambda/2}\; \sum_{j=0}^\infty \frac{(\lambda/2)^j}{j!} Q(x; k+2j)

where Q(x; k)\, is the cumulative distribution function of the central chi-squared distribution with ''k'' degrees of freedom, which is given by

:Q(x;k)=\frac{\gamma(k/2,\, x/2)}{\Gamma(k/2)}\,

:and where \gamma(k,z)\, is the lower incomplete gamma function.

The Marcum Q-function Q_M(a,b) can also be used to represent the cdf:

:P(x; k, \lambda) = 1 - Q_{k/2} \left( \sqrt{\lambda}, \sqrt{x} \right)

When the degrees of freedom ''k'' is a positive odd integer, we have a closed-form expression for the complementary cumulative distribution function given by A. Annamalai, C. Tellambura and John Matyjas (2009), "A New Twist on the Generalized Marcum Q-Function ''Q''''M''(''a'', ''b'') with Fractional-Order ''M'' and its Applications", ''2009 6th IEEE Consumer Communications and Networking Conference'', 1–5:

: \begin{align} P(x; 2n+1, \lambda) &= 1 - Q_{n+1/2}(\sqrt{\lambda}, \sqrt{x}) \\ &= 1 - \left[ Q(\sqrt{x}-\sqrt{\lambda}) + Q(\sqrt{x}+\sqrt{\lambda}) + e^{-(x+\lambda)/2} \sum_{m=1}^n \left(\frac{x}{\lambda}\right)^{m/2-1/4} I_{m-1/2}(\sqrt{\lambda x}) \right] \end{align}

where ''n'' is a non-negative integer, ''Q'' is the Gaussian Q-function, and ''I'' is the modified Bessel function of the first kind with half-integer order. The modified Bessel function of the first kind with half-integer order can itself be represented as a finite sum in terms of hyperbolic functions.

In particular, for ''k'' = 1, we have

:P(x; 1, \lambda) = 1 - \left[ Q(\sqrt{x}-\sqrt{\lambda}) + Q(\sqrt{x}+\sqrt{\lambda}) \right].

Also, for ''k'' = 3, we have

:P(x; 3, \lambda) = 1 - \left[ Q(\sqrt{x}-\sqrt{\lambda}) + Q(\sqrt{x}+\sqrt{\lambda}) + \sqrt{\frac{2}{\pi}}\, \frac{\sinh(\sqrt{\lambda x})}{\sqrt{\lambda}}\, e^{-(x+\lambda)/2} \right].
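The series form of the cdf and the ''k'' = 1 closed form can be cross-checked with only the standard library; the incomplete-gamma series truncation below is ad hoc but ample for moderate arguments:

```python
import math

def reg_lower_gamma(s, z, terms=200):
    # Regularized lower incomplete gamma function P(s, z) via its power series
    term = 1.0 / s
    total = term
    for n in range(1, terms):
        term *= z / (s + n)
        total += term
    return total * z ** s * math.exp(-z) / math.gamma(s)

def ncx2_cdf_series(x, k, lam, terms=80):
    # Poisson-weighted mixture of central chi-squared cdfs
    return sum(
        math.exp(-lam / 2) * (lam / 2) ** j / math.factorial(j)
        * reg_lower_gamma((k + 2 * j) / 2, x / 2)
        for j in range(terms)
    )

def gaussian_q(t):
    # Gaussian Q-function: upper tail of the standard normal
    return 0.5 * math.erfc(t / math.sqrt(2))

def ncx2_cdf_k1(x, lam):
    # Closed form for k = 1
    r, s = math.sqrt(x), math.sqrt(lam)
    return 1 - (gaussian_q(r - s) + gaussian_q(r + s))
```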


Approximation (including for quantiles)

Abdel-Aty derives (as "first approx.") a non-central Wilson–Hilferty transformation:

: \left(\frac{\chi'^2}{k+\lambda}\right)^{1/3}

is approximately normally distributed, \sim \mathcal{N}\left(1-\frac{2}{9f}, \frac{2}{9f} \right), i.e.,

: P(x; k, \lambda )\approx \Phi \left\{ \frac{\left(\frac{x}{k+\lambda}\right)^{1/3} - \left(1-\frac{2}{9f}\right)}{\sqrt{2/(9f)}} \right\}, \text{ where } f := \frac{(k+\lambda)^2}{k+2\lambda} = k + \frac{\lambda^2}{k+2\lambda},

which is quite accurate and adapts well to the noncentrality. Also, f = f(k,\lambda) becomes f = k for \lambda=0, the (central) chi-squared case.

Sankaran discusses a number of closed-form approximations for the cumulative distribution function. In an earlier paper, he derived and states the following approximation:

: P(x; k, \lambda ) \approx \Phi \left\{ \frac{\left(\frac{x}{k+\lambda}\right)^h - \left(1 + h p (h - 1 - 0.5 (2 - h) m p)\right)}{h \sqrt{2p}\, (1 + 0.5 m p)} \right\}

where

: \Phi \lbrace \cdot \rbrace \, denotes the cumulative distribution function of the standard normal distribution;
: h = 1 - \frac{2}{3} \frac{(k+\lambda)(k+3\lambda)}{(k+2\lambda)^2} \, ;
: p = \frac{k+2\lambda}{(k+\lambda)^2} ;
: m = (h - 1) (1 - 3 h) \, .

This and other approximations are discussed in a later textbook.

More recently, since the CDF of the non-central chi-squared distribution with odd degrees of freedom can be computed exactly, the CDF for even degrees of freedom can be approximated by exploiting the monotonicity and log-concavity properties of the Marcum Q-function as

: P(x; 2n, \lambda ) \approx \frac{1}{2}\left[ P(x; 2n - 1, \lambda) + P(x; 2n + 1, \lambda) \right].

Another approximation that also serves as an upper bound is given by

: P(x; 2n, \lambda ) \approx 1 - \left[ (1- P(x; 2n - 1, \lambda)) (1 - P(x; 2n + 1, \lambda)) \right]^{1/2}.

For a given probability, these formulas are easily inverted to provide the corresponding approximation for x, to compute approximate quantiles.
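As an illustration, the following sketch implements Sankaran's approximation and compares it against the exact Poisson-series cdf (reimplemented here so the block is self-contained); the parameter values are arbitrary, and the agreement tolerance in the comparison is deliberately loose:

```python
import math

def reg_lower_gamma(s, z, terms=200):
    # Regularized lower incomplete gamma function via its power series
    term = 1.0 / s
    total = term
    for n in range(1, terms):
        term *= z / (s + n)
        total += term
    return total * z ** s * math.exp(-z) / math.gamma(s)

def ncx2_cdf_exact(x, k, lam, terms=80):
    # Exact cdf as a Poisson-weighted mixture of central chi-squared cdfs
    return sum(
        math.exp(-lam / 2) * (lam / 2) ** j / math.factorial(j)
        * reg_lower_gamma((k + 2 * j) / 2, x / 2)
        for j in range(terms)
    )

def ncx2_cdf_sankaran(x, k, lam):
    # Sankaran's closed-form normal approximation
    h = 1 - (2.0 / 3.0) * (k + lam) * (k + 3 * lam) / (k + 2 * lam) ** 2
    p = (k + 2 * lam) / (k + lam) ** 2
    m = (h - 1) * (1 - 3 * h)
    z = ((x / (k + lam)) ** h - (1 + h * p * (h - 1 - 0.5 * (2 - h) * m * p))) \
        / (h * math.sqrt(2 * p) * (1 + 0.5 * m * p))
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal Phi
```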


Related distributions

*If V is chi-square distributed, V \sim \chi_k^2, then V is also non-central chi-square distributed: V \sim {\chi'}^2_k(0).
*A linear combination of independent noncentral chi-squared variables \xi=\sum_i \lambda_i Y_i + c, \quad Y_i \sim \chi'^2(m_i,\delta_i^2), is generalized chi-square distributed.
*If V_1 \sim {\chi'}_{k_1}^2(\lambda) and V_2 \sim {\chi'}_{k_2}^2(0), and V_1 is independent of V_2, then a noncentral ''F''-distributed variable is developed as \frac{V_1/k_1}{V_2/k_2} \sim F'_{k_1,k_2}(\lambda).
*If J \sim \mathrm{Poisson}\left(\frac{\lambda}{2}\right), then \chi_{k+2J}^2 \sim {\chi'}_k^2(\lambda).
*If V\sim{\chi'}^2_2(\lambda), then \sqrt{V} takes the Rice distribution with parameter \sqrt{\lambda}.
*Normal approximation: if V \sim {\chi'}^2_k(\lambda), then \frac{V-(k+\lambda)}{\sqrt{2(k+2\lambda)}}\to N(0,1) in distribution as either k\to\infty or \lambda\to\infty.
*If V_1 \sim {\chi'}^2_{k_1}(\lambda_1) and V_2 \sim {\chi'}^2_{k_2}(\lambda_2), where V_1, V_2 are independent, then W = (V_1+V_2) \sim {\chi'}^2_k(\lambda_1+\lambda_2) where k=k_1+k_2.
*In general, for a finite set of V_i \sim {\chi'}^2_{k_i}(\lambda_i), i\in \left\{1, \ldots, N \right\}, the sum of these non-central chi-square distributed random variables Y = \sum_{i=1}^N V_i has the distribution Y \sim {\chi'}^2_{k_y}(\lambda_y) where k_y=\sum_{i=1}^N k_i, \lambda_y=\sum_{i=1}^N\lambda_i. This can be seen using moment generating functions as follows: M_Y(t) = M_{\sum_{i=1}^N V_i}(t) = \prod_{i=1}^N M_{V_i}(t) by the independence of the V_i random variables. It remains to plug in the MGF for the non-central chi-squared distributions into the product and compute the new MGF – this is left as an exercise. Alternatively, it can be seen via the interpretation in the background section above as sums of squares of independent normally distributed random variables with variances of 1 and the specified means.
* The ''complex noncentral chi-squared distribution'' has applications in radio communication and radar systems. Let (z_1, \ldots, z_k) be independent scalar complex random variables with noncentral circular symmetry, means of \mu_i, and unit variances: \operatorname{E}\left| z_i - \mu_i \right|^2 = 1. Then the real random variable S = \sum_{i=1}^k \left| z_i \right|^2 is distributed according to the complex noncentral chi-squared distribution, which is effectively a scaled (by 1/2) non-central {\chi'}^2 with twice the degrees of freedom and twice the noncentrality parameter:

:: f_S(S) = \left( \frac{S}{\lambda} \right)^{(k-1)/2} e^{-(S+\lambda)} I_{k-1}(2 \sqrt{\lambda S})

:where \lambda=\sum_{i=1}^k \left| \mu_i \right|^2.
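The additivity property under independent sums can be verified directly on the moment-generating functions (arbitrary parameter values):

```python
import math

def ncx2_mgf(t, k, lam):
    # MGF of the noncentral chi-squared distribution (valid for t < 1/2)
    return math.exp(lam * t / (1 - 2 * t)) / (1 - 2 * t) ** (k / 2)

k1, lam1 = 3, 1.5
k2, lam2 = 4, 2.5
t = 0.2
# Product of the two MGFs vs. the MGF of the combined distribution
product = ncx2_mgf(t, k1, lam1) * ncx2_mgf(t, k2, lam2)
combined = ncx2_mgf(t, k1 + k2, lam1 + lam2)
```

The two quantities agree up to floating-point rounding, since the exponents and powers add exactly.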


Transformations

Sankaran (1963) discusses the transformations of the form z = [(X-b)/(k+\lambda)]^{1/2}. He analyzes the expansions of the cumulants of z up to the term O((k+\lambda)^{-4}) and shows that the following choices of b produce reasonable results:

* b=(k-1)/2 makes the second cumulant of z approximately independent of \lambda
* b=(k-1)/3 makes the third cumulant of z approximately independent of \lambda
* b=(k-1)/4 makes the fourth cumulant of z approximately independent of \lambda

Also, a simpler transformation z_1 = (X-(k-1)/2)^{1/2} can be used as a variance stabilizing transformation that produces a random variable with mean (\lambda + (k-1)/2)^{1/2} and variance O((k+\lambda)^{-2}).

Usability of these transformations may be hampered by the need to take the square roots of negative numbers.
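A simulation sketch of the simpler transformation, with arbitrary k and \lambda; the max(·, 0) guard reflects the square-root-of-negative-numbers caveat just mentioned:

```python
import math
import random

def sample_ncx2(k, lam, rng):
    # One draw from chi'^2_k(lam): k normals, one with mean sqrt(lam)
    total = rng.gauss(math.sqrt(lam), 1.0) ** 2
    for _ in range(k - 1):
        total += rng.gauss(0.0, 1.0) ** 2
    return total

rng = random.Random(42)
k, lam = 10, 20.0
n = 100_000
b = (k - 1) / 2
# Guard against the (rare) negative argument before taking the square root
z1 = [math.sqrt(max(sample_ncx2(k, lam, rng) - b, 0.0)) for _ in range(n)]
z1_mean = sum(z1) / n
# Theory: mean approximately sqrt(lam + (k - 1) / 2)
```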


Occurrence and applications


Use in tolerance intervals

Two-sided normal regression tolerance intervals can be obtained based on the noncentral chi-squared distribution. This enables the calculation of a statistical interval within which, with some confidence level, a specified proportion of a sampled population falls.



References

* Abramowitz, M. and Stegun, I. A. (1972), ''Handbook of Mathematical Functions'', Dover. Section 26.4.25.
* Johnson, N. L., Kotz, S., Balakrishnan, N. (1995), ''Continuous Univariate Distributions, Volume 2 (2nd Edition)'', Wiley.
* Muirhead, R. (2005), ''Aspects of Multivariate Statistical Theory'' (2nd Edition), Wiley.
* Siegel, A. F. (1979), "The noncentral chi-squared distribution with zero degrees of freedom and testing for uniformity", ''Biometrika'', 66, 381–386.