Complex Random Variable
In probability theory and statistics, complex random variables are a generalization of real-valued random variables to complex numbers, i.e. the possible values a complex random variable may take are complex numbers. Complex random variables can always be considered as pairs of real random variables: their real and imaginary parts. Therefore, the distribution of one complex random variable may be interpreted as the joint distribution of two real random variables. Some concepts of real random variables, such as the definition of the mean, generalize straightforwardly to complex random variables; other concepts are unique to complex random variables. Applications of complex random variables are found in digital signal processing, quadrature amplitude modulation and information theory. Definition: A complex random variable Z on the probability space (\Omega,\mathcal{F},P) ...
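As a concrete illustration of how the mean carries over (writing Z = X + iY, with X and Y denoting the real and imaginary parts; this notation is introduced here for illustration), the expectation is taken componentwise:
:\operatorname{E}[Z] = \operatorname{E}[X] + i\operatorname{E}[Y],
so it exists whenever the means of the real and imaginary parts exist. For instance, if X has mean 1 and Y has mean 2, then \operatorname{E}[Z] = 1 + 2i.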
Probability Theory
Probability theory or probability calculus is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not ...
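To make the axiomatic description concrete (stated here in the usual Kolmogorov form, which the excerpt alludes to but does not spell out), a probability measure P on a sample space \Omega with event collection \mathcal{F} satisfies
:P(E) \ge 0, \qquad P(\Omega) = 1, \qquad P\left(\bigcup_{k=1}^{\infty} E_k\right) = \sum_{k=1}^{\infty} P(E_k)
for every event E and every countable sequence of mutually exclusive events E_1, E_2, \ldots.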
Complex Conjugate
In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign. That is, if a and b are real numbers, then the complex conjugate of a + bi is a - bi. The complex conjugate of z is often denoted as \overline{z} or z^*. In polar form, if r and \varphi are real numbers then the conjugate of r e^{i\varphi} is r e^{-i\varphi}. This can be shown using Euler's formula. The product of a complex number and its conjugate is a real number: a^2 + b^2 (or r^2 in polar coordinates). If a root of a univariate polynomial with real coefficients is complex, then its complex conjugate is also a root. Notation: The complex conjugate of a complex number z is written as \overline{z} or z^*. The first notation, a vinculum, avoids confusion with the notation for the conjugate transpose of a matrix, which can be thought of as a generalization of the complex conjugate. The second is preferred in physics, where ...
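A small worked example of the conjugate product mentioned above (the particular number is illustrative only): with z = 3 + 4i,
:z\overline{z} = (3 + 4i)(3 - 4i) = 3^2 - (4i)^2 = 9 + 16 = 25 = |z|^2,
a real number, as stated.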
Central Moment
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean. The various moments form one set of values by which the properties of a probability distribution can be usefully characterized. Central moments are used in preference to ordinary moments, which are computed in terms of deviations from zero rather than from the mean, because the higher-order central moments relate only to the spread and shape of the distribution, rather than also to its location. Sets of central moments can be defined for both univariate and multivariate distributions. Univariate moments: The n-th moment about the mean (or n-th central moment) of a real-valued random variable X is the quantity \mu_n := \operatorname{E}[(X - \operatorname{E}[X])^n], where \operatorname{E} is the expectation operator. For a continuous univariate probability distribution with probability density ...
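For instance (a standard special case, written out here since the excerpt is truncated before any examples), the second central moment is the variance: with \mu = \operatorname{E}[X],
:\mu_2 = \operatorname{E}[(X - \mu)^2] = \operatorname{Var}(X),
while \mu_1 = \operatorname{E}[X - \mu] = 0 for any distribution with a finite mean.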
Characteristic Function (probability Theory)
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables. In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the charact ...
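In the usual notation (assumed here, since the excerpt is cut off before the definition), the characteristic function of a real-valued random variable X is
:\varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{-\infty}^{\infty} e^{itx} f_X(x)\,dx,
where the integral form applies when X admits a probability density f_X; the expectation exists for every real t because |e^{itX}| = 1, which is why the characteristic function always exists while the moment-generating function may not.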
Hölder's Inequality
In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of L^p spaces. The numbers p and q in the inequality are said to be Hölder conjugates of each other. The special case p = q = 2 gives a form of the Cauchy–Schwarz inequality. Hölder's inequality holds even if \|fg\|_1 is infinite, the right-hand side also being infinite in that case. Conversely, if f is in L^p(\mu) and g is in L^q(\mu), then the pointwise product fg is in L^1(\mu). Hölder's inequality is used to prove the Minkowski inequality, which is the triangle inequality in the space L^p(\mu), and also to establish that L^q(\mu) is the dual space of L^p(\mu) for p \in [1, \infty). Hölder's inequality (in a slightly different form) was first found by Leonard James Rogers in 1888. Inspired by Rogers' work, Hölder gave another proof in 1889 as part of a work developing the concept of convex and concave functions and introducing Jensen's inequality, which was in turn named ...
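For reference, the inequality itself, stated in its standard measure-theoretic form (quoted here as an assumption, since the excerpt omits the statement): for measurable functions f and g and Hölder conjugates p, q \ge 1 with 1/p + 1/q = 1,
:\|fg\|_1 \le \|f\|_p \, \|g\|_q .
Taking p = q = 2 recovers the Cauchy–Schwarz inequality mentioned above.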
Triangle Inequality
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This statement permits the inclusion of degenerate triangles, but some authors, especially those writing about elementary geometry, will exclude this possibility, thus leaving out the possibility of equality. If a, b, and c are the lengths of the sides of a triangle, then the triangle inequality states that
:c \leq a + b ,
with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about vectors and vector lengths (norms):
:\|\mathbf u + \mathbf v\| \leq \|\mathbf u\| + \|\mathbf v\| ,
where the length of the third side has been replaced by the length of the vector sum \mathbf u + \mathbf v. When \mathbf u and \mathbf v are real numbers, they can be viewed as vectors in \R^1, and the triang ...
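A quick numeric check of the scalar form (the numbers are illustrative only): side lengths a = 3, b = 4, c = 5 satisfy
:5 \leq 3 + 4 = 7,
so they form a valid triangle, whereas a = 1, b = 2, c = 3 gives 3 \leq 1 + 2 with equality, the degenerate case of three collinear points and zero area.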
Complex Normal Distribution
In probability theory, the family of complex normal distributions, denoted \mathcal{CN} or \mathcal{N}_{\mathcal{C}}, characterizes complex random variables whose real and imaginary parts are jointly normal. The complex normal family has three parameters: the location parameter \mu, the covariance matrix \Gamma, and the relation matrix C. The standard complex normal is the univariate distribution with \mu = 0, \Gamma = 1, and C = 0. An important subclass of the complex normal family is called the circularly-symmetric (central) complex normal and corresponds to the case of zero relation matrix and zero mean: \mu = 0 and C = 0. This case is used extensively in signal processing, where it is sometimes referred to as just complex normal in the literature. Definitions: Complex standard normal random variable. The standard complex normal random variable or standard complex Gaussian random variable is a complex random variable Z whose real and imaginary parts are independent normally distributed random v ...
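As a sketch of the standard case described above (using the common normalization in which the total variance is 1, an assumption since the excerpt is truncated): Z = X + iY with X and Y independent, each distributed N(0, 1/2), so that
:\operatorname{E}[Z] = 0, \qquad \operatorname{E}[|Z|^2] = \operatorname{E}[X^2] + \operatorname{E}[Y^2] = \tfrac12 + \tfrac12 = 1, \qquad \operatorname{E}[Z^2] = \operatorname{E}[X^2] - \operatorname{E}[Y^2] + 2i\operatorname{E}[XY] = 0,
matching the parameters \mu = 0, \Gamma = 1, and C = 0 given for the standard complex normal.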
Uncorrelatedness (probability Theory)
In probability theory and statistics, two real-valued random variables, X, Y, are said to be uncorrelated if their covariance, \operatorname{cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y], is zero. If two variables are uncorrelated, there is no linear relationship between them. Uncorrelated random variables have a Pearson correlation coefficient, when it exists, of zero, except in the trivial case when either variable has zero variance (is a constant). In this case the correlation is undefined. In general, uncorrelatedness is not the same as orthogonality, except in the special case where at least one of the two random variables has an expected value of 0. In this case, the covariance is the expectation of the product, and X and Y are uncorrelated if and only if \operatorname{E}[XY] = 0. If X and Y are independent, with finite second moments, then they are uncorrelated. However, not all uncorrelated variables are independent. Definition: Definition for two real random varia ...
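A standard example of uncorrelated but dependent variables (chosen here for illustration; it does not appear in the excerpt): let X take the values -1, 0, 1 with equal probability and set Y = X^2. Then
:\operatorname{cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] = \operatorname{E}[X^3] - 0 \cdot \operatorname{E}[Y] = 0,
so X and Y are uncorrelated, yet Y is completely determined by X, so they are not independent.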
Covariance Matrix
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2 \times 2 matrix would be necessary to fully characterize the two-dimensional variation. Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself). The covariance matrix of a random vector \mathbf{X} is typically denoted by \operatorname{K}_{\mathbf{X}\mathbf{X}}, \Sigma, or S. Definition: Throughout this article, boldfaced u ...
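For the two-dimensional example discussed above, the covariance matrix of a random vector \mathbf{X} = (X_1, X_2)^{\mathsf T} takes the form (written out here for concreteness, using the \operatorname{K}_{\mathbf{X}\mathbf{X}} notation mentioned in the excerpt):
:\operatorname{K}_{\mathbf{X}\mathbf{X}} = \begin{pmatrix} \operatorname{Var}(X_1) & \operatorname{cov}(X_1, X_2) \\ \operatorname{cov}(X_2, X_1) & \operatorname{Var}(X_2) \end{pmatrix},
with the variances on the main diagonal and the covariance of the pair off the diagonal; symmetry follows because \operatorname{cov}(X_1,X_2) = \operatorname{cov}(X_2,X_1).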
Complex Square
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i^2 = -1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols \mathbb C or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world. Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has ...
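As a small worked example tied to this entry's title (the particular number is illustrative only): squaring a complex number a + bi uses i^2 = -1,
:(a + bi)^2 = a^2 + 2abi + b^2 i^2 = (a^2 - b^2) + 2ab\,i, \qquad \text{e.g. } (1 + 2i)^2 = 1 - 4 + 4i = -3 + 4i.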
Pseudo-covariance
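For complex random variables as described in the first entry above, the pseudo-covariance (also called the complementary covariance or relation; the symbol \operatorname{J} is one common convention and is an assumption here) of Z and W is
:\operatorname{J}_{ZW} = \operatorname{E}\left[(Z - \operatorname{E}[Z])(W - \operatorname{E}[W])\right],
which, unlike the ordinary covariance, does not conjugate the second factor; for the complex normal distribution above it corresponds to the relation matrix C.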