In probability and statistics, a random variable, random quantity, aleatory variable, or stochastic variable is described informally as a variable whose values depend on outcomes of a random phenomenon. The formal mathematical treatment of random variables is a topic in probability theory. In that context, a random variable is understood as a measurable function defined on a probability space that maps from the sample space to the real numbers.
A random variable's possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the possible outcomes of a past experiment whose already-existing value is uncertain (for example, because of imprecise measurements or quantum uncertainty). They may also conceptually represent either the results of an "objectively" random process (such as rolling a die) or the "subjective" randomness that results from incomplete knowledge of a quantity. The meaning of the probabilities assigned to the potential values of a random variable is not part of probability theory itself, but is instead related to philosophical arguments over the interpretation of probability. The mathematics works the same regardless of the particular interpretation in use.
As a function, a random variable is required to be measurable, which allows for probabilities to be assigned to sets of its potential values. It is common that the outcomes depend on some physical variables that are not predictable. For example, when tossing a fair coin, the final outcome of heads or tails depends on the uncertain physical conditions, so the outcome being observed is uncertain. The coin could get caught in a crack in the floor, but such a possibility is excluded from consideration.
The domain of a random variable is called a ''sample space,'' defined as the set of possible outcomes of a non-deterministic event. For example, in the event of a coin toss, only two outcomes are possible: heads or tails.
A random variable has a probability distribution, which specifies the probability of Borel subsets of its range. Random variables can be discrete, that is, taking any of a specified finite or countable list of values (having a countable range), endowed with a probability mass function that is characteristic of the random variable's probability distribution; or continuous, taking any numerical value in an interval or collection of intervals (having an uncountable range), via a probability density function that is characteristic of the random variable's probability distribution; or a mixture of both.
Two random variables with the same probability distribution can still differ in terms of their associations with, or independence from, other random variables. The realizations of a random variable, that is, the results of randomly choosing values according to the variable's probability distribution function, are called random variates.
Although the idea was originally introduced by Christiaan Huygens, the first person to think systematically in terms of random variables was Pafnuty Chebyshev.

Definition

A random variable is a measurable function $X \colon \Omega \to E$ from a set of possible outcomes $\Omega$ to a measurable space $E$. The technical axiomatic definition requires $\Omega$ to be a sample space of a probability triple $(\Omega, \mathcal{F}, \operatorname{P})$ (see the measure-theoretic definition). A random variable is often denoted by capital roman letters such as $X$, $Y$, $Z$, $T$. The probability that $X$ takes on a value in a measurable set $S \subseteq E$ is written as
:$\operatorname{P}(X \in S) = \operatorname{P}(\{\omega \in \Omega \mid X(\omega) \in S\})$

Standard case

In many cases, $X$ is real-valued, i.e. $E = \mathbb{R}$. In some contexts, the term random element (see extensions) is used to denote a random variable not of this form. When the image (or range) of $X$ is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of $X$. If the image is uncountably infinite (usually an interval) then $X$ is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous; a mixture distribution is one such counterexample. Such random variables cannot be described by a probability density or a probability mass function. Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.

Extensions

The term "random variable" in statistics is traditionally limited to the real-valued case ($E = \mathbb{R}$). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution. However, the definition above is valid for any measurable space $E$ of values. Thus one can consider random elements of other sets $E$, such as random boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a ''random variable of type $E$'', or an ''$E$-valued random variable''. This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of $E$ using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space $\Omega$, which allows the different random variables to covary). For example:
*A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are $(1\ 0\ 0\ 0\ \cdots)$, $(0\ 1\ 0\ 0\ \cdots)$, $(0\ 0\ 1\ 0\ \cdots)$ and the position of the 1 indicates the word.
*A random sentence of given length $N$ may be represented as a vector of $N$ random words.
*A random graph on $N$ given vertices may be represented as an $N \times N$ matrix of random variables, whose values specify the adjacency matrix of the random graph.
*A random function $F$ may be represented as a collection of random variables $F(x)$, giving the function's values at the various points $x$ in the function's domain. The $F(x)$ are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as $1, 2, \ldots, n$, and a random field is a random function on any set (typically time, space, or a discrete set).
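The two representations of a random word described above can be sketched in a few lines of Python; the four-word vocabulary here is purely illustrative.

```python
import random

# Hypothetical 4-word vocabulary; the words are illustrative only.
vocabulary = ["the", "cat", "sat", "down"]

random.seed(0)

# A random word represented as a random integer that indexes the vocabulary.
word_index = random.randrange(len(vocabulary))

# The same random word as an indicator (one-hot) vector: the position
# of the single 1 identifies the word.
indicator = [1 if i == word_index else 0 for i in range(len(vocabulary))]

print(vocabulary[word_index], indicator)
```

Both representations carry the same information; the indicator vector is simply a real-valued encoding of the categorical outcome.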

Distribution functions

If a random variable $X \colon \Omega \to \mathbb{R}$ defined on the probability space $(\Omega, \mathcal{F}, \operatorname{P})$ is given, we can ask questions like "How likely is it that the value of $X$ is equal to 2?". This is the same as the probability of the event $\{\omega : X(\omega) = 2\}$, which is often written as $P(X = 2)$ or $p_X(2)$ for short. Recording all these probabilities of output ranges of a real-valued random variable $X$ yields the probability distribution of $X$. The probability distribution "forgets" about the particular probability space used to define $X$ and only records the probabilities of various values of $X$. Such a probability distribution can always be captured by its cumulative distribution function
:$F_X(x) = \operatorname{P}(X \le x)$
and sometimes also using a probability density function, $p_X$. In measure-theoretic terms, we use the random variable $X$ to "push forward" the measure $P$ on $\Omega$ to a measure $p_X$ on $\mathbb{R}$. The underlying probability space $\Omega$ is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space $\Omega$ altogether and just puts a measure on $\mathbb{R}$ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.
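The distribution function of a simple random variable can be estimated empirically; the following sketch (with an arbitrary seed and sample size) records $p_X(2)$ and the CDF for a fair six-sided die.

```python
import random

random.seed(42)

# X: outcome of a fair six-sided die; we approximate its distribution
# (the push-forward of P onto the reals) from repeated samples.
samples = [random.randint(1, 6) for _ in range(100_000)]

# p_X(2): probability mass at the value 2, approximated by relative frequency.
p_2 = sum(1 for x in samples if x == 2) / len(samples)

# F_X(x) = P(X <= x): the empirical cumulative distribution function.
def F_X(x):
    return sum(1 for s in samples if s <= x) / len(samples)

print(p_2, F_X(3))  # close to 1/6 and 1/2, respectively
```

Note that the empirical distribution retains no information about the underlying sample space, only the probabilities of the values of $X$.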

Examples

Discrete random variable

In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum $\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots$. In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, so that, for example, questions of whether such random variables are correlated can be posed.
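The infinite sum over even values can be computed concretely once a PMF is fixed. As an illustration only (the article does not fix a distribution for the number of children), assume it is Poisson with mean 2:

```python
import math

# Illustrative assumption: number of children ~ Poisson(lam), lam = 2.
lam = 2.0

def pmf(k):
    # Poisson probability mass function: e^{-lam} lam^k / k!
    return math.exp(-lam) * lam**k / math.factorial(k)

# P(even) = pmf(0) + pmf(2) + pmf(4) + ...; terms beyond k = 100 are
# negligibly small, so the infinite sum is truncated there.
p_even = sum(pmf(k) for k in range(0, 101, 2))

# Known closed form for a Poisson: P(even) = (1 + e^{-2 lam}) / 2.
closed_form = (1 + math.exp(-2 * lam)) / 2
print(p_even, closed_form)
```

The truncated sum agrees with the closed form to floating-point precision, since the omitted tail is vanishingly small.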
If $\{a_n\}$ is a countable set of real numbers, $b_n > 0$ and $\sum_n b_n = 1$, then $F = \sum_n b_n \delta_{a_n}(x)$ is a discrete distribution function. Here $\delta_t(x) = 0$ for $x < t$, $\delta_t(x) = 1$ for $x \ge t$. Taking for instance an enumeration of all rational numbers as $\{a_n\}$, one gets a discrete distribution function that is not a step function or piecewise constant.

Coin toss

The possible outcomes for one coin toss can be described by the sample space $\Omega = \{\text{heads}, \text{tails}\}$. We can introduce a real-valued random variable ''Y'' that models a $1 payoff for a successful bet on heads as follows:
:$Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\ 0, & \text{if } \omega = \text{tails}. \end{cases}$
If the coin is a fair coin, ''Y'' has a probability mass function $f_Y$ given by:
:$f_Y(y) = \begin{cases} \tfrac{1}{2}, & \text{if } y = 1, \\ \tfrac{1}{2}, & \text{if } y = 0. \end{cases}$
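The payoff variable ''Y'' above can be sketched directly; the simulation (with an arbitrary seed) checks that the average payoff of a fair coin approaches $\operatorname{E}[Y] = 1/2$.

```python
import random

random.seed(1)

# Y models a $1 payoff on heads: Y(heads) = 1, Y(tails) = 0.
def Y(omega):
    return 1 if omega == "heads" else 0

# For a fair coin, the probability mass function puts 1/2 on each value.
f_Y = {1: 0.5, 0: 0.5}

# Empirical check: the mean payoff over many tosses approaches 1/2.
tosses = [random.choice(["heads", "tails"]) for _ in range(100_000)]
mean_payoff = sum(Y(w) for w in tosses) / len(tosses)
print(mean_payoff)
```

Here the sample space is the two-element set, and ''Y'' is literally a function on it, as in the formal definition.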

Dice roll

A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers ''n''_{1} and ''n''_{2} from $\{1, 2, 3, 4, 5, 6\}$ (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable ''X'' given by the function that maps the pair to the sum:
:$X((n\_1,\; n\_2))\; =\; n\_1\; +\; n\_2$
and (if the dice are fair) has a probability mass function ''ƒ''_{''X''} given by:
:$f_X(S) = \frac{\min(S - 1,\, 13 - S)}{36}, \text{ for } S \in \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}$
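The probability mass function of the two-dice sum can be obtained by enumerating the 36 equally likely pairs, and it agrees with the closed form above:

```python
from collections import Counter
from fractions import Fraction

# Sample space: ordered pairs (n1, n2), each die showing 1..6,
# all 36 pairs equally likely.
pairs = [(n1, n2) for n1 in range(1, 7) for n2 in range(1, 7)]

# X maps each pair to its sum; counting outcomes per value gives the pmf.
counts = Counter(n1 + n2 for n1, n2 in pairs)
f_X = {s: Fraction(c, 36) for s, c in counts.items()}

# Matches the closed form min(S - 1, 13 - S)/36 for S in 2..12.
for S in range(2, 13):
    assert f_X[S] == Fraction(min(S - 1, 13 - S), 36)

print(f_X[7])  # 1/6
```

Exact rational arithmetic via `Fraction` avoids any floating-point rounding in the comparison.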

Continuous random variable

Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value ''c'' (formally, $\forall c \in \mathbb{R}: \Pr(X = c) = 0$) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part. An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, ''X'' = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any ''range'' of values. For example, the probability of choosing a number in [0, 180] is 1/2. Instead of speaking of a probability mass function, we say that the probability ''density'' of ''X'' is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360.
In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. More formally, given any interval $I = [a, b] = \{x \in \mathbb{R} : a \le x \le b\}$, a random variable $X_I \sim \operatorname{U}(I) = \operatorname{U}[a, b]$ is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of $X_I$ falling in any subinterval $[c, d] \subseteq [a, b]$ is proportional to the length of the subinterval, that is, for $a \le c \le d \le b$, one has
:$\Pr\left(X_I \in [c, d]\right) = \frac{d - c}{b - a}\Pr\left(X_I \in I\right) = \frac{d - c}{b - a}$
where the last equality results from the unitarity axiom of probability. The probability density function of a CURV $X \sim \operatorname{U}[a, b]$ is given by the indicator function of its interval of support normalized by the interval's length:
:$f_X(x) = \begin{cases} \displaystyle\frac{1}{b - a}, & a \le x \le b \\ 0, & \text{otherwise}. \end{cases}$
Of particular interest is the uniform distribution on the unit interval $[0, 1]$. Samples of any desired probability distribution $\operatorname{D}$ can be generated by calculating the quantile function of $\operatorname{D}$ on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
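The sampling recipe just described (applying a quantile function to a uniform draw) is known as inverse-transform sampling. A minimal sketch, using the exponential distribution with rate 1, whose quantile function is $-\log(1 - u)$:

```python
import math
import random

random.seed(7)

# Inverse-transform sampling: apply the target distribution's quantile
# function to U ~ Uniform[0, 1). Target here: Exponential(rate = 1).
def sample_exponential():
    u = random.random()          # uniform on the unit interval
    return -math.log(1.0 - u)    # quantile function of Exp(1)

draws = [sample_exponential() for _ in range(100_000)]
mean = sum(draws) / len(draws)
print(mean)  # should be near E[X] = 1
```

The same pattern works for any distribution whose quantile function can be evaluated, which is why the unit-interval uniform plays a central role in random-variate generation.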

Mixed type

A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable, in which case the CDF will be the weighted average of the CDFs of the component variables. An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, ''X'' = −1; otherwise ''X'' = the value of the spinner as in the preceding example. There is a probability of 1/2 that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. Most generally, every probability distribution on the real line is a mixture of a discrete part, a singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).

Measure-theoretic definition

The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals. The measure-theoretic definition is as follows. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(E, \mathcal{E})$ a measurable space. Then an $(E, \mathcal{E})$-valued random variable is a measurable function $X \colon \Omega \to E$, which means that, for every subset $B \in \mathcal{E}$, its preimage $X^{-1}(B) \in \mathcal{F}$, where $X^{-1}(B) = \{\omega : X(\omega) \in B\}$. This definition enables us to measure any subset $B \in \mathcal{E}$ in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member of $\Omega$ is a possible outcome, a member of $\mathcal{F}$ is a measurable subset of possible outcomes, the function $P$ gives the probability of each such measurable subset, $E$ represents the set of values that the random variable can take (such as the set of real numbers), and a member of $\mathcal{E}$ is a "well-behaved" (measurable) subset of $E$ (those for which the probability may be determined).
The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. When $E$ is a topological space, then the most common choice for the σ-algebra $\mathcal{E}$ is the Borel σ-algebra $\mathcal{B}(E)$, which is the σ-algebra generated by the collection of all open sets in $E$. In such a case the $(E, \mathcal{E})$-valued random variable is called an $E$-valued random variable. Moreover, when the space $E$ is the real line $\mathbb{R}$, then such a real-valued random variable is called simply a random variable.

Real-valued random variables

In this case the observation space is the set of real numbers. Recall, $(\Omega, \mathcal{F}, P)$ is the probability space. For a real observation space, the function $X \colon \Omega \rightarrow \mathbb{R}$ is a real-valued random variable if
:$\{\omega : X(\omega) \le r\} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.$
This definition is a special case of the above because the set $\{(-\infty, r] : r \in \mathbb{R}\}$ generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that $\{\omega : X(\omega) \le r\} = X^{-1}((-\infty, r])$.

Moments

The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted $\operatorname{E}[X]$, and also called the first moment. In general, $\operatorname{E}[f(X)]$ is not equal to $f(\operatorname{E}[X])$. Once the "average value" is known, one could then ask how far from this average value the values of $X$ typically are, a question that is answered by the variance and standard deviation of a random variable. $\operatorname{E}[X]$ can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of $X$. Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables $X$, find a collection $\{f_i\}$ of functions such that the expectation values $\operatorname{E}[f_i(X)]$ fully characterise the distribution of the random variable $X$. Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function $f(X) = X$ of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable ''X'' that can take on the nominal values "red", "blue" or "green", the real-valued function $[X = \text{green}]$ can be constructed; this uses the Iverson bracket, and has the value 1 if $X$ has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.
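The Iverson-bracket moment of a categorical variable is simply the probability of the bracketed event. A sketch, with assumed probabilities 0.5, 0.3, 0.2 for the three colors (the article fixes no particular distribution):

```python
import random

random.seed(5)

# Illustrative categorical X over "red", "blue", "green" with assumed
# probabilities 0.5, 0.3, 0.2.
values = ["red", "blue", "green"]
weights = [0.5, 0.3, 0.2]

# The Iverson bracket [X = "green"]: a real-valued function of X.
def indicator_green(x):
    return 1 if x == "green" else 0

# E[[X = "green"]] equals P(X = "green"); estimate it by simulation.
draws = [random.choices(values, weights)[0] for _ in range(100_000)]
expected = sum(indicator_green(x) for x in draws) / len(draws)
print(expected)  # near P(X = "green") = 0.2
```

Higher moments of this indicator coincide with the first moment, since the indicator only takes the values 0 and 1.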

Functions of random variables

A new random variable ''Y'' can be defined by applying a real Borel measurable function $g \colon \mathbb{R} \rightarrow \mathbb{R}$ to the outcomes of a real-valued random variable $X$. That is, $Y = g(X)$. The cumulative distribution function of $Y$ is then
:$F_Y(y) = \operatorname{P}(g(X) \le y).$
If function $g$ is invertible (i.e., $h = g^{-1}$ exists, where $h$ is $g$'s inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain
:$F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le h(y)) = F_X(h(y)), & \text{if } h = g^{-1} \text{ increasing}, \\ \operatorname{P}(X \ge h(y)) = 1 - F_X(h(y)), & \text{if } h = g^{-1} \text{ decreasing}. \end{cases}$
With the same hypotheses of invertibility of $g$, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to $y$, in order to obtain
:$f_Y(y) = f_X\bigl(h(y)\bigr) \left| \frac{dh(y)}{dy} \right|.$
If there is no invertibility of $g$ but each $y$ admits at most a countable number of roots (i.e., a finite, or countably infinite, number of $x_i$ such that $y = g(x_i)$) then the previous relation between the probability density functions can be generalized with
:$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{dg_i^{-1}(y)}{dy} \right|$
where $x_i = g_i^{-1}(y)$, according to the inverse function theorem. The formulas for densities do not demand $g$ to be increasing.
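The invertible-case density formula can be checked numerically. The sketch below takes $X$ standard normal and the increasing map $g(x) = e^x$ (so $h(y) = \log y$, $|dh/dy| = 1/y$), and compares the formula against a finite-difference derivative of $F_Y(y) = F_X(\log y)$.

```python
import math

# f_X: standard normal density.
def f_X(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# F_X: standard normal CDF, via the error function.
def F_X(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Change-of-variables formula f_Y(y) = f_X(h(y)) |dh/dy| for g(x) = e^x.
def f_Y(y):
    h = math.log(y)
    return f_X(h) * (1.0 / y)    # |dh/dy| = 1/y

# Numerical derivative of F_Y(y) = F_X(log y) for comparison.
eps = 1e-6
y = 1.7
numeric = (F_X(math.log(y + eps)) - F_X(math.log(y - eps))) / (2 * eps)
print(f_Y(y), numeric)  # agree to several decimal places
```

The resulting density is that of a lognormal random variable, as expected for $e^X$ with $X$ normal.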
In the measure-theoretic, axiomatic approach to probability, if $X$ is a random variable on $\Omega$ and $g \colon \mathbb{R} \rightarrow \mathbb{R}$ is a Borel measurable function, then $Y = g(X)$ is also a random variable on $\Omega$, since the composition of measurable functions is also measurable. (However, this is not necessarily true if $g$ is merely Lebesgue measurable.) The same procedure that allowed one to go from a probability space $(\Omega, P)$ to $(\mathbb{R}, dF_X)$ can be used to obtain the distribution of $Y$.

Example 1

Let $X$ be a real-valued, continuous random variable and let $Y = X^2$.
:$F_Y(y) = \operatorname{P}(X^2 \le y).$
If $y < 0$, then $P(X^2 \le y) = 0$, so
:$F_Y(y) = 0 \qquad \text{if } y < 0.$
If $y \ge 0$, then
:$\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),$
so
:$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y}) \qquad \text{if } y \ge 0.$

Example 2

Suppose $X$ is a random variable with a cumulative distribution
:$F_X(x) = P(X \le x) = \frac{1}{(1 + e^{-x})^{\theta}}$
where $\theta > 0$ is a fixed parameter. Consider the random variable $Y = \mathrm{log}(1 + e^{-X}).$ Then,
:$F_Y(y) = P(Y \le y) = P(\mathrm{log}(1 + e^{-X}) \le y) = P(X \ge -\mathrm{log}(e^{y} - 1)).\,$
The last expression can be calculated in terms of the cumulative distribution of $X,$ so
:$\begin{align} F_Y(y) & = 1 - F_X(-\log(e^{y} - 1)) \\ & = 1 - \frac{1}{(1 + e^{\log(e^{y} - 1)})^{\theta}} \\ & = 1 - \frac{1}{(e^{y})^{\theta}} \\ & = 1 - e^{-y\theta}, \end{align}$
which is the cumulative distribution function (CDF) of an exponential distribution.
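As a numerical sanity check of this derivation, one can sample $X$ by inverse transform (solving $F_X(x) = u$ gives $x = -\log(u^{-1/\theta} - 1)$) and verify empirically that $Y$ follows the exponential CDF $1 - e^{-\theta y}$. Seed, $\theta$, and sample size below are arbitrary.

```python
import math
import random

random.seed(11)
theta = 2.0

# Sample X with CDF F_X(x) = (1 + e^{-x})^{-theta} by inverse transform:
# solving F_X(x) = u for x gives x = -log(u^{-1/theta} - 1).
def sample_X():
    u = random.random()
    while u == 0.0:  # avoid division by zero in u**(-1/theta)
        u = random.random()
    return -math.log(u ** (-1.0 / theta) - 1.0)

# Y = log(1 + e^{-X}) should be Exponential with rate theta.
ys = [math.log(1 + math.exp(-sample_X())) for _ in range(100_000)]

y0 = 0.5
empirical = sum(1 for y in ys if y <= y0) / len(ys)
theoretical = 1 - math.exp(-theta * y0)
print(empirical, theoretical)
```

The empirical CDF at $y_0$ matches $1 - e^{-\theta y_0}$ up to sampling noise, confirming the closed-form result.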

Example 3

Suppose $X$ is a random variable with a standard normal distribution, whose density is
:$f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}.$
Consider the random variable $Y = X^2.$ We can find the density using the above formula for a change of variables:
:$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{dg_i^{-1}(y)}{dy} \right|.$
In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,
:$f_Y(y) = 2f_X(g^{-1}(y)) \left| \frac{dg^{-1}(y)}{dy} \right|.$
The inverse transformation is
:$x = g^{-1}(y) = \sqrt{y}$
and its derivative is
:$\frac{dg^{-1}(y)}{dy} = \frac{1}{2\sqrt{y}}.$
Then,
:$f_Y(y) = 2\frac{1}{\sqrt{2\pi}}e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}e^{-y/2}.$
This is a chi-squared distribution with one degree of freedom.
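This chi-squared density can be cross-checked two ways: by simulating $X^2$ for normal draws, and by numerically integrating the derived density; both should give $P(Y \le 1) = P(|X| \le 1) \approx 0.6827$. Seed and discretization choices are arbitrary.

```python
import math
import random

random.seed(13)

# Density of Y = X^2 for standard normal X, from the change of variables.
def f_Y(y):
    return math.exp(-y / 2) / math.sqrt(2 * math.pi * y)

# Monte Carlo estimate of P(Y <= 1) from squared normal draws.
ys = [random.gauss(0, 1) ** 2 for _ in range(100_000)]
empirical = sum(1 for y in ys if y <= 1.0) / len(ys)

# Midpoint-rule integration of f_Y over (0, 1] (the 1/sqrt(y)
# singularity at 0 is integrable).
n = 10_000
integral = sum(f_Y((k + 0.5) / n) for k in range(n)) / n
print(empirical, integral)
```

Both estimates land near 0.6827, the probability that a standard normal lies within one standard deviation of zero.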

Example 4

Suppose $X$ is a random variable with a normal distribution, whose density is
:$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-(x - \mu)^2/(2\sigma^2)}.$
Consider the random variable $Y = X^2.$ We can find the density using the above formula for a change of variables:
:$f_Y(y) = \sum_i f_X(g_i^{-1}(y)) \left| \frac{dg_i^{-1}(y)}{dy} \right|.$
In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). Unlike the previous example, however, there is no symmetry here, and we have to compute the two distinct terms:
:$f_Y(y) = f_X(g_1^{-1}(y))\left|\frac{dg_1^{-1}(y)}{dy}\right| + f_X(g_2^{-1}(y))\left|\frac{dg_2^{-1}(y)}{dy}\right|.$
The inverse transformation is
:$x = g_{1,2}^{-1}(y) = \pm\sqrt{y}$
and its derivative is
:$\frac{dg_{1,2}^{-1}(y)}{dy} = \pm\frac{1}{2\sqrt{y}}.$
Then,
:$f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \frac{1}{2\sqrt{y}} \left(e^{-(\sqrt{y} - \mu)^2/(2\sigma^2)} + e^{-(\sqrt{y} + \mu)^2/(2\sigma^2)}\right).$
This is a noncentral chi-squared distribution with one degree of freedom.

Some properties

* The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
* Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
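The convolution property can be illustrated with the two-dice example: convolving the pmf of one fair die with itself reproduces the pmf of the sum derived earlier.

```python
# pmf of one fair die on the values 1..6.
die = [1 / 6] * 6

# Discrete convolution: P(X + Y = s) = sum_k P(X = k) P(Y = s - k).
conv = [0.0] * 11  # sums 2..12
for i in range(6):
    for j in range(6):
        conv[i + j] += die[i] * die[j]

# Re-index so keys are the actual sums 2..12.
pmf_sum = {s + 2: p for s, p in enumerate(conv)}
print(pmf_sum[7])  # = 6/36
```

Independence is essential here: without it, the joint distribution need not factor, and the sum's distribution is no longer a convolution of the marginals.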

Equivalence of random variables

There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below.

Equality in distribution

If the sample space is a subset of the real line, random variables ''X'' and ''Y'' are ''equal in distribution'' (denoted $X \stackrel{d}{=} Y$) if they have the same distribution functions:
:$\operatorname{P}(X \le x) = \operatorname{P}(Y \le x) \quad \text{for all } x.$
To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.
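Equality in distribution is strictly weaker than equality as functions. A sketch: for a fair coin, the indicator of heads $X$ and $Y = 1 - X$ live on the same sample space and have identical distributions, yet they never take the same value on any outcome.

```python
import random

random.seed(17)

# X = indicator of heads; Y = 1 - X. Different functions on the same
# sample space, but equal in distribution for a fair coin.
omegas = [random.choice(["heads", "tails"]) for _ in range(100_000)]
X = [1 if w == "heads" else 0 for w in omegas]
Y = [1 - x for x in X]

# Same distribution: both put (approximately) mass 1/2 on the value 1...
p_X1 = sum(X) / len(X)
p_Y1 = sum(Y) / len(Y)

# ...yet X and Y disagree on every single outcome.
agree = sum(1 for x, y in zip(X, Y) if x == y) / len(X)
print(p_X1, p_Y1, agree)
```

Here $\operatorname{P}(X \ne Y) = 1$, so $X$ and $Y$ are as far from almost-sure equality as possible while still being equal in distribution.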

Almost sure equality

Two random variables ''X'' and ''Y'' are ''equal almost surely'' (denoted $X \stackrel{\text{a.s.}}{=} Y$) if, and only if, the probability that they are different is zero:
:$\operatorname{P}(X \neq Y) = 0.$
For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:
:$d_\infty(X, Y) = \operatorname{ess\,sup}_\omega |X(\omega) - Y(\omega)|,$
where "ess sup" represents the essential supremum in the sense of measure theory.

Equality

Finally, the two random variables ''X'' and ''Y'' are ''equal'' if they are equal as functions on their measurable space: :$X(\backslash omega)=Y(\backslash omega)\backslash qquad\backslash hbox\backslash omega.$ This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.

Convergence

A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem. There are various senses in which a sequence $X\_n$ of random variables can converge to a random variable $X$. These are explained in the article on convergence of random variables.

See also

*Aleatoricism *Algebra of random variables *Event (probability theory) *Multivariate random variable *Pairwise independent random variables *Observable variable *Random element *Random function *Random measure *Random number generator produces a random value *Random vector *Randomness *Stochastic process *Relationships among probability distributions

References

** Inline citations **

Literature

* * * *

External links

* * * {{DEFAULTSORT:Random Variable Category:Statistical randomness

Definition

A random variable is a measurable function $X \colon \Omega \to E$ from a set of possible outcomes $\Omega$ to a measurable space $E$. The technical axiomatic definition requires $\Omega$ to be a sample space of a probability triple $(\Omega, \mathcal{F}, \operatorname{P})$ (see the measure-theoretic definition). A random variable is often denoted by capital roman letters such as $X$, $Y$, $Z$, $T$. The probability that $X$ takes on a value in a measurable set $S \subseteq E$ is written as

:$\operatorname{P}(X \in S) = \operatorname{P}(\{\omega \in \Omega \mid X(\omega) \in S\})$
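As an illustrative sketch, a random variable is simply a function on a sample space, and $\operatorname{P}(X \in S)$ is computed from the preimage of $S$; the two-coin sample space below is hypothetical:

```python
# A toy probability space for two fair-coin tosses: each outcome in the
# sample space Omega is equally likely (hypothetical example).
omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def X(outcome):
    """Random variable X = number of heads, a function Omega -> R."""
    return sum(1 for toss in outcome if toss == "H")

def prob(event):
    """P(X in S), computed as P({omega : X(omega) in S}) under the uniform measure."""
    return sum(1 for o in omega if X(o) in event) / len(omega)

print(prob({1}))     # P(X = 1) -> 0.5
print(prob({0, 2}))  # P(X in {0, 2}) -> 0.5
```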

Standard case

In many cases, $X$ is real-valued, i.e. $E = \mathbb{R}$. In some contexts, the term random element (see extensions) is used to denote a random variable not of this form. When the image (or range) of $X$ is countable, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. it can be described by a probability mass function that assigns a probability to each value in the image of $X$. If the image is uncountably infinite (usually an interval) then $X$ is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous; a mixture distribution is one such counterexample. Such random variables cannot be described by a probability density or a probability mass function. Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.

Extensions

The term "random variable" in statistics is traditionally limited to the real-valued case ($E=\mathbb{R}$). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution. However, the definition above is valid for any measurable space $E$ of values. Thus one can consider random elements of other sets $E$, such as random boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a ''random variable of type $E$'', or an ''$E$-valued random variable''. This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of $E$ using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space $\Omega$, which allows the different random variables to covary). For example:
*A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are $(1\ 0\ 0\ 0\ \cdots)$, $(0\ 1\ 0\ 0\ \cdots)$, $(0\ 0\ 1\ 0\ \cdots)$ and the position of the 1 indicates the word.
*A random sentence of given length $N$ may be represented as a vector of $N$ random words.
*A random graph on $N$ given vertices may be represented as an $N \times N$ matrix of random variables, whose values specify the adjacency matrix of the random graph.
*A random function $F$ may be represented as a collection of random variables $F(x)$, giving the function's values at the various points $x$ in the function's domain. The $F(x)$ are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as $1, 2, \ldots, n$, and a random field is a random function on any set (typically time, space, or a discrete set).
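The random-word representation can be sketched in a few lines; the three-word vocabulary is hypothetical:

```python
import random

# A random word represented two ways: as a random index into a vocabulary,
# and as the equivalent indicator (one-hot) vector whose single 1 marks
# the chosen word.
vocab = ["red", "green", "blue"]

def random_word_index():
    return random.randrange(len(vocab))  # integer-valued random variable

def as_indicator(index):
    vec = [0] * len(vocab)
    vec[index] = 1                       # position of the 1 identifies the word
    return vec

i = random_word_index()
print(vocab[i], as_indicator(i))
```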

Distribution functions

If a random variable $X\colon \Omega \to \mathbb{R}$ defined on the probability space $(\Omega, \mathcal{F}, \operatorname{P})$ is given, we can ask questions like "How likely is it that the value of $X$ is equal to 2?". This is the same as the probability of the event $\{\omega : X(\omega) = 2\}$, which is often written as $P(X = 2)$ or $p_X(2)$ for short. Recording all these probabilities of output ranges of a real-valued random variable $X$ yields the probability distribution of $X$. The probability distribution "forgets" about the particular probability space used to define $X$ and only records the probabilities of various values of $X$. Such a probability distribution can always be captured by its cumulative distribution function

:$F_X(x) = \operatorname{P}(X \le x)$

and sometimes also using a probability density function, $p_X$. In measure-theoretic terms, we use the random variable $X$ to "push forward" the measure $P$ on $\Omega$ to a measure $p_X$ on $\mathbb{R}$. The underlying probability space $\Omega$ is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space $\Omega$ altogether and just puts a measure on $\mathbb{R}$ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.
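The push-forward idea can be sketched on a finite, hypothetical probability space:

```python
# X "pushes forward" the measure on Omega to a distribution on the reals:
# F_X(x) = P(X <= x) is tabulated directly from the probability space.
omega_probs = {"a": 0.2, "b": 0.3, "c": 0.5}  # measure P on Omega (hypothetical)
X = {"a": 1.0, "b": 2.0, "c": 2.0}            # X : Omega -> R

def cdf(x):
    """F_X(x) = P({omega : X(omega) <= x})."""
    return sum(p for o, p in omega_probs.items() if X[o] <= x)

print(cdf(1.0))  # 0.2
print(cdf(2.0))  # 1.0
```

Note that the distribution has forgotten $\Omega$: the outcomes "b" and "c" are indistinguishable once only the value $2.0$ and its total probability $0.8$ are recorded.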

Examples

Discrete random variable

In an experiment a person may be chosen at random, and one random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to the person's height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum $\operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots$. In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.
If $\{a_n\}$, $\{b_n\}$ are countable sets of real numbers, $b_n > 0$ and $\sum_n b_n = 1$, then $F = \sum_n b_n \delta_{a_n}$ is a discrete distribution function. Here $\delta_t(x) = 0$ for $x < t$, $\delta_t(x) = 1$ for $x \ge t$. Taking for instance an enumeration of all rational numbers as $\{a_n\}$, one gets a discrete distribution function that is not a step function or piecewise constant.
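The construction above can be sketched with a finite support (an illustrative truncation of the countable case):

```python
# F = sum_n b_n * delta_{a_n}, where delta_t(x) is 0 for x < t and 1 for x >= t.
def delta(t):
    return lambda x: 1.0 if x >= t else 0.0

a = [0.0, 1.0, 2.0]    # support points (finite here for the sketch)
b = [0.5, 0.25, 0.25]  # weights: positive and summing to 1

def F(x):
    return sum(bn * delta(an)(x) for an, bn in zip(a, b))

print(F(-1))   # 0.0
print(F(0.5))  # 0.5
print(F(2.0))  # 1.0
```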

Coin toss

The possible outcomes for one coin toss can be described by the sample space $\Omega = \{\text{heads}, \text{tails}\}$. We can introduce a real-valued random variable $Y$ that models a $1 payoff for a successful bet on heads as follows:

:$Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\ 0, & \text{if } \omega = \text{tails}. \end{cases}$

If the coin is a fair coin, ''Y'' has a probability mass function $f_Y$ given by:

:$f_Y(y) = \begin{cases} \tfrac{1}{2}, & \text{if } y = 1, \\ \tfrac{1}{2}, & \text{if } y = 0. \end{cases}$
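A minimal sketch of $Y$ and its mass function:

```python
import random

# The payoff variable Y from the text: 1 for heads, 0 for tails,
# on the sample space {heads, tails} of a fair coin.
def Y(outcome):
    return 1 if outcome == "heads" else 0

def f_Y(y):
    """Probability mass function of Y for a fair coin."""
    return 0.5 if y in (0, 1) else 0.0

sample = Y(random.choice(["heads", "tails"]))
print(sample, f_Y(sample))
```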

Dice roll

A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers $(n_1, n_2)$ from $\{1, 2, 3, 4, 5, 6\}$ (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable $X$ given by the function that maps the pair to the sum:

:$X((n_1, n_2)) = n_1 + n_2$

and (if the dice are fair) has a probability mass function $f_X$ given by:

:$f_X(S) = \frac{\min(S - 1,\, 13 - S)}{36}, \text{ for } S \in \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\}.$
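The two-dice sample space and the distribution of the total can be tabulated directly; a brief sketch:

```python
from collections import Counter

# Sum of two fair dice: the random variable X((n1, n2)) = n1 + n2 on the
# 36-outcome sample space, and its probability mass function.
outcomes = [(n1, n2) for n1 in range(1, 7) for n2 in range(1, 7)]
counts = Counter(n1 + n2 for n1, n2 in outcomes)
pmf = {s: c / 36 for s, c in counts.items()}

print(pmf[7])  # 6/36, the most likely total
print(pmf[2])  # 1/36
```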

Continuous random variable

Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value ''c'' (formally, $\forall c \in \mathbb{R}:\; \Pr(X = c) = 0$) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part. An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, ''X'' = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any ''range'' of values. For example, the probability of choosing a number in [0, 180] is 1/2. Instead of speaking of a probability mass function, we say that the probability ''density'' of ''X'' is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360.
In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. More formally, given any interval $I = [a, b] = \{x \in \mathbb{R} : a \le x \le b\}$, a random variable $X_I \sim \operatorname{U}(I) = \operatorname{U}[a, b]$ is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of $X_I$ falling in any subinterval $[c, d] \subseteq [a, b]$ is proportional to the length of the subinterval, that is, if $a \le c \le d \le b$, one has

:$\Pr\left( X_I \in [c, d] \right) = \frac{d - c}{b - a}\Pr\left( X_I \in I \right) = \frac{d - c}{b - a}$

where the last equality results from the unitarity axiom of probability. The probability density function of a CURV $X \sim \operatorname{U}[a, b]$ is given by the indicator function of its interval of support normalized by the interval's length:

:$f_X(x) = \begin{cases} \dfrac{1}{b - a}, & a \le x \le b \\ 0, & \text{otherwise}. \end{cases}$

Of particular interest is the uniform distribution on the unit interval $[0, 1]$. Samples of any desired probability distribution $\operatorname{D}$ can be generated by calculating the quantile function of $\operatorname{D}$ on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
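The quantile-function sampling idea can be sketched in a few lines; the target distribution here is Exponential(1), chosen only as an illustration:

```python
import math
import random

# Inverse-transform sampling: applying the quantile function of a target
# distribution to a Uniform[0, 1] sample yields a sample from that
# distribution. For Exp(1), the quantile function is -log(1 - u).
def sample_exponential():
    u = random.random()        # uniform on the unit interval
    return -math.log(1.0 - u)  # quantile function of Exp(1)

samples = [sample_exponential() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to 1, the mean of Exp(1)
```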

Mixed type

A mixed random variable is a random variable whose cumulative distribution function is neither piecewise-constant (a discrete random variable) nor everywhere-continuous. It can be realized as the sum of a discrete random variable and a continuous random variable, in which case the CDF will be the weighted average of the CDFs of the component variables. An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, ''X'' = −1; otherwise ''X'' = the value of the spinner as in the preceding example. There is a probability of 1/2 that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. Most generally, every probability distribution on the real line is a mixture of a discrete part, a singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).
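The mixed variable described here can be simulated directly; a sketch:

```python
import random

# The mixed-type variable above: flip a fair coin; on tails X = -1,
# on heads X = the spinner angle, uniform on [0, 360).
def sample_X():
    if random.random() < 0.5:  # tails
        return -1.0
    return random.uniform(0.0, 360.0)

samples = [sample_X() for _ in range(100_000)]
p_minus_one = sum(1 for s in samples if s == -1.0) / len(samples)
print(p_minus_one)  # close to 1/2: the CDF jumps by 1/2 at -1
```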

Measure-theoretic definition

The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals. The measure-theoretic definition is as follows. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(E, \mathcal{E})$ a measurable space. Then an $(E, \mathcal{E})$-valued random variable is a measurable function $X\colon \Omega \to E$, which means that, for every subset $B \in \mathcal{E}$, its preimage satisfies $X^{-1}(B) \in \mathcal{F}$, where $X^{-1}(B) = \{\omega : X(\omega) \in B\}$. This definition enables us to measure any subset $B \in \mathcal{E}$ in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member of $\Omega$ is a possible outcome, a member of $\mathcal{F}$ is a measurable subset of possible outcomes, the function $P$ gives the probability of each such measurable subset, $E$ represents the set of values that the random variable can take (such as the set of real numbers), and a member of $\mathcal{E}$ is a "well-behaved" (measurable) subset of $E$ (those for which the probability may be determined).
The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. When $E$ is a topological space, then the most common choice for the σ-algebra $\mathcal{E}$ is the Borel σ-algebra $\mathcal{B}(E)$, which is the σ-algebra generated by the collection of all open sets in $E$. In such a case the $(E, \mathcal{E})$-valued random variable is called an $E$-valued random variable. Moreover, when the space $E$ is the real line $\mathbb{R}$, then such a real-valued random variable is called simply a random variable.

Real-valued random variables

In this case the observation space is the set of real numbers. Recall, $(\Omega, \mathcal{F}, P)$ is the probability space. For a real observation space, the function $X\colon \Omega \rightarrow \mathbb{R}$ is a real-valued random variable if

:$\{\omega : X(\omega) \le r\} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.$

This definition is a special case of the above because the set $\{(-\infty, r] : r \in \mathbb{R}\}$ generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that $\{\omega : X(\omega) \le r\} = X^{-1}((-\infty, r])$.

Moments

The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted $\operatorname{E}[X]$, and also called the first moment. In general, $\operatorname{E}[f(X)]$ is not equal to $f(\operatorname{E}[X])$. Once the "average value" is known, one could then ask how far from this average value the values of $X$ typically are, a question that is answered by the variance and standard deviation of a random variable. $\operatorname{E}[X]$ can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of $X$. Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables $X$, find a collection $\{f_i\}$ of functions such that the expectation values $\operatorname{E}[f_i(X)]$ fully characterise the distribution of the random variable $X$. Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function $f(X)=X$ of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable ''X'' that can take on the nominal values "red", "blue" or "green", the real-valued function $[X = \text{green}]$ can be constructed; this uses the Iverson bracket, and has the value 1 if $X$ has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.
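The Iverson-bracket example can be sketched numerically (assuming, for illustration only, a uniform choice among three colors):

```python
import random

# For a categorical X, the real-valued function [X = "green"] has moments
# even though X itself does not; its expected value is P(X = "green").
def iverson(x, value):
    return 1 if x == value else 0

colors = ["red", "blue", "green"]
samples = [random.choice(colors) for _ in range(90_000)]
expectation = sum(iverson(x, "green") for x in samples) / len(samples)
print(expectation)  # close to 1/3 for a uniform choice
```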

Functions of random variables

A new random variable ''Y'' can be defined by applying a real Borel measurable function $g\colon \mathbb{R} \rightarrow \mathbb{R}$ to the outcomes of a real-valued random variable $X$. That is, $Y = g(X)$. The cumulative distribution function of $Y$ is then

:$F_Y(y) = \operatorname{P}(g(X) \le y).$

If function $g$ is invertible (i.e., $h = g^{-1}$ exists, where $h$ is $g$'s inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain

:$F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le h(y)) = F_X(h(y)), & \text{if } h = g^{-1} \text{ increasing}, \\ \operatorname{P}(X \ge h(y)) = 1 - F_X(h(y)), & \text{if } h = g^{-1} \text{ decreasing}. \end{cases}$

With the same hypotheses of invertibility of $g$, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to $y$, in order to obtain

:$f_Y(y) = f_X\bigl(h(y)\bigr) \left| \frac{dh(y)}{dy} \right|.$

If there is no invertibility of $g$ but each $y$ admits at most a countable number of roots (i.e., a finite, or countably infinite, number of $x_i$ such that $y = g(x_i)$) then the previous relation between the probability density functions can be generalized with

:$f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{dg_{i}^{-1}(y)}{dy} \right|$

where $x_i = g_i^{-1}(y)$, according to the inverse function theorem. The formulas for densities do not demand $g$ to be increasing.
In the measure-theoretic, axiomatic approach to probability, if a random variable $X$ on $\Omega$ and a Borel measurable function $g\colon \mathbb{R} \rightarrow \mathbb{R}$ are given, then $Y = g(X)$ is also a random variable on $\Omega$, since the composition of measurable functions is also measurable. (However, this is not necessarily true if $g$ is Lebesgue measurable.) The same procedure that allowed one to go from a probability space $(\Omega, P)$ to $(\mathbb{R}, dF_X)$ can be used to obtain the distribution of $Y$.

Example 1

Let $X$ be a real-valued, continuous random variable and let $Y = X^2$.

:$F_Y(y) = \operatorname{P}(X^2 \le y).$

If $y < 0$, then $P(X^2 \leq y) = 0$, so

:$F_Y(y) = 0 \qquad \hbox{if} \quad y < 0.$

If $y \geq 0$, then

:$\operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}),$

so

:$F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y}) \qquad \hbox{if} \quad y \ge 0.$
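The identity can be checked numerically; as an assumption for this sketch only, take $X$ standard normal:

```python
import math
import random

# Monte Carlo check of F_Y(y) = F_X(sqrt(y)) - F_X(-sqrt(y)) for Y = X^2,
# with X standard normal (an illustrative choice, not required by the text).
def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

y = 1.0
theoretical = Phi(math.sqrt(y)) - Phi(-math.sqrt(y))

samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]
empirical = sum(1 for s in samples if s <= y) / len(samples)
print(theoretical, empirical)  # both near 0.6827
```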

Example 2

Suppose $X$ is a random variable with a cumulative distribution

:$F_X(x) = P(X \leq x) = \frac{1}{(1 + e^{-x})^{\theta}}$

where $\theta > 0$ is a fixed parameter. Consider the random variable $Y = \mathrm{log}(1 + e^{-X}).$ Then,

:$F_Y(y) = P(Y \leq y) = P(\mathrm{log}(1 + e^{-X}) \leq y) = P(X \geq -\mathrm{log}(e^{y} - 1)).$

The last expression can be calculated in terms of the cumulative distribution of $X,$ so

:$\begin{align} F_Y(y) & = 1 - F_X(-\log(e^y - 1)) \\ & = 1 - \frac{1}{(1 + e^{\log(e^y - 1)})^{\theta}} \\ & = 1 - \frac{1}{(e^y)^{\theta}} \\ & = 1 - e^{-y\theta}, \end{align}$

which is the cumulative distribution function (CDF) of an exponential distribution.
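The chain of equalities can be checked by simulation; the inverse CDF used below is derived from the $F_X$ above:

```python
import math
import random

theta = 2.0  # illustrative parameter value

def sample_X():
    # Inverse-transform sample from F_X(x) = (1 + e^{-x})^{-theta}:
    # solving u = F_X(x) gives x = -log(u^{-1/theta} - 1).
    u = min(max(random.random(), 1e-12), 1.0 - 1e-12)  # avoid the endpoints
    return -math.log(u ** (-1.0 / theta) - 1.0)

# Y = log(1 + e^{-X}) should be exponential with rate theta (mean 1/theta).
ys = [math.log(1.0 + math.exp(-sample_X())) for _ in range(200_000)]
mean_y = sum(ys) / len(ys)
print(mean_y)  # close to 1/theta = 0.5
```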

Example 3

Suppose $X$ is a random variable with a standard normal distribution, whose density is

:$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}.$

Consider the random variable $Y = X^2.$ We can find the density using the above formula for a change of variables:

:$f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{dg_{i}^{-1}(y)}{dy} \right|.$

In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,

:$f_Y(y) = 2 f_X(g^{-1}(y)) \left| \frac{dg^{-1}(y)}{dy} \right|.$

The inverse transformation is

:$x = g^{-1}(y) = \sqrt{y}$

and its derivative is

:$\frac{dg^{-1}(y)}{dy} = \frac{1}{2\sqrt{y}}.$

Then,

:$f_Y(y) = 2 \frac{1}{\sqrt{2\pi}} e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}} e^{-y/2}.$

This is a chi-squared distribution with one degree of freedom.
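A quick Monte Carlo check of the resulting density, comparing the probability of a small interval against the density times its width:

```python
import math
import random

# Chi-squared(1) density of Y = X^2 for standard normal X, as derived above.
def f_Y(y):
    return math.exp(-y / 2.0) / math.sqrt(2.0 * math.pi * y)

lo, hi = 1.0, 1.1
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(400_000)]
empirical = sum(1 for s in samples if lo <= s < hi) / len(samples)
approx = f_Y(1.05) * (hi - lo)  # midpoint-rule approximation
print(empirical, approx)  # both near 0.023
```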

Example 4

Suppose $X$ is a random variable with a normal distribution, whose density is

:$f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-(x - \mu)^2/(2\sigma^2)}.$

Consider the random variable $Y = X^2.$ We can find the density using the above formula for a change of variables:

:$f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{dg_{i}^{-1}(y)}{dy} \right|.$

In this case the change is not monotonic, because every value of $Y$ has two corresponding values of $X$ (one positive and one negative). Differently from the previous example, in this case however, there is no symmetry and we have to compute the two distinct terms:

:$f_Y(y) = f_X(g_1^{-1}(y)) \left| \frac{dg_1^{-1}(y)}{dy} \right| + f_X(g_2^{-1}(y)) \left| \frac{dg_2^{-1}(y)}{dy} \right|.$

The inverse transformation is

:$x = g_{1,2}^{-1}(y) = \pm \sqrt{y}$

and its derivative is

:$\frac{dg_{1,2}^{-1}(y)}{dy} = \pm \frac{1}{2\sqrt{y}}.$

Then,

:$f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \frac{1}{2\sqrt{y}} \left( e^{-(\sqrt{y} - \mu)^2/(2\sigma^2)} + e^{-(-\sqrt{y} - \mu)^2/(2\sigma^2)} \right).$

This is a noncentral chi-squared distribution with one degree of freedom.

Some properties

* The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
* Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
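The convolution property can be sketched for discrete distributions; here the two independent variables are fair dice (an illustrative choice):

```python
from collections import Counter

# PMF of the sum of two independent fair dice as the convolution of the
# two individual (uniform) PMFs.
die = {k: 1.0 / 6.0 for k in range(1, 7)}

def convolve(p, q):
    out = Counter()
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] += px * qy
    return dict(out)

pmf_sum = convolve(die, die)
print(pmf_sum[7])  # 6/36
```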

Equivalence of random variables

There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below.

Equality in distribution

If the sample space is a subset of the real line, random variables ''X'' and ''Y'' are ''equal in distribution'' (denoted $X \stackrel{d}{=} Y$) if they have the same distribution functions:

:$\operatorname{P}(X \le x) = \operatorname{P}(Y \le x) \quad \text{for all } x.$

To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.

Almost sure equality

Two random variables ''X'' and ''Y'' are ''equal almost surely'' (denoted $X \; \stackrel{\text{a.s.}}{=} \; Y$) if, and only if, the probability that they are different is zero:

:$\operatorname{P}(X \neq Y) = 0.$

For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:

:$d_\infty(X, Y) = \operatorname{ess\,sup}_\omega |X(\omega) - Y(\omega)|,$

where "ess sup" represents the essential supremum in the sense of measure theory.

Equality

Finally, the two random variables ''X'' and ''Y'' are ''equal'' if they are equal as functions on their measurable space:

:$X(\omega) = Y(\omega) \qquad \hbox{for all } \omega.$

This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.

Convergence

A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem. There are various senses in which a sequence $X\_n$ of random variables can converge to a random variable $X$. These are explained in the article on convergence of random variables.

See also

*Aleatoricism
*Algebra of random variables
*Event (probability theory)
*Multivariate random variable
*Pairwise independent random variables
*Observable variable
*Random element
*Random function
*Random measure
*Random number generator produces a random value
*Random vector
*Randomness
*Stochastic process
*Relationships among probability distributions
