In probability theory, the expected value of a random variable $X$, denoted $\operatorname{E}(X)$ or $\operatorname{E}[X]$, is a generalization of the weighted average, and is intuitively the arithmetic mean of a large number of independent realizations of $X$. The expected value is also known as the expectation, mathematical expectation, mean, average, or first moment. Expected value is a key concept in economics, finance, and many other subjects.

By definition, the expected value of a constant random variable $X = c$ is $c$. The expected value of a random variable $X$ with equiprobable outcomes $\{c_1, \ldots, c_n\}$ is defined as the arithmetic mean of the terms $c_i.$ If some of the probabilities $\Pr(X = c_i)$ of an individual outcome $c_i$ are unequal, then the expected value is defined to be the probability-weighted average of the $c_i$s, that is, the sum of the $n$ products $c_i \cdot \Pr(X = c_i)$. The expected value of a general random variable involves integration in the sense of Lebesgue.
History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes ''in a fair way'' between two players, who have to end their game before it is properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.
He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.
Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise, ''De ratiociniis in ludo aleæ'', on probability theory. In this book, he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense, this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.
Thus, Huygens learned about de Méré's problem in 1655 during his visit to France; later, in 1656, he learned from his correspondence with Carcavi that his method was essentially the same as Pascal's, so that before his book went to press in 1657 he knew about Pascal's priority in this subject.
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.
Etymology

Neither Pascal nor Huygens used the term "expectation" in its modern sense. More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract ''Théorie analytique des probabilités'', where the concept of expected value was defined explicitly.

Notations

The use of the letter $\operatorname{E}$ to denote expected value goes back to W. A. Whitworth in 1901. The symbol has since become popular with English writers. In German, $\operatorname{E}$ stands for "Erwartungswert", in Spanish for "esperanza matemática", and in French for "espérance mathématique". Another popular notation is $\mu_X$, whereas $\langle X \rangle$ is commonly used in physics, and $\operatorname{M}(X)$ in Russian-language literature.

Definition

Finite case

Let $X$ be a random variable with a finite number of finite outcomes $x_1, x_2, \ldots, x_k$ occurring with probabilities $p_1, p_2, \ldots, p_k,$ respectively. The expectation of $X$ is defined as
:$\operatorname{E}[X] = \sum_{i=1}^k x_i\,p_i = x_1p_1 + x_2p_2 + \cdots + x_kp_k.$
Since $p_1 + p_2 + \cdots + p_k = 1,$ the expected value is the weighted sum of the $x_i$ values, with the probabilities $p_i$ as the weights. If all outcomes $x_i$ are equiprobable (that is, $p_1 = p_2 = \cdots = p_k$), then the weighted average turns into the simple average. On the other hand, if the outcomes $x_i$ are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others.

Examples

*Let $X$ represent the outcome of a roll of a fair six-sided die. More specifically, $X$ will be the number of pips showing on the top face of the die after the toss. The possible values for $X$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of $\tfrac{1}{6}$. The expectation of $X$ is
:: $\operatorname{E}[X] = 1\cdot\frac16 + 2\cdot\frac16 + 3\cdot\frac16 + 4\cdot\frac16 + 5\cdot\frac16 + 6\cdot\frac16 = 3.5.$
:If one rolls the die $n$ times and computes the average (arithmetic mean) of the results, then as $n$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers (illustrated in the sketch following these examples).
*The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable $X$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability $\tfrac{1}{38}$ in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
:: $\operatorname{E}[\,\text{gain from }\$1\text{ bet}\,] = -\$1 \cdot \frac{37}{38} + \$35 \cdot \frac{1}{38} = -\$\frac{1}{19}.$
:That is, the bet of $1 stands to lose $\$\frac{1}{19}$ on average, so its expected value is $-\$\frac{1}{19}.$
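The finite-case definition translates directly into code. Here is a minimal Python sketch (not part of the original article; the helper name expected_value is illustrative) that checks both examples above and simulates the law of large numbers for the die:

```python
import random

def expected_value(outcomes, probs):
    """Probability-weighted average: the sum of x_i * p_i (finite case)."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probs))

# Fair die: E[X] = 3.5
print(expected_value(range(1, 7), [1 / 6] * 6))

# American roulette straight-up bet: win $35 w.p. 1/38, else lose $1
print(expected_value([35, -1], [1 / 38, 37 / 38]))  # ≈ -0.0526 = -$1/19

# Law of large numbers: the empirical mean of many die rolls
# almost surely converges to the expected value 3.5.
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))
```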
Countably infinite case

Intuitively, the expectation of a random variable taking values in a countable set of outcomes is defined analogously as the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value. However, convergence issues associated with the infinite sum necessitate a more careful definition. A rigorous definition first defines expectation of a non-negative random variable, and then adapts it to general random variables.

Let $X$ be a non-negative random variable with a countable set of outcomes $x_1, x_2, \ldots,$ occurring with probabilities $p_1, p_2, \ldots,$ respectively. Analogous to the finite case, the expected value of $X$ is then defined as the series
: $\operatorname{E}[X] = \sum_{i=1}^\infty x_i\, p_i.$
Note that since $x_i p_i \geq 0$, the infinite sum is well-defined and does not depend on the order in which it is computed. Unlike the finite case, the expectation here can be equal to infinity, if the infinite sum above increases without bound.

For a general (not necessarily non-negative) random variable $X$ with a countable number of outcomes, set $X^+(\omega)=\max(X(\omega),0)$ and $X^-(\omega)=-\min(X(\omega),0)$. By definition,
:$\operatorname{E}[X] = \operatorname{E}[X^+] - \operatorname{E}[X^-].$
As with non-negative random variables, $\operatorname{E}[X]$ can be finite or infinite; it is undefined exactly when $\operatorname{E}[X^+] = \operatorname{E}[X^-] = \infty$.

Examples

*Suppose $x_i = i$ and $p_i = \frac{k}{i \cdot 2^i}$ for $i = 1, 2, 3, \ldots$, where $k = \frac{1}{\ln 2}$ (with $\ln$ being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then, using the direct definition for non-negative random variables, we have
::$\operatorname{E}[X] = \sum_i x_i p_i = 1\left(\frac{k}{2}\right) + 2\left(\frac{k}{8}\right) + 3\left(\frac{k}{24}\right) + \dots = \frac{k}{2} + \frac{k}{4} + \frac{k}{8} + \dots = k.$
*An example where the expectation is infinite arises in the context of the St. Petersburg paradox. Let $x_i = 2^i$ and $p_i = \frac{1}{2^i}$ for $i = 1, 2, 3, \ldots$ (a numerical sketch of the diverging partial sums follows this list). Once again, since the random variable is non-negative, the expected value calculation gives
::$\operatorname{E}[X] = \sum_{i=1}^\infty x_i\,p_i = 2\cdot \frac{1}{2} + 4\cdot\frac{1}{4} + 8\cdot\frac{1}{8} + 16\cdot\frac{1}{16} + \cdots = 1 + 1 + 1 + 1 + \cdots = \infty.$
*For an example where the expectation is not well-defined, suppose the random variable $X$ takes values $1, -2, 3, -4, \ldots$ with respective probabilities $\frac{c}{1^2}, \frac{c}{2^2}, \frac{c}{3^2}, \frac{c}{4^2}, \ldots$, where $c = \frac{6}{\pi^2}$ is a normalizing constant that ensures the probabilities sum up to one.
:Then it follows that $X^+$ takes value $2k-1$ with probability $\frac{c}{(2k-1)^2}$ for $k = 1, 2, 3, \ldots$ and takes value $0$ with remaining probability. Similarly, $X^-$ takes value $2k$ with probability $\frac{c}{(2k)^2}$ for $k = 1, 2, 3, \ldots$ and takes value $0$ with remaining probability. Using the definition for non-negative random variables, one can show that both $\operatorname{E}[X^+] = \infty$ and $\operatorname{E}[X^-] = \infty$ (see harmonic series). Hence, the expectation of $X$ is not well-defined.
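To see the divergence in the St. Petersburg example concretely, here is a minimal Python sketch (illustrative, not from the original) of the partial sums of the series: each term $2^i \cdot 2^{-i}$ contributes exactly 1, so the partial sums grow without bound.

```python
def st_petersburg_partial_sum(n_terms: int) -> float:
    """Partial sum of sum_{i=1}^{n} 2^i * (1/2)^i; each term equals 1."""
    return sum((2 ** i) * (0.5 ** i) for i in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_sum(n))  # 10.0, 100.0, 1000.0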

Absolutely continuous case

If $X$ is a random variable with a probability density function $f(x)$, then the expected value is defined as the Lebesgue integral
: $\operatorname{E}[X] = \int_{\mathbb{R}} x f(x)\, dx,$
where the values on both sides are well defined or not well defined simultaneously.

Example. A random variable that has the Cauchy distribution has a density function, but the expected value is undefined since the distribution has large "tails".
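As a concrete check of this definition, the following sketch (assuming SciPy is available; the choice of an exponential density is purely illustrative) evaluates $\int x f(x)\,dx$ numerically:

```python
import math
from scipy.integrate import quad

# Exponential density f(x) = lam * exp(-lam * x) on [0, inf); E[X] = 1/lam.
lam = 2.0
value, _err = quad(lambda x: x * lam * math.exp(-lam * x), 0, math.inf)
print(value)  # ≈ 0.5
```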

General case

In general, if $X$ is a random variable defined on a probability space $(\Omega, \Sigma, \operatorname{P})$, then the expected value of $X$, denoted $\operatorname{E}[X]$, is defined as the Lebesgue integral
:$\operatorname{E}[X] = \int_\Omega X(\omega)\,d\operatorname{P}(\omega).$
As in the countable case, this integral is well defined whenever at least one of $\operatorname{E}[X^+]$ and $\operatorname{E}[X^-]$ is finite, and the definitions above arise as special cases.

Basic properties

The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like $X \geq 0$ is true almost surely when the probability measure attributes zero mass to the complementary event $\left\{X < 0\right\}$.
*For a general random variable $X$, define as before $X^+(\omega)=\max(X(\omega),0)$ and $X^-(\omega)=-\min(X(\omega),0)$, and note that $X=X^+-X^-$, with both $X^+$ and $X^-$ nonnegative. Then:
:$\operatorname{E}[X] = \begin{cases} \operatorname{E}[X^+] - \operatorname{E}[X^-] & \text{if } \operatorname{E}[X^+] < \infty \text{ and } \operatorname{E}[X^-] < \infty;\\ \infty & \text{if } \operatorname{E}[X^+] = \infty \text{ and } \operatorname{E}[X^-] < \infty;\\ -\infty & \text{if } \operatorname{E}[X^+] < \infty \text{ and } \operatorname{E}[X^-] = \infty;\\ \text{undefined} & \text{if } \operatorname{E}[X^+] = \infty \text{ and } \operatorname{E}[X^-] = \infty. \end{cases}$
*Let $\mathbf{1}_A$ denote the indicator function of an event $A$. Then
:$\operatorname{E}[\mathbf{1}_A] = 1\cdot\operatorname{P}(A) + 0\cdot\operatorname{P}(\Omega\setminus A) = \operatorname{P}(A).$
*Formulas in terms of CDF: If $F(x)$ is the cumulative distribution function of the probability measure $\operatorname{P},$ and $X$ is a random variable, then
:$\operatorname{E}[X] = \int_{\overline{\mathbb{R}}} x\,dF(x),$
:where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue–Stieltjes. Here, $\overline{\mathbb{R}} = [-\infty,+\infty]$ is the extended real line.
:Additionally,
:$\operatorname{E}[X] = \int\limits_0^\infty (1-F(x))\,dx - \int\limits^0_{-\infty} F(x)\,dx,$
:with the integrals taken in the sense of Lebesgue.
*Non-negativity: If $X \geq 0$ (a.s.), then $\operatorname{E}[X] \geq 0$.
*Linearity of expectation: The expected value operator (or expectation operator) $\operatorname{E}[\cdot]$ is linear in the sense that, for any random variables $X$ and $Y$, and a constant $a$,
::$\begin{align} \operatorname{E}[X + Y] &= \operatorname{E}[X] + \operatorname{E}[Y], \\ \operatorname{E}[aX] &= a \operatorname{E}[X], \end{align}$
:whenever the right-hand side is well-defined. This means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant.
Symbolically, for $N$ random variables $X_i$ and constants $a_i$ $(1\leq i \leq N)$, we have $\operatorname{E}\left[\sum_{i=1}^N a_i X_i\right] = \sum_{i=1}^N a_i\operatorname{E}[X_i]$ (a numerical illustration follows this list).
*Monotonicity: If $X\leq Y$ almost surely, and both $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ exist, then $\operatorname{E}[X]\leq\operatorname{E}[Y]$.
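The following small Python sketch (illustrative, not from the original) demonstrates linearity on simulated data; note that no independence between $X$ and $Y$ is required:

```python
import random

# X is a fair die roll; Y = 7 - X is completely dependent on X.
# Linearity still gives E[2X + 3Y] = 2 E[X] + 3 E[Y] = 2(3.5) + 3(3.5) = 17.5.
n = 100_000
xs = [random.randint(1, 6) for _ in range(n)]
ys = [7 - x for x in xs]
print(sum(2 * x + 3 * y for x, y in zip(xs, ys)) / n)  # ≈ 17.5
print(2 * sum(xs) / n + 3 * sum(ys) / n)  # identical, by linearity of sums
```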

Uses and applications

The expectation of a random variable plays an important role in a variety of contexts. For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their von Neumann–Morgenstern utility function. For a different example, in statistics, where one seeks estimates for unknown parameters based on available data, the estimate itself is a random variable. In such settings, a desirable criterion for a "good" estimator is that it is unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.

It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of ''X'' are called the moments of ''X''; the moments about the mean of ''X'' are expected values of powers of ''X'' − E[''X'']. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.
This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. $\operatorname{P}(X \in \mathcal{A}) = \operatorname{E}[\mathbf{1}_{\mathcal{A}}]$, where $\mathbf{1}_{\mathcal{A}}$ is the indicator function of the set $\mathcal{A}$.
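A minimal Monte Carlo sketch in Python (illustrative; the choice of event is hypothetical) estimating a probability as the expectation of an indicator:

```python
import random

# Estimate P(A) = E[1_A] for the event A = {sum of two fair dice is 7}.
n = 200_000
hits = sum(random.randint(1, 6) + random.randint(1, 6) == 7 for _ in range(n))
print(hits / n)  # ≈ 6/36 ≈ 0.1667
```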
In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose ''X'' is a discrete random variable with values ''x_i'' and corresponding probabilities ''p_i''. Now consider a weightless rod on which are placed weights, at locations ''x_i'' along the rod and having masses ''p_i'' (whose sum is one). The point at which the rod balances is E[''X''].

Expected values can also be used to compute the variance, by means of the computational formula for the variance (a numerical check follows below):
:$\operatorname{Var}(X) = \operatorname{E}[X^2] - (\operatorname{E}[X])^2.$

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator $\hat{A}$ operating on a quantum state vector $|\psi\rangle$ is written as $\langle\hat{A}\rangle = \langle\psi|\hat{A}|\psi\rangle$. The uncertainty in $\hat{A}$ can be calculated using the formula $(\Delta A)^2 = \langle\hat{A}^2\rangle - \langle\hat{A}\rangle^2$.
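A quick numerical check of the computational formula for the variance (a sketch, not from the original; the fair die is just an example distribution):

```python
def variance(outcomes, probs):
    """Var(X) = E[X^2] - (E[X])^2 for a finite distribution."""
    e_x = sum(x * p for x, p in zip(outcomes, probs))
    e_x2 = sum(x * x * p for x, p in zip(outcomes, probs))
    return e_x2 - e_x ** 2

# Fair die: E[X] = 3.5 and E[X^2] = 91/6, so Var(X) = 91/6 - 49/4 ≈ 2.9167
print(variance(range(1, 7), [1 / 6] * 6))
```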

Interchanging limits and expectation

In general, it is not the case that $\operatorname{E}[X_n] \to \operatorname{E}[X]$ even if $X_n \to X$ pointwise, so limits and expectation cannot be interchanged without additional conditions on the random variables. For example, let $U$ be a random variable distributed uniformly on $[0,1]$, and let $X_n = n \cdot \mathbf{1}\left\{U \leq \tfrac{1}{n}\right\}$. Then $X_n \to 0$ pointwise outside the null event $\{U = 0\}$, while $\operatorname{E}[X_n] = n \cdot \Pr\left(U \leq \tfrac{1}{n}\right) = 1$ for every $n$ (a simulation follows below). Sufficient conditions for interchanging limits and expectation are given by the monotone convergence theorem and the dominated convergence theorem.
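The following Python sketch (illustrative) simulates the example above, showing that $\operatorname{E}[X_n]$ stays near 1 for every $n$ even though $X_n \to 0$ almost surely:

```python
import random

def mean_xn(n: int, samples: int = 1_000_000) -> float:
    """Empirical mean of X_n = n * 1{U <= 1/n}, with U uniform on (0, 1)."""
    return sum(n if random.random() <= 1 / n else 0
               for _ in range(samples)) / samples

print(mean_xn(10), mean_xn(1_000))  # both ≈ 1.0
```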

Inequalities

There are a number of inequalities involving the expected values of functions of random variables. The following list includes some of the more basic ones.
*Markov's inequality: For a ''nonnegative'' random variable $X$ and $a > 0$, Markov's inequality states that
:$\operatorname{P}(X \geq a) \leq \frac{\operatorname{E}[X]}{a}.$
:An empirical check follows below.
*Bienaymé-Chebyshev inequality: Let $X$ be an arbitrary random variable with finite expected value $\operatorname{E}[X]$ and finite variance $\operatorname{Var}[X] \neq 0$. The Bienaymé-Chebyshev inequality states that, for any real number $k > 0$,
:$\operatorname{P}\bigl(|X - \operatorname{E}[X]| \geq k\sqrt{\operatorname{Var}[X]}\bigr) \leq \frac{1}{k^2}.$
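An empirical check of Markov's inequality in Python (a sketch; the two-dice variable is just an illustrative nonnegative example):

```python
import random

# X = sum of two fair dice (nonnegative), a = 10.
# Markov: P(X >= 10) <= E[X]/10, i.e. 6/36 ≈ 0.167 <= 7/10 = 0.7.
n = 100_000
xs = [random.randint(1, 6) + random.randint(1, 6) for _ in range(n)]
print(sum(x >= 10 for x in xs) / n)  # ≈ 0.167
print((sum(xs) / n) / 10)            # ≈ 0.7
```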

Expected values of common distributions

The following lists the expected values of some commonly occurring probability distributions:
*Bernoulli distribution with parameter $p$: $\operatorname{E}[X] = p$
*Binomial distribution $B(n, p)$: $\operatorname{E}[X] = np$
*Geometric distribution with success probability $p$ (number of trials until the first success): $\operatorname{E}[X] = \frac{1}{p}$
*Poisson distribution with rate $\lambda$: $\operatorname{E}[X] = \lambda$
*Uniform distribution on $[a, b]$: $\operatorname{E}[X] = \frac{a+b}{2}$
*Exponential distribution with rate $\lambda$: $\operatorname{E}[X] = \frac{1}{\lambda}$
*Normal distribution $N(\mu, \sigma^2)$: $\operatorname{E}[X] = \mu$
*Cauchy distribution: $\operatorname{E}[X]$ does not exist

Relationship with characteristic function

The probability density function $f_X$ of a scalar random variable $X$ is related to its characteristic function $\varphi_X$ by the inversion formula:
: $f_X(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\varphi_X(t) \, \mathrm{d}t.$
For the expected value of $g(X)$ (where $g:\mathbb{R}\to\mathbb{R}$ is a Borel function), we can use this inversion formula to obtain
:$\operatorname{E}[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} g(x)\left[ \int_{\mathbb{R}} e^{-itx}\varphi_X(t) \, \mathrm{d}t \right]\mathrm{d}x.$
If $\operatorname{E}[g(X)]$ is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,
:$\operatorname{E}[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} G(t) \varphi_X(t) \, \mathrm{d}t,$
where
:$G(t) = \int_{\mathbb{R}} g(x) e^{-itx} \, \mathrm{d}x$
is the Fourier transform of $g(x).$ The expression for $\operatorname{E}[g(X)]$ also follows directly from the Plancherel theorem.
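Relatedly, moments can be read off the characteristic function, e.g. $\operatorname{E}[X] = -i\,\varphi_X'(0)$. A small numerical sketch of this standard fact (the exponential distribution and the finite-difference step are assumptions for illustration, not from the original):

```python
# For X ~ Exponential(lam), phi_X(t) = lam / (lam - i*t) and E[X] = 1/lam.
lam = 2.0
phi = lambda t: lam / (lam - 1j * t)

h = 1e-6  # finite-difference step for approximating phi'(0)
d_phi = (phi(h) - phi(-h)) / (2 * h)
print((-1j * d_phi).real)  # ≈ 0.5 = 1/lam
```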

See also

*Center of mass
*Central tendency
*Chebyshev's inequality (an inequality on location and scale parameters)
*Conditional expectation
*Expectation (epistemic) (the general term)
*Expectation value (quantum mechanics)
*Law of total expectation—the expected value of the conditional expected value of ''X'' given ''Y'' is the same as the expected value of ''X''
*Moment (mathematics)
*Nonlinear expectation (a generalization of the expected value)
*Wald's equation—an equation for calculating the expected value of a random number of random variables
