Law Of Iterated Expectations
The proposition in probability theory known as the law of total expectation, the law of iterated expectations (LIE), Adam's law, the tower rule, and the smoothing theorem, among other names, states that if X is a random variable whose expected value \operatorname{E}(X) is defined, and Y is any random variable on the same probability space, then :\operatorname{E}(X) = \operatorname{E}(\operatorname{E}(X \mid Y)), i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X. One special case states that if \{A_i\} is a finite or countable partition of the sample space, then :\operatorname{E}(X) = \sum_i \operatorname{E}(X \mid A_i)\operatorname{P}(A_i). Note: the conditional expected value E(X | Z) is a random variable whose value depends on the value of Z. The conditional expected value of X given the event Z = z is a function of z. If we write E(X | Z = z) = g(z), then the random variable E(X | Z) is g(Z). S ...
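A minimal Monte Carlo sketch of the identity (the two-regime mixture below is an illustrative choice of mine, not from the source):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Y picks one of two regimes with equal probability; X is drawn conditionally on Y.
    y = rng.integers(0, 2, size=n)
    x = np.where(y == 0,
                 rng.normal(1.0, 1.0, size=n),   # X | Y = 0  ~  N(1, 1)
                 rng.normal(5.0, 2.0, size=n))   # X | Y = 1  ~  N(5, 4)

    # Left-hand side: E(X) estimated directly.
    print(x.mean())                                          # ~ 3.0

    # Right-hand side: E(E(X | Y)) = sum over y of E(X | Y = y) P(Y = y).
    print(0.5 * x[y == 0].mean() + 0.5 * x[y == 1].mean())   # ~ 3.0 as well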


Probability Theory
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes (which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion). Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability ...


Indicator Function
In mathematics, an indicator function or a characteristic function of a subset of a set is a function that maps elements of the subset to one, and all other elements to zero. That is, if A is a subset of some set X, one has \mathbf{1}_A(x) = 1 if x \in A, and \mathbf{1}_A(x) = 0 otherwise, where \mathbf{1}_A is a common notation for the indicator function. Other common notations are I_A and \chi_A. The indicator function of A is the Iverson bracket of the property of belonging to A; that is, :\mathbf{1}_A(x) = [x \in A]. For example, the Dirichlet function is the indicator function of the rational numbers as a subset of the real numbers.

Definition

The indicator function of a subset A of a set X is a function \mathbf{1}_A \colon X \to \{0, 1\} defined as :\mathbf{1}_A(x) := \begin{cases} 1 &\text{if}~ x \in A, \\ 0 &\text{if}~ x \notin A. \end{cases} The Iverson bracket provides the equivalent notation [x \in A] to be used instead of \mathbf{1}_A(x). The function \mathbf{1}_A is sometimes denoted I_A, \chi_A, or even just A. Nota ...
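A one-line sketch in code (the set A below is my own example):

    # Indicator function of a set A, here the even integers in X = range(10).
    def indicator(A):
        return lambda x: 1 if x in A else 0

    A = {0, 2, 4, 6, 8}
    one_A = indicator(A)
    print([one_A(x) for x in range(10)])   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]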


Algebra Of Random Variables
The algebra of random variables, in statistics, provides rules for the symbolic manipulation of random variables, while avoiding delving too deeply into the mathematically sophisticated ideas of probability theory. Its symbolism allows the treatment of sums, products, ratios and general functions of random variables, as well as dealing with operations such as finding the probability distributions and the expectations (or expected values), variances and covariances of such combinations. In principle, the elementary algebra of random variables is equivalent to that of conventional non-random (or deterministic) variables. However, the changes occurring on the probability distribution of a random variable obtained after performing algebraic operations are not straightforward. Therefore, the behavior of the different operators of the probability distribution, such as expected values, variances, covariances, and moments, may be different from that observed for the random variable us ...
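A small sketch of the point (my own example): expectations obey ordinary algebra, while the distribution itself transforms non-trivially.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, size=1_000_000)
    y = rng.uniform(0, 1, size=1_000_000)

    # Expectations add like ordinary variables: E(X + Y) = E(X) + E(Y).
    print((x + y).mean(), x.mean() + y.mean())       # both ~ 1.0

    # But the distribution of X + Y is triangular, not uniform.
    density, _ = np.histogram(x + y, bins=4, range=(0, 2), density=True)
    print(density)                                   # ~ [0.25, 0.75, 0.75, 0.25]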


Christopher Sims
Christopher Albert Sims (born October 21, 1942) is an American econometrician and macroeconomist. He is currently the John J.F. Sherrerd '52 University Professor of Economics at Princeton University. Together with Thomas Sargent, he won the Nobel Memorial Prize in Economic Sciences in 2011. The award cited their "empirical research on cause and effect in the macroeconomy".

Biography

Sims was born in Washington, D.C., the son of Ruth Bodman (Leiserson), a Democratic politician and daughter of William Morris Leiserson, and Albert Sims, a State Department worker. His father was of English and Northern Irish descent, and his mother was of half Estonian Jewish and half English ancestry. His uncle was Yale economist Mark Leiserson. Sims earned his A.B. in mathematics from Harvard University magna cum laude in 1963 and his PhD in Economics from Harvard in 1968 under the supervision of Hendrik S. Houthakker. During the 1963-64 academic year, he was a graduate student at the Universi ...


Product Distribution
A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables X and Y, the distribution of the random variable Z that is formed as the product Z = XY is a product distribution.

Algebra of random variables

The product is one type of algebra for random variables: related to the product distribution are the ratio distribution, sum distribution (see List of convolutions of probability distributions) and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's 1979 book The Algebra of Random Variables.

Derivation for independent random variables

If X and Y are two independent, continuous random variables, described by probability density functions f_X and f_Y, then the probability density ...
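The excerpt breaks off before the density formula; for independent continuous X and Y it is the standard :f_Z(z) = \int f_X(x) \, f_Y(z/x) \, |x|^{-1} \, dx (stated here from general knowledge, not from the excerpt). A quick numerical cross-check with uniform factors:

    import numpy as np

    rng = np.random.default_rng(2)
    z = rng.uniform(0, 1, 1_000_000) * rng.uniform(0, 1, 1_000_000)

    # For X, Y ~ Uniform(0, 1), the formula gives f_Z(z) = -ln(z) on (0, 1),
    # so P(Z <= a) = a - a*ln(a).  Compare simulation with the exact value at a = 0.1.
    print((z <= 0.1).mean())            # ~ 0.330
    print(0.1 - 0.1 * np.log(0.1))      # 0.3302585...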




Law Of Total Cumulance
In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger ("The calculation of cumulants via conditioning", Annals of the Institute of Statistical Mathematics, Vol. 21 (1969), pp. 215-218). It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have :\kappa(X_1,\dots,X_n)=\sum_\pi \kappa(\kappa(X_i : i\in B \mid Y) : B \in \pi), where
* \kappa(X_1, \dots, X_n) is the joint cumulant of n random variables X_1, \dots, X_n, and
* the sum is over all partitions \pi of the set of indices, and
* "B \in \pi" means B runs through the whole list of "blocks ...
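For orientation (standard specializations, not quoted from the excerpt): for n = 1 the only partition of \{1\} is \{\{1\}\}, and the formula reduces to the law of total expectation, :\operatorname{E}(X) = \operatorname{E}(\operatorname{E}(X \mid Y)); for n = 2 the partitions are \{\{1,2\}\} and \{\{1\},\{2\}\}, and it reduces to the law of total covariance, :\operatorname{Cov}(X_1,X_2) = \operatorname{E}(\operatorname{Cov}(X_1,X_2 \mid Y)) + \operatorname{Cov}(\operatorname{E}(X_1 \mid Y), \operatorname{E}(X_2 \mid Y)).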


Law Of Total Covariance
In probability theory, the law of total covariance, covariance decomposition formula, or conditional covariance formula states that if X, Y, and Z are random variables on the same probability space, and the covariance of X and Y is finite, then :\operatorname{Cov}(X,Y)=\operatorname{E}(\operatorname{Cov}(X,Y \mid Z))+\operatorname{Cov}(\operatorname{E}(X\mid Z),\operatorname{E}(Y\mid Z)). The nomenclature in this article's title parallels the phrase law of total variance. Some writers on probability call this the "conditional covariance formula" (Sheldon M. Ross, A First Course in Probability, sixth edition, Prentice Hall, 2002, page 392) or use other names. Note: the conditional expected values E(X | Z) and E(Y | Z) are random variables whose values depend on the value of Z. The conditional expected value of X given the event Z = z is a function of z. If we write E(X | Z = z) = g(z) then the random ...
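A Monte Carlo sketch of the decomposition (the regimes and noise scales are my own illustrative choices):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000

    # Z selects a regime; X and Y share a common component within each regime.
    z = rng.integers(0, 2, size=n)
    shared = rng.normal(size=n)
    x = shared + 2.0 * z + rng.normal(scale=0.5, size=n)
    y = shared - 1.0 * z + rng.normal(scale=0.5, size=n)

    cov = lambda a, b: (a * b).mean() - a.mean() * b.mean()

    # E(Cov(X, Y | Z)): within-regime covariances, weighted by P(Z = k).
    within = sum((z == k).mean() * cov(x[z == k], y[z == k]) for k in (0, 1))
    # Cov(E(X | Z), E(Y | Z)): covariance of the conditional means.
    ex = np.where(z == 0, x[z == 0].mean(), x[z == 1].mean())
    ey = np.where(z == 0, y[z == 0].mean(), y[z == 1].mean())

    print(cov(x, y), within + cov(ex, ey))   # agree up to Monte Carlo error (~ 0.5)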


Law Of Total Variance
In probability theory, the law of total variance (also called the variance decomposition formula, the conditional variance formula, or the law of iterated variances, and also known as Eve's law) states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then :\operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]). In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM). These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation".

Formulation

There is a ...
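A minimal numerical check of Eve's law (the mixture parameters are my own):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 1_000_000

    # X is a group label; Y's mean and spread both depend on X.
    x = rng.integers(0, 2, size=n)
    y = np.where(x == 0, rng.normal(0, 1, size=n), rng.normal(3, 2, size=n))

    # "Unexplained" part: E(Var(Y | X)), the average within-group variance.
    unexplained = sum((x == k).mean() * y[x == k].var() for k in (0, 1))
    # "Explained" part: Var(E(Y | X)), the variance of the group means.
    explained = np.where(x == 0, y[x == 0].mean(), y[x == 1].mean()).var()

    print(y.var(), unexplained + explained)   # both ~ 4.75 (= 2.5 + 2.25)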


Law Of Total Probability
In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct events, hence the name.

Statement

The law of total probability is a theorem that states (Zwillinger, D., Kokoska, S. (2000), CRC Standard Probability and Statistics Tables and Formulae, CRC Press, page 31), in its discrete case, if \{B_n : n = 1, 2, 3, \ldots\} is a finite or countably infinite partition of a sample space (in other words, a set of pairwise disjoint events whose union is the entire sample space) and each event B_n is measurable, then for any event A of the same probability space: :P(A)=\sum_n P(A\cap B_n) or, alternatively, :P(A)=\sum_n P(A\mid B_n)P(B_n), where, for any n for which P(B_n) = 0, these terms are simply omitted from the summation, because P(A\mid B_n) is finite. The summation can be interpreted as a weighted average, and consequ ...
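A toy discrete instance (the two-urn numbers are invented for illustration):

    # Urn B_1 is chosen with probability 0.3 and holds 40% red balls;
    # urn B_2 is chosen with probability 0.7 and holds 10% red balls.
    p_B = {1: 0.3, 2: 0.7}
    p_A_given_B = {1: 0.4, 2: 0.1}

    # P(A) = sum over n of P(A | B_n) P(B_n)
    p_A = sum(p_A_given_B[k] * p_B[k] for k in p_B)
    print(p_A)   # 0.3 * 0.4 + 0.7 * 0.1 = 0.19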




Fundamental Theorem Of Poker
The fundamental theorem of poker is a principle first articulated by David Sklansky that he believes expresses the essential nature of poker as a game of decision-making in the face of incomplete information. The fundamental theorem is stated in common language, but its formulation is based on mathematical reasoning. Each decision made in poker can be analyzed in terms of the expected value of its payoff. The correct decision to make in a given situation is the decision that has the largest expected value. If a player could see all of their opponents' cards, they would always be able to calculate the correct decision with mathematical certainty, and the less they deviate from these correct decisions, the better their expected long-term results. This is certainly true heads-up, but Morton's theorem, in which an opponent's correct decision can benefit a player, may apply in multi-way pots.

An example

Suppose Bob is playing limit Texas hold 'em and is dealt ...
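A toy expected-value comparison in the spirit of the theorem (all numbers are hypothetical):

    # Folding has EV 0; calling risks `cost` to win the current `pot`.
    def ev_call(pot, cost, p_win):
        return p_win * pot - (1 - p_win) * cost

    # Hypothetical spot: pot of 90, call of 10, 20% chance of winning at showdown.
    print(ev_call(pot=90, cost=10, p_win=0.20))   # +10.0, so calling beats folding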


Pointwise Convergence
In mathematics, pointwise convergence is one of various senses in which a sequence of functions can converge to a particular function. It is weaker than uniform convergence, to which it is often compared.

Definition

Suppose that X is a set and Y is a topological space, such as the real or complex numbers or a metric space, for example. A net or sequence of functions \left(f_n\right), all having the same domain X and codomain Y, is said to converge pointwise to a given function f : X \to Y, often written as :\lim_{n\to\infty} f_n = f \ \text{pointwise}, if (and only if) \lim_{n\to\infty} f_n(x) = f(x) for every x in the domain of f. The function f is said to be the pointwise limit function of the \left(f_n\right). Sometimes, authors use the term bounded pointwise convergence when there is a constant C such that \forall n, x,\; |f_n(x)| \le C.
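A classic concrete instance (my own choice of example): f_n(x) = x^n on [0, 1] converges pointwise but not uniformly.

    import numpy as np

    # Pointwise limit: f(x) = 0 for x < 1, and f(1) = 1.
    xs = np.array([0.0, 0.5, 0.9, 0.99, 1.0])
    for n in (1, 10, 100, 1000):
        print(n, xs**n)   # each fixed x settles to its own limit, at its own pace

    # The convergence is not uniform: sup over [0, 1) of |x**n - 0| equals 1 for every n.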


Properties

This concept is often contra ...


Dominated Convergence Theorem
In measure theory, Lebesgue's dominated convergence theorem provides sufficient conditions under which almost everywhere convergence of a sequence of functions implies convergence in the L^1 norm. Its power and utility are two of the primary theoretical advantages of Lebesgue integration over Riemann integration. In addition to its frequent appearance in mathematical analysis and partial differential equations, it is widely used in probability theory, since it gives a sufficient condition for the convergence of expected values of random variables.

Statement

Lebesgue's dominated convergence theorem. Let (f_n) be a sequence of complex-valued measurable functions on a measure space (S, \Sigma, \mu). Suppose that the sequence converges pointwise to a function f and is dominated by some integrable function g in the sense that :|f_n(x)| \le g(x) for all numbers n in the index set of the sequence and all points x\in S. Then f is integrable (in the Lebesgue sense) and :\lim_{n\to\infty} \int_S ...
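A numerical illustration of the theorem's conclusion (my own example, using SciPy for the quadrature): take f_n(x) = x^n on [0, 1], dominated by the integrable function g \equiv 1, with pointwise limit 0 outside the null set \{1\}.

    from scipy.integrate import quad

    # The integrals of f_n must converge to the integral of the limit, which is 0.
    for n in (1, 10, 100, 1000):
        integral, _ = quad(lambda x: x**n, 0, 1)
        print(n, integral)   # exactly 1/(n + 1), tending to 0 as the theorem predicts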