In statistics, the method of moments is a method of estimation of population parameters. The same principle is used to derive higher moments like skewness and kurtosis. It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters.

The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Karl Pearson.


Method

Suppose that the problem is to estimate k unknown parameters \theta_1, \theta_2, \dots, \theta_k characterizing the distribution f_W(w; \theta) of the random variable W. Suppose the first k moments of the true distribution (the "population moments") can be expressed as functions of the \thetas:

: \begin{align} \mu_1 & \equiv \operatorname E[W] = g_1(\theta_1, \theta_2, \ldots, \theta_k), \\ \mu_2 & \equiv \operatorname E[W^2] = g_2(\theta_1, \theta_2, \ldots, \theta_k), \\ & \,\,\, \vdots \\ \mu_k & \equiv \operatorname E[W^k] = g_k(\theta_1, \theta_2, \ldots, \theta_k). \end{align}

Suppose a sample of size n is drawn, resulting in the values w_1, \dots, w_n. For j=1,\dots,k, let

: \widehat\mu_j = \frac{1}{n} \sum_{i=1}^n w_i^j

be the ''j''-th sample moment, an estimate of \mu_j. The method of moments estimator for \theta_1, \theta_2, \ldots, \theta_k, denoted by \widehat\theta_1, \widehat\theta_2, \dots, \widehat\theta_k, is defined as the solution (if there is one) to the equations:

: \begin{align} \widehat\mu_1 & = g_1(\widehat\theta_1, \widehat\theta_2, \ldots, \widehat\theta_k), \\ \widehat\mu_2 & = g_2(\widehat\theta_1, \widehat\theta_2, \ldots, \widehat\theta_k), \\ & \,\,\, \vdots \\ \widehat\mu_k & = g_k(\widehat\theta_1, \widehat\theta_2, \ldots, \widehat\theta_k). \end{align}
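As a concrete illustration (not part of the original article), the following Python sketch applies the recipe to a gamma distribution with shape a and scale s, for which \operatorname E[W] = as and \operatorname E[W^2] = a(a+1)s^2; solving the two moment equations gives closed-form estimators. The function names and the use of NumPy are illustrative choices.

```python
import numpy as np

def sample_moments(w, k):
    """First k raw sample moments of the data w."""
    w = np.asarray(w, dtype=float)
    return [np.mean(w**j) for j in range(1, k + 1)]

def gamma_method_of_moments(w):
    """Method-of-moments estimates of (shape, scale) for a gamma
    distribution, where E[W] = a*s and E[W^2] = a*(a+1)*s^2."""
    m1, m2 = sample_moments(w, 2)
    var = m2 - m1**2      # matches the population value a*s^2
    s_hat = var / m1      # scale estimate
    a_hat = m1 / s_hat    # shape estimate, equivalently m1**2 / var
    return a_hat, s_hat

rng = np.random.default_rng(0)
data = rng.gamma(shape=3.0, scale=2.0, size=100_000)
print(gamma_method_of_moments(data))  # roughly (3.0, 2.0)
```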


Advantages and disadvantages

The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased. It is an alternative to the method of maximum likelihood. However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be computed much more quickly and easily. Because they are so easy to compute, method-of-moments estimates may be used as a first approximation to the solutions of the likelihood equations, and successive improved approximations may then be found by the Newton–Raphson method; a sketch of this refinement appears below. In this way the method of moments can assist in finding maximum likelihood estimates.

In some cases, infrequent with large samples but not so infrequent with small samples, the estimates given by the method of moments fall outside of the parameter space (as shown in the example below); it does not make sense to rely on them then. That problem never arises in the method of maximum likelihood. Also, estimates by the method of moments are not necessarily sufficient statistics, i.e., they sometimes fail to take into account all relevant information in the sample.

When estimating other structural parameters (e.g., parameters of a utility function, instead of parameters of a known probability distribution), appropriate probability distributions may not be known, and moment-based estimates may be preferred to maximum likelihood estimation.
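As one illustration of the refinement mentioned above (again assuming a gamma model; this is a sketch, not a prescribed procedure), the likelihood equation for the gamma shape a reduces to \log a - \psi(a) = \log \bar w - \overline{\log w}, with the scale then given by \bar w / a. The sketch below starts Newton–Raphson from the method-of-moments shape estimate, using SciPy's digamma and polygamma for the derivatives:

```python
import numpy as np
from scipy.special import digamma, polygamma

def gamma_mle_from_mom(w, n_iter=20):
    """Newton-Raphson iteration for the gamma MLE, initialized at the
    method-of-moments estimate of the shape parameter."""
    w = np.asarray(w, dtype=float)
    m1, m2 = np.mean(w), np.mean(w**2)
    a = m1**2 / (m2 - m1**2)              # method-of-moments start
    c = np.log(np.mean(w)) - np.mean(np.log(w))
    for _ in range(n_iter):
        f = np.log(a) - digamma(a) - c    # likelihood equation residual
        f_prime = 1.0 / a - polygamma(1, a)
        a -= f / f_prime                  # Newton step
    return a, np.mean(w) / a              # shape and scale MLEs
```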


Examples

An example application of the method of moments is to estimate polynomial probability density distributions. In this case, an approximating polynomial of order N is defined on an interval [a, b]. The method of moments then yields a system of equations, whose solution involves the inversion of a Hankel matrix (Munkhammar, Mattsson & Rydén 2017).
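A sketch of that system, following the idea of the cited paper (the code below is an illustrative reconstruction, not the authors' implementation): matching the moments of p(x) = \sum_j c_j x^j on [a, b] to the sample moments gives the linear equations \sum_j c_j (b^{k+j+1} - a^{k+j+1})/(k+j+1) = \widehat\mu_k, whose coefficient matrix depends only on k + j and is therefore Hankel.

```python
import numpy as np

def polynomial_density_mom(w, degree, a, b):
    """Coefficients c_j of p(x) = sum_j c_j x^j on [a, b], chosen so
    that the polynomial's moments match the sample moments of w."""
    w = np.asarray(w, dtype=float)
    mu = np.array([np.mean(w**k) for k in range(degree + 1)])  # mu_0 = 1
    # H[k, j] = integral of x^(k+j) over [a, b]; its entries depend
    # only on k + j, so H is a Hankel matrix.
    H = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                   for j in range(degree + 1)]
                  for k in range(degree + 1)])
    return np.linalg.solve(H, mu)
```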


Proving the central limit theorem

Let X_1, X_2, \cdots be independent random variables with mean 0 and variance 1, and let S_n := \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i. We can compute the moments of S_n as

: \operatorname E[S_n^0] = 1, \quad \operatorname E[S_n^1] = 0, \quad \operatorname E[S_n^2] = 1, \quad \operatorname E[S_n^3] = 0, \quad \cdots

Explicit expansion shows that

: \operatorname E[S_n^{2k+1}] = 0; \quad \operatorname E[S_n^{2k}] = \frac{\binom{n}{k} (2k)!/2^k}{n^k} = \frac{n(n-1)\cdots(n-k+1)}{n^k} (2k-1)!!

where the numerator is the number of ways to select k distinct pairs of balls by picking one each from 2k buckets, each containing balls numbered from 1 to n. In the n \to \infty limit, all moments converge to those of a standard normal distribution. Further analysis then shows that this convergence in moments implies convergence in distribution. Essentially this argument was published by Chebyshev in 1887.
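The moment convergence can be checked numerically. In the sketch below (an illustrative simulation, with the X_i taken to be uniform on [-\sqrt 3, \sqrt 3] so that they have mean 0 and variance 1), the empirical moments of S_n approach the standard normal moments 0, 1, 0, 3, 0, 15.

```python
import numpy as np

rng = np.random.default_rng(1)

def moments_of_standardized_sum(n, k_max, n_rep=100_000):
    """Empirical moments E[S_n^k] for S_n = (X_1 + ... + X_n)/sqrt(n),
    with X_i uniform on [-sqrt(3), sqrt(3)] (mean 0, variance 1)."""
    x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n_rep, n))
    s = x.sum(axis=1) / np.sqrt(n)
    return [np.mean(s**k) for k in range(1, k_max + 1)]

# Standard normal moments for k = 1..6 are 0, 1, 0, 3, 0, 15.
print(np.round(moments_of_standardized_sum(n=50, k_max=6), 2))
```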


Uniform distribution

Consider the uniform distribution on the interval [a, b], U(a, b). If W \sim U(a, b) then we have

: \mu_1 = \operatorname E[W] = \frac{1}{2}(a+b)
: \mu_2 = \operatorname E[W^2] = \frac{1}{3}(a^2 + ab + b^2)

Solving these equations gives

: \widehat{a} = \mu_1 - \sqrt{3\left(\mu_2 - \mu_1^2\right)}
: \widehat{b} = \mu_1 + \sqrt{3\left(\mu_2 - \mu_1^2\right)}

Given a set of samples \{w_i\}, we can use the sample moments \widehat\mu_1 and \widehat\mu_2 in these formulae in order to estimate a and b. Note, however, that this method can produce inconsistent results in some cases. For example, the set of samples \{0, 0, 0, 0, 1\} results in the estimates \widehat{a} = \frac{1}{5} - \frac{2\sqrt{3}}{5}, \widehat{b} = \frac{1}{5} + \frac{2\sqrt{3}}{5}, even though \widehat{b} < 1 and so it is impossible for the set \{0, 0, 0, 0, 1\} to have been drawn from U(\widehat{a}, \widehat{b}) in this case.
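A short sketch of this failure mode (illustrative code, not from the article), applying the estimators above to the sample {0, 0, 0, 0, 1}:

```python
import numpy as np

def uniform_mom(w):
    """Method-of-moments estimates of (a, b) for U(a, b)."""
    w = np.asarray(w, dtype=float)
    m1, m2 = np.mean(w), np.mean(w**2)
    half_width = np.sqrt(3.0 * (m2 - m1**2))
    return m1 - half_width, m1 + half_width

a_hat, b_hat = uniform_mom([0, 0, 0, 0, 1])
print(a_hat, b_hat)  # b_hat ~ 0.89 < 1 = max of the sample, so the
                     # estimates fall outside the parameter space
```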


See also

* Generalized method of moments
* Decoding methods


References

* Munkhammar, J., Mattsson, L., Rydén, J. (2017). "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573