Lehmann–Scheffé Theorem
In statistics, the Lehmann–Scheffé theorem is a prominent result tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The theorem is named after Erich Leo Lehmann and Henry Scheffé, who established it in two early papers. If ''T'' is a complete sufficient statistic for ''θ'' and E(''g''(''T'')) = ''τ''(''θ''), then ''g''(''T'') is the uniformly minimum-variance unbiased estimator (UMVUE) of ''τ''(''θ'').

Statement

Let \vec{X} = X_1, X_2, \dots, X_n be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) f(x;\theta), where \theta \in \Omega is a parameter in the parameter space. Suppose Y = u(\vec{X}) is a sufficient statistic for ''θ'', and let \{ f_Y(y;\theta) : \theta \in \Omega \} be a complete family. If \varphi satisfies \operatorname{E}[\varphi(Y)] = \theta, then \varphi(Y) is the unique MVUE of ''θ''.
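
As a concrete check of the theorem, the following simulation sketch (the Poisson setup, sample size, and estimator names are illustrative assumptions, not taken from the article) estimates τ(θ) = e^{-θ} from a Poisson(θ) sample. Both estimators below are unbiased, but only the second depends on the data through the complete sufficient statistic T = ΣX_i, so by Lehmann–Scheffé it is the UMVUE and should show the smaller variance.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, reps = 2.0, 10, 200_000          # illustrative values

    x = rng.poisson(theta, size=(reps, n))
    t = x.sum(axis=1)                          # complete sufficient statistic T

    naive = (x[:, 0] == 0).astype(float)       # crude unbiased estimator 1{X_1 = 0}
    umvue = (1 - 1/n) ** t                     # unbiased function of T: E[(1-1/n)^T] = e^{-theta}

    print("target e^-theta:", np.exp(-theta))
    print("naive  mean/var:", naive.mean(), naive.var())
    print("UMVUE  mean/var:", umvue.mean(), umvue.var())

Running it shows both estimators centered at e^{-θ}, with the variance of the statistic-based estimator markedly smaller, as the theorem guarantees.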


Statistics
Statistics (from German ''Statistik'', "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments (Dodge, Y. (2006) ''The Oxford Dictionary of Statistical Terms'', Oxford University Press). When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling as ...


Rao–Blackwell Theorem
In statistics, the Rao–Blackwell theorem, sometimes referred to as the Rao–Blackwell–Kolmogorov theorem, is a result which characterizes the transformation of an arbitrarily crude estimator into an estimator that is optimal by the mean-squared-error criterion or any of a variety of similar criteria. The Rao–Blackwell theorem states that if ''g''(''X'') is any kind of estimator of a parameter θ, then the conditional expectation of ''g''(''X'') given ''T''(''X''), where ''T'' is a sufficient statistic, is typically a better estimator of θ, and is never worse. Sometimes one can very easily construct a very crude estimator ''g''(''X''), and then evaluate that conditional expected value to get an estimator that is in various senses optimal. The theorem is named after Calyampudi Radhakrishna Rao and David Blackwell. The process of transforming an estimator using the Rao–Blackwell theorem can be referred to as Rao–Blackwellization. The transformed estimator is called the Rao–Blackwell estimator.
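
A minimal simulation sketch of Rao–Blackwellization (the Bernoulli setup and constants are illustrative assumptions, not from the article): the crude unbiased estimator g(X) = X_1 ignores all but one observation; conditioning on the sufficient statistic T = ΣX_i gives E[X_1 | T] = T/n, the sample mean, which has the same expectation but much smaller variance.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n, reps = 0.3, 20, 200_000              # illustrative values

    x = rng.binomial(1, p, size=(reps, n))
    crude = x[:, 0].astype(float)              # g(X) = X_1: unbiased but wasteful
    rb = x.mean(axis=1)                        # E[X_1 | sum X_i] = sample mean

    print("crude: mean %.4f  var %.5f" % (crude.mean(), crude.var()))
    print("RB   : mean %.4f  var %.5f" % (rb.mean(), rb.var()))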


Sankhya (journal)
''Sankhyā: The Indian Journal of Statistics'' is a quarterly peer-reviewed scientific journal on statistics published by the Indian Statistical Institute (ISI). It was established in 1933 by Prasanta Chandra Mahalanobis, founding director of the ISI, along the lines of Karl Pearson's ''Biometrika''; Mahalanobis was the founding editor-in-chief. Each volume of ''Sankhya'' consists of four issues: two of them form Series A, containing articles on theoretical statistics, probability theory, and stochastic processes, while the other two form Series B, containing articles on applied statistics, i.e. applied probability, applied stochastic processes, econometrics, and statistical computing. ''Sankhya'' is considered a "core journal" of statistics by the Current Index to Statistics.

Publication history

''Sankhya'' was first published in June 1933. In 1961, the journal split into two series: Series A, which focused on mathematical statistics, and Series B, which focused on stat ...


Complete Class Theorem
In statistics, completeness is a property of a statistic in relation to a model for a set of observed data. In essence, it ensures that the distributions corresponding to different values of the parameters are distinct. It is closely related to the idea of identifiability, but in statistical theory it is often found as a condition imposed on a sufficient statistic from which certain optimality results are derived.

Definition

Consider a random variable ''X'' whose probability distribution belongs to a parametric model ''P_θ'' parametrized by ''θ''. Say ''T'' is a statistic; that is, the composition of a measurable function with a random sample ''X''_1, ..., ''X''_n. The statistic ''T'' is said to be complete for the distribution of ''X'' if, for every measurable function ''g'':

:\text{if } \operatorname{E}_\theta(g(T)) = 0 \text{ for all } \theta, \text{ then } \mathbf{P}_\theta(g(T) = 0) = 1 \text{ for all } \theta.

The statistic ''T'' is said to be boundedly complete for the distribution of ''X'' if this implication holds for every bounded measurable function ''g''.
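
As a small symbolic sketch of the definition (assuming the sympy library; the binomial family and n = 3 are illustrative choices), one can verify that T ~ Binomial(n, p) is complete: E_p[g(T)] is a polynomial in p, and requiring it to vanish for every p forces every value g(t) to be zero.

    import sympy as sp

    n = 3                                       # illustrative sample size
    p = sp.symbols('p')
    g = sp.symbols('g0:%d' % (n + 1))           # unknown values g(0), ..., g(n)

    # E_p[g(T)] for T ~ Binomial(n, p), expanded as a polynomial in p
    expectation = sp.expand(sum(g[t] * sp.binomial(n, t) * p**t * (1 - p)**(n - t)
                                for t in range(n + 1)))

    # If the polynomial vanishes for all p, every coefficient is zero,
    # which forces g(0) = ... = g(n) = 0, i.e. T is complete.
    coeffs = sp.Poly(expectation, p).all_coeffs()
    print(sp.solve(coeffs, g))                  # {g0: 0, g1: 0, g2: 0, g3: 0}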


Loss Function
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, a loss function is typically used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. ...
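
To make the role of the loss concrete, here is a small sketch (numpy and scipy assumed; the exponential sample is an arbitrary illustrative choice) showing that the optimal point estimate depends on the chosen loss: squared-error loss is minimized by the sample mean, absolute-error loss by the sample median.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    data = rng.exponential(scale=2.0, size=10_000)     # skewed illustrative sample

    sq_loss = lambda a: np.mean((data - a) ** 2)       # squared-error loss
    abs_loss = lambda a: np.mean(np.abs(data - a))     # absolute-error loss

    print("argmin squared loss :", minimize_scalar(sq_loss).x, " mean  :", data.mean())
    print("argmin absolute loss:", minimize_scalar(abs_loss).x, " median:", np.median(data))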


Equivariant Estimator
In statistics, the concept of being an invariant estimator is a criterion that can be used to compare the properties of different estimators for the same quantity. It is a way of formalising the idea that an estimator should have certain intuitively appealing qualities. Strictly speaking, "invariant" would mean that the estimates themselves are unchanged when both the measurements and the parameters are transformed in a compatible way, but the meaning has been extended to allow the estimates to change in appropriate ways with such transformations. The term equivariant estimator is used in formal mathematical contexts that include a precise description of the relation of the way the estimator changes in response to changes to the dataset and parameterisation: this corresponds to the use of "equivariance" in more general mathematics.

General setting

Background

In statistical inference, there are several approaches to estimation theory that can be used to decide immediately what estimators should be used ...
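
A quick numerical illustration of equivariance in the location case (the data and shift below are arbitrary assumptions): an estimator δ of a location parameter is location-equivariant if δ(x + c) = δ(x) + c for every shift c, a property the sample mean and the sample median both satisfy.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(5.0, 1.0, size=100)         # illustrative sample
    c = 7.3                                    # arbitrary location shift

    for name, d in [("mean", np.mean), ("median", np.median)]:
        lhs, rhs = d(x + c), d(x) + c
        print(f"{name}: delta(x + c) = {lhs:.6f}, delta(x) + c = {rhs:.6f}")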


Scale Parameter
In probability theory and statistics, a scale parameter is a special kind of numerical parameter of a parametric family of probability distributions. The larger the scale parameter, the more spread out the distribution.

Definition

If a family of probability distributions is such that there is a parameter ''s'' (and other parameters ''θ'') for which the cumulative distribution function satisfies

:F(x;s,\theta) = F(x/s;1,\theta),

then ''s'' is called a scale parameter, since its value determines the "scale" or statistical dispersion of the probability distribution. If ''s'' is large, then the distribution will be more spread out; if ''s'' is small then it will be more concentrated. If the probability density exists for all values of the complete parameter set, then the density (as a function of the scale parameter only) satisfies

:f_s(x) = f(x/s)/s,

where ''f'' is the density of a standardized version of the density, i.e. f(x) \equiv f_1(x). An estimator of a scale parameter is called an estimator of scale.
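
A one-line numerical check of the density identity above (scipy assumed; the normal family and s = 2.5 are illustrative choices):

    import numpy as np
    from scipy import stats

    s = 2.5                                     # illustrative scale parameter
    x = np.linspace(-4.0, 4.0, 9)

    lhs = stats.norm.pdf(x, scale=s)            # f_s(x): density with scale s
    rhs = stats.norm.pdf(x / s) / s             # f(x/s)/s: standardized density, rescaled
    print(np.allclose(lhs, rhs))                # True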


Uniformly Minimum-variance Unbiased Estimator
In statistics, a minimum-variance unbiased estimator (MVUE) or uniformly minimum-variance unbiased estimator (UMVUE) is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter. For practical statistics problems, it is important to determine the MVUE if one exists, since less-than-optimal procedures would naturally be avoided, other things being equal. This has led to substantial development of statistical theory related to the problem of optimal estimation. While combining the constraint of unbiasedness with the desirability metric of least variance leads to good results in most practical settings (making the MVUE a natural starting point for a broad range of analyses), a targeted specification may perform better for a given problem; thus, the MVUE is not always the best stopping point.

Definition

Consider estimation of g(\theta) based on data X_1, X_2, \ldots, X_n i.i.d. from some member of a family of densities p_\theta, \theta \in \Omega, where \Omega is the parameter space. ...
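
A simulation sketch of the definition (the Uniform(0, θ) family and the constants below are illustrative assumptions): both estimators are unbiased for θ, but the one built from the sufficient statistic max X_i has uniformly smaller variance, and is in fact the UMVUE for this family.

    import numpy as np

    rng = np.random.default_rng(4)
    theta, n, reps = 3.0, 10, 200_000           # illustrative values

    x = rng.uniform(0, theta, size=(reps, n))
    mm = 2 * x.mean(axis=1)                     # unbiased method-of-moments estimator
    umvue = (n + 1) / n * x.max(axis=1)         # unbiased function of the sufficient maximum

    print("moment: mean %.4f  var %.5f" % (mm.mean(), mm.var()))
    print("UMVUE : mean %.4f  var %.5f" % (umvue.mean(), umvue.var()))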


Estimator
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean. There are point and interval estimators. Point estimators yield single-valued results. This is in contrast to an interval estimator, where the result would be a range of plausible values. "Single value" does not necessarily mean "single number", but includes vector-valued or function-valued estimators. ''Estimation theory'' is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistica ...
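
To illustrate the point/interval distinction, a short sketch (numpy and scipy assumed; the normal sample is an arbitrary choice): the sample mean is a point estimator of the population mean, while a t-based confidence interval is an interval estimator for the same estimand.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    sample = rng.normal(loc=10.0, scale=2.0, size=50)   # illustrative data

    point = sample.mean()                               # point estimate of the mean
    se = sample.std(ddof=1) / np.sqrt(sample.size)      # standard error
    lo, hi = stats.t.interval(0.95, df=sample.size - 1, loc=point, scale=se)

    print("point estimate   :", round(point, 3))
    print("95% interval est.:", (round(lo, 3), round(hi, 3)))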




Henry Scheffé
Henry Scheffé (April 11, 1907 – July 5, 1977) was an American statistician. He is known for the Lehmann–Scheffé theorem and Scheffé's method.

Education and career

Scheffé was born in New York City on April 11, 1907, the child of German immigrants. The family moved to Islip, New York, where Scheffé went to high school. He graduated in 1924, took night classes at Cooper Union, and a year later entered the Polytechnic Institute of Brooklyn. He transferred to the University of Wisconsin in 1928, and earned a bachelor's degree in mathematics there in 1931. Staying at Wisconsin, he married his wife Miriam in 1934 and finished his PhD in 1935, on the subject of differential equations, under the supervision of Rudolf Ernest Langer. After teaching mathematics at Wisconsin, Oregon State University, and Reed College, Scheffé moved to Princeton University in 1941. At Princeton, he began working in statistics instead of in pure mathematics, and assisted the U.S. war effort as a ...


Erich Leo Lehmann
Erich Leo Lehmann (20 November 1917 – 12 September 2009) was a German-born American statistician who made major contributions to nonparametric hypothesis testing. He is one of the eponyms of the Lehmann–Scheffé theorem and of the Hodges–Lehmann estimator of the median of a population.

Early life

Lehmann was born in Strasbourg, Alsace-Lorraine, in 1917 to a family of Ashkenazi Jewish ancestry. He grew up in Frankfurt am Main, Germany, until the Machtergreifung in 1933, when his family fled to Switzerland to escape the Nazis. He graduated from high school in Zurich and studied mathematics for two years at Trinity College, Cambridge. Following that, he emigrated to the United States, arriving in New York in late 1940. He enrolled at the University of California, Berkeley, as a post-graduate student (albeit without a prior degree) in 1941.

Career

Lehmann obtained his MA in mathematics in 1942 and his PhD (under Jerzy Neyman) in 1946, at the University of California, Berkeley, whe ...