Pitman Closeness Criterion
In statistical theory, the Pitman closeness criterion, named after E. J. G. Pitman, is a way of comparing two candidate estimators of the same parameter. Under this criterion, estimator A is preferred to estimator B if the probability that estimator A is closer to the true value than estimator B is greater than one half. Here the meaning of ''closer'' is given by the absolute difference in the case of a scalar parameter, or by the Mahalanobis distance for a vector parameter.

References
*Pitman, E. (1937) "The “closest” estimates of statistical parameters". ''Mathematical Proceedings of the Cambridge Philosophical Society'', 33 (2), 212–222.
*Rukhin, A. (1996) "On the Pitman closeness criterion from the decision-theoretic point of view". ''Statistics & Decisions'', 14, 253–274.
*Peddada, D. S. (1985) "A short note on Pitman's measure of nearness". ''American Statistician'', 39, 298–299.
*Peddada, D. S. (1986) "Reply". ''American Statistician'', 40, 257.
*Na ...
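To illustrate the criterion defined above, the following is a minimal Monte Carlo sketch (not part of the article; the normal model, the sample sizes, and the mean-versus-median comparison are assumptions made purely for illustration). It estimates the probability that the sample mean lands closer to the true value than the sample median:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = 0.0                      # true (scalar) parameter value
    n_obs, n_reps = 25, 100_000      # observations per sample, simulated samples

    # Draw n_reps independent samples of size n_obs from N(theta, 1).
    x = rng.normal(loc=theta, scale=1.0, size=(n_reps, n_obs))

    # Apply the two candidate estimators to every simulated sample.
    err_mean = np.abs(x.mean(axis=1) - theta)
    err_median = np.abs(np.median(x, axis=1) - theta)

    # Pitman closeness: the mean is preferred if it is strictly closer to
    # theta more than half of the time.
    p = np.mean(err_mean < err_median)
    print(f"P(|mean - theta| < |median - theta|) ~ {p:.3f}")

Under this normal model the printed probability comes out well above one half, so the sample mean would be preferred to the sample median by the criterion.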


Statistical Theory
The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find a best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures. Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists of mathematical statistics, and is closely linked to probability theory, utility theory, and optimization.

Scope
Statistical theory provides an underlying rationale and a consistent basis for the choice of methodology used in ap ...


Estimator
In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean. There are point and interval estimators: point estimators yield single-valued results, whereas an interval estimator yields a range of plausible values. "Single value" does not necessarily mean "single number"; it includes vector-valued or function-valued estimators. ''Estimation theory'' is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistica ...
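A minimal sketch of the point-versus-interval distinction described above (not from the source; the data, the 95% level, and the normal-approximation interval are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(loc=10.0, scale=2.0, size=50)   # observed sample

    # Point estimator: a rule mapping the data to a single value.
    point_estimate = data.mean()

    # Interval estimator: a rule mapping the data to a range of plausible
    # values; here a normal-approximation 95% confidence interval.
    se = data.std(ddof=1) / np.sqrt(len(data))
    interval_estimate = (point_estimate - 1.96 * se, point_estimate + 1.96 * se)

    print(f"point estimate:    {point_estimate:.3f}")
    print(f"interval estimate: ({interval_estimate[0]:.3f}, {interval_estimate[1]:.3f})")

Note that the estimator is the rule (the expressions applied to data), while the printed numbers are the estimates; applied to new data, the same rules would produce different estimates.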


Mahalanobis Distance
The Mahalanobis distance is a measure of the distance between a point ''P'' and a distribution ''D'', introduced by P. C. Mahalanobis in 1936. Mahalanobis's definition was prompted by the problem of identifying the similarities of skulls based on measurements in 1927. It is a multi-dimensional generalization of the idea of measuring how many standard deviations away ''P'' is from the mean of ''D''. This distance is zero for ''P'' at the mean of ''D'' and grows as ''P'' moves away from the mean along each principal component axis. If each of these axes is re-scaled to have unit variance, then the Mahalanobis distance corresponds to standard Euclidean distance in the transformed space. The Mahalanobis distance is thus unitless, scale-invariant, and takes into account the correlations of the data set.

Definition
Given a probability distribution Q on \R^N, with mean \vec{\mu} = (\mu_1, \mu_2, \mu_3, \dots, \mu_N)^\mathsf{T} and positive-definite covariance matrix S, the Mahalanobis distance of a point \vec{x} = (x_1, x_2, x_3, \dots, x_N)^\mathsf{T} from Q is

d_M(\vec{x}, Q) = \sqrt{(\vec{x} - \vec{\mu})^\mathsf{T} S^{-1} (\vec{x} - \vec{\mu})}.
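A minimal numeric sketch of the definition above (the sample data are made up for illustration, and the mean and covariance are estimated from the sample rather than known):

    import numpy as np

    rng = np.random.default_rng(2)
    sample = rng.multivariate_normal(mean=[0.0, 0.0],
                                     cov=[[2.0, 0.8], [0.8, 1.0]], size=500)

    mu = sample.mean(axis=0)                              # estimated mean vector
    S_inv = np.linalg.inv(np.cov(sample, rowvar=False))   # inverse covariance

    def mahalanobis(x, mu, S_inv):
        """d_M(x) = sqrt((x - mu)^T S^{-1} (x - mu))."""
        d = np.asarray(x) - mu
        return float(np.sqrt(d @ S_inv @ d))

    print(mahalanobis([1.0, 1.0], mu, S_inv))  # some positive distance
    print(mahalanobis(mu, mu, S_inv))          # zero at the mean, as the text says

SciPy users can obtain the same quantity from scipy.spatial.distance.mahalanobis, which likewise takes the inverse covariance matrix as an argument.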


Journal Of The American Statistical Association
The ''Journal of the American Statistical Association'' (''JASA'') is the primary journal published by the American Statistical Association, the main professional body for statisticians in the United States. It is published four times a year, in March, June, September and December, by Taylor & Francis, Ltd. on behalf of the American Statistical Association. As a statistics journal, it publishes articles primarily focused on the application of statistics, statistical theory and methods in the economic, social, physical, engineering, and health sciences. The journal also includes reviews of academic books that are important to the advancement of the field. It had an impact factor of 2.063 in 2010, the tenth highest in the "Statistics and Probability" category of ''Journal Citation Reports''. In a 2003 survey of statisticians, the ''Journal of the American Statistical Association'' was ranked first, among all journals, for "Applications of Statistics" and second (after ''Annals of Statistics'') f ...


Biometrika
''Biometrika'' is a peer-reviewed scientific journal published by Oxford University Press for the Biometrika Trust. The editor-in-chief is Paul Fearnhead (Lancaster University). The principal focus of this journal is theoretical statistics. It was established in 1901 and originally appeared quarterly. It changed to three issues per year in 1977 but returned to quarterly publication in 1992.

History
''Biometrika'' was established in 1901 by Francis Galton, Karl Pearson, and Raphael Weldon to promote the study of biometrics. The history of ''Biometrika'' is covered by Cox (2001). The name of the journal was chosen by Pearson, but Francis Edgeworth insisted that it be spelt with a "k" and not a "c". Since the 1930s, it has been a journal for statistical theory and methodology. Galton's role in the journal was essentially that of a patron, and the journal was run by Pearson and Weldon, and after Weldon's death in 1906 by Pearson alone until he died in 1936. In the early days, the American ...


Sankhya (journal)
''Sankhyā: The Indian Journal of Statistics'' is a quarterly peer-reviewed scientific journal on statistics published by the Indian Statistical Institute (ISI). It was established in 1933 by Prasanta Chandra Mahalanobis, founding director of the ISI, along the lines of Karl Pearson's ''Biometrika''. Mahalanobis was the founding editor-in-chief. Each volume of ''Sankhya'' consists of four issues: two of them form Series A, containing articles on theoretical statistics, probability theory, and stochastic processes, whereas the other two form Series B, containing articles on applied statistics, i.e. applied probability, applied stochastic processes, econometrics, and statistical computing. ''Sankhya'' is considered a "core journal" of statistics by the Current Index to Statistics.

Publication history
''Sankhya'' was first published in June 1933. In 1961, the journal split into two series: Series A, which focused on mathematical statistics, and Series B, which focused on stat ...


Statistical Distance
In statistics, probability theory, and information theory, a statistical distance quantifies the distance between two statistical objects, which can be two random variables, two probability distributions or samples, or an individual sample point and a population or a wider sample of points. A distance between populations can be interpreted as measuring the distance between the two corresponding probability distributions, and hence such distances are essentially distances between probability measures. Where statistical distance measures relate to differences between random variables, these variables may have statistical dependence (Dodge, Y. (2003), entry for distance), and hence these distances are not directly related to measures of distances between probability measures. Again, a measure of distance between random variables may relate to the extent of dependence between them, rather than to their individual values. Statistical distance measures are not typically m ...
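One concrete example of a distance between probability distributions (an illustrative sketch, not taken from the source) is the total variation distance between two discrete distributions on the same finite support, TV(p, q) = (1/2) * sum_i |p_i - q_i|:

    import numpy as np

    def total_variation(p, q):
        """Total variation distance between two discrete probability
        distributions given as arrays of probabilities on a shared support."""
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return 0.5 * float(np.abs(p - q).sum())

    p = [0.5, 0.3, 0.2]
    q = [0.4, 0.4, 0.2]
    print(total_variation(p, q))   # 0.1

Total variation is a distance between probability measures in the sense of the text: it is symmetric in its arguments and zero exactly when the two distributions agree.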