Bhatia–Davis Inequality
In mathematics, the Bhatia–Davis inequality, named after Rajendra Bhatia and Chandler Davis, is an upper bound on the variance \sigma^2 of any bounded probability distribution on the real line.

Statement

Let ''m'' and ''M'' be the lower and upper bounds, respectively, for a set of real numbers a_1, ..., a_n with a particular probability distribution. Let ''μ'' be the expected value of this distribution. Then the Bhatia–Davis inequality states:

: \sigma^2 \le (M - \mu)(\mu - m). \,

Equality holds if and only if every a_j in the set of values is equal either to ''M'' or to ''m''.

Proof

Since m \leq A \leq M,

: 0 \leq \mathbb{E}[(M - A)(A - m)] = -\mathbb{E}[A^2] - mM + (m + M)\mu.

Thus,

: \sigma^2 = \mathbb{E}[A^2] - \mu^2 \leq -mM + (m + M)\mu - \mu^2 = (M - \mu)(\mu - m).

Extensions of the Bhatia–Davis inequality

If \Phi is a positive and unital linear mapping of a ''C*''-algebra \mathcal{A} into a ''C*''-algebra \mathcal{B}, and ''A'' is a self-adjoint element of \mathcal{A} ...
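
As a quick numerical check of the inequality, here is a short Python sketch (the helper and variable names are illustrative, not from any source): it verifies \sigma^2 \le (M - \mu)(\mu - m) on a random discrete distribution and confirms equality for a two-point distribution supported on {m, M}.

    import random

    def variance_and_bound(values, weights):
        """Return (sigma^2, (M - mu)(mu - m)) for a finite distribution."""
        total = sum(weights)
        probs = [w / total for w in weights]
        mu = sum(p * v for p, v in zip(probs, values))
        var = sum(p * (v - mu) ** 2 for p, v in zip(probs, values))
        m, M = min(values), max(values)
        return var, (M - mu) * (mu - m)

    # Generic distribution on [0, 1]: strict inequality (almost surely).
    vals = [random.random() for _ in range(5)]
    wts = [random.random() for _ in range(5)]
    var, bound = variance_and_bound(vals, wts)
    assert var <= bound + 1e-12

    # Two-point distribution on {m, M} = {0, 1}: equality.
    var, bound = variance_and_bound([0.0, 1.0], [0.3, 0.7])
    assert abs(var - bound) < 1e-12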


Rajendra Bhatia
Rajendra Bhatia (born 1952) is an Indian mathematician, author, and educator. He is currently a professor of mathematics at Ashoka University in Sonipat, Haryana, India.

Education

He studied at the University of Delhi, where he completed his BSc degree in physics and MSc degree in mathematics, then moved to the Indian Statistical Institute, Kolkata, where he completed his Ph.D. in 1982 under the probabilist K. R. Parthasarathy.

Research

Bhatia's research interests include matrix inequalities, the calculus of matrix functions, means of matrices, and connections between harmonic analysis, geometry, and matrix analysis. He is one of the eponyms of the Bhatia–Davis inequality.

Academic life

Rajendra Bhatia founded the series Texts and Readings in Mathematics in 1992 and the series Culture and History of Mathematics on the history of Indian mathematics. He has served on the editorial boards of several major international journals such as Linear Algebra and Its Applicat ...


Chandler Davis
Horace Chandler Davis (August 12, 1926 – September 24, 2022) was an American-Canadian mathematician, writer, educator, and political activist: "an internationally esteemed mathematician, a minor science fiction writer of note, and among the most celebrated political prisoners in the United States during the years of the high Cold War."

Background

Horace Chandler Davis, known as "Chan" to friends, was born on August 12, 1926, in Ithaca, New York, to Horace Bancroft Davis and Marian Rubins, both members of the Communist Party USA (CPUSA). He joined the Young Pioneers of America while in elementary school. Because of their politics, his parents moved frequently, and Davis spent a year of his childhood in Brazil. In 1942, at age 16, he received a Harvard National Scholarship. At Harvard, he joined the Astounding Science-Fiction Fanclub, whose members included John Michel, Frederik Pohl, Isaac Asimov, and Donald Wollheim. In 1943, Davis joined the Communist Party USA ...


Upper Bound
In mathematics, particularly in order theory, an upper bound or majorant of a subset ''S'' of some preordered set (''K'', ≤) is an element of ''K'' that is greater than or equal to every element of ''S''. Dually, a lower bound or minorant of ''S'' is defined to be an element of ''K'' that is less than or equal to every element of ''S''. A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively, bounded from below or minorized) by that bound. The terms bounded above (bounded below) are also used in the mathematical literature for sets that have upper (respectively, lower) bounds.

Examples

For example, 5 is a lower bound for the set ''S'' = {5, 8, 42, 34, 13934} (as a subset of the integers or of the real numbers, etc.), and so is 4. On the other hand, 6 is not a lower bound for ''S'' since it is not smaller than every element in ''S''. The set ''S'' = {42} has 42 as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for that ''S''. Every subset of the natural numbers has a lowe ...
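
The definitions above translate directly into code. The following Python sketch (illustrative names, reusing the example set from the text) tests whether a candidate element is an upper or lower bound of a finite set of reals.

    def is_upper_bound(x, S):
        """True if x >= s for every s in S."""
        return all(s <= x for s in S)

    def is_lower_bound(x, S):
        """True if x <= s for every s in S."""
        return all(x <= s for s in S)

    S = {5, 8, 42, 34, 13934}
    assert is_lower_bound(5, S) and is_lower_bound(4, S)   # 5 and 4 are lower bounds
    assert not is_lower_bound(6, S)                        # 6 is not (6 > 5)
    assert is_upper_bound(13934, S)                        # the maximum is an upper bound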


Variance
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by \sigma^2, s^2, \operatorname{Var}(X), V(X), or \mathbb{V}(X). An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for e ...
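
Since the blurb lists several equivalent characterizations, a short Python sketch may help (all names are illustrative): it computes the variance of a finite sample three ways, as the second central moment, as \mathbb{E}[X^2] - \mu^2, and as the covariance of X with itself, and checks that they agree.

    def mean(xs):
        return sum(xs) / len(xs)

    xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
    mu = mean(xs)
    v1 = mean([(x - mu) ** 2 for x in xs])          # second central moment
    v2 = mean([x * x for x in xs]) - mu ** 2        # E[X^2] - mu^2
    v3 = mean([(x - mu) * (x - mu) for x in xs])    # Cov(X, X)
    assert abs(v1 - v2) < 1e-12 and abs(v1 - v3) < 1e-12   # all equal 4.0 here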


Probability Distribution
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if ''X'' is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of ''X'' would take the value 0.5 (1 in 2 or 1/2) for ''X'' = heads, and 0.5 for ''X'' = tails (assuming that the coin is fair). Examples of random phenomena include the weather conditions at some future date, the height of a randomly selected person, the fraction of male students in a school, the results of a survey to be conducted, etc.

Introduction

A probability distribution is a mathematical description of the probabilities of events, subsets of the sample space. The sample space, often denoted by \Omega, is the set of all possible outcomes of a random phe ...
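
The coin-toss example can be made concrete in a few lines of Python (an illustrative sketch, not from the source): the distribution is a mapping from outcomes to probabilities, and event probabilities are obtained by summing over the outcomes in the event.

    coin = {"heads": 0.5, "tails": 0.5}           # distribution of X for a fair coin

    assert abs(sum(coin.values()) - 1.0) < 1e-12  # probabilities sum to 1

    def prob(event):
        """Probability of an event, i.e. a subset of the sample space."""
        return sum(coin[outcome] for outcome in event)

    assert prob({"heads"}) == 0.5                 # P(X = heads)
    assert prob({"heads", "tails"}) == 1.0        # the certain event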


Expected Value
In probability theory, the expected value (also called expectation, expectancy, mathematical expectation, mean, average, or first moment) is a generalization of the weighted average. Informally, the expected value is the arithmetic mean of a large number of independently selected outcomes of a random variable. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable ''X'' is often denoted by E(''X''), E[''X''], or E''X'', with E also often stylized as ''E'' or \mathbb{E}.

History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes ''in a fair way'' between two players, who have to end th ...
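
As a small illustration of the weighted-average definition (names are hypothetical), the following Python sketch computes the expected value of a fair six-sided die.

    def expected_value(outcomes):
        """Weighted average over (value, probability) pairs that sum to 1."""
        return sum(v * p for v, p in outcomes)

    die = [(k, 1 / 6) for k in range(1, 7)]        # fair six-sided die
    assert abs(expected_value(die) - 3.5) < 1e-12  # E[X] = (1 + ... + 6) / 6 = 3.5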


Popoviciu's Inequality On Variances
In probability theory, Popoviciu's inequality, named after Tiberiu Popoviciu, is an upper bound on the variance \sigma^2 of any bounded probability distribution. Let ''M'' and ''m'' be upper and lower bounds on the values of any random variable with a particular probability distribution. Then Popoviciu's inequality states:

: \sigma^2 \le \frac14 (M - m)^2.

Equality holds precisely when half of the probability is concentrated at each of the two bounds. Sharma ''et al''. have sharpened Popoviciu's inequality:

: \sigma^2 + \left( \frac{\mu_3}{2 \sigma} \right)^2 \le \frac14 (M - m)^2,

where \mu_3 is the third central moment. Popoviciu's inequality is weaker than the Bhatia–Davis inequality, which states

: \sigma^2 \le (M - \mu)(\mu - m),

where ''μ'' is the expectation of the random variable. In the case of an independent sample of ''n'' observations from a bounded probability distribution, the von Szokefalvi Nagy inequality gives a lower bound to the variance of the sample mean:

: \sigma^2 \ge \frac{(M - m)^2}{2n}.

Proof via the Bhatia–Davis inequa ...
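
A short Python sketch (illustrative, with hypothetical names) checks the chain of bounds numerically: the variance never exceeds the Bhatia–Davis bound, which in turn never exceeds Popoviciu's bound, since (M - \mu)(\mu - m) \le \frac14 (M - m)^2 by the AM–GM inequality applied to M - \mu and \mu - m.

    import random

    for _ in range(1000):
        values = [random.uniform(0.0, 10.0) for _ in range(6)]
        mu = sum(values) / 6                       # uniform weights for simplicity
        var = sum((v - mu) ** 2 for v in values) / 6
        m, M = min(values), max(values)
        bhatia_davis = (M - mu) * (mu - m)
        popoviciu = (M - m) ** 2 / 4
        assert var <= bhatia_davis + 1e-9          # Bhatia-Davis bound
        assert bhatia_davis <= popoviciu + 1e-9    # AM-GM: xy <= ((x + y) / 2)^2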


Cramér–Rao Bound
In estimation theory and statistics, the Cramér–Rao bound (CRB) expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter: the variance of any such estimator is at least as high as the inverse of the Fisher information. Equivalently, it expresses an upper bound on the precision (the inverse of variance) of unbiased estimators: the precision of any such estimator is at most the Fisher information. The result is named in honor of Harald Cramér and C. R. Rao, but it was also derived independently by Maurice Fréchet, Georges Darmois, Alexander Aitken, and Harold Silverstone. An unbiased estimator that achieves this lower bound is said to be (fully) ''efficient''. Such an estimator achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases no unbiased technique exists which achieves the bound. This may occur ...
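
As an illustration (a sketch under the assumption of i.i.d. Bernoulli sampling; names are illustrative): for ''n'' Bernoulli(''p'') observations the Fisher information is n / (p(1 - p)), and the sample mean is an unbiased, efficient estimator whose variance p(1 - p)/n attains the Cramér–Rao bound. The Monte Carlo check below confirms this numerically.

    import random

    def sample_mean_variance(p, n, trials=200_000):
        """Monte Carlo estimate of Var[sample mean of n Bernoulli(p) draws]."""
        estimates = [sum(random.random() < p for _ in range(n)) / n
                     for _ in range(trials)]
        mu = sum(estimates) / trials
        return sum((e - mu) ** 2 for e in estimates) / trials

    p, n = 0.3, 20
    cramer_rao = p * (1 - p) / n                 # 1 / (Fisher information) = 0.0105
    empirical = sample_mean_variance(p, n)
    assert abs(empirical - cramer_rao) < 5e-4    # agreement up to Monte Carlo noise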


Chapman–Robbins Bound
In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter. It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems. However, it is usually more difficult to compute. The bound was independently discovered by John Hammersley in 1950, and by Douglas Chapman and Herbert Robbins in 1951.

Statement

Let \Theta be the set of parameters for a family of probability distributions \{\mu_\theta : \theta \in \Theta\} on \Omega. For any two \theta, \theta' \in \Theta, let \chi^2(\mu_{\theta'}; \mu_\theta) be the \chi^2-divergence from \mu_{\theta'} to \mu_\theta. Then, for any estimator \hat g:

: \operatorname{Var}_\theta[\hat g] \ge \sup_{\theta' \neq \theta} \frac{\left( \mathbb{E}_{\theta'}[\hat g] - \mathbb{E}_\theta[\hat g] \right)^2}{\chi^2(\mu_{\theta'}; \mu_\theta)}.

A generalization to the multivariable case is: ...

Proof

By the variational representation of the chi-squared divergence,

: \chi^2(P; Q) = \sup_g \frac{\left( \mathbb{E}_P[g] - \mathbb{E}_Q[g] \right)^2}{\operatorname{Var}_Q[g]}.

Plug in g = \hat g, P = \mu_{\theta'}, Q = \mu_\theta, to obtain: \chi^2( ...
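
For a concrete case (an illustrative sketch, not from the article): take a single Bernoulli(\theta) observation and the estimator \hat g(X) = X. Then \chi^2(\mu_{\theta'}; \mu_\theta) = (\theta' - \theta)^2 / (\theta(1 - \theta)), so the ratio in the bound equals \theta(1 - \theta) for every \theta', and the Chapman–Robbins bound coincides with the Cramér–Rao bound.

    def chi2_bernoulli(t_prime, t):
        """chi^2-divergence between Bernoulli(t_prime) and Bernoulli(t)."""
        d = t_prime - t
        return d * d / t + d * d / (1 - t)

    def chapman_robbins_bound(t, alternatives):
        """sup over alternative parameters of (E_{t'}[X] - E_t[X])^2 / chi^2."""
        return max((tp - t) ** 2 / chi2_bernoulli(tp, t) for tp in alternatives)

    theta = 0.3
    alts = [t for t in (0.05 * k for k in range(1, 20)) if abs(t - theta) > 1e-9]
    bound = chapman_robbins_bound(theta, alts)
    assert abs(bound - theta * (1 - theta)) < 1e-12   # equals the CR bound here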


Statistical Inequalities
Statistics (from German ''Statistik'', "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments (Dodge, Y. (2006), ''The Oxford Dictionary of Statistical Terms'', Oxford University Press). When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An expe ...