Pao-Lu Hsu
Pao-Lu Hsu or Xu Baolu (September 1, 1910 – December 18, 1970) was a Chinese mathematician noted for his work in probability theory and statistics.

Life and career

Pao-Lu Hsu was born in Beijing on September 1, 1910, with his ancestral home in Hangzhou, Zhejiang Province. He came from a prominent intellectual family, and in his childhood he received solid training in both traditional Chinese and modern Western cultures. He graduated from Tsinghua University in 1933 with a degree in mathematics. After graduation, he worked at Peking University as a teacher. During this time he published a joint paper with Tsai-han Kiang (Jiang Zehan) on the numbers of nondegenerate critical points, which demonstrated his solid mathematical foundation and research ability. In 1936, he went to University College London and spent four years studying mathematical statistics. During this period, combining his strong mathematical skills with advanced statistical ideas, he wrote a series of remarkable papers ...


Xu (surname)
Xu can refer to the following Chinese surnames, which are homographs when romanized using their Mandarin pronunciations:
* Xu (surname 徐)
* Xu (surname 許)
* Xu (surname 須)
The tones of these surnames differ in Mandarin, but if the tone diacritics are omitted, all three are spelled Xu in pinyin and Hsü in the Wade–Giles system (a romanization of Mandarin Chinese developed from a mid-19th-century scheme of Thomas Francis Wade and given its completed form in Herbert A. Giles's ''Chinese–English Dictionary''), or Hsu if the diaeresis is also omitted.


MacTutor History of Mathematics Archive
The MacTutor History of Mathematics archive is a website maintained by John J. O'Connor and Edmund F. Robertson and hosted by the University of St Andrews in Scotland. It contains detailed biographies of many historical and contemporary mathematicians, as well as information on famous curves and various topics in the history of mathematics. The archive grew out of the Mathematical MacTutor system, a HyperCard database by the same authors, which won them the European Academic Software award in 1994. In the same year, they founded the website, which now has biographies of over 2,800 mathematicians and scientists. In 2015, O'Connor and Robertson won the Hirst Prize of the London Mathematical Society for their work. The citation for the Hirst Prize calls the archive "the most widely used and influential web-based resource in history of mathematics". See also: Mathematics Genealogy Project, MathWorld, PlanetMath ...


Hsu–Robbins–Erdős Theorem
In the mathematical theory of probability, the Hsu–Robbins–Erdős theorem states that if X_1, X_2, \ldots is a sequence of i.i.d. random variables with zero mean and finite variance and
: S_n = X_1 + \cdots + X_n, \,
then
: \sum_{n=1}^{\infty} P(|S_n| > \varepsilon n) < \infty
for every \varepsilon > 0. The result was proved by Pao-Lu Hsu and Herbert Robbins in 1947. It is an interesting strengthening of the classical strong law of large numbers in the direction of the Borel–Cantelli lemma. The idea of such a result is probably due to Robbins, but the method of proof is vintage Hsu. Hsu and Robbins further conjectured in the same paper that finiteness of the variance of X_1 is also a necessary condition for \sum_{n=1}^{\infty} P(|S_n| > \varepsilon n) < \infty to hold. Two years later, the famed mathematician Paul Erdős proved the conjecture.
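For reference, the combined Hsu–Robbins–Erdős result can be written as a single equivalence; this is the standard modern formulation, sketched here rather than quoted from the original papers:
: \mathbb{E}[X_1] = 0 \text{ and } \mathbb{E}[X_1^2] < \infty \quad \Longleftrightarrow \quad \sum_{n=1}^{\infty} P(|S_n| > \varepsilon n) < \infty \text{ for every } \varepsilon > 0.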

Random Variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes (e.g., the possible upper sides of a flipped coin, such as heads H and tails T) in a sample space (e.g., the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and -1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random var ...
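The coin-flip example above can be written out explicitly; this is a minimal sketch using the excerpt's own sample space and values:
: X : \{H, T\} \to \mathbb{R}, \qquad X(H) = 1, \quad X(T) = -1,
so that for a fair coin P(X = 1) = P(X = -1) = 1/2 and \mathbb{E}[X] = 0.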


Characteristic Function (Probability Theory)
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables. In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a ...
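Concretely, the characteristic function of a real-valued random variable X is defined (in the standard way; the normal-distribution example below is this note's illustration, not part of the excerpt) by
: \varphi_X(t) = \mathbb{E}\left[e^{itX}\right], \qquad t \in \mathbb{R},
and, for instance, a standard normal variable X \sim \mathcal{N}(0,1) has \varphi_X(t) = e^{-t^2/2}.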

Multivariate Analysis
Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied. In addition, multivariate statistics is concerned with multivariate probability distributions, in terms of both:
* how these can be used to represent the distributions of observed data;
* how they can be used as part of statistical inference, particularly where several different quantities are of interest to the same analysis.
Certain types of problems involving multivariate data, for example simple linear regression an ...
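As an illustrative sketch (the data below are invented, not from the excerpt), the most basic multivariate summaries, the sample mean vector and the sample covariance matrix, can be computed as follows:

    import numpy as np

    # Invented example data: 5 observations of 3 outcome variables
    # (rows are observations, columns are variables).
    data = np.array([
        [4.2, 1.1, 7.0],
        [3.9, 0.9, 6.5],
        [5.1, 1.4, 7.8],
        [4.7, 1.2, 7.2],
        [4.0, 1.0, 6.8],
    ])

    mean_vector = data.mean(axis=0)          # per-variable sample means
    cov_matrix = np.cov(data, rowvar=False)  # unbiased sample covariance (3 x 3)

    print(mean_vector)
    print(cov_matrix)

The off-diagonal entries of the covariance matrix are what make the analysis multivariate: they capture how the outcome variables vary together rather than separately.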


Linear Model
In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models, and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.

Linear regression models

For the regression case, the statistical model is as follows. Given a (random) sample (Y_i, X_{i1}, \ldots, X_{ip}), \, i = 1, \ldots, n, the relation between the observations Y_i and the independent variables X_{ij} is formulated as
: Y_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}) + \varepsilon_i \qquad i = 1, \ldots, n
where \phi_1, \ldots, \phi_p may be nonlinear functions. In the above, the quantities \varepsilon_i are random variables representing errors in the ...
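In matrix notation the same model, together with the ordinary least squares estimator usually paired with it (a standard supplement, not part of the excerpt), reads
: Y = X\beta + \varepsilon, \qquad \hat{\beta} = (X^\top X)^{-1} X^\top Y,
where the design matrix X has rows (1, \phi_1(X_{i1}), \ldots, \phi_p(X_{ip})). The model is "linear" because it is linear in the coefficients \beta, even when the \phi_j are nonlinear in the covariates.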


Likelihood-ratio Test
In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint. If the constraint (i.e., the null hypothesis) is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero. The likelihood-ratio test, also known as Wilks test, is the oldest of the three classical approaches to hypothesis testing, together with the Lagrange multiplier test and the Wald test. In fact, the latter two can be conceptualized as approximations to the likelihood-ratio test, and are asymptotically equivalent. In the case of comparing two models each of which has no unknown parameters, use o ...
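For concreteness, the test statistic in its usual logarithmic form (a standard formulation added here; the limiting distribution holds under regularity conditions) is
: \lambda_\text{LR} = -2 \ln \frac{\sup_{\theta \in \Theta_0} L(\theta)}{\sup_{\theta \in \Theta} L(\theta)},
where \Theta_0 \subset \Theta is the constrained (null) parameter set; under the null hypothesis, Wilks' theorem gives \lambda_\text{LR} an asymptotic \chi^2 distribution with degrees of freedom equal to the number of constraints imposed.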


Statistical Hypothesis
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.

History

Early use

While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth.

Modern origins and early controversy

Modern significance testing is largely the product of Karl Pearson (''p''-value, Pearson's chi-squared test), William Sealy Gosset (Student's t-distribution), and Ronald Fisher ("null hypothesis", analysis of variance, "significance test"), while hypothesis testing was developed by Jerzy Neyman and Egon Pearson (son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but soon grew disenchanted with the su ...


Gauss–Markov Process
Gauss–Markov stochastic processes (named after Carl Friedrich Gauss and Andrey Markov) are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes. A stationary Gauss–Markov process is unique up to rescaling; such a process is also known as an Ornstein–Uhlenbeck process. Gauss–Markov processes obey Langevin equations.

Basic properties

Every Gauss–Markov process ''X''(''t'') possesses the following three properties (C. B. Mehr and J. A. McFadden, "Certain Properties of Gaussian Processes and Their First-Passage Times", Journal of the Royal Statistical Society, Series B (Methodological), Vol. 27, No. 3 (1965), pp. 505–522):
# If ''h''(''t'') is a non-zero scalar function of ''t'', then ''Z''(''t'') = ''h''(''t'')''X''(''t'') is also a Gauss–Markov process.
# If ''f''(''t'') is a non-decreasing scalar function of ''t'', then ''Z''(''t'') = ''X''(''f''(''t'')) is also a Gauss–Markov process.
# If the process is non-degenerate and mea ...
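As a concrete instance of the statement that these processes obey Langevin equations, the stationary case (the Ornstein–Uhlenbeck process) can be sketched as follows; the parameter names \theta and \sigma are this note's convention:
: dX_t = -\theta X_t \, dt + \sigma \, dW_t, \qquad \theta > 0,
with ''W_t'' a standard Brownian motion, and the stationary covariance is \operatorname{Cov}(X_s, X_t) = \frac{\sigma^2}{2\theta} e^{-\theta |t-s|}.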


Variance
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from its average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by \sigma^2, s^2, \operatorname{Var}(X), V(X), or \mathbb{V}(X). An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for e ...
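In symbols, with the usual computational shortcut that follows by expanding the square (standard identities, added for concreteness):
: \operatorname{Var}(X) = \mathbb{E}\left[(X - \mathbb{E}[X])^2\right] = \mathbb{E}[X^2] - \left(\mathbb{E}[X]\right)^2.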


Behrens–Fisher Problem
In statistics, the Behrens–Fisher problem, named after Walter Behrens and Ronald Fisher, is the problem of interval estimation and hypothesis testing concerning the difference between the means of two normally distributed populations when the variances of the two populations are not assumed to be equal, based on two independent samples.

Specification

One difficulty with discussing the Behrens–Fisher problem and proposed solutions is that there are many different interpretations of what is meant by "the Behrens–Fisher problem". These differences involve not only what is counted as being a relevant solution, but even the basic statement of the context being considered.

Context

Let ''X''1, ..., ''X''''n'' and ''Y''1, ..., ''Y''''m'' be i.i.d. samples from two populations which both come from the same location–scale family of distributions. The scale parameters are assumed to be unknown and not necessarily equal, and the problem is to assess whether t ...
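One widely used approximate solution (Welch's test, named here for context; it is not part of the excerpt) compares the two sample means via
: t = \frac{\bar{X} - \bar{Y}}{\sqrt{s_X^2/n + s_Y^2/m}},
where s_X^2 and s_Y^2 are the sample variances; the statistic is referred to a ''t'' distribution whose degrees of freedom are estimated from the data (the Welch–Satterthwaite approximation).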