Pao-Lu Hsu
Pao-Lu Hsu or Xu Baolu (September 1, 1910 – December 18, 1970) was a Chinese mathematician noted for his work in probability theory and statistics.

Life and career

Pao-Lu Hsu was born in Beijing on September 1, 1910, with his ancestral home in Hangzhou, Zhejiang Province. He came from a prominent intellectual family, and in his childhood he received solid training in both traditional Chinese and modern Western cultures. He graduated from Tsinghua University in 1933, majoring in mathematics, and afterwards worked at Peking University as a teacher. In the meantime, he published a joint paper with Tsai-han Kiang (Jiang Zehan) on the numbers of nondegenerate critical points, which showed his solid mathematical foundation and research capability. In 1936, he went to University College London and spent four years studying mathematical statistics. During this period, combining his strong mathematical skills with advanced statistical ideas, he wrote a series of remarkable papers ...


Xu (surname)
Xu can refer to the following Chinese surnames, which are homographs when romanized using their Mandarin pronunciations:
* Xu (surname 徐)
* Xu (surname 許)
* Xu (surname 須)
The tones of these surnames differ in Mandarin, but if the tone diacritics are omitted, all of them are spelled Xu in pinyin and Hsü in the Wade–Giles system, or Hsu if the diaeresis is also omitted.


MacTutor History of Mathematics Archive
The MacTutor History of Mathematics archive is a website maintained by John J. O'Connor and Edmund F. Robertson and hosted by the University of St Andrews in Scotland. It contains detailed biographies of many historical and contemporary mathematicians, as well as information on famous curves and various topics in the history of mathematics. The History of Mathematics archive was an outgrowth of the Mathematical MacTutor system, a HyperCard database by the same authors, which won them the European Academic Software award in 1994. They founded the website in the same year; it has biographies of over 2,800 mathematicians and scientists. In 2015, O'Connor and Robertson won the Hirst Prize of the London Mathematical Society for their work. The citation for the Hirst Prize calls the archive "the most widely used and influential web-based resource in history of mathematics".

See also
* Mathematics Genealogy Project
* MathWorld
* PlanetMath


Hsu–Robbins–Erdős Theorem
In the mathematical theory of probability, the Hsu–Robbins–Erdős theorem states that if X_1, \ldots, X_n is a sequence of i.i.d. random variables with zero mean and finite variance and
: S_n = X_1 + \cdots + X_n, \,
then
: \sum_{n=1}^{\infty} P(|S_n| > \varepsilon n) < \infty
for every \varepsilon > 0. The result was proved by Pao-Lu Hsu and Herbert Robbins in 1947. This is an interesting strengthening of the classical strong law of large numbers in the direction of the Borel–Cantelli lemma. The idea of such a result is probably due to Robbins, but the method of proof is vintage Hsu. Hsu and Robbins further conjectured that the finiteness of the variance of X is also a necessary condition for \sum_{n=1}^{\infty} P(|S_n| > \varepsilon n) < \infty to hold. Two years later, Paul Erdős proved the conjecture.
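The rapid decay of these tail probabilities can be checked empirically. Below is a minimal Monte Carlo sketch (an illustration only, not the theorem's proof; the standard normal summands, the value of \varepsilon, and all other numbers are assumptions) estimating P(|S_n| > \varepsilon n) for growing n:

import numpy as np

# Estimate P(|S_n| > eps * n) for i.i.d. standard normal summands and
# watch it decay rapidly in n, consistent with the series being summable.
rng = np.random.default_rng(0)
eps = 0.5
trials = 100_000

for n in [10, 20, 40, 80, 160]:
    s_n = rng.standard_normal((trials, n)).sum(axis=1)  # S_n for each trial
    p_hat = np.mean(np.abs(s_n) > eps * n)              # estimated tail probability
    print(f"n={n:4d}  P(|S_n| > {eps}n) ~ {p_hat:.5f}")

For normal summands the exact tail is 2\Phi(-\varepsilon\sqrt{n}), which decays roughly like e^{-\varepsilon^2 n/2}, so the estimates fall off very quickly.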

Random Variable
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. It is a mapping or a function from possible outcomes in a sample space (e.g., the possible upper sides of a flipped coin, heads H and tails T, forming the set \{H, T\}) to a measurable space, often the real numbers (e.g., \{-1, 1\}, in which 1 corresponds to H and -1 corresponds to T). Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases it is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random var ...
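The coin-flip example above can be written out directly. The following minimal sketch (the uniform coin and the variable names are illustrative assumptions) treats the random variable as an explicit mapping from the sample space \{H, T\} to \{-1, 1\}:

import random

# A random variable as a function from outcomes to real numbers.
sample_space = ["H", "T"]
X = {"H": 1, "T": -1}  # the mapping X: {H, T} -> {-1, 1}

# Draw outcomes uniformly at random and apply the mapping.
outcomes = [random.choice(sample_space) for _ in range(10)]
values = [X[omega] for omega in outcomes]
print(outcomes)  # e.g. ['H', 'T', 'T', ...]
print(values)    # the corresponding values in {-1, 1}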


Characteristic Function (Probability Theory)
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables. In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a ...
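As an illustration of the definition \varphi_X(t) = E[e^{itX}], the following sketch (assuming a standard normal variable, for which the closed form \varphi(t) = e^{-t^2/2} is known) compares an empirical characteristic function, computed as a sample mean, with the exact value:

import numpy as np

# Empirical characteristic function E[exp(i t X)] versus the exact
# closed form exp(-t^2 / 2) for the standard normal distribution.
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)

for t in [0.0, 0.5, 1.0, 2.0]:
    phi_hat = np.mean(np.exp(1j * t * x))   # sample-mean estimate of E[exp(itX)]
    phi_exact = np.exp(-t**2 / 2)           # exact value for N(0, 1)
    print(f"t={t:.1f}  empirical={phi_hat.real:+.4f}{phi_hat.imag:+.4f}i  exact={phi_exact:.4f}")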




Multivariate Analysis
Multivariate statistics is a subdivision of statistics encompassing the simultaneous observation and analysis of more than one outcome variable. Multivariate statistics concerns understanding the different aims and background of each of the different forms of multivariate analysis, and how they relate to each other. The practical application of multivariate statistics to a particular problem may involve several types of univariate and multivariate analyses in order to understand the relationships between variables and their relevance to the problem being studied. In addition, multivariate statistics is concerned with multivariate probability distributions, in terms of both:
* how these can be used to represent the distributions of observed data;
* how they can be used as part of statistical inference, particularly where several different quantities are of interest to the same analysis.
Certain types of problems involving multivariate data, for example simple linear regression an ...
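As a minimal illustration of observing more than one outcome variable at once, the sketch below (the data are synthetic and purely illustrative) computes the two most basic multivariate summaries of a data matrix, the mean vector and the sample covariance matrix:

import numpy as np

# Rows are observations; columns are the simultaneously observed variables.
rng = np.random.default_rng(4)
data = rng.normal(size=(100, 3))         # 100 observations of 3 variables

mean_vector = data.mean(axis=0)          # per-variable sample means
cov_matrix = np.cov(data, rowvar=False)  # 3x3 sample covariance matrix
print(mean_vector)
print(cov_matrix)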


Linear Model
In statistics, the term linear model is used in different ways according to the context. The most common occurrence is in connection with regression models, and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible.

Linear regression models

For the regression case, the statistical model is as follows. Given a (random) sample (Y_i, X_{i1}, \ldots, X_{ip}), \, i = 1, \ldots, n, the relation between the observations Y_i and the independent variables X_{ij} is formulated as
: Y_i = \beta_0 + \beta_1 \phi_1(X_{i1}) + \cdots + \beta_p \phi_p(X_{ip}) + \varepsilon_i, \qquad i = 1, \ldots, n,
where \phi_1, \ldots, \phi_p may be nonlinear functions. In the above, the quantities \varepsilon_i are random variables representing errors in the ...
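The key point of the formulation above is that the model is linear in the coefficients \beta_j even when the \phi_j are nonlinear. The following sketch (the basis functions \phi_1(x) = x and \phi_2(x) = x^2, and all numbers, are illustrative assumptions) fits such a model by ordinary least squares:

import numpy as np

# Linear-in-the-coefficients model with nonlinear basis functions.
rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0, 0.3, n)  # synthetic data

# Design matrix: a column of ones for beta_0, then phi_1(x)=x and phi_2(x)=x**2.
design = np.column_stack([np.ones(n), x, x**2])
beta_hat, *_ = np.linalg.lstsq(design, y, rcond=None)   # ordinary least squares
print(beta_hat)  # close to the true coefficients [1.0, 2.0, -0.5]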


Likelihood-ratio Test
In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint. If the constraint (i.e., the null hypothesis) is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero. The likelihood-ratio test, also known as Wilks test, is the oldest of the three classical approaches to hypothesis testing, together with the Lagrange multiplier test and the Wald test. In fact, the latter two can be conceptualized as approximations to the likelihood-ratio test, and are asymptotically equivalent. In the case of comparing two models each of which has no unknown parameters, use o ...
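As a concrete illustration (the exponential model and all numbers are assumptions for the sketch, not part of the article), the following computes a likelihood-ratio statistic for the null hypothesis that exponential data have rate 1, against an unconstrained alternative whose maximum-likelihood rate is 1 over the sample mean, and evaluates it against the asymptotic chi-squared reference distribution given by Wilks' theorem:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.exponential(scale=1.0, size=500)  # data generated under H0 (rate 1)

def log_likelihood(rate, data):
    # Log of the exponential density product: n*log(rate) - rate*sum(data).
    return len(data) * np.log(rate) - rate * data.sum()

rate_mle = 1.0 / x.mean()                 # unconstrained maximum-likelihood estimate
lr_stat = -2 * (log_likelihood(1.0, x) - log_likelihood(rate_mle, x))
p_value = stats.chi2.sf(lr_stat, df=1)    # upper tail of chi-squared with 1 df
print(f"LR statistic = {lr_stat:.3f}, p-value = {p_value:.3f}")

A large statistic (small p-value) would indicate that the constrained and unconstrained likelihoods differ by more than sampling error, i.e., evidence against the null hypothesis.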


Statistical Hypothesis
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.

History

Early use

While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth.

Modern origins and early controversy

Modern significance testing is largely the product of Karl Pearson (p-value, Pearson's chi-squared test), William Sealy Gosset (Student's t-distribution), and Ronald Fisher ("null hypothesis", analysis of variance, "significance test"), while hypothesis testing was developed by Jerzy Neyman and Egon Pearson (son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but he soon grew disenchanted with the su ...


