Hopkins Statistic
Hopkins Statistic
The Hopkins statistic (introduced by Brian Hopkins and John Gordon Skellam) is a way of measuring the cluster tendency of a data set. It belongs to the family of sparse sampling tests. It acts as a statistical hypothesis test where the null hypothesis is that the data are generated by a Poisson point process and are thus uniformly randomly distributed. If individuals are aggregated, the value of the statistic approaches 0; if they are randomly distributed, it tends to 0.5.

Preliminaries

A typical formulation of the Hopkins statistic follows.

Let X be the set of n data points.
Generate a random sample \tilde{X} of m \ll n data points sampled without replacement from X.
Generate a set Y of m uniformly randomly distributed data points ...
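The steps above can be sketched in code. The following is a minimal sketch, not taken from the source: it uses the ratio H = Σw_i^d / (Σu_i^d + Σw_i^d), which matches the behaviour described above (values near 0 for aggregated data, near 0.5 for random data); some references use the complementary ratio, under which clustered data score near 1. The function name `hopkins`, the rule of thumb m = n/10, and sampling the uniform points from the bounding box of X are illustrative choices.

```python
import numpy as np

def hopkins(X, m=None, rng=None):
    """Hopkins statistic, H = sum(w^d) / (sum(u^d) + sum(w^d)).

    With this convention, aggregated (clustered) data yield values
    near 0 and randomly distributed data values near 0.5.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    if m is None:
        m = max(1, n // 10)              # m << n, a common rule of thumb
    # random sample of m real points, drawn without replacement
    idx = rng.choice(n, size=m, replace=False)
    sample = X[idx]
    # m uniformly distributed points in the bounding box of X
    lo, hi = X.min(axis=0), X.max(axis=0)
    Y = rng.uniform(lo, hi, size=(m, d))
    # u_i: distance from each uniform point to its nearest real point
    u = np.min(np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=2), axis=1)
    # w_i: distance from each sampled point to its nearest *other* real point
    dist = np.linalg.norm(sample[:, None, :] - X[None, :, :], axis=2)
    dist[np.arange(m), idx] = np.inf     # exclude each point itself
    w = dist.min(axis=1)
    return (w**d).sum() / ((u**d).sum() + (w**d).sum())

rng = np.random.default_rng(0)
uniform = rng.uniform(size=(500, 2))
clustered = np.vstack([rng.normal(0.2, 0.01, size=(250, 2)),
                       rng.normal(0.8, 0.01, size=(250, 2))])
print(f"uniform:   H = {hopkins(uniform, rng=1):.2f}")    # near 0.5
print(f"clustered: H = {hopkins(clustered, rng=1):.2f}")  # near 0
```

Raising the distances to the power d (the dimension) makes the statistic compare the volumes of the nearest-neighbour balls rather than the radii.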


John Gordon Skellam
John Gordon Skellam (1914-1979) was a statistician and ecologist who discovered the Skellam distribution. Skellam was born in Staffordshire. He was educated at Hanley High School, where he won several scholarships, including free admission to New College, Oxford. He was one of the most respected members of the British Region of the Biometric Society. In 1951, Skellam developed the reaction-diffusion model of invasion biology. The model describes the dynamics of a population that grows and spreads at the same time, and predicts that the invasion front advances at a constant speed. He illustrated it with the muskrat's spread after its introduction to Europe, where chance dispersal determines whether the species reaches sites in which it can grow. Skellam also provided a model that treats population size as a random variable at any time t; this stochastic form is much more flexible than earlier deterministic equations. See also: Skellam distribution ...
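The reaction-diffusion model described above combines Malthusian growth with diffusion, dn/dt = D d²n/dx² + r n, and its invasion front advances at the constant speed 2√(rD). A rough numerical sketch of that prediction follows; the parameters, grid, and detection threshold are illustrative choices, not Skellam's.

```python
import numpy as np

# Skellam-type model: Malthusian growth plus diffusion,
#   dn/dt = D d2n/dx2 + r n,
# whose invasion front advances at constant speed c = 2*sqrt(r*D).
D, r = 1.0, 1.0
dx, dt = 0.1, 0.001                      # D*dt/dx^2 = 0.1, stable
x = np.arange(-40.0, 40.0 + dx, dx)
n = np.zeros_like(x)
n[len(x) // 2] = 1.0 / dx                # point release at the origin

def front(n, x, theta=0.01):
    """Rightmost position where the density exceeds the threshold."""
    above = np.where(n > theta)[0]
    return x[above[-1]]

positions = {}
t = 0.0
for _ in range(int(10 / dt)):
    # explicit finite-difference step: diffusion plus exponential growth
    lap = (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx**2
    n = n + dt * (D * lap + r * n)
    n[0] = n[-1] = 0.0                   # absorbing boundaries
    t += dt
    for t_mark in (5.0, 10.0):
        if abs(t - t_mark) < dt / 2:
            positions[t_mark] = front(n, x)

speed = (positions[10.0] - positions[5.0]) / 5.0
print(f"measured front speed ~ {speed:.2f}, theory 2*sqrt(r*D) = {2*np.sqrt(r*D):.2f}")
```

The measured speed approaches the theoretical value 2 from below; the slow convergence is the known logarithmic correction to the front position.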



Cluster Tendency
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.

Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The a ...
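As a concrete instance of the "small distances between cluster members" notion, here is a minimal sketch of Lloyd's k-means algorithm; it is an illustrative example, not part of the source text, and the initialization and stopping rule are deliberately simple.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm: alternate nearest-centre assignment
    and centroid update until the centres stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2),
                           axis=1)
        # recompute each centre as the mean of its points
        # (keep the old centre if a cluster goes empty)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# demo on two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(100, 2)),
               rng.normal(5.0, 0.1, size=(100, 2))])
labels, centers = kmeans(X, 2)
print(np.round(centers, 1))
```

This instantiates only one of the cluster notions listed above; density-based or distribution-based algorithms formalize the others differently.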


