Q-Gaussian Distribution
The ''q''-Gaussian is a probability distribution arising from the maximization of the Tsallis entropy under appropriate constraints. It is one example of a Tsallis distribution. The ''q''-Gaussian is a generalization of the Gaussian in the same way that Tsallis entropy is a generalization of standard Boltzmann–Gibbs entropy or Shannon entropy. The normal distribution is recovered as ''q'' → 1. The ''q''-Gaussian has been applied to problems in the fields of statistical mechanics, geology, anatomy, astronomy, economics, finance, and machine learning. The distribution is often favored for its heavy tails in comparison to the Gaussian for 1 < ''q'' < 3. For ''q'' < 1 the ''q''-Gaussian distribution is the PDF of a bounded random variable. This makes the ''q''-Gaussian distribution more suitable in biology and other domains than ...
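As an illustrative sketch (not from the excerpt itself), the density for 1 ≤ ''q'' < 3 can be evaluated directly from the ''q''-exponential and the standard normalization constant; the function names below and the use of SciPy's gamma function are choices made here, with beta as an inverse-width parameter:

    import numpy as np
    from scipy.special import gamma

    def q_exponential(x, q):
        # q-deformed exponential: e_q(x) = [1 + (1 - q) x]_+^(1 / (1 - q)); e_1(x) = exp(x)
        if np.isclose(q, 1.0):
            return np.exp(x)
        return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

    def q_gaussian_pdf(x, q, beta=1.0):
        # density sqrt(beta) / C_q * e_q(-beta * x**2); this sketch covers 1 <= q < 3 only
        if np.isclose(q, 1.0):
            c_q = np.sqrt(np.pi)                      # q -> 1 recovers the ordinary Gaussian
        elif 1.0 < q < 3.0:
            c_q = (np.sqrt(np.pi) * gamma((3.0 - q) / (2.0 * (q - 1.0)))
                   / (np.sqrt(q - 1.0) * gamma(1.0 / (q - 1.0))))
        else:
            raise ValueError("this sketch only handles 1 <= q < 3")
        return np.sqrt(beta) / c_q * q_exponential(-beta * x * x, q)

    x = np.linspace(-5.0, 5.0, 5)
    print(q_gaussian_pdf(x, q=1.0))   # matches sqrt(1/pi) * exp(-x**2)
    print(q_gaussian_pdf(x, q=2.0))   # heavier tails; q = 2 is a rescaled Cauchy density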


[Figure: the PDF of the ''q''-Gaussian.]




Q-analog
In mathematics, a ''q''-analog of a theorem, identity or expression is a generalization involving a new parameter ''q'' that returns the original theorem, identity or expression in the limit as ''q'' → 1. Typically, mathematicians are interested in ''q''-analogs that arise naturally, rather than in arbitrarily contriving ''q''-analogs of known results. The earliest ''q''-analog studied in detail is the basic hypergeometric series, which was introduced in the 19th century. Exton, H. (1983), ''q-Hypergeometric Functions and Applications'', New York: Halstead Press; Chichester: Ellis Horwood. ''q''-analogues are most frequently studied in the mathematical fields of combinatorics and special functions. In these settings, the limit ''q'' → 1 is often formal, as ''q'' is often discrete-valued (for example, it may represent a prime power). ''q''-analogs find applications in a number of areas, including the study of fractals and multi-fractal measures, and expressions for the entropy of chaotic ...
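A standard worked example (added here for illustration) is the ''q''-analog of a positive integer ''n'', which collapses back to ''n'' in the limit:

    [n]_q = \frac{1 - q^n}{1 - q} = 1 + q + q^2 + \cdots + q^{n-1},
    \qquad \lim_{q \to 1} [n]_q = \underbrace{1 + 1 + \cdots + 1}_{n\ \text{terms}} = n.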



White Noise
In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density. The term is used, with this or similar meanings, in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, rather than to any specific signal. White noise draws its name from white light, although light that appears white generally does not have a flat power spectral density over the visible band. In discrete time, white noise is a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realization of white noise is a random shock. Depending on the context, one may also require that the samples be independent and have identical probability distribution (in other words independent and iden ...
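A minimal sketch of the discrete-time case (not from the excerpt): draw serially uncorrelated, zero-mean samples and check that the empirical autocorrelation at nonzero lags is near zero; the Gaussian choice and the helper name are assumptions made here.

    import numpy as np

    rng = np.random.default_rng(0)
    # Discrete white noise: i.i.d. zero-mean, finite-variance samples.
    # Gaussian draws are one convenient choice; any i.i.d. zero-mean distribution qualifies.
    x = rng.normal(loc=0.0, scale=1.0, size=10_000)

    def sample_autocorr(x, lag):
        # empirical autocorrelation at the given lag (biased normalization)
        centered = x - x.mean()
        return np.dot(centered[:-lag], centered[lag:]) / np.dot(centered, centered)

    print([round(sample_autocorr(x, k), 3) for k in range(1, 5)])   # all close to 0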


Probability Density Function
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a ''relative likelihood'' that the value of the random variable would be close to that sample. Probability density is the probability per unit length; in other words, while the ''absolute likelihood'' for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. In a more precise sense, the PDF is used to specify the probability of the random variable falling ''within a particular range of values'', as opposed to ...
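As a small worked example (added here), probabilities come from integrating the density over a range rather than from its value at a point; the standard exponential density is used purely as an assumed illustration:

    P(a \le X \le b) = \int_a^b f(x)\,dx,
    \qquad\text{e.g. for } f(x) = e^{-x},\ x \ge 0:\quad
    P(1 \le X \le 2) = e^{-1} - e^{-2} \approx 0.23.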



Box–Muller Transform
The Box–Muller transform, by George Edward Pelham Box and Mervin Edgar Muller, is a random number sampling method for generating pairs of independent, standard, normally distributed (zero expectation, unit variance) random numbers, given a source of uniformly distributed random numbers. The method was in fact first mentioned explicitly by Raymond E. A. C. Paley and Norbert Wiener in 1934. The Box–Muller transform is commonly expressed in two forms. The basic form as given by Box and Muller takes two samples from the uniform distribution on the interval [0, 1] and maps them to two standard, normally distributed samples. The polar form takes two samples from a different interval, [−1, +1], and maps them to two normally distributed samples without the use of sine or cosine functions. The Box–Muller transform was developed as a more computationally efficient alternative to the inverse transform sampling method. The ziggurat algorithm gives a more efficient ...
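A minimal sketch of the basic form described above (the function name and the seeded generator are choices made here): two uniform samples U1, U2 on (0, 1] are mapped to two independent standard normals via sqrt(-2 ln U1) paired with cos(2πU2) and sin(2πU2).

    import numpy as np

    def box_muller(n, rng=None):
        # basic Box-Muller form: pairs of Uniform(0, 1] draws -> pairs of standard normals
        rng = np.random.default_rng() if rng is None else rng
        u1 = 1.0 - rng.uniform(size=n)        # shift [0, 1) to (0, 1] so log(u1) stays finite
        u2 = rng.uniform(size=n)
        r = np.sqrt(-2.0 * np.log(u1))
        return r * np.cos(2.0 * np.pi * u2), r * np.sin(2.0 * np.pi * u2)

    z0, z1 = box_muller(100_000, np.random.default_rng(42))
    print(round(z0.mean(), 3), round(z0.std(), 3))   # approximately 0 and 1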



Compact Space
In mathematics, specifically general topology, compactness is a property that seeks to generalize the notion of a closed and bounded subset of Euclidean space by making precise the idea of a space having no "punctures" or "missing endpoints", i.e. that the space not exclude any ''limiting values'' of points. For example, the open interval (0, 1) would not be compact because it excludes the limiting values of 0 and 1, whereas the closed interval [0, 1] would be compact. Similarly, the space of rational numbers \mathbb{Q} is not compact, because it has infinitely many "punctures" corresponding to the irrational numbers, and the space of real numbers \mathbb{R} is not compact either, because it excludes the two limiting values +\infty and -\infty. However, the ''extended'' real number line ''would'' be compact, since it contains both infinities. There are many ways to make this heuristic notion precise. These ways usually agree in a metric space, but may not be equivalent in other topologic ...
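To make the "missing endpoint" example concrete (a small illustration added here): the open interval contains a sequence whose limiting value it excludes, which is exactly why it fails to be compact.

    x_n = \tfrac{1}{n} \in (0, 1) \ \text{for all } n \ge 2,
    \qquad \lim_{n \to \infty} x_n = 0 \notin (0, 1).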



Information Entropy
In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable X, which takes values in the alphabet \mathcal{X} and is distributed according to p\colon \mathcal{X} \to [0, 1], the entropy is H(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x) = \mathbb{E}[-\log p(X)], where \sum denotes the sum over the variable's possible values. The choice of base for \log, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base ''e'' gives "natural units" (nats), and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory d ...
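A minimal sketch of the formula above (the helper name and test distributions are choices made here), using the base-2 logarithm so the result is in bits:

    import numpy as np

    def shannon_entropy(p, base=2):
        # H(X) = -sum p(x) log p(x); zero-probability outcomes contribute nothing (0 log 0 := 0)
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log(p)) / np.log(base)

    print(shannon_entropy([0.5, 0.5]))    # fair coin: 1 bit
    print(shannon_entropy([0.9, 0.1]))    # biased coin: about 0.469 bits
    print(shannon_entropy([0.25] * 4))    # fair four-sided die: 2 bits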



Tsallis Statistics
The term Tsallis statistics usually refers to the collection of mathematical functions and associated probability distributions that were originated by Constantino Tsallis. Using that collection, it is possible to derive Tsallis distributions from the optimization of the Tsallis entropic form. A continuous real parameter ''q'' can be used to adjust the distributions, so that distributions which have properties intermediate to those of Gaussian and Lévy distributions can be created. The parameter ''q'' represents the degree of non-extensivity of the distribution. Tsallis statistics are useful for characterising complex, anomalous diffusion. Tsallis functions: the ''q''-deformed exponential and logarithmic functions were first introduced in Tsallis statistics in 1994. However, the ''q''-deformation is the Box–Cox transformation for q = 1 - \lambda, proposed by George Box and David Cox in 1964. The ''q''-exponential is a deformation of the exponential function using ...
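A small sketch of the ''q''-deformed pair (the function names and test values are choices made here): the ''q''-exponential and ''q''-logarithm are mutual inverses and reduce to exp and ln as ''q'' → 1.

    import numpy as np

    def q_exp(x, q):
        # e_q(x) = [1 + (1 - q) x]_+^(1 / (1 - q)); reduces to exp(x) as q -> 1
        if np.isclose(q, 1.0):
            return np.exp(x)
        return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

    def q_log(x, q):
        # ln_q(x) = (x^(1 - q) - 1) / (1 - q) for x > 0; reduces to ln(x) as q -> 1
        if np.isclose(q, 1.0):
            return np.log(x)
        return (x ** (1.0 - q) - 1.0) / (1.0 - q)

    x = np.array([-1.0, 0.0, 0.5, 2.0])
    print(np.allclose(q_log(q_exp(x, q=0.5), q=0.5), x))   # True: inverse functions
    print(q_exp(1.0, q=1.001), np.e)                        # close: q -> 1 recovers exp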




Nonextensive Entropy
Entropy is considered to be an extensive property, i.e., its value depends on the amount of material present. Constantino Tsallis has proposed a nonextensive entropy (Tsallis entropy), which is a generalization of the traditional Boltzmann–Gibbs entropy. The rationale behind the theory is that Boltzmann–Gibbs entropy leads to systems that have a strong dependence on initial conditions, whereas in reality most materials behave quite independently of initial conditions. Nonextensive entropy leads to nonextensive statistical mechanics, whose typical functions are power laws, instead of the traditional exponentials. See also: Tsallis entropy, a generalization of the standard Boltzmann–Gibbs entropy introduced in 1988 by Constantino Tsallis as a basis for generalizing standard statistical mechanics ...
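For reference (a standard form added here, with Boltzmann's constant set to 1), the Tsallis entropy, its ''q'' → 1 limit, and the pseudo-additivity rule that makes it "nonextensive" for independent subsystems A and B are:

    S_q = \frac{1}{q - 1}\left(1 - \sum_i p_i^{\,q}\right),
    \qquad \lim_{q \to 1} S_q = -\sum_i p_i \ln p_i,
    \qquad S_q(A + B) = S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B).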


Degrees Of Freedom (statistics)
In statistics, the number of degrees of freedom is the number of values in the final calculation of a statistic that are free to vary. Estimates of statistical parameters can be based upon different amounts of information or data. The number of independent pieces of information that go into the estimate of a parameter is called the degrees of freedom. In general, the degrees of freedom of an estimate of a parameter are equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself. For example, if the variance is to be estimated from a random sample of ''N'' independent scores, then the degrees of freedom is equal to the number of independent scores (''N'') minus the number of parameters estimated as intermediate steps (one, namely, the sample mean) and is therefore equal to ''N'' − 1. Mathematically, degrees of freedom is the number of dimensions of the domain o ...
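A tiny numerical sketch of the ''N'' − 1 rule (the sample values are invented for illustration): estimating the variance uses the sample mean as an intermediate step, so the sum of squared deviations is divided by ''N'' − 1 rather than ''N''.

    import numpy as np

    x = np.array([2.0, 4.0, 6.0, 8.0])               # N = 4 independent scores
    n = x.size
    mean = x.mean()                                   # one parameter estimated along the way
    var_unbiased = np.sum((x - mean) ** 2) / (n - 1)  # divide by the N - 1 degrees of freedom
    print(var_unbiased, np.var(x, ddof=1))            # both 6.666...; ddof encodes this correction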