Tweedie Distribution
Tweedie Distribution
In probability and statistics, the Tweedie distributions are a family of probability distributions that includes the purely continuous normal, gamma and inverse Gaussian distributions, the purely discrete scaled Poisson distribution, and the class of compound Poisson–gamma distributions, which have positive mass at zero but are otherwise continuous. Tweedie distributions are a special case of exponential dispersion models and are often used as distributions for generalized linear models. The Tweedie distributions were named by Bent Jørgensen after Maurice Tweedie, a statistician and medical physicist at the University of Liverpool, UK, who presented the first thorough study of these distributions in 1984. Definitions The (reproductive) Tweedie distributions are defined as a subfamily of (reproductive) exponential dispersion models (ED), with a special mean–variance relationship. A random variable ''Y'' is Tweedie distributed, ''Tw_p(μ, σ²)'', if Y \sim \mathrm{ED}(\mu, \sigma^2 ...
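The mean–variance relationship that characterizes Tweedie models, \operatorname{Var}(Y) = \sigma^2 \mu^p, can be checked numerically for 1 < p < 2 via the compound Poisson–gamma representation. The sketch below is illustrative only: the helper name `tweedie_cp_gamma` and the parameter values are our own, and the mapping to the Poisson rate and gamma shape/scale follows the standard reparameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def tweedie_cp_gamma(mu, sigma2, p, size, rng):
    """Simulate Tw_p(mu, sigma2) for 1 < p < 2 as a Poisson sum of gammas."""
    lam = mu ** (2 - p) / (sigma2 * (2 - p))    # Poisson mean (number of jumps)
    alpha = (2 - p) / (p - 1)                   # gamma shape of each jump
    beta = sigma2 * (p - 1) * mu ** (p - 1)     # gamma scale of each jump
    n = rng.poisson(lam, size)                  # number of gamma terms per draw
    return np.array([rng.gamma(alpha, beta, k).sum() for k in n])

mu, sigma2, p = 3.0, 0.5, 1.5
y = tweedie_cp_gamma(mu, sigma2, p, 50_000, rng)
print(y.mean())         # close to mu
print(y.var())          # close to sigma2 * mu**p
print((y == 0).mean())  # positive mass at zero: exp(-lam)
```

The positive fraction of exact zeros is what distinguishes the 1 < p < 2 range from the purely continuous members of the family.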



Probability
Probability is the branch of mathematics concerning numerical descriptions of how likely an event is to occur, or how likely it is that a proposition is true. The probability of an event is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility of the event and 1 indicates certainty."Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory", Alan Stuart and Keith Ord, 6th Ed. (2009).William Feller, ''An Introduction to Probability Theory and Its Applications'' (Vol 1), 3rd Ed. (1968), Wiley. The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written ...



Measure (mathematics)
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (length, area, volume) and other common notions, such as mass and probability of events. These seemingly distinct concepts have many similarities and can often be treated together in a single mathematical context. Measures are foundational in probability theory and integration theory, and can be generalized to assume negative values, as with electrical charge. Far-reaching generalizations (such as spectral measures and projection-valued measures) of measure are widely used in quantum physics and physics in general. The intuition behind this concept dates back to ancient Greece, when Archimedes tried to calculate the area of a circle. But it was not until the late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Const ...


Taylor's Law
Taylor's power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power-law relationship. It is named after the ecologist who first proposed it in 1961, Lionel Roy Taylor (1924–2007). Taylor's original name for this relationship was the law of the mean. The name ''Taylor's law'' was coined by Southwood in 1966. Definition This law was originally defined for ecological systems, specifically to assess the spatial clustering of organisms. For a population count Y with mean \mu and variance \operatorname{Var}(Y), Taylor's law is written : \operatorname{Var}(Y) = a\mu^b, where ''a'' and ''b'' are both positive constants. Taylor proposed this relationship in 1961, suggesting that the exponent ''b'' be considered a species-specific index of aggregation. This power law has subsequently been confirmed for many hundreds of species. Taylor's law has also been applied to assess the time ...
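In practice ''a'' and ''b'' are estimated by regressing log-variance on log-mean across groups of counts. The sketch below uses simulated negative binomial counts purely for illustration; the dispersion value and group means are arbitrary choices, not data from any real survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical clustered counts at sites with different mean densities;
# estimate Taylor's law Var(Y) = a * mu**b by log-log regression.
k = 2.0  # negative binomial dispersion (illustrative choice)
means, variances = [], []
for mu in [0.5, 1, 2, 5, 10, 20, 50]:
    counts = rng.negative_binomial(k, k / (k + mu), size=5000)
    means.append(counts.mean())
    variances.append(counts.var())

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
a = np.exp(log_a)
print(f"a = {a:.2f}, b = {b:.2f}")  # NB has Var = mu + mu**2/k, so 1 < b < 2
```

The fitted exponent falls strictly between 1 (Poisson-like) and 2 (gamma-like), which is exactly the compound Poisson–gamma range of the Tweedie family, one reason Taylor's law appears in this context.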



Inverse Gaussian Distribution
In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞). Its probability density function is given by : f(x;\mu,\lambda) = \sqrt{\frac{\lambda}{2\pi x^3}}\, \exp\biggl(-\frac{\lambda(x-\mu)^2}{2\mu^2 x}\biggr) for ''x'' > 0, where \mu > 0 is the mean and \lambda > 0 is the shape parameter. The inverse Gaussian distribution has several properties analogous to a Gaussian distribution. The name can be misleading: it is an "inverse" only in that, while the Gaussian describes a Brownian motion's level at a fixed time, the inverse Gaussian describes the distribution of the time a Brownian motion with positive drift takes to reach a fixed positive level. Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable. To indicate that a random variable ''X'' is inverse Gaussian-distributed with mean μ and sha ...
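The density above can be sanity-checked numerically: it should integrate to one, with mean \mu and variance \mu^3/\lambda. A minimal sketch, with arbitrary parameter values and a plain Riemann sum standing in for exact integration:

```python
import numpy as np

def invgauss_pdf(x, mu, lam):
    """Density of the inverse Gaussian (Wald) distribution for x > 0."""
    return np.sqrt(lam / (2 * np.pi * x**3)) * np.exp(
        -lam * (x - mu) ** 2 / (2 * mu**2 * x)
    )

mu, lam = 2.0, 3.0
x = np.linspace(1e-6, 60.0, 400_000)   # grid wide enough that the tail is negligible
dx = x[1] - x[0]
pdf = invgauss_pdf(x, mu, lam)

total = (pdf * dx).sum()
mean = (x * pdf * dx).sum()
var = ((x - mu) ** 2 * pdf * dx).sum()
print(total)  # close to 1
print(mean)   # close to mu
print(var)    # close to mu**3 / lam
```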


Stable Distribution
In probability theory, a distribution is said to be stable if a linear combination of two independent random variables with this distribution has the same distribution, up to location and scale parameters. A random variable is said to be stable if its distribution is stable. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it.B. Mandelbrot, The Pareto–Lévy Law and the Distribution of Income, International Economic Review 1960 https://www.jstor.org/stable/2525289 Of the four parameters defining the family, most attention has been focused on the stability parameter, \alpha (see panel). Stable distributions have 0 < \alpha \leq 2, with the upper bound corresponding to the normal distribution, and \alpha=1 to the Cauchy distribution.
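Stability is easy to illustrate numerically for \alpha = 1: the average of two independent standard Cauchy variables is again standard Cauchy. The check below compares empirical quantiles of the averaged sample against a fresh Cauchy sample; the quantile range and sample size are arbitrary choices. Central quantiles are used because a Cauchy has no mean or variance, so moment-based comparisons are meaningless.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 200_000
x1, x2 = rng.standard_cauchy(n), rng.standard_cauchy(n)
avg = (x1 + x2) / 2            # for alpha = 1, this is again standard Cauchy
ref = rng.standard_cauchy(n)   # reference sample from the same distribution

# compare central quantiles of the two samples
qs = np.linspace(0.05, 0.95, 19)
max_gap = np.max(np.abs(np.quantile(avg, qs) - np.quantile(ref, qs)))
print(max_gap)  # small, consistent with the two samples sharing a distribution
```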


Compound Poisson Distribution
In probability theory, a compound Poisson distribution is the probability distribution of the sum of a number of independent identically-distributed random variables, where the number of terms to be added is itself a Poisson-distributed variable. The result can be either a continuous or a discrete distribution. Definition Suppose that :N\sim\operatorname{Poisson}(\lambda), i.e., ''N'' is a random variable whose distribution is a Poisson distribution with expected value λ, and that :X_1, X_2, X_3, \dots are identically distributed random variables that are mutually independent and also independent of ''N''. Then the probability distribution of the sum of N i.i.d. random variables :Y = \sum_{n=1}^N X_n is a compound Poisson distribution. In the case ''N'' = 0, then this is a sum of 0 terms, so the value of ''Y'' is 0. Hence the conditional distribution of ''Y'' given that ''N'' = 0 is a degenerate distribution. The compound Poisson distribution is obtained by marginalising the j ...
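The construction above is straightforward to simulate, and the standard identities E[Y] = \lambda\,E[X] and \operatorname{Var}(Y) = \lambda\,E[X^2] can be verified empirically. The choice of exponential jumps below is purely illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

lam = 4.0   # Poisson rate for N
m = 1.5     # mean of each X_i (exponential, an illustrative choice)

n = rng.poisson(lam, 100_000)                          # number of terms per draw
y = np.array([rng.exponential(m, k).sum() for k in n])  # Y = X_1 + ... + X_N

print(y.mean())         # close to lam * E[X]   = lam * m
print(y.var())          # close to lam * E[X^2] = lam * 2 * m**2
print((y == 0).mean())  # close to exp(-lam): the atom at zero from N = 0
```

The atom at zero is visible directly: whenever N = 0 the empty sum yields exactly Y = 0, while Y is continuously distributed on (0, ∞) otherwise, the same mixed structure the Tweedie compound Poisson–gamma case exhibits.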




Domain (mathematical Analysis)
In mathematical analysis, a domain or region is a non-empty connected open set in a topological space, in particular any non-empty connected open subset of the real coordinate space R^n or the complex coordinate space C^n. This is a different concept than the domain of a function, though it is often used for that purpose, for example in partial differential equations and Sobolev spaces. The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the term ''domain'', some use the term ''region'', some use both terms interchangeably, and some define the two terms slightly differently; some avoid ambiguity by sticking with a phrase such as ''non-empty connected open subset''. One common convention is to define a ''domain'' as a connected open se ...



Attractor
In the mathematical field of dynamical systems, an attractor is a set of states toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed. In finite-dimensional systems, the evolving variable may be represented algebraically as an ''n''-dimensional vector. The attractor is a region in ''n''-dimensional space. In physical systems, the ''n'' dimensions may be, for example, two or three positional coordinates for each of one or more physical entities; in economic systems, they may be separate variables such as the inflation rate and the unemployment rate. If the evolving variable is two- or three-dimensional, the attractor of the dynamic process can be represented geometrically in two or three dimensions (as, for example, in the three-dimensional case depicted to the right). An attractor can be a point, a finite set of points, a curve, a manif ...


Convergence In Distribution
In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory and in its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence, and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution. Background "Stochastic convergence" formalizes the idea that a sequence of essentially random or ...



Differential Equation
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation) and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of d ...


Deviance (statistics)
In statistics, deviance is a goodness-of-fit statistic for a statistical model; it is often used for statistical hypothesis testing. It is a generalization of the idea of using the sum of squares of residuals (SSR) in ordinary least squares to cases where model-fitting is achieved by maximum likelihood. It plays an important role in exponential dispersion models and generalized linear models. Definition The unit deviance d(y,\mu) is a bivariate function that satisfies the following conditions: * d(y,y) = 0 * d(y,\mu) > 0 \quad\forall y \neq \mu The total deviance D(\mathbf{y},\hat{\boldsymbol\mu}) of a model with predictions \hat{\boldsymbol\mu} of the observation \mathbf{y} is the sum of its unit deviances: D(\mathbf{y},\hat{\boldsymbol\mu}) = \sum_i d(y_i, \hat\mu_i). The (total) deviance for a model ''M''0 with estimates \hat\mu = E[Y\mid\hat\theta_0], based on a dataset ''y'', may be constructed from its likelihood as (McCullagh and Nelder, 1989, p. 17): D(y,\hat\mu) = 2 \left(\log p(y\mid\hat\theta_s) - \log p(y\mid\hat\theta_0)\right ...
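The two constructions agree: summing unit deviances equals twice the log-likelihood gap between the saturated and fitted models. A minimal sketch for the Poisson family, assuming the standard Poisson unit deviance d(y,\mu) = 2\,(y\log(y/\mu) - (y-\mu)) with the y\log(y/\mu) term taken as 0 when y = 0; the helper names are ours.

```python
import numpy as np

def poisson_unit_deviance(y, mu):
    """Poisson unit deviance d(y, mu); y*log(y/mu) is taken as 0 when y = 0."""
    ratio = np.where(y > 0, y * np.log(np.where(y > 0, y, 1.0) / mu), 0.0)
    return 2.0 * (ratio - (y - mu))

def poisson_loglik(y, mu):
    """Poisson log-likelihood, dropping log(y!) (it cancels in the deviance)."""
    safe = np.where(mu > 0, mu, 1.0)  # placeholder; only reached when y == 0
    return np.sum(np.where(y > 0, y * np.log(safe), 0.0) - mu)

y = np.array([0.0, 2.0, 3.0, 5.0, 1.0])        # toy counts
mu_hat = np.full_like(y, y.mean())             # fitted intercept-only model

D_units = poisson_unit_deviance(y, mu_hat).sum()
D_loglik = 2 * (poisson_loglik(y, y) - poisson_loglik(y, mu_hat))  # saturated: mu_i = y_i
print(D_units, D_loglik)  # the two values coincide
```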


Natural Exponential Family
In probability and statistics, a natural exponential family (NEF) is a class of probability distributions that is a special case of an exponential family (EF). Definition Univariate case The natural exponential families (NEF) are a subset of the exponential families. A NEF is an exponential family in which the natural parameter ''η'' and the natural statistic ''T''(''x'') are both the identity. A distribution in an exponential family with parameter ''θ'' can be written with probability density function (PDF) : f_X(x\mid \theta) = h(x)\ \exp\Big(\ \eta(\theta) T(x) - A(\theta)\ \Big), where h(x) and A(\theta) are known functions. A distribution in a natural exponential family with parameter θ can thus be written with PDF : f_X(x\mid \theta) = h(x)\ \exp\Big(\ \theta x - A(\theta)\ \Big). [Note that slightly different notation is used by the originator of the NEF, Carl Morris.Morris C. (2006) "Natural exponential families", ''Encyclopedia of Statis ...
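A concrete NEF member is the Poisson distribution: with natural parameter \theta = \log\lambda, cumulant function A(\theta) = e^\theta, and h(x) = 1/x!, the NEF form recovers the familiar pmf. The sketch below checks this numerically; the function name is ours.

```python
import math

def poisson_pmf_nef(x, theta):
    """Poisson pmf in NEF form h(x) * exp(theta*x - A(theta)),
    with theta = log(lambda), A(theta) = exp(theta), h(x) = 1/x!."""
    h = 1.0 / math.factorial(x)
    A = math.exp(theta)  # cumulant function: A'(theta) = mean, A''(theta) = variance
    return h * math.exp(theta * x - A)

lam = 3.0
theta = math.log(lam)
p = poisson_pmf_nef(4, theta)
direct = math.exp(-lam) * lam**4 / math.factorial(4)
print(abs(p - direct) < 1e-12)  # True: same distribution, reparameterized
```

Differentiating the cumulant function gives the mean and variance, which is precisely the mechanism behind the Tweedie variance function \operatorname{Var}(Y) = \sigma^2\mu^p in the exponential dispersion setting.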