Generalized Normal Distribution
The generalized normal distribution (GND) or generalized Gaussian distribution (GGD) is either of two families of parametric continuous probability distributions on the real line. Both families add a shape parameter to the normal distribution. To distinguish the two families, they are referred to below as "symmetric" and "asymmetric"; however, this is not a standard nomenclature.

Symmetric version

The symmetric generalized normal distribution, also known as the exponential power distribution or the generalized error distribution, is a parametric family of symmetric distributions. It includes all normal and Laplace distributions, and as limiting cases it includes all continuous uniform distributions on bounded intervals of the real line. With location parameter \mu, scale parameter \alpha and shape parameter \beta, this family includes the normal distribution when \beta = 2 (with mean \mu and variance \frac{\alpha^2}{2}) and it includes the Laplace distribution when \beta = 1. As \beta \rightarrow \infty, the density converges pointwise to a uniform density on (\mu - \alpha, \mu + \alpha) ...
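As a sketch of these special cases, the snippet below uses SciPy's gennorm implementation of the symmetric family (the choice of SciPy is an assumption, not something the text prescribes); the scale conversion follows the variance formula above, so \beta = 2 with \alpha = \sqrt{2} matches the standard normal, and \beta = 1 matches the Laplace.

```python
# Hedged sketch: check the beta = 2 (normal) and beta = 1 (Laplace)
# special cases of the symmetric generalized normal distribution.
import numpy as np
from scipy import stats

x = np.linspace(-3.0, 3.0, 13)

# beta = 2 with scale alpha = sqrt(2) gives variance alpha^2 / 2 = 1,
# i.e. the standard normal density.
print(np.allclose(stats.gennorm.pdf(x, 2, scale=np.sqrt(2)),
                  stats.norm.pdf(x)))                               # True

# beta = 1 reproduces the Laplace density with the same scale.
print(np.allclose(stats.gennorm.pdf(x, 1), stats.laplace.pdf(x)))   # True
```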




Parametric Statistics
Parametric statistics is a branch of statistics which leverages models based on a fixed (finite) set of parameters. Conversely, nonparametric statistics does not assume explicit (finite-parametric) mathematical forms for distributions when modeling data. It may, however, make some assumptions about that distribution, such as continuity or symmetry, or even assume an explicit mathematical shape while modeling a distributional parameter that is not itself finite-parametric. Most well-known statistical methods are parametric. Regarding nonparametric (and semiparametric) models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".

Example

The normal family of distributions all have the same general shape and are parameterized by mean and standard deviation. That means that if the mean and standard deviation are known, and if the distribution is normal, the probability of a future observation falling in any given range is fully determined.
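A minimal sketch of that idea, with arbitrary illustrative parameter values: once \mu and \sigma are fixed, any interval probability follows from the normal CDF.

```python
# Hedged sketch: with the two parameters of a normal model fixed,
# every probability statement is determined. Values are illustrative.
from scipy import stats

mu, sigma = 100.0, 15.0   # hypothetical known parameters
p = (stats.norm.cdf(130.0, loc=mu, scale=sigma)
     - stats.norm.cdf(85.0, loc=mu, scale=sigma))
print(p)  # P(85 <= X <= 130) under the assumed model
```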


Peakedness
In probability theory and statistics, a shape parameter (also known as a form parameter) is a kind of numerical parameter of a parametric family of probability distributions (Everitt, B.S. (2002), Cambridge Dictionary of Statistics, 2nd Edition, CUP) that is neither a location parameter nor a scale parameter (nor a function of these, such as a rate parameter). Such a parameter must affect the shape of a distribution rather than simply shifting it (as a location parameter does) or stretching/shrinking it (as a scale parameter does). For example, "peakedness" refers to how round the main peak is.

Estimation

Many estimators measure location or scale; however, estimators for shape parameters also exist. Most simply, they can be estimated in terms of the higher moments, using the method of moments, as in the skewness (3rd moment) or kurtosis (4th moment), if the higher moments are defined and finite. Estimators of shape often involve higher-order statistics (non-linear functions of the data) ...
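A small method-of-moments sketch of the estimators just mentioned, on simulated data (the exponential draw is only an illustrative choice):

```python
# Hedged sketch: estimate shape via the 3rd and 4th standardized moments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.exponential(size=100_000)   # right-skewed illustrative sample

print(stats.skew(data))       # ~2, the exponential's skewness
print(stats.kurtosis(data))   # ~6, its excess kurtosis
```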



Linear Regression
In statistics, linear regression is a statistical model that estimates the relationship between a scalar response (dependent variable) and one or more explanatory variables (regressors or independent variables). A model with exactly one explanatory variable is a simple linear regression; a model with two or more explanatory variables is a multiple linear regression. This term is distinct from multivariate linear regression, which predicts multiple correlated dependent variables rather than a single dependent variable. In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, ...
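A minimal simple-linear-regression sketch, assuming simulated data and NumPy's least-squares polynomial fit rather than any particular statistics package:

```python
# Hedged sketch: fit the conditional mean E[y | x] = a + b*x by
# ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=1.0, size=x.size)  # true a=2, b=3

b, a = np.polyfit(x, y, deg=1)   # coefficients, highest degree first
print(a, b)                      # estimates close to 2.0 and 3.0
```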



Cusp (singularity)
In mathematics, a cusp, sometimes called a spinode in old texts, is a point on a curve where a moving point must reverse direction. A typical example is given in the figure. A cusp is thus a type of singular point of a curve. For a plane curve defined by an analytic parametric equation

:x = f(t), \quad y = g(t),

a cusp is a point where both derivatives of f and g are zero, and the directional derivative, in the direction of the tangent, changes sign (the direction of the tangent is the direction of the slope \lim (g'(t)/f'(t))). Cusps are local singularities in the sense that they involve only one value of the parameter t, in contrast to self-intersection points that involve more than one value. In some contexts, the condition on the directional derivative may be omitted, although, in this case, the singularity may look like a regular point. For a curve defined by an implicit equation

:F(x,y) = 0,

which is smooth, cusps are points where the terms of lowest degree ...
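As a worked check of the parametric condition, the standard example x = t^2, y = t^3 (the semicubical parabola) has a cusp at t = 0; the SymPy sketch below verifies that both derivatives vanish there and that the tangent slope 3t/2 changes sign.

```python
# Hedged sketch: verify the cusp conditions for x = t**2, y = t**3 at t = 0.
import sympy as sp

t = sp.symbols('t', real=True)
f, g = t**2, t**3

print(sp.diff(f, t).subs(t, 0), sp.diff(g, t).subs(t, 0))  # 0 0
slope = sp.simplify(sp.diff(g, t) / sp.diff(f, t))          # 3*t/2
print(sp.sign(slope.subs(t, -1)), sp.sign(slope.subs(t, 1)))  # -1 1: sign flip
```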



Student's t-distribution
In probability theory and statistics, Student's t distribution (or simply the t distribution) t_\nu is a continuous probability distribution that generalizes the standard normal distribution. Like the latter, it is symmetric around zero and bell-shaped. However, t_\nu has heavier tails, and the amount of probability mass in the tails is controlled by the parameter \nu. For \nu = 1 the Student's t distribution t_\nu becomes the standard Cauchy distribution, which has very "fat" tails; whereas for \nu \to \infty it becomes the standard normal distribution \mathcal{N}(0, 1), which has very "thin" tails. The name "Student" is a pseudonym used by William Sealy Gosset in his scientific paper publications during his work at the Guinness Brewery in Dublin, Ireland. The Student's t distribution plays a role in a number of widely used statistical analyses, including Student's t-test for assessing the statistical significance of the difference between two sample means, the construction of confidence intervals ...
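A quick numerical sketch of those two limits, using SciPy (an assumed tool choice):

```python
# Hedged sketch: t with nu = 1 equals the standard Cauchy; large nu
# approaches the standard normal.
import numpy as np
from scipy import stats

x = np.linspace(-4.0, 4.0, 9)
print(np.allclose(stats.t.pdf(x, df=1), stats.cauchy.pdf(x)))      # True
print(np.max(np.abs(stats.t.pdf(x, df=1e6) - stats.norm.pdf(x))))  # tiny
```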


Skew Normal Distribution
In probability theory and statistics, the skew normal distribution is a continuous probability distribution that generalises the normal distribution to allow for non-zero skewness.

Definition

Let \phi(x) denote the standard normal probability density function

:\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}

with the cumulative distribution function given by

:\Phi(x) = \int_{-\infty}^{x} \phi(t)\,\mathrm{d}t = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right],

where "erf" is the error function. Then the probability density function (pdf) of the skew-normal distribution with parameter \alpha is given by

:f(x) = 2\phi(x)\Phi(\alpha x).

This distribution was first introduced by O'Hagan and Leonard (1976). Alternative forms to this distribution, with the corresponding quantile function, have been given by Ashour and Abdel-Hamid and by Mudholkar and Hutson. A stochastic process that underpins the distribution was described by Andel, Netuka and Zvara (1984). Both the distribution and its stochastic process ...
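The pdf above is easy to implement directly; the sketch below checks it against SciPy's skewnorm for one arbitrary choice of \alpha:

```python
# Hedged sketch: f(x) = 2 * phi(x) * Phi(alpha * x), checked numerically.
import numpy as np
from scipy import stats

alpha = 3.0                       # arbitrary illustrative shape
x = np.linspace(-3.0, 3.0, 13)
f = 2.0 * stats.norm.pdf(x) * stats.norm.cdf(alpha * x)
print(np.allclose(f, stats.skewnorm.pdf(x, alpha)))   # True
```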



Symmetric Distribution
In statistics, a symmetric probability distribution is a probability distribution (an assignment of probabilities to possible occurrences) which is unchanged when its probability density function (for a continuous probability distribution) or probability mass function (for discrete random variables) is reflected around a vertical line at some value of the random variable represented by the distribution. This vertical line is the line of symmetry of the distribution. Thus the probability of being any given distance on one side of the value about which symmetry occurs is the same as the probability of being the same distance on the other side of that value.

Formal definition

A probability distribution is said to be symmetric if and only if there exists a value x_0 such that

:f(x_0 - \delta) = f(x_0 + \delta)

for all real numbers \delta, where f is the probability density function if the distribution is continuous or the probability mass function if the distribution is discrete ...
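A small numerical sketch of the definition, using a normal density with an arbitrary centre x_0:

```python
# Hedged sketch: check f(x0 - d) = f(x0 + d) for a normal centred at x0.
import numpy as np
from scipy import stats

x0 = 1.5                               # arbitrary centre of symmetry
d = np.linspace(0.0, 3.0, 7)
print(np.allclose(stats.norm.pdf(x0 - d, loc=x0),
                  stats.norm.pdf(x0 + d, loc=x0)))   # True
```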



Trigamma Function
In mathematics, the trigamma function, denoted \psi_1(z) or \psi^{(1)}(z), is the second of the polygamma functions, and is defined by

:\psi_1(z) = \frac{\mathrm d^2}{\mathrm dz^2} \ln\Gamma(z).

It follows from this definition that

:\psi_1(z) = \frac{\mathrm d}{\mathrm dz} \psi(z)

where \psi(z) is the digamma function. It may also be defined as the sum of the series

:\psi_1(z) = \sum_{n=0}^{\infty} \frac{1}{(z+n)^2},

making it a special case of the Hurwitz zeta function

:\psi_1(z) = \zeta(2, z).

Note that the last two formulas are valid when 1 - z is not a natural number.

Calculation

A double integral representation, as an alternative to the ones given above, may be derived from the series representation:

:\psi_1(z) = \int_0^1\!\!\int_0^x \frac{y^{z-1}}{x(1-y)}\,dy\,dx

using the formula for the sum of a geometric series. Integration over x (after exchanging the order of integration) yields:

:\psi_1(z) = -\int_0^1 \frac{x^{z-1}\ln x}{1-x}\,dx

An asymptotic expansion as a Laurent series can be obtained via the derivative of the asymptotic expansion of the digamma function:

:\psi_1(z) \sim \frac{\mathrm d}{\mathrm dz}\left(\ln z - \sum_{n=1}^{\infty} \frac{B_n}{n z^n}\right) = \frac{1}{z} + \sum_{n=1}^{\infty} \frac{B_n}{z^{n+1}}

where B_n denotes the Bernoulli numbers (with the convention B_1 = \tfrac{1}{2}) ...
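A sketch comparing the truncated series definition with SciPy's polygamma (the truncation length is arbitrary, so agreement is only approximate):

```python
# Hedged sketch: psi_1(z) = sum_{n>=0} 1/(z+n)^2 versus scipy polygamma.
import numpy as np
from scipy.special import polygamma

z = 2.5                                # arbitrary test point
n = np.arange(200_000)                 # arbitrary truncation of the series
series = np.sum(1.0 / (z + n) ** 2)
print(series, polygamma(1, z))         # nearly equal
```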



Digamma Function
In mathematics, the digamma function is defined as the logarithmic derivative of the gamma function:

:\psi(z) = \frac{\mathrm d}{\mathrm dz}\ln\Gamma(z) = \frac{\Gamma'(z)}{\Gamma(z)}.

It is the first of the polygamma functions. This function is strictly increasing and strictly concave on (0,\infty), and it asymptotically behaves as

:\psi(z) \sim \ln z - \frac{1}{2z},

for complex numbers with large modulus (|z| \rightarrow \infty) in the sector |\arg z| < \pi - \varepsilon with arbitrarily small \varepsilon > 0. The digamma function is often denoted as \psi_0(x), \psi^{(0)}(x) or Ϝ (the uppercase form of the archaic Greek consonant digamma, meaning double-gamma).

Relation to harmonic numbers

The gamma function obeys the equation

:\Gamma(z+1) = z\Gamma(z).

Taking the logarithm on both sides and using the functional equation property of the log-gamma function gives:

:\log \Gamma(z+1) = \log(z) + \log \Gamma(z).

Differentiating both sides with respect to z gives:

:\psi(z+1) = \psi(z) + \frac{1}{z}.

Since the harmonic numbers ...
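The recurrence derived above, and the standard harmonic-number identity \psi(n+1) = H_n - \gamma that follows from it, can be checked numerically:

```python
# Hedged sketch: psi(z+1) = psi(z) + 1/z, and psi(n+1) = H_n - gamma.
import numpy as np
from scipy.special import digamma

z = 3.7                                                    # arbitrary test point
print(np.isclose(digamma(z + 1.0), digamma(z) + 1.0 / z))  # True

n = 10
H_n = sum(1.0 / k for k in range(1, n + 1))                # n-th harmonic number
print(np.isclose(digamma(n + 1), H_n - np.euler_gamma))    # True
```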



Moment (mathematics)
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to \infty) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem). In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the moments of random variables.

Significance of the moments ...
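As a sketch, the four moment-based summaries named above can be computed for a simulated sample:

```python
# Hedged sketch: mean, variance, skewness, excess kurtosis of a sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)

print(x.mean())           # 1st moment: ~1.0
print(x.var())            # 2nd central moment: ~4.0
print(stats.skew(x))      # 3rd standardized moment: ~0
print(stats.kurtosis(x))  # 4th standardized moment (excess): ~0
```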



Newton's Method
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f', and an initial guess x_0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then

:x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}

is a better approximation of the root than x_0. Geometrically, (x_1, 0) is the x-intercept of the tangent of the graph of f at (x_0, f(x_0)): that is, the improved guess, x_1, is the unique root of the linear approximation of f at the initial guess, x_0. The process is repeated as

:x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.
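A minimal sketch of the iteration for f(x) = x^2 - 2 (root \sqrt{2}), showing the rapid convergence just described:

```python
# Hedged sketch: Newton's iteration x_{n+1} = x_n - f(x_n)/f'(x_n).
f = lambda x: x * x - 2.0
fprime = lambda x: 2.0 * x

x = 1.0                      # initial guess
for _ in range(5):
    x = x - f(x) / fprime(x)
    print(x)                 # converges quickly to 1.41421356...
```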



Maximum Likelihood
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference. If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance. From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with a prior distribution that is uniform in the region of interest ...
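A sketch of the idea on simulated data, assuming a normal model: minimize the negative log-likelihood numerically and compare with the closed-form estimates (sample mean, sample standard deviation):

```python
# Hedged sketch: numerical MLE for a normal model versus closed form.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)

def nll(params):
    mu, log_sigma = params                    # log-parameterize sigma > 0
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

res = optimize.minimize(nll, x0=np.array([0.0, 0.0]))
print(res.x[0], np.exp(res.x[1]))             # ~ data.mean(), ~ data.std()
```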