Empirical examples
The distributions of a wide variety of physical, biological, and human-made phenomena approximately follow a power law over a wide range of magnitudes: these include the sizes of craters on the Moon, the sizes of solar flares, the frequencies of words in most languages, and many other quantities.

Properties
Statistical incompleteness
The power-law model does not satisfy statistical completeness: in particular, the standard model contains no parameters for probability bounds, even though such bounds are the suspected cause of the bending and flattening commonly observed in the high- and low-frequency segments of empirical plots.

Scale invariance
One attribute of power laws is their scale invariance: given a relation f(x) = a x^{-k}, scaling the argument x by a constant factor c causes only a proportionate scaling of the function itself, so all power laws with the same exponent are equivalent up to constant factors.
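Written out, this scale invariance amounts to a one-line computation:
: f(cx) = a (cx)^{-k} = c^{-k} a x^{-k} = c^{-k} f(x) \propto f(x),
so scaling the argument merely multiplies the function by the constant c^{-k}, which on a log–log plot shifts the line without changing its slope.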
Lack of well-defined average value

A power law x^{-k} has a well-defined mean over x ∈ [1, ∞) only if k > 2, and it has a finite variance only if k > 3; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not.
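To see where the threshold k > 2 comes from, consider the mean of a density proportional to x^{-k} on [1, ∞) (the normalization constant is omitted, since it does not affect convergence):
: \int_1^\infty x \cdot x^{-k}\, dx = \int_1^\infty x^{1-k}\, dx = \frac{1}{k-2} \quad (k > 2),
and the integral diverges for k ≤ 2; the analogous second-moment integral diverges for k ≤ 3.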
Universality

The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation. In physics, for example, phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as the critical exponents of the system; diverse systems with the same critical exponents are said to belong to the same universality class.

Power-law functions
Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them. The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems; see also universality above. The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems power laws are often thought to be signatures of hierarchy or of specific stochastic processes.

Examples
More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income). Among them are:

Artificial Intelligence
* Neural scaling laws

Astronomy
* Kepler's third law
* The initial mass function of stars

Biology
* Kleiber's law relating animal metabolism to size, and allometric laws in general
* The two-thirds power law, relating speed to curvature in the human motor system

Chemistry
* Rate law

Climate science
* Sizes of cloud areas and perimeters, as viewed from space
* The size of rain-shower cells
* Energy dissipation in cyclones
* Diameters of dust devils on Earth and Mars

General science
* Highly optimized tolerance
* Proposed form of experience curve effects

Economics
* Population sizes of cities in a region or urban network, Zipf's law.
* Distribution of artists by the average price of their artworks.
* Distribution of income in a market economy.

Finance
* Returns for high-risk investments

Mathematics
* Fractals

Physics
* The Angstrom exponent in aerosol optics
* The frequency-dependency of acoustic attenuation in complex media

Political Science
* Cube root law of assembly sizes

Psychology
* Stevens's power law of psychophysics

Variants
Broken power law
Smoothly broken power law
The pieces of a broken power law can be smoothly spliced together to construct a smoothly broken power law. There are different possible ways to splice together power laws.
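As an illustration, one commonly used smoothly broken form (this particular parametrization, with break scale x_b, low- and high-end exponents α₁ and α₂, and smoothness parameter Δ, is a representative possibility rather than necessarily the example intended in the original text) is
: f(x) = A \left(\frac{x}{x_b}\right)^{-\alpha_1} \left\{ \frac{1}{2}\left[ 1 + \left(\frac{x}{x_b}\right)^{1/\Delta} \right] \right\}^{(\alpha_1 - \alpha_2)\Delta},
which behaves as x^{-\alpha_1} for x ≪ x_b and as x^{-\alpha_2} for x ≫ x_b, with Δ controlling how sharp the transition is.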
Power law with exponential cutoff

A power law with an exponential cutoff is simply a power law multiplied by an exponential function:
: f(x) \propto x^{-\alpha} e^{-\lambda x}.

Curved power law
Power-law probability distributions
In a looser sense, a power-law probability distribution is a distribution whose density function (or mass function in the discrete case) behaves, for large values of the variable, like a power law multiplied by a slowly varying function.

Graphical methods for identification
Although more sophisticated and robust methods have been proposed, the most frequently used graphical methods of identifying power-law probability distributions using random samples are Pareto quantile-quantile plots (or Pareto Q–Q plots), mean residual life plots, and log–log plots.

Pareto Q–Q plots
Pareto Q–Q plots compare the quantiles of the log-transformed data to the corresponding quantiles of an exponential distribution with mean 1 (or to the quantiles of a standard Pareto distribution) by plotting the former versus the latter. If the resultant scatterplot suggests that the plotted points ''asymptotically converge'' to a straight line, then a power-law distribution should be suspected. A limitation of Pareto Q–Q plots is that they behave poorly when the tail index (also called the Pareto index) is close to 0, because then they fail to identify distributions with slowly varying tails.
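A minimal Python sketch of a Pareto Q–Q plot (the synthetic Pareto sample, the random seed, and the plotting-position convention are illustrative assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic sample with a power-law (Pareto) tail, tail index 2.5 and x_min = 1;
# any positive-valued data set could be used instead.
rng = np.random.default_rng(0)
x = (1 - rng.random(1000)) ** (-1 / 2.5)

n = x.size
log_x = np.sort(np.log(x))                 # quantiles of the log-transformed data
p = (np.arange(1, n + 1) - 0.5) / n        # plotting positions
exp_quantiles = -np.log(1 - p)             # quantiles of an exponential with mean 1

plt.scatter(exp_quantiles, log_x, s=5)
plt.xlabel("Quantiles of the exponential distribution (mean 1)")
plt.ylabel("Quantiles of log(data)")
plt.title("Pareto Q-Q plot: approximate linearity suggests a power-law tail")
plt.show()
```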
Mean residual life plots

On the other hand, in its version for identifying power-law probability distributions, the mean residual life plot consists of first log-transforming the data, and then plotting the average of those log-transformed data that are higher than the ''i''-th order statistic versus the ''i''-th order statistic, for ''i'' = 1, ..., ''n'', where ''n'' is the size of the random sample. If the resultant scatterplot suggests that the plotted points tend to stabilize about a horizontal straight line, then a power-law distribution should be suspected. Since the mean residual life plot is very sensitive to outliers (it is not robust), it usually produces plots that are difficult to interpret; for this reason, such plots are usually called Hill horror plots.
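A minimal Python sketch of such a plot, interpreting the averaged quantity as the mean excess of the log-transformed data above each order statistic (the Hill-estimator form, which is the quantity that stabilizes about a horizontal line for a power-law tail); the synthetic sample is an illustrative assumption:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = (1 - rng.random(1000)) ** (-1 / 2.5)   # synthetic sample with a Pareto tail

y = np.sort(np.log(x))                     # log-transform, then take order statistics
n = y.size
order_stats = y[:-1]
# Mean excess of the log-data above each order statistic (Hill-type quantity).
mean_excess = np.array([np.mean(y[i + 1:] - y[i]) for i in range(n - 1)])

plt.scatter(order_stats, mean_excess, s=5)
plt.xlabel("i-th order statistic of log(data)")
plt.ylabel("Mean excess of log(data) above the order statistic")
plt.title("Mean residual life plot: stabilization about a horizontal line\n"
          "suggests a power-law tail")
plt.show()
```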
Log-log plots

Bundle plots
Another graphical method for the identification of power-law probability distributions using random samples has been proposed. This methodology consists of plotting a ''bundle for the log-transformed sample''. Originally proposed as a tool to explore the existence of moments and the moment generation function using random samples, the bundle methodology is based on residual quantile functions (RQFs), also called residual percentile functions, which provide a full characterization of the tail behavior of many well-known probability distributions, including power-law distributions, distributions with other types of heavy tails, and even non-heavy-tailed distributions. Bundle plots do not have the disadvantages of Pareto Q–Q plots, mean residual life plots and log–log plots mentioned above (they are robust to outliers, allow visually identifying power laws with small values of the tail index, and do not demand the collection of much data).

Plotting power-law distributions
In general, power-law distributions are plotted on doubly logarithmic axes (a log–log plot), which emphasizes the upper tail region. The most convenient way to do this is via the complementary cumulative distribution (ccdf), that is, the survival function P(X > x).
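A minimal Python sketch of the empirical ccdf on doubly logarithmic axes (the synthetic sample is an illustrative assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = (1 - rng.random(5000)) ** (-1 / 2.5)   # synthetic sample with a power-law tail

# Empirical complementary cumulative distribution (survival) function P(X > x).
xs = np.sort(x)
ccdf = 1.0 - np.arange(1, xs.size + 1) / xs.size

# Drop the last point, where the empirical ccdf is exactly zero, before taking logs.
plt.loglog(xs[:-1], ccdf[:-1], marker=".", linestyle="none", markersize=2)
plt.xlabel("x")
plt.ylabel("P(X > x)")
plt.title("Empirical ccdf on log-log axes; a power-law tail appears as a straight line")
plt.show()
```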
Estimating the exponent from empirical data

There are many ways of estimating the value of the scaling exponent for a power-law tail; however, not all of them yield unbiased and consistent answers. Some of the most reliable techniques are based on the method of maximum likelihood. Alternative methods are often based on making a linear regression on either the log–log probability, the log–log cumulative distribution function, or on log-binned data, but these approaches should be avoided as they can all lead to highly biased estimates of the scaling exponent.

Maximum likelihood
For real-valued, independent and identically distributed data, we fit a power-law distribution of the form
: p(x) = \frac{\alpha - 1}{x_{\min}} \left( \frac{x}{x_{\min}} \right)^{-\alpha}
to the data x ≥ x_min, where x_min is the smallest value above which the power-law behaviour holds.
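For this model the maximum likelihood estimator has a closed form, \hat\alpha = 1 + n \left[ \sum_{i=1}^{n} \ln(x_i / x_{\min}) \right]^{-1}. A minimal Python sketch (the function name and the synthetic sample are illustrative assumptions):

```python
import numpy as np

def powerlaw_alpha_mle(x, x_min):
    """Maximum likelihood estimate of alpha for the continuous power-law density
    p(x) = (alpha - 1) / x_min * (x / x_min) ** (-alpha), defined for x >= x_min."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    n = tail.size
    alpha_hat = 1.0 + n / np.sum(np.log(tail / x_min))
    std_err = (alpha_hat - 1.0) / np.sqrt(n)   # approximate standard error
    return alpha_hat, std_err

# Synthetic sample with true alpha = 2.5 and x_min = 1 (inverse-CDF sampling).
rng = np.random.default_rng(0)
x = (1 - rng.random(10000)) ** (-1.0 / (2.5 - 1.0))
print(powerlaw_alpha_mle(x, x_min=1.0))        # close to (2.5, 0.015)
```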
Kolmogorov–Smirnov estimation

Another method for the estimation of the power-law exponent, which does not assume independent and identically distributed (iid) data, uses the minimization of the Kolmogorov–Smirnov statistic, D, between the cumulative distribution functions of the data and of the fitted power law.
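A minimal Python sketch of this idea, under the simplifying assumptions of a known lower bound x_min and a grid search over candidate exponents (the function names and the synthetic sample are illustrative):

```python
import numpy as np

def ks_distance(x, alpha, x_min):
    """Kolmogorov-Smirnov distance between the empirical CDF of the tail data and
    the CDF of a continuous power law with exponent alpha and lower bound x_min."""
    tail = np.sort(np.asarray(x, dtype=float))
    tail = tail[tail >= x_min]
    n = tail.size
    empirical_cdf = np.arange(1, n + 1) / n
    model_cdf = 1.0 - (tail / x_min) ** (-(alpha - 1.0))
    return np.max(np.abs(empirical_cdf - model_cdf))

def ks_exponent_estimate(x, x_min, alphas=np.linspace(1.1, 4.0, 581)):
    """Exponent minimizing the KS statistic over a grid of candidate values."""
    distances = [ks_distance(x, a, x_min) for a in alphas]
    return alphas[int(np.argmin(distances))]

rng = np.random.default_rng(0)
x = (1 - rng.random(5000)) ** (-1.0 / 1.5)     # synthetic sample, true alpha = 2.5
print(ks_exponent_estimate(x, x_min=1.0))      # close to 2.5
```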
Two-point fitting method

This criterion can be applied for the estimation of the power-law exponent in the case of scale-free distributions and provides a more convergent estimate than the maximum likelihood method. It has been applied to study probability distributions of fracture apertures. In some contexts the probability distribution is described, not by the cumulative distribution function, but by the cumulative frequency of a property ''X'', defined as the number of elements per meter (or area unit, second, etc.) for which ''X'' > ''x'' applies, where ''x'' is a variable real number. As an example, the cumulative distribution of the fracture aperture, ''X'', for a sample of ''N'' elements is defined as the number of fractures per meter having aperture greater than ''x''. Use of cumulative frequency has some advantages, e.g. it allows one to put on the same diagram data gathered from sample lines of different lengths at different scales (e.g. from outcrop and from microscope).

Validating power laws
Although power-law relations are attractive for many theoretical reasons, demonstrating that data do indeed follow a power-law relation requires more than simply fitting a particular model to the data. This is important for understanding the mechanism that gives rise to the distribution: superficially similar distributions may arise for significantly different reasons, and different models yield different predictions, for example when extrapolating. Log-normal distributions, in particular, are often mistaken for power-law distributions: a data set drawn from a lognormal distribution will appear approximately linear on a log–log plot for large values (corresponding to the upper tail of the lognormal being close to a power law), but for small values the lognormal will drop off significantly (bowing down), corresponding to the lower tail of the lognormal being small (there are very few small values, rather than many small values as in a power law). For example, Gibrat's law about proportional growth processes produces distributions that are lognormal, although their log–log plots look linear over a limited range.
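A minimal Python sketch illustrating this difference by overlaying the empirical ccdfs of a synthetic power-law sample and a synthetic lognormal sample on log–log axes (both samples and their parameters are illustrative assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
power_law = (1 - rng.random(5000)) ** (-1.0 / 1.5)         # power-law sample, alpha = 2.5
lognormal = rng.lognormal(mean=0.0, sigma=1.5, size=5000)  # lognormal sample

def empirical_ccdf(sample):
    """Sorted values and empirical P(X > x), excluding the zero-probability last point."""
    xs = np.sort(sample)
    p = 1.0 - np.arange(1, xs.size + 1) / xs.size
    return xs[:-1], p[:-1]

for sample, label in [(power_law, "power law"), (lognormal, "lognormal")]:
    xs, p = empirical_ccdf(sample)
    plt.loglog(xs, p, marker=".", linestyle="none", markersize=2, label=label)

plt.xlabel("x")
plt.ylabel("P(X > x)")
plt.legend()
plt.title("On log-log axes the lognormal ccdf bows downward\n"
          "while the power-law ccdf stays approximately straight")
plt.show()
```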
See also

* Fat-tailed distribution
* Heavy-tailed distributions
* Hyperbolic growth
* Lévy flight
* Long tail
* Low-degree saturation

References
Notes

Bibliography

External links