Laplace Smoothing

In statistics, additive smoothing, also called Laplace smoothing or Lidstone smoothing, is a technique used to smooth categorical data. Given a set of observation counts \textstyle \mathbf{x} = (x_1, \ldots, x_d) from a \textstyle d-dimensional multinomial distribution with \textstyle N trials, a "smoothed" version of the counts gives the estimator

:\hat\theta_i = \frac{x_i + \alpha}{N + \alpha d} \qquad (i=1,\ldots,d),

where the smoothed count \textstyle \hat{x}_i = N \hat\theta_i and the "pseudocount" ''α'' > 0 is a smoothing parameter; ''α'' = 0 corresponds to no smoothing. (This parameter is explained in the Pseudocount section below.) Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) \textstyle x_i/N and the uniform probability \textstyle 1/d. Invoking Laplace's rule of succession, some authors have argued that ''α'' should be 1 (in which case the term add-one smoothing is also used), though in practice a smaller value is typically chosen.

From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter ''α'' as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.
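The estimator above can be sketched in a few lines of Python; the function name and the example counts are illustrative, not part of any standard library:

```python
def additive_smoothing(counts, alpha=1.0):
    """Smoothed multinomial estimate: (x_i + alpha) / (N + alpha * d)."""
    n = sum(counts)   # N, the total number of trials
    d = len(counts)   # number of categories
    return [(x + alpha) / (n + alpha * d) for x in counts]

# Category 2 was never observed, yet receives a non-zero probability.
probs = additive_smoothing([3, 0, 1], alpha=1.0)
print(probs)  # [4/7, 1/7, 2/7]
```

Note that the smoothed estimates still sum to 1, since the denominator adds exactly \textstyle \alpha d to match the \textstyle \alpha added to each of the \textstyle d numerators.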


History

Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will rise tomorrow (this is known as the sunrise problem). (Lecture 5, Machine Learning (Stanford), at 1h10m into the lecture.)


Pseudocount

A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero. It is so named because, roughly speaking, a pseudocount of value ''α'' weighs into the posterior distribution similarly to each category having an additional count of ''α''. If the frequency of each item \textstyle i is \textstyle x_i out of \textstyle N samples, the empirical probability of event \textstyle i is

:p_{i,\mathrm{empirical}} = \frac{x_i}{N},

but the posterior probability when additively smoothed is

:p_{i,\alpha\text{-smoothed}} = \frac{x_i + \alpha}{N + \alpha d},

as if to increase each count \textstyle x_i by ''α'' a priori.

Depending on the prior knowledge, which is sometimes a subjective value, a pseudocount may have any non-negative finite value. It may only be zero (or the possibility ignored) if the possibility is impossible by definition, such as a decimal digit of pi being a letter; or a physical possibility that would be rejected and so not counted, such as a computer printing a letter when a valid program for pi is run; or excluded and not counted because of no interest, such as when one is only interested in the zeros and ones. Generally, there is also a possibility that no value may be computable or observable in a finite time (see the halting problem). But at least one possibility must have a non-zero pseudocount, otherwise no prediction could be computed before the first observation.

The relative values of pseudocounts represent the relative prior expected probabilities of their possibilities. The sum of the pseudocounts, which may be very large, represents the estimated weight of the prior knowledge compared with all the actual observations (one for each) when determining the expected probability.

In any observed data set or sample there is the possibility, especially with low-probability events and with small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This oversimplification is inaccurate and often unhelpful, particularly in probability-based machine learning techniques such as artificial neural networks and hidden Markov models. By artificially adjusting the probability of rare (but not impossible) events so those probabilities are not exactly zero, zero-frequency problems are avoided. See also Cromwell's rule.

The simplest approach is to add ''one'' to each observed number of events, including the zero-count possibilities. This is sometimes called Laplace's rule of succession. This approach is equivalent to assuming a uniform prior distribution over the probabilities for each possible event (spanning the simplex where each probability is between 0 and 1, and they all sum to 1). Using the Jeffreys prior approach, a pseudocount of one half should be added to each possible outcome.

Pseudocounts should be set to one only when there is no prior knowledge at all (see the principle of indifference). However, given appropriate prior knowledge, the sum should be adjusted in proportion to the expectation that the prior probabilities should be considered correct, despite evidence to the contrary. Higher values are appropriate inasmuch as there is prior knowledge of the true values (for a mint-condition coin, say); lower values inasmuch as there is prior knowledge that there is probable bias, but of unknown degree (for a bent coin, say). A more complex approach is to estimate the probability of the events from other factors and adjust accordingly.
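The effect of the pseudocount choice can be sketched numerically. The counts below are made up for illustration: ten observations over three categories, one of which was never seen:

```python
counts = {"a": 4, "b": 6, "c": 0}  # "c" never observed in 10 samples
n = sum(counts.values())
d = len(counts)

# alpha = 0: no smoothing; 0.5: Jeffreys prior; 1.0: add-one (Laplace)
for alpha in (0.0, 0.5, 1.0):
    probs = {k: (v + alpha) / (n + alpha * d) for k, v in counts.items()}
    print(alpha, {k: round(p, 3) for k, p in probs.items()})
```

Larger ''α'' pulls every estimate further toward the uniform 1/3; with ''α'' = 0 the unseen category "c" keeps probability zero, which is exactly the zero-frequency problem discussed above.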


Examples

One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, in particular a binomial proportion confidence interval. The best-known is due to Edwin Bidwell Wilson: with \textstyle n_S successes in \textstyle n trials, the midpoint of the Wilson score interval corresponding to \textstyle z standard deviations on either side is

:\frac{n_S + \frac{z^2}{2}}{n + z^2}.

Taking \textstyle z = 2 standard deviations to approximate a 95% confidence interval (\textstyle z \approx 1.96) yields a pseudocount of 2 for each outcome, so 4 in total, colloquially known as the "plus four rule":

:\frac{n_S + 2}{n + 4}.

This is also the midpoint of the Agresti–Coull interval.
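The plus-four midpoint is simple to compute; a minimal sketch (the function name is illustrative):

```python
def plus_four_midpoint(successes, trials):
    """Midpoint of the plus-four (Agresti-Coull) estimate: add two
    pseudo-successes and two pseudo-failures, corresponding to z = 2."""
    return (successes + 2) / (trials + 4)

# Even with zero observed successes, the estimate is pulled away from 0.
print(plus_four_midpoint(0, 10))   # 2/14, about 0.143
print(plus_four_midpoint(10, 10))  # 12/14, about 0.857
```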


Generalized to the case of known incidence rates

Often one is testing the bias of an unknown trial population against a control population with known parameters (incidence rates) \textstyle \mu_i. In this case the uniform probability \textstyle 1/d should be replaced by the known incidence rate of the control population \textstyle \mu_i, giving the smoothed estimator

:\hat\theta_i = \frac{x_i + \mu_i \alpha d}{N + \alpha d} \qquad (i=1,\ldots,d).

As a consistency check, if the empirical estimator happens to equal the incidence rate, i.e. \textstyle \mu_i = x_i/N, the smoothed estimator is independent of ''α'' and also equals the incidence rate.
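A sketch of this generalized estimator, with illustrative counts and control rates:

```python
def smoothed_with_rates(counts, rates, alpha=1.0):
    """Smooth counts toward known incidence rates mu_i instead of 1/d."""
    n = sum(counts)
    d = len(counts)
    return [(x + mu * alpha * d) / (n + alpha * d)
            for x, mu in zip(counts, rates)]

# Consistency check: when the empirical frequencies already match the
# control rates, the result is independent of alpha.
print(smoothed_with_rates([50, 50], [0.5, 0.5], alpha=10.0))  # [0.5, 0.5]
```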


Applications


Classification

Additive smoothing is commonly a component of naive Bayes classifiers.
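As a sketch of how this appears in a naive Bayes classifier, the per-class word likelihoods P(word | class) are typically estimated with add-''α'' smoothing so that a word absent from a class's training documents does not zero out the whole product of likelihoods. The vocabulary and documents below are made up for illustration:

```python
from collections import Counter

def word_likelihoods(docs, vocab, alpha=1.0):
    """Add-alpha estimates of P(word | class) from one class's documents."""
    counts = Counter(w for doc in docs for w in doc)
    total = sum(counts[w] for w in vocab)
    return {w: (counts[w] + alpha) / (total + alpha * len(vocab))
            for w in vocab}

vocab = ["cheap", "pills", "meeting"]
spam_docs = [["cheap", "pills"], ["cheap", "cheap"]]
print(word_likelihoods(spam_docs, vocab))
# "meeting" never occurs in spam, yet gets probability 1/7 instead of 0
```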


Statistical language modelling

In a bag-of-words model of natural language processing and information retrieval, the data consist of the number of occurrences of each word in a document. Additive smoothing allows the assignment of non-zero probabilities to words which do not occur in the sample. Recent studies have shown that additive smoothing is more effective than other probability smoothing methods in several retrieval tasks, such as language-model-based pseudo-relevance feedback and recommender systems.
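For instance, a smoothed unigram document model for retrieval might be sketched as follows; the document text and vocabulary size are illustrative:

```python
from collections import Counter

def unigram_model(doc_words, vocab_size, alpha=1.0):
    """Additively smoothed unigram model P(w | D) for a single document."""
    counts = Counter(doc_words)
    n = len(doc_words)
    return lambda w: (counts[w] + alpha) / (n + alpha * vocab_size)

p = unigram_model("the cat sat on the mat".split(), vocab_size=10_000)
print(p("cat"))  # seen once: 2 / 10006
print(p("dog"))  # unseen, but still non-zero: 1 / 10006
```

Queries containing out-of-document words can thus still be scored against the document, rather than receiving probability zero.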


See also

* Bayesian average
* Prediction by partial matching
* Categorical distribution




External links

* SF Chen, J Goodman (1996). "An empirical study of smoothing techniques for language modeling". ''Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics''.


