In various science and engineering applications, such as independent component analysis, image analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation,[Benesty, J.; Yiteng Huang; Jingdong Chen (2007) Time delay estimation via minimum entropy. In ''Signal Processing Letters'', Volume 14, Issue 3, March 2007, pp. 157–160.] it is useful to estimate the differential entropy of a system or process, given some observations.
The simplest and most common approach uses histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks.[J. Beirlant, E. J. Dudewicz, L. Gyorfi, and E. C. van der Meulen (1997) Nonparametric entropy estimation: An overview. In ''International Journal of Mathematical and Statistical Sciences'', Volume 6, pp. 17–39.] The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate,[T. Schürmann, Bias analysis in entropy estimation. In ''J. Phys. A: Math. Gen.'', 37 (2004), pp. L295–L301.] although the nature of the (suspected) distribution of the data may also be a factor, as well as the sample size and the size of the alphabet of the probability distribution.
Histogram estimator
The histogram approach uses the idea that the differential entropy of a probability distribution f(x) for a continuous random variable X,
: h(X) = −∫ f(x) log f(x) dx,
can be approximated by first approximating f(x) with a histogram of the observations, and then finding the discrete entropy of a quantization of X,
: H(X) = −∑_{i=1}^{n} p_i log(p_i / w_i),
with bin probabilities p_i given by that histogram and w_i the width of the ith bin. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory.[G. Miller (1955) Note on the bias of information estimates. In ''Information Theory in Psychology: Problems and Methods'', pp. 95–100.]
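A minimal sketch of the histogram approach for one-dimensional data (the choice of 30 bins is arbitrary and illustrative): with bin probabilities p_i and bin widths w_i, the estimate is −∑ p_i log(p_i / w_i).

```python
import numpy as np

def histogram_entropy(samples, bins=30):
    """Histogram estimate of differential entropy, in nats."""
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    nz = p > 0                      # empty bins contribute nothing
    return -np.sum(p[nz] * np.log(p[nz] / widths[nz]))

rng = np.random.default_rng(0)
x = rng.normal(size=100000)
# True differential entropy of N(0,1) is 0.5*log(2*pi*e) ≈ 1.4189 nats
print(round(histogram_entropy(x), 2))
```

With a large sample and moderately fine bins the estimate lands close to the true value; the residual gap illustrates the bias discussed above.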
A method better suited for multidimensional probability density functions (pdf) is to first make a pdf estimate with some method, and then, from the pdf estimate, compute the entropy. A useful pdf estimation method is e.g. Gaussian mixture modeling (GMM), where the expectation maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdfs approximating the data pdf.
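A hedged one-dimensional sketch of this idea: a small hand-rolled EM loop (not a library implementation) fits a two-component Gaussian mixture, and the entropy is then estimated from the fitted pdf by the resubstitution average −(1/n) ∑ log p̂(x_i). The data, initialization, and iteration count are all illustrative choices.

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def fit_gmm_1d(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])  # crude init
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        dens = w * normal_pdf(x[:, None], mu, var)      # shape (n, 2)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 5000), rng.normal(2, 1, 5000)])

w, mu, var = fit_gmm_1d(x)
p_hat = (w * normal_pdf(x[:, None], mu, var)).sum(axis=1)
h_est = -np.mean(np.log(p_hat))          # resubstitution entropy estimate
# True entropy of this mixture is roughly 2.05 nats
print(round(h_est, 2))
```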
Estimates based on sample-spacings
If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (the reciprocal of) the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with high variance, but can be improved, for example by considering the spacing between a given value and the one ''m'' away from it, where ''m'' is some fixed number.
The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight tweaks.
One of the main drawbacks with this approach is going beyond one dimension: the idea of lining the data points up in order falls apart in more than one dimension. However, using analogous methods, some multidimensional entropy estimators have been developed.
[E. G. Learned-Miller (2003) A new class of entropy estimators for multi-dimensional densities, in ''Proceedings of the (ICASSP’03)'', vol. 3, April 2003, pp. 297–300.][I. Lee (2010) Sample-spacings based density and entropy estimators for spherically invariant multidimensional data, In ''Neural Computation'', vol. 22, issue 8, April 2010, pp. 2208–2227.]
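The m-spacing idea can be sketched as follows (this is a Vasicek-style estimator; the choice m = √n is a common heuristic, and the edge handling here simply clamps indices to the sample range):

```python
import numpy as np

def vasicek_entropy(samples, m=None):
    """m-spacing (Vasicek-style) entropy estimate for 1-D data, in nats:
    h_hat = (1/n) * sum log( n/(2m) * (x_(i+m) - x_(i-m)) )."""
    x = np.sort(samples)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                        # common heuristic choice
    upper = np.minimum(np.arange(n) + m, n - 1)    # clamp at the edges
    lower = np.maximum(np.arange(n) - m, 0)
    spacings = x[upper] - x[lower]
    return np.mean(np.log(n / (2 * m) * spacings))

rng = np.random.default_rng(2)
x = rng.normal(size=100000)
# True differential entropy of N(0,1) ≈ 1.4189 nats
print(round(vasicek_entropy(x), 2))
```

Wider spacings correspond to lower density, so the average log-spacing tracks −E[log f(X)], which is exactly the differential entropy.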
Estimates based on nearest-neighbours
For each point in our dataset, we can find the distance to its
nearest neighbour. We can in fact estimate the entropy from the distribution of the nearest-neighbour-distance of our datapoints.
(In a uniform distribution these distances all tend to be fairly similar, whereas in a strongly nonuniform distribution they may vary a lot more.)
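A sketch of the nearest-neighbour (Kozachenko–Leonenko) estimator for one-dimensional data: in nats, the estimate is ψ(N) − ψ(1) + log V_1 + mean log r_i, where r_i is the distance to the nearest neighbour, V_1 = 2 is the length of the 1-D unit ball, ψ(1) = −γ (the Euler–Mascheroni constant), and ψ(N) is approximated here by log N.

```python
import numpy as np

def knn_entropy_1d(samples):
    """Nearest-neighbour (Kozachenko-Leonenko) entropy estimate, in nats."""
    x = np.sort(samples)
    n = len(x)
    # in 1-D the nearest neighbour is an adjacent point in sorted order
    gaps = np.diff(x)
    r = np.empty(n)
    r[0] = gaps[0]
    r[-1] = gaps[-1]
    r[1:-1] = np.minimum(gaps[:-1], gaps[1:])
    # psi(n) ~ log(n) for large n; psi(1) = -euler_gamma; V_1 = 2
    return np.log(n) + np.euler_gamma + np.log(2.0) + np.mean(np.log(r))

rng = np.random.default_rng(3)
x = rng.normal(size=100000)
# True differential entropy of N(0,1) ≈ 1.4189 nats
print(round(knn_entropy_1d(x), 2))
```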
Bayesian estimator
In the under-sampled regime, having a prior on the distribution can help the estimation. One such Bayesian estimator, proposed in the neuroscience context, is known as the NSB (Nemenman–Shafee–Bialek) estimator.[Ilya Nemenman, Fariel Shafee, William Bialek (2003) Entropy and Inference, Revisited. Advances in Neural Information Processing][Ilya Nemenman, William Bialek, de Ruyter (2004) Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E] The NSB estimator uses a mixture of Dirichlet priors, chosen such that the induced prior over the entropy is approximately uniform.
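Not the full NSB machinery, but a hedged sketch of the simpler ingredient it builds on: a Bayesian estimator with a single symmetric Dirichlet(β) prior. The posterior mean of the entropy has a closed form, E[H | counts] = ψ(A + 1) − ∑_i ((n_i + β)/A) ψ(n_i + β + 1) with A = N + Kβ; NSB goes further and averages such estimates over β so that the induced prior on H is nearly flat. The digamma implementation below is a rough illustrative approximation.

```python
import numpy as np

def digamma(x):
    """Approximate digamma via the recurrence psi(x) = psi(x+1) - 1/x
    and an asymptotic series; accurate enough for this illustration."""
    x = np.atleast_1d(np.asarray(x, dtype=float)).copy()
    acc = np.zeros_like(x)
    while np.any(x < 6.0):
        small = x < 6.0
        acc[small] -= 1.0 / x[small]
        x[small] += 1.0
    return acc + np.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4)

def bayes_entropy(counts, beta=1.0):
    """Posterior-mean entropy (nats) under a symmetric Dirichlet prior."""
    a = np.asarray(counts, dtype=float) + beta
    A = a.sum()
    return float(digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0)))

# With plenty of data the prior washes out: for uniform counts over
# 4 symbols the estimate approaches log(4) ≈ 1.386 nats.
print(round(bayes_entropy([2500, 2500, 2500, 2500]), 3))
```

With few samples, β acts as a pseudo-count that pulls the estimate away from the strongly biased plug-in value; the sensitivity to β is exactly what the NSB mixture is designed to remove.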
Estimates based on expected entropy
A new approach to the problem of entropy evaluation is to compare the expected entropy of a sample of a random sequence with the calculated entropy of the sample. The method gives very accurate results, but it is limited to calculations of random sequences modeled as Markov chains of the first order with small values of bias and correlations. This is the first known method that takes into account the size of the sample sequence and its impact on the accuracy of the calculation of entropy.
[Marek Lesniewicz (2014) Expected Entropy as a Measure and Criterion of Randomness of Binary Sequences. In ''Przeglad Elektrotechniczny'', Volume 90, 1/2014, pp. 42–46.][Marek Lesniewicz (2016) Analyses and Measurements of Hardware Generated Random Binary Sequences Modeled as Markov Chains. In ''Przeglad Elektrotechniczny'', Volume 92, 11/2016, pp. 268–274.]
Deep neural network estimator
A deep neural network (DNN) can be used to estimate the joint entropy; such an estimator has been called the Neural Joint Entropy Estimator (NJEE).
Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of a random variable Y, given input X. For example, in an image classification task, the NJEE maps a vector of pixel values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator, and it outperforms other methods in the case of large alphabet sizes.
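The chain-rule decomposition underlying this kind of estimator can be illustrated with a deliberately tiny stand-in: the joint entropy H(Y1, Y2) = H(Y1) + H(Y2 | Y1) is estimated from a plug-in marginal term plus the average log-loss of a classifier predicting Y2 from Y1. A single softmax layer replaces the deep network here, and the toy data and training settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Toy data: Y1 uniform on {0,1}, Y2 agrees with Y1 with probability 0.9.
y1 = rng.integers(0, 2, size=n)
y2 = np.where(rng.random(n) < 0.9, y1, 1 - y1)

# H(Y1): plug-in estimate from the marginal (first chain-rule term).
p1 = np.bincount(y1, minlength=2) / n
h_y1 = -np.sum(p1 * np.log(p1))

# H(Y2 | Y1): train a softmax classifier mapping y1 -> distribution over
# y2; its average log-loss upper-bounds the conditional entropy.
X = np.stack([1.0 - y1, y1.astype(float)], axis=1)   # one-hot input
W = np.zeros((2, 2))                                 # weights: input x class

def softmax_probs(X, W):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

onehot = np.eye(2)[y2]
for _ in range(500):                                 # plain gradient descent
    probs = softmax_probs(X, W)
    W -= 1.0 * (X.T @ (probs - onehot) / n)

probs = softmax_probs(X, W)
h_y2_given_y1 = -np.mean(np.log(probs[np.arange(n), y2]))

h_joint = h_y1 + h_y2_given_y1
# True H(Y1,Y2) = log 2 + binary entropy(0.9) ≈ 1.018 nats
print(round(h_joint, 3))
```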