Minimum message length
Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection. It provides a formal information theory restatement of Occam's razor: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise ''explanation'' of the data is more likely to be correct (where the ''explanation'' consists of the statement of the model, followed by the lossless encoding of the data using the stated model). MML was invented by Chris Wallace, first appearing in the seminal paper "An information measure for classification" (Wallace and Boulton, 1968). MML is intended not just as a theoretical construct, but as a technique that may be deployed in practice. It differs from the related concept of Kolmogorov complexity in that it does not require the use of a Turing-complete language to model data.


Definition

Shannon's ''A Mathematical Theory of Communication'' (1948) states that in an optimal code, the message length (in binary) of an event E, \operatorname{length}(E), where E has probability P(E), is given by \operatorname{length}(E) = -\log_2(P(E)). For example, an event with probability 1/8 is optimally encoded in -\log_2(1/8) = 3 bits.
Bayes's theorem states that the probability of a (variable) hypothesis H given fixed evidence E is proportional to P(E \mid H) P(H), which, by the definition of conditional probability, is equal to P(H \land E). We want the model (hypothesis) with the highest such posterior probability. Suppose we encode a message which represents (describes) both model and data jointly. Since \operatorname{length}(H \land E) = -\log_2(P(H \land E)), the most probable model will have the shortest such message. The message breaks into two parts: -\log_2(P(H \land E)) = -\log_2(P(H)) - \log_2(P(E \mid H)). The first part encodes the model itself. The second part contains information (e.g., values of parameters, or initial conditions, etc.) that, when processed by the model, outputs the observed data.

MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (longer first part) but probably fits the data better (shorter second part). So an MML metric won't choose a complicated model unless that model pays for itself. The sketch below makes this trade-off concrete.
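As a minimal illustration (not any particular published MML approximation), the following Python sketch compares a zero-parameter "fair coin" hypothesis against a one-parameter "biased coin" hypothesis using a two-part code. The 5-bit parameter grid and the uniform prior over grid cells are assumptions made purely for this example:

```python
import math

def nll_bits(k, n, p):
    """Second part of the message: -log2 P(data | p) for k heads in n flips."""
    return -(k * math.log2(p) + (n - k) * math.log2(1 - p))

def two_part_length(k, n, precision_bits=5):
    """Two-part length of the 'biased coin' explanation: part 1 states p on a
    uniform grid of 2**precision_bits cells (costing precision_bits bits under
    a uniform prior); part 2 encodes the data using that quantised p."""
    cells = 2 ** precision_bits
    i = min(cells - 1, max(0, round(k / n * cells - 0.5)))
    p_hat = (i + 0.5) / cells          # grid-cell centre nearest the MLE k/n
    return precision_bits + nll_bits(k, n, p_hat)

for k, n in [(28, 50), (40, 50)]:
    fair = nll_bits(k, n, 0.5)         # this hypothesis has no parameters: part 1 is 0 bits
    biased = two_part_length(k, n)
    winner = "fair coin" if fair <= biased else "biased coin"
    print(f"{k}/{n} heads: fair = {fair:.1f} bits, "
          f"biased = {biased:.1f} bits -> {winner}")
```

With 28 heads in 50 flips, the 5 bits needed to state p outweigh the small coding gain, so the fair-coin explanation is shorter; with 40 heads, the biased-coin model pays for itself.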


Continuous-valued parameters

One reason why a model might be longer is simply that its various parameters are stated to greater precision, thus requiring the transmission of more digits. Much of the power of MML derives from its handling of how accurately to state parameters in a model, and from a variety of approximations that make this feasible in practice. This makes it possible to usefully compare, say, a model with many parameters imprecisely stated against a model with fewer parameters more accurately stated.
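One common statement of the Wallace-Freeman (1987) approximation (mentioned under the key features below) makes this concrete. For a model with k continuous parameters \theta, prior density h(\theta), likelihood f(x \mid \theta) and Fisher information matrix F(\theta), the message length in bits is approximately

-\log_2\frac{h(\theta)}{\sqrt{\det F(\theta)}} - \log_2 f(x \mid \theta) + \frac{k}{2}\log_2 \kappa_k + \frac{k}{2}\log_2 e,

where \kappa_k is a quantising-lattice constant (\kappa_1 = 1/12). The \sqrt{\det F(\theta)} factor is what sets the optimal precision: for a single parameter, the stated uncertainty region has width about \sqrt{12/F(\theta)}, so a parameter about which the data carry little information is stated coarsely, and cheaply.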


Key features of MML

* MML can be used to compare models of different structure. For example, its earliest application was in finding mixture models with the optimal number of classes. Adding extra classes to a mixture model will always allow the data to be fitted to greater accuracy, but according to MML this must be weighed against the extra bits required to encode the parameters defining those classes (see the sketch after this list).
* MML is a method of Bayesian model comparison. It gives every model a score.
* MML is scale-invariant and statistically invariant. Unlike many Bayesian selection methods, MML doesn't care if you change from measuring length to volume or from Cartesian co-ordinates to polar co-ordinates.
* MML is statistically consistent. For problems like the Neyman-Scott (1948) problem or factor analysis, where the amount of data per parameter is bounded above, MML can estimate all parameters with statistical consistency.
* MML accounts for the precision of measurement. It uses the Fisher information (in the Wallace-Freeman 1987 approximation, or other hyper-volumes in other approximations) to optimally discretize continuous parameters. As a result, the posterior is always a probability, not a probability density.
* MML has been in use since 1968. MML coding schemes have been developed for several distributions, and for many kinds of machine learners, including unsupervised classification, decision trees and graphs, DNA sequences, Bayesian networks, neural networks (one-layer only so far), image compression, and image and function segmentation.
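To illustrate the first point, here is a deliberately crude Python sketch of a two-part code for a k-class mixture. It is not Wallace's Snob program or the Wallace-Freeman approximation: each continuous parameter is costed at an assumed flat 8 bits, points are hard-assigned to classes by quantiles rather than properly estimated, and class labels are sent with a uniform code. Even so, the message length is minimised at the true number of classes:

```python
import math
import random

random.seed(0)
# Synthetic 1-D data: two well-separated clusters of 100 points each.
data = ([random.gauss(0, 1) for _ in range(100)]
        + [random.gauss(8, 1) for _ in range(100)])

PARAM_BITS = 8  # assumed flat cost, in bits, to state one continuous parameter

def gauss_nll_bits(xs, mu, sigma):
    """-log2 likelihood of xs under N(mu, sigma^2). The constant for the
    data's measurement precision is omitted: it is the same for every
    model, so it cancels when message lengths are compared."""
    return sum((x - mu) ** 2 / (2 * sigma ** 2 * math.log(2))
               + math.log2(sigma * math.sqrt(2 * math.pi)) for x in xs)

def message_length(xs, k):
    """Crude two-part message length: state 2*k parameters (part 1),
    then a class label per datum and the data within each class (part 2)."""
    xs = sorted(xs)
    groups = [xs[i * len(xs) // k:(i + 1) * len(xs) // k] for i in range(k)]
    length = 2.0 * k * PARAM_BITS                     # part 1: means and stds
    for g in groups:
        mu = sum(g) / len(g)
        sigma = max(math.sqrt(sum((x - mu) ** 2 for x in g) / len(g)), 1e-3)
        length += len(g) * math.log2(k)               # class labels (0 if k=1)
        length += gauss_nll_bits(g, mu, sigma)        # data given the class
    return length

for k in (1, 2, 3, 4):
    print(f"k = {k}: message length = {message_length(data, k):.0f} bits")
```

Going from k = 2 to k = 3 shortens the data part a little, but the extra parameter and label bits cost more than they save, so the simpler model wins.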


See also

* Algorithmic probability
* Algorithmic information theory
* Grammar induction
* Inductive inference
* Inductive probability
* Kolmogorov complexity – absolute complexity (within a constant, depending on the particular choice of universal Turing machine); MML is typically a computable approximation
* Minimum description length – an alternative with a possibly different (non-Bayesian) motivation, developed 10 years after MML
* Occam's razor


External links

''Original publication:''
* Wallace and Boulton (1968), "An information measure for classification"

''Books:''
* A book on implementing MML, with source code

''Related links:''
* Links to all of Chris Wallace's known publications
* A searchable database of Chris Wallace's publications
* History of MML, Chris Wallace's last talk
* Shows how Occam's razor works fine when interpreted as MML
* MML, functional programming, and Haskell code
* Comley and Dowe: the first two papers on MML Bayesian nets using both discrete and continuous-valued parameters
* Minimum Message Length (MML): LA's MML introduction (MML alt.)
* MML mixture modelling
* Chris Wallace's entry on MML for MITECS (requires an account)
* Short introductory slides by Mikko Koivisto in Helsinki
* The Akaike information criterion (AIC) method of model selection, and comparison with MML