Minimum Description Length (MDL) is a
model selection
principle where the shortest description of the data is the best model. MDL methods learn through a data compression perspective and are sometimes described as mathematical applications of
Occam's razor
. The MDL principle can be extended to other forms of inductive inference and learning, for example to estimation and sequential prediction, without explicitly identifying a single model of the data. MDL has its origins mostly in
information theory
and has been further developed within the general fields of statistics, theoretical computer science and machine learning, and more narrowly computational learning theory.

Historically, there are different, yet interrelated, usages of the definite noun phrase "''the'' minimum description length ''principle''" that vary in what is meant by ''description'':
* Within Jorma Rissanen's theory of learning, a central concept of information theory, models are statistical hypotheses and descriptions are defined as universal codes.
* Rissanen's 1978 pragmatic first attempt to automatically derive short descriptions relates to the Bayesian Information Criterion (BIC).
* Within Algorithmic Information Theory, the description length of a data sequence is the length of the smallest program that outputs that data set. In this context it is also known as the 'idealized' MDL principle, and it is closely related to Solomonoff's theory of inductive inference, which holds that the best model of a data set is represented by its shortest self-extracting archive.


Overview

Selecting the minimum-length description of the available data as the best model observes the ''principle'' identified as Occam's razor. Prior to the advent of computer programming, generating such descriptions was the intellectual labor of scientific theorists, and it was far less formal than it has become in the computer age. If two scientists had a theoretic disagreement, they rarely could ''formally'' apply Occam's razor to choose between their theories: they would have different data sets and possibly different descriptive languages. Nevertheless, science advanced, with Occam's razor serving as an informal guide in deciding which model was best.

With the advent of formal languages and computer programming, Occam's razor could be mathematically defined. Models of a given set of observations, encoded as bits of data, could be created in the form of computer programs that output that data. Occam's razor could then ''formally'' select the shortest program, measured in bits of this ''algorithmic information'', as the best model.

To avoid confusion, note that nothing in the MDL principle implies that a machine produced the program embodying the model; it can be entirely the product of humans. The MDL principle applies regardless of whether the description to be run on a computer is the product of humans, machines or any combination thereof. The MDL principle requires ''only'' that the shortest description, when executed, produce the original data set without error.
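As a minimal illustration (not from the original text), the following Python sketch treats two hand-written programs as competing descriptions of the same observation string and applies the formalized razor by simply comparing their lengths in characters; the toy data and the program strings are assumptions made for the example.

```python
import io
from contextlib import redirect_stdout

data = "01" * 500  # 1000 observed symbols with an obvious regularity

# Two candidate descriptions (programs) that both reproduce the data exactly.
program_short = 'print("01" * 500, end="")'   # exploits the regularity
program_literal = f'print({data!r}, end="")'  # spells the data out literally

def output_of(program):
    # Run the program and capture what it prints.
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(program)
    return buf.getvalue()

# Both descriptions are faithful: executed, they produce the data without error.
assert output_of(program_short) == data
assert output_of(program_literal) == data

# MDL / formalized Occam's razor: prefer the shorter faithful description.
best = min([program_short, program_literal], key=len)
print(len(program_short), "vs", len(program_literal), "characters; best:", best)
```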


Two-Part codes

The distinction between the program and the literal data in a computer program applies to all formal descriptions and is sometimes referred to as the "''two parts''" of a description. In statistical MDL learning, such a description is frequently called a ''two-part code''.
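As a hedged sketch of the idea (the repeating pattern, the "noise" positions and the character-count accounting are all assumptions made for illustration), a two-part description states a model in its first part and the literal deviations of the data from that model in its second part:

```python
# Part 1: the hypothesis, here a simple repeating pattern.
model = "01" * 500

# The observed data follows the pattern except at two positions.
observed = list(model)
observed[137] = "0"   # model has "1" here
observed[802] = "1"   # model has "0" here
observed = "".join(observed)

# Part 2: the literal exceptions, i.e. where and how the data deviates from the model.
deviations = [(i, c) for i, (c, m) in enumerate(zip(observed, model)) if c != m]

# The data is recovered exactly from (model, deviations), so the description is lossless.
rebuilt = list(model)
for i, c in deviations:
    rebuilt[i] = c
assert "".join(rebuilt) == observed

# Total description length = cost of part 1 + cost of part 2;
# characters are used here as a crude stand-in for a real code length.
print(len('"01" * 500'), "+", len(repr(deviations)), "characters")
```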


MDL in machine learning

MDL applies in machine learning when algorithms (machines) generate descriptions. Learning occurs when an algorithm generates a shorter description of the same data set. The theoretic minimum description length of a data set, called its
Kolmogorov complexity
, cannot, however, be computed. That is to say, even if by random chance an algorithm generates the shortest program of all that outputs the data set, an automated theorem prover cannot prove there is no shorter such program. Nevertheless, given two programs that output the dataset, the MDL principle selects the shorter of the two as embodying the best model.


Recent work on algorithmic MDL learning

Recent machine MDL learning of algorithmic, as opposed to statistical, data models has received increasing attention with the growing availability of data, computational resources and theoretical advances. Approaches are informed by the burgeoning field of
artificial general intelligence
. Shortly before his death,
Marvin Minsky
came out strongly in favor of this line of research.


Statistical MDL learning

Any set of data can be represented by a string of symbols from a finite (say, binary) alphabet.
"[T]he MDL Principle is based on the following insight: any regularity in a given set of data can be used to compress the data, i.e. to describe it using fewer symbols than needed to describe the data literally." (Grünwald, 2004)
Based on this, in 1978, Jorma Rissanen published an MDL learning algorithm using the statistical notion of information rather than algorithmic information. Over the past 40 years this has developed into a rich theory of statistical and machine learning procedures with connections to Bayesian model selection and averaging, penalization methods such as Lasso and Ridge, and so on; Grünwald and Roos (2020) give an introduction including all modern developments. Rissanen started out with this idea: all statistical learning is about finding regularities in data, and the best hypothesis to describe the regularities in data is also the one that is able to ''statistically'' compress the data most. Like other statistical methods, it can be used for learning the parameters of a model using some data. Usually, though, standard statistical methods assume that the general form of a model is fixed. MDL's main strength is that it can also be used for selecting the general form of a model and its parameters. The quantity of interest (sometimes just a model, sometimes just parameters, sometimes both at the same time) is called a hypothesis.

The basic idea is then to consider the (lossless) ''two-stage code'' that encodes data D with total length L(D) by first encoding a hypothesis H in the set of considered hypotheses \mathcal{H} and then coding D "with the help of" H; in the simplest context this just means "encoding the deviations of the data from the predictions made by H":

L(D) = \min_{H \in \mathcal{H}} \bigl( L(H) + L(D \mid H) \bigr)

The H achieving this minimum is then viewed as the best explanation of data D. As a simple example, take a regression problem: the data D could consist of a sequence of points D = (x_1,y_1), \ldots, (x_n,y_n), and the set \mathcal{H} could be the set of all polynomials from X to Y. To describe a polynomial H of degree (say) k, one would first have to discretize the parameters to some precision; one would then have to describe this precision (a natural number); next, one would have to describe the degree k (another natural number), and in the final step, one would have to describe the k+1 parameters; the total length would be L(H). One would then describe the points in D using some fixed code for the x-values and then using a code for the n deviations y_i - H(x_i).

In practice, one often (but not always) uses a ''probabilistic model''. For example, one associates each polynomial H with the corresponding conditional distribution expressing that, given X, Y is normally distributed with mean H(X) and some variance \sigma^2, which could either be fixed or added as a free parameter. Then the set of hypotheses reduces to the assumption of a linear model, Y = H(X) + \epsilon, with H a polynomial.

Furthermore, one is often not directly interested in specific parameter values, but just, for example, the ''degree'' of the polynomial. In that case, one sets \mathcal{H} to be \mathcal{H} = \{\mathcal{H}_0, \mathcal{H}_1, \ldots\}, where each \mathcal{H}_j represents the hypothesis that the data is best described as a j-th degree polynomial. One then codes data D given hypothesis \mathcal{H}_j using a ''one-part code'' designed such that, whenever ''some'' hypothesis H \in \mathcal{H}_j fits the data well, the codelength L(D \mid \mathcal{H}_j) is short. The design of such codes is called universal coding. There are various types of universal codes one could use, often giving similar lengths for long data sequences but differing for short ones. The 'best' (in the sense that it has a minimax optimality property) are the ''normalized maximum likelihood'' (NML) or ''Shtarkov'' codes.
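To make the two-stage codelength L(H) + L(D \mid H) concrete, the following Python sketch scores polynomial degrees on synthetic data. The choices made here are simplified assumptions, not the article's prescription: each of the k+1 real parameters is charged (1/2) log2 n bits (encoding to precision about 1/sqrt(n)), and the deviations are coded with a Gaussian model at the maximum-likelihood variance.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=n)  # true degree: 2

def two_part_codelength(x, y, degree):
    # Fit the polynomial hypothesis of the given degree.
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = residuals.var()
    # L(H): (degree + 1) real parameters, each ~ (1/2) log2 n bits (an assumption).
    l_model = 0.5 * (degree + 1) * math.log2(len(y))
    # L(D | H): negative log2-likelihood of the residuals under a Gaussian model.
    # For continuous data this is defined only up to a fixed discretization
    # constant, which is identical for every degree and so cancels in comparisons.
    l_data = 0.5 * len(y) * math.log2(2 * math.pi * math.e * sigma2)
    return l_model + l_data

for k in range(6):
    print(f"degree {k}: {two_part_codelength(x, y, k):8.1f} bits")
# The degree minimizing the total codelength is selected; with this synthetic
# data it should come out at (or very near) the true degree 2.
```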
A quite useful class of codes is that of the ''Bayesian marginal likelihood codes''. For exponential families of distributions, when Jeffreys' prior is used and the parameter space is suitably restricted, these asymptotically coincide with the NML codes; this brings MDL theory in close contact with objective Bayes model selection, in which one also sometimes adopts Jeffreys' prior, albeit for different reasons. The MDL approach to model selection "gives a selection criterion formally identical to the BIC approach" for a large number of samples.
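As a sketch of why these criteria agree asymptotically (stated informally here, for a k-parameter model class satisfying the usual regularity conditions; this expansion is not spelled out in the original text), the NML codelength admits the expansion

-\log \mathrm{NML}(D) = -\log P(D \mid \hat\theta(D)) + \frac{k}{2}\log\frac{n}{2\pi} + \log \int \sqrt{\det I(\theta)}\, d\theta + o(1),

whose first two terms reproduce, up to a factor of two and additive constants, the BIC criterion; the same leading terms appear for Bayesian marginal likelihood codes under Jeffreys' prior, which is the sense in which the codes and the criteria coincide for large samples.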


Example of Statistical MDL Learning

A coin is flipped 1000 times, and the numbers of heads and tails are recorded. Consider two model classes:
* The first is a code that represents outcomes with a 0 for heads or a 1 for tails. This code represents the hypothesis that the coin is fair. The code length according to this code is always exactly 1000 bits.
* The second consists of all codes that are efficient for a coin with some specific bias, representing the hypothesis that the coin is not fair. Say that we observe 510 heads and 490 tails. Then the code length according to the best code in the second model class is shorter than 1000 bits.

For this reason, a naive statistical method might choose the second model as a better explanation for the data. However, an MDL approach would construct a single code based on the hypothesis, instead of just using the best one. This code could be the normalized maximum likelihood code or a Bayesian code. If such a code is used, then the total codelength based on the second model class would be larger than 1000 bits. Therefore, the conclusion when following an MDL approach is inevitably that there is not enough evidence to support the hypothesis of the biased coin, even though the best element of the second model class provides a better fit to the data.
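A numerical check of this example can be done with the sketch below (an illustration, not part of the original text; the function names are my own). For the Bernoulli model class, the NML codelength is the maximum-likelihood codelength plus the class's parametric complexity, which can be computed exactly by summing over all possible head counts.

```python
import math

def log2_binom(n, k):
    # log2 of the binomial coefficient "n choose k", via log-gamma for stability.
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)) / math.log(2)

def nml_complexity_bernoulli(n):
    # COMP(n) = log2 sum_k C(n, k) * (k/n)^k * ((n-k)/n)^(n-k), computed in log-space.
    log_terms = []
    for k in range(n + 1):
        t = log2_binom(n, k)
        if 0 < k < n:
            t += k * math.log2(k / n) + (n - k) * math.log2((n - k) / n)
        log_terms.append(t)
    m = max(log_terms)
    return m + math.log2(sum(2 ** (t - m) for t in log_terms))

n, heads = 1000, 510
p_hat = heads / n

fair_bits = n  # the fair-coin code always costs exactly 1000 bits
# Best code in the biased class, chosen with hindsight (maximum likelihood).
ml_bits = -(heads * math.log2(p_hat) + (n - heads) * math.log2(1 - p_hat))
# NML codelength for the whole biased class: ML codelength plus parametric complexity.
nml_bits = ml_bits + nml_complexity_bernoulli(n)

print(f"fair coin                   : {fair_bits} bits")
print(f"best biased code (hindsight): {ml_bits:.1f} bits")   # ~999.7 bits, shorter than 1000
print(f"NML code for biased class   : {nml_bits:.1f} bits")  # ~1005 bits, longer than 1000
```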


Statistical MDL Notation

Central to MDL theory is the
one-to-one correspondence
between code length functions and
probability distributions (this follows from the Kraft–McMillan inequality). For any probability distribution P, it is possible to construct a code C such that the length (in bits) of C(x) is equal to -\log_2 P(x); this code minimizes the expected code length. Conversely, given a code C, one can construct a probability distribution P such that the same holds. (Rounding issues are ignored here.) In other words, searching for an efficient code is equivalent to searching for a good probability distribution.
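A minimal sketch of this correspondence (the distribution below is an assumption chosen so that the lengths come out as whole bits):

```python
import math

# An idealized optimal code assigns each symbol x a codeword of length -log2 P(x) bits;
# rounding to whole bits is ignored, as in the text above.
P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
codelengths = {x: -math.log2(p) for x, p in P.items()}
print(codelengths)  # {'a': 1.0, 'b': 2.0, 'c': 3.0, 'd': 3.0}

# Kraft inequality: the codeword lengths are realizable because sum 2^(-length) <= 1.
assert abs(sum(2 ** -l for l in codelengths.values()) - 1.0) < 1e-12
```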


Limitations of Statistical MDL Learning

The description language of statistical MDL is not computationally universal. Therefore, it cannot, even in principle, learn models of recursive natural processes.


Related concepts

Statistical MDL learning is very strongly connected to
probability theory
and
statistics
through the correspondence between codes and probability distributions mentioned above. This has led some researchers to view MDL as equivalent to
Bayesian inference
: the code length of the model and the code length of the data given the model class correspond respectively to the prior probability and the marginal likelihood in the Bayesian framework. While Bayesian machinery is often useful in constructing efficient MDL codes, the MDL framework also accommodates other codes that are not Bayesian. An example is the Shtarkov ''normalized maximum likelihood code'', which plays a central role in current MDL theory, but has no equivalent in Bayesian inference. Furthermore, Rissanen stresses that we should make no assumptions about the ''true'' data-generating process: in practice, a model class is typically a simplification of reality and thus does not contain any code or probability distribution that is true in any objective sense. In the last-mentioned reference, Rissanen bases the mathematical underpinning of MDL on the Kolmogorov structure function. According to the MDL philosophy, Bayesian methods should be dismissed if they are based on unsafe
priors
that would lead to poor results. The priors that are acceptable from an MDL point of view also tend to be favored in so-called objective Bayesian analysis; there, however, the motivation is usually different.


Other systems

Rissanen's was not the first information-theoretic approach to learning; as early as 1968 Wallace and Boulton pioneered a related concept called minimum message length (MML). The difference between MDL and MML is a source of ongoing confusion. Superficially, the methods appear mostly equivalent, but there are some significant differences, especially in interpretation:
* MML is a fully subjective Bayesian approach: it starts from the idea that one represents one's beliefs about the data-generating process in the form of a prior distribution. MDL avoids assumptions about the data-generating process.
* Both methods make use of ''two-part codes'': the first part always represents the information that one is trying to learn, such as the index of a model class (model selection) or parameter values (parameter estimation); the second part is an encoding of the data given the information in the first part. The difference between the methods is that, in the MDL literature, it is advocated that unwanted parameters should be moved to the second part of the code, where they can be represented with the data by using a so-called one-part code, which is often more efficient than a ''two-part code''. In the original description of MML, all parameters are encoded in the first part, so all parameters are learned.
* Within the MML framework, each parameter is stated to exactly the precision that results in the optimal overall message length: the preceding example might arise if some parameter was originally considered "possibly useful" to a model but was subsequently found to be unable to help explain the data (such a parameter will be assigned a code length corresponding to the (Bayesian) prior probability that the parameter would be found to be unhelpful). In the MDL framework, the focus is more on comparing model classes than models, and it is more natural to approach the same question by comparing the class of models that explicitly includes such a parameter against some other class that does not. The difference lies in the machinery applied to reach the same conclusion.


See also

* Algorithmic probability
* Algorithmic information theory
* Inductive inference
* Inductive probability
* Lempel–Ziv complexity


References


Further reading


* Minimum Description Length on the Web, by the University of Helsinki. Features readings, demonstrations, events and links to MDL researchers.
* Homepage of Jorma Rissanen, containing lecture notes and other recent material on MDL.
* ''Advances in Minimum Description Length'', MIT Press.