Bayesian experimental design

Bayesian experimental design provides a general probability-theoretic framework from which other theories of experimental design can be derived. It is based on Bayesian inference to interpret the observations or data acquired during the experiment. This allows accounting both for any prior knowledge of the parameters to be determined and for uncertainties in observations. The theory of Bayesian experimental design is to a certain extent based on the theory for making optimal decisions under uncertainty. The aim when designing an experiment is to maximize the expected utility of the experiment outcome. The utility is most commonly defined in terms of a measure of the accuracy of the information provided by the experiment (e.g., the Shannon information or the negative of the variance), but may also involve factors such as the financial cost of performing the experiment. The optimal experiment design depends on the particular utility criterion chosen.


Relations to more specialized optimal design theory


Linear theory

If the model is linear, the prior probability density function (PDF) is homogeneous and observational errors are normally distributed, the theory simplifies to the classical optimal experimental design theory.
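
As a minimal numerical illustration of this reduction, consider a linear model with Gaussian noise and a Gaussian prior: the posterior covariance then depends on the design matrix but not on the observed data, so maximizing expected utility collapses to a classical information-matrix criterion such as D-optimality. The sketch below is hypothetical; the design matrices and the variances sigma2 and tau2 are illustrative choices, not from the source.

```python
import numpy as np

# Linear model y = X @ theta + noise with noise ~ N(0, sigma2 * I) and a
# Gaussian prior theta ~ N(0, tau2 * I).  In this conjugate setting the
# posterior covariance depends only on the design matrix X, never on the
# observed data, so the Bayesian problem reduces to a classical
# information-matrix criterion.

def posterior_covariance(X, sigma2=1.0, tau2=10.0):
    p = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(p) / tau2   # posterior precision
    return np.linalg.inv(precision)

def bayes_d_criterion(X, sigma2=1.0, tau2=10.0):
    # Bayesian D-optimality: minimize log det of the posterior covariance.
    sign, logdet = np.linalg.slogdet(posterior_covariance(X, sigma2, tau2))
    return -logdet                                    # larger is better

# Two candidate three-run designs (rows are observation settings).
X_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_b = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
print(bayes_d_criterion(X_a), bayes_d_criterion(X_b))  # X_a wins
```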


Approximate normality

In numerous publications on Bayesian experimental design, it is (often implicitly) assumed that all posterior PDFs will be approximately normal. This allows the expected utility to be calculated using linear theory, averaging over the space of model parameters, an approach reviewed in Chaloner &amp; Verdinelli (1995). Caution must, however, be taken when applying this method, since approximate normality of all possible posteriors is difficult to verify, even in cases of normal observational errors and a uniform prior PDF.
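
A sketch of how this assumption is typically exploited: each simulated posterior is replaced by a normal (Laplace) approximation around the MAP estimate, and the design is scored by the average log posterior precision over outcomes drawn from the prior predictive. The model f, the noise level and all function names below are illustrative assumptions, not taken from any specific publication.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Laplace-approximation utility: each simulated posterior is replaced by a
# normal centred at the MAP estimate, and the design is scored by the
# average log posterior precision (curvature of the negative log posterior).

def f(theta, xi):
    return np.exp(-theta * xi)            # illustrative exponential-decay model

def neg_log_post(theta, y, xi, sigma2=0.05**2, prior_var=1.0):
    return (y - f(theta, xi))**2 / (2*sigma2) + theta**2 / (2*prior_var)

def laplace_utility(xi, n_sim=200, sigma2=0.05**2):
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_sim):
        theta = rng.normal()                           # theta ~ N(0, 1) prior
        y = f(theta, xi) + rng.normal(0.0, np.sqrt(sigma2))
        res = minimize_scalar(neg_log_post, args=(y, xi),
                              bounds=(-5.0, 5.0), method="bounded")
        t, h = res.x, 1e-4                             # MAP and finite-diff step
        hess = (neg_log_post(t + h, y, xi) - 2*neg_log_post(t, y, xi)
                + neg_log_post(t - h, y, xi)) / h**2   # numerical curvature
        total += 0.5 * np.log(max(hess, 1e-12))        # log posterior precision
    return total / n_sim

for xi in (0.1, 1.0, 3.0):
    print(xi, laplace_utility(xi))                     # larger is better
```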


Posterior distribution

In many cases, the posterior distribution is not available in closed form and has to be approximated using numerical methods. The most common approach is to use Monte Carlo methods to generate samples from the posterior, which can then be used to approximate the expected utility. Another approach is to use a variational Bayes approximation of the posterior, which can often be calculated in closed form. This approach has the advantage of being computationally more efficient than Monte Carlo methods, but the disadvantage that the approximation might not be very accurate. Some authors have proposed approaches that use the posterior predictive distribution to assess the effect of new measurements on prediction uncertainty, while others suggest maximizing the mutual information between parameters, predictions and potential new experiments.
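
As a minimal sketch of the Monte Carlo route, the random-walk Metropolis sampler below draws samples from a posterior known only up to its normalizing constant; the samples can then feed any sample-based utility, such as the negative posterior variance. The one-parameter model and tuning constants are illustrative assumptions.

```python
import numpy as np

# Random-walk Metropolis sketch: draws samples from a posterior
# p(theta | y, xi) known only up to the normalizing constant p(y | xi).

def log_unnorm_post(theta, y, xi, sigma2=0.05**2):
    log_prior = -0.5 * theta**2                        # theta ~ N(0, 1)
    log_lik = -(y - np.exp(-theta * xi))**2 / (2*sigma2)
    return log_prior + log_lik

def metropolis(y, xi, n=5000, step=0.3):
    rng = np.random.default_rng(1)
    theta, lp = 0.0, log_unnorm_post(0.0, y, xi)
    samples = []
    for _ in range(n):
        prop = theta + step * rng.normal()             # propose a move
        lp_prop = log_unnorm_post(prop, y, xi)
        if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[n // 5:])                  # discard burn-in

samples = metropolis(y=0.6, xi=1.0)
# Sample-based utility, e.g. the negative posterior variance:
print("U(y, xi) ~ %.4f" % -samples.var())
```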


Mathematical formulation

Given a vector \theta of parameters to determine, a prior PDF p(\theta) over those parameters and a PDF p(y\mid\theta,\xi) for making observation y, given parameter values \theta and an experiment design \xi, the posterior PDF can be calculated using Bayes' theorem

p(\theta \mid y, \xi) = \frac{p(\theta)\,p(y \mid \theta, \xi)}{p(y \mid \xi)} \, ,

where p(y\mid\xi) is the marginal probability density in observation space

p(y\mid\xi) = \int p(\theta)\,p(y\mid\theta,\xi)\,d\theta \, .

The expected utility of an experiment with design \xi can then be defined as

U(\xi) = \int p(y\mid\xi)\,U(y,\xi)\,dy \, ,

where U(y,\xi) is some real-valued functional of the posterior PDF p(\theta \mid y, \xi) after making observation y using an experiment design \xi.
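
The outer integral over y can be approximated by simulation: drawing \theta from the prior and then y from p(y\mid\theta,\xi) yields prior-predictive outcomes whose utilities are averaged. The sketch below does this for a hypothetical one-parameter model, computing each posterior on a grid via Bayes' theorem and using the negative posterior variance as U(y,\xi); all model choices are illustrative.

```python
import numpy as np

# Monte Carlo sketch of U(xi) = integral of p(y|xi) U(y,xi) dy: draw theta
# from the prior, y from p(y|theta,xi), and average the outcome utilities.

rng = np.random.default_rng(0)
grid = np.linspace(-4.0, 4.0, 801)                  # discretized parameter space
prior = np.exp(-0.5 * grid**2)
prior /= prior.sum()                                # standard normal prior

def likelihood(y, theta, xi, sigma2=0.05**2):
    return np.exp(-(y - np.exp(-theta * xi))**2 / (2 * sigma2))

def utility_of_outcome(y, xi):
    post = prior * likelihood(y, grid, xi)          # Bayes' theorem, unnormalized
    post /= post.sum()
    mean = (grid * post).sum()
    return -((grid - mean)**2 * post).sum()         # U(y,xi) = -posterior variance

def expected_utility(xi, n=300):
    thetas = rng.normal(size=n)                     # theta ~ prior
    ys = np.exp(-thetas * xi) + rng.normal(0.0, 0.05, size=n)  # y ~ p(y|theta,xi)
    return np.mean([utility_of_outcome(y, xi) for y in ys])

for xi in (0.1, 1.0, 3.0):
    print(xi, expected_utility(xi))
```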


Gain in Shannon information as utility

Utility may be defined as the prior-posterior gain in Shannon information

U(y, \xi) = \int \log(p(\theta \mid y, \xi))\,p(\theta \mid y, \xi)\,d\theta - \int \log(p(\theta))\,p(\theta)\,d\theta \, .

Another possibility is to define the utility as

U(y, \xi) = D_{\mathrm{KL}}(p(\theta \mid y, \xi) \,\|\, p(\theta)) \, ,

the Kullback–Leibler divergence of the prior from the posterior distribution. Lindley (1956) noted that the expected utility will then be coordinate-independent and can be written in two forms:

\begin{align} U(\xi) & = \int \int \log(p(\theta \mid y,\xi))\,p(\theta, y \mid \xi)\,d\theta\,dy - \int \log(p(\theta))\,p(\theta)\,d\theta \\ & = \int \int \log(p(y \mid \theta,\xi))\,p(\theta, y \mid \xi)\,dy\,d\theta - \int \log(p(y \mid \xi))\,p(y \mid \xi)\,dy, \end{align}

of which the latter can be evaluated without the need to evaluate individual posterior PDFs p(\theta \mid y,\xi) for all possible observations y. It is worth noting that the first term on the second equation line will not depend on the design \xi, as long as the observational uncertainty doesn't. On the other hand, the integral of p(\theta) \log p(\theta) in the first form is constant for all \xi, so if the goal is to choose the design with the highest utility, the term need not be computed at all. Several authors have considered numerical techniques for evaluating and optimizing this criterion. Note that

U(\xi) = I(\theta;y) \, ,

the expected information gain being exactly the mutual information between the parameter \theta and the observation y (a Monte Carlo sketch of this criterion is given below). An example of Bayesian design for linear dynamical model discrimination has been given in the literature: since I(\theta;y) was difficult to calculate, a lower bound was used as the utility function and maximized under a signal energy constraint. The resulting Bayesian design was also compared with a classical average D-optimal design and shown to be superior. The Kelly criterion also describes such a utility function for a gambler seeking to maximize profit, and is used in gambling and information theory; Kelly's situation is identical to the foregoing, with the side information, or "private wire", taking the place of the experiment.
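
A common way to estimate the mutual-information criterion is nested Monte Carlo based on the second (likelihood-based) form above, which avoids computing any individual posterior: for each simulated pair (\theta_i, y_i), the marginal p(y_i \mid \xi) is itself estimated by an inner average over prior draws. The sketch below uses a hypothetical exponential-decay model; note that the estimator is slightly biased for finite inner sample sizes.

```python
import numpy as np

# Nested Monte Carlo estimate of U(xi) = I(theta; y), using the second
# (likelihood-based) form: no individual posterior is ever computed.

SIGMA = 0.05
rng = np.random.default_rng(0)

def log_lik(y, theta, xi):
    # log p(y | theta, xi) for y = exp(-theta * xi) + N(0, SIGMA^2) noise
    return (-0.5 * ((y - np.exp(-theta * xi)) / SIGMA)**2
            - np.log(SIGMA * np.sqrt(2.0 * np.pi)))

def expected_information_gain(xi, n_outer=500, n_inner=500):
    inner = rng.normal(size=n_inner)          # theta_j ~ prior, reused each round
    eig = 0.0
    for _ in range(n_outer):
        theta = rng.normal()                  # theta_i ~ prior
        y = np.exp(-theta * xi) + rng.normal(0.0, SIGMA)
        # log p(y | theta_i, xi) minus a Monte Carlo estimate of log p(y | xi)
        log_marg = np.logaddexp.reduce(log_lik(y, inner, xi)) - np.log(n_inner)
        eig += log_lik(y, theta, xi) - log_marg
    return eig / n_outer

for xi in (0.1, 1.0, 3.0):
    print(xi, expected_information_gain(xi))  # choose the xi with largest EIG
```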


See also

* Bayesian optimization
* Optimal design
* Active learning


References

* Chaloner, K.; Verdinelli, I. (1995). "Bayesian experimental design: a review". Statistical Science. 10 (3): 273–304.
* Lindley, D. V. (1956). "On a measure of the information provided by an experiment". Annals of Mathematical Statistics. 27 (4): 986–1005.