A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (''e.g.'' speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.


Graphical model

Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected (no path connects one node to another) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if m parent nodes represent m Boolean variables, then the probability function could be represented by a table of 2^m entries, one entry for each of the 2^m possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
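To make the tabular representation concrete, the following sketch stores such a probability function as a plain dictionary keyed by parent assignments (a toy encoding in Python, not any particular library's API; the numbers are those of the sprinkler example in the next section):

```python
# A CPT for a node with m = 2 Boolean parents stored as a table of
# 2^m = 4 entries, keyed by assignments to the parents (S, R).
cpt_grass_wet = {          # Pr(G = T | S, R)
    (True, True): 0.99,
    (True, False): 0.9,
    (False, True): 0.8,
    (False, False): 0.0,
}
# Pr(G = F | S, R) is implicit: 1 minus the stored entry.
```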


Example

Two events can cause grass to be wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false). The joint probability function is, by the
chain rule of probability, : \Pr(G,S,R)=\Pr(G\mid S,R) \Pr(S\mid R)\Pr(R) where ''G'' = "Grass wet (true/false)", ''S'' = "Sprinkler turned on (true/false)", and ''R'' = "Raining (true/false)". The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using the
conditional probability
formula and summing over all
nuisance variables
: : \Pr(R=T\mid G=T) = \frac{\Pr(G=T, R=T)}{\Pr(G=T)} = \frac{\sum_{x \in \{T, F\}}\Pr(G=T, S=x, R=T)}{\sum_{x, y \in \{T, F\}}\Pr(G=T, S=x, R=y)} Using the expansion for the joint probability function \Pr(G,S,R) and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example, : \begin{align} \Pr(G=T, S=T, R=T) & = \Pr(G=T\mid S=T, R=T)\Pr(S=T\mid R=T)\Pr(R=T) \\ & = 0.99 \times 0.01 \times 0.2 \\ & = 0.00198. \end{align} Then the numerical results (subscripted by the associated variable values) are : \Pr(R=T\mid G=T) = \frac{0.00198_{TTT} + 0.1584_{TFT}}{0.00198_{TTT} + 0.288_{TTF} + 0.1584_{TFT} + 0.0_{TFF}} = \frac{891}{2491} \approx 35.77\%. To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function : \Pr(S,R\mid\text{do}(G=T)) = \Pr(S\mid R) \Pr(R) obtained by removing the factor \Pr(G\mid S,R) from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action: : \Pr(R\mid\text{do}(G=T)) = \Pr(R). To predict the impact of turning the sprinkler on: : \Pr(R,G\mid\text{do}(S=T)) = \Pr(R)\Pr(G\mid R,S=T) with the term \Pr(S=T\mid R) removed, showing that the action affects the grass but not the rain. These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action \text{do}(x) can still be predicted, however, whenever the back-door criterion is satisfied. It states that, if a set ''Z'' of nodes can be observed that ''d''-separates (or blocks) all back-door paths from ''X'' to ''Y'', then : \Pr(Y,Z\mid\text{do}(x)) = \frac{\Pr(Y,Z,X=x)}{\Pr(X=x\mid Z)}. A back-door path is one that ends with an arrow into ''X''. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set ''Z'' = ''R'' is admissible for predicting the effect of ''S'' = ''T'' on ''G'', because ''R'' ''d''-separates the (only) back-door path ''S'' ← ''R'' → ''G''. However, if ''S'' is not observed, no other set ''d''-separates this path and the effect of turning the sprinkler on (''S'' = ''T'') on the grass (''G'') cannot be predicted from passive observations. In that case ''P''(''G'' | do(''S'' = ''T'')) is not "identified". This reflects the fact that, lacking interventional data, one cannot determine whether the observed dependence between ''S'' and ''G'' is due to a causal connection or is spurious (apparent dependence arising from a common cause, ''R''). (see
Simpson's paradox).
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "''do''-calculus" and test whether all ''do'' terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data. Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for 2^{10} = 1024 values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most 10\cdot2^3 = 80 values. One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
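For readers who want to check the arithmetic, the following sketch reproduces the 35.77% figure by brute-force enumeration (the CPT values used are the standard ones for this example; the diagram they come from is not reproduced in this text):

```python
# Brute-force inference for Pr(R=T | G=T) in the sprinkler network.
P_R = {True: 0.2, False: 0.8}                       # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}              # Pr(S = T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}   # Pr(G = T | S, R)

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) via the chain-rule factorization."""
    pg = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    return pg * ps * P_R[r]

# Sum the nuisance variable S out of numerator and denominator.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s in (True, False) for r in (True, False))
print(num / den)   # 0.3577... ~= 35.77%
```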


Inference and learning

Bayesian networks perform three main inference tasks:


Inferring unobserved variables

Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the ''evidence'' variables) are observed. This process of computing the ''posterior'' distribution of variables given evidence is called probabilistic inference. The posterior gives a universal
sufficient statistic
for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying
Bayes' theorem
to complex problems. The most common exact inference methods are:
variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a
space–time tradeoff
and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's
treewidth. The most common
approximate inference
algorithms are
importance sampling, stochastic MCMC simulation, mini-bucket elimination, loopy belief propagation,
generalized belief propagation
and
variational methods.
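As a minimal illustration of variable elimination, the sketch below computes \Pr(G=T) for the sprinkler example by summing out S before R, so the full joint table is never built (toy hand-rolled code, not a library call):

```python
# Variable elimination by hand: eliminate S first, producing an
# intermediate factor tau over R, then eliminate R.
P_R = {True: 0.2, False: 0.8}                       # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}              # Pr(S = T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}   # Pr(G = T | S, R)

def tau(r):
    """Factor left after eliminating S: sum_S Pr(S|R) Pr(G=T|S,R)."""
    return sum((P_S_given_R[r] if s else 1 - P_S_given_R[r]) * P_G_given_SR[(s, r)]
               for s in (True, False))

# Pr(G=T) = sum_R Pr(R) tau(R); R is eliminated last.
print(sum(P_R[r] * tau(r) for r in (True, False)))  # ~0.4484
```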


Parameter learning

In order to fully specify the Bayesian network and thus fully represent the
joint probability distribution, it is necessary to specify for each node ''X'' the probability distribution for ''X'' conditional upon the parents of ''X''. The distribution of ''X'' conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions since that simplifies calculations. Sometimes only constraints on the distribution are known; one can then use the
principle of maximum entropy
to determine a single distribution, the one with the greatest
entropy
given the constraints. (Analogously, in the specific context of a
dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the
entropy rate
of the implied stochastic process.) Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via the
maximum likelihood
approach. Direct maximization of the likelihood (or of the
posterior probability) is often complex given unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges on maximum likelihood (or maximum posterior) values for parameters. A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, making classical parameter-setting approaches more tractable.
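As a minimal sketch of the maximum likelihood approach with complete data, CPT entries reduce to normalized counts; the data and variable names below are hypothetical:

```python
# Maximum-likelihood estimation of a single CPT from fully observed data:
# Pr(S=T | R=r) is the count of rows with (R=r, S=T) over the count of
# rows with R=r.
from collections import Counter

data = [  # hypothetical observed (rain, sprinkler) pairs
    (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, False),
]

pair_counts = Counter(data)
parent_counts = Counter(r for r, _ in data)

cpt = {r: pair_counts[(r, True)] / parent_counts[r] for r in parent_counts}
print(cpt)   # Pr(S=T | R): {True: 0.333..., False: 0.666...}
```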


Structure learning

In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data. Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within
machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and
Pearl
and rests on the distinction between the three possible patterns allowed in a 3-node DAG: a chain (X → Y → Z), a fork (X ← Y → Z), and a collider (X → Y ← Z). The first two represent the same dependencies (X and Z are independent given Y) and are, therefore, indistinguishable. The collider, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the ''skeletons'' (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independences observed. An alternative method of structural learning uses optimization-based search. It requires a
scoring function
and a search strategy. A common scoring function is
posterior probability
of the structure given the training data, like the BIC or the BDeu. The time requirement of an
exhaustive search
returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like
Markov chain Monte Carlo
can avoid getting trapped in
local minima. Friedman et al. discuss using
mutual information
between variables and finding a structure that maximizes this. They do this by restricting the parent candidate set to ''k'' nodes and exhaustively searching therein. A particularly fast method for exact BN learning is to cast the problem as an optimization problem, and solve it using
integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such methods can handle problems with up to 100 variables. In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in the literature when the number of variables is huge. Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables. Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, because treewidth is a global property of the graph, bounding it considerably increases the difficulty of the learning process. In this context it is possible to use
''k''-trees
for effective learning.
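To make the score-and-search idea concrete, here is a toy greedy hill-climber over edge additions with a BIC-style score for binary variables (all names and data are hypothetical; a practical implementation would also consider edge deletions and reversals, cache scores, and use random restarts):

```python
# Toy score-based structure learning: greedy hill-climbing over edge
# additions. Data rows are dicts mapping variable name -> 0/1; a structure
# maps each variable to a tuple of its parents.
import math
from itertools import product

def bic(data, structure):
    """Log-likelihood at the MLE minus 0.5*log(N) per free parameter."""
    n, score = len(data), 0.0
    for v, parents in structure.items():
        for cfg in product((0, 1), repeat=len(parents)):
            rows = [r for r in data if tuple(r[p] for p in parents) == cfg]
            if not rows:
                continue
            n1 = sum(r[v] for r in rows)
            for k in (n1, len(rows) - n1):
                if k:
                    score += k * math.log(k / len(rows))
        score -= 0.5 * math.log(n) * 2 ** len(parents)  # parameter penalty
    return score

def has_cycle(structure):
    visiting, done = set(), set()
    def dfs(v):
        if v in visiting:
            return True
        if v in done:
            return False
        visiting.add(v)
        cyclic = any(dfs(p) for p in structure[v])
        visiting.discard(v)
        done.add(v)
        return cyclic
    return any(dfs(v) for v in structure)

def hill_climb(data, variables):
    structure = {v: () for v in variables}   # start from the empty graph
    improved = True
    while improved:
        improved, best = False, bic(data, structure)
        for v, p in product(variables, variables):
            if p == v or p in structure[v]:
                continue
            candidate = {**structure, v: structure[v] + (p,)}
            if has_cycle(candidate):
                continue
            s = bic(data, candidate)
            if s > best:
                structure, best, improved = candidate, s, True
    return structure

# Perfectly anti-correlated toy data: one edge between R and S is learned.
data = [{"R": 1, "S": 0}, {"R": 1, "S": 0}, {"R": 0, "S": 1}] * 20
print(hill_climb(data, ["R", "S"]))   # e.g. {'R': ('S',), 'S': ()}
```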


Statistical introduction

Given data x and parameter \theta, a simple
Bayesian analysis
starts with a
prior probability
(''prior'') p(\theta) and
likelihood
p(x\mid\theta) to compute a
posterior probability
p(\theta\mid x) \propto p(x\mid\theta)p(\theta). Often the prior on \theta depends in turn on other parameters \varphi that are not mentioned in the likelihood. So, the prior p(\theta) must be replaced by a likelihood p(\theta\mid \varphi), and a prior p(\varphi) on the newly introduced parameters \varphi is required, resulting in a posterior probability : p(\theta,\varphi\mid x) \propto p(x\mid\theta)p(\theta\mid\varphi)p(\varphi). This is the simplest example of a ''hierarchical Bayes model''. The process may be repeated; for example, the parameters \varphi may depend in turn on additional parameters \psi, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.


Introductory examples

Given the measured quantities x_1,\dots,x_n, each with normally distributed errors of known
standard deviation
\sigma, : x_i \sim N(\theta_i, \sigma^2) Suppose we are interested in estimating the \theta_i. An approach would be to estimate the \theta_i using a
maximum likelihood In statistics, maximum likelihood estimation (MLE) is a method of estimation theory, estimating the Statistical parameter, parameters of an assumed probability distribution, given some observed data. This is achieved by Mathematical optimization, ...
approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply : \theta_i = x_i. However, if the quantities are related, so that for example the individual \theta_i have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g., : x_i \sim N(\theta_i,\sigma^2), : \theta_i\sim N(\varphi, \tau^2), with
improper priors
\varphi\sim\text{flat}, \tau\sim\text{flat} \in (0,\infty). When n\ge 3, this is an ''identified model'' (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual \theta_i will tend to move, or ''shrink'', away from the maximum likelihood estimates towards their common mean. This ''shrinkage'' is a typical behavior in hierarchical Bayes models.
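The shrinkage effect can be seen numerically. In the sketch below, \sigma, \tau and \varphi are fixed at illustrative values (rather than given priors and integrated out, as a full hierarchical treatment would do), so each posterior mean is a conjugate precision-weighted average:

```python
# Shrinkage in the two-level normal model. With sigma, tau and phi fixed,
# the normal-normal update is conjugate and each posterior mean is a
# precision-weighted average of x_i and the group mean phi.
sigma, tau, phi = 1.0, 0.5, 0.0   # assumed values, not from the text
xs = [2.3, -1.7, 0.4, 3.1]        # hypothetical measurements

w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)   # weight on the data
post_means = [w * x + (1 - w) * phi for x in xs]
print(post_means)   # each MLE x_i is pulled toward the common mean phi
```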


Restrictions on priors

Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable \tau in the example. The usual priors such as the
Jeffreys prior
often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the
expected loss
will be inadmissible.


Definitions and concepts

Several equivalent definitions of a Bayesian network have been offered. For the following, let ''G'' = (''V'',''E'') be a
directed acyclic graph
(DAG) and let ''X'' = (''X_v''), ''v'' ∈ ''V'' be a set of
random variables indexed by ''V''.


Factorization definition

''X'' is a Bayesian network with respect to ''G'' if its joint
probability density function
(with respect to a
product measure) can be written as a product of the individual density functions, conditional on their parent variables: : p(x) = \prod_{v \in V} p\left(x_v \,\big|\, x_{\text{pa}(v)} \right) where pa(''v'') is the set of parents of ''v'' (i.e. those vertices pointing directly to ''v'' via a single edge). For any set of random variables, the probability of any member of a
joint distribution
can be calculated from conditional probabilities using the
chain rule
(given a
topological ordering
of ''X'') as follows: : \operatorname P(X_1=x_1, \ldots, X_n=x_n) = \prod_{v=1}^n \operatorname P \left(X_v=x_v \mid X_{v+1}=x_{v+1}, \ldots, X_n=x_n \right) Using the definition above, this can be written as: : \operatorname P(X_1=x_1, \ldots, X_n=x_n) = \prod_{v=1}^n \operatorname P (X_v=x_v \mid X_j=x_j \text{ for each } X_j \text{ that is a parent of } X_v) The difference between the two expressions is the
conditional independence
of the variables from any of their non-descendants, given the values of their parent variables.
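A direct translation of this factorization into code: the sketch below evaluates p(x) as the product of local conditional probabilities, using an assumed {node: (parents, CPT)} encoding with the sprinkler example's numbers:

```python
# Evaluating p(x) = prod_v p(x_v | x_pa(v)) for Boolean variables. Each CPT
# maps parent assignments to Pr(node = True | parents); toy encoding, not a
# library API.
def joint_probability(network, assignment):
    p = 1.0
    for v, (parents, cpt) in network.items():
        pv = cpt[tuple(assignment[u] for u in parents)]
        p *= pv if assignment[v] else 1 - pv
    return p

network = {
    "R": ((), {(): 0.2}),
    "S": (("R",), {(True,): 0.01, (False,): 0.4}),
    "G": (("S", "R"), {(True, True): 0.99, (True, False): 0.9,
                       (False, True): 0.8, (False, False): 0.0}),
}
print(joint_probability(network, {"R": True, "S": True, "G": True}))  # 0.00198
```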


Local Markov property

''X'' is a Bayesian network with respect to ''G'' if it satisfies the ''local Markov property'': each variable is
conditionally independent
of its non-descendants given its parent variables: : X_v \perp\!\!\!\perp X_{V \smallsetminus \operatorname{de}(v)} \mid X_{\operatorname{pa}(v)} \quad\text{for all } v \in V where de(''v'') is the set of descendants and ''V'' \ de(''v'') is the set of non-descendants of ''v''. This can be expressed in terms similar to the first definition, as : \begin{align} & \operatorname P(X_v=x_v \mid X_i=x_i \text{ for each } X_i \text{ that is not a descendant of } X_v) \\ = {} & \operatorname P(X_v=x_v \mid X_j=x_j \text{ for each } X_j \text{ that is a parent of } X_v) \end{align} The set of parents is a subset of the set of non-descendants because the graph is acyclic.


Developing Bayesian networks

Developing a Bayesian network often begins with creating a DAG ''G'' such that ''X'' satisfies the local Markov property with respect to ''G''. Sometimes this is a
causal
DAG. The conditional probability distributions of each variable given its parents in ''G'' are assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution of ''X'' is the product of these conditional distributions, then ''X'' is a Bayesian network with respect to ''G''.


Markov blanket

The
Markov blanket
of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. ''X'' is a Bayesian network with respect to ''G'' if every node is conditionally independent of all other nodes in the network, given its
Markov blanket.
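Computed from a parent map, the blanket is a few set operations (toy encoding as in the earlier sketches):

```python
# Markov blanket of a node: its parents, its children, and its children's
# other parents, from a {node: set of parents} encoding.
def markov_blanket(dag, v):
    children = {w for w, parents in dag.items() if v in parents}
    co_parents = {p for w in children for p in dag[w]} - {v}
    return set(dag[v]) | children | co_parents

dag = {"R": set(), "S": {"R"}, "G": {"S", "R"}}
print(markov_blanket(dag, "S"))   # {'R', 'G'}: parent R, child G, co-parent R
```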


''d''-separation

This definition can be made more general by defining the ''d''-separation of two nodes, where d stands for directional. We first define the ''d''-separation of a trail and then we will define the ''d''-separation of two nodes in terms of that. Let ''P'' be a trail from node ''u'' to ''v''. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. Then ''P'' is said to be ''d''-separated by a set of nodes ''Z'' if any of the following conditions holds:
* ''P'' contains (but does not need to be entirely) a directed chain, u \cdots \leftarrow m \leftarrow \cdots v or u \cdots \rightarrow m \rightarrow \cdots v, such that the middle node ''m'' is in ''Z'',
* ''P'' contains a fork, u \cdots \leftarrow m \rightarrow \cdots v, such that the middle node ''m'' is in ''Z'', or
* ''P'' contains an inverted fork (or collider), u \cdots \rightarrow m \leftarrow \cdots v, such that the middle node ''m'' is not in ''Z'' and no descendant of ''m'' is in ''Z''.
The nodes ''u'' and ''v'' are ''d''-separated by ''Z'' if all trails between them are ''d''-separated. If ''u'' and ''v'' are not ''d''-separated, they are ''d''-connected. ''X'' is a Bayesian network with respect to ''G'' if, for any two nodes ''u'', ''v'': : X_u \perp\!\!\!\perp X_v \mid X_Z where ''Z'' is a set which ''d''-separates ''u'' and ''v''. (The
Markov blanket
is the minimal set of nodes which ''d''-separates node ''v'' from all other nodes.)
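A brute-force checker makes the three rules concrete: enumerate every trail between two nodes and test whether some middle node blocks it (exponential-time toy code, suitable only for small graphs):

```python
# Brute-force d-separation test. A DAG is encoded as {node: set of parents}.
def descendants(dag, v):
    out, stack = set(), [v]
    while stack:
        u = stack.pop()
        for w, parents in dag.items():
            if u in parents and w not in out:
                out.add(w)
                stack.append(w)
    return out

def trails(dag, u, v, path=None):
    path = path or [u]
    if u == v:
        yield path
        return
    for w in dag:
        if w not in path and (u in dag[w] or w in dag[u]):  # edge either way
            yield from trails(dag, w, v, path + [w])

def d_separated(dag, u, v, Z):
    for trail in trails(dag, u, v):
        blocked = False
        for a, m, b in zip(trail, trail[1:], trail[2:]):
            collider = a in dag[m] and b in dag[m]          # a -> m <- b
            if collider:
                blocked = m not in Z and not (descendants(dag, m) & Z)
            else:
                blocked = m in Z                            # chain or fork
            if blocked:
                break
        if not blocked:
            return False    # found an active (d-connecting) trail
    return True

collider = {"x": set(), "z": set(), "y": {"x", "z"}}        # x -> y <- z
print(d_separated(collider, "x", "z", set()))   # True: the collider blocks
print(d_separated(collider, "x", "z", {"y"}))   # False: conditioning opens it
```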


Causal networks

Although Bayesian networks are often used to represent
causal
relationships, this need not be the case: a directed edge from ''u'' to ''v'' does not require that ''X_v'' be causally dependent on ''X_u''. This is demonstrated by the fact that Bayesian networks on the graphs: : a \rightarrow b \rightarrow c \qquad \text{and} \qquad a \leftarrow b \leftarrow c are equivalent: that is, they impose exactly the same conditional independence requirements. A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a node ''X'' is actively caused to be in a given state ''x'' (an action written as do(''X'' = ''x'')), then the probability density function changes to that of the network obtained by cutting the links from the parents of ''X'' to ''X'', and setting ''X'' to the caused value ''x''. Using these semantics, one can predict the impact of external interventions from data obtained prior to intervention.
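These semantics can be phrased as graph surgery. The sketch below (reusing the toy encoding from the factorization example) implements do(S = T) by deleting the factor \Pr(S\mid R) and clamping S, recovering \Pr(R=T\mid\text{do}(S=T)) = \Pr(R=T) = 0.2 in the sprinkler example:

```python
# Intervention as graph surgery: do(S = T) cuts the edge R -> S and clamps
# S to true; the rain marginal is then computed in the mutilated network.
network = {
    "R": ((), {(): 0.2}),
    "S": (("R",), {(True,): 0.01, (False,): 0.4}),
    "G": (("S", "R"), {(True, True): 0.99, (True, False): 0.9,
                       (False, True): 0.8, (False, False): 0.0}),
}

def joint(network, a):
    p = 1.0
    for v, (parents, cpt) in network.items():
        pv = cpt[tuple(a[u] for u in parents)]
        p *= pv if a[v] else 1 - pv
    return p

def do(network, var, value):
    surgered = dict(network)
    surgered[var] = ((), {(): 1.0 if value else 0.0})   # no parents, clamped
    return surgered

mutilated = do(network, "S", True)
# Pr(R=T | do(S=T)) = Pr(R=T): forcing the sprinkler does not affect rain.
print(sum(joint(mutilated, {"R": True, "S": True, "G": g})
          for g in (True, False)))   # 0.2
```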


Inference complexity and approximation algorithms

In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is
NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993,
Paul Dagum
and
Michael Luby
proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks. First, they proved that no tractable
deterministic algorithm
can approximate probabilistic inference to within an
absolute error
''ɛ'' < 1/2. Second, they proved that no tractable
randomized algorithm
can approximate probabilistic inference to within an absolute error ''ɛ'' < 1/2 with confidence probability greater than 1/2. At about the same time,
Roth
proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a
conjunctive normal form
formula (CNF)) and that approximate inference within a factor 2^{n^{1-\varepsilon}} for every ''ɛ'' > 0, even for Bayesian networks with restricted architecture, is NP-hard. In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction that the conditional probabilities of the Bayesian network be bounded away from zero and one by 1/''p''(''n''), where ''p''(''n'') is any polynomial in the number of nodes in the network, ''n''.


Software

Notable software for Bayesian networks includes:
* Just another Gibbs sampler (JAGS) – Open-source alternative to WinBUGS. Uses Gibbs sampling.
* OpenBUGS – Open-source development of WinBUGS.
* SPSS Modeler – Commercial software that includes an implementation for Bayesian networks.
* Stan – An open-source package for obtaining Bayesian inference using the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo.
* PyMC (formerly PyMC3) – A Python library implementing an embedded domain-specific language to represent Bayesian networks, and a variety of samplers (including NUTS).
* WinBUGS – One of the first computational implementations of MCMC samplers. No longer maintained.
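As a usage sketch, the sprinkler network from the example above can be expressed in PyMC3 roughly as follows (API as of PyMC3 3.x; the variable names are mine, and exact sampler behavior and call signatures vary across PyMC versions):

```python
# Sprinkler network in PyMC3. Discrete Bernoulli nodes get a Metropolis-type
# sampler automatically rather than NUTS.
import pymc3 as pm

with pm.Model():
    # testval=1 avoids starting in the zero-probability state
    # (rain=F, sprinkler=F) that contradicts the observation grass=1.
    rain = pm.Bernoulli("rain", p=0.2, testval=1)
    sprinkler = pm.Bernoulli("sprinkler", p=pm.math.switch(rain, 0.01, 0.4))
    p_grass = pm.math.switch(sprinkler,
                             pm.math.switch(rain, 0.99, 0.9),
                             pm.math.switch(rain, 0.8, 0.0))
    pm.Bernoulli("grass", p=p_grass, observed=1)   # evidence: grass is wet
    trace = pm.sample(5000, chains=2)

print(trace["rain"].mean())   # ~0.36, matching Pr(R=T | G=T) above
```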


History

The term Bayesian network was coined by
Judea Pearl
in 1985 to emphasize:
* the often subjective nature of the input information
* the reliance on Bayes' conditioning as the basis for updating information
* the distinction between causal and evidential modes of reasoning
In the late 1980s Pearl's ''Probabilistic Reasoning in Intelligent Systems'' and
Neapolitan's ''Probabilistic Reasoning in Expert Systems'' summarized their properties and established them as a field of study.



External links


* An Introduction to Bayesian Networks and their Contemporary Applications
* Web-App to create Bayesian nets and run it with a Monte Carlo method
* Continuous Time Bayesian Networks
* Bayesian Networks: Explanation and Analogy
* A live tutorial on learning Bayesian networks
* A hierarchical Bayes Model for handling sample heterogeneity in classification problems – provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
* Hierarchical Naive Bayes Model for handling sample uncertainty – shows how to perform classification and learning with continuous and discrete variables with replicated measurements.