Graph Cut Optimization
Graph cut optimization is a combinatorial optimization method applicable to a family of functions of discrete variables, named after the concept of cut in the theory of flow networks. Thanks to the max-flow min-cut theorem, determining the minimum cut over a graph representing a flow network is equivalent to computing the maximum flow over the network. Given a pseudo-Boolean function f, if it is possible to construct a flow network with positive weights such that

* each cut C of the network can be mapped to an assignment of variables \mathbf{x} to f (and vice versa), and
* the cost of C equals f(\mathbf{x}) (up to an additive constant),

then it is possible to find the global optimum of f in polynomial time by computing a minimum cut of the graph. The mapping between cuts and variable assignments is done by representing each variable with one node in the graph: given a cut, each variable takes the value 0 if the corresponding node belongs to the component connected to the source, or 1 if it belongs to the component connected to the sink. Not all pseudo-Boolean functions can be represented by a flow network, and in the general case the global optimization problem is NP-hard. There exist sufficient conditions to characterise families of functions that can be optimised through graph cuts, such as submodular quadratic functions. Graph cut optimization can be extended to functions of discrete variables with a finite number of values, which can be approached with iterative algorithms with strong optimality properties, computing one graph cut at each iteration. Graph cut optimization is an important tool for inference over graphical models such as Markov random fields or conditional random fields, and it has applications in computer vision problems such as image segmentation, denoising, registration and stereo matching.


Representability

A pseudo-Boolean function f: \{0, 1\}^n \to \mathbb{R} is said to be ''representable'' if there exists a graph G = (V, E) with non-negative weights and with source and sink nodes s and t respectively, and there exists a set of nodes V_0 = \{v_1, \dots, v_n\} \subset V - \{s, t\} such that, for each tuple of values (x_1, \dots, x_n) \in \{0, 1\}^n assigned to the variables, f(x_1, \dots, x_n) equals (up to a constant) the value of the flow determined by a minimum cut C = (S, T) of the graph G such that v_i \in S if x_i = 0 and v_i \in T if x_i = 1.

It is possible to classify pseudo-Boolean functions according to their order, determined by the maximum number of variables contributing to each single term. All first-order functions, where each term depends upon at most one variable, are always representable. Quadratic functions

: f(\mathbf{x}) = w_0 + \sum_i w_i(x_i) + \sum_{i < j} w_{ij}(x_i, x_j)

are representable if and only if they are submodular, i.e. for each quadratic term w_{ij} the following condition is satisfied:

: w_{ij}(0, 0) + w_{ij}(1, 1) \le w_{ij}(0, 1) + w_{ij}(1, 0) .

Cubic functions

: f(\mathbf{x}) = w_0 + \sum_i w_i(x_i) + \sum_{i < j} w_{ij}(x_i, x_j) + \sum_{i < j < k} w_{ijk}(x_i, x_j, x_k)

are representable if and only if they are ''regular'', i.e. all possible binary projections to two variables, obtained by fixing the value of the remaining variable, are submodular. For higher-order functions, regularity is a necessary condition for representability.
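As a concrete illustration of the submodularity condition, the following sketch (a hypothetical helper, not taken from the literature) checks the inequality above for a pairwise term given as a table of its four values:

 def is_submodular(w_ij):
     """w_ij maps each pair (x_i, x_j) in {0,1}^2 to a real value."""
     return w_ij[0, 0] + w_ij[1, 1] <= w_ij[0, 1] + w_ij[1, 0]

 # Example: a Potts-style term penalising disagreement is submodular.
 potts = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
 assert is_submodular(potts)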


Graph construction

Graph construction for a representable function is simplified by the fact that the sum of two representable functions f' and f'' is representable, and its graph G = (V' \cup V'', E' \cup E'') is the union of the graphs G' = (V', E') and G'' = (V'', E'') representing the two functions. This theorem allows one to build separate graphs representing each term and combine them to obtain a graph representing the entire function. The graph representing a quadratic function of n variables contains n + 2 vertices, two of them representing the source and sink and the others representing the variables. When representing higher-order functions, the graph contains auxiliary nodes that make it possible to model higher-order interactions.
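A minimal sketch of this additivity, representing each per-term graph simply as a dictionary from directed edges to weights (an illustrative choice, not prescribed by the construction): two graphs are combined by taking the union of their edges and adding the weights of parallel edges.

 from collections import defaultdict

 def merge_term_graphs(*term_graphs):
     """Combine per-term graphs (dicts mapping directed edges to weights)
     by taking the union of their edges and summing parallel edges."""
     merged = defaultdict(float)
     for graph in term_graphs:
         for edge, weight in graph.items():
             merged[edge] += weight
     return dict(merged)

 # Two single-term graphs over shared nodes combine into one graph.
 print(merge_term_graphs({("s", "v1"): 2.0}, {("s", "v1"): 1.0, ("v1", "t"): 3.0}))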


Unary terms

A unary term w_i depends only on one variable x_i and can be represented by a graph with one non-terminal node v_i and one edge s \rightarrow v_i with weight w_i(1) - w_i(0) if w_i(1) \ge w_i(0), or v_i \rightarrow t with weight w_i(0) - w_i(1) if w_i(1) < w_i(0).
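A minimal sketch of this rule, using the same illustrative edge-dictionary representation as above (node names such as 'v1' are hypothetical):

 def unary_edges(i, w0, w1):
     """Edge for the unary term w_i with w_i(0) = w0 and w_i(1) = w1.
     The edge is cut exactly when the term is paid (up to a constant)."""
     if w1 >= w0:
         return {("s", f"v{i}"): w1 - w0}   # paid when x_i = 1
     return {(f"v{i}", "t"): w0 - w1}       # paid when x_i = 0

 print(unary_edges(1, 0.0, 3.0))  # {('s', 'v1'): 3.0}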


Binary terms

A quadratic (or binary) term w_{ij} can be represented by a graph containing two non-terminal nodes v_i and v_j. The term can be rewritten as

: w_{ij}(x_i, x_j) = w_{ij}(0, 0) + k_i x_i + k_j x_j + k_{ij} (1 - x_i) x_j

with

: \begin{align} k_i &= w_{ij}(1, 0) - w_{ij}(0, 0) \\ k_j &= w_{ij}(1, 1) - w_{ij}(1, 0) \\ k_{ij} &= w_{ij}(0, 1) + w_{ij}(1, 0) - w_{ij}(0, 0) - w_{ij}(1, 1) . \end{align}

In this expression, the first term is constant and is not represented by any edge, the two following terms depend on one variable each and are represented by one edge, as shown in the previous section for unary terms, while the third term is represented by an edge v_i \rightarrow v_j with weight w_{ij}(0, 1) + w_{ij}(1, 0) - w_{ij}(0, 0) - w_{ij}(1, 1) (submodularity guarantees that the weight is non-negative).
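The decomposition can be turned into edges mechanically. The sketch below (same illustrative edge-dictionary representation as in the previous sections, not a full library implementation) rejects non-submodular terms, for which the resulting weight would be negative:

 def binary_edges(i, j, w):
     """Edges for a submodular pairwise term w, a dict over {0,1}^2,
     following the decomposition above."""
     a, b, c, d = w[0, 0], w[0, 1], w[1, 0], w[1, 1]
     k_ij = b + c - a - d               # non-negative iff the term is submodular
     if k_ij < 0:
         raise ValueError("term is not submodular")
     edges = {}
     for node, k in ((f"v{i}", c - a), (f"v{j}", d - c)):  # k_i x_i and k_j x_j
         if k > 0:
             edges[("s", node)] = k
         elif k < 0:
             edges[(node, "t")] = -k
     if k_ij > 0:
         edges[(f"v{i}", f"v{j}")] = k_ij   # paid when x_i = 0 and x_j = 1
     return edges

 print(binary_edges(1, 2, {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}))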


Ternary terms

A cubic (or ternary) term w_{ijk} can be represented by a graph with four non-terminal nodes, three of them (v_i, v_j and v_k) associated to the three variables plus a fourth auxiliary node v_{ijk}. A generic ternary term can be rewritten as the sum of a constant, three unary terms, three binary terms, and a ternary term in simplified form. There are two different cases, according to the sign of

: p = w_{ijk}(0, 0, 0) + w_{ijk}(0, 1, 1) + w_{ijk}(1, 0, 1) + w_{ijk}(1, 1, 0) - w_{ijk}(0, 0, 1) - w_{ijk}(0, 1, 0) - w_{ijk}(1, 0, 0) - w_{ijk}(1, 1, 1) .

If p > 0 then

: w_{ijk}(x_i, x_j, x_k) = w_{ijk}(0, 0, 0) + p_1 x_i + p_2 x_j + p_3 x_k + p_{23} (1 - x_j) x_k + p_{31} x_i (1 - x_k) + p_{12} (1 - x_i) x_j - p\, x_i x_j x_k

with

: \begin{align} p_1 &= w_{ijk}(1, 0, 1) - w_{ijk}(0, 0, 1) \\ p_2 &= w_{ijk}(1, 1, 0) - w_{ijk}(1, 0, 0) \\ p_3 &= w_{ijk}(0, 1, 1) - w_{ijk}(0, 1, 0) \\ p_{23} &= w_{ijk}(0, 0, 1) + w_{ijk}(0, 1, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(0, 1, 1) \\ p_{31} &= w_{ijk}(0, 0, 1) + w_{ijk}(1, 0, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(1, 0, 1) \\ p_{12} &= w_{ijk}(0, 1, 0) + w_{ijk}(1, 0, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(1, 1, 0) . \end{align}

If p < 0 the construction is similar, but with the roles of the values 0 and 1 exchanged. If the function is regular, then all its projections onto two variables are submodular, implying that p_{23}, p_{31} and p_{12} are non-negative and therefore that all terms in the new representation are submodular.

In this decomposition, the constant, unary and binary terms can be represented as shown in the previous sections. If p > 0 the ternary term can be represented with a graph with four edges v_i \rightarrow v_{ijk}, v_j \rightarrow v_{ijk}, v_k \rightarrow v_{ijk}, v_{ijk} \rightarrow t, all with weight p, while if p < 0 the term can be represented by the four edges v_{ijk} \rightarrow v_i, v_{ijk} \rightarrow v_j, v_{ijk} \rightarrow v_k, s \rightarrow v_{ijk} with weight -p.
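For the simplified ternary part -p\, x_i x_j x_k alone, the auxiliary-node edges described above can be generated as follows (again using the illustrative edge-dictionary representation; the unary and binary parts of the decomposition are handled by the previous constructions):

 def ternary_aux_edges(i, j, k, w):
     """Auxiliary-node edges for the simplified ternary part of a term w,
     a dict over {0,1}^3, with p defined as above."""
     p = (w[0, 0, 0] + w[0, 1, 1] + w[1, 0, 1] + w[1, 1, 0]
          - w[0, 0, 1] - w[0, 1, 0] - w[1, 0, 0] - w[1, 1, 1])
     aux = f"v{i}{j}{k}"                      # the auxiliary node
     if p >= 0:
         return {(f"v{i}", aux): p, (f"v{j}", aux): p,
                 (f"v{k}", aux): p, (aux, "t"): p}
     return {(aux, f"v{i}"): -p, (aux, f"v{j}"): -p,
             (aux, f"v{k}"): -p, ("s", aux): -p}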


Minimum cut

After building a graph representing a pseudo-Boolean function, it is possible to compute a minimum cut using one of the various algorithms developed for flow networks, such as the Ford–Fulkerson, Edmonds–Karp, and Boykov–Kolmogorov algorithms. The result is a partition of the graph into two connected components S and T such that s \in S and t \in T, and the function attains its global minimum when x_i = 0 for each i such that the corresponding node v_i \in S, and x_i = 1 for each i such that the corresponding node v_i \in T.

Max-flow algorithms such as Boykov–Kolmogorov's are very efficient in practice for sequential computation, but they are difficult to parallelise, making them unsuitable for distributed computing applications and preventing them from exploiting the potential of modern CPUs. Parallel max-flow algorithms have been developed, such as push-relabel and jump-flood, which can also take advantage of hardware acceleration in GPGPU implementations.
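As an end-to-end illustration, the following sketch uses the networkx library (one possible choice; any max-flow implementation would do) to minimise the small submodular function f(x_1, x_2) = 3(1 - x_1) + 2 x_2 + [x_1 \ne x_2] with a single minimum cut; node names and weights follow the constructions above.

 import networkx as nx

 G = nx.DiGraph()
 G.add_edge("v1", "t", capacity=3.0)   # unary term 3*(1 - x1)
 G.add_edge("s", "v2", capacity=2.0)   # unary term 2*x2
 # Potts term [x1 != x2], decomposed as x1 - x2 + 2*(1 - x1)*x2 (+ constant)
 G.add_edge("s", "v1", capacity=1.0)
 G.add_edge("v2", "t", capacity=1.0)
 G.add_edge("v1", "v2", capacity=2.0)

 cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
 assignment = {v: 0 if v in S else 1 for v in ("v1", "v2")}
 print(cut_value, assignment)  # cut value 2.0 (= f + 1), optimum x1 = 1, x2 = 0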


Functions of discrete variables with more than two values

The previous construction allows global optimization of pseudo-Boolean functions only, but it can be extended to quadratic functions of discrete variables with a finite number of values, in the form

: f(\mathbf{x}) = \sum_{i \in V} D(x_i) + \sum_{(i, j) \in E} S(x_i, x_j)

where E \subseteq V \times V and x_i \in \Lambda = \{1, \dots, k\}. The function D(x_i) represents the unary contribution of each variable (often referred to as the ''data term''), while the function S(x_i, x_j) represents binary interactions between variables (''smoothness term''). In the general case, optimization of such functions is an NP-hard problem, and stochastic optimization methods such as simulated annealing are sensitive to local minima and in practice they can generate arbitrarily sub-optimal results. With graph cuts it is possible to construct move-making algorithms that reach, in polynomial time, a local minimum with strong optimality properties for a wide family of quadratic functions of practical interest (when the binary interaction S(x_i, x_j) is a metric or a semimetric), such that the value of the function in the solution lies within a constant and known factor from the global optimum.

Given a function f: \Lambda^n \to \mathbb{R} with \Lambda = \{1, \dots, k\}, and a certain assignment of values \mathbf{x} = (x_1, \dots, x_n) \in \Lambda^n to the variables, it is possible to associate each assignment \mathbf{x} to a partition P = \{P_l \mid l \in \Lambda\} of the set of variables, such that P_l = \{x_i \mid x_i = l\}. Given two distinct assignments P and P' and a value \alpha \in \Lambda, a move that transforms P into P' is said to be an \alpha-expansion if P_\alpha \subset P'_\alpha and P'_l \subset P_l \; \forall l \in \Lambda - \{\alpha\}. Given a couple of values \alpha and \beta, a move is said to be an \alpha\beta-swap if P_l = P'_l \; \forall l \in \Lambda - \{\alpha, \beta\}. Intuitively, an \alpha-expansion move from \mathbf{x} assigns the value \alpha to some variables that have a different value in \mathbf{x}, while an \alpha\beta-swap move assigns \alpha to some variables that have value \beta in \mathbf{x} and vice versa.

At each iteration, the \alpha-expansion algorithm computes, for each possible value \alpha, the minimum of the function among all assignments \Alpha_\alpha(\mathbf{x}) that can be reached with a single \alpha-expansion move from the current temporary solution \mathbf{x}, and takes it as the new temporary solution.

 \mathbf{x} := arbitrary value in \Lambda^n
 success := 0
 while success \ne 1:
     success := 1
     foreach \alpha \in \Lambda:
         \hat{\mathbf{x}} := \arg\min_{\mathbf{y} \in \Alpha_\alpha(\mathbf{x})} f(\mathbf{y})
         if f(\hat{\mathbf{x}}) < f(\mathbf{x}):
             \mathbf{x} := \hat{\mathbf{x}}
             success := 0

The \alpha\beta-swap algorithm is similar, but it searches for the minimum among all assignments \Alpha\Beta_{\alpha\beta}(\mathbf{x}) reachable with a single \alpha\beta-swap move from \mathbf{x}.

 \mathbf{x} := arbitrary value in \Lambda^n
 success := 0
 while success \ne 1:
     success := 1
     foreach (\alpha, \beta) \in \Lambda^2:
         \hat{\mathbf{x}} := \arg\min_{\mathbf{y} \in \Alpha\Beta_{\alpha\beta}(\mathbf{x})} f(\mathbf{y})
         if f(\hat{\mathbf{x}}) < f(\mathbf{x}):
             \mathbf{x} := \hat{\mathbf{x}}
             success := 0

In both cases, the optimization problem in the innermost loop can be solved exactly and efficiently with a graph cut. Both algorithms are guaranteed to terminate in a finite number of iterations of the outer loop, and in practice such number is small, with most of the improvement happening at the first iteration. The algorithms can generate different solutions depending on the initial guess, but in practice they are robust with respect to initialisation, and starting with a point where all variables are assigned to the same random value is usually sufficient to produce good quality results.

The solution generated by such algorithms is not necessarily a global optimum, but it has strong guarantees of optimality. If S(x_i, x_j) is a metric and \hat{\mathbf{x}} is a solution generated by the \alpha-expansion algorithm, or if S(x_i, x_j) is a semimetric and \hat{\mathbf{x}} is a solution generated by the \alpha\beta-swap algorithm, then f(\hat{\mathbf{x}}) lies within a known and constant factor from the global minimum f(\mathbf{x}^*):

: f(\hat{\mathbf{x}}) \le 2 \, \frac{\max_{\alpha \ne \beta \in \Lambda} S(\alpha, \beta)}{\min_{\alpha \ne \beta \in \Lambda} S(\alpha, \beta)} \, f(\mathbf{x}^*) .
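The outer loop of the \alpha-expansion algorithm is straightforward to implement. In the sketch below, the inner optimization, which in practice is solved exactly with one graph cut per move, is left as an abstract callback expansion_move (a hypothetical helper, not a library function):

 import random

 def alpha_expansion(f, labels, n, expansion_move, max_sweeps=20):
     """Outer loop of alpha-expansion. expansion_move(x, alpha) is assumed
     to return the best labelling reachable from x with one alpha-expansion."""
     x = [random.choice(labels)] * n    # constant initial labelling, as noted above
     improved, sweeps = True, 0
     while improved and sweeps < max_sweeps:
         improved = False
         sweeps += 1
         for alpha in labels:
             x_new = expansion_move(x, alpha)
             if f(x_new) < f(x):
                 x, improved = x_new, True
     return x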


Non-submodular functions

Generally speaking, the problem of optimizing a non-submodular pseudo-Boolean function is NP-hard and cannot be solved in polynomial time with a simple graph cut. The simplest approach is to approximate the function with a similar but submodular one, for instance by truncating all non-submodular terms or replacing them with similar submodular expressions. Such an approach is generally sub-optimal, and it produces acceptable results only if the number of non-submodular terms is relatively small.

In the case of quadratic non-submodular functions, it is possible to compute in polynomial time a partial solution using algorithms such as QPBO. Higher-order functions can be reduced in polynomial time to a quadratic form that can be optimised with QPBO.
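As a sketch of the truncation idea (one possible scheme among several; the exact adjustment rule varies between methods), a violating pairwise term can be modified just enough to satisfy the submodularity condition:

 def truncate_to_submodular(w):
     """Return a submodular approximation of a pairwise term (dict over {0,1}^2)
     by lowering w(1,1) until the submodularity inequality holds. The function
     is changed, so the resulting optimum is only approximate."""
     if w[0, 1] + w[1, 0] - w[0, 0] - w[1, 1] >= 0:
         return dict(w)                          # already submodular
     fixed = dict(w)
     fixed[1, 1] = w[0, 1] + w[1, 0] - w[0, 0]   # enforce equality in the condition
     return fixed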


Higher-order functions

Quadratic functions have been extensively studied and characterised in detail, but more general results have also been derived for higher-order functions. While quadratic functions can indeed model many problems of practical interest, they are limited by the fact that they can represent only binary interactions between variables. The ability to capture higher-order interactions makes it possible to better model the nature of the problem, and it can provide higher quality results that could be difficult to achieve with quadratic models. For instance, in computer vision applications, where each variable represents a pixel or voxel of the image, higher-order interactions can be used to model texture information that would be difficult to capture using only quadratic functions.

Sufficient conditions analogous to submodularity have been developed to characterise higher-order pseudo-Boolean functions that can be optimised in polynomial time, and there exist algorithms analogous to \alpha-expansion and \alpha\beta-swap for some families of higher-order functions. The problem is NP-hard in the general case, and approximate methods have been developed for fast optimization of functions that do not satisfy such conditions.




Notes

Algorithms such as simulated annealing have strong theoretical convergence properties for certain temperature schedules, but such schedules require time growing to infinity and cannot be realised in practice.
Adding one auxiliary node is necessary: graphs without auxiliary nodes can only represent binary interactions between variables.


External links


Implementation (C++) of several graph cut algorithms by Vladimir Kolmogorov.
GCO, a graph cut optimization library by Olga Veksler and Andrew Delong.