Distributed Constraint Reasoning
Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue of constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized.

Distributed constraint satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are described over some variables with predefined domains, and have to be assigned the same values by the different agents. Problems defined within this framework can be solved by any of the algorithms designed for it. The framework was used under different names in the 1980s; the first known usage under its current name is from 1990.


Definitions


DCOP

The main ingredients of a DCOP problem are ''agents'' and ''variables''. Importantly, each variable is owned by an agent; this is what makes the problem distributed. Formally, a DCOP is a tuple \langle A, V, \mathfrak{D}, f, \alpha, \eta \rangle, where:

* A is the set of ''agents''.
* V is the set of ''variables''.
* \mathfrak{D} is the set of ''variable domains'', where each D_j \in \mathfrak{D} is a finite set containing the possible values of variable v_j.
** If D_j \in \mathfrak{D} contains only two values (e.g. 0 or 1), then v_j is called a ''binary variable''.
* f is the ''cost function''. It is a function f : \bigcup_{S \subseteq V} \left( \times_{v_j \in S} D_j \right) \to \R that maps every possible partial assignment to a cost. Usually, only a few values of f are non-zero, and it is represented as a list of the tuples that are assigned a non-zero value. Each such tuple is called a ''constraint''. Each constraint C in this set is a function f_C : D_1 \times \cdots \times D_k \to \R assigning a real value to each possible assignment of the variables in its scope. Some special kinds of constraints are:
** ''Unary constraints'' - constraints on a single variable, i.e., f_C : D_j \to \R for some v_j \in V.
** ''Binary constraints'' - constraints on two variables, i.e., f_C : D_{j_1} \times D_{j_2} \to \R for some v_{j_1}, v_{j_2} \in V.
* \alpha is the ''ownership function''. It is a function \alpha : V \to A mapping each variable to its associated agent. \alpha(v_j) \mapsto a_i means that variable v_j "belongs" to agent a_i; it is then agent a_i's responsibility to assign the value of variable v_j. Note that \alpha is not necessarily an injection, i.e., one agent may own more than one variable. It is also not necessarily a surjection, i.e., some agents may own no variables.
* \eta is the ''objective function''. It is an operator that aggregates all of the individual f costs for all possible variable assignments. This is usually accomplished through summation: \eta(f) \mapsto \sum_{s} f(s).

The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize \eta(f) for a given assignment of the variables.


Assignments

A ''value assignment'' is a pair (v_j, d_j) where d_j is an element of the domain D_j. A ''partial assignment'' is a set of value assignments in which each v_j appears at most once. It is also called a ''context''. A context can be thought of as a function mapping variables in the DCOP to their current values: t : V \to \left( \bigcup_{D_j \in \mathfrak{D}} D_j \right) \cup \{\emptyset\}. Note that a context is essentially a partial solution and need not contain values for ''every'' variable in the problem; therefore, t(v_i) \mapsto \emptyset implies that the agent \alpha(v_i) has not yet assigned a value to variable v_i. Given this representation, the "domain" (that is, the set of input values) of the function f can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the t function) as an input to the f function.

A ''full assignment'' is an assignment in which each v_j appears exactly once, that is, all variables are assigned. It is also called a ''solution'' to the DCOP. An ''optimal solution'' is a full assignment in which the objective function \eta(f) is optimized (i.e., maximized or minimized, depending on the type of problem).
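In code, a context is naturally a partial mapping. A brief illustrative sketch (the helper names are assumptions, not from any library): a dict plays the role of t, and an absent key plays the role of t(v) \mapsto \emptyset.

```python
def lookup(context, variable):
    """Return the assigned value, or None when the owning agent
    has not yet chosen one (the t(v) -> emptyset case)."""
    return context.get(variable)

def is_full_assignment(context, variables):
    """A solution assigns every variable exactly once."""
    return all(v in context for v in variables)
```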


Example problems

Various problems from different domains can be presented as DCOPs.


Distributed graph coloring

The graph coloring problem is as follows: given a graph G = \langle N, E \rangle and a set of colors C, assign each vertex n \in N a color c \in C such that the number of adjacent vertices with the same color is minimized.

As a DCOP, there is one agent per vertex, which decides the associated color. Each agent has a single variable whose associated domain has cardinality |C| (there is one domain value for each possible color). For each vertex n_i \in N, there is a variable v_i \in V with domain D_i = C. For each pair of adjacent vertices \langle n_i, n_j \rangle \in E, there is a constraint of cost 1 if both of the associated variables are assigned the same color: (\forall c \in C : f(\langle v_i, c \rangle, \langle v_j, c \rangle) \mapsto 1). The objective, then, is to minimize \eta(f).
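The coloring objective above can be sketched in a few lines of Python. The graph, colors, and function names here are illustrative; brute-forcing all assignments just makes the objective explicit and is of course not how a distributed algorithm would proceed.

```python
def coloring_cost(edges, assignment):
    """eta(f) for graph coloring: one unit of cost per edge
    whose two endpoints are assigned the same color."""
    return sum(1 for (i, j) in edges if assignment[i] == assignment[j])

# triangle graph (three mutually adjacent vertices) with two colors "r", "b":
# no proper 2-coloring exists, so the best achievable cost is 1 conflict
edges = [(0, 1), (1, 2), (0, 2)]
best = min(coloring_cost(edges, {0: a, 1: b, 2: c})
           for a in "rb" for b in "rb" for c in "rb")
```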


Distributed multiple knapsack problem

The ''distributed multiple'' variant of the knapsack problem is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the amount of overflow is minimized. Let I be the set of items, K be the set of knapsacks, s : I \to \N be a function mapping items to their volume, and c : K \to \N be a function mapping knapsacks to their capacities.

To encode this problem as a DCOP, for each i \in I create one variable v_i \in V with associated domain D_i = K. Then for all possible contexts t:

f(t) \mapsto \sum_{k \in K} \begin{cases} 0 & r(t,k) \leq c(k), \\ r(t,k) - c(k) & \text{otherwise}, \end{cases}

where r(t,k) represents the total volume assigned by context t to knapsack k: r(t,k) = \sum_{i \in I : t(v_i) = k} s(i).
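The overflow objective above translates directly into code. A minimal sketch with illustrative names: t maps each item to its chosen knapsack, s gives item volumes, c gives knapsack capacities.

```python
def overflow_cost(t, s, c):
    """f(t) for the distributed multiple knapsack DCOP:
    sum over knapsacks of max(0, r(t,k) - c(k))."""
    # r(t, k): total volume the context assigns to knapsack k
    load = {k: 0 for k in c}
    for item, k in t.items():
        load[k] += s[item]
    return sum(max(0, load[k] - c[k]) for k in c)
```

Assigning items of volume 3 and 4 to a single knapsack of capacity 5 overflows by 2; splitting them across sufficiently large knapsacks costs 0.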


Distributed item allocation problem

The item allocation problem is as follows. There are several items that have to be divided among several agents. Each agent has a different valuation for the items. The goal is to optimize some global objective, such as maximizing the sum of utilities or minimizing the envy. The item allocation problem can be formulated as a DCOP as follows.

* Add a binary variable ''vij'' for each agent ''i'' and item ''j''. The variable value is "1" if the agent gets the item, and "0" otherwise. The variable is owned by agent ''i''.
* To express the constraint that each item is given to at most one agent, add binary constraints for each two different variables related to the same item, with an infinite cost if the two variables are simultaneously "1", and a zero cost otherwise.
* To express the constraint that all items must be allocated, add an ''n''-ary constraint for each item (where ''n'' is the number of agents), with an infinite cost if no variable related to this item is "1".
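The encoding above can be sketched as a single cost function. This is an illustrative combination of the infinite-cost constraints with a sum-of-utilities objective (negated, so that minimizing the cost maximizes total utility); the names and matrix layout are assumptions, with x[i][j] standing for the binary variable ''vij''.

```python
INF = float("inf")  # stands in for the infinite constraint cost

def allocation_cost(x, valuations):
    """Return -sum of utilities, or INF if some item is given to
    no agent or to more than one agent (the hard constraints)."""
    n_agents = len(x)
    n_items = len(x[0])
    for j in range(n_items):
        winners = sum(x[i][j] for i in range(n_agents))
        if winners != 1:  # item j is unallocated or doubly allocated
            return INF
    return -sum(valuations[i][j]
                for i in range(n_agents) for j in range(n_items)
                if x[i][j] == 1)
```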


Other applications

DCOP has been applied to other problems, such as:

* coordinating mobile sensors;
* meeting and task scheduling.


Algorithms

DCOP algorithms can be classified in several ways:

* ''Completeness'' - complete search algorithms that find the optimal solution, vs. local search algorithms that find a local optimum;
* ''Search strategy'' - best-first search or depth-first branch-and-bound search;
* ''Synchronization'' among agents - synchronous or asynchronous;
* ''Communication'' among agents - point-to-point with neighbors in the constraint graph, or broadcast;
* ''Communication topology'' - chain or tree.

ADOPT, for example, uses best-first search, asynchronous synchronization, point-to-point communication between neighboring agents in the constraint graph, and a constraint tree as its main communication topology. Hybrids of these DCOP algorithms also exist. BnB-Adopt, for example, changes the search strategy of ADOPT from best-first search to depth-first branch-and-bound search.


Asymmetric DCOP

An asymmetric DCOP is an extension of DCOP in which the cost of each constraint may be different for different agents. Some example applications are:

* ''Event scheduling'': agents who attend the same event might derive different values from it.
* ''Smart grid'': the increase in the price of electricity in loaded hours may affect different agents differently.

One way to represent an ADCOP is to represent the constraints as functions:

f_C : D_1 \times \dots \times D_k \to \R^k

Here, for each constraint there is not a single cost but a vector of costs - one for each agent involved in the constraint. The vector of costs is of length ''k'' if each variable belongs to a different agent; if two or more variables belong to the same agent, then the vector of costs is shorter - there is a single cost for each involved ''agent'', not for each variable.
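A vector-valued constraint is easy to illustrate. A minimal sketch, assuming a made-up event-scheduling scenario: two agents pick time slots for a shared event, and a conflict costs them different amounts.

```python
def meeting_constraint(slot_a, slot_b):
    """An asymmetric binary constraint: returns one cost per involved
    agent rather than a single shared cost. The numbers are illustrative."""
    if slot_a == slot_b:
        return (0, 0)        # agreement costs neither agent anything
    return (2, 5)            # the agents price a conflict differently
```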


Approaches to solving an ADCOP

A simple way to solve an ADCOP is to replace each constraint f_C : D_1 \times \cdots \times D_k \to \mathbb{R}^k with a constraint f_C' : D_1 \times \cdots \times D_k \to \mathbb{R} that equals the sum of the functions f_C^1 + \cdots + f_C^k. However, this solution requires the agents to reveal their cost functions. Often, this is not desired due to privacy considerations.

Another approach is called Private Events as Variables (PEAV). In this approach, each agent owns, in addition to its own variables, also "mirror variables" of all the variables owned by its neighbors in the constraint network. There are additional constraints (with a cost of infinity) that guarantee that the mirror variables equal the original variables. The disadvantage of this method is that the number of variables and constraints is much larger than in the original problem, which leads to a higher run-time.

A third approach is to adapt existing algorithms, developed for DCOPs, to the ADCOP framework. This has been done for both complete-search algorithms and local-search algorithms.
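The first approach, summing the cost vector into a scalar, is a one-liner in code. An illustrative sketch (the names are assumptions); note that it makes the privacy loss concrete, since the collapsed constraint exposes the combined per-agent costs to whoever evaluates it.

```python
def symmetrize(adcop_constraint):
    """Turn a vector-valued ADCOP constraint f_C into the scalar DCOP
    constraint f_C' = f_C^1 + ... + f_C^k."""
    def dcop_constraint(*values):
        return sum(adcop_constraint(*values))
    return dcop_constraint

# a binary ADCOP constraint where the two agents price a conflict differently
asym = lambda a, b: (0, 0) if a == b else (2, 5)
sym = symmetrize(asym)
```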


Comparison with strategic games

The structure of an ADCOP problem is similar to the game-theoretic concept of a simultaneous game. In both cases, there are agents who control variables (in game theory, the variables are the agents' possible actions or strategies). In both cases, each choice of variables by the different agents results in a different payoff to each agent. However, there is a fundamental difference:

* In a simultaneous game, the agents are selfish - each of them wants to maximize his/her own utility (or minimize his/her own cost). Therefore, the best outcome that can be sought in such a setting is an equilibrium - a situation in which no agent can unilaterally increase his/her own gain.
* In an ADCOP, the agents are considered cooperative: they act according to the protocol even if it decreases their own utility. Therefore, the goal is more challenging: we would like to maximize the sum of utilities (or minimize the sum of costs). A Nash equilibrium roughly corresponds to a local optimum of this problem, while we are looking for a global optimum.


Partial cooperation

There are some intermediate models in which the agents are ''partially cooperative'': they are willing to decrease their utility to help the global goal, but only if their own cost is not too high. An example of partially cooperative agents are employees in a firm. On one hand, each employee wants to maximize their own utility; on the other hand, they also want to contribute to the success of the firm. Therefore, they are willing to help others or do some other time-consuming tasks that help the firm, as long as it is not too burdensome for them. Some models for partially cooperative agents are:

* ''Guaranteed personal benefit'': the agents agree to act for the global good if their own utility is at least as high as in the non-cooperative setting (i.e., the final outcome must be a Pareto improvement of the original state).
* ''Lambda-cooperation'': there is a parameter \lambda \in [0,1]. The agents agree to act for the global good if their own utility is at least as high as (1 - \lambda) times their non-cooperative utility.

Solving such partial-cooperation ADCOPs requires adaptations of ADCOP algorithms.
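The two acceptance rules above can be sketched as predicates. This is an illustrative sketch with assumed names: u_coop is an agent's utility under the proposed cooperative outcome, u_selfish its utility in the non-cooperative setting.

```python
def accepts_guaranteed_benefit(u_coop, u_selfish):
    """Guaranteed personal benefit: cooperate only if not worse off
    than in the non-cooperative setting."""
    return u_coop >= u_selfish

def accepts_lambda(u_coop, u_selfish, lam):
    """Lambda-cooperation with lam in [0, 1]: accept if the cooperative
    utility is at least (1 - lam) times the non-cooperative utility."""
    return u_coop >= (1 - lam) * u_selfish
```

With lam = 0 the rule reduces to guaranteed personal benefit; with lam = 1 the agent is fully cooperative.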


See also

* Constraint satisfaction problem
* Distributed algorithm
* Distributed algorithmic mechanism design

