Configuration linear program

The configuration linear program (configuration-LP) is a particular kind of linear program used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem by Gilmore and Gomory (1961). Later, it has been applied to the bin packing and job scheduling problems. In the configuration-LP, there is a variable for each possible ''configuration'' - each possible multiset of items that can fit in a single bin (these configurations are also known as ''patterns''). Usually, the number of configurations is exponential in the problem size, but in some cases it is possible to attain approximate solutions using only a polynomial number of configurations.


In bin packing


The integral LP

In the bin packing problem, there are ''n'' items with different sizes. The goal is to pack the items into a minimum number of bins, where each bin can contain items of total size at most ''B''. A ''feasible configuration'' is a multiset of sizes whose sum is at most ''B''.

* ''Example'': suppose the item sizes are 3,3,3,3,3,4,4,4,4,4, and ''B''=12. Then the possible configurations are: 3333; 333; 33, 334; 3, 34, 344; 4, 44, 444. If we had only three items of size 3, then we could not use the 3333 configuration.

Denote by ''S'' the set of different sizes (and also their number). Denote by ''C'' the set of different configurations (and also their number). For each size ''s'' in ''S'' and configuration ''c'' in ''C'', denote:

* n_s - the number of items of size ''s''.
* a_{s,c} - the number of occurrences of size ''s'' in configuration ''c''.
* x_c - a variable denoting the number of bins with configuration ''c''.

Then, the configuration LP of bin-packing is:
\text{minimize}~~~\sum_{c\in C}x_c~~~\text{subject to:}

\sum_{c\in C}a_{s,c}\, x_c \geq n_s for all ''s'' in ''S'' (all n_s items of size ''s'' are packed);

x_c\in\{0,1,\ldots,n\} for all ''c'' in ''C'' (there are at most ''n'' bins overall, so at most ''n'' bins of each individual configuration).
The configuration LP is an integer linear program, so in general solving it is NP-hard. Moreover, the program itself is generally very large: it has ''C'' variables and ''S'' constraints. If the smallest item size is ''eB'' (for some fraction ''e'' in (0,1)), then there can be up to 1/''e'' items in each bin, so the number of configurations ''C'' can be about S^{1/e}, which is very large when ''e'' is small. (If ''e'' is considered a constant, then the integer LP can be solved by exhaustive search: there are at most S^{1/e} configurations, and for each configuration there are at most ''n'' possible values, so there are at most n^{S^{1/e}} combinations to check. For each combination, we have to check ''S'' constraints, so the run-time is S\cdot n^{S^{1/e}}, which is polynomial in ''n'' when ''S'' and ''e'' are constant.) However, this ILP serves as a basis for several approximation algorithms. The main idea of these algorithms is to reduce the original instance to a new instance in which ''S'' is small and ''e'' is large, so ''C'' is relatively small. Then, the ILP can be solved either by complete search (if ''S'' and ''C'' are sufficiently small), or by relaxing it into a ''fractional'' LP.
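The ingredients of this ILP can be generated explicitly for small instances. The following Python sketch (purely illustrative, not taken from the cited sources; the function name is ad hoc) enumerates all feasible configurations for given item sizes and bin capacity ''B'', i.e. the columns a_{s,c} of the constraint matrix:

def feasible_configurations(sizes, B):
    """Enumerate all non-empty multisets of sizes with total size at most B.
    Each configuration is represented as a tuple of counts, one per size."""
    sizes = sorted(set(sizes))
    configs = []

    def extend(prefix, index, remaining):
        if index == len(sizes):
            if any(prefix):          # skip the empty configuration
                configs.append(tuple(prefix))
            return
        s = sizes[index]
        for k in range(remaining // s + 1):
            extend(prefix + [k], index + 1, remaining - k * s)

    extend([], 0, B)
    return sizes, configs

# Example from the text: item sizes 3 and 4, bin capacity B = 12.
sizes, configs = feasible_configurations([3, 3, 3, 3, 3, 4, 4, 4, 4, 4], 12)
print(sizes)         # [3, 4]
print(len(configs))  # 10 feasible configurations: 3333; 333; 33, 334; 3, 34, 344; 4, 44, 444
for c in configs:
    print(c)         # the pair (a_{3,c}, a_{4,c}) for each configuration c

For a reduced instance with few sizes and no very small items, the resulting ILP can then be handed to an integer-programming solver or searched exhaustively, as described above.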


The fractional LP

The fractional configuration LP of bin-packing is the linear programming relaxation of the above ILP. It replaces the last constraint x_c\in\{0,1,\ldots,n\} with the constraint x_c \geq 0. In other words, each configuration can be used a fractional number of times. The relaxation was first presented by Gilmore and Gomory, and it is often called the Gilmore-Gomory linear program.

* ''Example'': suppose there are 31 items of size 3 and 7 items of size 4, and the bin-size is 10. The configurations are: 4, 44, 34, 334, 3, 33, 333. The constraints are [0,0,1,2,1,2,3]\cdot x\geq 31 and [1,2,1,1,0,0,0]\cdot x\geq 7. An optimal solution to the fractional LP is [0,0,0,7,0,0,17/3]. That is: there are 7 bins of configuration 334 and 17/3 bins of configuration 333. Note that only two different configurations are needed.

In short, the fractional LP can be written as follows:
\text{minimize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{subject to}~~ A \mathbf{x}\geq \mathbf{n}~~~\text{and}~~ \mathbf{x}\geq 0
where \mathbf{1} is the vector (1,...,1) of size ''C'', ''A'' is an ''S''-by-''C'' matrix in which each column represents a single configuration, and \mathbf{n} is the vector (n_1,...,n_S).
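As a concrete illustration (a sketch, not part of the original article; it assumes the scipy library is available), the Gilmore-Gomory LP of the example above can be solved with a general-purpose LP solver. The configuration matrix ''A'' is written down explicitly, with its columns ordered as in the example:

import numpy as np
from scipy.optimize import linprog

# Configurations for sizes {3, 4} and bin size 10, in the order
# 4, 44, 34, 334, 3, 33, 333 (one column per configuration).
A = np.array([
    [0, 0, 1, 2, 1, 2, 3],   # occurrences of size 3 in each configuration
    [1, 2, 1, 1, 0, 0, 0],   # occurrences of size 4 in each configuration
])
n = np.array([31, 7])        # 31 items of size 3, 7 items of size 4
c = np.ones(A.shape[1])      # objective: minimize the total number of bins

# linprog expects constraints of the form A_ub @ x <= b_ub,
# so A x >= n is rewritten as -A x <= -n.
res = linprog(c, A_ub=-A, b_ub=-n, bounds=(0, None))

print(res.fun)   # optimal fractional value 38/3 = 12.666...
print(res.x)     # an optimal solution; the one in the text is [0,0,0,7,0,0,17/3]

The same pattern works for any instance once the configuration matrix is available; the difficulty discussed next is that this matrix is usually far too large to write down.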


Solving the fractional LP

A linear program with no integrality constraints can be solved in time polynomial in the number of variables and constraints. The problem is that the number of variables in the fractional configuration LP is equal to the number of possible configurations, which might be huge. Karmarkar and Karp present an algorithm that overcomes this problem. First, they construct the dual linear program of the fractional LP:
\text{maximize}~~\mathbf{n}\cdot \mathbf{y}~~~\text{subject to}~~ A^T \mathbf{y} \leq \mathbf{1}~~~\text{and}~~ \mathbf{y}\geq 0.
It has ''S'' variables y_1,...,y_S, and ''C'' constraints: for each configuration ''c'', there is a constraint A^c\cdot \mathbf{y}\leq 1, where A^c is the column of ''A'' representing the configuration ''c''. The dual has the following economic interpretation: for each size ''s'', we should determine a nonnegative price y_s. Our profit is the total price of all items. We want to maximize the profit \mathbf{n}\cdot \mathbf{y}, subject to the constraint that the total price of the items in each configuration is at most 1. Second, they apply a variant of the ellipsoid method, which does not need to list all the constraints - it just needs a ''separation oracle''. A separation oracle is an algorithm that, given a vector \mathbf{y}, either asserts that it is feasible, or finds a constraint that it violates. The separation oracle for the dual LP can be implemented by solving the knapsack problem with sizes \mathbf{s} and values \mathbf{y}: if the optimal solution of the knapsack problem has total value ''at most'' 1, then \mathbf{y} is feasible; if it is ''larger'' than 1, then \mathbf{y} is ''not'' feasible, and the optimal solution of the knapsack problem identifies a configuration whose constraint is violated. Third, they show that, from an approximate solution to the knapsack problem, one can obtain an approximate solution to the dual LP, and from this, an approximate solution to the primal LP; see Karmarkar-Karp bin packing algorithms. All in all, for any tolerance factor ''h'', the algorithm finds a basic feasible solution of cost at most LOPT(I) + ''h'' (where LOPT(I) denotes the optimal value of the fractional LP on instance ''I''), and runs in time:

O\left(S^8 \log{S} \log^2\left(\frac{S n}{e h}\right) + \frac{S^4 n \log{S}}{h}\log\left(\frac{S n}{e h}\right) \right),

where ''S'' is the number of different sizes, ''n'' is the number of items, and the size of the smallest item is ''eB''. In particular, if ''e'' ≥ 1/''n'' and ''h''=1, the algorithm finds a solution with at most LOPT+1 bins in time:

O\left(S^8 \log{S} \log^2{n} + S^4 n \log{S}\log{n} \right).

A randomized variant of this algorithm runs in expected time:

O\left(S^7 \log{S} \log^2\left(\frac{S n}{e h}\right) + \frac{S^4 n \log{S}}{h}\log\left(\frac{S n}{e h}\right) \right).
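The separation step above can be made concrete with a small dynamic-programming knapsack routine. The following is an illustrative sketch (not taken from Karmarkar and Karp) that assumes integer item sizes; given candidate prices y, it either certifies that y is feasible for the dual LP or returns a violated configuration:

def separation_oracle(sizes, y, B):
    """Find a configuration (multiset of sizes with total size <= B) of maximum
    total price, using an unbounded-knapsack dynamic program over capacities 0..B.
    Returns (True, None) if every configuration has price at most 1; otherwise
    (False, config) where config is a dict size -> count violating its constraint."""
    # best[cap] = maximum price achievable within capacity cap;
    # choice[cap] remembers the last size added, for reconstruction.
    best = [0.0] * (B + 1)
    choice = [None] * (B + 1)
    for cap in range(1, B + 1):
        for s in sizes:
            if s <= cap and best[cap - s] + y[s] > best[cap]:
                best[cap] = best[cap - s] + y[s]
                choice[cap] = s
    if best[B] <= 1 + 1e-9:           # all dual constraints hold (up to a tolerance)
        return True, None
    config, cap = {}, B               # reconstruct the most profitable configuration
    while choice[cap] is not None:
        s = choice[cap]
        config[s] = config.get(s, 0) + 1
        cap -= s
    return False, config

# Example: sizes 3 and 4, bin size 10, candidate prices y.
print(separation_oracle([3, 4], {3: 0.4, 4: 0.5}, 10))
# -> (False, {3: 2, 4: 1}): configuration "334" has total price 1.3 > 1.

Inside the ellipsoid method, the returned configuration supplies the violated inequality A^c\cdot \mathbf{y}\leq 1 that is used to cut the current ellipsoid.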


Rounding the fractional LP

Karmarkar and Karp further developed a way to round the fractional LP into an approximate solution of the integral LP; see Karmarkar-Karp bin packing algorithms. Their proof shows that the additive integrality gap of this LP is in O(log^2(''n'')). Later, Hoberg and Rothvoss improved their result and proved that the integrality gap is in O(log(''n'')). The best known lower bound on the integrality gap is a constant Ω(1). Finding the exact integrality gap is an open problem.


In bin covering

In the bin covering problem, there are ''n'' items with different sizes. The goal is to pack the items into a ''maximum'' number of bins, where each bin should contain items of total size ''at least'' ''B''. A natural configuration LP for this problem could be:
\text{maximize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{subject to}~~ A \mathbf{x}\leq \mathbf{n}~~~\text{and}~~ \mathbf{x}\geq 0
where ''A'' represents all configurations of items with sum ''at least'' ''B'' (one can take only the inclusion-minimal configurations). The difficulty with this LP is that, in the bin-covering problem, small items are problematic, since they may be essential for the optimal solution; with small items allowed, the number of configurations may be too large even for the technique of Karmarkar and Karp. Csirik, Johnson and Kenyon present an alternative LP. First, they define a set of items that are called ''small''. Let ''T'' be the total size of all small items. Then, they construct a matrix ''A'' representing all configurations of the remaining (non-small) items with sum less than 2''B''. Then, they consider the above LP with one additional constraint:

\text{maximize}~~\mathbf{1}\cdot \mathbf{x}~~\text{subject to}~~A \mathbf{x}\leq \mathbf{n},~~\sum_{c\in C:\, \mathrm{sum}(c)<B} (B-\mathrm{sum}(c))\cdot x_c \leq T,~~\mathbf{x}\geq 0.

The additional constraint guarantees that the "vacant space" in the bins can be filled by the small items. The dual of this LP is more complex and cannot be solved by a simple knapsack-problem separation oracle. Csirik, Johnson and Kenyon present a different method to solve it approximately, in time exponential in 1/''ε'' (where ''ε'' is the required accuracy). Jansen and Solis-Oba present an improved method, also with run-time exponential in 1/''ε''.
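To illustrate (a rough sketch under simplifying assumptions, not the construction of Csirik, Johnson and Kenyon; the threshold defining "small" items and all names are ad hoc choices), such an LP can be assembled explicitly for a tiny instance and handed to a generic LP solver:

import numpy as np
from scipy.optimize import linprog

def bin_covering_configuration_lp(item_counts, B, small_threshold):
    # item_counts maps each size to the number of items of that size.
    # Items of size below small_threshold are treated as "small"; the
    # configurations are built from the remaining (large) sizes only.
    large = {s: k for s, k in item_counts.items() if s >= small_threshold}
    T = sum(s * k for s, k in item_counts.items() if s < small_threshold)
    order = sorted(large)

    # Enumerate all non-empty multisets of large sizes with total size < 2B.
    configs = []
    def extend(prefix, index, total):
        if index == len(order):
            if any(prefix):
                configs.append((tuple(prefix), total))
            return
        s, k = order[index], 0
        while total + k * s < 2 * B:
            extend(prefix + [k], index + 1, total + k * s)
            k += 1
    extend([], 0, 0)

    A = np.array([[counts[i] for counts, _ in configs] for i in range(len(order))])
    n = np.array([large[s] for s in order])
    # Vacant space that configuration c leaves in a bin that must reach level B
    # (zero if the configuration already has total size at least B).
    vacancy = np.array([max(0, B - total) for _, total in configs])

    # maximize 1·x  <=>  minimize -1·x, subject to A x <= n, vacancy·x <= T, x >= 0.
    res = linprog(-np.ones(len(configs)),
                  A_ub=np.vstack([A, vacancy]),
                  b_ub=np.append(n, T),
                  bounds=(0, None))
    return -res.fun

# Example: bin size 10; items of sizes 6 and 7 are "large", the size-1 items are small.
print(bin_covering_configuration_lp({7: 4, 6: 3, 1: 10}, B=10, small_threshold=2))
# prints the optimal fractional number of covered bins for this instance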


In machine scheduling

In the problem of unrelated-machines scheduling, there are ''m'' different machines that should process ''n'' different jobs. When machine ''i'' processes job ''j'', it takes time p_{i,j}. The goal is to partition the jobs among the machines such that the maximum completion time of a machine is as small as possible. The decision version of this problem is: given a time ''T'', is there a partition in which the completion time of every machine is at most ''T''?

For each machine ''i'', there are finitely many subsets of jobs that can be processed by machine ''i'' in time at most ''T''. Each such subset is called a ''configuration'' for machine ''i''. Denote by C_i(T) the set of all configurations for machine ''i'', given time ''T''. For each machine ''i'' and configuration ''c'' in C_i(T), define a variable x_{i,c} which equals 1 if the configuration actually used on machine ''i'' is ''c'', and 0 otherwise. Then, the constraints are:

* \sum_{c\in C_i(T)}x_{i,c} = 1 for every machine ''i'' in 1,...,''m'' (each machine uses exactly one configuration);
* \sum_{i=1}^m \sum_{c\in C_i(T):\, j\in c}x_{i,c} = 1 for every job ''j'' in 1,...,''n'' (each job is processed by exactly one machine);
* x_{i,c} \in \{0,1\} for every machine ''i'' and configuration ''c'' in C_i(T).

This integer program has a feasible solution if and only if there is a schedule with makespan at most ''T''; the configuration-LP is obtained by relaxing the last constraint to x_{i,c} \geq 0.
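For intuition, the following brute-force sketch (illustrative only, and exponential in the instance size) enumerates the configurations C_i(T) and searches for an integral solution of the above program on a tiny instance:

from itertools import combinations, product

def configurations(p_i, T, jobs):
    """All subsets of jobs that one machine can finish within time T,
    where p_i[j] is the processing time of job j on that machine."""
    configs = []
    for r in range(len(jobs) + 1):
        for subset in combinations(jobs, r):
            if sum(p_i[j] for j in subset) <= T:
                configs.append(frozenset(subset))
    return configs

def feasible(p, T):
    """Decision version by brute force: is there a choice of one configuration
    per machine that covers every job exactly once?  This mirrors the 0/1
    program above and is practical only for very small instances."""
    jobs = range(len(p[0]))
    all_jobs = set(jobs)
    per_machine = [configurations(p_i, T, jobs) for p_i in p]
    for choice in product(*per_machine):
        covered = [j for c in choice for j in c]
        if len(covered) == len(all_jobs) and set(covered) == all_jobs:
            return True
    return False

# Two machines and three jobs; p[i][j] is the time of job j on machine i.
p = [[4, 2, 7],
     [3, 6, 3]]
print(feasible(p, 6))  # True: machine 0 takes job 1, machine 1 takes jobs 0 and 2
print(feasible(p, 5))  # False: no partition has makespan at most 5

The Properties section below discusses the quality of the fractional relaxation of this program.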


Properties

The integrality gap of the configuration-LP for unrelated-machines scheduling is 2.


See also

* High-multiplicity bin packing


References

* Gilmore, P. C. and Gomory, R. E. (1961). "A linear programming approach to the cutting-stock problem". ''Operations Research'' 9: 849-859.