
A fully polynomial-time approximation scheme (FPTAS) is an algorithm for finding approximate solutions to function problems, especially optimization problems. An FPTAS takes as input an instance of the problem and a parameter ε > 0. It returns as output a value that is at least 1-\epsilon times the correct value, and at most 1 + \epsilon times the correct value. In the context of optimization problems, the correct value is understood to be the value of the optimal solution, and it is often implied that an FPTAS should produce a valid solution (and not just the value of the solution). Returning a value and finding a solution with that value are equivalent assuming that the problem possesses self-reducibility. Importantly, the run-time of an FPTAS is polynomial in the problem size and in 1/ε. This is in contrast to a general polynomial-time approximation scheme (PTAS): the run-time of a general PTAS is polynomial in the problem size for each specific ε, but might be exponential in 1/ε. The term FPTAS may also be used to refer to the class of problems that have an FPTAS. FPTAS is a subset of PTAS, and unless P = NP, it is a strict subset (see discussion following Definition 1.30, p. 20).


Relation to other complexity classes

All problems in FPTAS are fixed-parameter tractable with respect to the standard parameterization. Any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have an FPTAS unless P=NP. However, the converse fails: for example, if P does not equal NP, knapsack with two constraints is not strongly NP-hard, but it has no FPTAS even when the optimal objective is polynomially bounded.


Converting a dynamic program to an FPTAS

Woeginger presented a general scheme for converting a certain class of dynamic programs to an FPTAS.


Input

The scheme handles optimization problems in which the input is defined as follows:
* The input is made of ''n'' vectors, ''x''1,...,''xn''.
* Each input vector is made of ''a'' non-negative integers, where ''a'' may depend on the input.
* All components of the input vectors are encoded in binary. So the size of the problem is O(''n''+log(''X'')), where ''X'' is the sum of all components in all vectors.


Extremely-simple dynamic program

It is assumed that the problem has a dynamic-programming (DP) algorithm using ''states''. Each state is a vector made of ''b'' non-negative integers, where ''b'' is independent of the input. The DP works in ''n'' steps. At each step ''i'', it processes the input ''xi'' and constructs a set of states ''Si''. Each state encodes a partial solution to the problem, using inputs ''x''1,...,''xi''. The components of the DP are:
* A set ''S''0 of ''initial states''.
* A set ''F'' of ''transition functions''. Each function ''f'' in ''F'' maps a pair (state, input) to a new state.
* An objective function ''g'', mapping a state to its value.
The algorithm of the DP is:
* Let ''S''0 := the set of initial states.
* For ''k'' = 1 to ''n'' do:
** Let ''Sk'' := {''f''(''s'', ''xk'') : ''f'' in ''F'', ''s'' in ''Sk''−1}.
* Output min/max {''g''(''s'') : ''s'' in ''Sn''}.
The run-time of the DP is linear in the number of possible states. In general, this number can be exponential in the size of the input problem: it can be in O(''n'' V^b), where ''V'' is the largest integer that can appear in a state. If ''V'' is in O(''X''), then the run-time is in O(''n'' X^b), which is only
pseudo-polynomial time
, since it is exponential in the problem size, which is in O(log ''X''). The way to make it polynomial is to ''trim the state-space'': instead of keeping all possible states in each step, keep only a subset of the states; remove states that are "sufficiently close" to other states. Under certain conditions, this trimming can be done in a way that does not change the objective value by too much. To formalize this, we assume that the problem at hand has a non-negative integer vector ''d'' = (''d''1,...,''db''), called the ''degree vector'' of the problem. For every real number ''r''>1, we say that two state-vectors ''s''1,''s''2 are ''(d,r)-close'' if, for each coordinate ''j'' in 1,...,''b'': r^{-d_j} \cdot s_{2,j} \leq s_{1,j} \leq r^{d_j} \cdot s_{2,j} (in particular, if ''dj''=0 for some ''j'', then s_{1,j} = s_{2,j}). A problem is called extremely-benevolent if it satisfies the following three conditions:
# ''Proximity is preserved by the transition functions'': For any ''r''>1, for any transition function ''f'' in ''F'', for any input-vector ''x'', and for any two state-vectors ''s''1,''s''2, the following holds: if ''s''1 is (''d,r'')-close to ''s''2, then ''f''(''s''1,''x'') is (''d,r'')-close to ''f''(''s''2,''x'').
#* A sufficient condition for this can be checked as follows. For every function ''f''(''s'',''x'') in ''F'', and for every coordinate ''j'' in 1,...,''b'', denote by ''fj''(''s'',''x'') the ''j''-th coordinate of ''f''. This ''fj'' can be seen as an integer function in ''b''+''a'' variables. Suppose that every such ''fj'' is a polynomial with non-negative coefficients. Convert it to a polynomial of a single variable ''z'', by substituting ''s''=(z^{d_1},...,z^{d_b}) and ''x''=(1,...,1). If the degree of the resulting polynomial in ''z'' is at most ''dj'', then condition 1 is satisfied.
# ''Proximity is preserved by the value function'': There exists an integer ''G'' ≥ 0 (which is a function of the value function ''g'' and the degree vector ''d''), such that for any ''r''>1, and for any two state-vectors ''s''1,''s''2, the following holds: if ''s''1 is (''d,r'')-close to ''s''2, then: ''g''(''s''1) ≤ r^G · ''g''(''s''2) (in minimization problems); ''g''(''s''1) ≥ r^{-G} · ''g''(''s''2) (in maximization problems).
#* A sufficient condition for this is that the function ''g'' is a polynomial function (of ''b'' variables) with non-negative coefficients.
# ''Technical conditions'':
#* All transition functions ''f'' in ''F'' and the value function ''g'' can be evaluated in polytime.
#* The number |''F''| of transition functions is polynomial in ''n'' and log(''X'').
#* The set ''S''0 of initial states can be computed in time polynomial in ''n'' and log(''X'').
#* Let ''Vj'' be the set of all values that can appear in coordinate ''j'' in a state. Then, the ln of every value in ''Vj'' is at most a polynomial P1(''n'',log(''X'')).
#* If ''dj''=0, the cardinality of ''Vj'' is at most a polynomial P2(''n'',log(''X'')).
For every extremely-benevolent problem, the dynamic program can be converted into an FPTAS. Define:
* \epsilon := the required approximation ratio.
* r := 1 + \frac{\epsilon}{2 G n}, where ''G'' is the constant from condition 2. Note that \frac{1}{\ln r} \leq 1 + \frac{2 G n}{\epsilon}.
* L := \left\lceil \frac{P_1(n, \log X)}{\ln r} \right\rceil, where ''P''1 is the polynomial from condition 3 (an upper bound on the ln of every value that can appear in a state vector). Note that L \leq \left\lceil \left(1 + \frac{2 G n}{\epsilon}\right) P_1(n, \log X) \right\rceil, so it is polynomial in the size of the input and in 1/\epsilon. Also, r^L = e^{\ln r \cdot L} \geq e^{P_1(n, \log X)}, so by definition of ''P''1, every integer that can appear in a state-vector is in the range [0, r^L].
* Partition the range [0, r^L] into ''L''+1 ''r''-intervals: I_0 = [0,1);\ I_1 = [1,r);\ I_2 = [r,r^2);\ \ldots;\ I_L = [r^{L-1}, r^L].
* Partition the state space into ''r''-boxes: each coordinate ''k'' with degree ''dk'' ≥ 1 is partitioned into the ''L''+1 intervals above; each coordinate with ''dk'' = 0 is partitioned into P2(''n'',log(''X'')) singleton intervals - an interval for each possible value of coordinate ''k'' (where ''P''2 is the polynomial from condition 3 above).
** Note that every possible state is contained in exactly one ''r''-box; if two states are in the same ''r''-box, then they are (''d'',''r'')-close.
* R := (L + 1 + P_2(n, \log X))^b.
** Note that the number of ''r''-boxes is at most ''R''. Since ''b'' is a fixed constant, this ''R'' is polynomial in the size of the input and in 1/\epsilon.
The FPTAS runs similarly to the DP, but in each step, it ''trims'' the state set into a smaller set ''Tk'' that contains exactly one state in each ''r''-box. The algorithm of the FPTAS is:
* Let ''T''0 := ''S''0 = the set of initial states.
* For ''k'' = 1 to ''n'' do:
** Let ''Uk'' := {''f''(''s'', ''xk'') : ''f'' in ''F'', ''s'' in ''Tk''−1}.
** Let ''Tk'' := a trimmed copy of ''Uk'': for each ''r''-box that contains one or more states of ''Uk'', keep exactly one state in ''Tk''.
* Output min/max {''g''(''s'') : ''s'' in ''Tn''}.
The run-time of the FPTAS is polynomial in the total number of possible states in each ''Ti'', which is at most the total number of ''r''-boxes, which is at most ''R'', which is polynomial in ''n'', log(''X''), and 1/\epsilon. Note that, for each state ''su'' in ''Uk'', its subset ''Tk'' contains at least one state ''st'' that is (''d'',''r'')-close to ''su''. Also, each ''Uk'' is a subset of the ''Sk'' in the original (untrimmed) DP. The main lemma for proving the correctness of the FPTAS is:
For every step ''k'' in 0,...,''n'', for every state ''ss'' in ''Sk'', there is a state ''st'' in ''Tk'' that is (''d'', r^k)-close to ''ss''.
The proof is by induction on ''k''. For ''k''=0 we have ''Tk''=''Sk''; every state is (''d'',1)-close to itself. Suppose the lemma holds for ''k''−1. For every state ''ss'' in ''Sk'', let ''ss−'' be one of its predecessors in ''Sk''−1, so that ''f''(''ss−'',''x'') = ''ss''. By the induction assumption, there is a state ''st−'' in ''Tk''−1 that is (''d'', r^{k-1})-close to ''ss−''. Since proximity is preserved by transitions (Condition 1 above), ''f''(''st−'',''x'') is (''d'', r^{k-1})-close to ''f''(''ss−'',''x'') = ''ss''. This ''f''(''st−'',''x'') is in ''Uk''. After the trimming, there is a state ''st'' in ''Tk'' that is (''d'',''r'')-close to ''f''(''st−'',''x''). This ''st'' is (''d'', r^k)-close to ''ss''. Consider now the state ''s''* in ''Sn'' which corresponds to the optimal solution (that is, ''g''(''s''*)=OPT). By the lemma above, there is a state ''t''* in ''Tn'' which is (''d'', r^n)-close to ''s''*. Since proximity is preserved by the value function, ''g''(''t''*) ≥ r^{-Gn} · ''g''(''s''*) for a maximization problem. By definition of ''r'', r^{-Gn} \geq (1-\epsilon). So g(t^*) \geq (1-\epsilon)\cdot OPT. A similar argument works for a minimization problem.
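The trimmed DP described above can be sketched in Python. This is an illustrative, non-optimized rendition of the generic scheme, not a definitive implementation: the caller supplies the problem-specific components (transition functions, objective, degree vector, and the constant ''G'' from condition 2), and states are trimmed to one representative per ''r''-box, with degree-0 coordinates kept exactly.

```python
import math

def trimmed_dp(inputs, initial_states, transitions, value, degrees,
               eps, G, minimize=True):
    """Generic state-trimming FPTAS sketch for an extremely-benevolent DP.

    inputs: list of n input items x_1..x_n
    initial_states: iterable of initial state tuples (the set S_0)
    transitions: list of functions f(state, x) -> new state (the set F)
    value: objective g(state) -> number
    degrees: degree vector d, one entry per state coordinate
    eps: required approximation ratio; G: constant from condition 2
    """
    n = len(inputs)
    r = 1 + eps / (2 * G * n)          # trimming granularity

    def box(state):
        # Map a state to its r-box: a coordinate with d_j >= 1 is indexed
        # by the r-interval containing it; d_j = 0 coordinates (and zeros)
        # are kept exactly (singleton intervals).
        key = []
        for s_j, d_j in zip(state, degrees):
            if d_j == 0 or s_j == 0:
                key.append(('exact', s_j))
            else:
                key.append(('interval', int(math.log(s_j, r))))
        return tuple(key)

    T = set(initial_states)
    for x in inputs:
        U = {f(s, x) for f in transitions for s in T}
        reps = {}
        for s in U:                    # keep one state per r-box
            reps.setdefault(box(s), s)
        T = set(reps.values())
    best = min if minimize else max
    return best(value(s) for s in T)
```

For instance, with one-coordinate states, the two transitions "add the next integer to the running sum" and "skip it", and g(s) = s_1, the routine approximates the maximum attainable subset sum to within a factor 1−ε.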


Examples

Here are some examples of extremely-benevolent problems that have an FPTAS by the above theorem.
1. Multiway number partitioning (equivalently, Identical-machines scheduling) with the goal of minimizing the largest sum is extremely-benevolent. Here, we have ''a'' = 1 (the inputs are integers) and ''b'' = the number of bins (which is considered fixed). Each state is a vector of ''b'' integers representing the sums of the ''b'' bins. There are ''b'' functions: each function ''j'' represents inserting the next input into bin ''j''. The function ''g''(''s'') picks the largest element of ''s''. ''S''0 = {(0,...,0)}. The conditions for extreme-benevolence are satisfied with degree-vector ''d''=(1,...,1) and ''G''=1. The result extends to uniform-machines scheduling and unrelated-machines scheduling whenever the number of machines is fixed (this is required because ''R'' - the number of ''r''-boxes - is exponential in ''b''). Denoted Pm||\max C_j or Qm||\max C_j or Rm||\max C_j.
* ''Note'': consider the special case ''b''=2, where the goal is to minimize the ''square of the difference'' between the two part sums. The same DP can be used, but this time with value function ''g''(''s'') = (''s''1−''s''2)^2. Now, condition 2 is violated: the states (''s''1,''s''1) and (''s''1,''s''2) may be (''d,r'')-close, but ''g''(''s''1,''s''1) = 0 while ''g''(''s''1,''s''2) > 0, so the above theorem cannot be applied. Indeed, the problem does not have an FPTAS unless P=NP, since an FPTAS could be used to decide in polytime whether the optimal value is 0.
2. Sum of cubed job completion time on any fixed number of identical or uniform machines - the latter denoted by Qm||\sum C_j^3 - is extremely-benevolent with ''a''=1, ''b''=3, ''d''=(1,1,3). It can be extended to any fixed power of the completion time.
3. Sum of weighted completion time on any fixed number of identical or uniform machines - the latter denoted by Qm||\sum w_j C_j.
4. Sum of completion time on any fixed number of identical or uniform machines, with time-dependent processing times: Qm|time-dep|\sum C_j. This holds even for ''weighted'' sum of completion time.
5. Weighted earliness-tardiness about a common due-date on any fixed number of machines: Pm||\sum w_j |C_j - d|.


Simple dynamic program

Simple dynamic programs add to the above formulation the following components:
* A set ''H'' of ''filtering functions'', of the same cardinality as ''F''. Each function ''hi'' in ''H'' maps a pair (state, input) to a Boolean value. The value should be "true" if and only if activating the transition ''fi'' on this pair would lead to a valid state.
* A ''dominance relation'', which is a partial order on states (no indifferences, not all pairs are comparable), and a ''quasi-dominance relation'', which is a total preorder on states (indifferences allowed, all pairs are comparable).
The original DP is modified as follows:
* Let ''S''0 := the set of initial states.
* For ''k'' = 1 to ''n'' do:
** Let ''Sk'' := {''fj''(''s'', ''xk'') : ''fj'' in ''F'', ''s'' in ''Sk''−1, ''hj''(''s'', ''xk'') = True}, where ''hj'' is the filter function corresponding to the transition function ''fj''.
* Output min/max {''g''(''s'') : ''s'' in ''Sn''}.
A problem is called benevolent if it satisfies the following conditions (which extend conditions 1, 2, 3 above):
# ''Proximity is preserved by the transition functions'': For any ''r''>1, for any transition function ''f'' in ''F'', for any input-vector ''x'', and for any two state-vectors ''s''1,''s''2, the following holds:
#* if ''s''1 is (''d,r'')-close to ''s''2, and ''s''1 quasi-dominates ''s''2, then either (a) ''f''(''s''1,''x'') is (''d,r'')-close to ''f''(''s''2,''x''), and ''f''(''s''1,''x'') quasi-dominates ''f''(''s''2,''x''), or (b) ''f''(''s''1,''x'') dominates ''f''(''s''2,''x'').
#* if ''s''1 dominates ''s''2, then ''f''(''s''1,''x'') dominates ''f''(''s''2,''x'').
# ''Proximity is preserved by the value function'': There exists an integer ''G'' ≥ 0 (a function of the value function ''g'' and the degree vector ''d''), such that for any ''r''>1, and for any two state-vectors ''s''1,''s''2, the following holds:
#* if ''s''1 is (''d,r'')-close to ''s''2, and ''s''1 quasi-dominates ''s''2, then: ''g''(''s''1) ≤ r^G · ''g''(''s''2) (in minimization problems); ''g''(''s''1) ≥ r^{-G} · ''g''(''s''2) (in maximization problems).
#* if ''s''1 dominates ''s''2, then ''g''(''s''1) ≤ ''g''(''s''2) (in minimization problems); ''g''(''s''1) ≥ ''g''(''s''2) (in maximization problems).
# ''Technical conditions'' (in addition to the above):
#* The quasi-dominance relation can be decided in polynomial time.
# ''Conditions on the filter functions'': For any ''r''>1, for any filter function ''h'' in ''H'', for any input-vector ''x'', and for any two state-vectors ''s''1,''s''2, the following holds:
#* if ''s''1 is (''d,r'')-close to ''s''2, and ''s''1 quasi-dominates ''s''2, then ''h''(''s''1,''x'') ≥ ''h''(''s''2,''x'').
#* if ''s''1 dominates ''s''2, then ''h''(''s''1,''x'') ≥ ''h''(''s''2,''x'').
For every benevolent problem, the dynamic program can be converted into an FPTAS similarly to the one above, with two changes (boldfaced):
* Let ''T''0 := ''S''0 = the set of initial states.
* For ''k'' = 1 to ''n'' do:
** Let ''Uk'' := {''fj''(''s'', ''xk'') : ''fj'' in ''F'', ''s'' in ''Tk''−1, '''''hj''(''s'', ''xk'') = True'''}, where ''hj'' is the filter function corresponding to the transition function ''fj''.
** Let ''Tk'' := a trimmed copy of ''Uk'': for each ''r''-box that contains one or more states of ''Uk'', '''choose a single element that quasi-dominates all other elements in ''Uk''''', and insert it into ''Tk''.
* Output min/max {''g''(''s'') : ''s'' in ''Tn''}.
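The two changes can be seen in this Python fragment of one FPTAS step. It is a sketch under stated assumptions: `transitions` is a hypothetical list pairing each transition ''fj'' with its filter ''hj'', `box` maps a state to its ''r''-box key, and `quasi_dominates` decides the quasi-dominance relation; all three stand in for the problem-specific components.

```python
def fptas_step(T_prev, x, transitions, box, quasi_dominates):
    """One step of the benevolent-DP FPTAS: filter, then trim.

    transitions: list of (f_j, h_j) pairs - transition plus its filter.
    box: maps a state to its r-box key.
    quasi_dominates: the quasi-dominance relation (a total preorder).
    """
    # Change 1: apply the filter h_j before adding the new state.
    U = {f(s, x) for (f, h) in transitions for s in T_prev if h(s, x)}
    # Change 2: in each r-box, keep a state that quasi-dominates the rest
    # (since the relation is a total preorder, pairwise comparison and
    # transitivity leave a representative that quasi-dominates its box).
    reps = {}
    for s in U:
        key = box(s)
        if key not in reps or quasi_dominates(s, reps[key]):
            reps[key] = s
    return set(reps.values())
```

With knapsack-style states (weight, value), a capacity filter and weight-based quasi-dominance, this step keeps the lightest state in each box.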


Examples

Here are some examples of benevolent problems that have an FPTAS by the above theorem.
1. The 0-1 knapsack problem is benevolent. Here, we have ''a''=2: each input is a 2-vector (weight, value). There is a DP with ''b''=2: each state encodes (current weight, current value). There are two transition functions: ''f''1 corresponds to adding the next input item, and ''f''2 corresponds to not adding it. The corresponding filter functions are: ''h''1 verifies that the weight with the next input item is at most the knapsack capacity; ''h''2 always returns True. The value function ''g''(''s'') returns ''s''2. The initial state-set is {(0,0)}. The degree vector is (1,1). The dominance relation is trivial. The quasi-dominance relation compares only the weight coordinate: ''s'' quasi-dominates ''t'' iff ''s''1 ≤ ''t''1. The implication of this is that, if state ''t'' has a higher weight than state ''s'', then the transition functions are allowed to not preserve the proximity between ''t'' and ''s'' (it is possible, for example, that ''s'' has a successor and ''t'' does not have a corresponding successor). A similar algorithm was presented earlier by Ibarra and Kim. The run-time of this FPTAS can be improved to O(n \log(1/\epsilon) + 1/\epsilon^4) operations on integers. The exponent was later improved to 2.5.
* ''Note'': in the ''2-weighted knapsack problem'', each item has two weights and a value, and the goal is to maximize the value such that the ''sum of squares of the total weights'' is at most the knapsack capacity: \left(\sum_{i \in S} w_{i,1}\right)^2 + \left(\sum_{i \in S} w_{i,2}\right)^2 \leq W. We could solve it using a similar DP, where each state is (current weight 1, current weight 2, value). The quasi-dominance relation should be modified to: ''s'' quasi-dominates ''t'' iff (''s''1^2 + ''s''2^2) ≤ (''t''1^2 + ''t''2^2). But it violates Condition 1 above: quasi-dominance is not preserved by transition functions. For example, the state (2,2,..) quasi-dominates (1,3,..); but after adding the input (2,0,..) to both states, the result (4,2,..) does not quasi-dominate (3,3,..). So the theorem cannot be used.
Indeed, this problem does not have an FPTAS unless P=NP. The same is true for the two-dimensional knapsack problem. The same is true for the multiple subset sum problem: the quasi-dominance relation should be: ''s'' quasi-dominates ''t'' iff max(''s''1,''s''2) ≤ max(''t''1,''t''2), but it is not preserved by transitions, by the same example as above.
2. Minimizing the weighted number of tardy jobs, or maximizing the weighted number of early jobs, on a single machine; denoted 1||\sum w_j U_j.
3. Batch scheduling for minimizing the weighted number of tardy jobs: 1|batch|\sum w_j U_j.
4. Makespan of deteriorating jobs on a single machine: 1|deteriorate|\max C_j.
5. Total late work on a single machine: 1||\sum V_j.
6. Total weighted late work on a single machine: 1||\sum w_j V_j.
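Here is a minimal Python sketch of the benevolent DP for the 0-1 knapsack problem (example 1 above): the filter ''h''1 discards transitions that would exceed the capacity, and within each ''r''-box the trimming keeps a state that quasi-dominates the others, i.e. one of minimum weight. This is illustrative only; practical knapsack FPTASs are usually stated via value-scaling instead.

```python
import math

def knapsack_fptas(items, capacity, eps):
    """0-1 knapsack via the filtered, trimmed DP.

    items: list of (weight, value) pairs; state = (weight, value).
    Uses d = (1, 1) and G = 1 as in the example above.
    """
    n = len(items)
    r = 1 + eps / (2 * n)

    def box(state):
        # r-box index per coordinate; 0 is kept exactly.
        return tuple(0 if v == 0 else 1 + int(math.log(v, r))
                     for v in state)

    states = {(0, 0)}                       # S_0
    for w, v in items:
        # f_1 with filter h_1: add the item only if it still fits.
        add = {(cw + w, cv + v) for (cw, cv) in states
               if cw + w <= capacity}
        skip = states                       # f_2: do not add the item
        trimmed = {}
        for s in add | skip:
            key = box(s)
            # quasi-dominance: prefer the lighter state in each r-box
            if key not in trimmed or s[0] < trimmed[key][0]:
                trimmed[key] = s
        states = set(trimmed.values())
    return max(cv for (_, cv) in states)    # g(s) = value coordinate
```

Since each surviving state still records its exact weight, every returned value is achievable; the theorem guarantees it is at least (1−ε)·OPT.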


Non-examples

Despite the generality of the above result, there are cases in which it cannot be used.
1. In the total tardiness problem 1||\sum T_j, the dynamic programming formulation of Lawler requires updating all states in the old state space some ''B'' times, where ''B'' is of the order of ''X'' (the maximum input size). The same is true for a DP for economic lot-sizing. In these cases, the number of transition functions in ''F'' is ''B'', which is exponential in log(''X''), so the second technical condition is violated. The state-trimming technique is not useful, but another technique - input-rounding - has been used to design an FPTAS.
2. In the variance minimization problem 1||CTV, the objective function is g(s) = s_5 - (s_4-s_3)^2/n, which violates Condition 2, so the theorem cannot be used. But different techniques have been used to design an FPTAS.


Some other problems that have an FPTAS

* The knapsack problem, as well as some of its variants:
** 0-1 knapsack problem.
** Unbounded knapsack problem.
** Multi-dimensional knapsack problem with Delta-modular constraints.
** Multi-objective 0-1 knapsack problem.
** Parametric knapsack problem.
** Symmetric quadratic knapsack problem.
* Count-subset-sum (#SubsetSum) - finding the number of distinct subsets with a sum of at most ''C''.
* Restricted shortest path: finding a minimum-cost path between two nodes in a graph, subject to a delay constraint.
* Shortest paths and non-linear objectives.
* Counting edge-covers.
* Vector subset search problem where the dimension is fixed.


See also

* Benevolent dynamic programs, which admit an FPTAS, also admit an evolutionary algorithm.


References



External links

* Complexity Zoo: FPTAS