In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to ''multiple'' parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by Downey and Fellows.
Under the assumption that P ≠ NP, there exist many natural problems that require superpolynomial running time when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter ''k''. Hence, if ''k'' is fixed at a small value and the growth of the function over ''k'' is relatively small, then such problems can still be considered "tractable" despite their traditional classification as "intractable".
The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely, if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential (or at least superpolynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.
Problems in which some parameter ''k'' is fixed are called parameterized problems. A parameterized problem that allows for such an fpt-algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability.
Many problems have the following form: given an object ''x'' and a nonnegative integer ''k'', does ''x'' have some property that depends on ''k''? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm which is exponential ''only'' in ''k'', and not in the input size.
In this way, parameterized complexity can be seen as ''two-dimensional'' complexity theory. This concept is formalized as follows:
:A ''parameterized problem'' is a language L ⊆ Σ* × ℕ, where Σ is a finite alphabet. The second component is called the ''parameter'' of the problem.
:A parameterized problem ''L'' is ''fixed-parameter tractable'' if the question "(''x'', ''k'') ∈ L?" can be decided in running time f(k) · |x|^O(1), where f is an arbitrary function depending only on ''k''. The corresponding complexity class is called FPT.
For example, there is an algorithm which solves the vertex cover problem in O(kn + 1.274^k) time, where ''n'' is the number of vertices and ''k'' is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter.
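The idea behind such algorithms can be illustrated by the classic bounded search tree for vertex cover, sketched below in Python (the function name and representation of the graph as an edge list are illustrative choices, not part of the article). It runs in O(2^k · m) time, which already places the problem in FPT; the 1.274^k bound comes from more refined branching rules.

```python
def vertex_cover_branch(edges, k):
    """Return True iff the graph given by its edge list has a vertex
    cover of size <= k.  Branching idea: pick any edge (u, v); every
    cover must contain u or v, so try both choices with budget k - 1.
    The search tree has depth at most k, hence at most 2^k leaves."""
    edges = list(edges)
    if not edges:
        return True            # no edges left: the empty set covers everything
    if k == 0:
        return False           # edges remain but the budget is exhausted
    u, v = edges[0]
    # Branch 1: put u in the cover (drop all edges touching u).
    # Branch 2: put v in the cover (drop all edges touching v).
    return (vertex_cover_branch([e for e in edges if u not in e], k - 1)
            or vertex_cover_branch([e for e in edges if v not in e], k - 1))
```

For a fixed small ''k'' the 2^k factor is a constant, so the algorithm is efficient even on large graphs.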
Complexity classes
FPT
FPT contains the ''fixed parameter tractable'' problems, which are those that can be solved in time f(k) · |x|^O(1) for some computable function f. Typically, this function is thought of as single exponential, such as 2^O(k), but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form f(n, k), such as n^k. The class FPL (fixed parameter linear) is the class of problems solvable in time f(k) · |x| for some computable function f. FPL is thus a subclass of FPT.
An example is the satisfiability problem, parameterised by the number of variables. A given formula of size ''m'' with ''k'' variables can be checked by brute force in time O(2^k · m). A vertex cover of size ''k'' in a graph of order ''n'' can be found in time O(2^k · n), so this problem is also in FPT.
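The brute-force bound for satisfiability is straightforward to realize: enumerate all 2^k assignments and check each clause. A minimal sketch (function name and CNF encoding are assumptions for illustration):

```python
from itertools import product

def sat_brute_force(clauses, k):
    """Decide satisfiability of a CNF formula over variables 1..k by
    trying all 2^k assignments; each check costs O(m) for formula size m.
    Clauses are lists of nonzero ints: +i means variable i, -i its negation."""
    for bits in product([False, True], repeat=k):
        assign = {i + 1: bits[i] for i in range(k)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False
```

Note that the parameter here is the number of variables, not the formula size; for formulas with few variables but many clauses this is genuinely fast.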
An example of a problem that is thought not to be in FPT is graph coloring parameterised by the number of colors. It is known that 3-coloring is NP-hard, and an algorithm for graph ''k''-colouring in time f(k) · n^O(1) for ''k'' = 3 would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then P = NP.
There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by f(k) + |x|^O(1). Also, a parameterised problem is in FPT if it has a so-called kernel. Kernelization is a preprocessing technique that reduces the original instance to its "hard kernel", a possibly much smaller instance that is equivalent to the original instance but has a size that is bounded by a function of the parameter.
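A standard textbook kernel for vertex cover (often attributed to Buss, used here purely as an illustration of the idea) shows what such preprocessing looks like: any vertex of degree greater than ''k'' must be in every cover of size at most ''k'', and after exhaustively taking such vertices a yes-instance has at most k² edges.

```python
def vertex_cover_kernel(edges, k):
    """Reduce a vertex cover instance to a kernel with at most k^2 edges.
    Returns (reduced_edges, remaining_budget), or None if the instance
    is already recognizably a no-instance."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:
                # v must be in any cover of size <= k: take it.
                edges = [e for e in edges if v not in e]
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    if len(edges) > k * k:
        return None    # k vertices of degree <= k cannot cover > k^2 edges
    return edges, k
```

Solving the kernel by brute force then gives an fpt-algorithm, since the kernel's size depends only on ''k''.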
FPT is closed under a parameterised notion of reductions called ''fpt-reductions''. Such reductions transform an instance (''x'', ''k'') of some problem into an equivalent instance (''x''′, ''k''′) of another problem (with ''k''′ ≤ g(''k'')) and can be computed in time f(k) · p(|x|), where p is a polynomial.
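A classic illustration (not specific to this article) is the transformation between independent set and clique via the complement graph: ''G'' has an independent set of size ''k'' iff the complement of ''G'' has a clique of size ''k''. The parameter is unchanged (so ''k''′ ≤ g(''k'') with g the identity) and the transformation is polynomial, so this is in particular an fpt-reduction. A minimal sketch, with an assumed edge-list representation:

```python
def independent_set_to_clique(edges, n, k):
    """fpt-reduction sketch: map an independent set instance (G, k) on
    vertices 0..n-1 to the clique instance (complement of G, k).
    Runs in O(n^2) time and leaves the parameter unchanged."""
    complement = [(u, v) for u in range(n) for v in range(u + 1, n)
                  if (u, v) not in edges and (v, u) not in edges]
    return complement, k
```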
Obviously, FPT contains all polynomial-time computable problems. Moreover, it contains all optimisation problems in NP that allow an
efficient polynomial-time approximation scheme (EPTAS).
''W'' hierarchy
The ''W'' hierarchy is a collection of computational complexity classes. A parameterized problem is in the class ''W''[''i''] if every instance (''x'', ''k'') can be transformed (in fpt-time) to a combinatorial circuit that has weft at most ''i'', such that (''x'', ''k'') is in the problem if and only if there is a satisfying assignment to the inputs that assigns ''1'' to exactly ''k'' inputs. The weft is the largest number of logical units with fan-in greater than two on any path from an input to the output. The total number of logical units on the paths (known as depth) must be limited by a constant that holds for all instances of the problem.
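The target problem of these transformations is weighted circuit satisfiability: does some assignment setting exactly ''k'' of the ''n'' inputs to 1 satisfy the circuit? A brute-force sketch (the circuit is modelled simply as a predicate on the input bits, an assumption made for illustration) makes clear why this is not obviously in FPT: trying all weight-''k'' assignments takes roughly n^k time, not f(k) · n^O(1).

```python
from itertools import combinations

def weighted_sat(n, k, circuit):
    """Decide weighted circuit satisfiability by enumerating all C(n, k)
    assignments of Hamming weight exactly k.  `circuit` is any predicate
    on a tuple of n bits, standing in for a bounded-weft circuit."""
    for ones in combinations(range(n), k):
        bits = tuple(1 if i in ones else 0 for i in range(n))
        if circuit(bits):
            return True
    return False
```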
Note that