Parameterized Approximation Algorithm

A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in polynomial time in the input size and a function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability.

In traditional approximation algorithms, the goal is to find solutions that are at most a certain factor \alpha away from the optimal solution, known as an \alpha-approximation, in polynomial time. On the other hand, parameterized algorithms are designed to find exact solutions to problems, but with the constraint that the running time of the algorithm is polynomial in the input size and a function of a specific parameter k. The parameter describes some property of the input and is small in typical applications. The problem is said to be fixed-parameter tractable (FPT) if there is an algorithm that can find the optimum solution in f(k)n^{O(1)} time, where f(k) is a function independent of the input size n.

A parameterized approximation algorithm aims to find a balance between these two approaches by finding approximate solutions in FPT time: the algorithm computes an \alpha-approximation in f(k)n^{O(1)} time, where f(k) is a function independent of the input size n. This approach aims to overcome the limitations of both traditional approaches by having stronger guarantees on the solution quality compared to traditional approximations, while still having efficient running times as in FPT algorithms. An overview of the research area studying parameterized approximation algorithms can be found in the survey of Marx and the more recent survey by Feldmann et al.
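
The three regimes can be summarized side by side (a worked restatement of the definitions above for a minimization problem; ALG denotes the value of the computed solution and OPT the optimal value):

\begin{align*}
\text{approximation algorithm:} &\quad \mathrm{ALG} \le \alpha\cdot\mathrm{OPT} &&\text{in } n^{O(1)} \text{ time},\\
\text{FPT (exact) algorithm:} &\quad \mathrm{ALG} = \mathrm{OPT} &&\text{in } f(k)\, n^{O(1)} \text{ time},\\
\text{parameterized approximation:} &\quad \mathrm{ALG} \le \alpha\cdot\mathrm{OPT} &&\text{in } f(k)\, n^{O(1)} \text{ time}.
\end{align*}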


Obtainable approximation ratios

The full potential of parameterized approximation algorithms is utilized when a given optimization problem is shown to admit an \alpha-approximation algorithm running in f(k)n^{O(1)} time, while in contrast the problem neither has a polynomial-time \alpha-approximation algorithm (under some complexity assumption, e.g., P\neq NP), nor an FPT algorithm for the given parameter k (i.e., it is at least W[1]-hard). For example, some problems that are APX-hard and W[1]-hard admit a parameterized approximation scheme (PAS), i.e., for any \varepsilon>0 a (1+\varepsilon)-approximation can be computed in f(k,\varepsilon)n^{g(\varepsilon)} time for some functions f and g. This then circumvents the lower bounds in terms of polynomial-time approximation and fixed-parameter tractability. A PAS is similar in spirit to a polynomial-time approximation scheme (PTAS) but additionally exploits a given parameter k. Since the degree of the polynomial in the runtime of a PAS depends on the function g(\varepsilon), the value of \varepsilon is assumed to be arbitrary but constant in order for the PAS to run in FPT time. If this assumption is unsatisfying, \varepsilon is treated as a parameter as well to obtain an efficient parameterized approximation scheme (EPAS), which for any \varepsilon>0 computes a (1+\varepsilon)-approximation in f(k,\varepsilon)n^{O(1)} time for some function f. This is similar in spirit to an efficient polynomial-time approximation scheme (EPTAS).
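
Written out, the two kinds of schemes differ only in how \varepsilon may enter the runtime (a restatement of the definitions above; n is the input size):

\begin{align*}
\text{PAS:} &\quad (1+\varepsilon)\text{-approximation in } f(k,\varepsilon)\, n^{g(\varepsilon)} \text{ time},\\
\text{EPAS:} &\quad (1+\varepsilon)\text{-approximation in } f(k,\varepsilon)\, n^{O(1)} \text{ time}.
\end{align*}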


''k''-cut

The ''k''-cut problem has no polynomial-time (2-\varepsilon)-approximation algorithm for any \varepsilon>0, assuming P\neq NP and the small set expansion hypothesis. It is also W[1]-hard parameterized by the number k of required components. However, an EPAS exists, which computes a (1+\varepsilon)-approximation in (k/\varepsilon)^{O(k)}n^{O(1)} time.


Steiner Tree

The Steiner tree problem is FPT parameterized by the number of terminals. However, for the "dual" parameter consisting of the number k of non-terminals contained in the optimum solution, the problem is W[2]-hard (due to a folklore reduction from the Dominating Set problem). Steiner Tree is also known to be APX-hard. However, there is an EPAS computing a (1+\varepsilon)-approximation in 2^{O(k^2/\varepsilon^4)}n^{O(1)} time.


Strongly-connected Steiner subgraph

It is known that the Strongly Connected Steiner Subgraph problem is W[1]-hard parameterized by the number k of terminals, and also does not admit an O(\log^{2-\varepsilon} n)-approximation in polynomial time (under standard complexity assumptions). However, a 2-approximation can be computed in 3^k n^{O(1)} time. Furthermore, this is best possible, since no (2-\varepsilon)-approximation can be computed in f(k)n^{O(1)} time for any function f, under Gap-ETH.


''k''-median and ''k''-means

For the well-studied metric clustering problems of ''k''-median and ''k''-means parameterized by the number k of centers, it is known that no (1+2/e-\varepsilon)-approximation for k-Median and no (1+8/e-\varepsilon)-approximation for k-Means can be computed in f(k)n^{O(1)} time for any function f, under Gap-ETH. Matching parameterized approximation algorithms exist, but it is not known whether matching approximations can be computed in polynomial time. Clustering is often considered in settings of low dimensional data, and thus a practically relevant parameterization is by the dimension of the underlying metric. In Euclidean space, the k-Median and k-Means problems admit an EPAS parameterized by the dimension d, and also an EPAS parameterized by k. The former was generalized to an EPAS for the parameterization by the doubling dimension. For the loosely related highway dimension parameter, only an approximation scheme with XP runtime is known to date.


''k''-center

For the metric ''k''-center problem a 2-approximation can be computed in polynomial time. However, when parameterizing by either the number k of centers, the doubling dimension (in fact the dimension of a Manhattan metric), or the highway dimension, no parameterized (2-\varepsilon)-approximation algorithm exists, under standard complexity assumptions. Furthermore, the k-Center problem is W[1]-hard even on planar graphs when simultaneously parameterizing it by the number k of centers, the doubling dimension, the highway dimension, and the pathwidth. However, when combining k with the doubling dimension an EPAS exists, and the same is true when combining k with the highway dimension. For the more general version with vertex capacities, an EPAS exists for the parameterization by k and the doubling dimension, but not when using k and the highway dimension as the parameter. Regarding the pathwidth, k-Center admits an EPAS even for the more general treewidth parameter, and also for cliquewidth.
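
The polynomial-time 2-approximation mentioned above can be obtained, for instance, by the classical farthest-first traversal heuristic of Gonzalez. Below is a minimal Python sketch assuming the metric is given as a full distance matrix; the function and variable names are ours:

def k_center_two_approx(dist, k):
    """Farthest-first traversal: returns k center indices whose covering
    radius is at most twice the optimum for the metric given by dist."""
    n = len(dist)
    centers = [0]                      # start from an arbitrary point
    d_to_centers = list(dist[0])       # distance to the closest chosen center
    while len(centers) < min(k, n):
        nxt = max(range(n), key=lambda v: d_to_centers[v])  # farthest point
        centers.append(nxt)
        for v in range(n):
            d_to_centers[v] = min(d_to_centers[v], dist[nxt][v])
    return centers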


Densest subgraph

An optimization variant of the ''k''-Clique problem is the Densest ''k''-Subgraph problem (which is a 2-ary constraint satisfaction problem), where the task is to find a subgraph on k vertices with maximum number of edges. It is not hard to obtain a (k-1)-approximation by just picking a matching of size k/2 in the given input graph, since the maximum number of edges on k vertices is always at most \binom{k}{2}=k(k-1)/2. This is also asymptotically optimal, since under Gap-ETH no k^{o(1)}-approximation can be computed in FPT time parameterized by k.
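
A minimal Python sketch of this matching-based approach follows (using networkx to compute a maximum matching; the padding step for graphs whose maximum matching has fewer than k/2 edges is our addition and is not covered by the bound above):

import networkx as nx

def densest_k_subgraph_matching_approx(G, k):
    """Pick the endpoints of floor(k/2) matching edges (padded to k vertices
    if needed). The chosen vertices induce at least floor(k/2) edges, while
    any k vertices span at most k*(k-1)/2 edges, giving roughly a
    (k-1)-approximation."""
    matching = nx.max_weight_matching(G, maxcardinality=True)  # maximum matching
    chosen = set()
    for u, v in matching:
        if len(chosen) + 2 > k:
            break
        chosen.update((u, v))
    for v in G.nodes:                  # pad with arbitrary further vertices
        if len(chosen) >= k:
            break
        chosen.add(v)
    return chosen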


Dominating set

For the Dominating Set problem it is W[1]-hard to compute any g(k)-approximation in f(k)n^{O(1)} time for any functions g and f.


Approximate kernelization

Kernelization is a technique used in fixed-parameter tractability to pre-process an instance of an NP-hard problem in order to remove "easy parts" and reveal the NP-hard core of the instance. A kernelization algorithm takes an instance I and a parameter k, and returns a new instance I' with parameter k' such that the size of I' and k' is bounded as a function of the input parameter k, and the algorithm runs in polynomial time. An \alpha-approximate kernelization algorithm is a variation of this technique that is used in parameterized approximation algorithms. It returns a kernel I' such that any \beta-approximation in I' can be converted into an \alpha\beta-approximation to the input instance I in polynomial time. This notion was introduced by Lokshtanov et al., but there are other related notions in the literature such as Turing kernels and \alpha-fidelity kernelization.

As for regular (non-approximate) kernels, a problem admits an \alpha-approximate kernelization algorithm if and only if it has a parameterized \alpha-approximation algorithm. The proof of this fact is very similar to the one for regular kernels. However, the guaranteed approximate kernel might be of exponential size (or worse) in the input parameter. Hence it becomes interesting to find problems that admit polynomial sized approximate kernels. Furthermore, a polynomial-sized approximate kernelization scheme (PSAKS) is an \alpha-approximate kernelization algorithm that computes a polynomial-sized kernel and for which \alpha can be set to 1+\varepsilon for any \varepsilon>0.

For example, while the Connected Vertex Cover problem is FPT parameterized by the solution size, it does not admit a (regular) polynomial sized kernel (unless NP\subseteq coNP/poly), but a PSAKS exists. Similarly, the Steiner Tree problem is FPT parameterized by the number of terminals and does not admit a polynomial sized kernel (unless NP\subseteq coNP/poly), but a PSAKS exists. When parameterizing Steiner Tree by the number of non-terminals in the optimum solution, the problem is W[2]-hard (and thus admits no exact kernel at all, unless FPT=W[2]), but still admits a PSAKS.
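
Schematically, for a minimization problem in the notation above, an \alpha-approximate kernelization consists of two polynomial-time routines that compose as follows (a worked restatement of the definition; the routine names are ours):

\begin{align*}
\text{reduce:} &\quad (I,k) \;\mapsto\; (I',k') \quad\text{with } |I'| \text{ and } k' \text{ bounded in terms of } k,\\
\text{lift:} &\quad \text{any }\beta\text{-approximate solution of } I' \;\mapsto\; \text{a solution of } I \text{ of cost at most } \alpha\beta\cdot\mathrm{OPT}(I).
\end{align*}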

