In computational geometry, a maximum disjoint set (MDS) is a largest set of non-overlapping geometric shapes selected from a given set of candidate shapes. Every set of non-overlapping shapes is an independent set in the intersection graph of the shapes. Therefore, the MDS problem is a special case of the maximum independent set (MIS) problem. Both problems are NP-complete, but finding an MDS may be easier than finding an MIS in two respects:
* For the general MIS problem, the best known exact algorithms are exponential. In some geometric intersection graphs, there are sub-exponential algorithms for finding an MDS.
* The general MIS problem is hard to approximate and does not even admit a constant-factor approximation. In some geometric intersection graphs, there are polynomial-time approximation schemes (PTAS) for finding an MDS.

Finding an MDS is important in applications such as automatic label placement, VLSI circuit design, and cellular frequency-division multiplexing.

The MDS problem can be generalized by assigning a different weight to each shape and searching for a disjoint set with a maximum total weight.

In the following text, MDS(''C'') denotes the maximum disjoint set in a set ''C''.


Greedy algorithms

Given a set ''C'' of shapes, an approximation to MDS(''C'') can be found by the following greedy algorithm:
* INITIALIZATION: Initialize an empty set, ''S''.
* SEARCH: For every shape ''x'' in ''C'':
*# Calculate ''N(x)'' – the subset of all shapes in ''C'' that intersect ''x'' (including ''x'' itself).
*# Calculate the largest independent set in this subset: ''MDS(N(x))''.
* Select an ''x'' for which |MDS(''N''(''x''))| is minimized.
* Add ''x'' to ''S''.
* Remove ''x'' and ''N(x)'' from ''C''.
* If there are shapes left in ''C'', go back to SEARCH.
* END: return the set ''S''.

For every shape ''x'' that we add to ''S'', we lose the shapes in ''N(x)'', because they are intersected by ''x'' and thus cannot be added to ''S'' later on. However, some of these shapes intersect each other, so in any case they cannot all be in the optimal solution. The largest subset of shapes that ''can'' all be in the optimal solution is ''MDS(N(x))''. Therefore, selecting an ''x'' that minimizes |MDS(''N''(''x''))| minimizes the loss from adding ''x'' to ''S''. In particular, if we can guarantee that there is always an ''x'' for which |MDS(''N''(''x''))| is bounded by a constant (say, ''M''), then this greedy algorithm yields a constant ''M''-factor approximation, as we can guarantee that:

: |S| \geq \frac{|\mathrm{MDS}(C)|}{M}

Such an upper bound ''M'' exists for several interesting cases, described in the following subsections. A generic sketch of the greedy procedure is shown below.
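The following is a minimal Python sketch of the greedy framework above. It assumes each shape provides an `intersects(other)` predicate; the helper `brute_force_mds`, used here to evaluate |MDS(N(x))|, is a hypothetical stand-in that is only practical when the neighborhoods N(x) are small – which is exactly the situation in the cases below, where |MDS(N(x))| ≤ M.

```python
from itertools import combinations

def brute_force_mds(shapes):
    """Hypothetical helper: exact MDS by brute force; only usable on tiny inputs."""
    for size in range(len(shapes), 0, -1):
        for subset in combinations(shapes, size):
            if all(not a.intersects(b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

def greedy_mds(shapes):
    """Greedy M-factor approximation: repeatedly pick the shape whose
    neighborhood contains the smallest maximum disjoint set."""
    remaining = list(shapes)
    chosen = []
    while remaining:
        # For each candidate x, compute N(x) and |MDS(N(x))|.
        best_x, best_nbrs, best_loss = None, None, None
        for x in remaining:
            nbrs = [y for y in remaining if y is x or y.intersects(x)]
            loss = len(brute_force_mds(nbrs))
            if best_loss is None or loss < best_loss:
                best_x, best_nbrs, best_loss = x, nbrs, loss
        chosen.append(best_x)
        # Remove x and everything it intersects.
        remaining = [y for y in remaining if y not in best_nbrs]
    return chosen
```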


1-dimensional intervals: exact polynomial algorithm

When ''C'' is a set of intervals on a line, ''M'' = 1, and thus the greedy algorithm finds the exact MDS. To see this, assume w.l.o.g. that the intervals are vertical, and let ''x'' be the interval with the ''highest bottom endpoint''. All other intervals intersected by ''x'' must cross its bottom endpoint. Therefore, all intervals in ''N(x)'' intersect each other, and ''MDS(N(x))'' has a size of at most 1 (see figure). Consequently, in the 1-dimensional case the MDS can be found exactly in time ''O''(''n'' log ''n''):
# Sort the intervals in ascending order of their bottom endpoints (this takes time ''O''(''n'' log ''n'')).
# Add an interval with the highest bottom endpoint, and delete all intervals intersecting it.
# Continue until no intervals remain.

This algorithm is analogous to the earliest-deadline-first solution of the interval scheduling problem. In contrast to the 1-dimensional case, in 2 or more dimensions the MDS problem becomes NP-complete, so one must settle either for exact super-polynomial algorithms or for approximate polynomial-time algorithms.
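A minimal Python sketch of this exact algorithm, stated for intervals on the x-axis. Picking the interval with the highest bottom endpoint among vertical intervals corresponds, after mirroring, to the classic rule of repeatedly picking the interval with the smallest right endpoint; intervals are treated as closed, so touching endpoints count as an intersection.

```python
def max_disjoint_intervals(intervals):
    """Exact maximum disjoint set of closed intervals (a, b) on a line,
    in O(n log n) time: sort by right endpoint and greedily take every
    interval that starts strictly after the last chosen one ends."""
    chosen = []
    last_end = float("-inf")
    for a, b in sorted(intervals, key=lambda iv: iv[1]):
        if a > last_end:          # does not intersect any chosen interval
            chosen.append((a, b))
            last_end = b
    return chosen

# Example: three pairwise-overlapping intervals plus one isolated interval.
print(max_disjoint_intervals([(0, 3), (1, 4), (2, 5), (6, 7)]))
# -> [(0, 3), (6, 7)]
```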


Fat shapes: constant-factor approximations

When ''C'' is a set of unit disks, ''M'' = 3, because the leftmost disk (the disk whose center has the smallest ''x'' coordinate) intersects at most 3 other disjoint disks (see figure). Therefore, the greedy algorithm yields a 3-approximation, i.e., it finds a disjoint set with a size of at least ''MDS(C)''/3. Similarly, when ''C'' is a set of axis-parallel unit squares, ''M'' = 2. When ''C'' is a set of arbitrary-size disks, ''M'' = 5, because the disk with the smallest radius intersects at most 5 other disjoint disks (see figure). Similarly, when ''C'' is a set of arbitrary-size axis-parallel squares, ''M'' = 4. Other constants can be calculated for other regular polygons.


Divide-and-conquer algorithms

The most common approach to finding an MDS is divide-and-conquer. A typical algorithm in this approach looks like the following:
# Divide the given set of shapes into two or more subsets, such that the shapes in each subset cannot overlap the shapes in other subsets because of geometric considerations.
# Recursively find the MDS in each subset separately.
# Return the union of the MDSs from all subsets.

The main challenge with this approach is to find a geometric way to divide the set into subsets. This may require discarding a small number of shapes that do not fit into any one of the subsets, as explained in the following subsections.


Axis-parallel rectangles with the same height: 2-approximation

Let ''C'' be a set of ''n'' axis-parallel rectangles in the plane, all with the same height ''H'' but with varying lengths. The following algorithm finds a disjoint set with a size of at least |MDS(''C'')|/2 in time ''O''(''n'' log ''n''):
* Draw ''m'' horizontal lines, such that:
*# The separation between two adjacent lines is strictly more than ''H''.
*# Each line intersects at least one rectangle (hence ''m'' ≤ ''n'').
*# Each rectangle is intersected by exactly one line.
* Since the height of all rectangles is ''H'', no rectangle can be intersected by more than one line. Therefore, the lines partition the set of rectangles into ''m'' subsets (R_1,\ldots,R_m) – each subset contains the rectangles intersected by a single line.
* For each subset R_i, compute an exact MDS M_i using the one-dimensional greedy algorithm (see above): all rectangles in R_i cross the same horizontal line, so two of them overlap exactly when their ''x''-intervals overlap.
* By construction, the rectangles in R_i can intersect only rectangles in R_{i-1} or in R_{i+1}. Therefore, each of the following two unions is a disjoint set:
** Union of odd MDSs: M_1 \cup M_3 \cup \cdots
** Union of even MDSs: M_2 \cup M_4 \cup \cdots
* Return the larger of these two unions; its size must be at least |MDS(''C'')|/2.

A Python sketch of this 2-approximation is shown below.
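A minimal Python sketch of the above 2-approximation, under the assumption that each rectangle is given as (x1, x2, y1, y2) with y2 − y1 = H; the strip construction and the per-strip 1-dimensional solver follow the steps listed above.

```python
def mds_interval(intervals):
    """Exact 1-D maximum disjoint set (greedy by right endpoint); returns indices."""
    chosen, last_end = [], float("-inf")
    for idx, (a, b) in sorted(enumerate(intervals), key=lambda t: t[1][1]):
        if a > last_end:
            chosen.append(idx)
            last_end = b
    return chosen

def same_height_2approx(rects, H):
    """2-approximation for axis-parallel rectangles of common height H.
    rects: list of (x1, x2, y1, y2) with y2 - y1 == H."""
    remaining = sorted(rects, key=lambda r: r[3])   # sort by top edge
    strips = []
    while remaining:
        t = remaining[0][3]                         # lowest top edge: line y = t
        strip = [r for r in remaining if r[2] <= t] # rectangles crossed by the line
        strips.append(strip)
        remaining = [r for r in remaining if r[2] > t]
    # Solve each strip exactly: inside a strip, overlap <=> x-intervals overlap.
    solutions = []
    for strip in strips:
        idxs = mds_interval([(r[0], r[1]) for r in strip])
        solutions.append([strip[i] for i in idxs])
    odd = [r for s in solutions[0::2] for r in s]   # strips 1, 3, 5, ...
    even = [r for s in solutions[1::2] for r in s]  # strips 2, 4, 6, ...
    return odd if len(odd) >= len(even) else even
```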


Axis-parallel rectangles with the same height: PTAS

Let ''C'' be a set of ''n'' axis-parallel rectangles in the plane, all with the same height but with varying lengths. There is an algorithm that finds a disjoint set with a size of at least |MDS(''C'')|/(1 + 1/''k'') in time O(n^{2k-1}), for every constant ''k'' > 1. The algorithm is an improvement of the above-mentioned 2-approximation, obtained by combining dynamic programming with the shifting technique of Hochbaum and Maass.

This algorithm can be generalized to ''d'' dimensions. If the labels have the same size in all dimensions except one, a similar approximation can be found by applying dynamic programming along one of the dimensions. This also reduces the time to n^{O(1/\varepsilon)}.


Axis-parallel rectangles: logarithmic-factor approximation

Let ''C'' be a set of ''n'' axis-parallel rectangles in the plane. The following divide-and-conquer algorithm finds a disjoint set with a size of at least \frac{|\mathrm{MDS}(C)|}{\lceil\log_2 n\rceil} in time O(n \log n):
* INITIALIZATION: sort the horizontal edges of the given rectangles by their ''y''-coordinate, and the vertical edges by their ''x''-coordinate (this step takes time ''O''(''n'' log ''n'')).
* STOP CONDITION: If ''n'' ≤ 2, compute the MDS directly and return.
* RECURSIVE PART:
*# Let x_\mathrm{med} be the median ''x''-coordinate of the vertical edges.
*# Partition the input rectangles into three groups according to their relation to the line x=x_\mathrm{med}: those entirely to its left (R_\mathrm{left}), those entirely to its right (R_\mathrm{right}), and those intersected by it (R_\mathrm{med}). By construction, the cardinalities of R_\mathrm{left} and R_\mathrm{right} are at most ''n''/2.
*# Recursively compute an ''approximate'' MDS in R_\mathrm{left} (M_\mathrm{left}) and in R_\mathrm{right} (M_\mathrm{right}), and calculate their union. By construction, the rectangles in M_\mathrm{left} and M_\mathrm{right} are all disjoint, so M_\mathrm{left} \cup M_\mathrm{right} is a disjoint set.
*# Compute an ''exact'' MDS in R_\mathrm{med} (M_\mathrm{med}). Since all rectangles in R_\mathrm{med} cross the single vertical line x=x_\mathrm{med}, this computation is equivalent to finding an MDS of a set of intervals (their ''y''-intervals), and can be done exactly in time ''O''(''n'' log ''n'') (see above).
* Return either M_\mathrm{left} \cup M_\mathrm{right} or M_\mathrm{med} – whichever of them is larger.

It can be proved by induction that, at the last step, either M_\mathrm{left} \cup M_\mathrm{right} or M_\mathrm{med} has a cardinality of at least \frac{|\mathrm{MDS}(C)|}{\lceil\log_2 n\rceil} (a Python sketch of this recursion is given below).

Chalermsook and Chuzhoy have improved the approximation factor to O(\log\log n). Chalermsook and Walczak have presented an O(\log\log n)-approximation algorithm for the more general setting in which each rectangle has a ''weight'', and the goal is to find an independent set of maximum total weight.
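A compact Python sketch of this divide-and-conquer recursion, assuming rectangles are given as (x1, x2, y1, y2); the exact solver for the rectangles crossing the median line reuses the 1-dimensional interval algorithm on their y-intervals.

```python
def intersects(r, s):
    """Closed-rectangle intersection test for (x1, x2, y1, y2) tuples."""
    return not (r[1] < s[0] or s[1] < r[0] or r[3] < s[2] or s[3] < r[2])

def mds_intervals(items, key):
    """Exact 1-D MDS: items are rectangles, key maps one to a closed interval."""
    chosen, last_end = [], float("-inf")
    for r in sorted(items, key=lambda r: key(r)[1]):
        a, b = key(r)
        if a > last_end:
            chosen.append(r)
            last_end = b
    return chosen

def mds_rectangles_log_approx(rects):
    """Divide-and-conquer log-factor approximation for axis-parallel rectangles."""
    n = len(rects)
    if n <= 2:
        # Base case: at most 2 rectangles, solve directly.
        if n == 2 and intersects(rects[0], rects[1]):
            return [rects[0]]
        return list(rects)
    xs = sorted(x for r in rects for x in (r[0], r[1]))
    x_med = xs[len(xs) // 2]                              # median of the 2n vertical edges
    left  = [r for r in rects if r[1] < x_med]            # entirely to the left
    right = [r for r in rects if r[0] > x_med]            # entirely to the right
    med   = [r for r in rects if r[0] <= x_med <= r[1]]   # crossed by the line
    combined = mds_rectangles_log_approx(left) + mds_rectangles_log_approx(right)
    exact_med = mds_intervals(med, key=lambda r: (r[2], r[3]))  # 1-D on y-intervals
    return combined if len(combined) >= len(exact_med) else exact_med
```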


Axis-parallel rectangles: constant-factor approximation

For a long time, it was not known whether a constant-factor approximation exists for axis-parallel rectangles of different lengths and heights. It was conjectured that such an approximation could be found using ''guillotine cuts''. In particular, if there exists a guillotine separation of axis-parallel rectangles in which \Omega(n) rectangles are separated, then it can be used in a dynamic-programming approach to find a constant-factor approximation to the MDS. To date, it is not known whether such a guillotine separation exists. However, there are constant-factor approximation algorithms using non-guillotine cuts:
* Joseph S. B. Mitchell presented a 10-factor approximation algorithm. His algorithm is based on partitioning the plane into ''corner-clipped rectangles''.
* Gálvez, Khan, Mari, Mömke, Pittu, and Wiese presented an algorithm partitioning the plane into a more general class of polygons. This simplifies the analysis and improves the approximation factor to 6; they subsequently improved it to 3 and then to (2+ε).


Fat objects with identical sizes: PTAS

Let ''C'' be a set of ''n'' squares or circles of identical size. Hochbaum and Maass presented a polynomial-time approximation scheme for finding an MDS using a simple shifted-grid strategy. It finds a solution within (1 − ε) of the maximum in n^{O(1/\varepsilon^2)} time and linear space. The strategy generalizes to any collection of fat objects of roughly the same size (i.e., when the maximum-to-minimum size ratio is bounded by a constant).
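The following Python sketch illustrates a simplified, brute-force variant of the shifted-grid idea for unit squares (each given by its lower-left corner). For each of the k × k grid shifts it discards the squares whose interiors are cut by a grid line, solves each k × k cell separately by brute force (at most k² interior-disjoint unit squares fit in such a cell), and keeps the best shift. This is only meant to convey the shifting argument; the actual Hochbaum–Maass scheme organizes the per-cell computation more carefully.

```python
import math
from itertools import combinations

def disjoint(p, q):
    """Interiors of unit squares at lower-left corners p, q do not overlap."""
    return abs(p[0] - q[0]) >= 1 or abs(p[1] - q[1]) >= 1

def cell_mds(squares, cap):
    """Brute-force exact MDS within one cell; 'cap' bounds the answer size."""
    for size in range(min(cap, len(squares)), 0, -1):
        for subset in combinations(squares, size):
            if all(disjoint(p, q) for p, q in combinations(subset, 2)):
                return list(subset)
    return []

def cut(coord, shift, k):
    """Is the open interval (coord, coord + 1) cut by a grid line = shift (mod k)?"""
    g = shift + k * math.ceil((coord - shift) / k)
    if g == coord:           # line only touches the left/bottom edge
        g += k
    return g < coord + 1

def shifted_grid_mds(squares, k):
    """(1 - 1/k)^2-style approximation for unit squares via grid shifting."""
    best = []
    for a in range(k):                       # vertical-line shift
        for b in range(k):                   # horizontal-line shift
            kept = [s for s in squares if not cut(s[0], a, k) and not cut(s[1], b, k)]
            cells = {}
            for s in kept:                   # group surviving squares by cell
                key = (math.floor((s[0] - a) / k), math.floor((s[1] - b) / k))
                cells.setdefault(key, []).append(s)
            solution = [sq for cell in cells.values() for sq in cell_mds(cell, k * k)]
            if len(solution) > len(best):
                best = solution
    return best
```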


Fat objects with arbitrary sizes: PTAS

Let ''C'' be a set of ''n'' fat objects, such as squares or circles, of arbitrary sizes. There is a PTAS for finding an MDS based on multi-level grid alignment. It was discovered by two groups at approximately the same time, and described in two different ways.


Level partitioning

An algorithm of Erlebach, Jansen and Seidel finds a disjoint set with a size of at least (1 − 1/''k'')^2 · |MDS(''C'')| in time n^{O(k^2)}, for every constant ''k'' > 1. It works in the following way.

Scale the disks so that the smallest disk has diameter 1. Partition the disks into levels, based on the logarithm of their size: the ''j''-th level contains all disks with diameter between (k+1)^j and (k+1)^{j+1}, for ''j'' ≥ 0 (the smallest disk is in level 0). For each level ''j'', impose a grid on the plane that consists of lines that are (k+1)^{j+1} apart from each other. By construction, every disk can intersect at most one horizontal line and one vertical line from its level.

For every ''r'', ''s'' between 0 and ''k'', define ''D''(''r'',''s'') as the subset of disks that are not intersected by any horizontal line of their level whose index modulo ''k'' is ''r'', nor by any vertical line of their level whose index modulo ''k'' is ''s''. By the pigeonhole principle, there is at least one pair (''r'',''s'') such that |\mathrm{MDS}(D(r,s))| \geq (1-\tfrac{1}{k})^2 \cdot |\mathrm{MDS}(C)|, i.e., we can look for an MDS only within ''D''(''r'',''s'') and miss only a small fraction of the disks in the optimal solution (a short derivation is given below). The algorithm is therefore:
* For all k^2 possible values of ''r'', ''s'' (0 ≤ ''r'',''s'' < ''k''), compute an exact MDS of ''D''(''r'',''s'') using dynamic programming.
* Return the largest of these k^2 sets.
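The pigeonhole bound can be seen by an averaging argument (a sketch, using the notation above). Fix an optimal solution ''OPT''. Each disk in ''OPT'' is intersected by at most one horizontal line and at most one vertical line of its level, so it is excluded from ''D''(''r'',''s'') for at most one value of ''r'' and at most one value of ''s''. Therefore:

: \sum_{r=0}^{k-1}\sum_{s=0}^{k-1} |OPT \cap D(r,s)| \geq (k-1)^2 \cdot |OPT|

Averaging over the k^2 pairs, some pair (''r'',''s'') satisfies |OPT \cap D(r,s)| \geq (1-\tfrac{1}{k})^2 \cdot |OPT|; since OPT \cap D(r,s) is a disjoint subset of ''D''(''r'',''s''), the claimed bound on |MDS(''D''(''r'',''s''))| follows.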


Shifted quadtrees

An algorithm of Chan finds a disjoint set with a size of at least (1 − 2/''k'')·|MDS(''C'')| in time n^{O(k)}, for every constant ''k'' > 1. The algorithm uses shifted quadtrees.

The key concept of the algorithm is ''alignment'' to the quadtree grid. An object of size ''r'' is called ''k-aligned'' (where ''k'' ≥ 1 is a constant) if it is inside a quadtree cell of size ''R'' ≤ ''kr''. By definition, a ''k''-aligned object that intersects the boundary of a quadtree cell of size ''R'' must have a size of at least ''R''/''k''. The boundary of a cell of size ''R'' can be covered by 4''k'' squares of size ''R''/''k''; hence the number of disjoint fat objects intersecting the boundary of that cell is at most 4''kc'', where ''c'' is a constant measuring the fatness of the objects.

Therefore, if all objects are fat and ''k''-aligned, it is possible to find the exact maximum disjoint set in time n^{O(kc)} using a divide-and-conquer algorithm: start with a quadtree cell that contains all objects; then recursively divide it into smaller quadtree cells, find the maximum in each smaller cell, and combine the results to get the maximum in the larger cell. Since the number of disjoint fat objects intersecting the boundary of every quadtree cell is bounded by 4''kc'', we can simply "guess" which objects intersect the boundary in the optimal solution, and then apply divide-and-conquer to the objects inside.

If ''almost'' all objects are ''k''-aligned, we can just discard the objects that are not ''k''-aligned, and find a maximum disjoint set of the remaining objects in time n^{O(k)}. This results in a (1 − ''e'') approximation, where ''e'' is the fraction of objects that are not ''k''-aligned.

If most objects are not ''k''-aligned, we can try to make them ''k''-aligned by ''shifting'' the grid in multiples of (1/''k'', 1/''k''). First, scale the objects so that they are all contained in the unit square. Then, consider ''k'' shifts of the grid: (0,0), (1/''k'',1/''k''), (2/''k'',2/''k''), ..., ((''k'' − 1)/''k'',(''k'' − 1)/''k''); i.e., for each ''j'' in 0, ..., ''k'' − 1, consider a shift of the grid by (''j''/''k'', ''j''/''k''). It is possible to prove that every object will be 2''k''-aligned for at least ''k'' − 2 values of ''j''. Now, for every ''j'', discard the objects that are not 2''k''-aligned under the (''j''/''k'', ''j''/''k'') shift, and find a maximum disjoint set of the remaining objects; call that set ''A''(''j''). Let ''A''* be a true maximum disjoint set. Then:

: \sum_{j=0}^{k-1} |A(j)| \geq (k-2)\cdot|A^*|

Therefore, the largest ''A''(''j'') has a size of at least (1 − 2/''k'')·|''A''*| (a short derivation is given below). The return value of the algorithm is the largest ''A''(''j''); the approximation factor is (1 − 2/''k''), and the run time is n^{O(k)}. We can make the approximation factor as close to 1 as we want, so this is a PTAS.

Both versions can be generalized to ''d'' dimensions (with different approximation ratios) and to the weighted case.
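The final averaging step can be spelled out as follows (a sketch). Each object of ''A''* is 2''k''-aligned, and hence retained, for at least ''k'' − 2 of the ''k'' shifts, and for every such shift it is available to the exact algorithm, so:

: \sum_{j=0}^{k-1} |A(j)| \geq \sum_{j=0}^{k-1} \left|A^* \cap \{\text{objects that are } 2k\text{-aligned under shift } j\}\right| \geq (k-2)\cdot|A^*|

and therefore

: \max_j |A(j)| \geq \frac{k-2}{k}\cdot|A^*| = \left(1-\frac{2}{k}\right)\cdot|A^*|.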


Geometric separator algorithms

Several divide-and-conquer algorithms are based on a certain geometric separator theorem. A geometric separator is a line or shape that separates a given set of shapes into two smaller subsets, such that the number of shapes lost during the division is relatively small. This allows both PTASs and sub-exponential exact algorithms, as explained below.


Fat objects with arbitrary sizes: PTAS using geometric separators

Let ''C'' be a set of ''n'' fat objects, such as squares or circles, of arbitrary sizes. Chan described an algorithm that finds a disjoint set with a size of at least (1 − O(1/\sqrt{b}))·|MDS(''C'')| in time n^{O(b)}, for every constant ''b'' > 1. The algorithm is based on the following geometric separator theorem, which can be proved similarly to the proof of the existence of a geometric separator for disjoint squares:

::For every set ''C'' of fat objects, there is a rectangle that partitions ''C'' into three subsets of objects – ''C''inside, ''C''outside and ''C''boundary – such that:
::* |MDS(''C''inside)| ≤ ''a''·|MDS(''C'')|
::* |MDS(''C''outside)| ≤ ''a''·|MDS(''C'')|
::* |MDS(''C''boundary)| ≤ ''c''·\sqrt{|\mathrm{MDS}(C)|}
::where ''a'' < 1 and ''c'' are constants.

If we could calculate MDS(''C'') exactly, we could make the constant ''a'' as low as 2/3 by a proper selection of the separator rectangle. But since we can only approximate MDS(''C'') by a constant factor, the constant ''a'' must be larger. Fortunately, ''a'' remains a constant independent of |''C''|.

This separator theorem allows building the following PTAS. Select a constant ''b'' and check all possible combinations of up to ''b'' + 1 objects:
* If MDS(''C'') has a size of at most ''b'' (i.e., every set of ''b'' + 1 objects contains an intersecting pair), then just return that MDS and exit. This step takes n^{O(b)} time.
* Otherwise, use a geometric separator to separate ''C'' into two subsets. Find the approximate MDS in ''C''inside and ''C''outside separately, and return their combination as the approximate MDS in ''C''.

Let ''E''(''m'') be the error of the above algorithm when the optimal MDS size is |MDS(''C'')| = ''m''. When ''m'' ≤ ''b'', the error is 0 because the maximum disjoint set is calculated exactly; when ''m'' > ''b'', the error increases by at most ''c''·\sqrt{m} – the number of objects from the optimal solution that may be lost at the separator. The worst case for the algorithm is when the split in each step is in the maximum possible ratio, which is ''a'':(1 − ''a''). Therefore the error function satisfies the following recurrence relation:

: E(m) = 0 \ \ \ \ \text{for } m\leq b
: E(m) = E(a\cdot m) + E((1-a)\cdot m) + c\cdot\sqrt{m} \ \ \ \ \text{for } m>b

One closed form consistent with this recurrence (see the verification below) is:

: E(m) = \frac{c}{\sqrt{a}+\sqrt{1-a}-1}\cdot\left(\frac{m}{\sqrt{b}} - \sqrt{m}\right),

i.e., E(m) = O(m/\sqrt{b}). We can make the relative error as small as we want by a proper selection of ''b''. This PTAS is more space-efficient than the PTAS based on quadtrees, and can handle a generalization where the objects may slide, but it cannot handle the weighted case.
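A quick check of the closed form above (a sketch; only the O(m/\sqrt{b}) behaviour matters): substituting E(m) = \beta\left(\frac{m}{\sqrt{b}} - \sqrt{m}\right) with \beta = \frac{c}{\sqrt{a}+\sqrt{1-a}-1} into the recurrence gives

: E(a m) + E((1-a) m) + c\sqrt{m} = \beta\frac{m}{\sqrt{b}} - \beta\left(\sqrt{a}+\sqrt{1-a}\right)\sqrt{m} + c\sqrt{m} = \beta\left(\frac{m}{\sqrt{b}} - \sqrt{m}\right) = E(m),

using \beta\left(\sqrt{a}+\sqrt{1-a}-1\right) = c, and E(b) = 0 matches the base case.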


Disks with a bounded size-ratio: exact sub-exponential algorithm

Let ''C'' be a set of ''n'' disks, such that the ratio between the largest radius and the smallest radius is at most ''r''. The following algorithm finds MDS(''C'') exactly in time 2^{O(\sqrt{n})} (the constant in the exponent depends on ''r''). The algorithm is based on a width-bounded geometric separator on the set ''Q'' of the centers of all disks in ''C''. This separator theorem allows building the following exact algorithm:
* Find a separator line such that at most 2''n''/3 centers are to its right (''C''right), at most 2''n''/3 centers are to its left (''C''left), and at most O(\sqrt{n}) centers are at a distance of less than ''r''/2 from the line (''C''int).
* Consider all possible non-overlapping subsets of ''C''int. There are at most 2^{O(\sqrt{n})} such subsets. For each such subset, remove the disks of ''C''left and ''C''right that intersect it, recursively compute the MDS of the remaining disks in ''C''left and in ''C''right, and return the largest combined set.

The run time of this algorithm satisfies the following recurrence relation:

: T(1) = 1
: T(n) = 2^{O(\sqrt{n})}\cdot T\!\left(\frac{2n}{3}\right) \ \ \ \ \text{for } n>1

The solution of this recurrence (see the unrolling below) is:

: T(n) = 2^{O(\sqrt{n})}
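A sketch of why the recurrence solves to 2^{O(\sqrt{n})}: unrolling it over the \log_{3/2} n levels of recursion gives

: T(n) = \prod_{i\ge 0} 2^{O\left(\sqrt{(2/3)^i\, n}\right)} = 2^{O\left(\sqrt{n}\,\sum_{i\ge 0} \left(\sqrt{2/3}\right)^{i}\right)} = 2^{O(\sqrt{n})},

since the geometric series \sum_{i\ge 0} \left(\sqrt{2/3}\right)^{i} converges to a constant.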


Local search algorithms


Pseudo-disks: a PTAS

A ''pseudo-disks-set'' is a set of objects in which the boundaries of every pair of objects intersect at most twice. (Note that this definition relates to the whole collection, and does not say anything about the shapes of the individual objects in the collection.) A pseudo-disks-set has a bounded union complexity, i.e., the number of intersection points on the boundary of the union of all objects is linear in the number of objects. For example, a set of squares or circles of arbitrary sizes is a pseudo-disks-set.

Let ''C'' be a pseudo-disks-set with ''n'' objects. A local search algorithm by Chan and Har-Peled finds a disjoint set of size at least (1-O(1/\sqrt{b}))\cdot|\mathrm{MDS}(C)| in time n^{O(b)}, for every integer constant ''b'' ≥ 1:
* INITIALIZATION: Initialize an empty set, ''S''.
* SEARCH: Loop over all the subsets of ''C'' − ''S'' whose size is between 1 and ''b'' + 1. For each such subset ''X'':
** Verify that ''X'' itself is independent (otherwise go to the next subset);
** Calculate the set ''Y'' of objects in ''S'' that intersect ''X'';
** If |''Y''| < |''X''|, then remove ''Y'' from ''S'' and insert ''X'': S := (S \setminus Y) \cup X.
* END: when no improving exchange exists, return the set ''S''.

Every exchange in the search step increases the size of ''S'' by at least 1, and thus can happen at most ''n'' times. The algorithm is very simple; the difficult part is to prove the approximation ratio. A Python sketch of the local search is shown below.
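A minimal Python sketch of this (b+1)-level local search, assuming each object provides an `intersects(other)` predicate; the run time is polynomial for fixed b, but this naive version makes no attempt at efficiency.

```python
from itertools import combinations

def local_search_mds(objects, b):
    """(b+1)-level local search for a maximum disjoint set:
    repeatedly swap in an independent subset X of at most b+1 new objects
    whenever it conflicts with fewer than |X| currently chosen objects."""
    S = []
    improved = True
    while improved:
        improved = False
        outside = [o for o in objects if o not in S]
        for size in range(1, b + 2):
            for X in combinations(outside, size):
                # X must itself be independent.
                if any(p.intersects(q) for p, q in combinations(X, 2)):
                    continue
                # Y: chosen objects that conflict with X.
                Y = [s for s in S if any(s.intersects(x) for x in X)]
                if len(Y) < len(X):
                    S = [s for s in S if s not in Y] + list(X)
                    improved = True
                    break
            if improved:
                break
    return S
```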


Linear programming relaxation algorithms


Pseudo-disks: a PTAS

Let ''C'' be a pseudo-disks-set with ''n'' objects and union complexity ''u''. Using linear programming relaxation, it is possible to find a disjoint set of size at least \Omega(n/u)\cdot|\mathrm{MDS}(C)| (for pseudo-disks, ''u'' = ''O''(''n''), so this is a constant-factor approximation). This is possible either with a randomized algorithm that has a high probability of success and run time O(n^3), or with a deterministic algorithm with a slower (but still polynomial) run time. This algorithm can be generalized to the weighted case.
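For intuition, the natural linear programming relaxation of the (weighted) maximum disjoint set problem has one variable per object; the relaxation actually used in the cited result and its rounding scheme may differ in details, but they have this general form:

: \max \sum_{i} w_i x_i
: \text{subject to } x_i + x_j \leq 1 \ \text{ for every pair of intersecting objects } i, j,
: 0 \leq x_i \leq 1 \ \text{ for every object } i.

An integral solution (x_i \in \{0,1\}) is exactly the indicator vector of a disjoint set; the algorithm solves the relaxation and then rounds the fractional solution, losing a factor that depends on the union complexity.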


Other classes of shapes for which approximations are known

* Line segments in the two-dimensional plane.
* Arbitrary two-dimensional convex objects.
* Curves with a bounded number of intersection points.


External links


* Approximation Algorithms for Maximum Independent Set of Pseudo-Disks – presentation by Sariel Har-Peled.
* Javascript code for calculating exact maximum disjoint set of rectangles.

