Longest-processing-time-first Scheduling

Longest-processing-time-first (LPT) is a greedy algorithm for job scheduling. The input to the algorithm is a set of ''jobs'', each of which has a specific processing-time. There is also a number ''m'' specifying the number of ''machines'' that can process the jobs. The LPT algorithm works as follows:
# Order the jobs by descending order of their processing-time, such that the job with the longest processing time is first.
# Schedule each job in this sequence on a machine in which the current load (= total processing-time of scheduled jobs) is smallest.

Step 2 of the algorithm is essentially the list-scheduling (LS) algorithm. The difference is that LS loops over the jobs in an arbitrary order, while LPT pre-orders them by descending processing time.

LPT was first analyzed by Ronald Graham in the 1960s in the context of the identical-machines scheduling problem. Later, it was applied to many other variants of the problem.

LPT can also be described in a more abstract way, as an algorithm for multiway number partitioning. The input is a set ''S'' of numbers and a positive integer ''m''; the output is a partition of ''S'' into ''m'' subsets. LPT orders the inputs from largest to smallest, and puts each input in turn into the part with the smallest sum so far.
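The two steps can be sketched in a few lines of Python. The following is an illustrative sketch (the function name and return format are our own choices), using a min-heap so that step 2 takes logarithmic time per job:

```python
import heapq

def lpt(jobs, m):
    """Partition `jobs` into m subsets by the LPT rule.

    Returns (load, subset) pairs sorted by descending load.
    Illustrative sketch; the name and return format are our own choices.
    """
    machines = [(0, i, []) for i in range(m)]  # (load, tie-break index, subset)
    heapq.heapify(machines)
    for job in sorted(jobs, reverse=True):         # step 1: descending order
        load, i, subset = heapq.heappop(machines)  # step 2: least-loaded machine
        subset.append(job)
        heapq.heappush(machines, (load + job, i, subset))
    return sorted(((load, subset) for load, _, subset in machines), reverse=True)
```

For example, `lpt([4, 5, 6, 7, 8], 2)` returns the loads 17 and 13.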


Examples

If the input set is S = {4, 5, 6, 7, 8} and ''m'' = 2, then the resulting partition is {8, 5, 4}, {7, 6}, with sums 17 and 13. If ''m'' = 3, then the resulting 3-way partition is {8}, {7, 4}, {6, 5}, with sums 8, 11 and 11.


Properties

LPT might not find the optimal partition. For example, in the above instance with ''m'' = 2, the optimal partition is {8, 7}, {6, 5, 4}, where both sums are equal to 15. However, its suboptimality is bounded both in the worst case and in the average case; see the performance guarantees below. The running time of LPT is dominated by the sorting, which takes O(''n'' log ''n'') time, where ''n'' is the number of inputs.


Performance guarantees: identical machines

When used for identical-machines scheduling, LPT attains the following approximation ratios.


Worst-case maximum sum

In the worst case, the largest sum in the greedy partition is at most \frac{4}{3} times the optimal (minimum) largest sum.

''Proof''. Normalize the input items such that OPT=1. This implies that the sum of all items is at most ''m''. Partition the items into ''large'' (more than 2/3), ''medium'' (between 1/3 and 2/3), and ''small'' (less than 1/3). Let their numbers be ''nL'', ''nM'' and ''nS''. In each optimal partition, each part contains at most one large item, so ''nL'' ≤ ''m''. Moreover, each optimal part cannot contain both a large and a medium item, or three medium items; so ''nM'' ≤ 2(''m''-''nL''). The operation of the greedy algorithm can be partitioned into three phases:
# Allocating the large items, each of which is put in a different bin. Since ''nL'' ≤ ''m'', when this phase completes, each bin contains at most one item, so the max-sum is at most 1.
# Allocating the medium items. The first ''m''-''nL'' of them are put in empty bins, and the next ''m''-''nL'' of them (if any) are added into the same bins. Since ''nM'' ≤ 2(''m''-''nL''), when this phase completes, each bin contains either ''one large item'', with sum at most 1, or ''at most two medium items'', with sum at most 4/3.
# Allocating the small items. Each item is added into the bin with the smallest sum. The smallest sum is at most the average sum, which is at most 1. Hence, once a small item is added, the new sum becomes at most 4/3.

A more detailed analysis yields a factor of \frac{4m-1}{3m} = \frac{4}{3}-\frac{1}{3m} times the optimal (minimum) largest sum (for example, when ''m''=2 this ratio is 7/6 ≈ 1.167). The previous proof can be refined in two ways.

First, one can prove that, once all large and medium items are allocated, the sum in each bin is at most 1:
* If there are at most ''m''-''nL'' medium items, then each large and medium item is placed in a separate bin, so the sum is clearly at most 1.
* Otherwise, denote the first ''m''-''nL'' medium items by ''top-medium items'', and the others (at most ''m''-''nL'') by ''bottom-medium items''. Assume first that item #''m'' is larger than 1/2. This means that the items #1,...,#''m'' are all larger than 1/2, so each must be in a different optimal part. Each of the bottom-medium items (items #''m''+1,...,#''nM'') must fit into an optimal part with exactly one of the items #1,...,#''m''. Call two items ''matchable'' if their sum is at most 1, so that they can fit into the same optimal part. By Hall's theorem, each subset of ''k'' bottom-medium items must be matchable to at least ''k'' of the items #1,...,#''m''. In particular, item #''m''+1 must be matchable to item #''m''; items #''m''+1 and #''m''+2 must be matchable to item #''m''-1; and in general, item #''m''+''k'' must be matchable to item #''m''-''k''+1. LPT indeed puts item #''m''+''k'' in the same bin as item #''m''-''k''+1, so the sum of each bin is at most 1.

Second, one can prove that, when allocating the small inputs, the sum of every new bin is at most 4/3-1/(3''m''). There are two cases:
* If the current smallest sum is at most 1-1/(3''m''), then the new sum, after adding one small input, is at most 1-1/(3''m'')+1/3 = 4/3-1/(3''m'').
* Otherwise, all sums are larger than 1-1/(3''m''), so the sum of the ''m''-1 largest bins is larger than ''m''-1-1/3+1/(3''m'') = ''m''-(4/3-1/(3''m'')). Since the total sum of all inputs is at most ''m'', the sum in the new bin must be less than 4/3-1/(3''m'').

The factor \frac{4m-1}{3m} is tight. Suppose there are 2''m''+1 inputs (where ''m'' is even): 2''m''-1, 2''m''-1, 2''m''-2, 2''m''-2, ..., ''m''+1, ''m''+1, ''m'', ''m'', ''m''. Then the greedy algorithm returns the partition:
* 2''m''-1, ''m'', ''m''
* 2''m''-1, ''m''
* 2''m''-2, ''m''+1
* 2''m''-2, ''m''+1
* ...
* 3''m''/2, 3''m''/2-1
* 3''m''/2, 3''m''/2-1
with a maximum sum of 4''m''-1, but the optimal partition is:
* ''m'', ''m'', ''m''
* 2''m''-1, ''m''+1
* 2''m''-1, ''m''+1
* 2''m''-2, ''m''+2
* 2''m''-2, ''m''+2
* ...
* 3''m''/2, 3''m''/2
with a maximum sum of 3''m''.
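This tight family is easy to check numerically. The following sketch (helper names are ours) builds the instance above and verifies the greedy max-sum of 4''m''-1:

```python
import heapq

def lpt_max_sum(jobs, m):
    """Largest sum (makespan) of the LPT partition."""
    loads = [0] * m
    heapq.heapify(loads)
    for job in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + job)
    return max(loads)

def worst_case_instance(m):
    """The 2m+1 inputs above: pairs 2m-1, ..., m+1, plus three copies of m."""
    jobs = [v for v in range(2 * m - 1, m, -1) for _ in range(2)]
    return jobs + [m, m, m]

for m in (2, 6, 10):  # any even m works
    jobs = worst_case_instance(m)
    assert len(jobs) == 2 * m + 1
    assert lpt_max_sum(jobs, m) == 4 * m - 1  # optimum is 3m, ratio (4m-1)/(3m)
```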
An even more detailed analysis takes into account the number of inputs in the max-sum part:
# In each part of the greedy partition, the ''j''-th highest number is at most OPT/''j''.
# Suppose that, in the greedy part ''P'' with the max-sum, there are ''L'' inputs. Then, the approximation ratio of the greedy algorithm is \frac{L+1}{L}-\frac{1}{Lm} = 1 + \frac{1}{L} - \frac{1}{Lm}. It is tight for ''L'' ≥ 3 (for ''L''=3, we get the general factor \frac{4}{3}-\frac{1}{3m}).

''Proof''. Denote the numbers in ''P'' by ''x''1,...,''x''L. Before ''x''L was inserted into ''P'', its sum was smallest. Therefore, the ''average'' sum per part is at least (''x''1+...+''x''L-1) + ''x''L/''m''. The optimal max-sum must be at least the average sum. In contrast, the greedy max-sum is ''x''1+...+''x''L-1+''x''L. So the difference is at most (1-1/''m'')''x''L, which by (1) is at most (1-1/''m'')*OPT/''L''. So the ratio is at most 1 + 1/''L'' - 1/(''Lm'').


Worst-case minimum sum

In the worst case, the ''smallest'' sum in the returned partition is at least \frac{3}{4} times the optimal (maximum) smallest sum. The proof is by contradiction: we consider a ''minimal counterexample'', that is, a counterexample with a smallest ''m'' and fewest input numbers. Denote the greedy partition by P1,...,P''m'', and the optimal partition by Q1,...,Q''m''. Some properties of a minimal counterexample are:
* The min-sum in the optimal partition is 4, and the min-sum in the greedy partition is less than 3 (this is just a normalization; it is without loss of generality).
* The max-sum in the greedy partition is more than 4 (since the total sum in both partitions is the same, and it is at least 4''m'').
* If sum(P''i'') ≥ 3 for some greedy bin P''i'', then P''i'' is not dominated by any optimal bin Q''j''. ''Proof'': if P''i'' is dominated by Q''j'', then we can construct a smaller counterexample by decreasing ''m'' to ''m''-1 and removing the items in P''i''. The min-sum in the greedy partition remains less than 3. In the optimal partition, the items in P''i'' can be replaced by their dominating items in Q''j'', so the min-sum remains at least 4.
* If sum(P''i'') ≥ 3 for some greedy bin P''i'', then P''i'' contains at least two numbers. ''Proof'': if P''i'' contains only one number ''x'', then it is dominated by the optimal bin Q''j'' which contains ''x''.
* All inputs are smaller than 3. ''Proof'': suppose some input ''x'' is at least 3, and the greedy algorithm puts it in some P''i''. Then, since there is a bundle with sum less than 3, the greedy algorithm will not put any other input in P''i'', contradicting the previous lemma.
* Every greedy bin P''i'' contains at most one input weakly-larger than 3/2. ''Proof'': let P''i'' be the first greedy bin which is assigned two such inputs. Since inputs are assigned in descending order, P''i'' ''is'' the first greedy bin assigned two inputs. This means that it must contain the smallest two inputs from among the largest ''m''+1 inputs.
Moreover, since the sum of these two items is at least 3/2+3/2=3, P''i'' is not assigned any other input. On the other hand, by the pigeonhole principle, there must be some optimal bin Q''j'' that contains two inputs from among the largest ''m''+1 inputs; so P''i'' is dominated by Q''j''.
* During the run of the greedy algorithm, the sum in every bin P''i'' becomes at least 8/3 before the sum of any bin exceeds 4. ''Proof'': let ''y'' be the first input whose addition made the sum of some bin P''i'' larger than 4. Before ''y'' was added, P''i'' had the smallest sum, which by assumption was smaller than 8/3; this means that ''y'' > 4/3. Let T denote the set of all inputs from the first one down to ''y''; all these inputs are larger than 4/3 too. Since P''i'' was smaller than 8/3, it contained exactly one item ''x'' from T. So now P''i'' contains exactly the two items {''x'', ''y''}, and remains with these two items until the algorithm ends. Let ''k'' be the number of items from the first one down to ''x''. We now derive a contradiction by counting the items in T in two ways.
** First, consider the ''m'' optimal bins. If any such bin contains an item at least as large as ''x'', then it cannot contain any other item of T, since otherwise it dominates {''x'', ''y''}. Moreover, any such bin cannot contain three items from T, since the sum of any two of them is larger than 8/3, which is larger than ''x''; and the third one is at least as large as ''y'', so together they dominate {''x'', ''y''}. Therefore, the number of items in T is at most 1*''k'' + 2*(''m''-''k'') = 2''m''-''k''.
** Second, consider the ''m'' greedy bins. When ''y'' is added to the bundle containing ''x'', it is the bundle with the smallest sum. Therefore, all elements of T that are smaller than ''x'' must be in a greedy bin with at least one other item of T. The same is true for ''x'' and ''y''. Therefore, the number of items in T is at least (''k''-1)+2*(''m''-''k''+1) = 2''m''-''k''+1, a contradiction.
* We can assume, without loss of generality, that all inputs are either smaller than 1/3 or at least 1. ''Proof'': suppose some input ''x'' is in (1/3, 1). We replace ''x'' with 1. This obviously does not decrease the optimal min-sum.
We show that it does not change the greedy min-sum. We know that some greedy bundle P''i'' has a final sum larger than 4. Before the last input was added into P''i'', its sum was smaller than 3; so P''i'' became larger than 4 when some input larger than 1 was added to it. By the previous lemma, at that point the sum of all other greedy bundles was at least 8/3. The algorithm arrives at ''x'' afterwards. Once the algorithm adds ''x'' to some bin P''j'', the sum of P''j'' becomes at least 8/3+1/3 = 3, so no more items are added into P''j''. So P''j'' contains only one input with size in [1/3, 1). Once ''x'' is replaced with 1, it is still inserted into P''j'', and its sum is still above 3. So the greedy min-sum does not change.
* We can now partition the inputs into ''small'' (less than 1/3) and ''large'' (at least 1). The set of small items in P''i'' is denoted by S''i''. Note that, when the algorithm starts processing small items, the sum in all bundles is at least 8/3.

The proof that a minimal counterexample does not exist uses a ''weighting scheme''. Each input ''x'' is assigned a weight w(''x'') according to its size and greedy bundle P''i'':
* If ''x'' is a large item:
** If ''x'' is the single large item in P''i'', then w(''x'')=8/3.
** If P''i'' contains exactly two items, both of them large, ''x''>''y'', and sum(P''i'')≥3, then w(''x'')=8/3.
** Otherwise, w(''x'')=4/3.
* If ''x'' is a small item:
** If sum(P''i'')≥3, then w(''x'') = 4''x''/(3 sum(S''i'')); so w(S''i'') = 4/3.
** If sum(P''i'')<3, then w(''x'') = 2''x''/(3 sum(S''i'')); so w(S''i'') = 2/3.

This weighting scheme has the following properties:
* If ''x''≥2, then w(''x'')=8/3. ''Proof'': ''x'' is large. Suppose it is in P''i''. If P''i'' contains another large item ''y'', then ''x''+''y''≥3, so there is no other item in P''i''. Moreover, ''x''>''y'', since there is at most one item larger than 3/2. So w(''x'')=8/3.
* If ''x''<1/3, then w(''x'') > 2''x''. ''Proof'': ''x'' is small. Suppose it is in P''i''.
** If sum(P''i'')≥3 then, since sum(P''i'') was smaller than 3 before ''x'' was added to it, it is now smaller than 10/3.
But when the algorithm started processing small items, sum(P''i'') was at least 8/3. This means that sum(S''i'') < 2/3, so w(''x'') = 4''x''/(3 sum(S''i'')) > 2''x''.
** If sum(P''i'')<3, then sum(S''i'') < 3-8/3 = 1/3, so w(''x'') = 2''x''/(3 sum(S''i'')) > 2''x''.
* The weight of every greedy bin P''i'' is at most 4, and the weight of at least one greedy bin is at most 10/3. ''Proof'':
** If all inputs in P''i'' are large, then it contains either a single input with weight 8/3, two inputs with weights 8/3+4/3, or three inputs with weights 4/3+4/3+4/3.
** If some inputs in P''i'' are small, then their total weight is at most 4/3. There are at most two large inputs, and their weights are either 8/3 or 4/3+4/3.
** Finally, the weight of the greedy bin with sum smaller than 3 is at most 8/3 (if it has only large inputs) or 10/3 (if it has some small inputs).
* The weight of every optimal bin Q''j'' is at least 4. ''Proof'':
** If Q''j'' contains only small items, then each of them satisfies w(''x'') > 2''x'', so w(Q''j'') > 2 sum(Q''j'') ≥ 8.
** If Q''j'' contains exactly one large item ''x'', then it must contain some small items whose sum is at least 4-''x'' and weight at least 8-2''x''. Then, either ''x''<2 and the weight of the small items is at least 8-4=4, or ''x'' is in (2,3), w(''x'')=8/3, and the weight of the small items is at least 8-6=2. In both cases the total weight is at least 4.
** If Q''j'' contains exactly two large items ''x''>''y'', and ''x''≥2, then the total weight is at least 8/3+4/3=4. If ''x''+''y''≤10/3, then the sum of small items must be at least 2/3, so the total weight is at least 4/3+4/3+2*2/3=4. Otherwise, ''x''>5/3. So ''x'' was the first input in some greedy bin P''k''. Let ''z'' be the second input added into P''k''. If ''x''+''z''≥3, then there are no more inputs in P''k'', so w(''x'')=8/3 and we are done. Otherwise, ''x''+''z''<3. Let ''v'' be the smallest input in some greedy bin whose sum exceeds 4. Since ''x''<8/3, ''z'' must have been processed before ''v'', so ''z''≥''v''.
Consider now any small item ''t'' in Q''j'', and suppose it is in some greedy bin P''i''.
*** If sum(P''i'')<3, then the fact that ''v'' was not put in P''i'' implies that ''v'' > 4-sum(large-items-in-P''i'') > 1+sum(small-items-in-P''i''). Therefore, 1+sum(S''i'')+''x'' < ''v''+''x'' ≤ ''z''+''x'' < 3, and sum(S''i'') < 2-''x''. This means that 2*sum(S''i'') < 4-2''x'' ≤ 4-''x''-''y'' ≤ sum(small-items-in-Q''j''). So w(''t'') = 2''t''/(3 sum(S''i'')) > 4''t''/(3 sum(small-items-in-Q''j'')).
*** If sum(P''i'')≥3 and sum(S''i'')≤1, then w(''t'')=4/3 and we are done. Since sum(P''i'') was less than 3 before ''t'' was added into it, sum(P''i'') < 3+sum(S''i'')/2. The fact that ''v'' was not put in P''i'' implies that ''v'' > 4-sum(large-items-in-P''i'') > 1+sum(small-items-in-P''i'')/2. Similarly to the previous paragraph, w(''t'') > 4''t''/(3 sum(small-items-in-Q''j'')).
*** Therefore, the total weight of all small items in Q''j'' is at least 4/3, so the total weight of Q''j'' is at least 4/3+4/3+4/3 = 4.
** If Q''j'' contains three or more large items, then its total weight is at least 4/3+4/3+4/3=4.
* The last two claims are contradictory: the former implies that the weight of all inputs is at most 4''m''-2/3, and the latter implies that the weight of all inputs is at least 4''m''. Therefore, a counterexample does not exist.

A more sophisticated analysis shows that the ratio is at least \frac{3m-1}{4m-2} (for example, when ''m''=2 the ratio is 5/6). It is tight. Suppose there are 3''m''-1 inputs (where ''m'' is even). The first 2''m'' inputs are: 2''m''-1, 2''m''-1, 2''m''-2, 2''m''-2, ..., ''m'', ''m''. The last ''m''-1 inputs are all ''m''. Then the greedy algorithm returns:
* 2''m''-1, ''m'', ''m''
* 2''m''-1, ''m'', ''m''
* 2''m''-2, ''m''+1, ''m''
* 2''m''-2, ''m''+1, ''m''
* ...
* 3''m''/2, 3''m''/2-1, ''m''
* 3''m''/2, 3''m''/2-1
with a minimum of 3''m''-1. But the optimal partition is:
* 2''m''-1, 2''m''-1
* 2''m''-2, ''m'', ''m''
* 2''m''-2, ''m'', ''m''
* 2''m''-3, ''m''+1, ''m''
* 2''m''-3, ''m''+1, ''m''
* ...
* 3''m''/2, 3''m''/2-2, ''m''
* 3''m''/2, 3''m''/2-2, ''m''
* 3''m''/2-1, 3''m''/2-1, ''m''
with a minimum of 4''m''-2.

There is a variant of LPT, called Restricted-LPT or RLPT, in which the inputs are partitioned into subsets of size ''m'' called ''ranks'' (rank 1 contains the largest ''m'' inputs, rank 2 the next-largest ''m'' inputs, etc.). The inputs in each rank must be assigned to ''m'' different bins: rank 1 first, then rank 2, etc. The minimum sum in RLPT is at most the minimum sum in LPT. The approximation ratio of RLPT for maximizing the minimum sum is at most ''m''.
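The tight family for the minimum sum can also be checked numerically. The following sketch (helper names are ours) builds the 3''m''-1 inputs and verifies the greedy min-sum of 3''m''-1:

```python
import heapq

def lpt_min_sum(jobs, m):
    """Smallest sum in the LPT partition."""
    loads = [0] * m
    heapq.heapify(loads)
    for job in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + job)
    return min(loads)

def min_sum_instance(m):
    """The 3m-1 inputs above: pairs 2m-1, ..., m, plus m-1 extra copies of m."""
    jobs = [v for v in range(2 * m - 1, m - 1, -1) for _ in range(2)]
    return jobs + [m] * (m - 1)

for m in (2, 6):  # any even m works
    jobs = min_sum_instance(m)
    assert len(jobs) == 3 * m - 1
    assert lpt_min_sum(jobs, m) == 3 * m - 1  # optimum is 4m-2, ratio (3m-1)/(4m-2)
```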


Average-case maximum sum

In the average case, if the input numbers are distributed uniformly in [0,1], then the largest sum in an LPT schedule satisfies the following properties:
* The expected largest sum for ''m''=2 machines is ''n''/4 + Θ(1/''n''), where ''n'' is the number of inputs.
* The largest sum is at most 1 + O(log ''n''/''n'') times the optimum almost surely, and 1 + O(1/''n'') in expectation.
* The difference between the LPT largest sum and the optimal largest sum is at most O(log ''n''/''n'') almost surely (for uniform or negative-exponential distributions), and at most O(''m''²/''n'') in expectation (for the uniform distribution). These results hold also for machines with different speeds.
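These asymptotics can be illustrated by simulation. The following Monte-Carlo sketch (entirely our own construction, not from the cited analyses) estimates the gap between the average LPT largest sum and the trivial lower bound ''n''/4 for ''m''=2:

```python
import random

def lpt_makespan(jobs, m):
    """Largest sum of the LPT partition."""
    loads = [0.0] * m
    for job in sorted(jobs, reverse=True):
        i = loads.index(min(loads))  # least-loaded machine
        loads[i] += job
    return max(loads)

random.seed(1)
trials = 300
for n in (10, 50, 250):
    avg = sum(lpt_makespan([random.random() for _ in range(n)], 2)
              for _ in range(trials)) / trials
    print(n, round(avg - n / 4, 4))  # the expected gap is O(1/n)
```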


General objectives

Let ''Ci'' (for ''i'' between 1 and ''m'') be the sum of subset ''i'' in a given partition. Instead of minimizing the objective function max(''Ci''), one can minimize the objective function max(''f''(''Ci'')), where ''f'' is any fixed function. Similarly, one can minimize the objective function sum(''f''(''Ci'')). Alon, Azar, Woeginger and Yadid prove that, if ''f'' satisfies the following two conditions:
# A strong continuity condition called ''Condition F*'': for every ε>0 there exists δ>0 such that, if |''y''-''x''| < δ''x'', then |''f''(''y'')-''f''(''x'')| < ε''f''(''x'').
# Convexity.
then the LPT rule has a finite approximation ratio for minimizing sum(''f''(''Ci'')).
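For illustration, take the convex function ''f''(''x'') = ''x''², which satisfies both conditions (the instance and helper names here are our own): pre-sorting as in LPT yields a more balanced partition, hence a smaller sum of squares, than processing the same jobs in an arbitrary order:

```python
def greedy_loads(order, m):
    """List-scheduling loads for jobs processed in the given order."""
    loads = [0] * m
    for job in order:
        i = loads.index(min(loads))
        loads[i] += job
    return loads

def f(c):  # convex, and satisfies Condition F*
    return c * c

jobs = [3, 1, 1, 1, 1, 1, 1]  # one long job, six unit jobs, m = 3 machines
lpt_loads = greedy_loads(sorted(jobs, reverse=True), 3)  # LPT: descending order
arb_loads = greedy_loads(list(reversed(jobs)), 3)        # long job arrives last
assert sum(map(f, lpt_loads)) < sum(map(f, arb_loads))   # 27 < 33
```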


Adaptations to other settings

Besides the simple case of identical-machines scheduling, LPT has been adapted to more general settings.


Uniform machines

In uniform-machines scheduling, different machines may have different speeds. The LPT rule assigns each job to the machine on which its ''completion time'' will be earliest (that is, LPT may assign a job to a machine with a ''larger'' current load, if this machine is so fast that it would finish that job ''earlier'' than all other machines).
* Gonzalez, Ibarra and Sahni show that the approximation ratio of LPT with ''m'' uniform machines is at most 2''m''/(''m''+1). It is not tight, but there is an asymptotic lower bound of 1.5 when ''m'' approaches infinity. For the special case of ''m''=2 machines, the approximation ratio is at most (1+\sqrt{17})/4 ≈ 1.281 and it is tight.
* Mireault, Orlin and Vohra study a setting with ''two'' machines, in which one machine is ''q'' times faster than the other. They compute the approximation ratio of LPT as a function of ''q''. When ''q''=1, their result coincides with the 7/6 factor known for identical machines.
* Koulamas and Kyparisis present a modification of LPT in which the three longest jobs are scheduled optimally, and the remaining jobs are scheduled by the LPT rule. The approximation ratio for two machines is \sqrt{3/2} ≈ 1.2247 and it is tight.
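The uniform-machines rule can be sketched as follows (function and variable names are ours). Note how the job of size 2 joins the already-loaded fast machine, because it would still finish earlier there:

```python
def lpt_uniform(jobs, speeds):
    """LPT for uniform machines: each job goes where it would finish earliest.

    A job of size p runs for p / speeds[i] time units on machine i.
    """
    finish = [0.0] * len(speeds)  # current finishing time per machine
    parts = [[] for _ in speeds]
    for job in sorted(jobs, reverse=True):
        i = min(range(len(speeds)), key=lambda k: finish[k] + job / speeds[k])
        finish[i] += job / speeds[i]
        parts[i].append(job)
    return max(finish), parts

makespan, parts = lpt_uniform([10, 2], [10, 1])
assert parts == [[10, 2], []]      # both jobs end up on the fast machine
assert abs(makespan - 1.2) < 1e-9
```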


Cardinality constraints

In the balanced partition problem, there are constraints on the ''number'' of jobs that can be assigned to each machine. A simple constraint is that each machine can process at most ''c'' jobs. The LPT rule assigns each job to the machine with the smallest load from among those with fewer than ''c'' jobs. This rule is called ''modified LPT'' or ''MLPT''.
* Kellerer and Woeginger study a variant in which there are at most 3''m'' jobs, and each machine must contain at most 3 jobs (this can be seen as a generalization of the 3-partition problem). They show that MLPT attains at most (4''m''-1)/(3''m'') of the ''minimum largest'' sum, which is the same approximation ratio that LPT attains for the unconstrained problem. The bound is tight for MLPT. It is conjectured that MLPT has the same approximation ratio for more general cardinality constraints (''c''>3); currently, it is only known that the approximation ratio of MLPT for general ''c''>3 is at most 2.
* Chen, He and Lin show that, for the same problem, MLPT attains at least (3''m''-1)/(4''m''-2) of the ''maximum smallest'' sum, which is again the same ratio that LPT attains for the unconstrained problem.

Another constraint is that the number of jobs on each machine should be ''n''/''m'' rounded either up or down. In an adaptation of LPT called ''restricted LPT'' or ''RLPT'', inputs are assigned in pairs, one to each machine (for ''m''=2 machines). The resulting partition is balanced by design.
* Coffman, Frederickson and Lueker show that the expected largest sum of RLPT, when the inputs are uniformly-distributed random variables, is exactly ''n''/4 + 1/(2(''n''+1)). The expected difference between the largest and smallest sum is Θ(1/''n'').
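The MLPT rule can be sketched as follows (names are ours). With ''c''=2, the second machine fills up with two unit jobs, so the last unit job is forced onto the machine already holding the size-9 job:

```python
def mlpt(jobs, m, c):
    """Modified LPT: least-loaded machine among those with fewer than c jobs."""
    assert len(jobs) <= m * c, "infeasible under the cardinality constraint"
    loads, counts = [0] * m, [0] * m
    parts = [[] for _ in range(m)]
    for job in sorted(jobs, reverse=True):
        i = min((k for k in range(m) if counts[k] < c), key=lambda k: loads[k])
        loads[i] += job
        counts[i] += 1
        parts[i].append(job)
    return max(loads), parts

makespan, parts = mlpt([9, 1, 1, 1], m=2, c=2)
assert parts == [[9, 1], [1, 1]] and makespan == 10
```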


Kernel constraints - non-simultaneous availability

In the ''kernel partitioning problem'', there are ''m'' pre-specified jobs called kernels, and each kernel must be scheduled to a unique machine. An equivalent problem is scheduling when machines become available at different times: each machine ''i'' becomes available at some time ''ti'' ≥ 0 (the time ''ti'' can be thought of as the length of the kernel job). A simple heuristic algorithm, called SLPT, assigns each kernel to a different subset, and then runs the LPT algorithm.
* Lee proves that this heuristic has a tight approximation ratio of \frac{3}{2}-\frac{1}{2m} for the minimum largest sum. He then suggests running, in the second step, a modified version of LPT, and proves that it attains an approximation ratio of \frac{4}{3}.
* Lin, Yao and He analyze the same heuristic for the maximum smallest sum and prove a tight approximation ratio for it.
* Shen, Wang and Wang study different objective functions for this setting, and present polynomial-time algorithms.
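In the machine-availability formulation, SLPT is simply LPT started from loads ''ti'' instead of zero. A sketch (names are ours):

```python
import heapq

def slpt(ready_times, jobs):
    """SLPT: machine i is busy until ready_times[i]; remaining jobs follow LPT."""
    loads = list(ready_times)  # each machine starts loaded with its kernel
    heapq.heapify(loads)
    for job in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + job)
    return max(loads)

# Machine 1 is busy until time 2; the size-3 job therefore goes to machine 0.
assert slpt([0, 2], [3, 2, 2]) == 5
```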


Online settings

Often, the inputs come online, and their sizes become known only when they arrive. In this case, it is not possible to sort them in advance. List scheduling is a similar algorithm that takes a list in any order, not necessarily sorted. Its approximation ratio is \frac{2m-1}{m} = 2-\frac{1}{m}. A more sophisticated adaptation of LPT to the online setting attains an approximation ratio of 3/2.{{cite journal|last1=Chen|first1=Bo|last2=Vestjens|first2=Arjen P. A.|date=1 November 1997|title=Scheduling on identical machines: How good is LPT in an on-line setting?|journal=Operations Research Letters|volume=21|issue=4|pages=165–169|doi=10.1016/S0167-6377(97)00040-0}}
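The gap between the two rules is visible on the classic worst-case instance for list scheduling: ''m''(''m''-1) unit jobs followed by a single job of size ''m'' (the sketch names are ours). Unsorted list scheduling ends at 2''m''-1, while pre-sorting as in LPT is optimal here:

```python
def list_scheduling(stream, m):
    """Online list scheduling: each arriving job joins the least-loaded machine."""
    loads = [0] * m
    for job in stream:  # sizes are revealed one at a time
        i = loads.index(min(loads))
        loads[i] += job
    return max(loads)

m = 3
stream = [1] * (m * (m - 1)) + [m]  # six unit jobs, then one job of size m
assert list_scheduling(stream, m) == 2 * m - 1  # ratio (2m-1)/m
assert list_scheduling(sorted(stream, reverse=True), m) == m  # LPT is optimal here
```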


Implementations

* Python: there is an implementation of LPT ("greedy") in the numberpartitioning package, as well as in the prtpy package.


See also

* Greedy number partitioning - generalizations and extensions of LPT for the problem of multiway number partitioning.


References
