Uniform machine scheduling (also called uniformly-related machine scheduling or related machine scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. We are given ''n'' jobs ''J''1, ''J''2, ..., ''Jn'' of varying processing times, which need to be scheduled on ''m'' different machines. The goal is to minimize the makespan - the total time required to execute the schedule. The time that machine ''i'' needs in order to process job ''j'' is denoted by ''pi,j''. In the general case, the times ''pi,j'' are unrelated, and any matrix of positive processing times is possible. In the specific variant called ''uniform machine scheduling'', some machines are ''uniformly'' faster than others. This means that, for each machine ''i'', there is a speed factor ''si'', and the run-time of job ''j'' on machine ''i'' is ''pi,j'' = ''pj'' / ''si''.

In the standard three-field notation for optimal job scheduling problems, the uniform-machine variant is denoted by Q in the first field. For example, the problem denoted by "Q||C_\max" is a uniform machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time. A special case of uniform machine scheduling is identical machine scheduling, in which all machines have the same speed. This variant is denoted by P in the first field.

In some variants of the problem, instead of minimizing the ''maximum'' completion time, it is desired to minimize the ''average'' completion time (averaged over all ''n'' jobs); this variant is denoted by Q||\sum C_i. More generally, when some jobs are more important than others, it may be desired to minimize a ''weighted average'' of the completion times, where each job has a different weight. This is denoted by Q||\sum w_i C_i.
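To make the model concrete, the following short Python sketch (with illustrative job lengths, speeds and assignment, not taken from any reference) computes the uniform-machine run-times ''pi,j'' = ''pj'' / ''si'' and the makespan of a given assignment of jobs to machines.

<syntaxhighlight lang="python">
# Minimal sketch of the uniform-machine model: the run-time of job j on
# machine i is p[j] / s[i], and the makespan of an assignment is the
# largest machine completion time. All numbers below are illustrative.

def makespan(p, s, assignment):
    """p: job lengths p_j, s: machine speeds s_i, assignment[j] = machine of job j."""
    completion = [0.0] * len(s)
    for j, i in enumerate(assignment):
        completion[i] += p[j] / s[i]   # uniform machines: p_{i,j} = p_j / s_i
    return max(completion)

p = [4, 3, 2, 7]                       # processing requirements p_j
s = [1.0, 2.0]                         # speed factors s_i (machine 1 is twice as fast)
print(makespan(p, s, [0, 0, 1, 1]))    # jobs 0,1 on machine 0; jobs 2,3 on machine 1 -> 7.0
</syntaxhighlight>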


Algorithms


Minimizing the average completion time

Minimizing the ''average'' completion time can be done in polynomial time:
* The SPT algorithm (Shortest Processing Time First) sorts the jobs by their length, shortest first, and then assigns each of them to the machine with the earliest end time so far (see the sketch below). It runs in time O(''n'' log ''n''), and minimizes the average completion time on ''identical'' machines, P||\sum C_i.
* Horowitz and Sahni present an exact algorithm, with run time O(''n'' log ''mn''), for minimizing the average completion time on ''uniform'' machines, Q||\sum C_i.
* Bruno, Coffman and Sethi present an algorithm, running in time O(\max(m n^2, n^3)), for minimizing the average completion time on ''unrelated'' machines, R||\sum C_i.
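As an illustration, here is a minimal Python sketch of the SPT rule on ''identical'' machines described in the first bullet; the job lengths and the number of machines are made-up examples.

<syntaxhighlight lang="python">
import heapq

def spt_sum_completion(p, m):
    """SPT on m identical machines: take the jobs shortest-first and always
    place the next job on the machine that becomes free earliest. Returns
    the assignment and the sum of completion times (minimizing the sum is
    the same as minimizing the average)."""
    heap = [(0.0, i) for i in range(m)]                   # (current end time, machine)
    heapq.heapify(heap)
    assignment, total = {}, 0.0
    for j in sorted(range(len(p)), key=lambda j: p[j]):   # shortest first
        end, i = heapq.heappop(heap)                      # machine that frees up first
        end += p[j]
        assignment[j] = i
        total += end                                      # completion time of job j
        heapq.heappush(heap, (end, i))
    return assignment, total

print(spt_sum_completion([2, 1, 4, 3], 2))                # illustrative run; sum = 13.0
</syntaxhighlight>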


Minimizing the weighted-average completion time

Minimizing the ''weighted average'' completion time is NP-hard even on ''identical'' machines, by reduction from the knapsack problem. It is NP-hard even if the number of machines is fixed and at least 2, by reduction from the partition problem. Sahni presents an exponential-time algorithm and a polynomial-time approximation algorithm for ''identical'' machines. Horowitz and Sahni presented:
* Exact dynamic programming algorithms for minimizing the ''weighted-average completion time'' on ''uniform'' machines. These algorithms run in exponential time (a brute-force exact baseline is sketched below).
* Polynomial-time approximation schemes, which for any ''ε''>0, attain at most (1+ε)OPT. For minimizing the ''weighted average'' completion time on two ''uniform'' machines, the run-time is O(10^{2l} n^2) = O(n^2 / \epsilon^2), where l is the smallest integer for which \epsilon \geq 2\cdot 10^{-l}, so it is an FPTAS. They claim that their algorithms can be easily extended for any number of uniform machines, but do not analyze the run-time in this case. They do not present an algorithm for ''weighted-average'' completion time on ''unrelated'' machines.
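For illustration only, here is a brute-force exact baseline for Q||\sum w_i C_i in Python; it is not the Horowitz–Sahni dynamic program. It enumerates all assignments of jobs to machines and relies on the standard fact that, within each machine, ordering the assigned jobs by Smith's WSPT rule (non-decreasing ''pj''/''wj'') is optimal. The instance data are made up.

<syntaxhighlight lang="python">
from itertools import product

def min_weighted_completion(p, w, s):
    """Exhaustive exact search for minimizing the weighted sum of completion
    times on uniform machines. Enumerates every assignment of jobs to
    machines (exponential in the number of jobs), so it is only usable
    for very small instances."""
    n, m = len(p), len(s)
    best = float("inf")
    for assign in product(range(m), repeat=n):
        total = 0.0
        for i in range(m):
            jobs = sorted((j for j in range(n) if assign[j] == i),
                          key=lambda j: p[j] / w[j])   # WSPT order on this machine
            t = 0.0
            for j in jobs:
                t += p[j] / s[i]                       # run-time p_j / s_i
                total += w[j] * t
        best = min(best, total)
    return best

print(min_weighted_completion(p=[4, 3, 2], w=[1, 2, 1], s=[1.0, 2.0]))
</syntaxhighlight>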


Minimizing the maximum completion time (makespan)

Minimizing the ''maximum'' completion time is NP-hard even for ''identical'' machines, by reduction from the partition problem. A constant-factor approximation is attained by the Longest-processing-time-first algorithm (LPT); a sketch of LPT for uniform machines appears below. Horowitz and Sahni presented:
* Exact dynamic programming algorithms for minimizing the ''maximum'' completion time on both uniform and unrelated machines. These algorithms run in exponential time (recall that these problems are all NP-hard).
* Polynomial-time approximation schemes, which for any ''ε''>0, attain at most (1+ε)OPT. For minimizing the ''maximum'' completion time on two ''uniform'' machines, their algorithm runs in time O(10^{2l} n), where l is the smallest integer for which \epsilon \geq 2\cdot 10^{-l}. Therefore, the run-time is in O(n / \epsilon^2), so it is an FPTAS. For minimizing the ''maximum'' completion time on two ''unrelated'' machines, the run-time is O(10^{5l} n^2) = O(n^2 / \epsilon^5). They claim that their algorithms can be easily extended for any number of uniform machines, but do not analyze the run-time in this case.
Hochbaum and Shmoys presented several approximation algorithms for any number of ''identical'' machines. Later, they developed a PTAS for ''uniform'' machines. Epstein and Sgall generalized the PTAS for uniform machines to handle more general objective functions. Let ''Ci'' (for ''i'' between 1 and ''m'') be the makespan of machine ''i'' in a given schedule. Instead of minimizing the objective function max(''Ci''), one can minimize the objective function max(''f''(''Ci'')), where ''f'' is any fixed function. Similarly, one can minimize the objective function sum(''f''(''Ci'')).
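A short Python sketch of the LPT heuristic mentioned above, in its natural extension to uniform machines: jobs are considered longest-first and each one is placed on the machine where it would finish earliest. The instance is illustrative.

<syntaxhighlight lang="python">
def lpt_uniform(p, s):
    """Longest-processing-time-first on uniform machines: take the jobs in
    non-increasing order of length and put each one on the machine where
    it would complete earliest, given the machine's current load and speed."""
    loads = [0.0] * len(s)                 # current finish time of each machine
    assignment = {}
    for j in sorted(range(len(p)), key=lambda j: -p[j]):
        i = min(range(len(s)), key=lambda i: loads[i] + p[j] / s[i])
        loads[i] += p[j] / s[i]
        assignment[j] = i
    return assignment, max(loads)          # the schedule and its makespan

print(lpt_uniform(p=[7, 5, 4, 3, 2], s=[1.0, 2.0]))   # illustrative run
</syntaxhighlight>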


Monotonicity and Truthfulness

In some settings, the machine speed is the machine's private information, and we want to incentivize machines to reveal their true speed, that is, we want a truthful mechanism. An important consideration for attaining truthfulness is ''monotonicity''. It means that, if a machine reports a higher speed, and all other inputs remain the same, then the total processing time allocated to the machine weakly increases (a small check of this condition is sketched below). For this problem:
* Auletta, De Prisco, Penna and Persiano presented a 4-approximation monotone algorithm, which runs in polytime when the number of machines is fixed.
* Ambrosio and Auletta proved that the Longest-processing-time-first (LPT) algorithm is monotone whenever the machine speeds are powers of some ''c'' ≥ 2, but not when ''c'' ≤ 1.78. In contrast, List scheduling is not monotone for ''c'' > 2.
* Andelman, Azar and Sorani presented a 5-approximation monotone algorithm, which runs in polytime even when the number of machines is variable.
* Kovács presented a 3-approximation monotone algorithm.
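To make the monotonicity condition concrete, the following hedged Python check (which interprets the "total processing time allocated" as the total work, i.e. the sum of lengths of the jobs assigned to the machine) runs LPT on uniform machines while raising only one machine's reported speed and watches whether the work allocated to that machine ever drops. The jobs and speed grid are illustrative.

<syntaxhighlight lang="python">
def lpt_work(p, speeds):
    """LPT on uniform machines (longest job first, placed where it would
    finish earliest); returns the total work (sum of job lengths) assigned
    to each machine."""
    loads = [0.0] * len(speeds)
    work = [0.0] * len(speeds)
    for j in sorted(range(len(p)), key=lambda j: -p[j]):
        i = min(range(len(speeds)), key=lambda i: loads[i] + p[j] / speeds[i])
        loads[i] += p[j] / speeds[i]
        work[i] += p[j]
    return work

# Raise only machine 0's reported speed; for a monotone algorithm the work
# allocated to machine 0 must never decrease.
p = [9, 8, 6, 5, 3, 2]
previous = 0.0
for s0 in [0.5, 1.0, 1.5, 2.0, 3.0]:
    allocated = lpt_work(p, [s0, 1.0, 1.0])[0]
    print(f"reported speed {s0}: allocated work {allocated}")
    if allocated < previous:
        print("  monotonicity violated at this step")
    previous = allocated
</syntaxhighlight>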


Extensions

Dependent jobs: In some cases, the jobs may be dependent. For example, take the case of reading user credentials from a console, then using them to authenticate, and then, if authentication is successful, displaying some data on the console. Clearly, one task is dependent upon another, so some kind of ordering exists between the tasks. In fact, the dependencies can be modelled as a partial ordering on the tasks, and, by definition, the set of tasks then constitutes a lattice structure. This adds further complication to the multiprocessor scheduling problem.

Static versus Dynamic: Machine scheduling algorithms are static or dynamic. A scheduling algorithm is static if the scheduling decisions as to what computational tasks will be allocated to what processors are made before running the program. An algorithm is dynamic if these decisions are made at run time. For static scheduling algorithms, a typical approach is to rank the tasks according to their precedence relationships and use a list scheduling technique to schedule them onto the processors (a sketch appears below).

Multi-stage jobs: In various settings, each job might consist of several operations, each of which must be executed on a different machine. Some such settings are handled by open shop scheduling, flow shop scheduling and job shop scheduling.
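As a sketch of the static approach just described (list scheduling driven by precedence constraints), here is a simplified, hedged Python example on identical machines; the task graph, lengths and function name are illustrative and not taken from a specific reference.

<syntaxhighlight lang="python">
from collections import deque

def list_schedule_with_precedence(p, m, preds):
    """Static list scheduling under precedence constraints on m identical
    machines: take the tasks in a topological order and start each one on
    the machine that becomes free first, but never before all of its
    predecessors have finished. Returns the start times and the makespan."""
    n = len(p)
    indeg = [len(preds[j]) for j in range(n)]
    succ = [[] for _ in range(n)]
    for j in range(n):
        for q in preds[j]:
            succ[q].append(j)
    ready = deque(j for j in range(n) if indeg[j] == 0)
    machine_free = [0.0] * m
    start, finish = [0.0] * n, [0.0] * n
    while ready:
        j = ready.popleft()                               # next task in list order
        i = min(range(m), key=lambda i: machine_free[i])  # earliest-free machine
        start[j] = max(machine_free[i],
                       max((finish[q] for q in preds[j]), default=0.0))
        finish[j] = start[j] + p[j]
        machine_free[i] = finish[j]
        for k in succ[j]:
            indeg[k] -= 1
            if indeg[k] == 0:
                ready.append(k)
    return start, max(finish)

# Illustrative DAG: task 2 needs tasks 0 and 1; task 3 needs task 2.
print(list_schedule_with_precedence(p=[3, 2, 4, 1], m=2, preds=[[], [], [0, 1], [2]]))
</syntaxhighlight>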


External links


Summary of parallel machine problems without preemption


References
