
Single-machine scheduling or single-resource scheduling is an optimization problem in computer science and operations research. We are given ''n'' jobs ''J''1, ''J''2, ..., ''Jn'' of varying processing times, which need to be scheduled on a single machine, in a way that optimizes a certain objective, such as the throughput. Single-machine scheduling is a special case of identical-machines scheduling, which is itself a special case of optimal job scheduling. Many problems that are NP-hard in general can be solved in polynomial time in the single-machine case. In the standard three-field notation for optimal job scheduling problems, the single-machine variant is denoted by 1 in the first field. For example, "1||\sum C_j" is a single-machine scheduling problem with no constraints, where the goal is to minimize the sum of completion times. The makespan-minimization problem 1||C_\max, which is a common objective with multiple machines, is trivial with a single machine, since the makespan always equals the total processing time, regardless of the job order. Therefore, other objectives have been studied.


Minimizing the sum of completion times

The problem 1||\sum C_j aims to minimize the sum of completion times. It can be solved optimally by the Shortest Processing Time First rule (SPT): the jobs are scheduled by ascending order of their processing time p_j. The problem 1||\sum w_j C_j aims to minimize the ''weighted'' sum of completion times. It can be solved optimally by the Weighted Shortest Processing Time First rule (WSPT): the jobs are scheduled by ascending order of the ratio p_j/w_j. The problem 1|chains|\sum w_j C_j is a generalization of the above problem for jobs with dependencies in the form of chains. It can also be solved optimally by a suitable generalization of WSPT.
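The SPT and WSPT rules are simple sorts. The following sketch (function and variable names are ours, not from any standard library) illustrates both rules and evaluates the resulting objective:

```python
# Sketch of the SPT and WSPT rules; helper names here are illustrative.
def spt_order(processing_times):
    """SPT: sort job indices by processing time to minimize sum of C_j."""
    return sorted(range(len(processing_times)), key=lambda j: processing_times[j])

def wspt_order(processing_times, weights):
    """WSPT: sort job indices by the ratio p_j / w_j to minimize sum w_j * C_j."""
    return sorted(range(len(processing_times)),
                  key=lambda j: processing_times[j] / weights[j])

def sum_completion_times(processing_times, order, weights=None):
    """Evaluate the (weighted) sum of completion times for a given job order."""
    t, total = 0, 0
    for j in order:
        t += processing_times[j]                     # completion time C_j
        total += t * (weights[j] if weights else 1)  # add w_j * C_j (w_j = 1 if unweighted)
    return total
```

For example, with processing times [3, 1, 2], SPT yields the order [1, 2, 0] and a sum of completion times of 1 + 3 + 6 = 10, which no other order beats.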


Minimizing the cost of lateness

The problem 1||L_\max aims to minimize the maximum ''lateness''. For each job ''j'', there is a due date d_j. If the job is completed after its due date, it suffers ''lateness'' defined as L_j := C_j - d_j. The problem 1||L_\max can be solved optimally by the Earliest Due Date First rule (EDD): the jobs are scheduled by ascending order of their deadline d_j. The problem 1|prec|h_\max generalizes 1||L_\max in two ways: first, it allows arbitrary precedence constraints on the jobs; second, it allows each job to have an arbitrary cost function ''h_j'', which is a function of its completion time (lateness is a special case of a cost function). The maximum cost can be minimized by a greedy algorithm known as Lawler's algorithm. The problem 1|r_j|L_\max generalizes 1||L_\max by allowing each job to have a different ''release time'' before which it is not available for processing. The presence of release times means that, in some cases, it may be optimal to leave the machine idle, in order to wait for an important job that has not been released yet. Minimizing maximum lateness in this setting is NP-hard. But in practice, it can be solved using a branch-and-bound algorithm.
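The EDD rule can be sketched in a few lines (the function name and job encoding below are our own choices):

```python
# Minimal sketch of the EDD rule for 1||L_max; names are illustrative.
def edd_max_lateness(jobs):
    """jobs: list of (processing_time, due_date) pairs.
    Returns the maximum lateness when jobs run in ascending due-date order,
    which EDD guarantees is the minimum possible maximum lateness."""
    t, worst = 0, float("-inf")
    for p, d in sorted(jobs, key=lambda job: job[1]):  # ascending due date
        t += p                       # completion time C_j of this job
        worst = max(worst, t - d)    # lateness L_j = C_j - d_j
    return worst
```

For instance, jobs [(2, 3), (1, 2)] run in EDD order as (1, 2) then (2, 3), finishing at times 1 and 3, so the maximum lateness is 0.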


Maximizing the profit of earliness

In settings with deadlines, it is possible that, if a job is completed by its deadline, there is a profit ''p_j''; otherwise, there is no profit. The goal is to maximize the total profit. Single-machine scheduling with deadlines is NP-hard; Sahni presents both exact exponential-time algorithms and a polynomial-time approximation algorithm.
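One simple exact approach, given here only as an illustrative sketch (it is a standard pseudo-polynomial dynamic program, not Sahni's algorithm, and the names are ours), exploits the fact that any set of on-time jobs can be run in EDD order, reducing the problem to a knapsack-style recurrence over the time spent on accepted jobs:

```python
# Pseudo-polynomial DP sketch for maximizing profit of on-time jobs.
# This is a generic textbook-style DP, not Sahni's algorithm specifically.
def max_profit(jobs):
    """jobs: list of (processing_time, deadline, profit).
    A feasible on-time set can always run in EDD order, so sort by
    deadline and do a knapsack DP over total busy time."""
    jobs = sorted(jobs, key=lambda j: j[1])            # ascending deadline
    horizon = max((d for _, d, _ in jobs), default=0)
    best = [0] * (horizon + 1)                         # best[t]: max profit using t time units
    for p, d, profit in jobs:
        for t in range(min(d, horizon), p - 1, -1):    # job must finish by its deadline
            best[t] = max(best[t], best[t - p] + profit)
    return max(best, default=0)
```

For example, with jobs [(1, 1, 5), (2, 2, 10), (1, 2, 6)], scheduling the first and third jobs earns 11, beating the second job alone (10).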


Maximizing the throughput

The problem 1||\sum U_j aims to minimize the ''number'' of late jobs, regardless of the amount of lateness. It can be solved optimally by the Hodgson-Moore algorithm. It can also be interpreted as maximizing the number of jobs that complete on time; this number is called the throughput. The problem 1||\sum w_j U_j aims to minimize the total ''weight'' of late jobs. It is NP-hard, since the special case in which all jobs have the same deadline (denoted by 1|d_j=d|\sum w_j U_j) is equivalent to the knapsack problem. The problem 1|r_j|\sum U_j generalizes 1||\sum U_j by allowing different jobs to have different ''release times''. The problem is NP-hard. However, when all job lengths are equal, the problem can be solved in polynomial time. It has several variants:
* The weighted optimization variant, 1|r_j, p_j=p|\sum w_j U_j, can be solved in time O(n^7).
* The unweighted optimization variant, maximizing the number of jobs that finish on time, denoted 1|r_j, p_j=p|\sum U_j, can be solved in time O(n^5) using dynamic programming, when all release times and deadlines are integers.
* The decision variant - deciding whether it is possible that all given jobs complete on time - can be solved by several algorithms, the fastest of which runs in time O(n \log n).
Jobs can have ''execution intervals''. For each job ''j'', there is a processing time ''t_j'' and a start-time ''s_j'', so it must be executed in the interval [''s_j'', ''s_j''+''t_j'']. Since some of the intervals overlap, not all jobs can be completed. The goal is to maximize the number of completed jobs, that is, the throughput. More generally, each job may have several possible intervals, and each interval may be associated with a different profit. The goal is to choose at most one interval for each job, such that the total profit is maximized. For more details, see the page on interval scheduling. More generally, jobs can have ''time-windows'', with both start-times and deadlines, which may be larger than the job length. Each job can be scheduled anywhere within its time-window. Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber present a (1-''ε'')/2 approximation.


See also

Many solution techniques have been applied to solving single-machine scheduling problems. Some of them are listed below.
* Genetic algorithms
* Neural networks
* Simulated annealing
* Ant colony optimization
* Tabu search
