Identical Machine Scheduling
Identical-machines scheduling is an optimization problem in computer science and operations research. We are given ''n'' jobs ''J''1, ''J''2, ..., ''Jn'' of varying processing times, which need to be scheduled on ''m'' identical machines, such that a certain objective function is optimized, for example, the makespan is minimized. Identical machine scheduling is a special case of uniform machine scheduling, which is itself a special case of optimal job scheduling. In the general case, the processing time of each job may be different on different machines; in the case of identical machine scheduling, the processing time of each job is the same on each machine. Therefore, identical machine scheduling is equivalent to multiway number partitioning. A special case of identical machine scheduling is single-machine scheduling. In the standard three-field notation for optimal job scheduling problems, the identical-machines variant is denoted by P in the first field. For example, "P||C_\max" ...
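Since the machines are identical, a schedule is just a partition of the jobs into ''m'' subsets, and the makespan is the largest subset sum. A minimal sketch of that objective (the job times and the assignment are invented purely for illustration):

# Makespan of an assignment of jobs to identical machines: the largest total
# processing time on any single machine. Numbers below are illustrative only.

def makespan(assignment):
    # assignment: one list of processing times per machine
    return max(sum(machine_jobs) for machine_jobs in assignment)

jobs = [5, 4, 3, 3, 2]              # hypothetical processing times
assignment = [[5, 2], [4, 3], [3]]  # one possible schedule on m = 3 machines
print(makespan(assignment))         # -> 7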


Optimization Problem
In mathematics, computer science and economics, an optimization problem is the problem of finding the ''best'' solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: * An optimization problem with discrete variables is known as a ''discrete optimization'', in which an object such as an integer, permutation or graph must be found from a countable set. * A problem with continuous variables is known as a ''continuous optimization'', in which an optimal value from a continuous function must be found. They can include constrained problems and multimodal problems. Continuous optimization problem The ''standard form'' of a continuous optimization problem is \begin{align} &\underset{x}{\operatorname{minimize}} && f(x) \\ &\operatorname{subject\;to} && g_i(x) \leq 0, \quad i = 1,\dots,m \\ &&& h_j(x) = 0, \quad j = 1, \dots,p \end{align} where * f\colon \mathbb{R}^n \to \mathbb{R} is the objective function to be minimized over the ''n''-variable vector x, * g_i(x) \leq 0 are called inequality constraints ...
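As a concrete instance of the standard form above (the functions and numbers are invented purely for illustration):

\begin{align} &\underset{x \in \mathbb{R}^2}{\operatorname{minimize}} && f(x) = x_1^2 + x_2^2 \\ &\operatorname{subject\;to} && g_1(x) = 1 - x_1 \leq 0, \\ &&& h_1(x) = x_1 + x_2 - 2 = 0, \end{align}

whose optimum is attained at x = (1, 1) with objective value 2.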


Longest-processing-time-first Scheduling
Longest-processing-time-first (LPT) is a greedy algorithm for job scheduling. The input to the algorithm is a set of ''jobs'', each of which has a specific processing-time. There is also a number ''m'' specifying the number of ''machines'' that can process the jobs. The LPT algorithm works as follows:
# Order the jobs by descending order of their processing-time, such that the job with the longest processing time is first.
# Schedule each job in this sequence into a machine in which the current load (= total processing-time of scheduled jobs) is smallest.
Step 2 of the algorithm is essentially the list-scheduling (LS) algorithm. The difference is that LS loops over the jobs in an arbitrary order, while LPT pre-orders them by descending processing time. LPT was first analyzed by Ronald Graham in the 1960s in the context of the identical-machines scheduling problem. Later, it was applied to many other variants of the problem. LPT can also be described in a more abstract way, as an ...
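A minimal Python sketch of the two steps above (function and variable names are illustrative, not taken from any particular library):

# LPT scheduling on m identical machines: sort jobs by descending processing
# time, then always assign the next job to the currently least-loaded machine.
import heapq

def lpt_schedule(processing_times, m):
    jobs = sorted(processing_times, reverse=True)   # step 1: sort descending
    heap = [(0, i) for i in range(m)]               # (current load, machine index)
    heapq.heapify(heap)
    machines = [[] for _ in range(m)]
    for job in jobs:                                # step 2: greedy assignment
        load, i = heapq.heappop(heap)
        machines[i].append(job)
        heapq.heappush(heap, (load + job, i))
    makespan = max(load for load, _ in heap)
    return machines, makespan

# Example with invented processing times, m = 2 machines:
print(lpt_schedule([3, 5, 2, 7, 4], 2))   # -> ([[7, 3], [5, 4, 2]], 11)

Processing the jobs in their arrival order instead of sorting them first gives the plain list-scheduling variant mentioned above.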


Fernandez's Method
Fernandez's method (FB), in computer science and operations research, is a method used in multiprocessor scheduling. It is used to improve the quality of the lower bounding schemes adopted by branch-and-bound algorithms for solving the multiprocessor scheduling problem. Fernandez's method derives a better lower bound than HF, and a quadratic-time algorithm has been proposed for calculating the bound. It is known that a straightforward calculation of FB takes O(n^3) time, since it must examine O(n^2) combinations, each of which takes O(n) time in the worst case.



Convex Function
In mathematics, a real-valued function is called convex if the line segment between any two points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include the quadratic function x^2 and the exponential function e^x. In simple terms, a convex function refers to a function whose graph is shaped like a cup \cup, while a concave function's graph is shaped like a cap \cap. Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a st ...
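For instance, applying the second-derivative test mentioned above to the two named examples:

f(x) = x^2 \implies f''(x) = 2 \geq 0, \qquad g(x) = e^x \implies g''(x) = e^x > 0,

so both functions are convex on all of \mathbb{R}.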



Integer Linear Programming
An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear. Integer programming is NP-complete. In particular, the special case of 0-1 integer linear programming, in which unknowns are binary, and only the restrictions must be satisfied, is one of Karp's 21 NP-complete problems. If some decision variables are not discrete, the problem is known as a mixed-integer programming problem. Canonical and standard form for ILPs In integer linear programming, the ''canonical form'' is distinct from the ''standard form''. An integer linear program in canonical form is expressed thus (note that it is the \mathbf{x} vector which is to be decided): : \begin{align} & \text{maximize} && \mathbf{c}^\mathrm{T} \mathbf{x} \\ & \text{subject to} && A \mathbf{x} \le \mathbf{b}, \\ & ...
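A small concrete instance of the canonical form (the coefficients are invented for illustration):

\begin{align} & \text{maximize} && x_1 + x_2 \\ & \text{subject to} && x_1 + 2x_2 \le 4, \\ & && 3x_1 + x_2 \le 6, \\ & && x_1, x_2 \ge 0, \quad x_1, x_2 \in \mathbb{Z}. \end{align}

The integer optimum is 2 (attained, for example, at (2, 0)), while the linear-programming relaxation attains 2.8 at (1.6, 1.2), illustrating that the integer optimum can be strictly worse than the optimum of the relaxation.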






Strong NP-completeness
In computational complexity, strong NP-completeness is a property of computational problems that is a special case of NP-completeness. A general computational problem may have numerical parameters. For example, the input to the bin packing problem is a list of objects of specific sizes and a size for the bins that must contain the objects—these object sizes and bin size are numerical parameters. A problem is said to be strongly NP-complete (NP-complete in the strong sense), if it remains NP-complete even when all of its numerical parameters are bounded by a polynomial in the length of the input. A problem is said to be strongly NP-hard if a strongly NP-complete problem has a polynomial reduction to it; in combinatorial optimization, particularly, the phrase "strongly NP-hard" is reserved for problems that are not known to have a polynomial reduction to another strongly NP-complete problem. Normally numerical parameters to a problem are given in positional notation, so a problem ...


Polynomial-time Approximation Scheme
In computer science (particularly algorithmics), a polynomial-time approximation scheme (PTAS) is a type of approximation algorithm for optimization problems (most often, NP-hard optimization problems). A PTAS is an algorithm which takes an instance of an optimization problem and a parameter ε > 0 and produces a solution that is within a factor 1 + ε of being optimal (or 1 - ε for maximization problems). For example, for the Euclidean traveling salesman problem, a PTAS would produce a tour with length at most (1 + ε)L, with L being the length of the shortest tour. The running time of a PTAS is required to be polynomial in the problem size for every fixed ε, but can be different for different ε. Thus an algorithm running in time O(n^{1/ε}) or even O(n^{exp(1/ε)}) counts as a PTAS. Variants Deterministic A practical problem with PTAS algorithms is that the exponent of the polynomial could increase dramatically as ε shrinks, for example if the runtime is O(n^{(1/ε)!}). One way of addressing this is to define the efficient polynomial-time a ...


Maximin-share
Maximin share (MMS) is a criterion of fair item allocation. Given a set of items with different values, the ''1-out-of-n maximin-share'' is the maximum value that can be gained by partitioning the items into ''n'' parts and taking the part with the minimum value. An allocation of items among ''n'' agents with different valuations is called MMS-fair if each agent gets a bundle that is at least as good as his/her 1-out-of-''n'' maximin-share. MMS fairness was invented by Eric Budish as a relaxation of the criterion of proportionality - each agent gets a bundle that is at least as good as the equal split (1/''n'' of every resource). Proportionality can be guaranteed when the items are divisible, but not when they are indivisible, even if all agents have identical valuations. In contrast, MMS fairness can always be guaranteed to identical agents, so it is a natural alternative to proportionality even when the agents are different. Motivation and examples Identical items. Suppose fi ...
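A brute-force Python sketch of the 1-out-of-''n'' maximin share for additive valuations (exponential in the number of items and intended only to make the definition concrete; the names are illustrative):

# 1-out-of-n maximin share (MMS) for additive valuations, by brute force:
# maximize, over all partitions of the items into n parts, the smallest part value.
from itertools import product

def maximin_share(values, n):
    best = 0
    for assignment in product(range(n), repeat=len(values)):   # item -> part index
        parts = [0] * n
        for value, part in zip(values, assignment):
            parts[part] += value
        best = max(best, min(parts))
    return best

# Example with invented item values: the 1-out-of-2 MMS of {1, 2, 3, 4} is 5,
# obtained by the partition {1, 4} / {2, 3}.
print(maximin_share([1, 2, 3, 4], 2))   # -> 5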




Bin Packing Problem
The bin packing problem is an optimization problem, in which items of different sizes must be packed into a finite number of bins or containers, each of a fixed given capacity, in a way that minimizes the number of bins used. The problem has many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media, and technology mapping in FPGA semiconductor chip design. Computationally, the problem is NP-hard, and the corresponding decision problem - deciding if items can fit into a specified number of bins - is NP-complete. Despite its worst-case hardness, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many approximation algorithms exist. For example, the first fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It requires ''Θ''(''n'' log ''n'') time, where ''n'' ...
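A minimal Python sketch of the first-fit heuristic described above (this simple scan runs in O(n · #bins) time; the Θ(n log n) bound requires a more careful data structure over the bins' residual capacities):

# First-fit bin packing: place each item into the first open bin with enough
# room, opening a new bin when none fits. Item sizes below are illustrative.

def first_fit(items, capacity):
    bins = []                          # each bin is a list of item sizes
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                          # no existing bin had room
            bins.append([item])
    return bins

print(first_fit([7, 5, 4, 3, 1], 10))   # -> [[7, 3], [5, 4, 1]]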


Multifit Algorithm
The multifit algorithm is an algorithm for multiway number partitioning, originally developed for the problem of identical-machines scheduling. It was developed by Coffman, Garey and Johnson. Its novelty comes from the fact that it uses an algorithm for another famous problem - the bin packing problem - as a subroutine. The algorithm The input to the algorithm is a set ''S'' of numbers, and a parameter ''n''. The required output is a partition of ''S'' into ''n'' subsets, such that the largest subset sum (also called the makespan) is as small as possible. The algorithm uses, as a subroutine, an algorithm called ''first-fit-decreasing bin packing'' (FFD). The FFD algorithm takes as input the same set ''S'' of numbers, and a bin capacity ''C''. It heuristically packs numbers into bins such that the sum of numbers in each bin is at most ''C'', aiming to use as few bins as possible. Multifit runs FFD multiple times, each time with a different capacity ''C'', until it finds some ''C'' ...
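A Python sketch of this scheme, assuming the usual search interval for the capacity (lower bound max(sum/n, max item), upper bound max(2·sum/n, max item)) and a fixed number of binary-search iterations; the function names are illustrative:

# MultiFit sketch: binary-search the bin capacity C, using first-fit-decreasing
# (FFD) as a subroutine, until FFD packs the numbers into at most n bins.

def ffd_num_bins(numbers, capacity):
    # Number of bins FFD uses for the given capacity.
    free = []                                   # residual capacities of open bins
    for x in sorted(numbers, reverse=True):
        for i, f in enumerate(free):
            if x <= f:
                free[i] -= x
                break
        else:
            free.append(capacity - x)
    return len(free)

def multifit(numbers, n, iterations=10):
    lo = max(sum(numbers) / n, max(numbers))    # no capacity below this can work
    hi = max(2 * sum(numbers) / n, max(numbers))
    for _ in range(iterations):
        c = (lo + hi) / 2
        if ffd_num_bins(numbers, c) <= n:
            hi = c                              # feasible: try a smaller capacity
        else:
            lo = c                              # infeasible: need a larger capacity
    return hi                                   # smallest feasible capacity found

# Example with invented numbers, n = 2 subsets (optimal makespan is 15):
print(multifit([4, 5, 6, 7, 8], 2))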


Greedy Number Partitioning
In computer science, greedy number partitioning is a class of greedy algorithms for multiway number partitioning. The input to the algorithm is a set ''S'' of numbers, and a parameter ''k''. The required output is a partition of ''S'' into ''k'' subsets, such that the sums in the subsets are as nearly equal as possible. Greedy algorithms process the numbers sequentially, and insert the next number into a bin in which the sum of numbers is currently smallest. Approximate algorithms The simplest greedy partitioning algorithm is called list scheduling. It just processes the inputs in any order they arrive. It always returns a partition in which the largest sum is at most 2 - \frac{1}{k} times the optimal (minimum) largest sum. This heuristic can be used as an online algorithm, when the order in which the items arrive cannot be controlled. An improved greedy algorithm is called LPT scheduling. It processes the inputs by descending order of value, from large to small. Since it needs to pre-o ...
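A Python sketch of list scheduling as described above, using a min-heap over the current subset sums (pre-sorting the input in descending order turns it into the LPT variant); the names are illustrative:

# Greedy (list-scheduling) number partitioning: insert each number into the
# subset whose current sum is smallest, tracked with a min-heap of (sum, index).
import heapq

def greedy_partition(numbers, k):
    heap = [(0, i) for i in range(k)]
    heapq.heapify(heap)
    subsets = [[] for _ in range(k)]
    for x in numbers:                 # numbers arrive in arbitrary (online) order
        s, i = heapq.heappop(heap)
        subsets[i].append(x)
        heapq.heappush(heap, (s + x, i))
    return subsets

# Example with invented numbers, k = 3 subsets:
print(greedy_partition([4, 6, 7, 5, 8], 3))   # -> [[4, 5], [6, 8], [7]]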