Dynamic Priority Scheduling
Dynamic priority scheduling is a type of scheduling algorithm in which the priorities are calculated during the execution of the system. The goal of dynamic priority scheduling is to adapt to dynamically changing progress and to form an optimal configuration in a self-sustained manner. Depending on the difficulty of a given problem, it can be very hard to produce well-defined policies that achieve this goal. Earliest deadline first scheduling and least slack time scheduling are examples of dynamic priority scheduling algorithms.

Optimal Schedulable Utilization
The idea of real-time scheduling is to confine processor utilization under the schedulable utilization of a certain scheduling algorithm, which is scaled from 0 to 1. A higher schedulable utilization means the algorithm can guarantee deadlines at a higher level of resource use, and is therefore the better algorithm. In preemptive scheduling, dynamic priority algorithms such as earliest deadline first (EDF) provide the optimal schedulable utilization of 1, in contrast to the bound of less than 0.69 for static-priority algorithms such as rate-monotonic scheduling.
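A minimal sketch of the schedulable-utilization idea, assuming a periodic task set given as (worst-case execution time, period) pairs; the representation and function names are illustrative, not from the original text:

# Minimal sketch: processor utilization of a periodic task set, compared with
# the schedulable utilization of EDF, which is 1 on a preemptive uniprocessor.
# The (wcet, period) task representation is an assumption for illustration.

def utilization(tasks):
    """tasks: list of (wcet, period) pairs; total fraction of CPU demanded."""
    return sum(wcet / period for wcet, period in tasks)

def schedulable_under_edf(tasks):
    # Dynamic priority (EDF) guarantees all deadlines whenever U <= 1.
    return utilization(tasks) <= 1.0

tasks = [(2, 4), (2, 6), (1, 8)]        # U = 0.5 + 0.333... + 0.125
print(round(utilization(tasks), 3))     # 0.958
print(schedulable_under_edf(tasks))     # True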


Scheduling Algorithm
In computing, scheduling is the action of assigning ''resources'' to perform ''tasks''. The ''resources'' may be processors, network links or expansion cards. The ''tasks'' may be threads, processes or data flows. The scheduling activity is carried out by a process called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality of service. Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU).

Goals
A scheduler may aim at one or more goals (a small sketch of these metrics follows the list), for example:
* maximizing ''throughput'' (the total amount of work completed per time unit);
* minimizing ''wait time'' (time from work becoming ready until the first point it begins execution);
* minimizing ''latency ...
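A minimal sketch of the first two metrics, computed from completed-job records; the record fields (release, start, finish) are an illustrative assumption:

# Minimal sketch of throughput and wait time from completed-job records.
# The record format is an assumption for illustration, not from the original.

def throughput(jobs, horizon):
    """Jobs completed per unit time over a scheduling horizon."""
    return sum(1 for j in jobs if j["finish"] <= horizon) / horizon

def average_wait_time(jobs):
    """Mean time from a job becoming ready until it first begins execution."""
    return sum(j["start"] - j["release"] for j in jobs) / len(jobs)

jobs = [
    {"release": 0, "start": 0, "finish": 3},
    {"release": 1, "start": 3, "finish": 5},
    {"release": 2, "start": 5, "finish": 6},
]
print(throughput(jobs, horizon=6))   # 3 jobs in 6 time units -> 0.5
print(average_wait_time(jobs))       # (0 + 2 + 3) / 3 = 1.666...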


Earliest Deadline First Scheduling
Earliest deadline first (EDF) or least time to go is a dynamic priority scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (a task finishes, a new task is released, etc.) the queue is searched for the process closest to its deadline; this process is the next to be scheduled for execution. EDF is an ''optimal'' scheduling algorithm on preemptive uniprocessors, in the following sense: if a collection of independent ''jobs'', each characterized by an arrival time, an execution requirement and a deadline, can be scheduled (by any algorithm) in a way that ensures all the jobs complete by their deadlines, then EDF will also schedule this collection of jobs so they all complete by their deadlines. When scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of 100%. Thus, the schedulability test for EDF is:
:U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq 1,
where the C_i are the worst-case computation times of the n processes and the T_i are their respective inter-arrival periods (assumed equal to the relative deadlines).
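A minimal sketch of the EDF selection rule on a preemptive uniprocessor with discrete time steps; tasks are assumed to be (wcet, period) pairs with deadlines equal to periods, an illustrative representation not taken from the original text:

import heapq

# Minimal EDF sketch: at every time step, run the released job whose absolute
# deadline is earliest.  Tasks are (wcet, period) pairs with deadline == period.

def edf_schedule(tasks, horizon):
    """Return the task index that runs at each time step (None when idle)."""
    ready = []      # heap of [absolute_deadline, task_id, remaining_wcet]
    timeline = []
    for now in range(horizon):
        # Release a new job of every task whose period starts at this instant.
        for i, (wcet, period) in enumerate(tasks):
            if now % period == 0:
                heapq.heappush(ready, [now + period, i, wcet])
        if ready:
            job = ready[0]              # earliest-deadline job
            timeline.append(job[1])
            job[2] -= 1                 # run it for one time unit
            if job[2] == 0:
                heapq.heappop(ready)    # job finished
        else:
            timeline.append(None)
    return timeline

print(edf_schedule([(1, 4), (2, 5)], horizon=10))
# [0, 1, 1, None, 0, 1, 1, None, 0, None]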


Least Slack Time Scheduling
Least slack time (LST) scheduling is an algorithm for dynamic priority scheduling. It assigns priorities to processes based on their ''slack time'': the amount of time that would remain before a process's deadline if the process were run to completion starting now. The algorithm is also known as least laxity first. Its most common use is in embedded systems, especially those with multiple processors. It imposes the simple constraints that each process on each available processor possesses the same run time and that individual processes do not have an affinity to a certain processor; this is what lends it its suitability to embedded systems.

Slack time
This scheduling algorithm first selects the processes that have the smallest slack time. Slack time is the temporal difference between the deadline, the elapsed real time, and the remaining run time. More formally, the ''slack time'' s for a process is defined as:
:s = (d - t) - c'
where d is the process deadline, t is the real time since the cycle start, and c' is the remaining computation time.
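A minimal sketch of the least-slack selection rule; the job fields and names are an illustrative assumption:

# Minimal sketch of least-slack-time selection: at time `now`, pick the ready
# job with the smallest slack s = (d - t) - c'.  Field names are illustrative.

def slack(job, now):
    """(deadline - elapsed time) - remaining computation time."""
    return (job["deadline"] - now) - job["remaining"]

def pick_next(jobs, now):
    """Choose the ready job with the least laxity."""
    return min(jobs, key=lambda j: slack(j, now))

jobs = [
    {"name": "A", "deadline": 10, "remaining": 3},   # slack = (10 - 1) - 3 = 6
    {"name": "B", "deadline": 7,  "remaining": 5},   # slack = (7 - 1) - 5 = 1
    {"name": "C", "deadline": 12, "remaining": 2},   # slack = (12 - 1) - 2 = 9
]
print(pick_next(jobs, now=1)["name"])                # "B"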


Rate-monotonic Scheduling
In computer science, rate-monotonic scheduling (RMS) is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class. The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority. These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate-monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.

Introduction
A simple version of rate-monotonic analysis assumes that threads have the following properties (a priority-assignment sketch follows the list):
* No resource sharing (processes do not share resources, ''e.g.'' a hardware resource, a queue, or any kind of semaphore, blocking or non-blocking (busy-waits));
* Deterministic deadlines that are exactly equal to periods;
* Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks ...
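A minimal sketch of rate-monotonic priority assignment together with the Liu and Layland sufficient schedulability bound U <= n(2^(1/n) - 1); the (name, wcet, period) tuples are an illustrative assumption, not taken from the original text:

# Minimal RMS sketch: shorter period => higher static priority, plus the
# Liu & Layland sufficient utilization test.  Task tuples are illustrative.

def rm_priorities(tasks):
    """Assign static priorities by period; priority 0 is the highest."""
    by_period = sorted(tasks, key=lambda t: t[2])
    return {name: prio for prio, (name, _wcet, _period) in enumerate(by_period)}

def rm_bound_test(tasks):
    """Sufficient (not necessary) test: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    u = sum(wcet / period for _name, wcet, period in tasks)
    return u, u <= n * (2 ** (1.0 / n) - 1)

tasks = [("sensor", 1, 4), ("control", 2, 8), ("logger", 3, 20)]
print(rm_priorities(tasks))   # {'sensor': 0, 'control': 1, 'logger': 2}
print(rm_bound_test(tasks))   # (0.65, True): bound for n=3 is about 0.7798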