Coscheduling is the principle, in concurrent systems, of scheduling related processes to run on different processors at the same time (in parallel). There are various specific implementations that realize this. If an application consists of a collection of processes working closely together, and some but not all of those processes are scheduled for execution, the executing processes may attempt to communicate with those that are not executing, causing them to block. Eventually the other processes will be scheduled, but by then the situation may be reversed, so that these processes in turn block waiting for interactions with the others. As a result, the application makes progress at the rate of at most one interprocess interaction per time slice, and will have low throughput and high latency.


Implementation

Coscheduling consists of two ideas:

* When scheduling any of the processes in a related group, schedule all of them for execution so that they can communicate efficiently.
* When a process in the group blocks while communicating with another process in the group, do not remove it from its processor. Instead, leave its state loaded on the processor for a short time, on the assumption that it will receive a response shortly. If this time elapses and the process still has not become runnable, assume it will sleep for a long time and reschedule the processor.

Some coscheduling techniques allow ''fragments'': processes that do not run concurrently with the rest of the coscheduled set. These algorithms usually try to minimize the occurrence of fragments. Gang scheduling is a stricter variant of coscheduling that disallows fragments completely.
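The second idea above, waiting in place before yielding the processor, can be sketched as a two-phase ("spin then block") wait. The following is a minimal illustration using Python threads as stand-ins for coscheduled processes; the function name and timing parameters are illustrative assumptions, not part of any real coscheduling implementation:

```python
import threading
import time

def two_phase_wait(event, spin_time=0.01, block_timeout=None):
    """Wait for `event`: first spin briefly, assuming the peer is
    coscheduled and will respond soon, then fall back to a blocking
    wait that yields the processor (illustrative sketch only)."""
    deadline = time.monotonic() + spin_time
    # Phase 1: busy-wait, keeping our state "loaded on the processor".
    while time.monotonic() < deadline:
        if event.is_set():
            return True
    # Phase 2: the peer did not respond quickly; assume it will sleep
    # for a long time and let the scheduler run something else.
    return event.wait(timeout=block_timeout)

# Usage: one thread signals shortly after starting; the other waits
# with the two-phase policy and catches the response during the spin.
done = threading.Event()
peer = threading.Thread(target=lambda: (time.sleep(0.0001), done.set()))
peer.start()
assert two_phase_wait(done)
peer.join()
```

The point of the spin phase is that a context switch is avoided entirely when the communicating peer really is running at the same time, which is exactly the case coscheduling tries to make common.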


Types of coscheduling

Researchers have classified three types of coscheduling: ''explicit coscheduling'', ''local scheduling'' and ''implicit or dynamic coscheduling''. (Fabrizio Petrini and Wu-chun Feng, ''Improved Resource Utilization with Buffered Coscheduling'', Parallel Algorithms and Applications, 2000.) Explicit coscheduling requires all related processing to take place at the same time and is typically implemented by global scheduling across all processors; gang scheduling is a specific algorithm of this kind. Local coscheduling allows individual processors to schedule their processing independently. Dynamic (or implicit) coscheduling also lets individual processors schedule processing independently, but they make their scheduling decisions in cooperation with other processors.
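Explicit coscheduling via global scheduling is often described with a matrix whose rows are time slices and whose columns are processors: each gang of related processes is placed within a single row, so all of its members run simultaneously. The sketch below illustrates this placement idea under a simple first-fit assumption; the function name and the greedy placement policy are illustrative, not a description of any particular scheduler:

```python
def build_schedule(gangs, num_processors):
    """Place each gang (a list of process ids) into the first time-slice
    row with enough free processor slots. Returns a list of rows, each a
    list of length `num_processors` holding process ids or None."""
    rows = []
    for gang in gangs:
        if len(gang) > num_processors:
            raise ValueError("gang larger than the machine")
        for row in rows:
            free = [i for i, slot in enumerate(row) if slot is None]
            if len(free) >= len(gang):
                # The whole gang fits in this slice: place it here so
                # all members run in the same time slice.
                for i, pid in zip(free, gang):
                    row[i] = pid
                break
        else:
            # No existing slice has room; open a new time slice.
            row = [None] * num_processors
            for i, pid in enumerate(gang):
                row[i] = pid
            rows.append(row)
    return rows

# Two 2-process gangs share one slice on a 4-processor machine; a
# 3-process gang gets a slice of its own, leaving one idle slot.
schedule = build_schedule([["a1", "a2"], ["b1", "b2"], ["c1", "c2", "c3"]], 4)
print(schedule)
```

The `None` entry in the second row is a fragment in the sense described above: an idle slot that a stricter gang scheduler would leave empty, while a more permissive coscheduler might fill it with an unrelated process.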


History

The term "coscheduling" was introduced by Ousterhout (1982). The original definition is that ''the process working set must be coscheduled (scheduled for execution simultaneously) for the parallel program to make progress''.


See also

* Gang scheduling

