
The NOOP scheduler is the simplest I/O scheduler for the Linux kernel. This scheduler was developed by Jens Axboe.
Overview
The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging. This scheduler is useful when it has been determined that the host should ''not'' attempt to re-order requests based on the sector numbers contained therein. In other words, the scheduler assumes that the host is unaware of how to productively re-order requests.
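As a rough illustration (not the kernel's actual C implementation), the following simplified Python sketch models that behaviour: requests leave the queue strictly in arrival order, and a new request is merged only when it is contiguous with the most recently queued one. The Request and NoopQueue names are illustrative.

<syntaxhighlight lang="python">
class Request:
    """A block I/O request covering nr_sectors sectors starting at sector."""
    def __init__(self, sector, nr_sectors):
        self.sector = sector
        self.nr_sectors = nr_sectors


class NoopQueue:
    """Toy model of a NOOP-style queue: FIFO order plus simple contiguity merging."""
    def __init__(self):
        self.fifo = []

    def add(self, req):
        if self.fifo:
            last = self.fifo[-1]
            # Back merge: the new request starts exactly where the last one ends.
            if last.sector + last.nr_sectors == req.sector:
                last.nr_sectors += req.nr_sectors
                return
            # Front merge: the new request ends exactly where the last one starts.
            if req.sector + req.nr_sectors == last.sector:
                last.sector = req.sector
                last.nr_sectors += req.nr_sectors
                return
        self.fifo.append(req)

    def dispatch(self):
        # No sorting by sector number: requests are issued in arrival order.
        return self.fifo.pop(0) if self.fifo else None
</syntaxhighlight>

The real scheduler operates on the kernel's request structures and uses the block layer's generic merging helpers, but the ordering behaviour it models is the same: no sorting, only merging.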
There are (generally) three basic situations where this is desirable:
* Because I/O scheduling will be handled at a lower layer of the I/O stack. Examples of lower layers that might handle the scheduling include block devices, intelligent RAID controllers, Network Attached Storage, or an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network. Since I/O requests are potentially rescheduled at the lower level, re-sequencing I/O operations at the host level spends host CPU time on work that will simply be undone at the lower level, increasing latency and decreasing throughput for no productive reason.
* Because accurate details of sector position are hidden from the host system. An example would be a RAID controller that performs no scheduling on its own. Even though the host has the ability to re-order requests and the RAID controller does not, the host system lacks the visibility to accurately re-order the requests to lower seek time. Since the host has no way of judging whether one sequence is better than another, it cannot restructure the active queue optimally and should, therefore, pass it on to the device that is (theoretically) more aware of such details.
* Because read/write head movement doesn't impact application performance enough to justify the reordering overhead. This is usually the case with non-rotational media such as flash drives or solid-state drives (SSDs).
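In these situations, the scheduler can be switched per block device at run time through the sysfs attribute /sys/block/<device>/queue/scheduler. A minimal sketch, assuming a pre-multiqueue kernel that still offers noop (the helper name is illustrative, and writing the attribute requires root):

<syntaxhighlight lang="python">
def set_io_scheduler(device: str, scheduler: str = "noop") -> str:
    """Select an I/O scheduler for one block device and return the new setting."""
    path = f"/sys/block/{device}/queue/scheduler"
    with open(path, "w") as f:
        f.write(scheduler)          # e.g. "noop", "deadline" or "cfq"
    with open(path) as f:
        # The active scheduler is shown in brackets, e.g. "[noop] deadline cfq".
        return f.read().strip()

# Example (as root): print(set_io_scheduler("sdb"))
</syntaxhighlight>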
However, NOOP is not necessarily the preferred I/O scheduler for the above scenarios. As is typical in performance tuning, any guidance should be based on observed workload patterns, which undermines one's ability to rely on simplistic rules of thumb. If there is contention for available I/O bandwidth from other applications, it is still possible that other schedulers will generate better performance by more intelligently carving up that bandwidth for the applications deemed most important. For example, running an LDAP directory server may benefit from the deadline scheduler's read preference and latency guarantees, while a user with a desktop system running many different applications may want access to CFQ's tunables or its ability to prioritize bandwidth for particular applications over others (ionice).
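For instance, ionice (from util-linux) can place a background job in the idle I/O class so that it only receives bandwidth that other applications leave unused; a hedged sketch of invoking it from Python, with an illustrative command line:

<syntaxhighlight lang="python">
import subprocess

# Run a hypothetical backup job in I/O scheduling class 3 ("idle"), which CFQ
# honours by serving it only when no other process is requesting disk I/O.
subprocess.run(
    ["ionice", "-c", "3", "tar", "-czf", "/tmp/backup.tar.gz", "/var/lib/ldap"],
    check=True,
)
</syntaxhighlight>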
If there is no contention between applications, then there is little to no benefit in selecting a particular scheduler for the three scenarios listed above, because there is no opportunity to deprioritize one workload's operations in a way that makes additional capacity available to another workload. In other words, if the I/O paths are not saturated and the requests for all workloads do not cause an unreasonable amount of drive-head movement (which the operating system is aware of), prioritizing one workload may simply waste CPU time on I/O scheduling instead of providing the desired benefit.
The Linux kernel also exposes a scheduler-agnostic sysfs parameter that makes it possible for the block layer's request merging logic to be disabled either entirely or only for more complex merging attempts. This reduces the need for the NOOP scheduler, as the overhead of most I/O schedulers is associated with their attempts to locate adjacent sectors in the request queue in order to merge them. However, most I/O workloads benefit from a certain level of request merging, even on fast low-latency storage such as SSDs.
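In mainline kernels this corresponds to the per-device queue attribute /sys/block/<device>/queue/nomerges, where 0 enables all merging, 1 skips the more complex merge lookups, and 2 disables merging entirely. A brief sketch, with an illustrative helper name (root privileges are required to write the attribute):

<syntaxhighlight lang="python">
def set_nomerges(device: str, level: int) -> None:
    """Tune block-layer request merging: 0 = all merges, 1 = simple merges only, 2 = none."""
    if level not in (0, 1, 2):
        raise ValueError("nomerges accepts 0, 1 or 2")
    with open(f"/sys/block/{device}/queue/nomerges", "w") as f:
        f.write(str(level))
</syntaxhighlight>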
See also
* Anticipatory scheduling
* Deadline scheduler
* CFQ scheduler
External links
* Understanding and Optimizing Disk I/O
* Workload Dependent Performance Evaluation of the Linux 2.6 I/O Schedulers
* Best practices for the Kernel-based Virtual Machine (provides general info on I/O schedulers)