The Brain Fuck Scheduler (BFS) is a process scheduler designed for the Linux kernel in August 2009 as an alternative to the Completely Fair Scheduler (CFS) and the O(1) scheduler.
BFS was created by the experienced kernel programmer Con Kolivas.
The objective of BFS, compared to other schedulers, is to provide a scheduler with a simpler algorithm that does not require adjustment of heuristics or tuning parameters to tailor performance to a specific type of computational workload. Kolivas asserted that these tunable parameters were difficult for the average user to understand, especially in terms of interactions of multiple parameters with each other, and claimed that the use of such tuning parameters could often result in improved performance in a specific targeted type of computation, at the cost of worse performance in the general case.
BFS has been reported to improve responsiveness on Linux desktop computers with fewer than 16
cores.
Shortly following its introduction, the new scheduler made headlines within the Linux community, appearing on ''Slashdot'', with reviews in ''Linux Magazine'' and ''Linux Pro Magazine''.
Although there have been varied reviews of improved performance and responsiveness, Con Kolivas did not intend for BFS to be integrated into the mainline kernel.
Theoretical design and efficiency
When introduced in 2009, BFS originally used a doubly linked list data structure, but the data structure is treated like a queue. Task insertion is O(1). Searching for the next task to execute is O(n) in the worst case.
It uses a single global run queue which all CPUs use. Tasks with higher scheduling priorities get executed first.
Tasks are ordered (or distributed) and chosen based on the virtual deadline formula in all policies except for the realtime and Isochronous priority classes.
The execution behavior is still a weighted variation of the round-robin scheduler, especially when tasks have the same priority below the Isochronous policy.
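To illustrate the costs described above, the following minimal C sketch (with hypothetical structure and function names, not Kolivas' code) models a single global doubly linked runqueue treated as a queue: insertion at the tail is O(1), while picking the next task requires an O(n) scan for the most important prio and the earliest virtual deadline.
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of a single global doubly linked runqueue treated as a
 * queue (hypothetical names, not the actual BFS code).                  */
struct task {
    int          prio;      /* lower value = more important */
    uint64_t     deadline;  /* virtual deadline in niffies  */
    struct task *prev, *next;
};

struct runqueue {
    struct task head;       /* sentinel node; head.next is the front */
};

static void rq_init(struct runqueue *rq)
{
    rq->head.prev = rq->head.next = &rq->head;
}

/* O(1): new tasks are simply linked in at the tail, like a queue. */
static void rq_enqueue(struct runqueue *rq, struct task *t)
{
    t->prev = rq->head.prev;
    t->next = &rq->head;
    rq->head.prev->next = t;
    rq->head.prev = t;
}

/* O(n) worst case: every entry is examined to find the most important
 * prio, breaking ties by the earliest virtual deadline.                */
static struct task *rq_pick_next(struct runqueue *rq)
{
    struct task *best = NULL;

    for (struct task *t = rq->head.next; t != &rq->head; t = t->next) {
        if (!best || t->prio < best->prio ||
            (t->prio == best->prio && t->deadline < best->deadline))
            best = t;
    }
    return best;
}

int main(void)
{
    struct runqueue rq;
    struct task a = { .prio = 120, .deadline = 900 };
    struct task b = { .prio = 100, .deadline = 500 };

    rq_init(&rq);
    rq_enqueue(&rq, &a);    /* constant-time tail inserts */
    rq_enqueue(&rq, &b);

    printf("next task prio: %d\n", rq_pick_next(&rq)->prio);  /* prints 100 */
    return 0;
}
</syntaxhighlight>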
The user-tuneable round robin interval (time slice) is 6 milliseconds by default, which was chosen as the minimal jitter just below what is detectable by humans.
Kolivas claimed that anything below 6 ms was pointless and anything above 300 ms for the round robin timeslice is fruitless in terms of throughput. This important tuneable tailors the round robin scheduler as a trade-off between throughput and latency. All tasks get the same time slice, with the exception of realtime FIFO tasks, which are assumed to have an infinite time slice.
Kolivas explained that he chose the doubly linked list mono-runqueue over the per-CPU multi-runqueue (round robin) priority array used in his earlier RDSL scheduler in order to ease fairness across multiple CPUs and to remove the complexity of each runqueue in a multi-runqueue design having to maintain its own latencies and task fairness. He claimed that deterministic latencies were guaranteed with BFS in his later iteration, MuQSS. He also recognized a possible lock contention problem (related to altering, removing, and creating task node data) with increasing numbers of CPUs, as well as the overhead of looking up the next task for execution. MuQSS tried to resolve those problems.
Kolivas later changed the design to a skip list in the v0.480 release of BFS in 2016. This altered the efficiency of the scheduler: he noted O(log n) task insertion, O(1) task lookup, and O(k), with k ≤ 16, for task removal.
Virtual deadline
The virtual deadline is a future time that is the round robin timeslice scaled by the nice level and offset by the current time (in niffy units, or nanosecond jiffies, an internal kernel time counter). The virtual deadline only suggests the order, but does not guarantee that a task will run exactly at the scheduled future niffy.
First, a prio ratio lookup table is created. It is based on a recursive sequence that increases by 10% each nice level. It follows a parabolic pattern if graphed, and niced tasks are distributed as a moving squared function, with 0 to 39 (corresponding to the highest through lowest nice priority) as the domain and 128 to 5089 as the range. The moving part comes from a variable in the virtual deadline formula, as Kolivas hinted.
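The 10% growth per nice level and the quoted range of 128 to 5089 are consistent with a simple integer recurrence. The following standalone C sketch (not the kernel code itself) builds and prints such a table; with integer arithmetic, index 39 works out to exactly 5089.
<syntaxhighlight lang="c">
#include <stdio.h>

/* Sketch of a prio ratio lookup table that grows by 10% per nice level,
 * starting at 128 for index 0 (nice -20). With truncating integer
 * arithmetic the last entry, index 39 (nice 19), is 5089, matching the
 * range described above.                                                */
#define PRIO_RANGE 40

int main(void)
{
    unsigned int prio_ratios[PRIO_RANGE];

    prio_ratios[0] = 128;
    for (int i = 1; i < PRIO_RANGE; i++)
        prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;   /* +10%, truncated */

    for (int i = 0; i < PRIO_RANGE; i++)
        printf("index %2d (nice %3d): ratio %u\n", i, i - 20, prio_ratios[i]);

    return 0;
}
</syntaxhighlight>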
The task's nice-to-index mapping function maps nice values −20...19 to indices 0...39, which are used as input to the prio ratio lookup table. The mapping function is a macro in the kernel's sched.h header. The internal kernel implementation differs slightly, using static priorities between 100 and 140, but users see it as nice −20...19.
The virtual deadline is based on this formula:
<math>\mathrm{VD}(\mathrm{nice}, T) = T + g(i(\mathrm{nice})) \cdot \mathit{rr} \cdot \frac{w}{128}</math>
Alternatively, with the base-2 constants that the code works with,
<math>\mathrm{VD}(\mathrm{nice}, T) = T + g(i(\mathrm{nice})) \cdot \mathit{rr} \cdot \frac{2^{20}}{2^{7}}</math>
where <math>\mathrm{VD}(\mathrm{nice}, T)</math> is the virtual deadline in u64 integer nanoseconds as a function of nice and of <math>T</math>, the current time in niffies; <math>g(x)</math> is the prio ratio table lookup as a function of the index <math>x</math>; <math>i(\mathrm{nice})</math> is the task's nice-to-index mapping function; <math>\mathit{rr}</math> is the round robin timeslice in milliseconds; and <math>w</math> is a constant of 1 millisecond in terms of nanoseconds, a latency-reducing approximation of the conversion factor of <math>10^6</math>, though Kolivas uses a base-2 constant (<math>2^{20}</math>) of approximately that scale. Smaller values of <math>g(i(\mathrm{nice}))</math> mean that the virtual deadline is earlier, corresponding to negative nice values; larger values push the virtual deadline back later, corresponding to positive nice values. The formula is applied whenever a task's timeslice expires.
128 plays the role of 100 in base 2 (possibly a "pseudo 100"), and 115 in base 2 corresponds to 90 in base 10. Kolivas uses 128 because it allows "fast shifts": dividing by 128 is a right shift (by 7) in base 2.
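As a concrete illustration of the formula and of the base-2 constants discussed above, a userspace C sketch (with hypothetical helper names, only loosely modelled on the BFS sources) could compute a deadline as follows.
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Sketch of the virtual deadline computation described above
 * (hypothetical helper names, not the exact kernel code).              */

#define MS_TO_NS(x)  ((uint64_t)(x) << 20)  /* ~1 ms in ns, the base-2 approximation */
#define RR_INTERVAL  6                      /* default round robin timeslice, in ms  */

/* nice -20..19 mapped to index 0..39 */
static int nice_to_index(int nice)
{
    return nice + 20;
}

/* Rebuild the prio ratio for a given index (see the table sketch above). */
static uint64_t prio_ratio(int index)
{
    uint64_t ratio = 128;
    for (int i = 0; i < index; i++)
        ratio = ratio * 11 / 10;
    return ratio;
}

/* deadline = now + (prio_ratio / 128) * rr * (1 ms in ns); dividing by
 * 128 is a right shift by 7, hence the "fast shifts" remark above.      */
static uint64_t virtual_deadline(uint64_t now_niffies, int nice)
{
    return now_niffies + prio_ratio(nice_to_index(nice)) *
                         RR_INTERVAL * (MS_TO_NS(1) / 128);
}

int main(void)
{
    uint64_t now = 0;   /* pretend the current niffy count is zero */

    /* nice -20 lands roughly one 6 ms timeslice out; positive nice
     * values land progressively later.                               */
    printf("nice -20: %llu ns\n", (unsigned long long)virtual_deadline(now, -20));
    printf("nice   0: %llu ns\n", (unsigned long long)virtual_deadline(now, 0));
    printf("nice  19: %llu ns\n", (unsigned long long)virtual_deadline(now, 19));
    return 0;
}
</syntaxhighlight>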
Scheduling policies
BFS uses scheduling policies to determine how much CPU time tasks may use. It uses four scheduling tiers (called scheduling policies or scheduling classes), ordered from best to worst, which determine how tasks are selected; those in the top tiers are executed first.
Each task has a special value called a prio. In the v0.462 edition (used in the -ck 4.0 kernel patchset), there are a total of 103 "priority queues" (i.e., prio values) that a task may take. No special data structure was used as the priority queue, only the doubly linked list runqueue itself. A lower prio value means a task is more important and gets executed first.
Realtime policy
The realtime policy was designed for realtime tasks. This policy implies that running tasks cannot be interrupted (i.e. preempted) by less important tasks or by lower priority policy tiers. Priority classes considered to be under the realtime policy by the scheduler are those marked SCHED_RR and SCHED_FIFO. The scheduler treats realtime round robin (SCHED_RR) and realtime FIFO (SCHED_FIFO) differently.
The design laid out the first 100 static priority queues.
The task chosen for execution is the available task with the lowest prio value among the 100 queues, selected with FIFO scheduling.
On forks, the process priority is demoted to the normal policy.
On unprivileged use (i.e. by a non-root user) of sched_setscheduler called with a request for a realtime policy class, the scheduler demotes the task to the Isochronous policy.
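For illustration, the realtime classes mentioned above are requested through the standard sched_setscheduler() interface. The sketch below asks for SCHED_FIFO; under BFS, an unprivileged caller would, as described above, be demoted to the Isochronous policy rather than simply refused, whereas a stock kernel returns EPERM.
<syntaxhighlight lang="c">
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Request the realtime FIFO class for the calling process (pid 0). */
int main(void)
{
    struct sched_param param;

    memset(&param, 0, sizeof(param));
    param.sched_priority = 10;          /* realtime priority in the 1..99 range */

    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1)
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
    else
        printf("now running under policy %d\n", sched_getscheduler(0));

    return 0;
}
</syntaxhighlight>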
Isochronous policy
The Isochronous policy was designed for near realtime performance for non-root users.
The design laid out 1 priority queue whose tasks by default run as pseudo-realtime, but the degree of realtime behavior can be tuned.
The behavior of the policy allows a task to be demoted to the normal policy when it exceeds a tuneable resource handling percentage (70% by default) of 5 seconds scaled to the number of online CPUs and the timer resolution, plus 1 tick. The formula was altered in MuQSS due to the multi-runqueue design. The formulas are:
<math>T_{\mathrm{iso}} = \frac{P}{100} \times \left(5 \times F \times N_{\mathrm{CPU}} + 1\right)</math> (BFS)
<math>T_{\mathrm{iso}} = \frac{P}{100} \times \left(5 \times F + 1\right)</math> (MuQSS)
where <math>T_{\mathrm{iso}}</math> is the total number of isochronous ticks a task may accumulate before demotion, <math>F</math> is the timer frequency, <math>N_{\mathrm{CPU}}</math> is the number of online CPUs, and <math>P</math> is the tuneable resource handling percentage, given not as a decimal but as a whole number. The timer frequency is set to 250 by default and is editable in the kernel; it is usually tuned to 100 Hz for servers and 1000 Hz for interactive desktops, with 250 being a balanced value. Setting <math>P</math> to 100 makes Isochronous tasks behave as realtime, 0 makes them not pseudo-realtime, and anything in between is pseudo-realtime.
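As an illustrative calculation (assuming a four-CPU machine, which is not specified above), the defaults <math>F = 250</math> and <math>P = 70</math> with <math>N_{\mathrm{CPU}} = 4</math> give, under the BFS form of the formula, <math>T_{\mathrm{iso}} = 0.70 \times (5 \times 250 \times 4 + 1) \approx 3500</math> isochronous ticks before an Isochronous task is demoted to the normal policy.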
The task with the earliest virtual deadline is chosen for execution, but when multiple Isochronous tasks exist, they are scheduled as round robin, each task running for the tuneable round robin interval (6 ms by default) one after another, with a fair equal chance and without considering the nice level.
This behavior of the Isochronous policy is unique to BFS and MuQSS and may not be implemented in other CPU schedulers.
Normal policy
The normal policy was designed for regular use and is the default policy. Newly created tasks are typically marked normal.
The design laid out one priority queue and tasks are chosen to be executed first based on earliest virtual deadline.
Idle priority policy
The idle priority was designed for background processes such as distributed programs and transcoders so that foreground processes or those above this scheduling policy can run uninterrupted.
The design laid out 1 priority queue and tasks can be promoted to normal policy automatically to prevent indefinite resource hold.
Among tasks residing in the Idle priority policy, the next task to be executed is selected by the earliest virtual deadline.
Preemption
Preemption can occur when a newly ready task in a higher priority policy (i.e. with a more important prio) has an earlier virtual deadline than the currently running task, which is then descheduled and put at the back of the queue. Descheduled means that its virtual deadline is updated. A task's time is refilled to the maximum round robin quantum when it has used up all its time. If the scheduler finds a task at a higher prio with the earliest virtual deadline, it will execute in place of the less important currently running task only if all logical CPUs (including hyperthreaded cores / SMT threads) are busy; the scheduler delays preemption as long as possible if there are unused logical CPUs.
If a task is marked with the idle priority policy, it cannot preempt at all, not even other idle policy tasks; instead it relies on cooperative multitasking.
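The preemption rules above can be condensed into a small decision sketch (hypothetical names and deliberately simplified logic, not the kernel's code).
<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified sketch of the preemption decision described above. */

enum policy { POLICY_REALTIME, POLICY_ISO, POLICY_NORMAL, POLICY_IDLE };

struct task {
    enum policy policy;
    int         prio;      /* lower value = more important */
    uint64_t    deadline;  /* virtual deadline in niffies  */
};

/* A waking task only preempts a running one when every logical CPU is
 * busy; with idle CPUs available the scheduler delays preemption and
 * places the task there instead.                                       */
static bool should_preempt(const struct task *waking,
                           const struct task *running,
                           int idle_logical_cpus)
{
    if (idle_logical_cpus > 0)
        return false;                         /* prefer an idle CPU          */
    if (waking->policy == POLICY_IDLE)
        return false;                         /* idle policy never preempts  */
    if (waking->prio != running->prio)
        return waking->prio < running->prio;  /* more important prio wins    */
    return waking->deadline < running->deadline;  /* earlier deadline wins   */
}

int main(void)
{
    struct task running = { POLICY_NORMAL, 120, 9000 };
    struct task waking  = { POLICY_NORMAL, 120, 4000 };

    /* Same prio, earlier deadline, no idle CPUs: the waking task preempts. */
    printf("preempt: %s\n", should_preempt(&waking, &running, 0) ? "yes" : "no");
    return 0;
}
</syntaxhighlight>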
Task placement, multiple cores
When the scheduler discovers a waking task on a non-unicore system, it needs to determine which logical CPU to run the task on. The scheduler favors first the idle hyperthreaded cores (or idle SMT threads) on the same CPU that the task last executed on, then another idle core of a multicore CPU, then the other CPUs on the same NUMA node, then all busy hyperthreaded cores / SMT threads / logical CPUs (to be preempted) on the same NUMA node, and finally the other (remote) NUMA node, ranked on a preference list. This special scan exists to minimize the latency overhead of migrating the task.
The preemption order is similar: hyperthreaded core / SMT units on the same multicore CPU first, then the other core in the multicore CPU, then the other CPUs on the same NUMA node. When the scheduler scans for a task to preempt on the other, remote NUMA node, the candidate is any busy thread with a lower or equal prio, or a later virtual deadline, assuming that all logical CPUs (including hyperthreaded cores / SMT threads) in the machine are busy. The scheduler has to scan for a suitable task with a lower (or possibly equal) priority policy (with a later virtual deadline if necessary) to preempt, and must avoid logical CPUs running a task with a higher priority policy, which it cannot preempt. Local preemption ranks higher than scanning for an idle remote NUMA unit.
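The locality preference just described can be expressed as a simple ranking function; the sketch below uses hypothetical flags, whereas the real scheduler works from CPU topology masks.
<syntaxhighlight lang="c">
#include <stdbool.h>
#include <stdio.h>

/* Sketch of the wake-up placement preference described above
 * (hypothetical names; lower rank = more preferred).          */
static int placement_rank(bool same_smt, bool same_core, bool same_node,
                          bool cpu_idle)
{
    if (cpu_idle) {
        if (same_smt)  return 0;  /* idle SMT sibling of the task's last CPU */
        if (same_core) return 1;  /* idle core on the same multicore CPU     */
        if (same_node) return 2;  /* idle CPU on the same NUMA node          */
        return 4;                 /* idle CPU on a remote NUMA node          */
    }
    /* Busy CPUs are preemption candidates; local preemption outranks
     * going to an idle CPU on a remote NUMA node.                     */
    return same_node ? 3 : 5;
}

int main(void)
{
    printf("idle SMT sibling: rank %d\n", placement_rank(true, true, true, true));
    printf("busy local CPU:   rank %d\n", placement_rank(false, false, true, false));
    printf("idle remote CPU:  rank %d\n", placement_rank(false, false, false, true));
    return 0;
}
</syntaxhighlight>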
When a task is involuntarily preempted while the CPU is slowed down by kernel-mediated CPU frequency scaling (i.e. the CPU frequency governor), the task is specially marked "sticky", except for tasks marked with the realtime policy. Being marked sticky indicates that the task still has unused time and that its execution is restricted to the same CPU. The task is marked sticky whenever the CPU scaling governor has scaled the CPU to a slower speed. The idled sticky task either returns to executing at full speed by chance or is rescheduled to execute on the best idle CPU that is not the CPU the task last ran on. It is not desirable to migrate the task elsewhere, but preferable to idle it, because of the latency overhead of migrating the task to another CPU or NUMA node. This sticky feature was removed in the last iteration of BFS (v0.512), corresponding to Kolivas' 4.8-ck1 patchset, and does not exist in MuQSS.
schedtool
A privileged user can change the priority policy of a process with the schedtool program, or a program can do so itself. At the code level, the priority class can be manipulated with a syscall such as sched_setscheduler (whose realtime classes are available only to root), which schedtool uses.
Benchmarks
In a contemporary study, the author compared BFS to CFS using the Linux kernel v3.6.2 and several performance-based endpoints. The purpose of this study was to evaluate the Completely Fair Scheduler (CFS) in the vanilla Linux kernel and BFS in the corresponding kernel patched with the ck1 patchset. Seven different machines were used to see whether differences exist and to what degree they scale, using performance-based metrics. The number of logical CPUs ranged from 1 to 16. These endpoints were never factors in the primary design goals of BFS. The results were encouraging.
Kernels patched with the ck1 patch set including the BFS outperformed the vanilla kernel using the CFS at nearly all the performance-based benchmarks tested. Further study with a larger test set could be conducted, but based on the small test set of 7 PCs evaluated, these increases in process queuing, efficiency/speed are, on the whole, independent of CPU type (mono, dual, quad, hyperthreaded, etc.), CPU architecture (32-bit and 64-bit) and of CPU multiplicity (mono or dual socket).
Moreover, on several "modern" CPUs, such as the Intel Core 2 Duo and Core i7, that represent common workstations and laptops, BFS consistently outperformed the CFS in the vanilla kernel at all benchmarks. Efficiency and speed gains were small to moderate.
Adoption
BFS is the default scheduler for the following desktop Linux distributions:
* NimbleX and Sabayon Linux 7
* PCLinuxOS 2010
* Zenwalk 6.4
* GalliumOS 2.1
Additionally, BFS has been added to an experimental branch of Google's Android development repository. It was not included in the Froyo release after blind testing did not show an improved user experience.
MuQSS
BFS has been retired in favour of MuQSS, known formally as the Multiple Queue Skiplist Scheduler, a rewritten implementation of the same concept. The primary author abandoned work on MuQSS by the end of August 2021.
Theoretical design and efficiency
MuQSS uses a bidirectional static arrayed 8-level skip list, and tasks are ordered by static priority queues (referring to the scheduling policy) and a virtual deadline. 8 was chosen to fit the array in the cacheline. The doubly linked data structure design was chosen to speed up task removal: removing a task takes only O(1) with a doubly linked skip list, versus the original design by William Pugh, where removal takes O(log n).
Task insertion is O(log n). The lookup of the next task for execution is O(k), where k is the number of CPUs. The next task for execution is O(1) per runqueue, but the scheduler examines every other runqueue to maintain task fairness among CPUs, for latency or balancing (to maximize CPU usage and cache coherency on the same NUMA node over those that access across NUMA nodes), so ultimately O(k). The maximum number of tasks it can handle is 64k tasks per runqueue per CPU. It uses multiple task runqueues, in some configurations one runqueue per CPU, whereas its predecessor BFS used only one task runqueue for all CPUs.
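A minimal C sketch (with hypothetical field names, not the MuQSS sources) of an 8-level, doubly linked skip list node shows why removal is O(1): every level keeps a back pointer, so a node can unlink itself without any re-search.
<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdint.h>

#define SKIPLIST_LEVELS 8   /* fixed depth, small enough to fit in a cache line */

/* Sketch of a doubly linked skip list node (hypothetical names). The key
 * would combine static priority and virtual deadline; sentinel head/tail
 * nodes keep prev/next pointers non-NULL.                                */
struct skiplist_node {
    uint64_t              key;
    struct skiplist_node *next[SKIPLIST_LEVELS];
    struct skiplist_node *prev[SKIPLIST_LEVELS];
    int                   level;   /* number of levels this node occupies */
};

/* O(1) removal: every level is doubly linked, so the node unlinks itself
 * without re-searching the list, unlike a singly linked skip list where
 * the predecessors must be found again.                                  */
static void skiplist_del(struct skiplist_node *node)
{
    for (int i = 0; i < node->level; i++) {
        node->prev[i]->next[i] = node->next[i];
        node->next[i]->prev[i] = node->prev[i];
    }
}

int main(void)
{
    struct skiplist_node head = { .level = 1 };
    struct skiplist_node tail = { .level = 1 };
    struct skiplist_node mid  = { .key = 42, .level = 1 };

    /* head <-> mid <-> tail on level 0 */
    head.next[0] = &mid;  mid.prev[0]  = &head;
    mid.next[0]  = &tail; tail.prev[0] = &mid;

    skiplist_del(&mid);   /* constant-time unlink */
    printf("after delete, head links straight to tail: %d\n",
           head.next[0] == &tail);
    return 0;
}
</syntaxhighlight>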
Tasks are ordered as a gradient in the skip list such that realtime policy priority comes first and idle policy priority comes last. Normal and idle priority policies are still sorted by virtual deadline, which uses nice values. Realtime and Isochronous policy tasks are run in FIFO order, ignoring nice values. New tasks with the same key are placed in FIFO order, meaning that newer tasks are placed at the end of the list (i.e. the topmost node vertically), and tasks at the 0th level, at the front-bottom, get executed first, before those nearest the top vertically and those furthest from the head node. The key used for insertion sorting is either the static priority or the virtual deadline.
The user can choose to share runqueues among multiple cores or have one runqueue per logical CPU. Sharing runqueues was speculated to reduce latency at the cost of some throughput.
A new behavior introduced by MuQSS was the use of the high resolution timer for sub-millisecond accuracy when timeslices are used up, resulting in tasks being rescheduled.
See also
* Fair-share scheduling
External links
Brain Fuck Scheduler FAQ