RC 4000 Multiprogramming System

The RC 4000 Multiprogramming System (also termed Monitor or RC 4000 depending on reference) is a discontinued operating system developed for the RC 4000 minicomputer in 1969. For clarity, this article mostly uses the term Monitor.


Overview

The RC 4000 Multiprogramming System is historically notable for being the first attempt to break down an operating system into a group of interacting programs communicating via a message-passing kernel. RC 4000 was not widely used, but was highly influential, sparking the microkernel concept that dominated operating system research through the 1970s and 1980s. Monitor was created largely by one programmer, Per Brinch Hansen, who worked at Regnecentralen, where the RC 4000 was being designed. Leif Svalgaard participated in implementing and testing Monitor. Brinch Hansen found that no existing operating system was suited to the new machine, and was tired of having to adapt existing systems. He felt that a better solution was to build an underlying kernel, which he referred to as the ''nucleus'', that could be used to build up an operating system from interacting programs.
Unix, for instance, uses small interacting programs for many tasks, transferring data through a system called ''pipelines'' or ''pipes''. However, a large amount of fundamental code is integrated into the kernel, notably things like file systems and program control. Monitor would relocate such code as well, making almost the entire system a set of interacting programs and reducing the kernel (nucleus) to a communications and support system only.

Monitor used a pipe-like system of shared memory as the basis of its inter-process communication (IPC). Data to be sent from one process to another was copied into an empty memory data buffer and, when the receiving program was ready, copied back out again. The buffer was then returned to the pool. Programs had a very simple application programming interface (API) for passing data, using an asynchronous set of four methods. Client applications sent data with send message and could optionally block using wait answer. Servers used a mirroring set of calls, wait message and send answer. Note that messages had an implicit "return path" for every message sent, making the semantics more like a remote procedure call than Mach's completely input/output (I/O) based system.
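The shape of that exchange can be sketched as follows. The C declarations and names below are purely illustrative: ''buffer_id'', ''send_message'' and the other identifiers are hypothetical stand-ins, since the real Monitor primitives were machine-level calls. The sketch only shows how the four primitives pair up into a client request and a server answer.

 /* Illustrative interface sketch of Monitor's four message primitives.
  * All names and signatures are hypothetical stand-ins; the real nucleus
  * exposed machine-level calls. Primitive bodies are omitted. */

 typedef int buffer_id;   /* handle to a message buffer taken from the shared pool */

 buffer_id send_message(int receiver, const void *msg, int len);   /* copy msg into a free buffer and queue it */
 void      wait_answer(buffer_id b, void *answer, int len);        /* block until the answer to that buffer arrives */
 buffer_id wait_message(int *sender, void *msg, int len);          /* block until any message arrives */
 void      send_answer(buffer_id b, const void *answer, int len);  /* reply; the buffer returns to the pool */

 /* Client side: send a request, then (optionally) block for the reply. */
 void client(int server)
 {
     char request[16] = "read sector 0";
     char reply[16];

     buffer_id b = send_message(server, request, sizeof request);
     wait_answer(b, reply, sizeof reply);   /* the buffer acts as the implicit "return path" */
 }

 /* Server side: wait for any message, service it, answer on the same buffer. */
 void server_loop(void)
 {
     char msg[16];
     char reply[16] = "done";

     for (;;) {
         int sender;
         buffer_id b = wait_message(&sender, msg, sizeof msg);
         /* ... examine msg and perform the requested work ... */
         send_answer(b, reply, sizeof reply);
     }
 }

Because every send implicitly reserves a path for the answer, a client that calls wait answer immediately after send message gets the synchronous, RPC-like behaviour described above, while a client that defers the call can keep working in the meantime.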
Monitor divided the application space in two: ''internal processes'' were the execution of traditional programs, started on request, while ''external processes'' were effectively device drivers. External processes were handled outside of user space by the nucleus, although they could be started and stopped just like any other program. Internal processes were started in the context of the ''parent'' that launched them, so each user could effectively build up their own operating system by starting and stopping programs in their own context.
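Because device drivers were just external processes, ordinary I/O could be expressed with the same message primitives. The fragment below reuses the hypothetical interface from the sketch above; the process identifier and message layout are invented for illustration.

 /* Illustrative sketch: I/O as a message to an external process.
  * Reuses the hypothetical primitives above; "typewriter" stands in for
  * whatever process identifier named the device-driver process. */
 void print_line(int typewriter, const char *line, int len)
 {
     char status[4];

     buffer_id b = send_message(typewriter, line, len);  /* hand the line to the driver process */
     wait_answer(b, status, sizeof status);              /* block until the driver reports completion */
 }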
Scheduling was left entirely to the programs, if required at all (in the 1960s, computer multitasking was a feature of debatable value). One user could start a session in a pre-emptive multitasking environment, while another might start in a single-user mode to run batch processing at higher speed. Real-time scheduling could be supported by sending messages to a timer process that would only return at the appropriate time.
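A delay built this way might look like the following fragment, again reusing the hypothetical primitives from the sketches above; the timer-process identifier and message format are assumptions.

 /* Illustrative sketch: real-time delay via a timer process. The caller
  * blocks on wait_answer, and the timer process only sends its answer
  * once the requested wake-up time has been reached. */
 void sleep_until(int timer_process, long wake_time)
 {
     long echoed;

     buffer_id b = send_message(timer_process, &wake_time, sizeof wake_time);
     wait_answer(b, &echoed, sizeof echoed);   /* returns at (or after) wake_time */
 }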
These two areas have seen the vast majority of development since Monitor's release, driving newer designs to use hardware to support messaging and to support threads within applications to reduce launch times. For instance, Mach required a memory management unit to improve messaging by using the copy-on-write protocol and mapping (instead of copying) data from process to process. Mach also used threading extensively, allowing the external programs, or ''servers'' in more modern terms, to easily start up new handlers for incoming requests. Still, Mach IPC was too slow to make the microkernel approach practically useful. This only changed when Jochen Liedtke's L4 microkernel demonstrated IPC overheads reduced by an order of magnitude.


See also

* THE multiprogramming system
* Timeline of operating systems


References

* RC 4000 Software: Multiprogramming System (RC 4000 Reference Manual), at bitsavers.org