Process-oriented Programming
Process-oriented programming is a programming paradigm that separates the concerns of data structures and the concurrent processes that act upon them. The data structures in this case are typically persistent, complex, and large scale - the subject of general-purpose applications, as opposed to the specialized processing of specialized data sets seen in high-performance computing (HPC). The model allows the creation of large-scale applications that partially share common data sets. Programs are functionally decomposed into parallel processes that create and act upon logically shared data. The paradigm was originally invented for parallel computers in the 1980s, especially computers built with transputer microprocessors by INMOS, or similar architectures. Occam was an early process-oriented language developed for the transputer. Some derivations have evolved from the message-passing paradigm of occam to enable uniform efficiency when porting applications between distributed-memory and shared-memory systems.
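A minimal sketch of that decomposition, written here in Go rather than a process-oriented language: independent worker processes run concurrently and act on a logically shared data structure, coordinating through a channel instead of calling one another directly. The sharedData type, the worker count, and the input words are invented for the example.

```go
// Process-oriented decomposition sketch: concurrent worker processes
// act on a logically shared data set, coordinated through a channel.
package main

import (
	"fmt"
	"sync"
)

// sharedData is the logically shared structure the processes act upon.
type sharedData struct {
	mu     sync.Mutex
	counts map[string]int
}

func (s *sharedData) add(word string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[word]++
}

func main() {
	data := &sharedData{counts: make(map[string]int)}
	words := make(chan string)

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // three concurrent worker processes
		wg.Add(1)
		go func() {
			defer wg.Done()
			for w := range words {
				data.add(w) // act on the shared data set
			}
		}()
	}

	for _, w := range []string{"a", "b", "a", "c", "b", "a"} {
		words <- w
	}
	close(words)
	wg.Wait()
	fmt.Println(data.counts) // e.g. map[a:3 b:2 c:1]
}
```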



Programming Paradigm
A programming paradigm is a relatively high-level way to conceptualize and structure the implementation of a computer program. A programming language can be classified as supporting one or more paradigms. Paradigms are separated along and described by different dimensions of programming. Some paradigms are about implications of the execution model, such as allowing side effects, or whether the sequence of operations is defined by the execution model. Other paradigms are about the way code is organized, such as grouping into units that include both state and behavior. Yet others are about syntax and grammar. Some common programming paradigms include (shown in hierarchical relationship): imperative programming, in which code directly controls execution flow and state change through explicit statements that change program state, and within it procedural programming, organized as functions ...



Transputer
The transputer is a series of pioneering microprocessors from the 1980s, intended for parallel computing. To support this, each transputer had its own integrated memory and serial communication links to exchange data with other transputers. They were designed and produced by Inmos, a semiconductor company based in Bristol, United Kingdom. For some time in the late 1980s, many considered the transputer to be the next great design for the future of computing. While the transputer did not achieve this expectation, the transputer architecture was highly influential in provoking new ideas in computer architecture, several of which have re-emerged in different forms in modern systems. Background: In the early 1980s, conventional central processing units (CPUs) appeared to have reached a performance limit. Up to that time, manufacturing difficulties limited the amount of circuitry that could fit on a chip. Continued improvements in integrated circuit manufacturing ...



INMOS
Inmos International plc (trademark INMOS), with its two operating subsidiaries Inmos Limited (UK) and Inmos Corporation (US), was a British semiconductor company founded by Iann Barron, Richard Petritz, and Paul Schroeder in July 1978. Inmos Limited's head office and design office were at the Aztec West business park in Bristol, England. Products: Inmos' first products were static RAM devices, followed by dynamic RAMs and EEPROMs. Despite early production difficulties, Inmos eventually captured around 60% of the world SRAM market. However, Barron's long-term aim was to produce an innovative microprocessor architecture intended for parallel processing, the ''transputer''. David May and Robert Milne were recruited to design this processor, which went into production in 1985 in the form of the T212 and T414 chips. The transputer achieved some success as the basis for several parallel supercomputers from companies such as Meiko (formed by ex-Inmos employees in 1985) and Floating Point Systems ...



Occam Programming Language
occam is a concurrent programming language that builds on the communicating sequential processes (CSP) process algebra (Inmos document 72 occ 45 03) and shares many of its features. It is named after the philosopher William of Ockham, after whom Occam's razor is also named. Occam is an imperative procedural language (like Pascal). It was developed by David May and others at Inmos (trademark INMOS), advised by Tony Hoare, as the native programming language for their transputer microprocessors, but implementations for other platforms are available. The most widely known version is occam 2; its programming manual was written by Steven Ericsson-Zenith and others at Inmos. Overview: In the following examples indentation and formatting are critical for parsing the code: expressions are terminated by the end of the line, and lists of expressions must be at the same level of indentation. This feature, named the off-side rule, is also found in other languages such as Haskell and Python.
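The CSP ideas that occam expresses natively (parallel processes communicating over channels with ! and ?) can be sketched in Go; this is a conceptual translation, not occam syntax, and the producer and consumer processes are invented for the example.

```go
// CSP-style sketch in Go (not occam syntax): a producer process sends
// values over a channel and a consumer process receives them, roughly
// what occam expresses with PAR, c ! x, and c ? x.
package main

import "fmt"

func producer(out chan<- int) {
	for i := 0; i < 5; i++ {
		out <- i // occam: out ! i
	}
	close(out)
}

func consumer(in <-chan int, done chan<- struct{}) {
	for v := range in { // occam: in ? v
		fmt.Println(v)
	}
	done <- struct{}{}
}

func main() {
	c := make(chan int)
	done := make(chan struct{})
	go producer(c) // run both processes concurrently, as occam's PAR would
	go consumer(c, done)
	<-done
}
```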


Message Passing
In computer science, message passing is a technique for invoking behavior (i.e., running a program) on a computer. The invoking program sends a message to a process (which may be an actor or object) and relies on that process and its supporting infrastructure to then select and run some appropriate code. Message passing differs from conventional programming, in which a process, subroutine, or function is directly invoked by name. Message passing is key to some models of concurrency and to object-oriented programming. Message passing is ubiquitous in modern computer software. It is used as a way for the objects that make up a program to work with each other and as a means for objects and systems running on different computers (e.g., across the Internet) to interact. Message passing may be implemented by various mechanisms, including channels. Overview: In contrast to the traditional technique of calling a program by name ...
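A rough sketch of the idea in Go, assuming a hypothetical calculator process: the caller never invokes a function on the receiver directly, it only sends messages, and the receiving process selects which code to run for each one.

```go
// Message-passing sketch: behavior is invoked by sending a message to a
// process rather than by calling a function by name. Message names and
// fields are invented for the example.
package main

import "fmt"

type message struct {
	op    string // which behavior to invoke
	value int
	reply chan int
}

func calculator(inbox <-chan message) {
	total := 0
	for msg := range inbox {
		switch msg.op { // the receiver selects the appropriate code
		case "add":
			total += msg.value
		case "get":
			msg.reply <- total
		}
	}
}

func main() {
	inbox := make(chan message)
	go calculator(inbox)

	inbox <- message{op: "add", value: 2}
	inbox <- message{op: "add", value: 3}

	reply := make(chan int)
	inbox <- message{op: "get", reply: reply}
	fmt.Println(<-reply) // 5
}
```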


Distributed Memory
In computer science, distributed memory refers to a multiprocessor computer system in which each processor has its own private memory. Computational tasks can only operate on local data, and if remote data are required, the computational task must communicate with one or more remote processors. In contrast, a shared-memory multiprocessor offers a single memory space used by all processors. Processors do not have to be aware of where data reside, except that there may be performance penalties and that race conditions are to be avoided. In a distributed memory system there is typically a processor, a memory, and some form of interconnection that allows programs on each processor to interact with each other. The interconnect can be organised with point-to-point links, or separate hardware can provide a switching network. The network topology is a key factor in determining ...
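A small Go sketch of the discipline this imposes on programs, with two hypothetical node goroutines standing in for processors: each owns private memory that nothing else reads directly, and a task needing remote data must send an explicit request.

```go
// Distributed-memory sketch: each "node" owns private memory; remote data
// must be requested with an explicit message. The node layout and request
// type are invented for the example.
package main

import "fmt"

type request struct {
	key   string
	reply chan int
}

// node owns its private memory and answers requests for it.
func node(private map[string]int, inbox <-chan request) {
	for req := range inbox {
		req.reply <- private[req.key] // only the owner touches its memory
	}
}

func main() {
	inbox0 := make(chan request)
	inbox1 := make(chan request)
	go node(map[string]int{"x": 10}, inbox0)
	go node(map[string]int{"y": 20}, inbox1)
	_ = inbox0 // node 0 is never queried in this tiny example

	// A task that needs y must communicate with the node that owns it.
	reply := make(chan int)
	inbox1 <- request{key: "y", reply: reply}
	fmt.Println(<-reply) // 20
}
```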




Ease Programming Language
Ease is a general-purpose parallel programming language. It was designed by Steven Ericsson-Zenith, a researcher at Yale University, the Institute for Advanced Science & Engineering in Silicon Valley, California, the Ecole Nationale Supérieure des Mines de Paris, and the Pierre and Marie Curie University, the science department of the Sorbonne. The book ''Process Interaction Models'' is the Ease language specification. Ease combines the process constructs of communicating sequential processes (CSP) with logically shared data structures called ''contexts''. Contexts are parallel data types that are constructed by processes and provide a way for processes to interact. The language includes two process constructors. A ''cooperation'' includes an explicit barrier synchronization and is written:

∥ P() ∥ Q() ;

If one process finishes before the other, it waits until the other process has finished. A ''subordination'' creates a process that shares the ''contexts'' ...
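A hedged Go approximation of the cooperation construct (Ease itself is not shown): both processes run in parallel and the construct does not complete until both have finished, giving the barrier synchronization described above. The process bodies P and Q are placeholders.

```go
// Sketch of Ease's cooperation, ∥ P() ∥ Q() ;, in Go terms: the processes
// run in parallel and the construct completes only when both have finished.
package main

import (
	"fmt"
	"sync"
	"time"
)

func cooperation(procs ...func()) {
	var wg sync.WaitGroup
	for _, p := range procs {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			f()
		}(p)
	}
	wg.Wait() // barrier: whichever process finishes first waits for the rest
}

func main() {
	P := func() { time.Sleep(10 * time.Millisecond); fmt.Println("P done") }
	Q := func() { fmt.Println("Q done") }
	cooperation(P, Q) // ∥ P() ∥ Q() ;
	fmt.Println("both finished")
}
```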



Java (programming language)
Java is a high-level, general-purpose, memory-safe, object-oriented programming language. It is intended to let programmers ''write once, run anywhere'' (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to C and C++, but it has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities (such as reflection and runtime code modification) that are typically not available in traditional compiled languages. Java gained popularity shortly ...



Multi-core Processor
A multi-core processor (MCP) is a microprocessor on a single integrated circuit (IC) with two or more separate central processing units (CPUs), called ''cores'' to emphasize their multiplicity (for example, ''dual-core'' or ''quad-core''). Each core reads and executes program instructions, specifically ordinary CPU instructions (such as add, move data, and branch). However, the MCP can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single IC die, known as a ''chip multiprocessor'' (CMP), or onto multiple dies in a single chip package. As of 2024, the microprocessors used in almost all new personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers ...
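As a loose illustration (not from the article), a Go program can spread a computation across one worker per core so that independent instruction streams run simultaneously; the workload, a simple sum, is invented for the example.

```go
// Multithreading sketch: split the work across one goroutine per core so
// the processor can run the independent pieces on separate cores at once.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	nums := make([]int, 1_000_000)
	for i := range nums {
		nums[i] = i
	}

	cores := runtime.NumCPU() // one worker per hardware core
	partial := make([]int, cores)
	chunk := (len(nums) + cores - 1) / cores

	var wg sync.WaitGroup
	for c := 0; c < cores; c++ {
		wg.Add(1)
		go func(c int) {
			defer wg.Done()
			lo := c * chunk
			hi := lo + chunk
			if hi > len(nums) {
				hi = len(nums)
			}
			for _, v := range nums[lo:hi] {
				partial[c] += v // each worker writes only its own slot
			}
		}(c)
	}
	wg.Wait()

	total := 0
	for _, p := range partial {
		total += p
	}
	fmt.Println(total)
}
```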


Actor Model
The actor model in computer science is a mathematical model of concurrent computation that treats an ''actor'' as the basic building block of concurrency. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization). The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi. History: According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced ...
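A minimal Go sketch of an actor, assuming a hypothetical counter actor: a goroutine with a private mailbox and private state that reacts to each message by updating local state or replying, and that other code affects only through messages.

```go
// Actor sketch: a goroutine with a private mailbox and private state.
// In response to each message it can update local state, send messages,
// or (not shown) spawn further actors. Message shapes are invented.
package main

import "fmt"

type actor struct {
	mailbox chan string
	state   int
}

func spawn(name string, done chan<- struct{}) *actor {
	a := &actor{mailbox: make(chan string, 8)}
	go func() {
		for msg := range a.mailbox {
			switch msg {
			case "inc":
				a.state++ // local decision: modify private state only
			case "report":
				fmt.Println(name, "count =", a.state)
				done <- struct{}{}
			}
		}
	}()
	return a
}

func main() {
	done := make(chan struct{})
	counter := spawn("counter", done)

	// Other code affects the actor only indirectly, through messages.
	counter.mailbox <- "inc"
	counter.mailbox <- "inc"
	counter.mailbox <- "report"
	<-done
}
```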


Input Queue
In computer science, an input queue is a collection of processes in storage that are waiting to be brought into memory to run. Input queues are mainly used in operating system scheduling, a technique for distributing resources among processes. Input queues apply not only to operating systems (OS) but also to scheduling inside networking devices. The purpose of scheduling is to ensure that resources are distributed fairly and effectively, thereby improving the performance of the system. Essentially, a queue is a collection in which data are added at the rear position and removed from the front position. There are many different types of queues, and the ways they operate may be totally different. Operating systems use first-come, first-served queues, shortest-remaining-time scheduling, fixed-priority pre-emptive scheduling, round-robin scheduling, and multilevel queue scheduling. Network devices use first-in-first-out (FIFO) queues, weighted fair queues, priority queues ...
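A small Go sketch of an input queue under first-come, first-served scheduling (one of the policies listed above): processes are enqueued at the rear in arrival order and dispatched from the front. The process names and burst times are invented.

```go
// Input-queue sketch with first-come, first-served scheduling: waiting
// processes are added at the rear and dispatched from the front.
package main

import "fmt"

type process struct {
	name  string
	burst int // time the process needs once dispatched
}

func main() {
	// The input (ready) queue: a FIFO of processes waiting to run.
	queue := []process{}

	// Arrivals are enqueued at the rear, in arrival order.
	for _, p := range []process{{"P1", 3}, {"P2", 1}, {"P3", 2}} {
		queue = append(queue, p)
	}

	// The scheduler dispatches from the front until the queue is empty.
	clock := 0
	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]
		clock += p.burst
		fmt.Printf("%s finished at t=%d\n", p.name, clock)
	}
}
```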