Shared memory

In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory.


In hardware

In computer hardware, ''shared memory'' refers to a (typically large) block of random-access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiprocessor computer system.

Shared memory systems may use:
* uniform memory access (UMA): all the processors share the physical memory uniformly;
* non-uniform memory access (NUMA): memory access time depends on the memory location relative to a processor;
* cache-only memory architecture (COMA): the local memories for the processors at each node are used as cache instead of as actual main memory.

A shared memory system is relatively easy to program since all processors share a single view of data, and communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:
* access time degradation: when several processors try to access the same memory location it causes contention. Trying to access nearby memory locations may cause false sharing (a sketch at the end of this section illustrates this). Shared memory computers cannot scale very well; most of them have ten or fewer processors.
* lack of data coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors, otherwise the different processors will be working with incoherent data. Cache coherence protocols can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand, they can sometimes become overloaded and become a bottleneck to performance.

Technologies like crossbar switches, Omega networks, HyperTransport or the front-side bus can be used to dampen the bottleneck effects.

In the case of a Heterogeneous System Architecture (a processor architecture that integrates different types of processors, such as CPUs and GPUs, with shared memory), the memory management unit (MMU) of the CPU and the input–output memory management unit (IOMMU) of the GPU have to share certain characteristics, such as a common address space.

The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues.
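The false-sharing hazard mentioned above can be made concrete with a small sketch (not from the original text; the struct names, sizes and iteration counts are arbitrary): two threads update two independent counters, and whether the counters share a cache line or are padded onto separate lines determines whether that line bounces between the cores' caches.

// Sketch: two threads increment two *different* counters. In the unpadded
// layout the counters likely share one cache line, so writes from each core
// invalidate the other core's cached copy (false sharing); padding avoids this.
#include <atomic>
#include <cstdio>
#include <functional>
#include <thread>

struct Unpadded {                          // counters likely share a cache line
    std::atomic<long> a{0};
    std::atomic<long> b{0};
};

struct Padded {                            // 64-byte alignment keeps them apart
    alignas(64) std::atomic<long> a{0};
    alignas(64) std::atomic<long> b{0};
};

template <typename Counters>
void run(const char *label) {
    Counters c;
    auto work = [](std::atomic<long> &x) {
        for (long i = 0; i < 10000000; ++i)
            x.fetch_add(1, std::memory_order_relaxed);
    };
    std::thread t1(work, std::ref(c.a));   // each thread touches only its own counter
    std::thread t2(work, std::ref(c.b));
    t1.join();
    t2.join();
    std::printf("%s: a=%ld b=%ld\n", label, c.a.load(), c.b.load());
}

int main() {
    run<Unpadded>("unpadded (false sharing likely)");
    run<Padded>("padded (false sharing avoided)");
    return 0;
}

Timing the two runs on a multi-core machine will typically show the padded layout finishing noticeably faster, even though both perform the same logical work.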


In software

In computer software, ''shared memory'' is either
* a method of inter-process communication (IPC), i.e. a way of exchanging data between programs running at the same time. One process will create an area in RAM which other processes can access; or
* a method of conserving memory space by directing accesses to what would ordinarily be copies of a piece of data to a single instance instead, by using virtual memory mappings or with explicit support of the program in question. This is most often used for shared libraries and for execute in place (XIP).

Since both processes can access the shared memory area like regular working memory, this is a very fast way of communication (as opposed to other mechanisms of IPC such as named pipes, Unix domain sockets or CORBA). On the other hand, it is less scalable: for example, the communicating processes must be running on the same machine (of other IPC methods, only Internet domain sockets, not Unix domain sockets, can use a computer network), and care must be taken to avoid issues if processes sharing memory are running on separate CPUs and the underlying architecture is not cache coherent. IPC by shared memory is used, for example, to transfer images between the application and the X server on Unix systems, or inside the IStream object returned by CoMarshalInterThreadInterfaceInStream in the COM libraries under Windows.

Dynamic libraries are generally held in memory once and mapped to multiple processes; only pages that had to be customized for the individual process (because a symbol resolved differently there) are duplicated, usually with a mechanism known as copy-on-write that transparently copies the page when a write is attempted and then lets the write succeed on the private copy. Compared to multiple address space operating systems, memory sharing, especially of procedures or pointer-based structures, is simpler in single address space operating systems.
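To make the IPC sense concrete, the following sketch (assuming a POSIX system that supports MAP_ANONYMOUS; the buffer size and message are purely illustrative) creates an anonymous shared mapping and lets a forked child pass a message back to its parent through it:

// Minimal sketch: parent and child communicate through an anonymous
// shared mapping. The mapping is inherited across fork(), so both
// processes see the same physical page.
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    // One page shared between this process and its children.
    void *mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    char *buf = static_cast<char *>(mem);

    pid_t pid = fork();
    if (pid == 0) {                        // child: write a message into the region
        std::strcpy(buf, "hello from the child");
        return 0;
    }
    waitpid(pid, nullptr, 0);              // parent: wait, then read the message
    std::printf("parent read: %s\n", buf);
    munmap(mem, 4096);
    return 0;
}

Named mechanisms such as shm_open (next section) serve the same purpose for unrelated processes that cannot inherit a mapping across fork.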


Support on Unix-like systems

POSIX provides a standardized API for using shared memory, ''POSIX Shared Memory''. This uses the function shm_open from sys/mman.h. POSIX interprocess communication (part of the POSIX:XSI Extension) includes the shared-memory functions shmat, shmctl, shmdt and shmget. Unix System V provides an API for shared memory as well; this uses shmget from sys/shm.h. BSD systems provide "anonymous mapped memory" which can be used by several processes.

The shared memory created by shm_open is persistent: it stays in the system until explicitly removed by a process. This has a drawback in that if a process crashes and fails to clean up its shared memory, it will stay until system shutdown; that limitation is not present in an Android-specific implementation dubbed ashmem.

POSIX also provides the mmap API for mapping files into memory; a mapping can be shared, allowing the file's contents to be used as shared memory.

Linux distributions based on the 2.6 kernel and later offer /dev/shm as shared memory in the form of a RAM disk, more specifically as a world-writable directory (a directory in which every user of the system can create files) that is stored in memory. Both the Red Hat and Debian based distributions include it by default. Support for this type of RAM disk is completely optional within the kernel configuration file.
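A minimal sketch of the POSIX interface described above (the object name /example_shm and the 4096-byte size are arbitrary illustrations): one process creates and writes the object, and any other process that opens the same name and maps it sees the same bytes.

// Minimal sketch of POSIX shared memory: shm_open + ftruncate + mmap.
// On some systems, link with -lrt. The name "/example_shm" is illustrative.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const char *name = "/example_shm";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);    // create or open the object
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }

    void *p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    std::strcpy(static_cast<char *>(p), "visible to every process that opens /example_shm");
    std::printf("%s\n", static_cast<char *>(p));

    munmap(p, 4096);
    close(fd);
    shm_unlink(name);   // remove explicitly; otherwise the object persists
    return 0;
}

The final shm_unlink is what removes the object; without it, the region would persist (visible under /dev/shm on Linux) until system shutdown, as noted above.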


Support on Windows

On Windows, one can use the CreateFileMapping and MapViewOfFile functions to map a region of a file into memory in multiple processes.
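A minimal sketch of that usage, with the mapping name Local\ExampleSharedMemory and the 4096-byte size chosen purely for illustration; here the mapping is backed by the system paging file rather than a named file:

// Minimal sketch: a named file mapping backed by the system paging file.
// Another process can attach to the same region by opening the same name
// (e.g. with OpenFileMapping) and calling MapViewOfFile on the handle.
#include <windows.h>
#include <cstdio>
#include <cstring>

int main() {
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE,   // back with the paging file
                                     nullptr, PAGE_READWRITE,
                                     0, 4096,
                                     "Local\\ExampleSharedMemory");
    if (hMap == nullptr) { std::printf("CreateFileMapping failed\n"); return 1; }

    char *view = static_cast<char *>(MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096));
    if (view == nullptr) { CloseHandle(hMap); return 1; }

    std::strcpy(view, "visible to every process that maps this object");

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}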


Cross-platform support

Some C++ libraries provide portable and object-oriented access to shared memory functionality. For example, Boost contains the Boost.Interprocess C++ Library and Qt provides the QSharedMemory class.
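As an illustration, a sketch using Boost.Interprocess's documented shared_memory_object and mapped_region classes (the object name is arbitrary) might look roughly like this:

// Sketch using Boost.Interprocess: create a named shared memory object,
// size it, map it into this process, and write into the mapping.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>

int main() {
    using namespace boost::interprocess;

    shared_memory_object::remove("ExampleSharedMemory");           // clean up any stale object
    shared_memory_object shm(create_only, "ExampleSharedMemory", read_write);
    shm.truncate(4096);                                            // set the region size

    mapped_region region(shm, read_write);                         // map the whole object
    std::memset(region.get_address(), 0, region.get_size());
    std::strcpy(static_cast<char *>(region.get_address()), "hello via Boost.Interprocess");

    shared_memory_object::remove("ExampleSharedMemory");           // explicit removal, like shm_unlink
    return 0;
}

The library maps these calls onto the native mechanisms described in the previous sections, so the same source works on both POSIX systems and Windows.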


Programming language support

For programming languages with POSIX bindings (say, C/C++), shared memory regions can be created and accessed by calling the functions provided by the operating system. Other programming languages may have their own ways of using these operating system facilities for similar effect. For example, PHP provides an API to create shared memory, similar to the POSIX functions (see the shared memory functions in the PHP API).


See also

* Distributed memory
* Distributed shared memory
* Shared graphics memory
* Heterogeneous System Architecture
* Global variable
* Nano-threads
* Execute in place
* Shared register
* Shared snapshot objects
* Von Neumann architecture bottleneck


References


External links


* IPC:Shared Memory, by Dave Marshall
* Shared Memory Introduction, Ch. 12 from the book by Richard Stevens, "UNIX Network Programming, Volume 2, Second Edition: Interprocess Communications"
* SharedHashFile, an open source, shared memory hash table