OpenMP

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

OpenMP is managed by the nonprofit technology consortium ''OpenMP Architecture Review Board'' (or ''OpenMP ARB''), jointly defined by a broad swath of leading computer hardware and software vendors, including Arm, AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, and Oracle Corporation.

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism ''within'' a (multi-core) node while MPI is used for parallelism ''between'' nodes. There have also been efforts to run OpenMP on software distributed shared memory systems, to translate OpenMP into MPI, and to extend OpenMP for non-shared-memory systems.


Design

OpenMP is an implementation of multithreading, a method of parallelizing whereby a ''primary'' thread (a series of instructions executed consecutively) ''forks'' a specified number of ''sub''-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed. Each thread has an ''ID'' attached to it which can be obtained using a function (called omp_get_thread_num()). The thread ID is an integer, and the primary thread has an ID of ''0''. After the execution of the parallelized code, the threads ''join'' back into the primary thread, which continues onward to the end of the program.

By default, each thread executes the parallelized section of code independently. ''Work-sharing constructs'' can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.
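
A minimal C sketch of this fork-join model (the printed messages are illustrative; omp_get_thread_num() and omp_get_num_threads() are standard OpenMP runtime routines):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    printf("serial part: only the primary thread runs here\n");

    /* The directive below forks a team of threads; the enclosed block
       is executed by every thread in the team. */
    #pragma omp parallel
    {
        int id = omp_get_thread_num();       /* this thread's ID; the primary thread is 0 */
        int total = omp_get_num_threads();   /* size of the team */
        printf("parallel part: thread %d of %d\n", id, total);
    }   /* implicit join: the threads merge back into the primary thread here */

    printf("serial part again: execution continues on the primary thread\n");
    return 0;
}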


History

The OpenMP Architecture Review Board (ARB) published its first API specifications, OpenMP for Fortran 1.0, in October 1997. In October the following year they released the C/C++ standard. 2000 saw version 2.0 of the Fortran specifications, with version 2.0 of the C/C++ specifications being released in 2002. Version 2.5 is a combined C/C++/Fortran specification that was released in 2005.

Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task parallelism features in Cilk, X10 and Chapel.

Version 3.0 was released in May 2008. Included in the new features in 3.0 is the concept of ''tasks'' and the ''task'' construct, significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0.

Version 4.0 of the specification was released in July 2013. It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user defined reduction; SIMD support; Fortran 2003 support.

The current version is 5.2, released in November 2021. Note that not all compilers (and OSes) support the full set of features for the latest version(s).


Core elements

The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables. In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.


Thread creation

The pragma ''omp parallel'' is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread will be denoted as ''master thread'' with thread ID 0.

Example (C program): Display "Hello, world." using multiple threads.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}

Use flag -fopenmp to compile using GCC:

$ gcc -fopenmp hello.c -o hello -ldl

Output on a computer with two cores, and thus two threads:

Hello, world.
Hello, world.

However, the output may also be garbled because of the race condition caused from the two threads sharing the standard output.

Hello, wHello, woorld.
rld.

Whether printf is atomic depends on the underlying implementation, unlike C++'s std::cout.


Work-sharing constructs

Used to specify how to assign independent work to one or all of the threads.

* ''omp for'' or ''omp do'': used to split up loop iterations among the threads, also called loop constructs.
* ''sections'': assigning consecutive but independent code blocks to different threads
* ''single'': specifying a code block that is executed by only one thread, a barrier is implied in the end
* ''master'': similar to single, but the code block will be executed by the master thread only and no barrier implied in the end.

Example: initialize the value of a large array in parallel, using each thread to do part of the work

int main(int argc, char **argv)
{
    int a[100000];

    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        a[i] = 2 * i;
    }

    return 0;
}

This example is embarrassingly parallel, and depends only on the value of i. The OpenMP ''parallel for'' flag tells the OpenMP system to split this task among its working threads. The threads will each receive a unique and private version of the variable i. For instance, with two worker threads, one thread might be handed a version of i that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.


Variant directives

Variant directives are one of the major features introduced in the OpenMP 5.0 specification to help programmers improve performance portability. They enable adaptation of OpenMP pragmas and user code at compile time. The specification defines traits to describe active OpenMP constructs, execution devices, and functionality provided by an implementation, context selectors based on the traits and user-defined conditions, and ''metadirective'' and ''declare variant'' directives for users to program the same code region with variant directives.

* The ''metadirective'' is an executable directive that conditionally resolves to another directive at compile time by selecting from multiple directive variants based on traits that define an OpenMP condition or context.
* The ''declare variant'' directive has similar functionality as ''metadirective'' but selects a function variant at the call-site based on context or user-defined conditions.

The mechanism provided by the two variant directives for selecting variants is more convenient to use than the C/C++ preprocessing since it directly supports variant selection in OpenMP and allows an OpenMP compiler to analyze and determine the final directive from variants and context.

// code adaptation using preprocessing directives

int v1[N], v2[N], v3[N];
#if defined(nvptx)
 #pragma omp target teams distribute parallel for map(to:v1,v2) map(from:v3)
  for (int i = 0; i < N; i++)
     v3[i] = v1[i] * v2[i];
#else
 #pragma omp target parallel for map(to:v1,v2) map(from:v3)
  for (int i = 0; i < N; i++)
     v3[i] = v1[i] * v2[i];
#endif

// code adaptation using metadirective in OpenMP 5.0

int v1[N], v2[N], v3[N];
#pragma omp target map(to:v1,v2) map(from:v3)
  #pragma omp metadirective \
     when(device={arch(nvptx)}: target teams distribute parallel for) \
     default(target parallel for)
  for (int i = 0; i < N; i++)
     v3[i] = v1[i] * v2[i];


Clauses

Since OpenMP is a shared memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and there is a need to pass values between the sequential part and the parallel region (the code block executed in parallel), so data environment management is introduced as ''data sharing attribute clauses'' by appending them to the OpenMP directive. The different types of clauses are:

; Data sharing attribute clauses:
* ''shared'': the data declared outside a parallel region is shared, which means visible and accessible by all threads simultaneously. By default, all variables in the work sharing region are shared except the loop iteration counter.
* ''private'': the data declared within a parallel region is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region. By default, the loop iteration counters in the OpenMP loop constructs are private.
* ''default'': allows the programmer to state that the default data scoping within a parallel region will be either ''shared'' or ''none'' for C/C++, or ''shared'', ''firstprivate'', ''private'', or ''none'' for Fortran. The ''none'' option forces the programmer to declare each variable in the parallel region using the data sharing attribute clauses.
* ''firstprivate'': like ''private'' except initialized to the original value.
* ''lastprivate'': like ''private'' except the original value is updated after the construct.
* ''reduction'': a safe way of joining work from all threads after the construct.

; Synchronization clauses:
* ''critical'': the enclosed code block will be executed by only one thread at a time, and not simultaneously executed by multiple threads. It is often used to protect shared data from race conditions.
* ''atomic'': the memory update (write, or read-modify-write) in the next instruction will be performed atomically. It does not make the entire statement atomic; only the memory update is atomic. A compiler might use special hardware instructions for better performance than when using ''critical''.
* ''ordered'': the structured block is executed in the order in which iterations would be executed in a sequential loop.
* ''barrier'': each thread waits until all of the other threads of a team have reached this point. A work-sharing construct has an implicit barrier synchronization at the end.
* ''nowait'': specifies that threads completing assigned work can proceed without waiting for all threads in the team to finish. In the absence of this clause, threads encounter a barrier synchronization at the end of the work sharing construct.

; Scheduling clauses:
* ''schedule (type, chunk)'': useful if the work sharing construct is a do-loop or for-loop. The iterations in the work sharing construct are assigned to threads according to the scheduling method defined by this clause. The three types of scheduling are:
# ''static'': all the threads are allocated iterations before they execute the loop iterations. The iterations are divided among threads equally by default. However, specifying an integer for the parameter ''chunk'' will allocate ''chunk'' contiguous iterations to a particular thread.
# ''dynamic'': some of the iterations are allocated to a smaller number of threads. Once a particular thread finishes its allocated iterations, it returns to get another batch from the iterations that are left. The parameter ''chunk'' defines the number of contiguous iterations that are allocated to a thread at a time.
# ''guided'': a large chunk of contiguous iterations is allocated to each thread dynamically (as above). The chunk size decreases exponentially with each successive allocation to a minimum size specified in the parameter ''chunk''.

; IF control:
* ''if'': causes the threads to parallelize the task only if a condition is met. Otherwise the code block executes serially.

; Initialization:
* ''firstprivate'': the data is private to each thread, but initialized using the value of the variable of the same name from the master thread.
* ''lastprivate'': the data is private to each thread. The value of this private data will be copied to a global variable of the same name outside the parallel region if the current iteration is the last iteration in the parallelized loop. A variable can be both ''firstprivate'' and ''lastprivate''.
* ''threadprivate'': the data is global data, but it is private in each parallel region during the runtime. The difference between ''threadprivate'' and ''private'' is the global scope associated with ''threadprivate'' and the value preserved across parallel regions.

; Data copying:
* ''copyin'': similar to ''firstprivate'' for ''private'' variables, ''threadprivate'' variables are not initialized unless ''copyin'' is used to pass the value from the corresponding global variables. No ''copyout'' is needed because the value of a ''threadprivate'' variable is maintained throughout the execution of the whole program.
* ''copyprivate'': used with ''single'' to support the copying of data values from private objects on one thread (the ''single'' thread) to the corresponding objects on other threads in the team.

; Reduction:
* ''reduction (operator | intrinsic : list)'': the variable has a local copy in each thread, but the values of the local copies will be summarized (reduced) into a global shared variable. This is very useful if a particular operation (specified in ''operator'' for this particular clause) on a variable runs iteratively, so that its value at a particular iteration depends on its value at a prior iteration. The steps that lead up to the operational increment are parallelized, but the threads update the global variable in a thread-safe manner (a short sketch follows this list). This would be required in parallelizing numerical integration of functions and differential equations, as a common example.

; Others:
* ''flush'': the value of this variable is restored from the register to the memory for using this value outside of a parallel part.
* ''master'': executed only by the master thread (the thread which forked off all the others during the execution of the OpenMP directive). No implicit barrier; other team members (threads) are not required to reach it.
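
A minimal sketch of the ''reduction'' clause in use, assuming a simple sum over an array (the array name, its contents and the loop bounds are illustrative):

#include <stdio.h>

static double a[100000];   /* sample data; static keeps it off the stack */

int main(void)
{
    const int n = 100000;
    for (int i = 0; i < n; i++)
        a[i] = 0.5;

    double sum = 0.0;
    /* Each thread accumulates into its own private copy of 'sum';
       the private copies are combined into the shared 'sum' when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %f\n", sum);   /* expected: 50000.000000 */
    return 0;
}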


User-level runtime routines

Used to modify/check the number of threads, detect whether the execution context is in a parallel region, determine how many processors there are in the current system, set/unset locks, provide timing functions, etc.
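
A minimal sketch exercising a few of these routines (all are standard OpenMP runtime functions declared in omp.h; the printed messages are illustrative):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    omp_set_num_threads(4);                             /* request 4 threads for later parallel regions */
    printf("processors available: %d\n", omp_get_num_procs());
    printf("in a parallel region? %d\n", omp_in_parallel());   /* 0 here: serial context */

    double t0 = omp_get_wtime();                        /* wall-clock timer */
    #pragma omp parallel
    {
        /* inside the region, the team size and thread IDs can be queried */
        printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
    }
    printf("region took %f seconds\n", omp_get_wtime() - t0);
    return 0;
}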


Environment variables

A method to alter the execution features of OpenMP applications. Used to control loop iteration scheduling, the default number of threads, etc. For example, ''OMP_NUM_THREADS'' is used to specify the number of threads for an application.
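
For instance, assuming the hello binary built above, the thread count and the schedule of loops declared with schedule(runtime) could be set from the shell (OMP_NUM_THREADS and OMP_SCHEDULE are standard OpenMP environment variables; the values are illustrative):

$ OMP_NUM_THREADS=4 ./hello
$ OMP_NUM_THREADS=8 OMP_SCHEDULE="dynamic,1000" ./hello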


Implementations

OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in Professional, Team System, Premium and Ultimate editions), as well as Intel Parallel Studio for various processors. Oracle Solaris Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP. GCC has also supported OpenMP since version 4.2.

Compilers with an implementation of OpenMP 3.0:
* GCC 4.3.1
* Mercurium compiler
* Intel Fortran and C/C++ versions 11.0 and 11.1 compilers, Intel C/C++ and Fortran Composer XE 2011 and Intel Parallel Studio
* IBM XL compiler
* Sun Studio 12 update 1 has a full implementation of OpenMP 3.0
* Multi-Processor Computing

Several compilers support OpenMP 3.1:
* GCC 4.7
* Intel Fortran and C/C++ compilers 12.1
* IBM XL C/C++ compilers for AIX and Linux, V13.1 & IBM XL Fortran compilers for AIX and Linux, V14.1
* LLVM/Clang 3.7
* Absoft Fortran Compilers v. 19 for Windows, Mac OS X and Linux

Compilers supporting OpenMP 4.0:
* GCC 4.9.0 for C/C++, GCC 4.9.1 for Fortran
* Intel Fortran and C/C++ compilers 15.0
* IBM XL C/C++ for Linux, V13.1 (partial) & XL Fortran for Linux, V15.1 (partial)
* LLVM/Clang 3.7 (partial)

Several compilers supporting OpenMP 4.5:
* GCC 6 for C/C++
* Intel Fortran and C/C++ compilers 17.0, 18.0, 19.0
* LLVM/Clang 12

Partial support for OpenMP 5.0:
* GCC 9 for C/C++
* Intel Fortran and C/C++ compilers 19.1
* LLVM/Clang 12

Auto-parallelizing compilers that generate source code annotated with OpenMP directives:
* iPat/OMP
* Parallware
* PLUTO
* ROSE (compiler framework)
* S2P by KPIT Cummins Infosystems Ltd.
* ComPar
* PragFormer

Several profilers and debuggers expressly support OpenMP:
* Intel VTune Profiler – a profiler for the x86 CPU and Xe GPU architectures
* Intel Advisor – a design assistance and analysis tool for OpenMP and MPI codes
* Allinea Distributed Debugging Tool (DDT) – debugger for OpenMP and MPI codes
* Allinea MAP – profiler for OpenMP and MPI codes
* TotalView – debugger from Rogue Wave Software for OpenMP, MPI and serial codes
* ompP – profiler for OpenMP
* VAMPIR – profiler for OpenMP and MPI code


Pros and cons

Pros:
* Portable multithreading code (in C/C++ and other languages, one typically has to call platform-specific primitives in order to get multithreading).
* Simple: need not deal with message passing as MPI does.
* Data layout and decomposition is handled automatically by directives.
* Scalability comparable to MPI on shared-memory systems.
* Incremental parallelism: can work on one part of the program at a time, no dramatic change to code is needed.
* Unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.
* Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.
* Both coarse-grained and fine-grained parallelism are possible.
* In irregular multi-physics applications which do not adhere solely to the SPMD mode of computation, as encountered in tightly coupled fluid-particulate systems, the flexibility of OpenMP can have a big performance advantage over MPI.
* Can be used on various accelerators such as GPGPUs and FPGAs.

Cons:
* Risk of introducing difficult-to-debug synchronization bugs and race conditions (see the sketch after this list).
* Only runs efficiently in shared-memory multiprocessor platforms (see however Intel's Cluster OpenMP and other distributed shared memory platforms).
* Requires a compiler that supports OpenMP.
* Scalability is limited by memory architecture.
* No support for compare-and-swap.
* Reliable error handling is missing.
* Lacks fine-grained mechanisms to control thread-processor mapping.
* High chance of accidentally writing false sharing code.
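
A minimal sketch of the kind of race condition mentioned above (the counter and loop bounds are illustrative): incrementing a shared variable from a parallel loop without synchronization loses updates, while an ''atomic'' directive (or a ''reduction'' clause) makes the result deterministic.

#include <stdio.h>

int main(void)
{
    int count = 0;

    /* Data race: many threads read-modify-write 'count' concurrently,
       so increments can be lost and the result varies from run to run. */
    #pragma omp parallel for
    for (int i = 0; i < 100000; i++)
        count++;
    printf("racy count:   %d\n", count);   /* often less than 100000 */

    count = 0;
    /* Correct: each increment is performed atomically. */
    #pragma omp parallel for
    for (int i = 0; i < 100000; i++)
    {
        #pragma omp atomic
        count++;
    }
    printf("atomic count: %d\n", count);   /* always 100000 */
    return 0;
}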


Performance expectations

One might expect to get an ''N''-times speedup when running a program parallelized using OpenMP on an ''N''-processor platform. However, this seldom occurs, for these reasons:
* When a dependency exists, a process must wait until the data it depends on is computed.
* When multiple processes share a resource that is not parallel-proof (like a file to write to), their requests are executed sequentially. Therefore, each thread must wait until the other thread releases the resource.
* A large part of the program may not be parallelized by OpenMP, which means that the theoretical upper limit of speedup is limited according to Amdahl's law (a worked example follows this list).
* ''N'' processors in a symmetric multiprocessing (SMP) system may have ''N'' times the computation power, but the memory bandwidth usually does not scale up ''N'' times. Quite often, the original memory path is shared by multiple processors and performance degradation may be observed when they compete for the shared memory bandwidth.
* Many other common problems affecting the final speedup in parallel computing also apply to OpenMP, like load balancing and synchronization overhead.
* Compiler optimisation may not be as effective when invoking OpenMP. This can commonly lead to a single-threaded OpenMP program running slower than the same code compiled without an OpenMP flag (which will be fully serial).
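
As a brief worked illustration of the Amdahl's law bound (the figures are assumptions chosen for the example): if a fraction p = 0.9 of a program's runtime is parallelized and spread over N = 8 threads, the best achievable speedup is

    S = 1 / ((1 - p) + p/N) = 1 / (0.1 + 0.9/8) ≈ 4.7,

well short of the naive factor of 8; even with unlimited threads the speedup cannot exceed 1 / (1 - p) = 10.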


Thread affinity

Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores. This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).
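
A minimal sketch of requesting affinity from within the code (the ''proc_bind'' clause and the OMP_PLACES/OMP_PROC_BIND environment variables are part of OpenMP 4.0 and later; the choice of ''close'' binding here is illustrative):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Keep the team's threads on places close to the primary thread,
       e.g. consecutive cores when the program is run with OMP_PLACES=cores. */
    #pragma omp parallel proc_bind(close)
    printf("thread %d pinned near the primary thread\n", omp_get_thread_num());
    return 0;
}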


Benchmarks

A variety of benchmarks has been developed to demonstrate the use of OpenMP, test its performance and evaluate correctness.

Simple examples:
* OmpSCR: OpenMP Source Code Repository

Performance benchmarks include:
* NAS Parallel Benchmark
* Barcelona OpenMP Task Suite, a collection of applications that allow testing of OpenMP tasking implementations
* SPEC series
** SPEC OMP 2012
** The SPEC ACCEL benchmark suite testing OpenMP 4 target offloading API
** The SPEChpc 2002 benchmark
* CORAL benchmarks
* Exascale Proxy Applications
* Rodinia, focusing on accelerators
* Problem Based Benchmark Suite

Correctness benchmarks include:
* OpenMP Validation Suite
* OpenMP Validation and Verification Testsuite
* DataRaceBench, a benchmark suite designed to systematically and quantitatively evaluate the effectiveness of OpenMP data race detection tools
* AutoParBench, a benchmark suite to evaluate compilers and tools which can automatically insert OpenMP directives


See also

* Concurrency (computer science)
* Heterogeneous System Architecture
* Parallel programming model
* POSIX Threads
* Unified Parallel C
* Bulk synchronous parallel
* Partitioned global address space
* SequenceL


References


Further reading

* Quinn Michael J, ''Parallel Programming in C with MPI and OpenMP''. McGraw-Hill Inc., 2004.
* R. Chandra, R. Menon, L. Dagum, D. Kohr, D. Maydan, J. McDonald, ''Parallel Programming in OpenMP''. Morgan Kaufmann, 2000.
* R. Eigenmann (Editor), M. Voss (Editor), ''OpenMP Shared Memory Parallel Programming: International Workshop on OpenMP Applications and Tools, WOMPAT 2001, West Lafayette, IN, USA, July 30–31, 2001'' (Lecture Notes in Computer Science). Springer, 2001.
* B. Chapman, G. Jost, R. van der Pas, D. J. Kuck (foreword), ''Using OpenMP: Portable Shared Memory Parallel Programming''. The MIT Press (October 31, 2007).
* M. Firuziaan, O. Nommensen, Parallel Processing via MPI & OpenMP. Linux Enterprise, 10/2002
* MSDN Magazine article on OpenMP
* SC08 OpenMP Tutorial (PDF) – Hands-On Introduction to OpenMP, Mattson and Meadows, from SC08 (Austin)
* OpenMP Specifications
* Parallel Programming in Fortran 95 using OpenMP (PDF)


External links
