In computer programming, explicit parallelism is the representation of concurrent computations by means of primitives in the form of special-purpose directives or function calls. Most parallel primitives are related to process synchronization, communication, or task partitioning. Because they seldom contribute to actually carrying out the intended computation of the program, their computational cost is often counted as parallelization overhead.
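As an illustration, the following C sketch uses the Message Passing Interface (one of the systems listed below). The computation itself, summing one integer contributed by each process, is a hypothetical toy chosen for brevity; what matters is that every MPI_* call is an explicit parallel primitive of the kind described above, and that MPI_Init and MPI_Finalize contribute nothing to the result, only overhead.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* Explicit primitives: runtime setup and process identification.
       These calls do no useful computation; they are pure overhead. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes run? */

    /* Task partitioning: each process computes its own partial value. */
    int local = rank + 1;
    int total = 0;

    /* Explicit communication/synchronization primitive: combine the
       partial values from all processes into one sum at process 0. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..%d = %d\n", size, total);

    MPI_Finalize();   /* tear down the parallel runtime (overhead) */
    return 0;
}

Here the programmer decides explicitly how the work is divided (by rank) and when the processes communicate (the single MPI_Reduce call); nothing about the parallel execution is left to the compiler or runtime to infer.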
The advantage of explicit parallel programming is the absolute control it gives the programmer over the parallel execution. A skilled parallel programmer can take advantage of explicit parallelism to produce highly efficient code. However, programming with explicit parallelism is often difficult, especially for non-specialists, because of the extra work involved in planning the division of tasks and the synchronization of concurrent processes.
In some instances, explicit parallelism may be avoided with the use of an optimizing compiler that automatically extracts the parallelism inherent in computations (see implicit parallelism).
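By contrast with the MPI sketch above, the loop below (again a made-up example) contains no parallel primitives at all; its independent iterations are the kind of inherent parallelism that an auto-parallelizing compiler, for example GCC with its -ftree-parallelize-loops option, may be able to extract without any change to the source.

/* No parallel primitives appear in this code. Each iteration is
   independent of the others, so an auto-parallelizing compiler may
   distribute the iterations across threads automatically. */
void scale(double *out, const double *in, double k, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = k * in[i];
}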
Programming with explicit parallelism
* Occam (programming language)
* Erlang (programming language)
* Message Passing Interface
* Parallel Virtual Machine
* Ease programming language
* Ada programming language
* Java programming language
* JavaSpaces