Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied to regular data structures like arrays and matrices by working on each element in parallel. It contrasts with task parallelism as another form of parallelism.

A data parallel job on an array of ''n'' elements can be divided equally among all the processors. Suppose we want to sum all the elements of a given array and that the time for a single addition operation is Ta time units. In the case of sequential execution, the time taken by the process will be ''n''×Ta time units, as it sums up all the elements of the array. On the other hand, if we execute this job as a data parallel job on 4 processors, the time taken reduces to (''n''/4)×Ta + merging overhead time units. Parallel execution thus yields a speedup of 4 over sequential execution. Note that the locality of data references plays an important part in evaluating the performance of a data parallel programming model: it depends on the memory accesses performed by the program as well as on the size of the cache.
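A minimal sketch of this parallel summation in C with OpenMP (matching the examples later in this article); the array contents here are arbitrary placeholders, and OpenMP's reduction clause performs the merging step described above:

#include <stdio.h>

// Minimal sketch: each thread sums a chunk of the array in parallel;
// the reduction clause merges the per-thread partial sums (the
// "merging overhead" mentioned above).
int main(void)
{
    enum { N = 1000000 };
    static double d[N];
    for (int i = 0; i < N; i++)
        d[i] = 1.0; // placeholder data

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += d[i];

    printf("sum = %f\n", sum); // expected: 1000000.0
    return 0;
}

Compiled with OpenMP support (e.g. gcc -fopenmp), the loop iterations are divided among the available threads; without it, the pragma is ignored and the code runs sequentially.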


History

Exploitation of the concept of data parallelism started in the 1960s with the development of the Solomon machine. The Solomon machine, also called a vector processor, was developed to expedite the performance of mathematical operations by working on a large data array (operating on multiple data in consecutive time steps). Concurrency of data operations was also exploited by operating on multiple data at the same time using a single instruction; these processors were called 'array processors'. In the 1980s, the term was introduced to describe this programming style, which was widely used to program Connection Machines in data parallel languages like C*. Today, data parallelism is best exemplified in graphics processing units (GPUs), which use both techniques: operating on multiple data in space and in time using a single instruction.


Description

In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different distributed data. In some situations, a single execution thread controls operations on all the data. In others, different threads control the operation, but they execute the same code. For instance, consider matrix multiplication and addition performed sequentially, as discussed in the example below.


Example

Below is the sequential pseudo-code for multiplication and addition of two matrices, where the result is stored in the matrix C. The pseudo-code for multiplication calculates the dot products of the matrices A and B and stores the results into the output matrix C. If the following programs were executed sequentially, the time taken to calculate the result would be O(n^3) for multiplication and O(n) for addition (assuming the row lengths and column lengths of both matrices are n).

// Matrix multiplication
for (i = 0; i < row_length_A; i++)
{
    for (k = 0; k < column_length_B; k++)
    {
        sum = 0;
        for (j = 0; j < column_length_A; j++)
        {
            sum += A[i][j] * B[j][k];
        }
        C[i][k] = sum;
    }
}

// Array addition
for (i = 0; i < n; i++)
{
    c[i] = a[i] + b[i];
}

We can exploit data parallelism in the preceding code to execute it faster, as the arithmetic is loop independent. Parallelization of the matrix multiplication code is achieved by using
OpenMP. An OpenMP directive, "omp parallel for", instructs the compiler to execute the code in the for loop in parallel. For multiplication, we can divide matrices A and B into blocks along rows and columns respectively. This allows us to calculate every element in matrix C individually, thereby making the task parallel. For example, the product A(m×n) · B(n×k) can be finished in O(n) instead of O(m·n·k) when executed in parallel using m·k processors.

// Matrix multiplication in parallel
#pragma omp parallel for schedule(dynamic,1) collapse(2)
for (int i = 0; i < row_length_A; i++)
{
    for (int k = 0; k < column_length_B; k++)
    {
        int sum = 0; // declared inside the loop so each iteration has a private copy
        for (int j = 0; j < column_length_A; j++)
        {
            sum += A[i][j] * B[j][k];
        }
        C[i][k] = sum;
    }
}

It can be observed from the example that many processors are required as the matrix sizes keep increasing. Keeping the execution time low is the priority, but as the matrix size increases, we face other constraints such as the complexity of such a system and its associated costs. Therefore, even when constraining the number of processors in the system, we can still apply the same principle and divide the data into bigger chunks to calculate the product of the two matrices.

For addition of arrays in a data parallel implementation, let's assume a more modest system with two
central processing units (CPUs), A and B. CPU A could add all elements from the top half of the arrays, while CPU B could add all elements from the bottom half of the arrays. Since the two processors work in parallel, the job of performing array addition would take one half the time of performing the same operation in serial using one CPU alone. The program expressed in
pseudocode
below, which applies some arbitrary operation, foo, to every element in the array d, illustrates data parallelism:

if CPU = "a" then
    lower_limit := 1
    upper_limit := round(d.length / 2)
else if CPU = "b" then
    lower_limit := round(d.length / 2) + 1
    upper_limit := d.length

for i from lower_limit to upper_limit by 1 do
    foo(d[i])

Note: some input data (e.g. when d.length evaluates to 1 and round rounds towards zero; this is just an example, there are no requirements on what type of rounding is used) will lead to lower_limit being greater than upper_limit. It is assumed that the loop will exit immediately (i.e. zero iterations will occur) when this happens.
In an SPMD system executed on this 2-processor machine, both CPUs will execute the code. Data parallelism emphasizes the distributed (parallel) nature of the data, as opposed to the processing (task parallelism). Most real programs fall somewhere on a continuum between task parallelism and data parallelism.
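A minimal sketch of the same idea in C with OpenMP, pinning the work to two threads whose ids play the roles of CPUs A and B (foo here is a hypothetical stand-in for the arbitrary operation above):

#include <stdio.h>
#include <omp.h>

#define N 10

// Hypothetical stand-in for the arbitrary operation "foo".
static void foo(double x)
{
    printf("%f\n", x);
}

int main(void)
{
    double d[N] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};

    // SPMD style: both threads execute the same block; each derives
    // its own loop bounds from its thread id, as in the pseudocode.
    #pragma omp parallel num_threads(2)
    {
        int id = omp_get_thread_num();     // 0 plays "CPU A", 1 plays "CPU B"
        int lower = (id == 0) ? 0 : N / 2; // first or second half
        int upper = (id == 0) ? N / 2 : N;
        for (int i = lower; i < upper; i++)
            foo(d[i]);
    }
    return 0;
}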


Steps to parallelization

The process of parallelizing a sequential program can be broken down into four discrete steps: decomposition of the computation into tasks, assignment of tasks to processes, orchestration of data access, communication, and synchronization among the processes, and mapping of processes to processors.


Data parallelism vs. task parallelism


Data parallelism vs. model parallelism


Mixed data and task parallelism

Data and task parallelism can be implemented simultaneously by combining them for the same application. This is called mixed data and task parallelism. Mixed parallelism requires sophisticated scheduling algorithms and software support. It is the best kind of parallelism when communication is slow and the number of processors is large. Mixed data and task parallelism has many applications. It is particularly used in the following applications:

# Global climate modeling. Large data parallel computations are performed by creating grids of data representing the earth's atmosphere and oceans, and task parallelism is employed for simulating the function and model of the physical processes.

# Timing-based circuit simulation. The data is divided among different sub-circuits and parallelism is achieved with orchestration from the tasks.


Data parallel programming environments

A variety of data parallel programming environments are available today, the most widely used of which are:

# Message Passing Interface (MPI): a cross-platform message passing programming interface for parallel computers. It defines the semantics of library functions that allow users to write portable message passing programs in C, C++ and Fortran.

# Open Multi-Processing (OpenMP): an application programming interface (API) that supports shared memory programming models on multiple platforms of multiprocessor systems.

# CUDA and OpenACC: parallel computing API platforms designed to allow a software engineer to utilize a GPU's computational units for general purpose processing.

# Threading Building Blocks and RaftLib: both open source programming environments that enable mixed data/task parallelism in C/C++ environments across heterogeneous resources.
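As an illustration, a minimal MPI sketch of a data parallel sum in C (assuming an MPI implementation such as Open MPI or MPICH is installed, and that the array length divides evenly among the ranks):

#include <stdio.h>
#include <mpi.h>

// Each rank sums its own contiguous slice of the array; MPI_Reduce
// then merges the partial sums on rank 0.
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    enum { N = 8 };
    double data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

    // Assumes N is divisible by the number of ranks.
    int chunk = N / size;
    double local = 0.0;
    for (int i = rank * chunk; i < (rank + 1) * chunk; i++)
        local += data[i];

    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f\n", total); // expected: 36.0

    MPI_Finalize();
    return 0;
}

Run with, e.g., mpirun -np 2 ./a.out; each rank executes the same program on a different slice of the data.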


Applications

Data parallelism finds its applications in a variety of fields ranging from physics, chemistry, biology and material sciences to signal processing. The sciences employ data parallelism for simulating models like molecular dynamics, for sequence analysis of genome data and for other physical phenomena. Driving forces in signal processing for data parallelism are video encoding, image and graphics processing and wireless communications, to name a few.


See also

* Active message
* Instruction level parallelism
* Scalable parallelism
* Thread level parallelism
* Parallel programming model

