In parallel computing, granularity (or grain size) of a task is a measure of the amount of work (or computation) which is performed by that task. Another definition of granularity takes into account the communication overhead between multiple processors or processing elements. It defines granularity as the ratio of computation time to communication time, where computation time is the time required to perform the computation of a task and communication time is the time required to exchange data between processors. If T_comp is the computation time and T_comm denotes the communication time, then the granularity G of a task can be calculated as:

    G = \frac{T_\mathrm{comp}}{T_\mathrm{comm}}

Granularity is usually measured in terms of the number of instructions executed in a particular task. Alternatively, granularity can also be specified in terms of the execution time of a program, combining the computation time and communication time.
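As a minimal illustration of this ratio (the function name and timing values below are made up, not taken from the article), granularity can be computed directly from measured computation and communication times:

    # Sketch: granularity as the ratio of computation time to communication time.
    # The timing values are hypothetical.

    def granularity(computation_time: float, communication_time: float) -> float:
        """Return G = T_comp / T_comm for one task."""
        return computation_time / communication_time

    # A task that computes for 80 ms and communicates for 20 ms is relatively coarse;
    # one that computes for 10 ms and communicates for 40 ms is relatively fine.
    print(granularity(0.080, 0.020))  # 4.0
    print(granularity(0.010, 0.040))  # 0.25

A ratio well above 1 indicates that computation dominates (a coarser task); a ratio below 1 indicates that communication dominates (a finer task).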


Types of parallelism

Depending on the amount of work which is performed by a parallel task, parallelism can be classified into three categories: fine-grained, medium-grained and coarse-grained parallelism.


Fine-grained parallelism

In fine-grained parallelism, a program is broken down into a large number of small tasks. These tasks are assigned individually to many processors. The amount of work associated with each parallel task is low, and the work is evenly distributed among the processors. Hence, fine-grained parallelism facilitates load balancing. As each task processes less data, the number of processors required to perform the complete processing is high. This, in turn, increases the communication and synchronization overhead. Fine-grained parallelism is best exploited in architectures which support fast communication; shared-memory architectures, which have low communication overhead, are most suitable for it. It is difficult for programmers to detect parallelism in a program, so it is usually the compiler's responsibility to detect fine-grained parallelism. An example of a fine-grained system (from outside the parallel computing domain) is the system of neurons in the brain. The Connection Machine (CM-2) and the J-Machine are examples of fine-grained parallel computers with a grain size in the range of 4-5 μs.
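As a rough sketch of fine-grained decomposition (a made-up Python illustration, not from the article; in CPython the thread pool shows the task structure rather than true CPU parallelism), each element of the workload becomes its own tiny task:

    # Fine-grained decomposition: one tiny task per element.
    # Illustrative sketch only; names and sizes are invented.
    from concurrent.futures import ThreadPoolExecutor

    def tiny_task(x):
        # Very little computation per task, so scheduling and
        # result-collection overhead dominates.
        return x * x

    data = range(10_000)

    with ThreadPoolExecutor(max_workers=8) as pool:
        # Each element is submitted as its own task (grain size = 1 element),
        # giving good load balance but many task hand-offs.
        results = list(pool.map(tiny_task, data))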


Coarse-grained parallelism

In coarse-grained parallelism, a program is split into large tasks. Due to this, a large amount of computation takes place in each processor. This might result in load imbalance, wherein certain tasks process the bulk of the data while others are idle. Further, coarse-grained parallelism fails to exploit much of the parallelism in the program, as most of the computation is performed sequentially on a processor. The advantage of this type of parallelism is low communication and synchronization overhead. Message-passing architectures take a long time to communicate data among processes, which makes them suitable for coarse-grained parallelism. The Cray Y-MP is an example of a coarse-grained parallel computer, with a grain size of about 20 s.
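A corresponding coarse-grained sketch (again a made-up illustration, not from the article) splits the same workload into only a few large chunks, one per worker, so there are few task hand-offs but a slow chunk can leave the other workers idle:

    # Coarse-grained decomposition: a few large tasks, one chunk per worker.
    # Illustrative sketch only; names and sizes are invented.
    from concurrent.futures import ThreadPoolExecutor

    def big_task(chunk):
        # A large amount of computation per task; only one result
        # is handed back per chunk.
        return [x * x for x in chunk]

    data = list(range(10_000))
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = [r for part in pool.map(big_task, chunks) for r in part]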


Medium-grained parallelism

Medium-grained parallelism is defined relative to fine-grained and coarse-grained parallelism. It is a compromise between the two, in which the task size and communication time are greater than in fine-grained parallelism but smaller than in coarse-grained parallelism. Most general-purpose parallel computers fall into this category. The Intel iPSC is an example of a medium-grained parallel computer, with a grain size of about 10 ms.


Example

Consider a stack of 20 images, each 10x10 pixels, that needs to be processed, assuming that each of the 100 pixels in an image can be processed independently of the others and that processing one pixel takes one clock cycle.

Fine-grained parallelism: each pixel is processed individually by one processor at a time. Assuming 100 processors are responsible for processing an image, the 100 processors can process one 10x10 image in a single clock cycle; with 20 processors, it would take 5 clock cycles per image. Each processor can be utilized for 100% of its available time, but the result of each pixel computation needs to be communicated and aggregated at the end of each image, which can cause a lot of overhead (100 communications per image, 2000 in total).

Medium-grained parallelism: the images are split into quarters. Each quarter is processed individually by one processor, taking 25 clock cycles (for 5x5 pixels). Assuming 20 processors are responsible for processing the stack of 20 images, 5 images can be processed in parallel with 4 processors working on each image. If 100 processors were available, 80 could process the stack in parallel in 25 clock cycles, while 20 processors would sit idle without any work assigned to them. Once the four quarters have been processed, the results must be aggregated (4 communications per image, 80 in total).

Coarse-grained parallelism: a full image is processed by a single processor, taking 100 clock cycles. In this case only 20 processors can be used at a time, completing the work in 100 clock cycles without any communication.

The decision on which approach is best depends on the workload and the available processing units. The goal should be to maximize parallelization (split the work into enough units to distribute it evenly across most of the available processors) while minimizing communication overhead (the ratio of time spent on communication to time spent on computation). In this example, if the number of images to process is high compared to the number of workers, it does not make sense to break images down into smaller units, since each worker will receive enough load. If the number of images is small compared to the number of workers, some workers might sit idle and waste computation time; however, this is only a problem if processing a single image takes a long time. If the processing is very fast, splitting the work into smaller units might make the total operation slower, since the time lost to communication exceeds the time gained through parallelization.
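The counts above can be reproduced with a short sketch. The helper below is ad hoc (not from the article) but follows the stated assumptions: 1 cycle per pixel, one result communicated per task, and tasks scheduled in waves over the available processors:

    # Sketch: cycle and communication counts for the image-stack example.
    # Assumptions match the text: 20 images of 10x10 pixels, 1 cycle per pixel,
    # one communication per task result.
    import math

    def cost(num_images, pixels_per_image, pixels_per_task, num_processors):
        tasks_per_image = pixels_per_image // pixels_per_task
        total_tasks = num_images * tasks_per_image
        cycles_per_task = pixels_per_task              # 1 cycle per pixel
        waves = math.ceil(total_tasks / num_processors)  # tasks run in waves of P at a time
        total_cycles = waves * cycles_per_task
        communications = total_tasks                   # one result sent back per task
        return total_cycles, communications

    print(cost(20, 100, 1, 100))    # fine:   (20, 2000)  -> 1 cycle per image
    print(cost(20, 100, 25, 100))   # medium: (25, 80)    -> 80 processors busy, 20 idle
    print(cost(20, 100, 100, 20))   # coarse: (100, 20)   -> one result return per image,
                                    #         no intra-image aggregation needed

Note that for the coarse-grained case the article counts zero communications because no aggregation within an image is required; the model above still counts one result return per task.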


Levels of parallelism

Granularity is closely tied to the level of processing. A program can be broken down into four levels of parallelism:

1. Instruction level
2. Loop level
3. Sub-routine level
4. Program level

The highest amount of parallelism is achieved at the instruction level, followed by loop-level parallelism. At the instruction and loop levels, fine-grained parallelism is achieved: the typical grain size at the instruction level is 20 instructions, while the grain size at the loop level is 500 instructions. At the sub-routine (or procedure) level the grain size is typically a few thousand instructions, and medium-grained parallelism is achieved. At the program level, whole programs execute in parallel and the granularity can be in the range of tens of thousands of instructions; coarse-grained parallelism is used at this level. In short, as the level rises from instruction to program, the grain size increases and the degree of parallelism decreases.
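As a rough, made-up illustration of where these levels appear in ordinary code:

    # Illustrative only (invented code): where the four levels of parallelism appear.

    def process_block(block):
        # Sub-routine level: independent calls to process_block on different
        # blocks of data could run on different processors.
        out = []
        for x in block:                     # Loop level: these iterations are independent
            out.append(x * x + 2 * x + 1)   # Instruction level: the multiply and adds within
                                            # one iteration can overlap in the processor pipeline.
        return out

    # Program level: this whole program and another, unrelated program can
    # execute concurrently on different processors of the same machine.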


Impact of granularity on performance

Granularity affects the performance of parallel computers. Using fine grains or small tasks results in more parallelism and hence increases the speedup. However, synchronization overhead, scheduling strategies, etc. can negatively impact the performance of fine-grained tasks, so increasing parallelism alone cannot give the best performance. To reduce the communication overhead, granularity can be increased: coarse-grained tasks have less communication overhead, but they often cause load imbalance. Hence optimal performance is achieved between the two extremes of fine-grained and coarse-grained parallelism. Various studies have proposed solutions to help determine the best granularity to aid parallel processing; finding the best grain size depends on a number of factors and varies greatly from problem to problem.
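A simplified cost model (an illustration under assumed per-task overhead and per-unit compute times, not a formula from the article or the literature) makes the trade-off concrete: per-task overhead penalizes very small grains, while load imbalance across a fixed number of processors penalizes very large ones:

    # Simplified cost model (illustrative assumptions, not from the article):
    # N work units, P processors, grain g units per task,
    # t_comp time per unit of work, t_comm fixed overhead per task.
    import math

    def parallel_time(N, P, g, t_comp=1.0, t_comm=50.0):
        tasks = math.ceil(N / g)
        waves = math.ceil(tasks / P)            # tasks scheduled in waves of P at a time
        return waves * (g * t_comp + t_comm)    # each wave: compute one grain plus its overhead

    N, P = 10_000, 16
    for g in (1, 10, 100, 1_000, 10_000):
        print(f"grain {g:>6}: time {parallel_time(N, P, g):>8.0f}")
    # Very small grains pay the per-task overhead many times over; very large
    # grains leave processors idle, so the best time lies between the extremes.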


See also

* Instruction-level parallelism
* Data parallelism

