Scalable parallelism

Software is said to exhibit scalable parallelism if it can make use of additional processors to solve larger problems, i.e., this term refers to software for which Gustafson's law holds.
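In its usual formulation (notation ours, not from this article), Gustafson's law gives the scaled speedup achievable on N processors as

    S(N) = N - s(N - 1)

where s is the serial fraction of the workload; when s is small, S(N) grows almost linearly with N, which is precisely the behavior scalable parallelism aims for.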
Consider a program whose execution time is dominated by one or more loops, each of which updates every element of an array, for example the following finite difference heat equation stencil calculation:

    for t := 0 to T do
        for i := 1 to N-1 do
            new(i) := (A(i-1) + A(i) + A(i) + A(i+1)) * .25
                // explicit forward-difference with R = 0.25
        end
        for i := 1 to N-1 do
            A(i) := new(i)
        end
    end

In the above code, we can execute all iterations of each "i" loop concurrently, i.e., turn each into a parallel loop. In such cases, it is often possible to make effective use of twice as many processors for a problem of array size 2N as for a problem of array size N. As in this example, scalable parallelism is typically a form of data parallelism, and is often the target of automatic parallelization of loops.

Distributed computing systems and non-uniform memory access architectures are typically the most easily scaled to large numbers of processors, and thus would seem a natural target for software that exhibits scalable parallelism. However, applications with scalable parallelism may not have parallelism of sufficiently coarse grain to run effectively on such systems (unless the software is embarrassingly parallel). In our example above, the second "i" loop is embarrassingly parallel, but in the first loop each iteration requires results produced by several iterations of the previous time step. Thus, for the first loop, parallelization may involve extensive communication or synchronization among processors, and will therefore only yield a net speedup if such interactions have very low overhead, or if the code can be transformed to resolve the issue (i.e., by a combined scalable locality/scalable parallelism optimization).
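As a concrete illustration, the two "i" loops above could be turned into parallel loops with OpenMP in C along the following lines. This is only a sketch under assumed conventions: the function name heat_stencil, the array size N+1, and the copy-back buffer new_vals are ours, not part of the original pseudocode.

    #include <omp.h>

    /* Parallel sketch of the stencil above. A and new_vals each hold
       N+1 doubles (indices 0..N); A(0) and A(N) are fixed boundaries. */
    void heat_stencil(double *A, double *new_vals, int N, int T)
    {
        for (int t = 0; t <= T; t++) {
            /* Each iteration reads only values from the previous time
               step, so all N-1 iterations can run concurrently. */
            #pragma omp parallel for
            for (int i = 1; i <= N - 1; i++)
                new_vals[i] = (A[i-1] + A[i] + A[i] + A[i+1]) * 0.25;

            /* Copy-back loop: embarrassingly parallel. */
            #pragma omp parallel for
            for (int i = 1; i <= N - 1; i++)
                A[i] = new_vals[i];
        }
    }

The implicit barrier at the end of each "parallel for" is exactly the per-time-step synchronization discussed above; doubling N doubles the number of independent iterations in each loop, which is what allows a problem of size 2N to profitably use twice as many processors.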


Languages

* Ateji PX, an extension of Java making scalable parallelism possible on the Java Virtual Machine (JVM)
* BMDFM, the Binary Modular DataFlow Machine, which runs applications in parallel on shared-memory symmetric multiprocessing (SMP) computers
* SequenceL, a general purpose functional programming language, whose primary design objectives are performance on multicore hardware, ease of programming, and code clarity/readability


References

* David Wonnacott. ''Achieving Scalable Locality with Time Skewing.'' International Journal of Parallel Programming 30.3 (2002)

External links

* Demystify Scalable Parallelism with Intel Threading Building Block's Generic Parallel Algorithms: http://www.devx.com/cplus/Article/32935