In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can do multiple operations in a given time. It has been a tradition of computer science to describe serial algorithms in abstract machine models, often the one known as the random-access machine. Similarly, many computer science researchers have used a so-called parallel random-access machine (PRAM) as a parallel, shared-memory abstract machine. Many parallel algorithms are executed concurrently – though in general concurrent algorithms are a distinct concept – and thus these concepts are often conflated, with which aspect of an algorithm is parallel and which is concurrent not being clearly distinguished. Further, non-parallel, non-concurrent algorithms are often referred to as "sequential algorithms", by contrast with concurrent algorithms.
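As a minimal illustration of the definition above, the following sketch sums a large list by reducing disjoint chunks on several processes at once and then combining the partial results serially. The chunking scheme and the use of Python's multiprocessing module are illustrative assumptions, not prescribed by any particular abstract machine model.

from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker reduces its own chunk independently of the others.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(data[n_workers * size:])  # any leftover elements

    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)  # chunks reduced in parallel

    print(sum(partials))  # partial results combined serially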


Parallelizability

Algorithms vary significantly in how parallelizable they are, ranging from easily parallelizable to completely unparallelizable. Further, a given problem may accommodate different algorithms, which may be more or less parallelizable. Some problems are easy to divide up into pieces in this way – these are called ''embarrassingly parallel'' problems. Examples include many algorithms to solve Rubik's Cubes and to find values which result in a given hash. Some problems cannot be split up into parallel portions, as they require the results from a preceding step to effectively carry on with the next step – these are called ''inherently serial problems''. Examples include iterative numerical methods, such as Newton's method, iterative solutions to the three-body problem, and most of the available algorithms to compute pi (π). Some sequential algorithms can be converted into parallel algorithms using automatic parallelization.
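As a concrete sketch of the embarrassingly parallel case, the program below searches for integers whose SHA-256 digest begins with a chosen hex prefix; every candidate can be tested independently, so the search range splits cleanly among workers. The prefix, range sizes, and use of Python's multiprocessing module are illustrative assumptions, not taken from the text above.

import hashlib
from multiprocessing import Pool

PREFIX = "0000"  # illustrative target prefix

def search_range(bounds):
    # Each worker scans a disjoint range; no coordination is needed
    # until the results are collected at the end.
    lo, hi = bounds
    for n in range(lo, hi):
        if hashlib.sha256(str(n).encode()).hexdigest().startswith(PREFIX):
            return n  # first hit in this worker's range
    return None

if __name__ == "__main__":
    ranges = [(i * 100_000, (i + 1) * 100_000) for i in range(8)]
    with Pool(4) as pool:
        hits = [r for r in pool.map(search_range, ranges) if r is not None]
    print(hits)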


Motivation

Parallel algorithms on individual devices have become more common since the early 2000s because of substantial improvements in multiprocessing systems and the rise of multi-core processors. Up until the end of 2004, single-core processor performance rapidly increased via frequency scaling, and thus it was easier to construct a computer with a single fast core than one with many slower cores with the same throughput, so multicore systems were of more limited use. Since 2004, however, frequency scaling has hit a wall, and thus multicore systems have become more widespread, making parallel algorithms of more general use.


Issues


Communication

The cost or complexity of serial algorithms is estimated in terms of the space (memory) and time (processor cycles) that they take. Parallel algorithms need to optimize one more resource: the communication between different processors. There are two ways parallel processors communicate: shared memory or message passing. Shared-memory processing needs additional locking for the data, imposes the overhead of additional processor and bus cycles, and also serializes some portion of the algorithm. Message-passing processing uses channels and message boxes, but this communication adds transfer overhead on the bus, additional memory for queues and message boxes, and latency in the messages. Parallel processor designs use special buses, such as a crossbar, to keep the communication overhead small, but it is the parallel algorithm that determines the volume of the traffic. If the communication overhead of additional processors outweighs the benefit of adding another processor, one encounters parallel slowdown.
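The two communication styles can be sketched with Python's multiprocessing primitives standing in for the hardware mechanisms described above; the counter workload, worker count, and queue-based channel are illustrative assumptions.

from multiprocessing import Process, Value, Queue, Lock

def shared_memory_worker(counter, lock, n):
    # Shared memory: every update must take the lock, which
    # serializes this portion of the algorithm.
    for _ in range(n):
        with lock:
            counter.value += 1

def message_passing_worker(queue, n):
    # Message passing: the worker sends its result over a channel;
    # no shared state is locked, but each message costs transfer
    # overhead, queue memory, and latency.
    queue.put(n)

if __name__ == "__main__":
    counter, lock = Value("i", 0), Lock()
    workers = [Process(target=shared_memory_worker, args=(counter, lock, 1000))
               for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()
    print(counter.value)  # 4000, accumulated via locked shared memory

    queue = Queue()
    workers = [Process(target=message_passing_worker, args=(queue, 1000))
               for _ in range(4)]
    for w in workers: w.start()
    total = sum(queue.get() for _ in workers)
    for w in workers: w.join()
    print(total)  # 4000, accumulated via messages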


Load balancing

Another problem with parallel algorithms is ensuring that they are suitably load balanced, by ensuring that ''load'' (overall work) is balanced, rather than input size being balanced. For example, checking all numbers from one to a hundred thousand for primality is easy to split among processors; however, if the numbers are simply divided out evenly (1–1,000, 1,001–2,000, etc.), the amount of work will be unbalanced, as smaller numbers are easier to process by this algorithm (easier to test for primality), and thus some processors will get more work to do than the others, which will sit idle until the loaded processors complete.
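A sketch of this primality example: a static block split would hand all the cheap small numbers to one worker, whereas distributing many small chunks dynamically keeps every worker busy until the whole range is done. The pool size and chunk size below are illustrative choices.

from multiprocessing import Pool

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:  # trial division: larger n costs more work
        if n % i == 0:
            return False
        i += 1
    return True

if __name__ == "__main__":
    numbers = range(1, 100_001)
    with Pool(4) as pool:
        # chunksize=1000 hands out work in small pieces, so a worker that
        # finishes its cheap chunks early simply takes more; a single
        # static split per worker would leave it idle instead.
        count = sum(pool.imap_unordered(is_prime, numbers, chunksize=1000))
    print(count)  # 9592 primes below 100,000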


Distributed algorithms

A subtype of parallel algorithms, ''distributed algorithms'', are algorithms designed to work in cluster computing and distributed computing environments, where additional concerns beyond the scope of "classical" parallel algorithms need to be addressed.


See also

* Multiple-agent system (MAS)
* Parallel algorithms for matrix multiplication
* Parallel algorithms for minimum spanning trees
* Parallel computing
* Parareal




External links


Designing and Building Parallel Programs, US Argonne National Laboratory