Massively Parallel Computing

Massively parallel is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of coordinated computations in parallel. GPUs are massively parallel architectures with tens of thousands of threads.
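As a concrete illustration of that GPU style of parallelism, the following is a minimal CUDA sketch; it assumes a CUDA-capable GPU and the CUDA toolkit, and the kernel name, array size, and launch configuration are illustrative choices rather than details taken from any particular system. A single kernel launch creates roughly a million lightweight threads, one per array element, which the hardware schedules across the GPU's cores.

    #include <cuda_runtime.h>

    // Each thread handles exactly one array element, so a large input
    // fans out across a very large number of concurrently scheduled threads.
    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;                       // ~1 million elements (illustrative size)
        float *d = nullptr;
        cudaMalloc((void **)&d, n * sizeof(float));  // device buffer
        cudaMemset(d, 0, n * sizeof(float));

        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // 4096 blocks
        scale<<<blocks, threadsPerBlock>>>(d, 2.0f, n);            // ~1,000,000 threads in one launch
        cudaDeviceSynchronize();                                   // wait for the kernel to finish

        cudaFree(d);
        return 0;
    }

Compiled with nvcc, the single launch above fans the work out over 4,096 blocks of 256 threads each, which is the scale of thread count referred to above.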
One approach to massively parallel computing is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available (Prodan & Fahringer 2007, pp. 1–4). An example is BOINC, a volunteer-based, opportunistic grid system, whereby the grid provides power only on a best-effort basis (Fernández de Vega 2010, pp. 65–68).

Another approach is grouping many processors in close proximity to each other, as in a computer cluster. In such a centralized system the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects (Knight 2007).
The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing many processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.

The Goodyear MPP was an early implementation of a massively parallel computer architecture. As of November 2013, MPP architectures are the second most common supercomputer implementation after clusters. Data warehouse appliances such as Teradata, Netezza, and Microsoft's PDW commonly implement an MPP architecture to handle the processing of very large amounts of data in parallel.
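To give a flavour of how very large data sets are processed in parallel, here is a hedged CUDA sketch of a partition-then-combine aggregation; it assumes a CUDA-capable GPU, and the grid size, data size, and single atomic accumulator are illustrative simplifications rather than a description of how any particular appliance works. Each thread sums its own slice of the input, and the per-thread partial sums are then combined.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Partition-then-combine: every thread strides over its own subset of the
    // input, accumulates a private partial sum, and merges it atomically.
    __global__ void partialSum(const float *data, float *total, int n) {
        int stride = gridDim.x * blockDim.x;             // total number of threads
        float local = 0.0f;
        for (int k = blockIdx.x * blockDim.x + threadIdx.x; k < n; k += stride)
            local += data[k];
        atomicAdd(total, local);                         // combine partial results
    }

    int main() {
        const int n = 1 << 24;                           // ~16 million values (illustrative)
        float *d_data = nullptr, *d_total = nullptr;
        cudaMalloc((void **)&d_data, n * sizeof(float));
        cudaMalloc((void **)&d_total, sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));        // zero-filled demo input
        cudaMemset(d_total, 0, sizeof(float));

        partialSum<<<1024, 256>>>(d_data, d_total, n);   // 262,144 threads, each owning a slice
        cudaDeviceSynchronize();

        float total = 0.0f;
        cudaMemcpy(&total, d_total, sizeof(float), cudaMemcpyDeviceToHost);
        printf("sum = %f\n", total);                     // 0 here, since the demo input is all zeros

        cudaFree(d_data);
        cudaFree(d_total);
        return 0;
    }

MPP data warehouse systems apply the same partition-and-merge idea at a much coarser grain, splitting the data across independent, shared-nothing nodes and merging per-node partial results.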


See also

* Multiprocessing
* Embarrassingly parallel
* Parallel computing
* Process-oriented programming
* Shared-nothing architecture (SN)
* Symmetric multiprocessing (SMP)
* Connection Machine
* Cellular automaton
* CUDA framework
* Manycore processor
* Vector processor


References

* Fernández de Vega, Francisco. ''Parallel and Distributed Computational Intelligence''. 2010.
* Knight, Will. "IBM creates world's most powerful computer". ''NewScientist.com news service'', June 2007.
* Prodan, Radu; Fahringer, Thomas. ''Grid computing: experiment management, tool integration, and scientific workflows''. 2007.