Duncan's taxonomy is a classification of computer architectures, proposed by Ralph Duncan in 1990.[Duncan, Ralph, "A Survey of Parallel Computer Architectures", IEEE Computer, February 1990, pp. 5-16.] Duncan suggested modifications to Flynn's taxonomy[Flynn, M.J., "Very High Speed Computing Systems", Proc. IEEE, Vol. 54, 1966, pp. 1901-1909.] to include pipelined vector processors.[Introduction to Parallel Algorithms.]


Taxonomy

The taxonomy was developed during 1988-1990 and was first published in 1990. Its original categories are indicated below.


Synchronous architectures

This category includes all the parallel architectures that coordinate concurrent execution in lockstep fashion and do so via mechanisms such as global clocks, central control units or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism.


Pipelined vector processors

''Pipelined vector processors'' are characterized by pipelined functional units that accept a sequential stream of array or ''vector'' elements, such that different stages in a filled pipeline are processing different elements of the vector at a given time.[Hwang, K., ed., Tutorial Supercomputers: Design and Applications, Computer Society Press, Los Alamitos, California, 1984, esp. chapters 1 and 2.] Parallelism is provided by the pipelining within individual functional units, by operating multiple units of this kind in parallel, and by ''chaining'' the output of one unit into another unit as input. Vector architectures that stream vector elements into functional units from special vector registers are termed ''register-to-register'' architectures, while those that feed functional units from special memory buffers are designated ''memory-to-memory'' architectures. Early examples of register-to-register architectures include the Cray-1[Russell, R.M., "The CRAY-1 Computer System", Comm. ACM, Jan. 1978, pp. 63-72.] and the Fujitsu VP-200, while the Control Data Corporation STAR-100, the CDC 205 and the Texas Instruments Advanced Scientific Computer are early examples of memory-to-memory vector architectures.[Watson, W.J., "The ASC: a Highly Modular Flexible Super Computer Architecture", Proc. AFIPS Fall Joint Computer Conference, 1972, pp. 221-228.] The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and the Nippon Electric Corporation SX-3, that supported 4-10 vector processors with a shared memory (see NEC SX architecture).
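The pipelining and ''chaining'' described above can be illustrated with a toy Python sketch, in which generators stand in for pipelined functional units. The function and variable names here are hypothetical, chosen purely for illustration, and do not correspond to any real machine's instruction set:

```python
# Toy model of a register-to-register vector unit with chaining.
# Each generator is a "functional unit" that consumes one element per step,
# so different elements are in flight in different stages at a given time.

def vadd(a_stream, b_stream):
    # Pipelined add unit: emits a result as each element pair arrives.
    for a, b in zip(a_stream, b_stream):
        yield a + b

def vmul(x_stream, c_stream):
    # Pipelined multiply unit; here its input is chained from vadd's output.
    for x, c in zip(x_stream, c_stream):
        yield x * c

A = [1.0, 2.0, 3.0, 4.0]
B = [10.0, 20.0, 30.0, 40.0]
C = [2.0, 2.0, 2.0, 2.0]

# Chaining: vmul starts consuming vadd's results element by element,
# before the add unit has processed the whole vector.
result = list(vmul(vadd(iter(A), iter(B)), iter(C)))
# result == [22.0, 44.0, 66.0, 88.0], i.e. (A + B) * C elementwise
```

Because `vmul` pulls elements from `vadd` one at a time, the second unit begins work before the first has finished the whole vector, which is the essence of chaining.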


SIMD

This scheme uses the ''SIMD'' (single instruction stream, multiple data stream) category from Flynn's taxonomy as a root class for ''processor array'' and ''associative memory'' subclasses. SIMD architectures[Michael Jurczyk and Thomas Schwederski, "SIMD-Processing: Concepts and Systems", pp. 649-679 in Parallel and Distributed Computing Handbook, A. Zomaya, ed., McGraw-Hill, 1996.] are characterized by a control unit that broadcasts a common instruction to all processing elements, which execute that instruction in lockstep on diverse operands from local data. Common features include the ability of individual processing elements to disable an instruction and the ability to propagate instruction results to immediate neighbors over an interconnection network.
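The lockstep broadcast and per-element disabling described above can be sketched in a few lines of Python. This is a hypothetical toy model (the `simd_step` helper is invented for illustration), not the programming interface of any real SIMD machine:

```python
# Toy model of SIMD lockstep execution with per-element "enable" masks.
# A control unit broadcasts one operation; each processing element (PE)
# applies it to its local operand only if its mask bit is set.

def simd_step(op, data, mask):
    # Broadcast op to all PEs in lockstep; disabled PEs keep their old value.
    return [op(x) if enabled else x for x, enabled in zip(data, mask)]

data = [1, 2, 3, 4]
mask = [True, False, True, True]     # PE 1 sits out this instruction
data = simd_step(lambda x: x * 10, data, mask)
# data is now [10, 2, 30, 40]
```

Masking of this kind is what lets a SIMD machine handle conditionals: both branches are broadcast in turn, with complementary masks selecting which elements participate.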


Processor array


Associative memory


Systolic array

''Systolic arrays'', proposed during the 1980s,Kung, H.T., "Why Systolic Arrays?", Computer, Vol. 15, No. 1, Jan. 1982, pp. 37-46. are multiprocessors in which data and partial results are rhythmically pumped from processor to processor through a regular, local interconnection network. Systolic architectures use a global clock and explicit timing delays to synchronize data flow from processor to processor. Each processor in a systolic system executes an invariant sequence of instructions before data and results are pulsed to neighboring processors.
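The rhythmic, clocked pumping of data from cell to cell can be illustrated with a toy one-dimensional systolic array that computes a FIR filter (convolution). This is a sketch under simplifying assumptions, with each loop iteration standing in for one global clock tick; the `systolic_fir` helper is hypothetical:

```python
# Toy 1-D systolic array computing a FIR filter (convolution).
# Each cell holds one fixed weight; on every global clock tick, input
# samples are pumped one cell along the array while each cell multiplies
# its weight by the value currently passing through it.

def systolic_fir(weights, samples):
    n = len(weights)
    cells = [0.0] * n        # the value currently held in each cell
    outputs = []
    for s in samples + [0.0] * (n - 1):   # trailing zeros flush the array
        cells = [s] + cells[:-1]          # tick: shift data through cells
        outputs.append(sum(w * x for w, x in zip(weights, cells)))
    return outputs

# Convolving samples [1, 1, 1] with weights [1, 2] yields [1, 3, 3, 2].
print(systolic_fir([1.0, 2.0], [1.0, 1.0, 1.0]))
```

Note that every cell repeats the same invariant work (multiply, accumulate, pass along) on every tick, matching the description above of an invariant instruction sequence synchronized by a global clock.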


MIMD architectures

Based on Flynn's multiple-instruction-multiple-data-streams terminology, this category spans a wide spectrum of architectures in which processors execute multiple instruction sequences on (potentially) dissimilar data streams without strict synchronization. Although both instruction and data streams can differ for each processor, they need not. Thus, MIMD architectures can run identical programs that are in various stages of execution at any given time, run a unique instruction and data stream on each processor, or execute some combination of these scenarios. This category is subdivided further primarily on the basis of memory organization.
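The defining property, independent instruction streams on independent data with no lockstep, can be sketched with ordinary threads. This is a minimal illustration, assuming a shared-memory MIMD style; the worker names are invented for the example:

```python
# Toy MIMD-style sketch: each worker runs its own instruction stream
# ("program") on its own data, with no lockstep synchronization between them.
import threading

results = {}

def sum_worker(data):        # one instruction stream
    results["sum"] = sum(data)

def max_worker(data):        # a different instruction stream
    results["max"] = max(data)

t1 = threading.Thread(target=sum_worker, args=([1, 2, 3],))
t2 = threading.Thread(target=max_worker, args=([7, 5, 9],))
t1.start(); t2.start()       # both streams proceed independently
t1.join(); t2.join()
# results == {"sum": 6, "max": 9}
```

Here the shared `results` dictionary stands in for shared memory; a distributed-memory MIMD system would instead exchange messages between workers that each hold private data.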


Distributed memory


Shared memory


MIMD-paradigm architectures

The MIMD-based paradigms category subsumes systems in which a specific programming or execution paradigm is at least as fundamental to the architectural design as structural considerations are. Thus, the design of ''dataflow architectures'' and ''reduction machines'' is as much the product of supporting their distinctive execution paradigm as it is a product of connecting processors and memories in MIMD fashion. The category's subdivisions are defined by these paradigms.
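For the dataflow case, the distinctive execution paradigm is the firing rule: an operation executes as soon as all of its input values are available, with no program counter ordering the instructions. A toy scheduler can make this concrete; the `run_dataflow` function and the graph encoding below are hypothetical, invented only to illustrate the rule:

```python
# Toy dataflow sketch: operations "fire" when all their input tokens
# exist, in whatever order data availability dictates.

def run_dataflow(graph, tokens):
    # graph: list of (function, input names, output name) nodes.
    fired = True
    while fired:
        fired = False
        for fn, inputs, output in graph:
            if output not in tokens and all(i in tokens for i in inputs):
                tokens[output] = fn(*(tokens[i] for i in inputs))
                fired = True
    return tokens

# (a + b) * (a - b), expressed as a graph rather than a sequence:
graph = [
    (lambda x, y: x + y, ["a", "b"], "s"),
    (lambda x, y: x - y, ["a", "b"], "d"),
    (lambda x, y: x * y, ["s", "d"], "p"),
]
tokens = run_dataflow(graph, {"a": 5, "b": 3})
# tokens["p"] == 16
```

The add and subtract nodes are independent, so a dataflow machine could execute them simultaneously; the multiply fires only once both of its operand tokens have arrived.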


MIMD/SIMD hybrid


Dataflow machine


Reduction machine


Wavefront array


References

* C. Xavier and S. S. Iyengar, Introduction to Parallel Algorithms