Superscalar Architecture

A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit.

In Flynn's taxonomy, a single-core superscalar processor is classified as an SISD processor (single instruction stream, single data stream), though a single-core superscalar processor that supports short vector operations could be classified as SIMD (single instruction stream, multiple data streams). A multi-core superscalar processor is classified as an MIMD processor (multiple instruction streams, multiple data streams).

While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases.

The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU):
* Instructions are issued from a sequential instruction stream
* The CPU dynamically checks for data dependencies between instructions at run time (versus software checking at compile time)
* The CPU can execute multiple instructions per clock cycle
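To make the throughput comparison concrete, the following is a minimal, idealized sketch (not modelled on any real CPU) that counts only the cycles needed to issue a run of fully independent instructions at different issue widths; the function name and the numbers are invented for the example, and dependencies, latencies and pipeline effects are ignored.

```python
import math

def issue_cycles(num_instructions: int, issue_width: int) -> int:
    """Cycles an idealized machine needs to issue `num_instructions`
    mutually independent instructions, `issue_width` per clock."""
    return math.ceil(num_instructions / issue_width)

print(issue_cycles(8, 1))  # scalar: 8 cycles
print(issue_cycles(8, 2))  # 2-wide superscalar: 4 cycles
print(issue_cycles(8, 4))  # 4-wide superscalar: 2 cycles
```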


History

Seymour Cray's CDC 6600 from 1964 is often mentioned as the first superscalar design. The 1967 IBM System/360 Model 91 was another superscalar mainframe. The Motorola MC88100 (1988), the Intel i960CA (1989) and the AMD 29000-series 29050 (1990) microprocessors were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be used to include multiple execution units (this was why RISC designs were faster than CISC designs through the 1980s and into the 1990s).

Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar. The P5 Pentium was the first superscalar x86 processor; the Nx586, P6 Pentium Pro and AMD K5 were among the first designs which decode x86 instructions asynchronously into dynamic microcode-like ''micro-op'' sequences prior to actual execution on a superscalar microarchitecture. This opened the way for dynamic scheduling of buffered ''partial'' instructions and enabled more parallelism to be extracted compared to the more rigid methods used in the simpler P5 Pentium; it also simplified speculative execution and allowed higher clock frequencies compared to designs such as the advanced Cyrix 6x86.


Scalar to superscalar

The simplest processors are scalar processors. Each instruction executed by a scalar processor typically manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates simultaneously on many data items. An analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two: each instruction processes one data item, but there are multiple execution units within each CPU, so multiple instructions can be processing separate data items concurrently.

Superscalar CPU design emphasizes improving the instruction dispatcher's accuracy and allowing it to keep the multiple execution units in use at all times. This has become increasingly important as the number of units has increased. While early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design.

A superscalar processor usually sustains an execution rate in excess of one instruction per machine cycle. But merely processing multiple instructions concurrently does not make an architecture superscalar, since pipelined, multiprocessor or multi-core architectures also achieve that, but with different methods. In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.
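The dispatcher's task of matching instructions to free execution units can be sketched with a toy model. The following is a hypothetical, greatly simplified in-order dual-issue dispatcher, not any particular CPU's logic; the instruction names, unit mix and issue width are made-up parameters, and data dependencies are ignored here (they are the subject of the Limitations section).

```python
from collections import namedtuple

# Each toy instruction just names the kind of execution unit it needs.
Instr = namedtuple("Instr", "name unit")

def dispatch(program, units=("ALU", "ALU", "FPU"), width=2):
    """Issue instructions in program order, up to `width` per cycle,
    each to a free execution unit of the matching kind."""
    cycle, i = 0, 0
    while i < len(program):
        free = list(units)   # every unit is free at the start of a cycle
        issued = 0
        # stop at the first instruction that cannot be issued this cycle
        while (i < len(program) and issued < width
               and program[i].unit in free):
            free.remove(program[i].unit)          # claim the unit
            print(f"cycle {cycle}: {program[i].name} -> {program[i].unit}")
            i += 1
            issued += 1
        cycle += 1
    return cycle

prog = [Instr("add", "ALU"), Instr("fmul", "FPU"),
        Instr("sub", "ALU"), Instr("and", "ALU")]
print(dispatch(prog), "cycles")   # add+fmul issue together, then sub+and
```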


Limitations

Available performance improvement from superscalar techniques is limited by three key areas:
* The degree of intrinsic parallelism in the instruction stream (instructions requiring the same computational resources from the CPU)
* The complexity and time cost of dependency-checking logic and register renaming circuitry
* Branch instruction processing

Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions are not dependent on each other and can be executed simultaneously. In other cases they are inter-dependent: one instruction impacts either resources or results of the other. The instructions a = b + c; d = e + f can be run in parallel because none of the results depend on other calculations. However, the instructions a = b + c; b = e + f might not be runnable in parallel, depending on the order in which the instructions complete while they move through the units.

Although the instruction stream may contain no inter-instruction dependencies, a superscalar CPU must nonetheless check for that possibility, since there is no assurance otherwise and failure to detect a dependency would produce incorrect results. No matter how advanced the semiconductor process or how fast the switching speed, this places a practical limit on how many instructions can be simultaneously dispatched. While process advances will allow ever greater numbers of execution units (e.g. ALUs), the burden of checking instruction dependencies grows rapidly, as does the complexity of register renaming circuitry to mitigate some dependencies. Collectively, the power consumption, complexity and gate delay costs limit the achievable superscalar speedup.

However, even given infinitely fast dependency-checking logic on an otherwise conventional superscalar CPU, if the instruction stream itself has many dependencies, this would also limit the possible speedup. Thus the degree of intrinsic parallelism in the code stream forms a second limitation.
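As a rough illustration of the run-time check described above, the following sketch classifies the hazards between two adjacent instructions and reproduces the a = b + c; d = e + f versus a = b + c; b = e + f cases from the text. It is a deliberately simplified model rather than any real issue logic, and the class and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Instr:
    dest: str     # register written by the instruction
    srcs: tuple   # registers read by the instruction

def can_dual_issue(first: Instr, second: Instr) -> bool:
    """True if the two instructions have no data hazard between them
    and could (structural resources permitting) issue in the same cycle."""
    raw = first.dest in second.srcs    # read-after-write (true dependency)
    war = second.dest in first.srcs    # write-after-read (anti-dependency)
    waw = second.dest == first.dest    # write-after-write (output dependency)
    return not (raw or war or waw)

# a = b + c ; d = e + f  -> independent, may issue together
print(can_dual_issue(Instr("a", ("b", "c")), Instr("d", ("e", "f"))))  # True
# a = b + c ; b = e + f  -> write-after-read on b
print(can_dual_issue(Instr("a", ("b", "c")), Instr("b", ("e", "f"))))  # False
```

Register renaming, mentioned above, removes the write-after-read and write-after-write cases, but not the true read-after-write dependency, which is why the intrinsic parallelism of the code remains a hard limit.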


Alternatives

Collectively, these limits drive investigation into alternative architectural changes such as very long instruction word (VLIW), explicitly parallel instruction computing (EPIC), simultaneous multithreading (SMT), and multi-core computing.

With VLIW, the burdensome task of dependency checking by hardware logic at run time is removed and delegated to the compiler. Explicitly parallel instruction computing (EPIC) is like VLIW with extra cache prefetching instructions. Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar processors. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.

Superscalar processors differ from multi-core processors in that the several execution units are not entire processors. A single processor is composed of finer-grained execution units such as the ALU, integer multiplier, integer shifter, FPU, etc. There may be multiple versions of each execution unit to enable execution of many instructions in parallel. This differs from a multi-core processor that concurrently processes instructions from ''multiple'' threads, one thread per processing unit (called a "core"). It also differs from a pipelined processor, where the multiple instructions can concurrently be in various stages of execution, assembly-line fashion.

The various alternative techniques are not mutually exclusive; they can be (and frequently are) combined in a single processor. Thus a multicore CPU is possible where each core is an independent processor containing multiple parallel pipelines, each pipeline being superscalar. Some processors also include vector capability.
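To illustrate the division of labour that VLIW proposes, here is a hedged, greatly simplified sketch of a compile-time pass that greedily packs independent operations into fixed-width bundles, so the hardware needs no run-time dependency check. The bundling rule, bundle width and three-operation example are invented for illustration; real VLIW compilers perform far more sophisticated scheduling.

```python
def bundle(ops, width=3):
    """Greedy in-order bundling of (dest, srcs) operations: an operation
    joins the current bundle only if it neither reads nor writes a
    register already written in the bundle, nor writes one already read."""
    bundles, current = [], []
    written, read = set(), set()
    for dest, srcs in ops:
        conflict = (dest in written or dest in read
                    or any(s in written for s in srcs))
        if conflict or len(current) == width:
            bundles.append(current)                  # close the bundle
            current, written, read = [], set(), set()
        current.append((dest, srcs))
        written.add(dest)
        read.update(srcs)
    if current:
        bundles.append(current)
    return bundles

# a = b + c ; d = e + f ; g = a + d  -> the third op depends on the first
# two, so it must start a new bundle
print(bundle([("a", ("b", "c")), ("d", ("e", "f")), ("g", ("a", "d"))]))
```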


See also

* Eager execution
* Hyper-threading
* Simultaneous multithreading
* Out-of-order execution
* Shelving buffer
* Speculative execution
* Software lockout, a multiprocessor issue similar to logic dependencies on superscalars
* Super-threading


References

* Mike Johnson, ''Superscalar Microprocessor Design'', Prentice-Hall, 1991
* Sorin Cotofana, Stamatis Vassiliadis, "On the Design Complexity of the Issue Logic of Superscalar Machines", EUROMICRO 1998: 10277-10284
* Steven McGeady, "The i960CA SuperScalar Implementation of the 80960 Architecture", IEEE 1990, pp. 232–240
* Steven McGeady, et al., "Performance Enhancements in the Superscalar i960MM Embedded Microprocessor", ''ACM Proceedings of the 1991 Conference on Computer Architecture (Compcon)'', 1991, pp. 4–7


External links


* Eager Execution / Dual Path / Multiple Path, by Mark Smotherman