Bit-level parallelism is a form of
parallel computing
based on increasing
processor word size. Increasing the word size reduces the number of instructions the processor must execute in order to perform an operation on variables whose sizes are greater than the length of the word. (For example, consider a case where an
8-bit
processor must add two
16-bit
integers. The processor must first add the 8 lower-order bits from each integer, then add the 8 higher-order bits along with the carry from the lower-order addition, requiring two instructions to complete a single operation. A 16-bit processor would be able to complete the operation with a single instruction.)
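The two-instruction sequence in the example above can be sketched in C. This is only an illustration of the idea: a real 8-bit CPU would hold the operand halves in registers and use an add-with-carry instruction, whereas here the carry is propagated explicitly in ordinary arithmetic.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: add two 16-bit integers in 8-bit pieces,
 * propagating the carry by hand, the way an 8-bit ALU needs two
 * add instructions (ADD, then ADD-with-carry). */
static uint16_t add16_via_8bit_steps(uint16_t a, uint16_t b)
{
    uint8_t a_lo = (uint8_t)(a & 0xFFu), a_hi = (uint8_t)(a >> 8);
    uint8_t b_lo = (uint8_t)(b & 0xFFu), b_hi = (uint8_t)(b >> 8);

    uint16_t lo_sum = (uint16_t)(a_lo + b_lo);          /* first add             */
    uint8_t  carry  = (uint8_t)(lo_sum >> 8);           /* carry out of low byte  */
    uint8_t  hi_sum = (uint8_t)(a_hi + b_hi + carry);   /* second add, with carry */

    return (uint16_t)(((uint16_t)hi_sum << 8) | (uint8_t)lo_sum);
}

int main(void)
{
    uint16_t a = 0x12FF, b = 0x0001;
    /* A 16-bit (or wider) processor performs the same addition in one step. */
    printf("two-step result: 0x%04X, one-step result: 0x%04X\n",
           (unsigned)add16_via_8bit_steps(a, b),
           (unsigned)(uint16_t)(a + b));
    return 0;
}
```

Both calls print 0x1300, showing that the two 8-bit additions with an explicit carry reproduce the result a wider processor obtains in a single operation.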
Originally, all electronic computers were serial (single-bit) computers. The first electronic computer that was not a
serial computer—the first bit-parallel computer—was the 16-bit
Whirlwind from 1951.
From the advent of
very-large-scale integration
(VLSI) computer chip fabrication technology in the 1970s until about 1986, advances in computer architecture were driven by increases in bit-level parallelism,
[David E. Culler, Jaswinder Pal Singh, Anoop Gupta. Parallel Computer Architecture: A Hardware/Software Approach. Morgan Kaufmann Publishers, 1999, p. 15] as
4-bit microprocessors were replaced by
8-bit, then
16-bit, then
32-bit
microprocessors. This trend generally came to an end with the introduction of 32-bit processors, which were the standard in general-purpose computing for two decades. 64-bit architectures reached the mainstream with the Nintendo 64 (1996), named for its 64-bit processor, but remained uncommon until the advent of the x86-64 architecture around 2003, and, for mobile devices, the ARMv8-A instruction set in 2014.
On 32-bit processors, external
data bus width continues to increase. For example,
DDR1 SDRAM
transfers 128 bits per clock cycle.
DDR2 SDRAM
transfers a minimum of 256 bits per burst.
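As a rough arithmetic check, both figures follow from the bus width and the transfer scheme. The sketch below assumes the usual 64-bit module data bus (an assumption, not stated above): DDR1 moves data on both clock edges, and DDR2 uses a minimum burst length of four transfers.

```c
#include <stdio.h>

int main(void)
{
    /* Assumption: a 64-bit module data bus, as on typical DDR DIMMs. */
    const int bus_width_bits = 64;

    /* DDR1 transfers data on both clock edges: 2 transfers per clock cycle. */
    int ddr1_bits_per_clock = 2 * bus_width_bits;   /* = 128 bits */

    /* DDR2 has a minimum burst length of 4 transfers. */
    int ddr2_bits_per_burst = 4 * bus_width_bits;   /* = 256 bits */

    printf("DDR1 SDRAM: %d bits per clock cycle\n", ddr1_bits_per_clock);
    printf("DDR2 SDRAM: %d bits per burst (minimum)\n", ddr2_bits_per_burst);
    return 0;
}
```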
See also
* Single instruction, multiple data (SIMD)
* SIMD within a register (SWAR)