Latency Oriented Processor Architecture

Latency oriented processor architecture is the microarchitecture of a microprocessor designed to serve a serial computing thread with a low latency. This is typical of most central processing units (CPUs) developed since the 1970s. These architectures, in general, aim to execute as many instructions as possible belonging to a single serial thread in a given window of time; however, the time to execute a single instruction completely, from the fetch to the retire stage, may vary from a few cycles to a few hundred cycles in some cases. Latency oriented processor architectures are the opposite of throughput-oriented processors, which concern themselves more with the total throughput of the system than with the service latencies of the individual threads they work on.


Flynn's taxonomy

Typically, latency oriented processor architectures execute a single task operating on a single data stream, and so they are SISD under Flynn's taxonomy. Latency oriented processor architectures might also include SIMD instruction set extensions such as Intel MMX and SSE; even though these extensions operate on large data sets, their primary goal is to reduce the overall latency of a single thread.
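As a rough illustration (not tied to any particular compiler or CPU, and with array contents chosen arbitrarily), the C fragment below uses the SSE intrinsics to add four single-precision floats with one vector instruction issued by a single thread.

    #include <xmmintrin.h>   /* SSE intrinsics */
    #include <stdio.h>

    int main(void)
    {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float c[4];

        __m128 va = _mm_loadu_ps(a);      /* load four floats into one 128-bit register */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);   /* one instruction adds all four lanes */
        _mm_storeu_ps(c, vc);

        printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);
        return 0;
    }

The single thread still issues one instruction stream; the extension simply lets each of those instructions do more work, shortening the thread's total running time.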


Implementation techniques

There are many architectural techniques employed to reduce the overall latency for a single computing task. These typically involve adding extra hardware to the pipeline to serve instructions as soon as they are fetched from memory or the instruction cache. A notable characteristic of these architectures is that a significant area of the chip is used up in parts other than the execution units themselves. This is because the intent is to bring down the time required to complete a 'typical' task in a computing environment. A typical computing task is a serial set of instructions with a high dependency on results produced by earlier instructions of the same task. Hence, the microprocessor ends up spending much of its effort on work other than the calculations required by the individual instructions themselves. If the hazards encountered during computation are not resolved quickly, latency for the thread increases. Hazards stall execution of subsequent instructions and, depending upon the pipeline implementation, may either halt progress completely until the dependency is resolved or lead to an avalanche of further hazards in later instructions, exacerbating the thread's execution time even more. The design space of micro-architectural techniques is very large. Below are some of the most commonly employed techniques to reduce the overall latency for a thread.
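As a sketch of why dependencies matter (function names and the array size are illustrative only), compare the two C loops below: in the first, every iteration needs the result of the previous one, so the hardware cannot overlap them; in the second, the operations are independent and can proceed in parallel.

    #define N 1024

    /* Serial dependency chain: each addition must wait for the previous result. */
    float reduce_serial(const float *x)
    {
        float sum = 0.0f;
        for (int i = 0; i < N; i++)
            sum += x[i];              /* 'sum' is read and written every iteration */
        return sum;
    }

    /* Independent operations: no iteration depends on another's result. */
    void scale_independent(const float *x, float *y)
    {
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i];
    }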


Instruction set architecture (ISA)

Most architectures today use shorter and simpler instructions, as in the load/store architecture, which helps in optimizing the instruction pipeline for faster execution. Instructions are usually all of the same size, which also helps in optimizing the instruction fetch logic. Such an ISA is called a RISC architecture.
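As an informal sketch, a memory-to-memory update written in C is broken by a load/store (RISC-style) compiler into separate load, compute and store instructions of uniform size; the assembly in the comment is a generic RISC-flavoured illustration, not output from any particular toolchain.

    /* C source */
    void increment(int *p, int x)
    {
        *p = *p + x;
    }

    /* Generic RISC-like sequence (illustrative only):
     *   lw   r3, 0(r1)     ; load *p into register r3
     *   add  r3, r3, r2    ; r3 = r3 + x
     *   sw   r3, 0(r1)     ; store the result back to *p
     * Each instruction does one simple job and has a fixed encoding size,
     * which keeps the fetch and decode logic fast.
     */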


Instruction pipelining

Pipelining overlaps execution of multiple instructions from the same executing thread in order to increase clock frequency or to increase the number of instructions that complete per unit time, thereby reducing the overall execution time for a thread. Instead of waiting for a single instruction to complete all its execution stages, multiple instructions are processed simultaneously, at their respective stages inside the pipeline.
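A minimal back-of-the-envelope model (assuming an idealised pipeline with no stalls, and arbitrary instruction counts) shows the effect: with S stages and N instructions, an unpipelined machine needs about N*S cycles, while a pipelined one needs roughly S + (N - 1) cycles.

    #include <stdio.h>

    int main(void)
    {
        long stages = 5;      /* e.g. fetch, decode, execute, memory, write-back */
        long insns  = 1000;   /* instructions in the thread (arbitrary)          */

        long unpipelined = insns * stages;          /* one instruction at a time */
        long pipelined   = stages + (insns - 1);    /* ideal overlap, no stalls  */

        printf("unpipelined: %ld cycles\n", unpipelined);   /* 5000 */
        printf("pipelined:   %ld cycles\n", pipelined);     /* 1004 */
        return 0;
    }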


Register-renaming

This technique is used to effectively increase the register file size beyond that specified in the ISA to programmers, and to eliminate false dependencies. Suppose we have two consecutive instructions which reference the same register. The first reads the register while the second writes to it. To maintain correctness of the program, it is essential to make sure that the second instruction does not write to the register before the first can read its original value. This is an example of a Write-After-Read (WAR) dependency. To eliminate this dependency, the pipeline 'renames' the register used by the second instruction internally, assigning its result to a different physical register. The second instruction is therefore allowed to execute, and the result it produces is immediately available to all subsequent instructions, even though the actual destination register intended by the program will be written to later. Similarly, if both instructions simply mean to write to the same register, a Write-After-Write (WAW) dependency, the pipeline renames their destinations and ensures that their results are available to future instructions without the need to serialize their execution.
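The sketch below is a toy model, not any real processor's rename logic (the register counts and the lack of a proper free list are simplifications): architectural register names are remapped to fresh physical registers on every write, so a later write cannot disturb an earlier read of the same architectural register.

    #include <stdio.h>

    #define NUM_ARCH  4      /* architectural registers visible to the ISA  */
    #define NUM_PHYS  16     /* larger physical register file (toy numbers) */

    static int rename_table[NUM_ARCH];   /* arch reg -> current phys reg */
    static int phys[NUM_PHYS];           /* physical register values     */
    static int next_free = NUM_ARCH;     /* naive allocator: just count up */

    static int read_reg(int arch) { return phys[rename_table[arch]]; }

    /* A write allocates a new physical register, leaving the old value intact
     * for any earlier instruction that still needs it (removes WAR/WAW hazards). */
    static void write_reg(int arch, int value)
    {
        int p = next_free++;
        phys[p] = value;
        rename_table[arch] = p;
    }

    int main(void)
    {
        for (int i = 0; i < NUM_ARCH; i++) { rename_table[i] = i; phys[i] = 0; }

        write_reg(1, 42);                 /* producer instruction: r1 = 42          */
        int a_src = rename_table[1];      /* instruction A records which physical   */
                                          /* register currently holds r1            */
        write_reg(1, 99);                 /* instruction B writes r1, but to a NEW  */
                                          /* physical register (no WAR hazard)      */

        /* Even though B has already written, A still reads r1's old value: */
        printf("A reads r1 = %d\n", phys[a_src]);      /* 42 */
        printf("current r1 = %d\n", read_reg(1));      /* 99 */
        return 0;
    }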


Memory organization

The different levels of memory, which include caches, main memory and non-volatile storage like hard disks (where the program instructions and data reside), are designed to exploit spatial locality and temporal locality to reduce the total memory access time. The less time the processor spends waiting for data to be fetched from memory, the fewer instructions consume pipeline resources while just sitting idle and doing no useful work. The instruction pipeline will stall completely if all its internal buffers (for example, reservation stations) are filled to their respective capacities. Hence, if instructions consume fewer idle cycles while inside the pipeline, there is a greater chance of exploiting instruction level parallelism (ILP), as the fetch logic can pull in a greater number of instructions from the cache/memory per unit time.
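As a simple sketch of spatial locality (array size and element type chosen arbitrarily), the first loop below walks a row-major C array in the order it is stored, so consecutive accesses fall in the same cache lines; the second strides across columns and touches a different line on almost every access, so it typically sees many more cache misses.

    #define ROWS 1024
    #define COLS 1024

    static double grid[ROWS][COLS];   /* stored row by row (row-major) in C */

    /* Good spatial locality: consecutive accesses hit the same cache lines. */
    double sum_row_major(void)
    {
        double s = 0.0;
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                s += grid[r][c];
        return s;
    }

    /* Poor spatial locality: each access jumps a whole row (COLS * 8 bytes) ahead. */
    double sum_column_major(void)
    {
        double s = 0.0;
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                s += grid[r][c];
        return s;
    }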


Speculative execution

A major cause of pipeline stalls is control flow dependencies, i.e. when the outcome of a branch instruction is not known in advance (which is usually the case). Many architectures today use branch predictor components to guess the outcome of a branch. Execution continues along the predicted path of the program, but the instructions are tagged as speculative. If the guess turns out to be correct, the instructions are allowed to complete successfully and to update their results back to the register file/memory. If the guess was incorrect, all speculative instructions are flushed from the pipeline and execution (re)starts along the actual correct path of the program. By maintaining a high prediction accuracy, the pipeline significantly increases throughput for the executing thread.
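One common building block of such predictors is the two-bit saturating counter. The toy C model below (a single counter, with no branch history or address indexing, and a made-up outcome sequence) predicts 'taken' in its upper two states and nudges the counter after each actual outcome.

    #include <stdio.h>
    #include <stdbool.h>

    /* Counter states 0..3: 0-1 predict not-taken, 2-3 predict taken. */
    static int counter = 2;   /* start "weakly taken" */

    static bool predict(void) { return counter >= 2; }

    static void update(bool taken)
    {
        if (taken  && counter < 3) counter++;
        if (!taken && counter > 0) counter--;
    }

    int main(void)
    {
        /* A loop branch that is taken 9 times, then falls through once. */
        bool outcomes[10] = {1,1,1,1,1,1,1,1,1,0};
        int correct = 0;

        for (int i = 0; i < 10; i++) {
            bool guess = predict();
            if (guess == outcomes[i]) correct++;
            update(outcomes[i]);
        }
        printf("correct predictions: %d / 10\n", correct);   /* 9 / 10 */
        return 0;
    }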


Out-of-order execution

Not all instructions in a thread take the same amount of time to execute. Superscalar pipelines usually have multiple possible paths for instructions, depending upon the current state and the instruction type itself. Hence, to increase instructions per cycle (IPC), the pipeline allows instructions to execute out of order so that instructions later in the program are not stalled behind an instruction which will take longer to complete. All instructions are registered in a re-order buffer when they are fetched by the pipeline and are allowed to retire (i.e. write back their results) in the order of the original program, so as to maintain correctness.
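The minimal sketch below is a toy model (a tiny buffer with hand-fed completion events, not a real microarchitecture) of the retire rule: results may become ready in any order, but entries leave the buffer, and update architectural state, only from the head, in program order.

    #include <stdio.h>
    #include <stdbool.h>

    #define ROB_SIZE 4

    struct rob_entry {
        int  id;        /* program-order index of the instruction */
        bool done;      /* has its result been produced yet?      */
    };

    int main(void)
    {
        struct rob_entry rob[ROB_SIZE];
        for (int i = 0; i < ROB_SIZE; i++) { rob[i].id = i; rob[i].done = false; }

        /* Suppose results complete out of order: 2, 0, 3, 1. */
        int completion_order[ROB_SIZE] = {2, 0, 3, 1};
        int head = 0;   /* oldest un-retired instruction */

        for (int step = 0; step < ROB_SIZE; step++) {
            rob[completion_order[step]].done = true;
            printf("completed instruction %d\n", completion_order[step]);

            /* Retire from the head only, and only while the head is done. */
            while (head < ROB_SIZE && rob[head].done) {
                printf("  retired  instruction %d\n", rob[head].id);
                head++;
            }
        }
        return 0;
    }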


Superscalar execution

A superscalar instruction pipeline pulls in multiple instructions in every clock cycle, as opposed to a simple scalar pipeline. This multiplies the exploitable instruction level parallelism (ILP) by up to the number of instructions fetched in each cycle, except when the pipeline is stalled due to data or control flow dependencies. Even though the retire rate of superscalar pipelines is usually less than their fetch rate, the overall number of instructions completed per cycle is generally greater than one, and hence greater than that of a scalar pipeline.
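As a rough numeric sketch (all figures invented for illustration), a 4-wide front end does not deliver four times the scalar throughput once stalls are counted, but the achieved instructions per cycle still ends up well above one.

    #include <stdio.h>

    int main(void)
    {
        double insns        = 10000.0;  /* instructions in the thread (arbitrary)       */
        double fetch_width  = 4.0;      /* instructions fetched per cycle               */
        double stall_cycles = 1500.0;   /* cycles lost to data/control stalls (assumed) */

        double ideal_cycles  = insns / fetch_width;          /* 2500 cycles if never stalled */
        double actual_cycles = ideal_cycles + stall_cycles;  /* 4000 cycles                  */

        printf("scalar pipeline IPC (at best): %.2f\n", 1.0);
        printf("ideal 4-wide IPC             : %.2f\n", insns / ideal_cycles);   /* 4.00 */
        printf("achieved 4-wide IPC          : %.2f\n", insns / actual_cycles);  /* 2.50 */
        return 0;
    }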


Contrast with throughput oriented processor architectures

In contrast, a throughput oriented processor architecture is designed to maximize the amount of 'useful work' done in a significant window of time. Useful work refers to large calculations on a significant amount of data. They do this by parallelizing the workload so that many calculations can be performed simultaneously. The calculations may belong to a single task or to a limited number of separate tasks. The total time required to complete a single calculation is significantly longer than on a latency oriented processor architecture; however, the total time to complete a large set of calculations is significantly reduced. Latency is often sacrificed in order to achieve a higher throughput per cycle. As a result, a latency oriented processor may complete a single calculation significantly faster than a throughput-oriented processor; however, the throughput-oriented processor could be partway through hundreds of such computations by the time the latency oriented processor completes one calculation. Latency oriented processors expend a substantial chip area on sophisticated control structures like branch prediction, data forwarding, the re-order buffer, and large register files and caches in each processor. These structures help reduce operational latency and memory-access time per instruction, and make results available as soon as possible. Throughput oriented architectures, on the other hand, usually have a multitude of processors with much smaller caches and simpler control logic. This helps to efficiently utilize the memory bandwidth and increase the total number of execution units on the same chip area. GPUs are a typical example of throughput oriented processor architectures.
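A quick worked comparison (all numbers hypothetical) makes the trade-off concrete: a latency-oriented core that finishes one calculation in 1 microsecond beats a throughput-oriented device that needs 10 microseconds per calculation, but once many independent calculations are available, the wide device finishes the batch far sooner.

    #include <stdio.h>

    int main(void)
    {
        double cpu_latency_us = 1.0;      /* time per calculation, latency-oriented core  */
        double gpu_latency_us = 10.0;     /* time per calculation, throughput device      */
        double gpu_parallel   = 1000.0;   /* calculations the device runs at once (assumed) */
        double batch          = 100000.0; /* independent calculations in the workload     */

        printf("one calculation : CPU %.0f us, GPU %.0f us\n",
               cpu_latency_us, gpu_latency_us);
        printf("whole batch     : CPU %.0f us, GPU %.0f us\n",
               batch * cpu_latency_us,                       /* 100000 us */
               (batch / gpu_parallel) * gpu_latency_us);     /* 1000 us   */
        return 0;
    }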

