Processor Consistency is one of the consistency models used in the domain of concurrent computing (e.g. in distributed shared memory, distributed transactions, etc.).
A system exhibits Processor Consistency if the order in which other processors see the writes from any individual processor is the same as the order in which they were issued. Because of this, Processor Consistency is only applicable to systems with multiple processors. It is weaker than the Causal Consistency model, because it does not require writes from ''all'' processors to be seen in the same order, but stronger than the PRAM Consistency model, because it requires cache coherence. Another difference between Causal Consistency and Processor Consistency is that Processor Consistency removes the requirement that loads wait for stores to complete, and the requirement of write atomicity. Processor Consistency is also stronger than Cache Consistency, because it requires all writes by a processor to be seen in order, not just writes to the same memory location.
Examples of Processor Consistency
In Example 1, the simple system follows Processor Consistency, as all the writes by each processor are seen by the other processors in the order in which they occurred, and the transactions are coherent.
Example 2 is ''not'' Processor Consistent, as the writes by P1 and P3 are seen out of order by P2 and P4 respectively.
Example 3 is Processor Consistent but ''not'' Causally Consistent, because of the order in which P3 observes the writes: for Causal Consistency, P3 would have to see W(x)2 before W(y)3, since W(x)2 in P1 causally precedes W(y)3 in P2.
Example 4 is ''not'' Processor Consistent, because of the order in which P2 observes the writes: for Processor Consistency, P2 would have to see W(x)2 before W(y)3, because W(x)2 is the latest write to x preceding W(y)3 in P1. The example ''is'' Cache Consistent, however, because P2 sees writes to each individual memory location in the order in which they were issued by P1.
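The per-processor ordering condition used in these examples can be checked mechanically. The following is a minimal sketch (not part of the original examples; the trace representation and function name are illustrative) of a checker that verifies, for each observing processor, that each issuer's writes appear in issue order; the coherence requirement is omitted for brevity:

```cpp
#include <map>
#include <vector>
#include <iostream>

// A write is identified by the processor that issued it and its issue index.
struct Write { int issuer; int seq; };

// Hypothetical trace format: for each observing processor, the sequence
// of writes it saw, in observation order.
using Trace = std::vector<std::vector<Write>>;

// Core Processor Consistency condition: every observer must see each
// issuer's writes in the order they were issued.
bool writesSeenInIssueOrder(const Trace& trace) {
    for (const auto& observed : trace) {
        std::map<int, int> lastSeq;            // issuer -> last seq observed
        for (const Write& w : observed) {
            auto it = lastSeq.find(w.issuer);
            if (it != lastSeq.end() && w.seq < it->second)
                return false;                  // issue order violated
            lastSeq[w.issuer] = w.seq;
        }
    }
    return true;
}

int main() {
    Trace ok  = {{{1, 0}, {1, 1}}};  // P1's writes seen in order
    Trace bad = {{{1, 1}, {1, 0}}};  // P1's writes seen reversed
    std::cout << writesSeenInIssueOrder(ok)  << "\n";  // prints 1
    std::cout << writesSeenInIssueOrder(bad) << "\n";  // prints 0
}
```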
Processor Consistency vs. Sequential Consistency
Processor Consistency (PC) relaxes the ordering between older stores and younger loads that is enforced in Sequential Consistency (SC).
This allows loads to be issued to the cache and potentially complete before older stores, meaning that stores can be queued in a write buffer without the need for load speculation to be implemented (the loads can continue freely).
In this regard, PC performs better than SC because recovery techniques for failed speculations aren’t necessary, which means fewer pipeline flushes.
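The effect of relaxing the store-to-load ordering is captured by the classic store-buffering litmus test. The sketch below is illustrative, using C++ atomics with relaxed ordering as a stand-in for hardware that lets a younger load pass an older store; under SC the outcome r1 == 0 && r2 == 0 is impossible, but under PC (and in this relaxed C++ rendering) it is allowed:

```cpp
#include <atomic>
#include <thread>
#include <cstdio>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void p1() {
    x.store(1, std::memory_order_relaxed);   // older store
    r1 = y.load(std::memory_order_relaxed);  // younger load may pass it
}

void p2() {
    y.store(1, std::memory_order_relaxed);
    r2 = x.load(std::memory_order_relaxed);
}

int main() {
    std::thread t1(p1), t2(p2);
    t1.join(); t2.join();
    // With std::memory_order_seq_cst instead, r1=0 r2=0 would be forbidden.
    std::printf("r1=%d r2=%d\n", r1, r2);  // r1=0 r2=0 is a legal outcome here
}
```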
The prefetching optimization that SC systems employ is also applicable to PC systems.
''Prefetching'' is the act of fetching data in advance for upcoming loads and stores before it is actually needed, to cut down on load/store latency. Since PC reduces load latency by allowing loads to be re-ordered before corresponding stores, the need for prefetching is somewhat reduced, as the prefetched data will be used more for stores than for loads.
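As an illustration of software prefetching (an assumed example, not from the original text; __builtin_prefetch is a GCC/Clang builtin, and the prefetch distance of 8 is an arbitrary tuning choice):

```cpp
#include <cstddef>

// Sums an array while prefetching ahead to hide load latency.
// PREFETCH_DISTANCE is an illustrative tuning parameter.
long sum(const long* data, std::size_t n) {
    constexpr std::size_t PREFETCH_DISTANCE = 8;
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (i + PREFETCH_DISTANCE < n)
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE]);  // hint only
        total += data[i];
    }
    return total;
}
```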
Programmer’s Intuition
In terms of how well a PC system follows a programmer’s intuition, it turns out that in properly synchronized systems, the outcomes of PC and SC are the same.
Programmer’s intuition is essentially how the programmer expects the instructions to execute, usually in what is referred to as “program order.” Program order in a multiprocessor system is the execution of instructions resulting in the same outcome as a sequential execution. The fact that PC and SC both follow this expectation is a direct consequence of the fact that corresponding loads and stores in PC systems are still ordered with respect to each other.
For example, in lock synchronization, the only operation whose behavior is not fully defined by PC is the lock-acquire store, where subsequent loads are in the critical section and their order affects the outcome.
This operation, however, is usually implemented with a store conditional or atomic instruction, so that if the operation fails it will be repeated later and all the younger loads will also be repeated.
All loads occurring before this store are still ordered with respect to the loads occurring in the critical section, and as such all the older loads have to complete before loads in the critical section can run.
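A minimal sketch of such a lock-acquire, assuming a simple spinlock built on C++ atomics (illustrative, not a definitive implementation): the acquire is performed with an atomic read-modify-write instruction, so a failed attempt is simply retried together with the younger loads of the critical section:

```cpp
#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

// Lock-acquire implemented as an atomic instruction: if the
// test-and-set "fails" (the flag was already set), it is retried,
// and the loads of the critical section are replayed with it.
void lock()   { while (lock_flag.test_and_set(std::memory_order_acquire)) { } }
void unlock() { lock_flag.clear(std::memory_order_release); }
```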
Processor Consistency vs Other Relaxed Consistency Models
Processor consistency, while weaker than sequential consistency, is still in most cases a stronger consistency model than is needed. This is due to the number of synchronization points inherent to programs that run on multiprocessor systems.
This means that no data races can occur (a data race being multiple simultaneous accesses to memory location where at least one access is a write).
With this in mind, it is clear that a model could allow reordering of all memory operations, as long as none is reordered across a synchronization point; one model that does exactly this is called Weak Ordering. However, Weak Ordering does impose some of the same restrictions as Processor Consistency, namely that the system must remain coherent and thus all writes to the same memory location must be seen by all processors in the same order.
Similar to weak ordering, the release consistency model allows reordering of all memory operations, but it gets even more specific and breaks down synchronization operations to allow more relaxation of reorders.
Both of these models assume proper synchronization of code and in some cases hardware synchronization support, and so processor consistency is a safer model to adhere to if one is unsure about the reliability of the programs to be run using the model.
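The acquire/release split that Release Consistency introduces is mirrored by the acquire and release orderings of C++ atomics; the producer/consumer sketch below is illustrative (the names payload and ready are assumptions, not from the article):

```cpp
#include <atomic>
#include <thread>
#include <cassert>

int payload = 0;
std::atomic<bool> ready{false};

// Release: all memory operations before the release store must be
// visible by the time the flag is observed as set.
void producer() {
    payload = 42;                                  // ordinary write
    ready.store(true, std::memory_order_release);  // "release" sync op
}

// Acquire: memory operations after the acquire load may not be
// reordered before it.
void consumer() {
    while (!ready.load(std::memory_order_acquire)) { }  // "acquire" sync op
    assert(payload == 42);  // guaranteed by the release/acquire pairing
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
}
```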
Similarity to SPARC V8 TSO, IBM-370, and x86-TSO Memory Models
One of the main components of Processor Consistency is that a write followed by a read is allowed to execute out of program order. This essentially results in the hiding of write latency when loads are allowed to go ahead of stores. Since many applications function correctly with this structure, systems that implement this type of relaxed ordering typically appear sequentially consistent. Two other models that conform to this specification are the SPARC V8 TSO (Total Store Ordering) and the IBM-370 model.
The IBM-370 model follows the specification of allowing a write followed by a read to execute out of program order, with a few exceptions. The first is that if the operations are to the same location, they must be in program order. The second is that if either operation is part of a serialization instruction or there is a serialization instruction between the two operations, then the operations must execute in program order.
This model is perhaps the strictest of the three models being considered, as the TSO model removes one of the exceptions mentioned.
The SPARC V8 TSO model is very similar to the IBM-370 model, with the key difference that it allows operations to the same location to complete out of program order. With this, it is possible for a load to return the value of a store that is “out of date” in terms of program order.
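This same-location relaxation corresponds to store-to-load forwarding from the write buffer. In the illustrative sketch below (C++ atomics with relaxed ordering used as an approximation of TSO hardware), p1's load of x can be satisfied from its own buffered store before that store is visible to p2:

```cpp
#include <atomic>
#include <thread>
#include <cstdio>

std::atomic<int> x{0}, y{0};

void p1() {
    x.store(1, std::memory_order_relaxed);
    // This load may be satisfied from the local write buffer
    // (store-to-load forwarding) before the store to x becomes
    // globally visible to p2.
    int r1 = x.load(std::memory_order_relaxed);  // r1 == 1, forwarded
    int r2 = y.load(std::memory_order_relaxed);
    std::printf("p1: r1=%d r2=%d\n", r1, r2);
}

void p2() {
    y.store(1, std::memory_order_relaxed);
    int r3 = x.load(std::memory_order_relaxed);  // may still read 0
    std::printf("p2: r3=%d\n", r3);
}

int main() {
    std::thread t1(p1), t2(p2);
    t1.join(); t2.join();
}
```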
These models are similar to Processor Consistency, but whereas these models have only one copy of memory, Processor Consistency has no such restriction. This suggests a system in which each processor has its own memory, which places emphasis on the “coherence requirement” of Processor Consistency.
The x86-TSO model has a number of different definitions. The total store model, as the name suggests, is very similar to SPARC V8. The other definition is based on local write buffers. The differences between the x86 and SPARC TSO models lie in the omission of some instructions and the inclusion of others, but the models themselves are very similar.
The write-buffer definition uses various states and locks to determine whether a particular value can be read or written. In addition, this particular model for the x86 architecture is not plagued by the issues of previous (weaker-consistency) models, and provides a more intuitive base for programmers to build upon.
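As an informal model of the write-buffer view (an illustrative sketch, not the formal x86-TSO definition; it omits the states and locks mentioned above), each processor appends stores to a private FIFO buffer, loads consult that buffer first, and buffered stores drain to shared memory in order:

```cpp
#include <deque>
#include <map>
#include <string>
#include <iostream>

// Illustrative single-threaded model of one processor's FIFO store buffer.
struct Processor {
    std::deque<std::pair<std::string, int>> buffer;  // pending stores, FIFO
    std::map<std::string, int>& memory;              // shared memory

    explicit Processor(std::map<std::string, int>& mem) : memory(mem) {}

    void store(const std::string& addr, int value) {
        buffer.emplace_back(addr, value);  // buffered, not yet visible
    }

    int load(const std::string& addr) {
        // Forward from the newest buffered store to this address, if any.
        for (auto it = buffer.rbegin(); it != buffer.rend(); ++it)
            if (it->first == addr) return it->second;
        return memory[addr];               // otherwise read shared memory
    }

    void drainOne() {                      // stores become visible in order
        if (buffer.empty()) return;
        memory[buffer.front().first] = buffer.front().second;
        buffer.pop_front();
    }
};

int main() {
    std::map<std::string, int> mem;
    Processor p(mem);
    p.store("x", 1);
    int forwarded = p.load("x");
    std::cout << forwarded << " " << mem["x"] << "\n";  // 1 0: forwarded, not yet visible
    p.drainOne();
    std::cout << mem["x"] << "\n";                      // 1: now globally visible
}
```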
See also
* Serializability