Eventual Consistency
Eventual consistency is a consistency model used in distributed computing to achieve high availability. It informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. Eventual consistency, also called optimistic replication, is widely deployed in distributed systems and has origins in early mobile computing projects. A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence. Eventual consistency is a weak guarantee – most stronger models, like linearizability, are trivially eventually consistent. Eventually-consistent services are often classified as providing BASE semantics (basically-available, soft-state, eventual consistency), in contrast to traditional ACID (atomicity, consistency, isolation, durability). In chemistry, a base is the opposite of an acid, which helps in remembering the acronym. According to the same resource, these ...
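One common way to realize this guarantee is a last-writer-wins (LWW) register. The sketch below is illustrative rather than from the article – the names LWWReplica, write, and merge are hypothetical – but it shows the essential point: once updates stop and replicas exchange state, every replica returns the same last-written value, i.e. the replicas converge.

    class LWWReplica:
        # Hypothetical last-writer-wins replica: each write carries a
        # timestamp, and merging keeps the highest-timestamped value.
        def __init__(self):
            self.value = None
            self.timestamp = 0

        def write(self, value, timestamp):
            # A local write is accepted without coordinating with other
            # replicas; stale writes (older timestamps) are ignored.
            if timestamp > self.timestamp:
                self.value, self.timestamp = value, timestamp

        def merge(self, other):
            # Anti-entropy exchange: adopt the other replica's state if newer.
            self.write(other.value, other.timestamp)

    a, b = LWWReplica(), LWWReplica()
    a.write("x=1", timestamp=1)
    b.write("x=2", timestamp=2)        # a later, conflicting update elsewhere
    a.merge(b); b.merge(a)
    assert a.value == b.value == "x=2" # replica convergence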


Consistency Model
In computer science, a consistency model specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be consistent and the results of reading, writing, or updating memory will be predictable. Consistency models are used in distributed systems like distributed shared memory systems or distributed data stores (such as filesystems, databases, optimistic replication systems, or web caching). Consistency is different from coherence, which concerns the consistency of data with respect to all processors in systems with or without caches. Coherence deals with maintaining a global order in which writes to a single location or single variable are seen by all processors. Consistency deals with the ordering of operations to multiple locations with respect to all processors. High-level languages, such as C++ and Java, maintain the consistency contract by translating memory operat ...


Distributed Computing
A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. Distributed computing is a field of computer science that studies distributed systems. The components of a distributed system interact with one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications. A computer program that runs within a distributed system is called a distributed program, and ''distributed programming'' ...


High Availability
High availability (HA) is a characteristic of a system which aims to ensure an agreed level of operational performance, usually uptime, for a higher-than-normal period. Modernization has resulted in an increased reliance on these systems. For example, hospitals and data centers require high availability of their systems to perform routine daily activities. Availability refers to the ability of the user community to obtain a service or good and to access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is – from the user's point of view – ''unavailable''. Generally, the term ''downtime'' is used to refer to periods when a system is unavailable.

Principles

There are three principles of systems design in reliability engineering which can help achieve high availability:
# Elimination of single points of failure. This means adding or building redundancy into the system so that ...
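Since uptime guarantees are usually quoted as a percentage, a quick back-of-the-envelope calculation shows what each additional "nine" of availability means as a yearly downtime budget. This is plain arithmetic rather than anything from the excerpt above; the 365.25-day year is an assumption.

    # Downtime budget implied by an availability percentage over one year.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for label, availability in [("two nines", 0.99),
                                ("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label}: about {downtime:,.1f} minutes of downtime per year")

    # two nines:   about 5,259.6 minutes of downtime per year
    # five nines:  about 5.3 minutes of downtime per year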


Optimistic Replication
Optimistic replication, also known as lazy replication, is a strategy for replication in which replicas are allowed to diverge. Traditional pessimistic replication systems try to guarantee from the beginning that all of the replicas are identical to each other, as if there were only a single copy of the data all along. Optimistic replication does away with this in favor of eventual consistency, meaning that replicas are guaranteed to converge only when the system has been quiesced for a period of time. As a result, there is no longer a need to wait for all of the copies to be synchronized when updating data, which helps concurrency and parallelism. The trade-off is that different replicas may require explicit reconciliation later on, which might then prove difficult or even insoluble.

Algorithms

An optimistic replication algorithm consists of five elements (a sketch of the first of these follows the list):
# Operation submission: Users submit operations at independent sites.
# Propagation: Each site shares the operations it ...
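The following minimal sketch illustrates submission and propagation, with a deterministic replay standing in for the scheduling and conflict-resolution elements: sites apply operations locally without coordination, exchange their logs, and replay the merged log in an agreed (timestamp, site id) order so that all replicas converge. The Site class and its methods are hypothetical names chosen for illustration.

    class Site:
        # Hypothetical site in an optimistic replication system.
        def __init__(self, site_id):
            self.site_id = site_id
            self.clock = 0
            self.log = []                 # entries: (timestamp, site_id, op)

        def submit(self, op):
            # Operation submission: apply locally, without coordination.
            self.clock += 1
            self.log.append((self.clock, self.site_id, op))

        def propagate_to(self, other):
            # Propagation: share every operation this site knows about.
            merged = sorted(set(self.log) | set(other.log))
            other.log = merged
            other.clock = max(other.clock, merged[-1][0])

        def state(self):
            # Deterministic replay in (timestamp, site_id) order stands in
            # for the scheduling/conflict-resolution elements.
            value = {}
            for _, _, (key, val) in sorted(self.log):
                value[key] = val
            return value

    a, b = Site("A"), Site("B")
    a.submit(("color", "red"))
    b.submit(("color", "blue"))    # concurrent, conflicting update
    a.propagate_to(b); b.propagate_to(a)
    assert a.state() == b.state()  # replicas reconcile to the same state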


Linearizability
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events that may be extended by adding response events such that:
# The extended list can be re-expressed as a sequential history (is serializable).
# That sequential history is a subset of the original unextended list.
Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return. In a concurrent system, processes can access a shared object at the same time. Because multiple processes are accessing a single object, there may arise a situation in which, while one process is accessing the object, another process changes its contents. Making a system linearizable is one solution to this problem. In a linearizable system, although operations overlap on a shared object, each operation appears to take place i ...
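As an illustration, here is a small sketch of a linearizable counter in which a mutex gives every operation a single atomic point of effect (its linearization point) between invocation and response. The class name and structure are hypothetical, chosen only to show the idea.

    import threading

    class LinearizableCounter:
        def __init__(self):
            self._value = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:       # linearization point of the increment
                self._value += 1

        def read(self):
            with self._lock:       # linearization point of the read
                return self._value

    counter = LinearizableCounter()
    workers = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
               for _ in range(4)]
    for w in workers: w.start()
    for w in workers: w.join()
    assert counter.read() == 4000  # no lost updates despite overlapping calls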


ACID
In computer science, ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a ''transaction''. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. In 1983, Andreas Reuter and Theo Härder coined the acronym ''ACID'', building on earlier work by Jim Gray who named atomicity, consistency, and durability, but not isolation, when characterizing the transaction concept. These four properties are the major guarantees of the transaction paradigm, which has influenced many aspects of development in database systems. According to Gray and Reuter, the IBM Informa ...
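The funds-transfer example lends itself to a short sketch. Python's built-in sqlite3 module is used here purely for illustration (the excerpt names no particular database); its connection context manager makes the debit and credit a single transaction, so both commit together or both roll back.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 100), ("bob", 0)])
    conn.commit()

    # Atomicity: the debit and credit below commit together or not at all;
    # an exception inside the block rolls both back.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'bob'")

    print(dict(conn.execute("SELECT name, balance FROM accounts")))
    # prints {'alice': 60, 'bob': 40}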


Liveness
Properties of an execution of a computer program, particularly for concurrent and distributed systems, have long been formulated by giving ''safety properties'' ("bad things don't happen") and ''liveness properties'' ("good things do happen"). A simple example will illustrate safety and liveness. A program is totally correct with respect to a precondition P and postcondition Q if any execution started in a state satisfying P terminates in a state satisfying Q. Total correctness is a conjunction of a safety property and a liveness property:
* The safety property prohibits these "bad things": executions that start in a state satisfying P and terminate in a final state that does not satisfy Q. For a program C, this safety property is usually written using the Hoare triple {P} C {Q}.
* The liveness property, the "good thing", is that an execution that starts in a state satisfying P terminates.
Note that a ''bad thing'' is discrete, since it happens at a particular place during execution. ...
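To make the triple concrete, here is a toy illustration, not from the excerpt: for the hypothetical program countdown, the triple {n >= 0} countdown(n) {result == 0} combines a safety claim (if it returns, the result is 0) with a liveness claim (started with n >= 0, it does return, because n strictly decreases).

    def countdown(n: int) -> int:
        # Total correctness for the triple {n >= 0} countdown(n) {result == 0}:
        # safety   - if it returns, the result satisfies the postcondition;
        # liveness - started with n >= 0, it terminates (n strictly decreases).
        assert n >= 0              # precondition P
        while n > 0:
            n -= 1                 # loop variant: n decreases, so the loop ends
        assert n == 0              # postcondition Q
        return n

    print(countdown(5))            # 0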


Safety (Distributed Computing)
Properties of an execution of a computer program, particularly for concurrent and distributed systems, have long been formulated by giving ''safety properties'' ("bad things don't happen") and ''liveness properties'' ("good things do happen"). A simple example will illustrate safety and liveness. A program is totally correct with respect to a precondition P and postcondition Q if any execution started in a state satisfying P terminates in a state satisfying Q. Total correctness is a conjunction of a safety property and a liveness property:
* The safety property prohibits these "bad things": executions that start in a state satisfying P and terminate in a final state that does not satisfy Q. For a program C, this safety property is usually written using the Hoare triple {P} C {Q}.
* The liveness property, the "good thing", is that an execution that starts in a state satisfying P terminates.
Note that a ''bad thing'' is discrete, since it happens at a particular place during execution. ...


Lamport Timestamps
The Lamport timestamp algorithm is a simple logical clock algorithm used to determine the order of events in a distributed computer system. As different nodes or processes will typically not be perfectly synchronized, this algorithm is used to provide a partial ordering of events with minimal overhead, and conceptually provide a starting point for the more advanced vector clock method. The algorithm is named after its creator, Leslie Lamport. Distributed algorithms such as resource synchronization often depend on some method of ordering events to function. For example, consider a system with two processes and a disk. The processes send messages to each other, and also send messages to the disk requesting access. The disk grants access in the order the messages were ''received''. For example, process A sends a message to the disk requesting write access, and then sends a read instruction message to process B. Process B receives the message, and as a result sends its own read request m ...
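The update rules are short enough to state directly in code. The sketch below is a generic implementation of Lamport's rules, with hypothetical method names: increment the clock before each local event or send, and on receive take the maximum of the local clock and the message's timestamp, then increment.

    class LamportClock:
        def __init__(self):
            self.time = 0

        def local_event(self):
            # Rule 1: increment the clock before each local event.
            self.time += 1
            return self.time

        def send(self):
            # A send is an event too: tick, then stamp the outgoing message.
            self.time += 1
            return self.time

        def receive(self, message_time):
            # Rule 2: jump past both the local clock and the message's stamp.
            self.time = max(self.time, message_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t_send = a.send()               # A's clock becomes 1
    b.local_event()                 # B's clock becomes 1
    t_recv = b.receive(t_send)      # B's clock becomes max(1, 1) + 1 = 2
    assert t_send < t_recv          # the send is ordered before the receive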


Vector Clock
A vector clock is a data structure used for determining the partial ordering of events in a distributed system and detecting causality violations. Just as in Lamport timestamps, inter-process messages contain the state of the sending process's logical clock. A vector clock of a system of ''N'' processes is an array/vector of ''N'' logical clocks, one clock per process; a local "largest possible values" copy of the global clock-array is kept in each process. Denote VC_i as the vector clock maintained by process i; the clock updates proceed as follows (a sketch follows the list):
* Initially all clocks are zero.
* Each time a process experiences an internal event, it increments its own logical clock in the vector by one. For instance, upon an event at process i, it updates VC_i[i] \leftarrow VC_i[i] + 1.
* Each time a process sends a message, it increments its own logical clock in the vector by one (as in the bullet above, but not twice for the same event) and then the message piggybacks a copy of its own vector.
* E ...
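Those rules translate directly into a short sketch (the VectorClock class is hypothetical, and the number of processes is assumed fixed and known up front); the receive rule follows the standard formulation in which the receiver takes the element-wise maximum and then ticks its own entry. The happened_before check at the end shows how element-wise comparison detects causal ordering.

    class VectorClock:
        # Vector clock for a fixed set of n processes; i is this process's index.
        def __init__(self, n, i):
            self.vc = [0] * n
            self.i = i

        def event(self):
            # Internal event: tick only our own entry.
            self.vc[self.i] += 1

        def send(self):
            # Tick our entry, then piggyback a copy of the vector on the message.
            self.vc[self.i] += 1
            return list(self.vc)

        def receive(self, msg_vc):
            # Element-wise maximum with the message's vector, plus our own tick.
            self.vc = [max(x, y) for x, y in zip(self.vc, msg_vc)]
            self.vc[self.i] += 1

    def happened_before(u, v):
        # u happened-before v iff u <= v element-wise and u != v.
        return all(x <= y for x, y in zip(u, v)) and u != v

    p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
    stamped = p0.send()                 # p0's vector: [1, 0]
    p1.receive(stamped)                 # p1's vector: [1, 1]
    assert happened_before([1, 0], p1.vc)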