Eventual consistency is a consistency model used in distributed computing to achieve high availability. It informally guarantees that, if no new updates are made to a given data item, all accesses to that item will eventually return the last updated value.[1] Eventual consistency, also called optimistic replication,[2] is widely deployed in distributed systems and has origins in early mobile computing projects.[3] A system that has achieved eventual consistency is often said to have converged, or achieved replica convergence.[4] Eventual consistency is a weak guarantee: most stronger models, such as linearizability, are trivially eventually consistent.

Eventually-consistent services are often classified as providing BASE semantics (basically available, soft state, eventual consistency), in contrast to traditional ACID (atomicity, consistency, isolation, durability).[5][6] In chemistry, a base is the opposite of an acid, which helps in remembering the acronym.[7] According to the same source, the terms in BASE are roughly defined as follows:

  • Basically available: read and write operations are available as much as possible (using all nodes of a database cluster), but might not be consistent (a write might not persist after conflicts are reconciled, and a read might not return the latest write)
  • Soft state: without consistency guarantees, after some amount of time we have only some probability of knowing the state, since it might not yet have converged
  • Eventually consistent: if we execute some writes and the system then functions long enough, we can know the state of the data; any further reads of that data item will return the same value

Eventual consistency is sometimes criticized[8] as increasing the complexity of distributed software applications. This is partly because eventual consistency is purely a liveness guarantee (reads eventually return the same value) and does not guarantee safety: an eventually consistent system can return any value before it converges.

Conflict resolution


To ensure replica convergence, a system must reconcile differences between multiple copies of distributed data. This consists of two parts (a minimal sketch of both follows the list):

  • exchanging versions or updates of data between servers (often known as anti-entropy);[9] and
  • choosing an appropriate final state when concurrent updates have occurred, called reconciliation.
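
As an illustration, here is a minimal sketch of both steps, assuming each replica stores a per-key (timestamp, value) pair and reconciles with "last writer wins"; the function name and data layout are illustrative only, not those of any particular system:

```python
# Anti-entropy plus reconciliation over two replicas, each a dict mapping
# key -> (timestamp, value). Sketch only; real systems gossip incrementally.

def anti_entropy(replica_a: dict, replica_b: dict) -> None:
    """Exchange versions between two replicas and reconcile differences."""
    for key in set(replica_a) | set(replica_b):    # exchange: visit every key either side knows
        a, b = replica_a.get(key), replica_b.get(key)
        winner = max(filter(None, (a, b)))         # reconcile: highest (timestamp, value) wins
        replica_a[key] = replica_b[key] = winner   # both replicas now agree on this key

r1 = {"x": (1, "old"), "y": (5, "only-on-r1")}
r2 = {"x": (2, "new")}
anti_entropy(r1, r2)
assert r1 == r2 == {"x": (2, "new"), "y": (5, "only-on-r1")}
```

In practice, anti-entropy is usually run incrementally and pairwise in the background (for example, by gossip), rather than comparing full key sets on every exchange.[9]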

The most appropriate approach to reconciliation depends on the application. A widespread approach is "last writer wins".[1] Another is to invoke a user-specified conflict handler.[4] Timestamps and vector clocks are often used to detect concurrency between updates. Some applications use "first writer wins" in situations where "last writer wins" is unacceptable.[10]
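
The following hedged sketch shows one way to detect concurrency with vector clocks and fall back to "last writer wins" when neither update causally dominates the other; the dict-of-counters representation and all names here are assumptions of this example:

```python
# Each update carries (vector_clock, wall_clock_ts, value), where the vector
# clock is a dict mapping node id -> per-node event counter.

def dominates(vc_a: dict, vc_b: dict) -> bool:
    """True if vc_a is at least vc_b in every component (vc_a causally follows vc_b)."""
    return all(vc_a.get(node, 0) >= count for node, count in vc_b.items())

def reconcile(update_a: tuple, update_b: tuple) -> tuple:
    vc_a, ts_a, _ = update_a
    vc_b, ts_b, _ = update_b
    if dominates(vc_a, vc_b):
        return update_a                  # a has already seen b's update: keep a
    if dominates(vc_b, vc_a):
        return update_b                  # b has already seen a's update: keep b
    return update_a if ts_a >= ts_b else update_b   # concurrent: last writer wins

# Two concurrent writes from nodes "n1" and "n2", branching from the same version:
a = ({"n1": 2, "n2": 1}, 100, "A")
b = ({"n1": 1, "n2": 2}, 105, "B")
assert reconcile(a, b)[2] == "B"         # neither dominates, so the later timestamp wins
```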

Reconciliation of concurrent writes must occur sometime before the next read, and can be scheduled at different instants:[3][11]

  • Read repair: The correction is done when a read finds an inconsistency, which slows down the read operation (see the sketch after this list).
  • Write repair: The correction takes place during a write operation, slowing down the write operation.
  • Asynchronous repair: The correction is not part of a read or write operation.
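
The read-repair variant might look roughly like the sketch below, which again assumes per-replica (timestamp, value) versions and "last writer wins" reconciliation; all names are illustrative:

```python
# Read a key from every replica; if they disagree, write the winning version
# back to the stale replicas as part of the read. Sketch only.

def read_with_repair(replicas: list, key: str):
    versions = [r.get(key) for r in replicas]
    latest = max(v for v in versions if v is not None)   # last-writer-wins pick
    for replica, v in zip(replicas, versions):
        if v != latest:
            replica[key] = latest        # repair the stale replica during the read
    return latest[1]                     # hand only the value back to the client

replicas = [{"x": (2, "new")}, {"x": (1, "old")}, {}]
assert read_with_repair(replicas, "x") == "new"
assert all(r["x"] == (2, "new") for r in replicas)       # every replica repaired
```

This is essentially the scheme the Cassandra quote in reference [11] describes: the node coordinating the read picks the version with the highest timestamp and repairs divergent replicas.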

Strong eventual consistency


Whereas eventual consistency is only a liveness guarantee (updates will be observed eventually), strong eventual consistency (SEC) adds the safety guarantee that any two nodes that have received the same (unordered) set of updates will be in the same state. If, furthermore, the system is monotonic, the application will never suffer rollbacks. A common approach to ensuring SEC is to use conflict-free replicated data types (CRDTs).[12]
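
For example, a grow-only counter (G-Counter) is one of the simplest CRDTs: its merge function takes a per-node maximum, which is commutative, associative, and idempotent, so any two replicas that have received the same set of updates end up in the same state. Below is a minimal sketch, with the class layout being an assumption of this example rather than a standard API:

```python
# Grow-only counter CRDT: each node only increments its own slot, and merge
# takes the per-node maximum, so merges can arrive in any order and repeat.

class GCounter:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.counts: dict = {}           # node id -> increments seen from that node

    def increment(self) -> None:
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + 1

    def merge(self, other: "GCounter") -> None:
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment(); b.increment()
a.merge(b); b.merge(a)                   # deliver updates in either order
assert a.counts == b.counts and a.value() == 3   # same update set, same state
```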


References

  1. ^ a b Vogels, W. (2009). "Eventually consistent". Communications of the ACM. 52: 40–44. doi:10.1145/1435417.1435432.
  2. ^ Vogels, W. (2008). "Eventually Consistent". Queue. 6 (6): 14–19. doi:10.1145/1466443.1466448.
  3. ^ a b Terry, D. B.; Theimer, M. M.; Petersen, K.; Demers, A. J.; Spreitzer, M. J.; Hauser, C. H. (1995). "Managing update conflicts in Bayou, a weakly connected replicated storage system". Proceedings of the fifteenth ACM symposium on Operating systems principles - SOSP '95. p. 172. CiteSeerX 10.1.1.12.7323. doi:10.1145/224056.224070. ISBN 978-0897917155. S2CID 7834967.
  4. ^ a b Petersen, K.; Spreitzer, M. J.; Terry, D. B.; Theimer, M. M.; Demers, A. J. (1997). "Flexible update propagation for weakly consistent replication". ACM SIGOPS Operating Systems Review. 31 (5): 288. CiteSeerX 10.1.1.17.555. doi:10.1145/269005.266711.
  5. ^ Pritchett, D. (2008). "Base: An Acid Alternative". Queue. 6 (3): 48–55. doi:10.1145/1394127.1394128.
  6. ^ Bailis, P.; Ghodsi, A. (2013). "Eventual Consistency Today: Limitations, Extensions, and Beyond". Queue. 11 (3): 20. doi:10.1145/2460276.2462076.
  7. ^ Roe, Charles (March 2012). "ACID vs. BASE: The Shifting pH of Database Transaction Processing". DATAVERSITY. DATAVERSITY Education, LLC. Retrieved 29 August 2019.
  8. ^ Pessach, Yaniv (2013). Distributed Storage: Concepts, Algorithms, and Implementations. Amazon. OL 25423189M. "Systems using Eventual Consistency result in decreased system load and increased system availability but result in increased cognitive complexity for users and developers"
  9. ^ Demers, A.; Greene, D.; Hauser, C.; Irish, W.; Larson, J. (1987). "Epidemic algorithms for replicated database maintenance". Proceedings of the sixth annual ACM Symposium on Principles of distributed computing - PODC '87. p. 1. doi:10.1145/41840.41841. ISBN 978-0-89791-239-6. S2CID 1889203.
  10. ^ Lhotka, Rockford (2003). "Concurrency techniques". Archived 2018-05-11 at the Wayback Machine.
  11. ^ Olivier Mallassi (2010-06-09). "Let's play with Cassandra… (Part 1/3)". OCTO Talks!. Retrieved 2011-03-23. Of course, at a given time, chances are high that each node has its own version of the data. Conflict resolution is made during the read requests (called read-repair) and the current version of Cassandra does not provide a Vector Clock conflict resolution mechanisms [sic] (should be available in the version 0.7). Conflict resolution is so based on timestamp (the one set when you insert the row or the column): the higher timestamp win[s] and the node you are reading the data [from] is responsible for that. This is an important point because the timestamp is specified by the client, at the moment the column is inserted. Thus, all Cassandra clients' [sic] need to be synchronized...
  12. ^ Shapiro, Marc; Preguiça, Nuno; Baquero, Carlos; Zawirski, Marek (2011-10-10). "Conflict-free replicated data types". SSS'11 Proceedings of the 13th International Conference on Stabilization, Safety, and the Security of Distributed Systems. Springer-Verlag Berlin, Heidelberg: 386–400.
