
Author Greene, Dan ♦ Sturgis, Howard ♦ Shenker, Scott ♦ Demers, Alan ♦ Swinehart, Dan ♦ Terry, Doug ♦ Houser, Carl ♦ Irish, Wes ♦ Larson, John
Source ACM Digital Library
Content type Text
Publisher Association for Computing Machinery (ACM)
File Format PDF
Language English
Subject Domain (in DDC) Computer science, information & general works ♦ Data processing & computer science
Abstract When a database is replicated at many sites, maintaining mutual consistency among the sites in the face of updates is a significant problem. This paper describes several randomized algorithms for distributing updates and driving the replicas toward consistency. The algorithms are very simple and require few guarantees from the underlying communication system, yet they ensure that the effect of every update is eventually reflected in all replicas. The cost and performance of the algorithms are tuned by choosing appropriate distributions in the randomization step. The algorithms are closely analogous to epidemics, and the epidemiology literature aids in understanding their behavior. One of the algorithms has been implemented in the Clearinghouse servers of the Xerox Corporate Internet, solving long-standing problems of high traffic and database inconsistency.
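The epidemic-style propagation the abstract describes can be illustrated with a toy sketch. This is not the paper's implementation; it is a minimal simulation of push-style rumor mongering, assuming a fully connected set of sites and synchronous rounds, where each informed site forwards the update to one peer chosen uniformly at random each round:

```python
import random

def rumor_rounds(n_sites: int, seed: int = 0) -> int:
    """Toy push-style rumor mongering: site 0 starts with an update;
    each round, every informed site pushes it to one random peer.
    Returns the number of rounds until all sites hold the update."""
    rng = random.Random(seed)
    informed = {0}  # site 0 originates the update
    rounds = 0
    while len(informed) < n_sites:
        # Snapshot the informed set so this round's pushes come only
        # from sites that were informed at the start of the round.
        for _ in list(informed):
            informed.add(rng.randrange(n_sites))
        rounds += 1
    return rounds
```

Because this sketch never stops spreading the rumor, every site is eventually informed with probability 1, and the number of rounds grows roughly logarithmically in the number of sites; the paper's actual algorithms add mechanisms (e.g. losing interest in a rumor, anti-entropy as a backup) to bound traffic while preserving eventual consistency.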
Description Affiliation: Xerox Palo Alto Research Center, Palo Alto, CA (Larson, John) || Xerox Palo Alto Research Center, Palo Alto, CA (Demers, Alan; Greene, Dan; Houser, Carl; Irish, Wes; Shenker, Scott; Sturgis, Howard; Swinehart, Dan; Terry, Doug)
Age Range 18 to 22 years ♦ above 22 years
Educational Use Research
Education Level UG and PG
Learning Resource Type Article
Publisher Date 1988-01-01
Publisher Place New York
Journal ACM SIGOPS Operating Systems Review (OSR)
Volume Number 22
Issue Number 1
Page Count 25
Starting Page 8
Ending Page 32