### Why web sites are lost (and how they're sometimes found)


- Authors: Nelson, Michael L.; McCown, Frank; Marshall, Catherine C.
- Source: ACM Digital Library
- Content type: Text
- Publisher: Association for Computing Machinery (ACM)
- File format: PDF
- Language: English
#### Abstract

The Web is in constant flux: new pages and Web sites appear daily, and old pages and sites disappear almost as quickly. One study estimates that about two percent of the Web disappears from its current location every week.[2] Although Web users have become accustomed to seeing the infamous "404 Not Found" page, they are more taken aback when they own, are responsible for, or have come to rely on the missing material. Web archivists like those at the Internet Archive have responded to the Web's transience by archiving as much of it as possible, hoping to preserve snapshots of the Web for future generations.[3] Search engines have also responded by offering pages that have been cached as a byproduct of the indexing process. These straightforward archiving and caching efforts have been used by the public in unintended ways: individuals and organizations have used them to restore their own lost Web sites.[5]

To automate the recovery of lost Web sites, we created a Web-repository crawler named Warrick that restores lost resources from the holdings of four Web repositories: Internet Archive, Google, Live Search (now Bing), and Yahoo;[6] we refer to these Web repositories collectively as the Web Infrastructure (WI). We call this after-loss recovery Lazy Preservation (see the sidebar for more information). Warrick can recover only what is accessible to the WI, namely the crawlable Web. Numerous resources cannot be found in the WI: password-protected content, pages without incoming links or protected by the robots exclusion protocol, and content hidden behind Flash or JavaScript interfaces. Most importantly, WI crawlers do not have access to the server-side components of a Web site (for example, scripts, configuration files, and databases).
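The first step of any after-loss recovery of this kind is asking a WI member whether it holds a copy of a given URL. As a minimal sketch (not Warrick itself), the snippet below builds a query against the Internet Archive's public Wayback Machine availability endpoint and parses its JSON response; the sample response is illustrative, not a real capture.

```python
# Sketch of probing one Web Infrastructure member: the Internet Archive's
# Wayback Machine "availability" API, which reports the closest archived
# snapshot of a URL. The sample JSON below is illustrative only.
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build the request URL asking for the closest snapshot of `url`."""
    params = {"url": url}
    if timestamp:  # YYYYMMDDhhmmss narrows the search to the nearest capture
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Return (archive_url, timestamp) from an availability response,
    or None if the URL was never captured."""
    data = json.loads(response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap["url"], snap["timestamp"]
    return None

# Illustrative response in the API's documented shape:
sample = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20050801000000/http://example.com/",
            "timestamp": "20050801000000",
            "status": "200",
        }
    }
})
print(closest_snapshot(sample)[1])  # -> 20050801000000
```

A full reconstructor would repeat such probes across all WI repositories for every URL of the lost site and keep the freshest copy of each resource; as the abstract notes, anything the WI crawlers never saw (server-side code, protected pages) cannot be recovered this way.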
Nevertheless, upon Warrick's public release in 2005, we received many inquiries about its use and collected a handful of anecdotes about the Web sites individuals and organizations had lost and wanted to recover. Were these Web sites representative? What types of Web resources were people losing? Given the inherent limitations of the WI, were Warrick users recovering enough material to reconstruct their sites? Were these losses changing their behavior, or was the availability of cached material reinforcing a "lazy" approach to preservation? We constructed an online survey to explore these questions and conducted a set of in-depth interviews with survey respondents to clarify the results. Potential participants were solicited by us or the Internet Archive, or they found a link to the survey from the Warrick Web site. A total of 52 participants completed the survey regarding 55 lost Web sites, and seven of the participants allowed us to follow up with telephone or instant-messaging interviews. Participants were divided into two groups:

1. Personal loss: those who had lost (and tried to recover) a Web site that they had personally created, maintained, or owned (34 participants who lost 37 Web sites).
2. Third party: those who had recovered someone else's lost Web site (18 participants who recovered 18 Web sites).

#### Article details

- Affiliations: Harding University, Searcy, AR (McCown, Frank); Microsoft Research, Silicon Valley (Marshall, Catherine C.); Old Dominion University (Nelson, Michael L.)
- Journal: Communications of the ACM (CACM), Volume 52, Issue 11, pages 141-145
- Publisher place: New York
- Publisher date: 2005-08-01

