Web archiving
Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture due to the massive size and amount of information on the Web. The largest web archiving effort based on a bulk crawling approach is the Internet Archive's Wayback Machine, which strives to maintain an archive of the entire Web.
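The capture loop such crawlers perform can be sketched as follows. This is a minimal illustration only, not the method of any particular archive; production crawlers such as Heritrix add politeness rules, deduplication, and WARC output, and the function names here (e.g. `extract_links`) are hypothetical.

```python
# Minimal sketch of one step of an archival crawl: parse a fetched page
# and collect absolute URLs to enqueue on the crawl frontier.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect absolute URLs from <a href="..."> tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

# A real crawler would fetch each page over HTTP, store the raw response
# with a capture timestamp, then enqueue the links extracted here.
page = '<a href="/about">About</a> <a href="https://example.org/x">X</a>'
print(extract_links(page, "https://example.com/"))
```

The frontier would then hold both the same-site link (`https://example.com/about`) and the external one, and scope rules decide which are followed.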
The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving.[1] National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.
Commercial web archiving software and services are also available to organizations who need to archive their own web content for corporate heritage, regulatory, or legal purposes.
History and development
While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996.[2] The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001.[2] As of 2018, the Internet Archive was home to 40 petabytes of data.[3] The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries.[2] Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora, Tasmanian web archives and Sweden's Kulturarw3.[4][5]
From 2001 to 2010, the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas.[6][7] The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open source tools for the creation of web archives.[8]
The now-defunct Internet Memory Foundation was founded in 2004 by the European Commission in order to archive the web in Europe.[2] This project developed and released many open source tools, such as "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection."[2] The data from the foundation is now housed by the Internet Archive, but is not currently publicly accessible.[9]
Despite the fact that there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President's tweets as official statements.[10]
Laws
In 2017, the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses conducting digital communications are required to keep a record. This includes website data, social media posts, and messages.[17] Some copyright laws may inhibit web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works, including those that do not have an open access license, and thereby contributes to the archiving of scientific research that may otherwise be lost.[18][19]