Web archiving

Web archiving is the process of collecting, preserving, and providing access to material from the World Wide Web. The aim is to ensure that information is preserved in an archival format for research and the public. Web archivists typically employ automated web crawlers to capture the massive amount of information on the Web. A widely known web archive service is the Wayback Machine, run by the Internet Archive.

The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving. National libraries, national archives, and various consortia of organizations are also involved in archiving Web content to prevent its loss. Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.


History and development

While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996. The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001. As of 2018, the Internet Archive was home to 40 petabytes of data. The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries. Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora, Tasmanian web archives, and Sweden's Kulturarw3. From 2001 the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas. The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open source tools for the creation of web archives.

The now-defunct Internet Memory Foundation was founded in 2004 by the European Commission in order to archive the web in Europe. This project developed and released many open source tools, such as "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection." The data from the foundation is now housed by the Internet Archive, but is not currently publicly accessible.

Despite the fact that there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President's tweets as official statements.


Methods of collection

Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
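A minimal sketch of this kind of capture, assuming Python and the third-party requests library (the field names and record layout are illustrative, not any particular archive's format):

```python
import json
from datetime import datetime, timezone

import requests  # third-party HTTP client, used here only for illustration


def capture_with_metadata(url):
    """Fetch one resource and return its bytes plus descriptive metadata.

    The metadata fields (access time, MIME type, content length) mirror
    those mentioned above; the exact record layout is an assumption.
    """
    response = requests.get(url, timeout=30)
    metadata = {
        "url": url,
        "access_time": datetime.now(timezone.utc).isoformat(),
        "mime_type": response.headers.get("Content-Type", "unknown"),
        "content_length": len(response.content),
        "status_code": response.status_code,
    }
    return response.content, metadata


if __name__ == "__main__":
    body, record = capture_with_metadata("https://example.com/")
    print(json.dumps(record, indent=2))
```

In practice such records are written into container formats such as WARC alongside the captured bitstreams rather than kept as loose JSON.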


Transactional archiving

Transactional archiving is an event-driven approach which collects the actual transactions that take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.

A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
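As a rough sketch of the deduplicate-and-store step, the following Python snippet content-addresses each response body by its SHA-256 digest so identical payloads are written only once; the class name, on-disk layout, and what gets logged are assumptions made for illustration, not a description of any specific product.

```python
import hashlib
from pathlib import Path


class TransactionalStore:
    """Toy content-addressed store: each distinct response body is written once.

    Deduplication by digest stands in for the "filter duplicate content"
    step described above; a real system would also record request headers
    and enough context to replay what a visitor actually saw.
    """

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def store_response(self, url, body):
        """Store one response body; return the digest that identifies it."""
        digest = hashlib.sha256(body).hexdigest()
        path = self.root / digest
        if not path.exists():       # skip bodies that are already archived
            path.write_bytes(body)  # persist the raw bitstream
        # append an index entry so the capture event itself is preserved
        with (self.root / "index.log").open("a") as log:
            log.write(f"{url}\t{digest}\n")
        return digest


if __name__ == "__main__":
    store = TransactionalStore("archive_store")
    ref = store.store_response("https://example.com/", b"<html>hello</html>")
    print("stored as", ref)
```

A real deployment would sit in the request path, for example as a reverse proxy or server module, so that every served response passes through this step.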


Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

* The robots exclusion protocol may request that crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
* Large portions of a website may be hidden in the Deep Web. For example, the results page behind a web form can lie in the Deep Web if crawlers cannot follow a link to the results page.
* Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
* Most archiving tools do not capture the page exactly as it is; ad banners and images are often missed while archiving.

However, a native format web archive, i.e., a fully browsable web archive with working links, media, etc., is only really possible using crawler technology. The Web is so large that crawling a significant portion of it takes a large number of technical resources, and it changes so fast that portions of a website may be modified before a crawler has even finished crawling it.
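The robots-exclusion and trap-limiting behaviour described above can be sketched with Python's standard library; the per-host page cap, the user-agent string, and the fallback policy when robots.txt is unreachable are assumptions chosen for illustration.

```python
from collections import defaultdict
from urllib import robotparser
from urllib.parse import urlparse

MAX_PAGES_PER_HOST = 500  # assumed cap to avoid crawler traps (e.g. calendars)


def allowed_by_robots(url, user_agent="example-archiver"):
    """Check the robots exclusion protocol before fetching a URL."""
    parsed = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return True  # assumed policy: if robots.txt is unreachable, allow
    return rp.can_fetch(user_agent, url)


def should_fetch(url, pages_seen_per_host):
    """Combine politeness and trap-avoidance checks for a candidate URL."""
    host = urlparse(url).netloc
    if pages_seen_per_host[host] >= MAX_PAGES_PER_HOST:
        return False  # likely a trap or an unreasonably deep dynamic site
    return allowed_by_robots(url)


if __name__ == "__main__":
    counts = defaultdict(int)
    candidate = "https://example.com/index.html"
    if should_fetch(candidate, counts):
        counts[urlparse(candidate).netloc] += 1
        print("would fetch", candidate)
```

Crawlers that deliberately ignore robots.txt, as some archivists do, would simply skip the allowed_by_robots check.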


Laws

In 2017 the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses conducting digital communications are required to keep a record. This includes website data, social media posts, and messages.

Some copyright laws may inhibit Web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works, including those that do not have an open access license, and thereby contributes to the archiving of scientific research which may otherwise be lost.


See also

* Anna's Archive
* Archive site
* Archive Team
* archive.today (formerly archive.is)
* Collective memory
* Common Crawl
* Digital hoarding
* Digital preservation
* Digital library
* Ghost Archive
* Google Cache
* List of Web archiving initiatives
* Memento Project
* Minerva Initiative
* Mirror website
* National Digital Information Infrastructure and Preservation Program (NDIIPP)
* National Digital Library Program (NDLP)
* PADICAT
* PageFreezer
* Pandora Archive
* UK Web Archive
* Virtual artifact
* Wayback Machine
* Web crawling
* WebCite
* Webrecorder





External links


* International Internet Preservation Consortium (IIPC): international consortium whose mission is to acquire, preserve, and make accessible knowledge and information from the Internet for future generations
* Library of Congress, Web Archiving: https://www.loc.gov/webarchiving/