Web archiving

Web archiving is the process of collecting, preserving, and providing access to material from the World Wide Web. The aim is to ensure that information is preserved in an archival format for research and the public. Web archivists typically employ automated web crawlers to capture the massive amount of information on the Web. A widely known web archive service is the Wayback Machine, run by the Internet Archive. The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving. National libraries, national archives, and various consortia of organizations are also involved in archiving web content to prevent its loss. Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.


History and development

While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996. The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001. As of 2018, the Internet Archive was home to 40 petabytes of data. The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries. Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora and Tasmanian web archives, and Sweden's Kulturarw3. From 2001 the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas. The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open-source tools for the creation of web archives. The now-defunct Internet Memory Foundation (formerly the European Archive Foundation) was founded in 2004 by the European Commission in order to archive the web in Europe. The project developed and released many open-source tools for "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection." The foundation's data is now housed by the Internet Archive, but it is not currently publicly accessible. Despite the fact that there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017 the United States Department of Justice affirmed that the government treats the President's tweets as official statements.


Methods of collection

Web archivists generally archive various types of web content, including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources, such as access time, MIME type, and content length. This metadata is useful in establishing the authenticity and provenance of the archived collection.
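As an illustration of the kind of metadata involved, the following minimal sketch fetches a single resource and records its access time, MIME type, and content length together with a checksum. It is a hypothetical example using only the Python standard library, not the workflow of any particular archiving tool; the capture function and field names are illustrative.

    import hashlib
    import json
    from datetime import datetime, timezone
    from urllib.request import urlopen

    def capture(url):
        """Fetch a URL and return its payload plus basic provenance metadata."""
        access_time = datetime.now(timezone.utc).isoformat()
        with urlopen(url) as response:
            body = response.read()
            mime_type = response.headers.get_content_type()  # e.g. "text/html"
        record = {
            "url": url,
            "access_time": access_time,                  # when the resource was fetched
            "mime_type": mime_type,                      # from the Content-Type header
            "content_length": len(body),                 # bytes actually received
            "sha256": hashlib.sha256(body).hexdigest(),  # fixity check for authenticity
        }
        return body, record

    if __name__ == "__main__":
        payload, metadata = capture("https://example.org/")
        print(json.dumps(metadata, indent=2))

Real archiving systems store such records in container formats such as WARC rather than ad hoc JSON, but the captured fields serve the same provenance purpose described above.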


Transactional archiving

Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information. A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
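A minimal sketch of this pattern is shown below as a hypothetical WSGI middleware (the class name and storage layout are illustrative assumptions, not any real product's API): it captures each response the application actually serves, skips bodies it has already stored by comparing content hashes, and writes the rest to disk as raw bitstreams.

    import hashlib
    import os
    import time

    class TransactionalArchiver:
        """WSGI middleware sketch: record every response actually served,
        deduplicating by content hash and storing bodies as bitstreams."""

        def __init__(self, app, archive_dir="archive"):
            self.app = app
            self.archive_dir = archive_dir
            self.seen_hashes = set()
            os.makedirs(archive_dir, exist_ok=True)

        def __call__(self, environ, start_response):
            chunks = []
            for chunk in self.app(environ, start_response):
                chunks.append(chunk)
                yield chunk                        # pass the response through unchanged
            body = b"".join(chunks)
            digest = hashlib.sha256(body).hexdigest()
            if digest not in self.seen_hashes:     # filter out duplicate content
                self.seen_hashes.add(digest)
                name = f"{int(time.time())}-{digest[:12]}.bin"
                with open(os.path.join(self.archive_dir, name), "wb") as f:
                    f.write(body)                  # permanent bitstream copy

In practice this interception is usually performed at the web server or a dedicated proxy rather than in application middleware, so that every transaction is captured regardless of which application handled it.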


Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

* The robots exclusion protocol (robots.txt) may request that crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
* Large portions of a website may be hidden in the deep web. For example, the results page behind a web form can lie in the deep web if crawlers cannot follow a link to the results page.
* Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
* Most archiving tools do not capture the page exactly as it is; ad banners and images are often missed while archiving.

However, a native-format web archive, i.e., a fully browsable web archive with working links, media, etc., is only really possible using crawler technology. The Web is so large that crawling a significant portion of it takes a large number of technical resources, and the Web changes so fast that portions of a website may be modified before a crawler has even finished crawling it. A minimal crawler sketch illustrating some of these constraints appears below.


Laws

In 2017 the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses engaged in digital communications are required to keep a record. This includes website data, social media posts, and messages. Some copyright laws may inhibit web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works, including those that do not have an open access license, and thereby contributes to the archiving of scientific research which may otherwise be lost.


See also

* Anna's Archive
* Archive site
* Archive Team
* archive.today (formerly archive.is)
* Collective memory
* Common Crawl
* Digital hoarding
* Digital preservation
* Digital library
* Ghost Archive
* Google Cache
* List of Web archiving initiatives
* Memento Project
* Minerva Initiative
* Mirror website
* National Digital Information Infrastructure and Preservation Program (NDIIPP)
* National Digital Library Program (NDLP)
* PADICAT
* PageFreezer
* Pandora Archive
* UK Web Archive
* Virtual artifact
* Wayback Machine
* Web crawling
* WebCite
* Webrecorder


External links


* International Internet Preservation Consortium (IIPC): international consortium whose mission is to acquire, preserve, and make accessible knowledge and information from the Internet for future generations
* Library of Congress, Web Archiving: https://www.loc.gov/webarchiving/