Distributed web crawling is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. Such systems may allow for users to voluntarily offer their own computing and bandwidth resources towards crawling web pages. By spreading the load of these tasks across many computers, costs that would otherwise be spent on maintaining large computing clusters are avoided.
Types
Cho and Garcia-Molina studied two types of policies:
Dynamic assignment
With this type of policy, a central server assigns new URLs to different crawlers dynamically. This allows the central server to, for instance, dynamically balance the load of each crawler.
With dynamic assignment, the system can typically also add or remove downloader processes. Because the central server may become a bottleneck, most of the workload must be transferred to the distributed crawling processes for large crawls.
There are two configurations of crawling architectures with dynamic assignments that have been described by Shkapenyuk and Suel:
* A small crawler configuration, in which there is a central DNS resolver and central queues per Web site, and distributed downloaders.
* A large crawler configuration, in which the DNS resolver and the queues are also distributed.
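The following is a minimal sketch of the dynamic-assignment idea, assuming a hypothetical central dispatcher that owns the URL frontier and hands each URL to the worker with the smallest current backlog; the class and method names are illustrative rather than taken from any particular crawler.

<syntaxhighlight lang="python">
import heapq
from collections import deque

class CentralDispatcher:
    """Illustrative central server for dynamic assignment: it owns the URL
    frontier and hands each new URL to the currently least-loaded crawler."""

    def __init__(self, crawler_ids):
        self.frontier = deque()                 # URLs waiting to be assigned
        # (backlog, crawler_id) pairs kept as a min-heap for load balancing
        self.load = [(0, cid) for cid in crawler_ids]
        heapq.heapify(self.load)

    def add_url(self, url):
        self.frontier.append(url)

    def assign_next(self):
        """Pop one URL and assign it to the crawler with the smallest backlog."""
        if not self.frontier:
            return None
        url = self.frontier.popleft()
        backlog, crawler_id = heapq.heappop(self.load)
        heapq.heappush(self.load, (backlog + 1, crawler_id))
        return crawler_id, url

    def report_done(self, crawler_id):
        """Called when a crawler finishes a download, reducing its backlog."""
        self.load = [(b - 1 if c == crawler_id else b, c) for b, c in self.load]
        heapq.heapify(self.load)

# Example: three downloader processes, with the dispatcher balancing the load.
dispatcher = CentralDispatcher(["crawler-0", "crawler-1", "crawler-2"])
for u in ["http://example.org/", "http://example.com/a", "http://example.net/b"]:
    dispatcher.add_url(u)
while (assignment := dispatcher.assign_next()) is not None:
    print(assignment)
</syntaxhighlight>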
Static assignment
With this type of policy, there is a fixed rule stated from the beginning of the crawl that defines how to assign new URLs to the crawlers.
For static assignment, a hashing function can be used to transform URLs (or, even better, complete website names) into a number that corresponds to the index of the crawling process responsible for them. Since some external links will point from a website assigned to one crawling process to a website assigned to a different crawling process, some exchange of URLs must occur.
To reduce the overhead due to the exchange of URLs between crawling processes, the exchange should be done in batches of several URLs at a time, and the most cited URLs in the collection should be known by all crawling processes before the crawl (e.g., using data from a previous crawl).
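A minimal sketch of how such a static assignment might look, assuming the hostname (rather than the full URL) is hashed to one of a fixed number of crawling processes and that cross-site URLs are buffered and exchanged in batches; the function names, batch size, and send_batch stub are illustrative assumptions, not part of any existing system.

<syntaxhighlight lang="python">
import hashlib
from collections import defaultdict
from urllib.parse import urlsplit

NUM_CRAWLERS = 4    # fixed number of crawling processes, decided before the crawl
BATCH_SIZE = 100    # exchange URLs between processes in batches, not one by one

def assigned_crawler(url: str) -> int:
    """Map a URL's hostname to the index of the crawling process that owns it.
    Hashing the hostname (not the full URL) keeps a whole website on one process."""
    host = urlsplit(url).hostname or ""
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_CRAWLERS

def send_batch(dest: int, urls: list):
    """Stub for the (hypothetical) network transfer to another crawling process."""
    print(f"sending {len(urls)} URLs to crawler {dest}")

class UrlExchanger:
    """Buffers URLs discovered for other processes and ships them in batches."""

    def __init__(self, my_index: int):
        self.my_index = my_index
        self.outgoing = defaultdict(list)   # destination index -> pending URLs

    def route(self, url: str):
        dest = assigned_crawler(url)
        if dest == self.my_index:
            return [url]                    # this process crawls it locally
        self.outgoing[dest].append(url)
        if len(self.outgoing[dest]) >= BATCH_SIZE:
            batch, self.outgoing[dest] = self.outgoing[dest], []
            send_batch(dest, batch)
        return []

# Example: process 0 routes links it discovered while crawling its own sites.
exchanger = UrlExchanger(my_index=0)
for link in ["http://example.com/page", "http://example.org/other"]:
    local_urls = exchanger.route(link)
</syntaxhighlight>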
Implementations
As of 2003, most modern commercial search engines use this technique.
Google and Yahoo use thousands of individual computers to crawl the Web.
Newer projects are attempting to use a less structured, more ''ad hoc'' form of collaboration by enlisting volunteers to join the effort using, in many cases, their home or personal computers.
LookSmart is the largest search engine to use this technique, which powers its Grub distributed web-crawling project. Wikia (now known as Fandom) acquired Grub from LookSmart in 2007.
This solution uses computers that are connected to the Internet to crawl Internet addresses in the background. After downloading crawled web pages, the clients compress them and send them back, together with a status flag (e.g. changed, new, down, redirected), to the powerful central servers. The servers, which manage a large database, send out new URLs to clients for testing.
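One round of such a volunteer client could look roughly like the sketch below, assuming a hypothetical HTTP API on the central server; the /work and /results endpoints, the JSON payload, and the status flags are illustrative, echoing the flags mentioned above.

<syntaxhighlight lang="python">
import gzip
import json
import urllib.request

SERVER = "https://crawl.example.org"      # hypothetical central server

def crawl_once():
    # 1. Ask the central server for a batch of URLs to test.
    with urllib.request.urlopen(SERVER + "/work") as resp:
        urls = json.load(resp)

    results = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as page:
                body = page.read()
            # 2. Compress the downloaded page and record its status.
            results.append({"url": url, "status": "new",
                            "body": gzip.compress(body).hex()})
        except Exception:
            results.append({"url": url, "status": "down", "body": None})

    # 3. Send the compressed pages and status flags back to the central server.
    payload = json.dumps(results).encode("utf-8")
    req = urllib.request.Request(SERVER + "/results", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
</syntaxhighlight>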
Drawbacks
According to the FAQ about Nutch, an open-source search engine website, the savings in bandwidth by distributed web crawling are not significant, since "A successful search engine requires more bandwidth to upload query result pages than its crawler needs to download pages...".
See also
* Distributed computing
* Web crawler
* YaCy - P2P web search engine with distributed crawling
* Seeks - Open-Source P2P Web Search
External links
* Majestic-12 Distributed Search Engine
* UniCrawl: A Practical Geographically Distributed Web Crawler
* Distributed web crawling made easy: system and architecture
{{Web crawlers}}
Applications of distributed computing
Internet search algorithms