
PageRank (PR) is an algorithm used by Google Search to rank web pages in its search engine results. PageRank is a way of measuring the importance of website pages. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, PageRank and all associated patents had expired.

Description

[Figure: cartoon illustrating the basic principle of PageRank; the size of each face is proportional to the total size of the other faces pointing to it.] PageRank is a link analysis algorithm: it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element ''E'' is referred to as the ''PageRank of E'' and denoted by $PR(E)$. A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings, the goal being an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com), the IBM CLEVER project, the TrustRank algorithm and the Hummingbird algorithm.

History

The eigenvalue problem was suggested in 1976 by Gabriel Pinski and Francis Narin, who worked on scientometrics ranking scientific journals; in 1977 by Thomas Saaty in his concept of the Analytic Hierarchy Process, which weighted alternative choices; and in 1995 by Bradley Love and Steven Sloman as a cognitive model for concepts, the centrality algorithm. A search engine called "RankDex" from IDD Information Services, designed by Robin Li in 1996, developed a strategy for site-scoring and page-ranking. Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it. RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996 ("About: RankDex", RankDex; accessed 3 May 2014).
Li patented the technology in RankDex, with his patent filed in 1997 and granted in 1999. He later used it when he founded Baidu in China in 2000. Google founder Larry Page referenced Li's work as a citation in some of his U.S. patents for PageRank. Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. An interview with Héctor García-Molina, Stanford Computer Science professor and advisor to Sergey, provides background on the development of the PageRank algorithm. Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it. (A 187-page study from Graz University, Austria (PDF) includes the note that human brains are also used when determining the page rank in Google.)

Algorithm

The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document.

Simplified algorithm

Assume a small universe of four web pages: A, B, C, and D. Links from a page to itself are ignored. Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page in this example is 0.25. The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links. If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75.

:$PR(A) = PR(B) + PR(C) + PR(D).$

Suppose instead that page B had a link to pages C and A, page C had a link to page A, and page D had links to all three pages. Thus, upon the first iteration, page B would transfer half of its existing value, or 0.125, to page A and the other half, or 0.125, to page C. Page C would transfer all of its existing value, 0.25, to the only page it links to, A. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A. At the completion of this iteration, page A will have a PageRank of approximately 0.458.

:$PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}.$

In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L().

:$PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}.$

In the general case, the PageRank value for any page u can be expressed as

:$PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)},$

i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set $B_u$ (the set containing all pages linking to page u), divided by the number ''L''(''v'') of links from page v.
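The single iteration described above can be sketched in a few lines of Python. The page names and the 0.25 starting values follow the four-page example; page A's lack of outbound links is simply left unhandled here, as in the worked example.

```python
# One iteration of the simplified (undamped) PageRank update for the
# four-page example: B links to C and A, C links to A, D links to all three.
links = {
    "B": ["C", "A"],
    "C": ["A"],
    "D": ["A", "B", "C"],
    "A": [],          # A is a sink; its rank is not propagated in this sketch
}
pr = {page: 0.25 for page in links}       # uniform initial PageRank

new_pr = {page: 0.0 for page in links}
for page, outs in links.items():
    for target in outs:
        new_pr[target] += pr[page] / len(outs)

print(round(new_pr["A"], 3))              # 0.458, as computed in the text
```

Each page splits its current value evenly over its outbound links, which is exactly the update $PR(A) = PR(B)/2 + PR(C)/1 + PR(D)/3$ above.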

Damping factor

The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor ''d''. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85. The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (''N'') in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is,

:$PR(A) = \frac{1-d}{N} + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right).$

So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion:

:$PR(A) = 1 - d + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right).$

The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied by ''N'' and the sum becomes ''N''. A statement in Page and Brin's paper that "the sum of all PageRanks is one" and claims by other Google employees support the first variant of the formula above. Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages. Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents. The formula uses a model of a ''random surfer'' who reaches their target site after several clicks, then switches to a random page.
The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions are the links between pages, all of which are equally probable. If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, they pick another URL at random and continue surfing again. When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web. This residual probability, ''d'', is usually set to 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature. So, the equation is as follows:

:$PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)},$

where $p_1, p_2, ..., p_N$ are the pages under consideration, $M(p_i)$ is the set of pages that link to $p_i$, $L(p_j)$ is the number of outbound links on page $p_j$, and $N$ is the total number of pages. The PageRank values are the entries of the dominant right eigenvector of the modified adjacency matrix rescaled so that each column adds up to one.
This makes PageRank a particularly elegant metric: the eigenvector is

:$\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}$

where R is the solution of the equation

:$\mathbf{R} = \begin{bmatrix} \frac{1-d}{N} \\ \frac{1-d}{N} \\ \vdots \\ \frac{1-d}{N} \end{bmatrix} + d \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots & & \vdots \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & \cdots & & \ell(p_N,p_N) \end{bmatrix} \mathbf{R}$

where the adjacency function $\ell(p_i,p_j)$ is the ratio of the number of links outbound from page j to page i to the total number of outbound links of page j. The adjacency function is 0 if page $p_j$ does not link to $p_i$, and normalized such that, for each ''j'',

:$\sum_{i=1}^N \ell(p_i,p_j) = 1,$

i.e. the elements of each column sum up to 1, so the matrix is a stochastic matrix (for more details see the computation section below). Thus this is a variant of the eigenvector centrality measure used commonly in network analysis. Because of the large eigengap of the modified adjacency matrix above, the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations. Google's founders, in their original paper, reported that the PageRank algorithm for a network consisting of 322 million links (in-edges and out-edges) converges to within a tolerable limit in 52 iterations. The convergence in a network of half the above size took approximately 45 iterations. Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear in $\log n$, where n is the size of the network. As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks.
This happens to equal $t^{-1}$ where $t$ is the expectation of the number of clicks (or random jumps) required to get from the page back to itself. One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia). Several strategies have been proposed to accelerate the computation of PageRank. Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which purports to determine which documents are actually highly valued by the Web community. Since December 2007, when it started ''actively'' penalizing sites selling paid text links, Google has combated link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
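The relationship between the two damping-factor formulas above can be checked numerically. In this sketch (the three-page link matrix is an arbitrary illustration with no sink pages), the first variant keeps the ranks summing to one, while the original-paper variant converges to the same vector scaled by ''N''.

```python
import numpy as np

# Column-stochastic link matrix for three pages with no sinks:
# page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
N, d = 3, 0.85

v = np.full(N, 1.0 / N)   # variant with (1 - d)/N: a probability distribution
w = np.full(N, 1.0)       # original-paper variant with (1 - d)
for _ in range(100):
    v = (1 - d) / N + d * M @ v
    w = (1 - d) + d * M @ w

print(v.sum(), w.sum())   # ~1.0 and ~3.0: the variants differ by a factor of N
```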

Computation

PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method (also known simply as the power method). The basic mathematical operations performed are identical.

Iterative

At $t=0$, an initial probability distribution is assumed, usually

:$PR(p_i; 0) = \frac{1}{N},$

where N is the total number of pages, and $p_i; 0$ is page i at time 0. At each time step, the computation, as detailed above, yields

:$PR(p_i; t+1) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j; t)}{L(p_j)},$

where d is the damping factor, or in matrix notation

:$\mathbf{R}(t+1) = d \mathcal{M}\mathbf{R}(t) + \frac{1-d}{N} \mathbf{1},$

where $\mathbf{R}_i(t)=PR(p_i; t)$ and $\mathbf{1}$ is the column vector of length $N$ containing only ones. The matrix $\mathcal{M}$ is defined as

:$\mathcal{M}_{ij} = \begin{cases} 1/L(p_j), & \mbox{if } j \mbox{ links to } i \\ 0, & \mbox{otherwise} \end{cases}$

i.e.,

:$\mathcal{M} := (K^{-1} A)^T,$

where $A$ denotes the adjacency matrix of the graph and $K$ is the diagonal matrix with the outdegrees in the diagonal. The probability calculation is made for each page at a time point, then repeated for the next time point. The computation ends when for some small $\epsilon$

:$|\mathbf{R}(t+1) - \mathbf{R}(t)| < \epsilon,$

i.e., when convergence is assumed.
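As a sketch of the iteration just described (the 3-page adjacency matrix is illustrative), one can build $\mathcal{M} = (K^{-1}A)^T$ directly and iterate until the change falls below $\epsilon$:

```python
import numpy as np

# Build M = (K^-1 A)^T from an adjacency matrix A (A[i][j] = 1 when page i
# links to page j), then iterate R(t+1) = d*M*R(t) + (1-d)/N until convergence.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [1, 0, 0]], dtype=float)
K_inv = np.diag(1.0 / A.sum(axis=1))   # inverse out-degree matrix K^-1
M = (K_inv @ A).T                      # column-stochastic transition matrix

N, d, eps = 3, 0.85, 1e-12
R = np.full(N, 1.0 / N)                # uniform initial distribution
while True:
    R_next = (1 - d) / N + d * M @ R
    if np.abs(R_next - R).sum() < eps:
        break
    R = R_next

print(R_next)                          # converged PageRank vector; sums to 1
```

Because the update is a contraction with factor ''d'', the loop is guaranteed to terminate for any $0 < d < 1$.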

Algebraic

For $t \to \infty$ (i.e., in the steady state), the iteration above reads

:$\mathbf{R} = d \mathcal{M}\mathbf{R} + \frac{1-d}{N} \mathbf{1}.$

The solution is given by

:$\mathbf{R} = (\mathbf{I} - d \mathcal{M})^{-1} \frac{1-d}{N} \mathbf{1},$

with the identity matrix $\mathbf{I}$. The solution exists and is unique for $0 < d < 1$. This can be seen by noting that $\mathcal{M}$ is by construction a stochastic matrix and hence has an eigenvalue equal to one as a consequence of the Perron–Frobenius theorem.
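For a graph small enough to solve $(\mathbf{I} - d\mathcal{M})\mathbf{R} = \frac{1-d}{N}\mathbf{1}$ directly, the algebraic solution can be sketched as follows (the matrix is an illustrative stand-in; real web graphs are far too large for a dense solve):

```python
import numpy as np

# Direct algebraic solution R = (I - d*M)^(-1) * ((1-d)/N) * 1 for a small
# column-stochastic transition matrix M.
M = np.array([[0.0, 0.5, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.5, 0.0]])
N, d = 3, 0.85

R = np.linalg.solve(np.eye(N) - d * M, (1 - d) / N * np.ones(N))
print(R)          # steady-state PageRank; the entries sum to 1
```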

Power method

If the matrix $\mathcal{M}$ is a transition probability, i.e., column-stochastic, and $\mathbf{R}$ is a probability distribution (i.e., $|\mathbf{R}|=1$, $\mathbf{E}\mathbf{R}=\mathbf{1}$ where $\mathbf{E}$ is the matrix of all ones), then the equation above is equivalent to

:$\mathbf{R} = \left( d \mathcal{M} + \frac{1-d}{N} \mathbf{E} \right) \mathbf{R} =: \widehat{\mathcal{M}} \mathbf{R}.$

Hence PageRank $\mathbf{R}$ is the principal eigenvector of $\widehat{\mathcal{M}}$. A fast and easy way to compute this is using the power method: starting with an arbitrary vector $x(0)$, the operator $\widehat{\mathcal{M}}$ is applied in succession, i.e.,

:$x(t+1) = \widehat{\mathcal{M}} x(t),$

until

:$|x(t+1) - x(t)| < \epsilon.$

Note that the matrix on the right-hand side in the parenthesis above can be interpreted as

:$\frac{1-d}{N} \mathbf{E} = (1-d) \mathbf{v} \mathbf{1}^t,$

where $\mathbf{v}$ is an initial probability distribution. In the current case

:$\mathbf{v} := \frac{1}{N} \mathbf{1}.$

Finally, if $\mathcal{M}$ has columns with only zero values, they should be replaced with the initial probability vector $\mathbf{v}$. In other words,

:$\mathcal{M}^\prime := \mathcal{M} + \mathcal{D},$

where the matrix $\mathcal{D}$ is defined as

:$\mathcal{D} := \mathbf{v} \mathbf{d}^t,$

with

:$\mathbf{d}_i = \begin{cases} 1, & \mbox{if } L(p_i)=0 \\ 0, & \mbox{otherwise} \end{cases}$

In this case, the above computations only give the same PageRank if their results are normalized:

:$\mathbf{R}_{\textrm{power}} = \frac{\mathbf{R}_{\textrm{iterative}}}{|\mathbf{R}_{\textrm{iterative}}|} = \frac{\mathbf{R}_{\textrm{algebraic}}}{|\mathbf{R}_{\textrm{algebraic}}|}.$
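A sketch of the power method with the dangling-node correction described above (the matrix, with page 2 as a sink, is an illustrative example):

```python
import numpy as np

# Power method on the "Google matrix" d*M' + (1-d)*v*1^T, where zero columns
# of M (sink pages) have been replaced by the teleport distribution v.
M = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])        # column 2 is all zeros: page 2 is a sink
N, d = 3, 0.85
v = np.full(N, 1.0 / N)                # uniform initial/teleport distribution

M_prime = M.copy()
M_prime[:, M.sum(axis=0) == 0] = v[:, None]    # M' := M + D from the text

G = d * M_prime + (1 - d) * np.outer(v, np.ones(N))
x = np.full(N, 1.0 / N)
for _ in range(200):                   # power iteration x <- G x
    x = G @ x

print(x)                               # principal eigenvector; sums to 1
```

Since every entry of G is positive and its columns sum to one, the iterates stay a probability distribution and converge to the unique principal eigenvector.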

Implementation

Scala/Apache Spark

A typical example is using Scala's functional programming with Apache Spark RDDs to iteratively compute PageRank, for instance in a standalone application object such as SparkPageRank.

MATLAB/Octave

% Parameter M adjacency matrix where M_i,j represents the link from 'j' to 'i',
%    such that for all 'j' sum(i, M_i,j) = 1
% Parameter d damping factor
% Parameter v_quadratic_error quadratic error for v
% Return v, a vector of ranks such that v_i is the i-th rank from [0, 1]

function [v] = rank2(M, d, v_quadratic_error)
    N = size(M, 2); % N is equal to either dimension of M and the number of documents
    v = rand(N, 1);
    v = v ./ norm(v, 1); % This is now L1, not L2
    last_v = ones(N, 1) * inf;
    M_hat = (d .* M) + (((1 - d) / N) .* ones(N, N));

    while (norm(v - last_v, 2) > v_quadratic_error)
        last_v = v;
        v = M_hat * v; % removed the L2 norm of the iterated PR
    end
end % function

Example of code calling the rank function defined above:

M = [0 0 0 0 1;
     0.5 0 0 0 0;
     0.5 0 0 0 0;
     0 1 0.5 0 0;
     0 0 0.5 1 0];
rank2(M, 0.80, 0.001)

Python

"""PageRank algorithm with explicit number of iterations. Returns ------- ranking of nodes (pages) in the adjacency matrix """ import numpy as np def pagerank(M, num_iterations: int = 100, d: float = 0.85): """PageRank: The trillion dollar algorithm. Parameters ---------- M : numpy array adjacency matrix where M_i,j represents the link from 'j' to 'i', such that for all 'j' sum(i, M_i,j) = 1 num_iterations : int, optional number of iterations, by default 100 d : float, optional damping factor, by default 0.85 Returns ------- numpy array a vector of ranks such that v_i is the i-th rank from , 1 v sums to 1 """ N = M.shape v = np.random.rand(N, 1) v = v / np.linalg.norm(v, 1) M_hat = (d * M + (1 - d) / N) for i in range(num_iterations): v = M_hat @ v return v M = np.array(0, 0, 0, 0, 1 .5, 0, 0, 0, 0 .5, 0, 0, 0, 0 , 1, 0.5, 0, 0 , 0, 0.5, 1, 0) v = pagerank(M, 100, 0.85) This example takes â‰ˆ13 iterations to converge.

Variations

PageRank of an undirected graph

The PageRank of an undirected graph $G$ is statistically close to the degree distribution of the graph $G$, but they are generally not identical: If $R$ is the PageRank vector defined above, and $D$ is the degree distribution vector

:$D = \frac{1}{2|E|} \begin{bmatrix} \deg(p_1) \\ \deg(p_2) \\ \vdots \\ \deg(p_N) \end{bmatrix}$

where $\deg(p_i)$ denotes the degree of vertex $p_i$, and $E$ is the edge-set of the graph, then, with $Y = \frac{1}{N}\mathbf{1}$, it can be shown that

:$\frac{1-d}{1+d}\|Y-D\|_1 \leq \|R-D\|_1 \leq \|Y-D\|_1,$

that is, the PageRank of an undirected graph equals the degree distribution vector if and only if the graph is regular, i.e., every vertex has the same degree.
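This behavior can be checked numerically on small graphs. In this sketch, a 4-cycle (a regular graph) yields a PageRank vector equal to the normalized degree vector, while a 4-vertex path (irregular) yields a close but unequal result:

```python
import numpy as np

def pagerank_undirected(adj, d=0.85, iters=500):
    """Iterative PageRank of an undirected graph given its adjacency matrix."""
    deg = adj.sum(axis=1)
    M = (adj / deg[:, None]).T          # column-stochastic random-walk matrix
    N = len(adj)
    r = np.full(N, 1.0 / N)
    for _ in range(iters):
        r = (1 - d) / N + d * M @ r
    return r

cycle = np.array([[0, 1, 0, 1],         # 4-cycle: every vertex has degree 2
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
path = np.array([[0, 1, 0, 0],          # 4-path: degrees 1, 2, 2, 1
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

for adj in (cycle, path):
    r = pagerank_undirected(adj)
    D = adj.sum(axis=1) / adj.sum()     # degree distribution (sums to 1)
    print(np.round(r, 4), np.round(D, 4))
```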

Generalization of PageRank and eigenvector centrality for ranking objects of two kinds

A generalization of PageRank for the case of ranking two interacting groups of objects was described by Daugulis. In applications it may be necessary to model systems having objects of two kinds where a weighted relation is defined on object pairs. This leads to considering bipartite graphs. For such graphs two related positive or nonnegative irreducible matrices corresponding to vertex partition sets can be defined. One can compute rankings of objects in both groups as eigenvectors corresponding to the maximal positive eigenvalues of these matrices. Normed eigenvectors exist and are unique by the Perron or Perronâ€“Frobenius theorem. Example: consumers and products. The relation weight is the product consumption rate.
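As a hedged illustration of this idea (the weight matrix and its consumer/product interpretation are invented for the example, not taken from Daugulis's paper), the two groups can be ranked via the principal eigenvectors of the two related nonnegative matrices $WW^T$ and $W^TW$:

```python
import numpy as np

# W[i, j] is the (illustrative) consumption rate of product j by consumer i.
W = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 4.0]])

def principal_eigenvector(S, iters=200):
    """Power method for the dominant eigenvector of a nonnegative matrix."""
    x = np.ones(S.shape[0])
    for _ in range(iters):
        x = S @ x
        x /= x.sum()      # renormalize so the entries stay a ranking over items
    return x

consumer_rank = principal_eigenvector(W @ W.T)   # ranks the first group
product_rank = principal_eigenvector(W.T @ W)    # ranks the second group
print(consumer_rank, product_rank)
```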

Distributed algorithm for PageRank computation

Sarma et al. describe two random walk-based distributed algorithms for computing PageRank of nodes in a network. One algorithm takes $O(\log n/\epsilon)$ rounds with high probability on any graph (directed or undirected), where n is the network size and $\epsilon$ is the reset probability ($1-\epsilon$, which is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes $O(\sqrt{\log n}/\epsilon)$ rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size.

SERP rank

The Google Directory PageRank was an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displayed the bar, never the numeric values. Google Directory was closed on July 20, 2011.

False or spoofed PageRank

In the past, the PageRank shown in the Toolbar was easily manipulated. Redirection from one page to another, either via an HTTP 302 response or a "Refresh" meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. This spoofing technique was a known vulnerability. Spoofing can generally be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection.

Manipulating PageRank

Directed Surfer Model

The directed surfer model posits a more intelligent surfer that probabilistically hops from page to page depending on the content of the pages and the query terms the surfer is looking for. This model is based on a query-dependent PageRank score of a page, which, as the name suggests, is also a function of the query. When given a multiple-term query, $Q = \{q_1, q_2, \ldots\}$, the surfer selects a term $q$ according to some probability distribution, $P(q)$, and uses that term to guide its behavior for a large number of steps. It then selects another term according to the distribution to determine its behavior, and so on. The resulting distribution over visited web pages is QD-PageRank.
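A minimal sketch of this model, assuming (purely for illustration) two query terms with term-specific transition matrices and an assumed distribution $P(q)$; the resulting QD-PageRank is the $P(q)$-weighted mix of the per-term PageRank vectors:

```python
import numpy as np

def pagerank(M, d=0.85, iters=200):
    """Standard damped PageRank for a column-stochastic matrix M."""
    N = M.shape[0]
    r = np.full(N, 1.0 / N)
    for _ in range(iters):
        r = (1 - d) / N + d * M @ r
    return r

# Term-specific transition matrices: hypothetical, content-dependent link
# weights for each query term (both column-stochastic).
M_q1 = np.array([[0.0, 1.0, 0.0],
                 [0.5, 0.0, 1.0],
                 [0.5, 0.0, 0.0]])
M_q2 = np.array([[0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0]])
P = {"q1": 0.7, "q2": 0.3}    # assumed distribution over query terms

qd_pagerank = P["q1"] * pagerank(M_q1) + P["q2"] * pagerank(M_q2)
print(qd_pagerank)            # a convex mix of per-term ranks; sums to 1
```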

Social components

Katja Mayer views PageRank as a social network, as it connects differing viewpoints and thoughts in a single place. People go to PageRank for information and are flooded with citations of other authors who also have an opinion on the topic. This creates a social aspect where everything can be discussed and collected to provoke thinking. There is a social relationship between PageRank and the people who use it, as it is constantly adapting and changing to the shifts in modern society. Viewing the relationship between PageRank and the individual through sociometry allows for an in-depth look at the connection that results. Matteo Pasquinelli reckons the basis for the belief that PageRank has a social component lies in the idea of attention economy. With attention economy, value is placed on products that receive a greater amount of human attention, and the results at the top of the PageRank garner a larger amount of focus than those on subsequent pages. The outcomes with the higher PageRank will therefore enter the human consciousness to a larger extent. These ideas can influence decision-making, and the actions of the viewer have a direct relation to the PageRank. Top results possess a higher potential to attract a user's attention, as their location increases the attention economy attached to the site. With this location they can receive more traffic, and their online marketplace will have more purchases. The PageRank of these sites allows them to be trusted, and they are able to parlay this trust into increased business.

Other uses

The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It's even used for systems analysis of road networks, as well as biology, chemistry, neuroscience, and physics.

PageRank has recently been used to quantify the scientific impact of researchers. The underlying citation and collaboration networks are used in conjunction with the PageRank algorithm in order to come up with a ranking system for individual publications which propagates to individual authors. The new index, known as the pagerank-index (Pi), is demonstrated to be fairer than the h-index, which exhibits a number of drawbacks. PageRank is also a useful tool for the analysis of protein networks in biology. In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment. A similar new use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves). A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor, and implemented at Eigenfactor as well as at SCImago. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion. In neuroscience, the PageRank of a neuron in a neural network has been found to correlate with its relative firing rate.

Internet use

Personalized PageRank is used by Twitter to present users with other accounts they may wish to follow. Swiftype's site search product builds a "PageRank that's specific to individual websites" by looking at each website's signals of importance and prioritizing content based on factors such as the number of links from the home page. A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers that was used in the creation of Google is ''Efficient crawling through URL ordering'', which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed such as the number of inbound and outbound links for a URL, and the distance from the root directory on a site to the URL. The PageRank may also be used as a methodology to measure the apparent impact of a community like the Blogosphere on the overall Web itself. This approach therefore uses PageRank to measure the distribution of attention in reflection of the Scale-free network paradigm.

Other applications

In 2005, in a pilot study in Pakistan, ''Structural Deep Democracy, SD2'' was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 uses ''PageRank'' for the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2 as the underlying umbrella system, mandates that generalist proxies should always be used. In sport the PageRank algorithm has been used to rank the performance of: teams in the National Football League (NFL) in the USA; individual soccer players; and athletes in the Diamond League. PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets. In lexical semantics it has been used to perform Word Sense Disambiguation, Semantic similarity, and also to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.

See also

*Attention inequality
*CheiRank
*Domain Authority
*EigenTrust – a decentralized PageRank algorithm
*Google bomb
*Google Hummingbird
*Google matrix
*Google Panda
*Google Penguin
*Google Search
*Hilltop algorithm
*Katz centrality – a 1953 scheme closely related to PageRank
*Link building
*Search engine optimization
*SimRank – a measure of object-to-object similarity based on the random-surfer model
*TrustRank
*VisualRank – Google's application of PageRank to image-search
*Webgraph


Relevant patents

Original PageRank U.S. Patent – Method for node ranking in a linked database – Patent number 6,285,999 – September 4, 2001
PageRank U.S. Patent – Patent number 6,799,176 – September 28, 2004
* [http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&p=1&f=G&l=50&d=PTXT&S1=7,058,628.PN.&OS=pn/7,058,628&RS=PN/7,058,628 PageRank U.S. Patent – Method for node ranking in a linked database] – Patent number 7,058,628 – June 6, 2006
PageRank U.S. Patent – Scoring documents in a linked database – Patent number 7,269,587 – September 11, 2007