Normalized Discounted Cumulative Gain
Discounted cumulative gain (DCG) is a measure of ranking quality. In information retrieval, it is often used to measure the effectiveness of web search engine algorithms or related applications. Using a graded relevance scale of documents in a search-engine result set, DCG measures the usefulness, or ''gain'', of a document based on its position in the result list. The gain is accumulated from the top of the result list to the bottom, with the gain of each result discounted at lower ranks (Järvelin & Kekäläinen, 2002; see References).


Overview

Two assumptions are made in using DCG and its related measures:

1. Highly relevant documents are more useful when appearing earlier in a search engine result list (i.e., when they have higher ranks).
2. Highly relevant documents are more useful than marginally relevant documents, which are in turn more useful than non-relevant documents.

DCG originates from an earlier, more primitive measure called Cumulative Gain.


Cumulative Gain

Cumulative Gain (CG) is the sum of the graded relevance values of all results in a search result list. This predecessor of DCG does not take the rank (position) of a result into account when assessing the usefulness of a result set. The CG at a particular rank position p is defined as:

: \mathrm{CG_{p}} = \sum_{i=1}^{p} rel_{i}

where rel_{i} is the graded relevance of the result at position i. The value computed with the CG function is unaffected by changes in the ordering of search results. That is, moving a highly relevant document d_{i} above a higher-ranked, less relevant document d_{j} does not change the computed value of CG (assuming i, j \leq p). Based on the two assumptions made above about the usefulness of search results, (N)DCG is usually preferred over CG. Cumulative Gain is sometimes called Graded Precision, as it is identical to the Precision metric when the rating scale is binary.
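
To make the definition concrete, here is a minimal Python sketch (the function name cumulative_gain is our illustration, not a standard API):

    from typing import Sequence

    def cumulative_gain(relevances: Sequence[float], p: int) -> float:
        """Sum the graded relevance values of the top-p results."""
        return sum(relevances[:p])

    # CG is insensitive to ordering: permuting the top-p results leaves it unchanged.
    assert cumulative_gain([3, 2, 3, 0, 1, 2], 6) == cumulative_gain([0, 1, 2, 3, 3, 2], 6) == 11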


Discounted Cumulative Gain

The premise of DCG is that highly relevant documents appearing lower in a search result list should be penalized, as the graded relevance value is reduced logarithmically proportional to the position of the result. The traditional formula of DCG accumulated at a particular rank position p is defined as:

: \mathrm{DCG_{p}} = \sum_{i=1}^{p} \frac{rel_{i}}{\log_{2}(i+1)} = rel_{1} + \sum_{i=2}^{p} \frac{rel_{i}}{\log_{2}(i+1)}

Previously there was no theoretically sound justification for using a logarithmic reduction factor other than the fact that it produces a smooth reduction. But Wang et al. (2013) gave a theoretical guarantee for using the logarithmic reduction factor in Normalized DCG (NDCG): they show that for every pair of substantially different ranking functions, NDCG can decide which one is better in a consistent manner.

An alternative formulation of DCG places stronger emphasis on retrieving highly relevant documents:

: \mathrm{DCG_{p}} = \sum_{i=1}^{p} \frac{2^{rel_{i}} - 1}{\log_{2}(i+1)}

The latter formula is commonly used in industry, including by major web search companies and data-science competition platforms such as Kaggle. The two formulations of DCG are the same when the relevance values of documents are binary, i.e., rel_{i} \in \{0, 1\}.

Note that Croft et al. (2010) and Burges et al. (2005) present the second DCG formula with a logarithm of base e, while both versions of DCG above use a logarithm of base 2. When computing NDCG with the first formulation of DCG, the base of the logarithm does not matter, but the base does affect the value of NDCG for the second formulation. Clearly, the base of the logarithm affects the value of DCG itself in both formulations.
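
Both formulations can be expressed compactly in Python. The following is a sketch under the base-2 convention used above; the function names dcg and dcg_exp are illustrative:

    import math
    from typing import Sequence

    def dcg(relevances: Sequence[float], p: int) -> float:
        """Traditional DCG: linear gain with a log2 positional discount."""
        return sum(rel / math.log2(i + 1)
                   for i, rel in enumerate(relevances[:p], start=1))

    def dcg_exp(relevances: Sequence[float], p: int) -> float:
        """Alternative DCG: exponential gain (2^rel - 1), emphasizing highly relevant documents."""
        return sum((2 ** rel - 1) / math.log2(i + 1)
                   for i, rel in enumerate(relevances[:p], start=1))

    # With binary relevance the two formulations coincide, since 2^1 - 1 = 1 and 2^0 - 1 = 0.
    binary = [1, 0, 1, 1]
    assert math.isclose(dcg(binary, 4), dcg_exp(binary, 4))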


Normalized DCG

Search result lists vary in length depending on the query. Comparing a search engine's performance from one query to the next cannot be consistently achieved using DCG alone, so the cumulative gain at each position for a chosen value of p should be normalized across queries. This is done by sorting all relevant documents in the corpus by their relative relevance, producing the maximum possible DCG through position p, also called the Ideal DCG (IDCG) through that position. For a query, the ''normalized discounted cumulative gain'', or nDCG, is computed as:

: \mathrm{nDCG_{p}} = \frac{DCG_{p}}{IDCG_{p}}

where IDCG is the ideal discounted cumulative gain,

: \mathrm{IDCG_{p}} = \sum_{i=1}^{|REL_{p}|} \frac{rel_{i}}{\log_{2}(i+1)}

and REL_{p} represents the list of relevant documents in the corpus (ordered by their relevance) up to position p. The nDCG values for all queries can be averaged to obtain a measure of the average performance of a search engine's ranking algorithm. Note that for a perfect ranking algorithm, DCG_p will be the same as IDCG_p, producing an nDCG of 1.0. All nDCG calculations are then relative values on the interval 0.0 to 1.0, and so are cross-query comparable. The main difficulty encountered in using nDCG is the unavailability of an ideal ordering of results when only partial relevance feedback is available.
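
A minimal Python sketch of the computation follows. Note an assumption: it approximates IDCG from the judged list itself, whereas strictly the ideal ordering should be taken over all relevant documents in the corpus:

    import math
    from typing import Sequence

    def ndcg(relevances: Sequence[float], p: int) -> float:
        """nDCG_p = DCG_p / IDCG_p, with IDCG approximated from the given judgments."""
        def dcg(rels: Sequence[float]) -> float:
            return sum(rel / math.log2(i + 1)
                       for i, rel in enumerate(rels[:p], start=1))
        idcg = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / idcg if idcg > 0 else 0.0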


Example

Presented with a list of documents in response to a search query, an experiment participant is asked to judge the relevance of each document to the query. Each document is judged on a scale of 0–3, with 0 meaning not relevant, 3 meaning highly relevant, and 1 and 2 meaning "somewhere in between". For the documents ordered by the ranking algorithm as

: D_1, D_2, D_3, D_4, D_5, D_6

the user provides the following relevance scores:

: 3, 2, 3, 0, 1, 2

That is: document 1 has a relevance of 3, document 2 has a relevance of 2, etc. The Cumulative Gain of this search result listing is:

: \mathrm{CG_{6}} = \sum_{i=1}^{6} rel_{i} = 3 + 2 + 3 + 0 + 1 + 2 = 11

Changing the order of any two documents does not affect the CG measure. If D_3 and D_4 are switched, the CG remains the same, 11. DCG is used to emphasize highly relevant documents appearing early in the result list. Using the logarithmic scale for reduction, the DCG for each result in order is:

  i    rel_i    log_2(i+1)    rel_i / log_2(i+1)
  1    3        1             3
  2    2        1.585         1.262
  3    3        2             1.5
  4    0        2.322         0
  5    1        2.585         0.387
  6    2        2.807         0.712

So the DCG_6 of this ranking is:

: \mathrm{DCG_{6}} = \sum_{i=1}^{6} \frac{rel_{i}}{\log_{2}(i+1)} = 3 + 1.262 + 1.5 + 0 + 0.387 + 0.712 = 6.861

Now a switch of D_3 and D_4 results in a reduced DCG, because a less relevant document is placed higher in the ranking; that is, a more relevant document is discounted more by being placed at a lower rank.

The performance of this query cannot be compared to that of another query in this form, since the other query may have more results, resulting in a larger overall DCG that is not necessarily better. In order to compare, the DCG values must be normalized.

To normalize DCG values, an ideal ordering for the given query is needed. For this example, that ordering would be the monotonically decreasing sort of all known relevance judgments. In addition to the six from this experiment, suppose we also know there is a document D_7 with relevance grade 3 to the same query and a document D_8 with relevance grade 2 to that query. Then the ideal ordering is:

: 3, 3, 3, 2, 2, 2, 1, 0

The ideal ranking is cut to length 6 to match the depth of analysis of the ranking:

: 3, 3, 3, 2, 2, 2

The DCG of this ideal ordering, or ''IDCG (Ideal DCG)'', is computed to rank 6:

: \mathrm{IDCG_{6}} = 8.740

And so the nDCG for this query is given as:

: \mathrm{nDCG_{6}} = \frac{DCG_{6}}{IDCG_{6}} = \frac{6.861}{8.740} = 0.785
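
The worked example above can be reproduced with a few lines of Python (variable names are ours):

    import math

    def dcg(rels, p):
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(rels[:p], start=1))

    ranked = [3, 2, 3, 0, 1, 2]          # grades in the order returned by the ranker
    pool = ranked + [3, 2]               # add the known judgments for D_7 and D_8
    ideal = sorted(pool, reverse=True)   # 3, 3, 3, 2, 2, 2, 1, 0

    print(round(dcg(ranked, 6), 3))                  # 6.861
    print(round(dcg(ideal, 6), 3))                   # 8.740
    print(round(dcg(ranked, 6) / dcg(ideal, 6), 3))  # 0.785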


Limitations

1. The Normalized DCG metric does not penalize bad documents in the result. For example, if a query returns two results with scores 1, 1, 1 and 1, 1, 1, 0 respectively, both would be considered equally good even though the latter contains a bad document. For the ranking judgments Excellent, Fair, Bad one might use numerical scores 1, 0, -1 instead of 2, 1, 0. This would cause the score to lower if bad results are returned, prioritizing the precision of the results over the recall. Note that this approach can result in an overall negative score, which would shift the lower bound of the score from 0 to a negative value.

2. Normalized DCG does not penalize missing documents in the result. For example, if a query returns two results with scores 1, 1, 1 and 1, 1, 1, 1, 1 respectively, both would be considered equally good, assuming ideal DCG is computed to rank 3 for the former and rank 5 for the latter. One way to account for this limitation is to enforce a fixed set size for the result set and use minimum scores for the missing documents. In the previous example, we would use the scores 1, 1, 1, 0, 0 and 1, 1, 1, 1, 1 and quote nDCG as nDCG@5, as shown in the sketch after this list.

3. Normalized DCG may not be suitable for measuring the performance of queries that often have several equally good results. This is especially true when the metric is limited to only the first few results, as is done in practice. For example, for a query such as "restaurants", nDCG@1 would account for only the first result; if one result set contains only one restaurant from the nearby area while the other contains five, both would end up with the same score even though the latter is more comprehensive.
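
The fixed-cutoff remedy from the second limitation can be sketched as follows. The function ndcg_at_k and its padding behavior are our illustration, with the added assumption that the ideal ranking is taken from the full pool of known judgments (otherwise padding alone would change nothing):

    import math

    def dcg(rels):
        return sum(rel / math.log2(i + 1) for i, rel in enumerate(rels, start=1))

    def ndcg_at_k(retrieved, k, judgment_pool, min_score=0):
        """nDCG@k: pad short result lists with min_score; take IDCG from the full pool."""
        padded = list(retrieved[:k]) + [min_score] * max(0, k - len(retrieved))
        ideal = sorted(judgment_pool, reverse=True)[:k]
        return dcg(padded) / dcg(ideal)

    pool = [1, 1, 1, 1, 1]                           # five relevant documents are known
    print(round(ndcg_at_k([1, 1, 1], 5, pool), 3))   # 0.723: missing documents now hurt
    print(ndcg_at_k([1, 1, 1, 1, 1], 5, pool))       # 1.0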


See also

* Evaluation measures (information retrieval)
* Learning to rank


References

* Kalervo Järvelin, Jaana Kekäläinen: Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems 20(4), 422–446 (2002)