
A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier model at varying threshold values (it can also be applied to multi-class classification). ROC analysis is commonly used to assess the performance of diagnostic tests in clinical epidemiology.
The ROC curve is the plot of the
true positive rate (TPR) against the
false positive rate (FPR) at each threshold setting.
The ROC can also be thought of as a plot of the statistical power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, the plotted rates can be regarded as estimators of these quantities). The ROC curve is thus the sensitivity as a function of the false positive rate.
Given that the probability distributions for both true positive and false positive are known, the ROC curve is obtained as the cumulative distribution function (CDF, the area under the probability density from -\infty to the discrimination threshold) of the detection probability on the ''y''-axis versus the CDF of the false positive probability on the ''x''-axis.
ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to the cost/benefit analysis of diagnostic
decision making.
Terminology
The true-positive rate is also known as
sensitivity or ''probability of detection''.
The false-positive rate is also known as the ''probability of false alarm''
and equals (1 −
specificity).
The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.
Swets, John A. (1996). ''Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers''. Mahwah, NJ: Lawrence Erlbaum Associates.
History
The ROC curve was first developed by electrical and radar engineers during World War II, starting in 1941, for detecting enemy objects on battlefields, which led to its name ("receiver operating characteristic").
It was soon introduced to psychology to account for the perceptual detection of stimuli. ROC analysis has been used for many decades in medicine, radiology, biometrics, forecasting of natural hazards, meteorology, model performance assessment, and other areas, and is increasingly used in machine learning and data mining research.
Basic concept
A classification model (classifier or diagnosis) is a mapping of instances between certain classes/groups. Because the classifier or diagnosis result can be an arbitrary real value (continuous output), the classifier boundary between classes must be determined by a threshold value (for instance, to determine whether a person has hypertension based on a blood pressure measure). Alternatively, the result can be a discrete class label, indicating one of the classes.
Consider a two-class prediction problem (binary classification), in which the outcomes are labeled either as positive (''p'') or negative (''n''). There are four possible outcomes from a binary classifier. If the outcome from a prediction is ''p'' and the actual value is also ''p'', then it is called a ''true positive'' (TP); however if the actual value is ''n'' then it is said to be a ''false positive'' (FP). Conversely, a ''true negative'' (TN) has occurred when both the prediction outcome and the actual value are ''n'', and a ''false negative'' (FN) is when the prediction outcome is ''n'' while the actual value is ''p''.
To get an appropriate example in a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive, but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
Consider an experiment with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 ''contingency table'' or ''confusion matrix'', as follows:

                         Actual positive (p)    Actual negative (n)
  Predicted positive     true positive (TP)     false positive (FP)
  Predicted negative     false negative (FN)    true negative (TN)
  Total                  P                      N
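To make the four outcomes concrete, they can be tallied directly from a list of actual labels and predicted labels. The following Python sketch is only illustrative (the helper name confusion_counts is not from any particular library):

    # Tally the four outcomes of a binary classifier.
    # Labels are 'p' (positive) or 'n' (negative), matching the notation above.
    def confusion_counts(actual, predicted):
        tp = sum(a == 'p' and pr == 'p' for a, pr in zip(actual, predicted))
        fp = sum(a == 'n' and pr == 'p' for a, pr in zip(actual, predicted))
        tn = sum(a == 'n' and pr == 'n' for a, pr in zip(actual, predicted))
        fn = sum(a == 'p' and pr == 'n' for a, pr in zip(actual, predicted))
        return tp, fp, tn, fn

    # Example with one instance of each outcome.
    actual    = ['p', 'n', 'n', 'p']
    predicted = ['p', 'p', 'n', 'n']
    print(confusion_counts(actual, predicted))  # (1, 1, 1, 1)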
ROC space
Several evaluation metrics can be derived from the contingency table. To draw a ROC curve, only the true positive rate (TPR) and false positive rate (FPR) are needed (as functions of some classifier parameter). The TPR defines how many correct positive results occur among all positive samples available during the test. The FPR, on the other hand, defines how many incorrect positive results occur among all negative samples available during the test.
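In terms of the counts of the confusion matrix above, these two rates can be written as
: \mathrm{TPR} = \frac{TP}{TP + FN} = \frac{TP}{P}
: \mathrm{FPR} = \frac{FP}{FP + TN} = \frac{FP}{N}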
A ROC space is defined by FPR and TPR as ''x'' and ''y'' axes, respectively, which depicts relative trade-offs between true positive (benefits) and false positive (costs). Since TPR is equivalent to sensitivity and FPR is equal to 1 − specificity, the ROC graph is sometimes called the sensitivity vs (1 − specificity) plot. Each prediction result or instance of a confusion matrix represents one point in the ROC space.
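For instance, the coordinates of that point follow directly from the four counts; a brief Python sketch (the counts below are made up for illustration):

    # Map a confusion matrix to its point (FPR, TPR) in ROC space.
    def roc_point(tp, fp, tn, fn):
        tpr = tp / (tp + fn)   # sensitivity
        fpr = fp / (fp + tn)   # 1 - specificity
        return fpr, tpr

    print(roc_point(tp=90, fp=10, tn=90, fn=10))  # (0.1, 0.9): near the upper left corner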
The best possible prediction method would yield a point in the upper left corner or coordinate (0,1) of the ROC space, representing 100% sensitivity (no false negatives) and 100%
specificity (no false positives). The (0,1) point is also called a ''perfect classification''. A random guess would give a point along a diagonal line (the so-called ''line of no-discrimination'') from the bottom left to the top right corners (regardless of the positive and negative
base rates). An intuitive example of random guessing is a decision by flipping coins. As the size of the sample increases, a random classifier's ROC point tends towards the diagonal line. In the case of a balanced coin, it will tend to the point (0.5, 0.5).
The diagonal divides the ROC space. Points above the diagonal represent good classification results (better than random); points below the line represent bad results (worse than random). Note that the output of a consistently bad predictor could simply be inverted to obtain a good predictor.
Consider four prediction results from 100 positive and 100 negative instances:
Plots of the four results above in the ROC space are given in the figure. The result of method A clearly shows the best predictive power among A, B, and C. The result of B lies on the random guess line (the diagonal line), and it can be seen in the table that the
accuracy of B is 50%. However, when C is mirrored across the center point (0.5,0.5), the resulting method C′ is even better than A. This mirrored method simply reverses the predictions of whatever method or test produced the C contingency table. Although the original C method has negative predictive power, simply reversing its decisions leads to a new predictive method C′ which has positive predictive power. When the C method predicts p or n, the C′ method would predict n or p, respectively. In this manner, the C′ test would perform the best. The closer a result from a contingency table is to the upper left corner, the better it predicts, but the distance from the random guess line in either direction is the best indicator of how much predictive power a method has. If the result is below the line (i.e. the method is worse than a random guess), all of the method's predictions must be reversed in order to utilize its power, thereby moving the result above the random guess line.
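This reflection can be stated in one line: inverting every prediction swaps TP with FN and FP with TN, so a point (FPR, TPR) moves to (1 − FPR, 1 − TPR). A minimal sketch, using made-up coordinates for a worse-than-random classifier:

    # A hypothetical classifier whose ROC point lies below the diagonal.
    fpr_c, tpr_c = 0.75, 0.25
    # Reversing all of its predictions reflects the point across (0.5, 0.5).
    fpr_inv, tpr_inv = 1 - fpr_c, 1 - tpr_c
    print(fpr_inv, tpr_inv)  # 0.25 0.75 -- now above the diagonal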
Curves in ROC space

In binary classification, the class prediction for each instance is often made based on a continuous random variable X, which is a "score" computed for the instance (e.g. the estimated probability in logistic regression). Given a threshold parameter T, the instance is classified as "positive" if X > T, and "negative" otherwise. X follows a probability density f_1(x) if the instance actually belongs to class "positive", and f_0(x) otherwise. Therefore, the true positive rate is given by
: \mathrm{TPR}(T) = \int_T^{\infty} f_1(x)\, dx
and the false positive rate is given by
: \mathrm{FPR}(T) = \int_T^{\infty} f_0(x)\, dx.
The ROC curve plots parametrically \mathrm{TPR}(T) versus \mathrm{FPR}(T) with T as the varying parameter.
For example, imagine that the blood protein levels in diseased people and healthy people are normally distributed with means of 2 g/dL and 1 g/dL respectively. A medical test might measure the level of a certain protein in a blood sample and classify any number above a certain threshold as indicating disease. The experimenter can adjust the threshold (green vertical line in the figure), which will in turn change the false positive rate. Increasing the threshold would result in fewer false positives (and more false negatives), corresponding to a leftward movement on the curve. The actual shape of the curve is determined by how much overlap the two distributions have.
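A minimal simulation of this example, assuming unit standard deviations for both groups (the sample sizes and threshold grid are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    healthy  = rng.normal(loc=1.0, scale=1.0, size=10_000)   # protein level, g/dL
    diseased = rng.normal(loc=2.0, scale=1.0, size=10_000)

    # Sweep the decision threshold; each value yields one (FPR, TPR) point.
    thresholds = np.linspace(-3.0, 6.0, 200)
    tpr = [(diseased > t).mean() for t in thresholds]   # sensitivity
    fpr = [(healthy  > t).mean() for t in thresholds]   # 1 - specificity

    # Raising the threshold lowers both rates, moving the operating point
    # toward (0, 0); the overlap of the two distributions fixes the curve's shape.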
Criticisms

Several studies criticize certain applications of the ROC curve and its area under the curve as measurements for assessing binary classifications when they do not capture the information relevant to the application.
The main criticism of the ROC curve described in these studies concerns the incorporation of areas with low sensitivity and low specificity (both lower than 0.5) into the calculation of the total area under the curve (AUC), as described in the plot on the right.
According to the authors of these studies, that portion of the area under the curve (with low sensitivity and low specificity) corresponds to confusion matrices where the binary predictions perform poorly, and therefore should not be included in the assessment of overall performance. Moreover, that portion of the AUC corresponds to very high or very low decision thresholds, which are rarely of interest to scientists performing a binary classification in any field.
Another criticism of the ROC and its area under the curve is that they say nothing about precision and negative predictive value. A high ROC AUC, such as 0.9, might correspond to low values of precision and negative predictive value, such as 0.2 and 0.1, in the [0, 1] range.
If one performed a binary classification, obtained an ROC AUC of 0.9, and decided to focus only on this metric, they might overoptimistically believe their binary test was excellent. However, looking at the values of precision and negative predictive value might reveal that those values are low. The ROC AUC summarizes sensitivity and specificity, but provides no information about precision or negative predictive value.
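The reason is that precision and negative predictive value depend on the class prevalence, which the ROC curve ignores. The following Python sketch uses arbitrary illustrative numbers to show the effect:

    # Positive predictive value (precision) from sensitivity, specificity
    # and prevalence, via Bayes' rule.
    def precision(sensitivity, specificity, prevalence):
        tp = sensitivity * prevalence
        fp = (1 - specificity) * (1 - prevalence)
        return tp / (tp + fp)

    # A test with 90% sensitivity and 90% specificity (a seemingly strong ROC
    # operating point) still has low precision when only 1% of cases are positive.
    print(round(precision(0.9, 0.9, 0.01), 3))  # 0.083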
Further interpretations
Sometimes, the ROC is used to generate a summary statistic. Common versions are:
* the intercept of the ROC curve with the line at 45 degrees orthogonal to the no-discrimination line – the balance point where sensitivity = specificity
* the intercept of the ROC curve with the tangent at 45 degrees parallel to the no-discrimination line that is closest to the error-free point (0,1) – also called Youden's J statistic and generalized as Informedness
* the area between the ROC curve and the no-discrimination line, multiplied by two and minus one, is called the ''Gini coefficient'', especially in the context of credit scoring. It should not be confused with the measure of statistical dispersion that is also called the Gini coefficient.
* the area between the full ROC curve and the triangular ROC curve including only (0,0), (1,1) and one selected operating point
– Consistency
* the area under the ROC curve, or "AUC" ("area under curve"), or A' (pronounced "a-prime"), or "c-statistic" ("concordance statistic").
* the
sensitivity index ''d′'' (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their
standard deviation, under the assumption that both these distributions are normal with the same standard deviation. Under these assumptions, the shape of the ROC is entirely determined by ''d′''.
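For reference, several of the summary statistics listed above have simple closed forms:
: J = \text{sensitivity} + \text{specificity} - 1 \quad \text{(Youden's J, or informedness)}
: \text{Gini} = 2 \cdot \mathrm{AUC} - 1
: d' = \frac{\mu_{\text{signal}} - \mu_{\text{noise}}}{\sigma} \quad \text{(assuming equal-variance normal distributions)}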
However, any attempt to summarize the ROC curve into a single number loses information about the pattern of tradeoffs of the particular discriminator algorithm.
Probabilistic interpretation
The area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative').
Fawcett, Tom (2006). "An introduction to ROC analysis". ''Pattern Recognition Letters'', 27, 861–874. In other words, when given one randomly selected positive instance and one randomly selected negative instance, AUC is the probability that the classifier will be able to tell which one is which.
This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed as a larger threshold T corresponds to a lower value on the ''x''-axis)
: A = \int_{x=0}^{1} \mathrm{TPR}\left(\mathrm{FPR}^{-1}(x)\right) dx = \int_{\infty}^{-\infty} \mathrm{TPR}(T)\, \mathrm{FPR}'(T)\, dT = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} I(T' > T)\, f_1(T')\, f_0(T)\, dT'\, dT = P(X_1 > X_0)
where X_1 is the score for a positive instance and X_0 is the score for a negative instance, and f_1 and f_0 are the probability densities defined in the previous section.
If X_1 and X_0 follow two Gaussian distributions with means \mu_1, \mu_0 and standard deviations \sigma_1, \sigma_0, then
: A = \Phi\left(\frac{\mu_1 - \mu_0}{\sqrt{\sigma_1^2 + \sigma_0^2}}\right)
where \Phi is the standard normal cumulative distribution function.
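A quick numerical check of this identity, assuming two unit-variance Gaussian score distributions (all parameters are illustrative):

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(1)
    x0 = rng.normal(0.0, 1.0, 100_000)   # scores of negative instances
    x1 = rng.normal(1.0, 1.0, 100_000)   # scores of positive instances

    # Estimate P(X1 > X0) by comparing independently drawn pairs of scores.
    auc_mc = (x1 > x0).mean()

    # Closed form: Phi((mu1 - mu0) / sqrt(sigma1^2 + sigma0^2)).
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    auc_formula = phi((1.0 - 0.0) / sqrt(1.0**2 + 1.0**2))

    print(round(auc_mc, 3), round(auc_formula, 3))   # both close to 0.760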
Area under the curve
It can be shown that the AUC is closely related to the
Mann–Whitney U,
which tests whether positives are ranked higher than negatives. For a predictor f, an unbiased estimator of its AUC can be expressed by the following ''Wilcoxon-Mann-Whitney'' statistic:
: \mathrm{AUC}(f) = \frac{\sum_{t_0 \in \mathcal{D}^0} \sum_{t_1 \in \mathcal{D}^1} \mathbf{1}[f(t_0) < f(t_1)]}{|\mathcal{D}^0| \cdot |\mathcal{D}^1|}
where \mathbf{1}[f(t_0) < f(t_1)] denotes an indicator function that returns 1 if f(t_0) < f(t_1) and 0 otherwise, \mathcal{D}^0 is the set of negative examples, and \mathcal{D}^1 is the set of positive examples.
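A direct transcription of this estimator in Python (the variable names scores_neg and scores_pos are placeholders for the scores that f assigns to the negative and positive examples):

    # Wilcoxon-Mann-Whitney estimate of the AUC: the fraction of
    # (negative, positive) pairs that the scores rank in the correct order.
    def auc_wmw(scores_neg, scores_pos):
        correct = sum(s0 < s1 for s0 in scores_neg for s1 in scores_pos)
        return correct / (len(scores_neg) * len(scores_pos))

    print(auc_wmw([0.1, 0.35, 0.4], [0.6, 0.8, 0.9]))  # 1.0 -- perfectly separated scores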