Sensitivity and specificity
''Sensitivity'' and ''specificity'' mathematically describe the accuracy of a test that reports the presence or absence of a condition. Individuals for whom the condition is satisfied are considered "positive", and those for whom it is not are considered "negative".

* Sensitivity (true positive rate) is the probability of a positive test result, conditioned on the individual truly being positive.
* Specificity (true negative rate) is the probability of a negative test result, conditioned on the individual truly being negative.

If the true condition cannot be known, a "gold standard" test is assumed to be correct. In a diagnostic test, sensitivity is a measure of how well a test can identify true positives, and specificity is a measure of how well it can identify true negatives. For all testing, both diagnostic and screening, there is usually a trade-off between sensitivity and specificity, such that higher sensitivities generally mean lower specificities and vice versa.

If the goal is to identify as many people who have the condition as possible, the number of true positives should be high and the number of false negatives very low, which results in high sensitivity. This is especially important when the consequence of failing to treat the condition is serious and/or the treatment is very effective and has minimal side effects.

If the goal is to confidently exclude people who do not have the condition, the number of true negatives should be high and the number of false positives very low, which results in high specificity. This is especially important when people identified as having the condition may be subjected to further testing, expense, stigma, anxiety, and so on.

The terms "sensitivity" and "specificity" were introduced by American biostatistician Jacob Yerushalmy in 1947.


Application to screening study

Imagine a study evaluating a test that screens people for a disease. Each person taking the test either has or does not have the disease. The test outcome can be positive (classifying the person as having the disease) or negative (classifying the person as not having the disease). The test results for each subject may or may not match the subject's actual status. In that setting:

* True positive: sick people correctly identified as sick
* False positive: healthy people incorrectly identified as sick
* True negative: healthy people correctly identified as healthy
* False negative: sick people incorrectly identified as healthy

After counting the true positives, false positives, true negatives, and false negatives, the sensitivity and specificity of the test can be calculated. If the sensitivity is high, then any person who has the disease is likely to be classified as positive by the test. On the other hand, if the specificity is high, any person who does not have the disease is likely to be classified as negative by the test. An NIH web site has a discussion of how these ratios are calculated.
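As a concrete illustration of these counts, the minimal sketch below computes sensitivity and specificity from the four outcome totals; the numbers are invented for illustration and do not come from any cited study.

 def sensitivity_specificity(tp, fp, tn, fn):
     """Sensitivity (true positive rate) and specificity (true negative rate)
     computed from the four outcome counts of a screening study."""
     sensitivity = tp / (tp + fn)  # proportion of sick people correctly identified
     specificity = tn / (tn + fp)  # proportion of healthy people correctly identified
     return sensitivity, specificity

 # Hypothetical screening study: 100 sick and 900 healthy subjects.
 sens, spec = sensitivity_specificity(tp=90, fp=45, tn=855, fn=10)
 print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 90.0%, 95.0%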


Definition


Sensitivity

Consider the example of a medical test for diagnosing a condition. Sensitivity refers to the test's ability to correctly detect ill patients who do have the condition. The sensitivity of the test (sometimes also called the detection rate in a clinical setting) is the proportion of people who test positive for the disease among those who have the disease. Mathematically, this can be expressed as:

:\begin{align}
\text{sensitivity} &= \frac{\text{number of true positives}}{\text{number of true positives} + \text{number of false negatives}} \\[6pt]
&= \frac{\text{number of true positives}}{\text{total number of sick individuals in the population}} \\[6pt]
&= \text{probability of a positive test result, given that the patient has the disease}
\end{align}

A negative result in a test with high sensitivity is useful for ruling out disease, since such a test rarely misdiagnoses those who have the disease. A test with 100% sensitivity will recognize all patients with the disease by testing positive, so a negative test result would definitively ''rule out'' the presence of the disease in a patient.

However, a positive result in a test with high sensitivity is not necessarily useful for ruling in disease. Suppose a 'bogus' test kit is designed to always give a positive reading. When used on diseased patients, all patients test positive, giving the test 100% sensitivity. However, sensitivity does not take into account false positives: the bogus test also returns positive on all healthy patients, giving it a false positive rate of 100% and rendering it useless for detecting or "ruling in" the disease.

The calculation of sensitivity does not take indeterminate test results into account. If a test cannot be repeated, indeterminate samples should either be excluded from the analysis (the number of exclusions should be stated when quoting sensitivity) or be treated as false negatives (which gives the worst-case value for sensitivity and may therefore underestimate it). A test with a higher sensitivity has a lower type II error rate.
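The worst-case handling of indeterminate results described above can be sketched as follows; the counts are hypothetical.

 def sensitivity_bounds(tp, fn, indeterminate):
     """Sensitivity when indeterminate results among truly positive subjects
     are either excluded or counted as false negatives (worst case)."""
     excluded = tp / (tp + fn)                    # indeterminate samples dropped
     worst_case = tp / (tp + fn + indeterminate)  # indeterminates treated as false negatives
     return excluded, worst_case

 print(sensitivity_bounds(tp=47, fn=3, indeterminate=5))  # (0.94, ~0.855)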


Specificity

Consider the example of a medical test for diagnosing a disease. Specificity relates to the test's ability to correctly reject healthy patients without the condition. The specificity of a test is the proportion of those who truly do not have the condition who test negative for the condition. Mathematically, this can be written as:

:\begin{align}
\text{specificity} &= \frac{\text{number of true negatives}}{\text{number of true negatives} + \text{number of false positives}} \\[6pt]
&= \frac{\text{number of true negatives}}{\text{total number of healthy individuals in the population}} \\[6pt]
&= \text{probability of a negative test result, given that the patient does not have the disease}
\end{align}

A positive result in a test with high specificity is useful for ruling in disease, because the test rarely gives positive results in healthy patients; a positive result therefore signifies a high probability of the presence of disease. A test with 100% specificity will recognize all patients without the disease by testing negative, so a positive test result would definitively rule in the presence of the disease.

However, a negative result from a test with high specificity is not necessarily useful for ruling out disease. For example, a test that always returns a negative result will have a specificity of 100%, because specificity does not consider false negatives. Such a test would also return negative for patients who do have the disease, making it useless for ruling out the disease. A test with a higher specificity has a lower type I error rate.
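The always-negative 'bogus' test described above makes the point numerically; a minimal sketch with invented counts:

 # An always-negative test applied to a hypothetical group of 40 diseased and 60 healthy people.
 tp, fn = 0, 40   # every diseased person is (wrongly) reported negative
 tn, fp = 60, 0   # every healthy person is (correctly) reported negative

 specificity = tn / (tn + fp)   # 1.0 -> 100% specificity
 sensitivity = tp / (tp + fn)   # 0.0 -> useless for ruling out disease
 print(specificity, sensitivity)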


Graphical illustration

[Figures: "High sensitivity and low specificity", "Low sensitivity and high specificity", and "A graphical illustration of sensitivity and specificity".]

The above graphical illustration is meant to show the relationship between sensitivity and specificity. The black dotted line in the center of the graph is where the sensitivity and specificity are equal. Moving to the left of the black dotted line, the sensitivity increases, reaching its maximum value of 100% at line A, while the specificity decreases. The sensitivity at line A is 100% because at that point there are zero false negatives, meaning that all the negative test results are true negatives. Moving to the right, the opposite applies: the specificity increases until it reaches line B and becomes 100%, while the sensitivity decreases. The specificity at line B is 100% because the number of false positives is zero at that line, meaning all the positive test results are true positives.

The middle solid line in both figures showing the levels of sensitivity and specificity is the test cutoff point. Moving this line results in the trade-off between sensitivity and specificity described previously. The left-hand side of this line contains the data points that have the condition (blue dots indicate false negatives). The right-hand side of the line shows the data points that do not have the condition (red dots indicate false positives). The total number of data points is 80; 40 of them have the medical condition and are on the left side, and the remaining 40 are on the right side and do not have the medical condition.

For the figure showing high sensitivity and low specificity, the number of false negatives is 3 out of 40 data points with the medical condition, so the sensitivity is 37/40 ≈ 92.5%. The number of false positives is 9, so the specificity is 31/40 ≈ 77.5%. Similarly, in the other figure the number of false negatives is 8 out of 40 data points with the medical condition, so the sensitivity is 32/40 = 80%. The number of false positives is 3, so the specificity is 37/40 ≈ 92.5%.

[Figures: "A test result with 100 percent sensitivity" and "A test result with 100 percent specificity".]

The red dots indicate patients with the medical condition, and the red background indicates the area where the test predicts a data point to be positive. In the 100%-sensitivity figure there are 6 true positives and 0 false negatives (every positive condition is correctly predicted as positive), so the sensitivity is 100% (from 6 / (6 + 0)). This situation is also illustrated in the previous figure where the dotted line is at position A (the left-hand side is predicted as negative by the model, the right-hand side is predicted as positive). When the test cut-off line is at position A, the test correctly identifies the entire true positive class, but it fails to correctly identify data points from the true negative class.

In the 100%-specificity figure, the red dot again indicates the patient with the medical condition, but the green background indicates that the test predicts all patients to be free of the medical condition. There are then 26 true negatives and 0 false positives, which results in 100% specificity (from 26 / (26 + 0)).
Therefore, sensitivity or specificity alone cannot be used to measure the performance of the test.
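The trade-off illustrated by moving the cut-off line can also be reproduced numerically. The sketch below sweeps a decision threshold over two synthetic score distributions (an assumption for illustration only, not the data behind the figures) and reports sensitivity and specificity at each cut-off.

 import numpy as np

 rng = np.random.default_rng(0)
 diseased = rng.normal(2.0, 1.0, 40)   # synthetic test scores for 40 diseased subjects
 healthy = rng.normal(0.0, 1.0, 40)    # synthetic test scores for 40 healthy subjects

 for cutoff in (-1.0, 0.0, 1.0, 2.0, 3.0):
     sensitivity = np.mean(diseased >= cutoff)  # fraction of diseased scoring above the cut-off
     specificity = np.mean(healthy < cutoff)    # fraction of healthy scoring below the cut-off
     print(f"cutoff {cutoff:+.1f}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")

Lowering the cut-off raises sensitivity at the expense of specificity, and raising it does the reverse, mirroring positions A and B in the figures.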


Medical usage

In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate). If 100 patients known to have a disease were tested, and 43 test positive, then the test has 43% sensitivity. If 100 with no disease are tested and 96 return a completely negative result, then the test has 96% specificity. Sensitivity and specificity are prevalence-independent test characteristics, as their values are intrinsic to the test and do not depend on the disease prevalence in the population of interest. Positive and negative predictive values, but not sensitivity or specificity, are influenced by the prevalence of disease in the population being tested. These concepts are illustrated graphically in a Bayesian clinical diagnostic model applet, which shows the positive and negative predictive values as a function of the prevalence, sensitivity and specificity.
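Because predictive values, unlike sensitivity and specificity, depend on prevalence, the sketch below applies Bayes' theorem to show how the positive and negative predictive values change with prevalence for a fixed test. The 90% sensitivity and 95% specificity figures are assumptions chosen for illustration.

 def predictive_values(sensitivity, specificity, prevalence):
     """Positive and negative predictive values via Bayes' theorem."""
     ppv = (sensitivity * prevalence) / (
         sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
     npv = (specificity * (1 - prevalence)) / (
         specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
     return ppv, npv

 for prev in (0.01, 0.1, 0.5):
     ppv, npv = predictive_values(sensitivity=0.90, specificity=0.95, prevalence=prev)
     print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")

At 1% prevalence the positive predictive value falls to roughly 15% even though the test itself is unchanged, which is why sensitivity and specificity alone do not describe how useful a positive result is in a given population.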


Misconceptions

It is often claimed that a highly specific test is effective at ruling in a disease when positive, while a highly sensitive test is deemed effective at ruling out a disease when negative. This has led to the widely used mnemonics SPPIN and SNNOUT, according to which a highly specific test, when positive, rules in disease (SP-P-IN), and a highly sensitive test, when negative, rules out disease (SN-N-OUT). Both rules of thumb are, however, inferentially misleading, as the diagnostic power of any test is determined by both its sensitivity ''and'' its specificity.

The trade-off between specificity and sensitivity is explored in ROC (receiver operating characteristic) analysis as a trade-off between the true positive rate (TPR, i.e. recall) and the false positive rate (FPR, i.e. fallout). Giving them equal weight optimizes informedness (Youden's J statistic) = specificity + sensitivity − 1 = TPR − FPR, the magnitude of which gives the probability of an informed decision between the two classes (> 0 represents appropriate use of information, 0 represents chance-level performance, < 0 represents perverse use of information).
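To make the informedness criterion concrete, the following sketch picks the cut-off that maximizes Youden's J over a small set of (cut-off, sensitivity, specificity) operating points; the values are invented for illustration.

 # Hypothetical operating points of a test: (cutoff, sensitivity, specificity).
 operating_points = [
     (1.0, 0.98, 0.60),
     (2.0, 0.90, 0.80),
     (3.0, 0.75, 0.93),
     (4.0, 0.55, 0.99),
 ]

 # Informedness (Youden's J) = sensitivity + specificity - 1 = TPR - FPR.
 cutoff, sens, spec = max(operating_points, key=lambda p: p[1] + p[2] - 1)
 print(f"best cutoff {cutoff}: J = {sens + spec - 1:.2f}")  # cutoff 2.0, J = 0.70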


Sensitivity index

The sensitivity index or ''d′'' (pronounced "dee-prime") is a statistic used in signal detection theory. It provides the separation between the means of the signal and the noise distributions, relative to the spread (standard deviations) of those distributions. For normally distributed signal and noise with means and standard deviations \mu_S and \sigma_S, and \mu_N and \sigma_N, respectively, ''d′'' is defined as:

: d^\prime = \frac{\mu_S - \mu_N}{\sqrt{\frac{1}{2}\left(\sigma_S^2 + \sigma_N^2\right)}}

An estimate of ''d′'' can also be found from measurements of the hit rate and false-alarm rate. It is calculated as:

: ''d′'' = ''Z''(hit rate) − ''Z''(false alarm rate),

where the function ''Z''(''p''), ''p'' ∈ [0, 1], is the inverse of the cumulative Gaussian distribution. ''d′'' is a dimensionless statistic; a higher ''d′'' indicates that the signal can be more readily detected.
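A minimal sketch of the hit-rate/false-alarm-rate estimate, using SciPy's inverse normal CDF (norm.ppf) as the ''Z'' function; the rates are made up for illustration.

 from scipy.stats import norm

 def d_prime(hit_rate, false_alarm_rate):
     """Estimate d' = Z(hit rate) - Z(false alarm rate),
     where Z is the inverse of the cumulative Gaussian distribution."""
     return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

 print(d_prime(hit_rate=0.9, false_alarm_rate=0.2))  # ≈ 2.12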


Confusion matrix

The relationship between sensitivity, specificity, and similar terms can be understood using the following table. Consider a group with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 ''contingency table'' or ''confusion matrix'', from which sensitivity, specificity, and several related metrics can be derived, as follows:

                         Condition positive (P)   Condition negative (N)
  Test outcome positive  true positive (TP)       false positive (FP)
  Test outcome negative  false negative (FN)      true negative (TN)

Sensitivity = TP / P = TP / (TP + FN), and specificity = TN / N = TN / (TN + FP).
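As a sketch of how the table's cells translate into derived metrics, the function below computes sensitivity, specificity, the predictive values, and accuracy from the four counts; the example counts are arbitrary.

 def confusion_metrics(tp, fp, fn, tn):
     """Derive common metrics from the cells of a 2x2 confusion matrix."""
     return {
         "sensitivity (TPR)": tp / (tp + fn),
         "specificity (TNR)": tn / (tn + fp),
         "positive predictive value": tp / (tp + fp),
         "negative predictive value": tn / (tn + fn),
         "accuracy": (tp + tn) / (tp + fp + fn + tn),
     }

 print(confusion_metrics(tp=20, fp=180, fn=10, tn=1820))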


Estimation of errors in quoted sensitivity or specificity

Sensitivity and specificity values alone may be highly misleading. The 'worst-case' sensitivity or specificity must be calculated in order to avoid reliance on experiments with few results. For example, a particular test may easily show 100% sensitivity if tested against the gold standard four times, but a single additional test against the gold standard that gave a poor result would imply a sensitivity of only 80%. A common way to quantify this uncertainty is to state the binomial proportion confidence interval, often calculated using a Wilson score interval. Confidence intervals for sensitivity and specificity can be calculated, giving the range of values within which the correct value lies at a given confidence level (e.g., 95%).
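A minimal sketch of the Wilson score interval for an estimated sensitivity, implemented directly from the standard formula (an illustrative implementation, not code from any particular package):

 from math import sqrt

 def wilson_interval(successes, n, z=1.96):
     """95% Wilson score interval for a binomial proportion,
     e.g. sensitivity = true positives / all diseased subjects tested."""
     p = successes / n
     denom = 1 + z**2 / n
     centre = (p + z**2 / (2 * n)) / denom
     half_width = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
     return centre - half_width, centre + half_width

 # Sensitivity estimated as 4/5 = 80% from only five diseased subjects:
 print(wilson_interval(successes=4, n=5))  # roughly (0.38, 0.96)

The wide interval makes explicit how little such a small experiment actually establishes about the test's true sensitivity.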


Terminology in information retrieval

In information retrieval, the positive predictive value is called precision, and sensitivity is called recall. Unlike the specificity vs. sensitivity trade-off, these measures are both independent of the number of true negatives, which is generally unknown and much larger than the actual numbers of relevant and retrieved documents. This assumption of very large numbers of true negatives versus positives is rare in other applications.

The F-score can be used as a single measure of performance of the test for the positive class. The F-score is the harmonic mean of precision and recall:

:F = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}

In the traditional language of statistical hypothesis testing, the sensitivity of a test is called the statistical power of the test, although the word ''power'' in that context has a more general usage that is not applicable in the present context. A sensitive test will have fewer type II errors.
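A short sketch computing precision, recall, and the F-score from retrieval counts; the counts are hypothetical.

 def precision_recall_f(tp, fp, fn):
     """Precision, recall, and their harmonic mean (the F-score)."""
     precision = tp / (tp + fp)   # fraction of retrieved documents that are relevant
     recall = tp / (tp + fn)      # fraction of relevant documents that were retrieved
     f_score = 2 * precision * recall / (precision + recall)
     return precision, recall, f_score

 print(precision_recall_f(tp=30, fp=10, fn=20))  # (0.75, 0.6, ~0.667)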




External links


UIC Calculator



MedCalc Free Online Calculator

Bayesian clinical diagnostic model applet