D-prime

The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory. A higher index indicates that the signal can be more readily detected.


Definition

The discriminability index is the separation between the means of two distributions (typically the signal and the noise distributions), in units of the standard deviation.


Equal variances/covariances

For two univariate distributions a and b with the same standard deviation, it is denoted by d' ('dee-prime'):

: d' = \frac{\left\vert \mu_a - \mu_b \right\vert}{\sigma}.

In higher dimensions, i.e. with two multivariate distributions with the same variance-covariance matrix \mathbf{\Sigma} (whose symmetric square-root, the standard deviation matrix, is \mathbf{S}), this generalizes to the Mahalanobis distance between the two distributions:

: d' = \sqrt{(\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)' \mathbf{\Sigma}^{-1} (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)} = \lVert \mathbf{S}^{-1}(\boldsymbol{\mu}_a - \boldsymbol{\mu}_b) \rVert = \lVert \boldsymbol{\mu}_a - \boldsymbol{\mu}_b \rVert / \sigma_{\boldsymbol{\mu}},

where \sigma_{\boldsymbol{\mu}} = 1 / \lVert \mathbf{S}^{-1} \boldsymbol{\mu} \rVert is the 1d slice of the sd along the unit vector \boldsymbol{\mu} through the means, i.e. the d' equals the d' along the 1d slice through the means. This is also estimated as d' = Z(\text{hit rate}) - Z(\text{false alarm rate}).
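
As a minimal illustrative sketch (assuming NumPy and SciPy; the function names and example values are not from the article), the code below computes d' from the hit and false-alarm rates of a yes/no experiment, and the multivariate d' as a Mahalanobis distance for a shared covariance matrix:

```python
# Minimal sketch: d' from yes/no hit and false-alarm rates, and the
# multivariate (Mahalanobis) d' for two normals with a shared covariance.
import numpy as np
from scipy.stats import norm

def dprime_yes_no(hit_rate, false_alarm_rate):
    """d' = Z(hit rate) - Z(false alarm rate), with Z the inverse normal CDF."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

def dprime_mahalanobis(mu_a, mu_b, cov):
    """d' between two multivariate normals sharing the covariance matrix `cov`."""
    diff = np.asarray(mu_a, dtype=float) - np.asarray(mu_b, dtype=float)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

print(dprime_yes_no(0.8, 0.2))                                # ≈ 1.68
print(dprime_mahalanobis([1.0, 1.0], [0.0, 0.0], np.eye(2)))  # ≈ 1.41
```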


Unequal variances/covariances

When the two distributions have different standard deviations (or in general dimensions, different covariance matrices), there exist several contending indices, all of which reduce to d' for equal variance/covariance.


Bayes discriminability index

This is the maximum (Bayes-optimal) discriminability index for two distributions, based on the amount of their overlap, i.e. the optimal (Bayes) error of classification e_b by an ideal observer, or its complement, the optimal accuracy a_b:

: d'_b = -2Z\left(\text{Bayes error } e_b\right) = 2Z\left(\text{best accuracy } a_b\right),

where Z is the inverse cumulative distribution function of the standard normal. The Bayes discriminability between univariate or multivariate normal distributions can be numerically computed (Matlab code), and may also be used as an approximation when the distributions are close to normal.

d'_b is a positive-definite statistical distance measure that is free of assumptions about the distributions, like the Kullback-Leibler divergence D_\text{KL}. D_\text{KL}(a,b) is asymmetric, whereas d'_b(a,b) is symmetric for the two distributions. However, d'_b does not satisfy the triangle inequality, so it is not a full metric.

In particular, for a yes/no task between two univariate normal distributions with means \mu_a, \mu_b and variances v_a > v_b, the Bayes-optimal classification accuracies are:

: p(A|a) = p\left({\chi'}^2_{1,\, v_a \lambda} > v_b c\right), \;\; p(B|b) = p\left({\chi'}^2_{1,\, v_b \lambda} < v_a c\right),

where \chi'^2 denotes the non-central chi-squared distribution, \lambda = \left(\frac{\mu_a - \mu_b}{v_a - v_b}\right)^2, and c = \lambda + \frac{\ln v_a - \ln v_b}{v_a - v_b}. The Bayes discriminability d'_b = 2Z\left(\frac{p(A|a) + p(B|b)}{2}\right).

d'_b can also be computed from the ROC curve of a yes/no task between two univariate normal distributions with a single shifting criterion. It can also be computed from the ROC curve of any two distributions (in any number of variables) with a shifting likelihood-ratio, by locating the point on the ROC curve that is farthest from the diagonal.

For a two-interval task between these distributions, the optimal accuracy is a_b = p\left(\tilde{\chi}^2_{\mathbf{w}, \mathbf{k}, \boldsymbol{\lambda}, 0, 0} > 0\right) (\tilde{\chi}^2 denotes the generalized chi-squared distribution), where

: \mathbf{w} = \begin{bmatrix} \sigma_s^2 & -\sigma_n^2 \end{bmatrix}, \;\; \mathbf{k} = \begin{bmatrix} 1 & 1 \end{bmatrix}, \;\; \boldsymbol{\lambda} = \left(\frac{\mu_s - \mu_n}{\sigma_s^2 - \sigma_n^2}\right)^2 \begin{bmatrix} \sigma_s^2 & \sigma_n^2 \end{bmatrix}.

The Bayes discriminability d'_b = 2Z\left(a_b\right).
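
As a hedged numerical sketch of the univariate unequal-variance case above (assuming SciPy's ncx2 and norm; the function name and test values are illustrative, and v_a > v_b is required):

```python
# Sketch: Bayes discriminability d'_b between N(mu_a, v_a) and N(mu_b, v_b),
# using the noncentral chi-squared expressions above (requires v_a > v_b).
import numpy as np
from scipy.stats import ncx2, norm

def bayes_dprime_1d(mu_a, v_a, mu_b, v_b):
    lam = ((mu_a - mu_b) / (v_a - v_b)) ** 2
    c = lam + (np.log(v_a) - np.log(v_b)) / (v_a - v_b)
    p_a = ncx2.sf(v_b * c, df=1, nc=v_a * lam)   # p(A|a)
    p_b = ncx2.cdf(v_a * c, df=1, nc=v_b * lam)  # p(B|b)
    return 2 * norm.ppf((p_a + p_b) / 2)

# Near-equal variances should approach the ordinary d' = |mu_a - mu_b| / sigma.
print(bayes_dprime_1d(1.0, 1.01, 0.0, 1.0))  # ≈ 1.0
print(bayes_dprime_1d(1.0, 4.0, 0.0, 1.0))   # ≈ 1.02 (unequal variances)
```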


RMS sd discriminability index

A common approximate (i.e. sub-optimal) discriminability index that has a closed form is to take the average of the variances, i.e. the rms of the two standard deviations: d'_a = \left\vert \mu_a - \mu_b \right\vert / \sigma_\text{rms}, where \sigma_\text{rms} = \sqrt{\left(\sigma_a^2 + \sigma_b^2\right)/2} (this index is also denoted by d_a). It is \sqrt{2} times the z-score of the area under the receiver operating characteristic curve (AUC) of a single-criterion observer. This index is extended to general dimensions as the Mahalanobis distance using the pooled covariance, i.e. with \mathbf{S}_\text{rms} = \left[\left(\mathbf{\Sigma}_a + \mathbf{\Sigma}_b\right)/2\right]^\frac{1}{2} as the common sd matrix.
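
A small sketch of this index and its relation to the AUC for two univariate normals (illustrative names and values; assumes SciPy):

```python
# Sketch: the rms-sd index d'_a and its relation to the single-criterion AUC
# for two univariate normals.
import numpy as np
from scipy.stats import norm

def dprime_rms(mu_a, sigma_a, mu_b, sigma_b):
    sigma_rms = np.sqrt((sigma_a**2 + sigma_b**2) / 2)
    return abs(mu_a - mu_b) / sigma_rms

mu_a, sigma_a, mu_b, sigma_b = 2.0, 2.0, 0.0, 1.0
d_a = dprime_rms(mu_a, sigma_a, mu_b, sigma_b)

# For normals, AUC = Phi(|mu_a - mu_b| / sqrt(sigma_a^2 + sigma_b^2)),
# so d'_a = sqrt(2) * Z(AUC).
auc = norm.cdf(abs(mu_a - mu_b) / np.sqrt(sigma_a**2 + sigma_b**2))
print(d_a, np.sqrt(2) * norm.ppf(auc))  # both ≈ 1.26
```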


Average sd discriminability index

Another index is d'_e = \left\vert \mu_a - \mu_b \right\vert / \sigma_\text{avg}, where \sigma_\text{avg} = \left(\sigma_a + \sigma_b\right)/2, extended to general dimensions using \mathbf{S}_\text{avg} = \left(\mathbf{S}_a + \mathbf{S}_b\right)/2 as the common sd matrix.


Comparison of the indices

It has been shown that for two univariate normal distributions, d'_a \leq d'_e \leq d'_b, and for multivariate normal distributions, d'_a \leq d'_e still holds. Thus, d'_a and d'_e underestimate the maximum discriminability d'_b of univariate normal distributions. d'_a can underestimate d'_b by a maximum of approximately 30%. At the limit of high discriminability for univariate normal distributions, d'_e converges to d'_b. These results often hold true in higher dimensions, but not always. Simpson and Fitter promoted d'_a as the best index, particularly for two-interval tasks, but Das and Geisler have shown that d'_b is the optimal discriminability in all cases, and d'_e is often a better closed-form approximation than d'_a, even for two-interval tasks. The approximate index d'_\text{gm}, which uses the geometric mean of the sd's, is less than d'_b at small discriminability, but greater at large discriminability.
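
To make the ordering concrete, here is a self-contained sketch (illustrative parameter values, SciPy assumed) that evaluates d'_a, d'_e and d'_b for one unequal-variance pair of univariate normals:

```python
# Sketch: compare d'_a, d'_e and the Bayes-optimal d'_b for one example pair
# of univariate normals with unequal variances (v_a > v_b).
import numpy as np
from scipy.stats import ncx2, norm

mu_a, v_a, mu_b, v_b = 2.0, 4.0, 0.0, 1.0     # means and variances
sd_a, sd_b = np.sqrt(v_a), np.sqrt(v_b)

d_a = abs(mu_a - mu_b) / np.sqrt((v_a + v_b) / 2)   # rms-sd index
d_e = abs(mu_a - mu_b) / ((sd_a + sd_b) / 2)        # average-sd index

# Bayes-optimal index via the noncentral chi-squared formulas above.
lam = ((mu_a - mu_b) / (v_a - v_b)) ** 2
c = lam + (np.log(v_a) - np.log(v_b)) / (v_a - v_b)
acc = (ncx2.sf(v_b * c, df=1, nc=v_a * lam)
       + ncx2.cdf(v_a * c, df=1, nc=v_b * lam)) / 2
d_b = 2 * norm.ppf(acc)

print(d_a, d_e, d_b)  # ≈ 1.26, 1.33, 1.50: consistent with d'_a <= d'_e <= d'_b
```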


See also

* Receiver operating characteristic (ROC)
* Summary statistics
* Effect size



External links


Interactive signal detection theory tutorial
including calculation of ''d''′.