Receiver operating characteristic

A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier model at varying threshold values (it can also be extended to multi-class classification).

The ROC curve is the plot of the true positive rate (TPR) against the false positive rate (FPR) at each threshold setting.


The ROC can also be thought of as a plot of the statistical power as a function of the Type I error of the decision rule (when the performance is calculated from just a sample of the population, the plotted quantities can be regarded as estimators of these rates). The ROC curve is thus the sensitivity or recall as a function of the false positive rate.
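To make this construction concrete, here is a minimal sketch (not from the original article) of computing an empirical ROC curve by sweeping the decision threshold down through the observed scores; the function name and the 0/1 label convention are illustrative, and tied scores are not specially grouped.

```python
def roc_points(scores, labels):
    """Empirical ROC curve: lower the threshold past each example in turn,
    recording (FPR, TPR) after every step.
    labels: 1 for actual positives, 0 for actual negatives.
    Tied scores are not grouped, so heavily tied data gives an approximation."""
    ranked = sorted(zip(scores, labels), reverse=True)  # highest score first
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    points = [(0.0, 0.0)]  # threshold above all scores: nothing predicted positive
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points
```

A perfect ranking (all positives scored above all negatives) passes through the error-free corner (0, 1); the curve always starts at (0, 0) and ends at (1, 1).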


If the probability distributions for both detection and false alarm are known, the ROC curve is obtained as the cumulative distribution function (CDF, the area under the probability distribution from −∞ to the discrimination threshold) of the detection probability on the y-axis versus the CDF of the false-alarm probability on the x-axis.
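For the common equal-variance binormal model this construction can be sketched as follows, using Python's standard-library NormalDist; the function name, parameters, and defaults are illustrative assumptions, not from the article. At each threshold t the false positive rate is P(noise > t) and the true positive rate is P(signal > t).

```python
from statistics import NormalDist

def binormal_roc(mu_signal=1.0, mu_noise=0.0, sigma=1.0, n=201):
    """ROC points for the equal-variance binormal model: sweep the
    discrimination threshold t and record (FPR, TPR) where
    FPR = P(noise > t) and TPR = P(signal > t)."""
    signal = NormalDist(mu_signal, sigma)
    noise = NormalDist(mu_noise, sigma)
    lo, hi = mu_noise - 4 * sigma, mu_signal + 4 * sigma
    ts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return [(1 - noise.cdf(t), 1 - signal.cdf(t)) for t in ts]
```

In this model the curve lies entirely on or above the chance diagonal whenever the signal mean exceeds the noise mean, and the area under it equals Φ(d′/√2), where d′ is the standardized separation of the two means.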


ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from (and prior to specifying) the cost context or the class distribution. ROC analysis is related in a direct and natural way to the cost/benefit analysis of diagnostic decision making.

Terminology

The true-positive rate is also known as sensitivity, recall or probability of detection.[1] The false-positive rate is also known as the probability of false alarm[1] and equals (1 − specificity). The ROC is also known as a relative operating characteristic curve, because it is a comparison of two operating characteristics (TPR and FPR) as the criterion changes.[2]

History

The ROC curve was first developed by electrical engineers and radar engineers during World War II for detecting enemy objects in battlefields, starting in 1941, which led to its name ("receiver operating characteristic").[3]


It was soon introduced to psychology to account for the perceptual detection of stimuli. ROC analysis has been used in medicine, radiology, biometrics, forecasting of natural hazards,[4] meteorology,[5] model performance assessment,[6] and other areas for many decades and is increasingly used in machine learning and data mining research.

Other measures

the intercept of the ROC curve with the line at 45 degrees orthogonal to the no-discrimination line – the balance point where Sensitivity = Specificity

the intercept of the ROC curve with the tangent at 45 degrees parallel to the no-discrimination line that is closest to the error-free point (0,1) – also called Youden's J statistic and generalized as Informedness

the area between the ROC curve and the no-discrimination line multiplied by two is called the Gini coefficient, especially in the context of credit scoring.[22] It should not be confused with the measure of statistical dispersion also called Gini coefficient.

the area between the full ROC curve and the triangular ROC curve including only (0,0), (1,1) and one selected operating point – Consistency[23]

the area under the ROC curve, or "AUC" ("area under curve"), or A′ (pronounced "a-prime"),[24] or "c-statistic" ("concordance statistic").[25]

the sensitivity index d′ (pronounced "d-prime"), the distance between the mean of the distribution of activity in the system under noise-alone conditions and its distribution under signal-alone conditions, divided by their standard deviation, under the assumption that both these distributions are normal with the same standard deviation. Under these assumptions, the shape of the ROC is entirely determined by d′.

Z-score

If a standard score is applied to the ROC curve, the curve will be transformed into a straight line.[50] This z-score is based on a normal distribution with a mean of zero and a standard deviation of one. In memory strength theory, one must assume that the zROC is not only linear, but has a slope of 1.0. The normal distributions of targets (studied objects that the subjects need to recall) and lures (non-studied objects that the subjects attempt to recall) are what causes the zROC to be linear.


The linearity of the zROC curve depends on the standard deviations of the target and lure strength distributions. If the standard deviations are equal, the slope will be 1.0. If the standard deviation of the target strength distribution is larger than the standard deviation of the lure strength distribution, then the slope will be smaller than 1.0. In most studies, it has been found that the zROC curve slopes consistently fall below 1, usually between 0.5 and 0.9.[51] Many experiments yielded a zROC slope of 0.8. A slope of 0.8 implies that the variability of the target strength distribution is 25% larger than the variability of the lure strength distribution.[52]
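The z-transformation described above can be sketched directly: applying the inverse normal CDF to each (false-alarm rate, hit rate) point yields the zROC, whose slope under the normal-strength model equals the ratio of the lure to the target standard deviation. The function name below is illustrative, and estimating the slope from two endpoints is just for demonstration.

```python
from statistics import NormalDist

def zroc_points(roc_pts):
    """Map (false-alarm rate, hit rate) pairs into z-space via the inverse
    normal CDF; under the normal-strength model these points fall on a
    straight line with slope sigma_lure / sigma_target."""
    z = NormalDist().inv_cdf
    return [(z(far), z(hr)) for far, hr in roc_pts]
```

In the equal-variance case the transformed points lie exactly on a line of slope 1.0 whose intercept is d′.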


Another variable used is d' (d prime) (discussed above in "Other measures"), which can easily be expressed in terms of z-values. Although d' is a commonly used parameter, it must be recognized that it is only relevant when strictly adhering to the very strong assumptions of strength theory made above.[53]


The z-score of an ROC curve is always linear, as assumed, except in special situations. The Yonelinas familiarity-recollection model is a two-dimensional account of recognition memory. Instead of the subject simply answering yes or no to a specific input, the subject gives the input a feeling of familiarity, which operates like the original ROC curve. What changes, though, is a parameter for Recollection (R). Recollection is assumed to be all-or-none, and it trumps familiarity. If there were no recollection component, zROC would have a predicted slope of 1. However, when adding the recollection component, the zROC curve will be concave up, with a decreased slope. This difference in shape and slope results from an added element of variability due to some items being recollected. Patients with anterograde amnesia are unable to recollect, so their Yonelinas zROC curve would have a slope close to 1.0.[54]

ROC curves beyond binary classification

The extension of ROC curves for classification problems with more than two classes is cumbersome. Two common approaches for when there are multiple classes are (1) average over all pairwise AUC values[63] and (2) compute the volume under surface (VUS).[64][65] To average over all pairwise classes, one computes the AUC for each pair of classes, using only the examples from those two classes as if there were no other classes, and then averages these AUC values over all possible pairs. When there are c classes there will be c(c − 1) / 2 possible pairs of classes.
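A minimal sketch of the pairwise-averaging approach (in the spirit of the Hand-and-Till average; the helper names are illustrative, and class labels are assumed to double as column indices into the score matrix):

```python
from itertools import combinations

def auc(pos, neg):
    """P(random positive scores above random negative); ties count 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_pairwise_auc(y, scores):
    """Macro-average of pairwise AUCs over all c(c-1)/2 class pairs.
    y[k] is the true class index of example k; scores[k][c] is the
    classifier's score for class c on example k."""
    classes = sorted(set(y))
    by_class = {c: [s for s, t in zip(scores, y) if t == c] for c in classes}
    pair_aucs = []
    for a, b in combinations(classes, 2):
        # A(a|b): how well class a's score separates a-examples from b-examples
        a_given_b = auc([s[a] for s in by_class[a]], [s[a] for s in by_class[b]])
        # A(b|a): the symmetric term using class b's score
        b_given_a = auc([s[b] for s in by_class[b]], [s[b] for s in by_class[a]])
        pair_aucs.append((a_given_b + b_given_a) / 2)
    return sum(pair_aucs) / len(pair_aucs)
```

A classifier that ranks every pair perfectly averages to 1.0, while an uninformative one (identical scores everywhere) averages to 0.5, matching the binary-AUC endpoints.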


The volume under surface approach has one plot a hypersurface rather than a curve and then measure the hypervolume under that hypersurface. Every possible decision rule that one might use for a classifier for c classes can be described in terms of its true positive rates (TPR1, . . . , TPRc). It is this set of rates that defines a point, and the set of all possible decision rules yields a cloud of points that define the hypersurface. With this definition, the VUS is the probability that the classifier will be able to correctly label all c examples when it is given a set that has one randomly selected example from each class. The implementation of a classifier that knows that its input set consists of one example from each class might first compute a goodness-of-fit score for each of the c2 possible pairings of an example to a class, and then employ the Hungarian algorithm to maximize the sum of the c selected scores over all c! possible ways to assign exactly one example to each class.
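The assignment step at the end of that procedure can be sketched by brute force over the c! labelings; the Hungarian algorithm reaches the same optimum in polynomial time, and the function name here is illustrative.

```python
from itertools import permutations

def best_one_per_class_labeling(scores):
    """scores[k][c]: goodness-of-fit of example k to class c.
    Exhaustively tries every way to assign exactly one example to each
    class and returns the labeling with the maximum total score; the
    Hungarian algorithm finds the same optimum in O(c^3) instead of O(c!)."""
    c = len(scores)
    best_total, best_labels = float("-inf"), None
    for perm in permutations(range(c)):
        total = sum(scores[k][perm[k]] for k in range(c))
        if total > best_total:
            best_total, best_labels = total, list(perm)
    return best_labels
```

Note that the jointly optimal labeling can differ from labeling each example greedily: the one-example-per-class constraint may force an example away from its individually best class.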


Given the success of ROC curves for the assessment of classification models, the extension of ROC curves for other supervised tasks has also been investigated. Notable proposals for regression problems are the so-called regression error characteristic (REC) Curves [66] and the Regression ROC (RROC) curves.[67] In the latter, RROC curves become extremely similar to ROC curves for classification, with the notions of asymmetry, dominance and convex hull. Also, the area under RROC curves is proportional to the error variance of the regression model.

External links

Animated ROC demo

ROC demo

another ROC demo

ROC video explanation

An Introduction to the Total Operating Characteristic: Utility in Land Change Model Evaluation

How to run the TOC Package in R

TOC R package on Github

Excel Workbook for generating TOC curves

Further reading

Balakrishnan, Narayanaswamy (1991). Handbook of the Logistic Distribution. Marcel Dekker, Inc. ISBN 978-0-8247-8587-1.

Brown, Christopher D.; Davis, Herbert T. (2006). "Receiver operating characteristic curves and related decision measures: a tutorial". Chemometrics and Intelligent Laboratory Systems. 80: 24–38. doi:10.1016/j.chemolab.2005.05.004.

Rotello, Caren M.; Heit, Evan; Dubé, Chad (2014). "When more data steer us wrong: replications with the wrong dependent measure perpetuate erroneous conclusions". Psychonomic Bulletin & Review. 22 (4): 944–954. doi:10.3758/s13423-014-0759-2. PMID 25384892.

Fawcett, Tom (2004). "ROC Graphs: Notes and Practical Considerations for Researchers". Pattern Recognition Letters. 27 (8): 882–891. doi:10.1016/j.patrec.2005.10.012.

Gonen, Mithat (2007). Analyzing Receiver Operating Characteristic Curves Using SAS. SAS Press. ISBN 978-1-59994-298-8.

Green, William H. (2003). Econometric Analysis (5th ed.). Prentice Hall. ISBN 0-13-066189-9.

Heagerty, Patrick J.; Lumley, Thomas; Pepe, Margaret S. (2000). "Time-dependent ROC Curves for Censored Survival Data and a Diagnostic Marker". Biometrics. 56 (2): 337–344. doi:10.1111/j.0006-341x.2000.00337.x. PMID 10877287.

Hosmer, David W.; Lemeshow, Stanley (2000). Applied Logistic Regression (2nd ed.). New York, NY: Wiley. ISBN 0-471-35632-8.

Lasko, Thomas A.; Bhagwat, Jui G.; Zou, Kelly H.; Ohno-Machado, Lucila (2005). "The use of receiver operating characteristic curves in biomedical informatics". Journal of Biomedical Informatics. 38 (5): 404–415. doi:10.1016/j.jbi.2005.02.008. PMID 16198999.

Mas, Jean-François; Filho, Britaldo Soares; Pontius, Jr, Robert Gilmore; Gutiérrez, Michelle Farfán; Rodrigues, Hermann (2013). "A suite of tools for ROC analysis of spatial models". ISPRS International Journal of Geo-Information. 2 (3): 869–887. doi:10.3390/ijgi2030869.

Pontius, Jr, Robert Gilmore; Parmentier, Benoit (2014). "Recommendations for using the Relative Operating Characteristic (ROC)". Landscape Ecology. 29 (3): 367–382. doi:10.1007/s10980-013-9984-8.

Pontius, Jr, Robert Gilmore; Pacheco, Pablo (2004). "Calibration and validation of a model of forest disturbance in the Western Ghats, India 1920–1990". GeoJournal. 61 (4): 325–334. doi:10.1007/s10708-004-5049-5.

Pontius, Jr, Robert Gilmore; Batchu, Kiran (2003). "Using the relative operating characteristic to quantify certainty in prediction of location of land cover change in India". Transactions in GIS. 7 (4): 467–484. doi:10.1111/1467-9671.00159.

Pontius, Jr, Robert Gilmore; Schneider, Laura (2001). "Land-use change model validation by a ROC method for the Ipswich watershed, Massachusetts, USA". Agriculture, Ecosystems & Environment. 85 (1–3): 239–248. doi:10.1016/S0167-8809(01)00187-6.

Stephan, Carsten; Wesseling, Sebastian; Schink, Tania; Jung, Klaus (2003). "Comparison of Eight Computer Programs for Receiver-Operating Characteristic Analysis". Clinical Chemistry. 49 (3): 433–439. doi:10.1373/49.3.433. PMID 12600955.

Swets, John A.; Dawes, Robyn M.; Monahan, John (2000). "Better Decisions through Science". Scientific American, October, pp. 82–87.

Zou, Kelly H.; O'Malley, A. James; Mauri, Laura (2007). "Receiver-operating characteristic analysis for evaluating diagnostic tests and predictive models". Circulation. 115 (5): 654–657. doi:10.1161/circulationaha.105.594929. PMID 17283280.

Zhou, Xiao-Hua; Obuchowski, Nancy A.; McClish, Donna K. (2002). Statistical Methods in Diagnostic Medicine. New York, NY: Wiley & Sons. ISBN 978-0-471-34772-9.

Chicco, D.; Jurman, G. (2023). "The Matthews correlation coefficient (MCC) should replace the ROC AUC as the standard metric for assessing binary classification". BioData Mining. 16 (1): 4. doi:10.1186/s13040-023-00322-4. PMID 36800973.