
# ROCR <img src="https://raw.githubusercontent.com/ipa-tys/ROCR/rocrimages/ROCR_small.png" align="right">
<!-- badges: start -->
[![R-CMD-check](https://github.com/ipa-tys/ROCR/workflows/R-CMD-check/badge.svg)](https://github.com/ipa-tys/ROCR/actions?query=workflow:R-CMD-check)
[![CRAN status](https://www.r-pkg.org/badges/version/ROCR)](https://CRAN.R-project.org/package=ROCR)
[![codecov](https://codecov.io/gh/ipa-tys/ROCR/branch/master/graph/badge.svg)](https://codecov.io/gh/ipa-tys/ROCR)
<!-- badges: end -->
*visualizing classifier performance in R, with only 3 commands*
![](https://raw.githubusercontent.com/ipa-tys/ROCR/rocrimages/ourplot_website.gif)
### Please support our work by citing the ROCR article in your publications:
***Sing T, Sander O, Beerenwinkel N, Lengauer T. (2005)
ROCR: visualizing classifier performance in R.
Bioinformatics 21(20):3940-3941.***
Free full text:
http://bioinformatics.oxfordjournals.org/content/21/20/3940.full
[<img src="https://raw.githubusercontent.com/ipa-tys/ROCR/rocrimages/logo_mpi_430.png" align="right" width="300">](https://www.mpi-inf.mpg.de/home/)
`ROCR` was originally developed at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/home/).
## Introduction
`ROCR` (with obvious pronunciation) is an R package for evaluating and visualizing classifier performance. It is...
* ...easy to use: adds only three new commands to R.
* ...flexible: integrates tightly with R's built-in graphics facilities.
* ...powerful: currently, 28 performance measures are implemented, which can be freely combined to form parametric curves such as ROC curves, precision/recall curves, or lift curves. Many options are available, such as curve averaging (for cross-validation or bootstrap), augmenting the averaged curves with standard error bars or box plots, printing cutoff labels on the curve, or coloring curves according to cutoff.
### Performance measures that `ROCR` knows:
Accuracy, error rate, true positive rate, false positive rate, true negative rate, false negative rate, sensitivity, specificity, recall, positive predictive value, negative predictive value, precision, fallout, miss, phi correlation coefficient, Matthews correlation coefficient, mutual information, chi-square statistic, odds ratio, lift value, precision/recall F measure, ROC convex hull, area under the ROC curve, precision/recall break-even point, calibration error, mean cross-entropy, root mean squared error, SAR measure, expected cost, explicit cost.
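For instance, a scalar measure such as the area under the ROC curve is obtained through the same `performance()` interface. This sketch uses `ROCR.simple`, the small example dataset shipped with the package:

```r
library(ROCR)

data(ROCR.simple)  # bundled example: $predictions (scores) and $labels (true classes)
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)

# "auc" is one of the 28 measures; scalar results live in the y.values slot
auc <- performance(pred, measure = "auc")
auc@y.values[[1]]
```
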
### `ROCR` features:
ROC curves, precision/recall plots, lift charts, cost curves, custom curves by freely selecting one performance measure for the x axis and one for the y axis, handling of data from cross-validation or bootstrapping, curve averaging (vertically, horizontally, or by threshold), standard error bars, box plots, curves that are color-coded by cutoff, printing threshold values on the curve, tight integration with R's plotting facilities (making it easy to adjust plots or to combine multiple plots), fully customizable, easy to use (only 3 commands).
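Curve averaging over multiple runs can be sketched with `ROCR.xval`, the bundled ten-fold cross-validation example: `prediction()` accepts a list of score vectors (one per run), and `plot()` averages the resulting curves.

```r
library(ROCR)

data(ROCR.xval)  # bundled example: scores and labels from 10 cross-validation runs
pred <- prediction(ROCR.xval$predictions, ROCR.xval$labels)
perf <- performance(pred, "tpr", "fpr")

# average the 10 ROC curves vertically; box plots show the spread across runs
plot(perf, avg = "vertical", spread.estimate = "boxplot")
```
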
## Installation of `ROCR`
The most straightforward way to install and use `ROCR` is to install it from
`CRAN` by starting `R` and using the `install.packages` function:
```
install.packages("ROCR")
```
Alternatively, you can install it from the command line using the tarball like this:
```
R CMD INSTALL ROCR_*.tar.gz
```
## Getting started
From within R:
```
library(ROCR)
demo(ROCR)
help(package=ROCR)
```
## Examples
Using ROCR's 3 commands to produce a simple ROC plot:
```
pred <- prediction(predictions, labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, col=rainbow(10))
```
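The same `plot()` call supports the cutoff-related options mentioned above. A self-contained sketch using the bundled `ROCR.simple` data, coloring the ROC curve by cutoff and printing selected threshold values on it:

```r
library(ROCR)

data(ROCR.simple)  # bundled example scores and labels
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")

# colorize = TRUE maps cutoff values to a color gradient along the curve;
# print.cutoffs.at annotates the curve at the given thresholds
plot(perf, colorize = TRUE, print.cutoffs.at = seq(0.1, 0.9, by = 0.2))
```
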
## Documentation
* The reference manual can be found [here](https://CRAN.R-project.org/package=ROCR)
* Slide deck for a tutorial talk (feel free to reuse for teaching, but please give appropriate credit and write us an email): [PPT](https://raw.githubusercontent.com/ipa-tys/ROCR/rocrimages/ROCR_Talk_Tobias_Sing.ppt)
* A few pointers to the literature on classifier evaluation
## Contact
Questions, comments, and suggestions are very welcome: open an issue on GitHub and we can discuss. We are also interested in seeing how ROCR is used in publications, so if you have written a paper using ROCR, we'd be happy to hear about it.
