# David’s helpful metrics library
A collection of metrics for analysing confusion matrices.
There are many different ways to evaluate a confusion matrix. This module implements a large number of them:
- q2 (True Positive rate, recall, sensitivity)
- q7 (Matthews Correlation Coefficient)
- req (Relative Error Quotient)
- tanimoto (Tanimoto Index)
- hamming (Hamming distance as a proportion)
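For reference, metrics like these are computed from the four counts of a binary confusion matrix. A minimal sketch of two of the standard definitions (recall and the Matthews Correlation Coefficient) is below; this illustrates the mathematics, not the library's actual implementation:

```python
import math

def recall(tp, fp, tn, fn):
    """True Positive rate (sensitivity): tp / (tp + fn)."""
    return tp / (tp + fn)

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from raw counts.

    Returns 0.0 when any marginal total is zero, a common convention
    to avoid division by zero.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```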
The original implementation was in Perl, around 2005, and I appear not to have noted many of the references. My apologies.
Details of each calculation are in the docstrings. This module should be used as follows:
```python
from metrics import Metrics

Metrics.list_metrics()              # lists method names
Metrics.list_metrics(verbose=True)  # gives a dictionary with the docstrings
Metrics.measure(method, tp=TP, fp=FP, tn=TN, fn=FN)  # for True Positive, False Negative etc.
```
You probably want to wrap calls in try … except, as measure will raise an error if inappropriate data is given. The measure method converts counts to proportional data.
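One way to sketch that defensive wrapper, assuming only that `Metrics.measure` raises on bad input (the exact exception type isn't documented here, so a broad `except` is used); the wrapper takes the measuring function as an argument so it is not tied to this library:

```python
def safe_measure(measure_fn, method, **counts):
    """Call a metric function with confusion-matrix counts,
    returning None instead of raising when the inputs are
    inappropriate for the chosen metric."""
    try:
        return measure_fn(method, **counts)
    except Exception as err:  # the library signals inappropriate data with an error
        print(f"{method} could not be computed: {err}")
        return None

# With the library, this would be called as:
#   from metrics import Metrics
#   safe_measure(Metrics.measure, 'q2', tp=50, fp=5, tn=40, fn=5)
```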
Don’t forget to call Metrics.cite(method), which will give a list of citations where available. If you wish to add to the citations, please submit a pull request.
I’d like to expand the help text in due course for each metric.
[Find this on BitBucket](https://bitbucket.org/davidmam/metrics.git)
| Filename | Size | File type | Python version |
|---|---|---|---|
| confusion_metrics-0.0.4-py3-none-any.whl | 8.1 kB | Wheel | py3 |
| confusion-metrics-0.0.4.tar.gz | 5.9 kB | Source | None |