# David’s helpful metrics library

A collection of metrics for analysing confusion matrices.
There are many different ways to evaluate a confusion matrix; this helpful module implements a large number of them.

The original implementation was in Perl, around 2005, and I appear not to have noted many of the references. My apologies.

Details of each calculation are in the metric's docstring. The module should be used as follows:
```python
from metrics import Metrics

Metrics.list_metrics()                               # lists method names
Metrics.list_metrics(verbose=True)                   # gives a dictionary with the docstrings
Metrics.measure(method, tp=TP, fp=FP, tn=TN, fn=FN)  # for True Positive, False Negative etc.
```
You will probably want to wrap this in a `try .. except` block, as it will raise an error if inappropriate data is given. The `measure` method will convert counts to proportional data.
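For example, a minimal sketch of defensive use, assuming the metric is selected by its name string and that the counts come from your own classifier's results:

```python
from metrics import Metrics

# Counts from a hypothetical binary classifier evaluation.
TP, FP, TN, FN = 45, 5, 40, 10

try:
    # measure() converts the raw counts to proportions internally.
    score = Metrics.measure('accuracy', tp=TP, fp=FP, tn=TN, fn=FN)
    print(f'accuracy = {score}')
except Exception as err:
    # Inappropriate data (e.g. negative counts) raises an error.
    print(f'Could not compute metric: {err}')
```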
Don’t forget to call `Metrics.cite(method)`, which will give a list of citations, if available. If you wish to add to the citations, please submit a pull request.
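A sketch of the call pattern, assuming `cite` returns a (possibly empty) list for a metric name:

```python
# Print any recorded citations for a metric before reporting it.
for citation in Metrics.cite('youden') or []:
    print(citation)
```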
I’d like to expand the help text for each metric in due course.
Further information on many of the metrics and their behaviour can be found in [Tharwat, Applied Computing and Informatics (2018)](https://doi.org/10.1016/j.aci.2018.08.003).
[Find this on BitBucket](https://bitbucket.org/davidmam/metrics.git)
The currently implemented metrics are:

```
q1 q2 q3 q4 q5 q6 q7 dpower agf markedness bcr ber gm agm op req tanimoto
roc specificity fprate fnrate precision negativepv plr nlr youden accuracy
fscore f2measure fmeasure f0_5measure power logpower bajic_k chisquare ctg
yuleY yuleQ ivesgibbs acp acc gdip1 gdip2 gdip3 hamming jaccard
```
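For quick reference, the textbook definitions of a handful of these are sketched below; this module's own implementations may differ in naming or edge-case handling, so treat this only as an illustration of what the metrics measure:

```python
def accuracy(tp, fp, tn, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp, tn, fn):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def specificity(tp, fp, tn, fn):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def fmeasure(tp, fp, tn, fn):
    """F1 score: harmonic mean of precision and recall."""
    recall = tp / (tp + fn)
    p = precision(tp, fp, tn, fn)
    return 2 * p * recall / (p + recall)
```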