Quickly evaluate multi-label classifiers with a variety of metrics
Project description
Evaluation metrics for multi-label classification models
This toolkit provides a collection of evaluation metrics for assessing the performance of a multi-label classifier.
Intro
The evaluation metrics for multi-label classification fall broadly into two categories:
- Example-Based Evaluation Metrics, which average a score over individual samples
- Label-Based Evaluation Metrics, which aggregate per-label statistics via macro or micro averaging
Metrics
- Example-Based Metrics
  - Exact Match Ratio (EMR)
  - 1/0 Loss
  - Hamming Loss
  - Example-Based Accuracy
  - Example-Based Precision
- Label-Based Metrics
  - Macro-Averaged Accuracy
  - Macro-Averaged Precision
  - Macro-Averaged Recall
  - Micro-Averaged Accuracy
  - Micro-Averaged Precision
  - Micro-Averaged Recall
- α-Evaluation Score
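For a concrete sense of the example-based metrics, the sketch below computes the Exact Match Ratio and Hamming Loss for a small prediction matrix with plain NumPy. This is a minimal illustration of the definitions, not the toolkit's internal implementation:

```python
import numpy as np

def exact_match_ratio(y_true, y_pred):
    # Fraction of samples whose entire label vector is predicted correctly.
    return np.all(y_true == y_pred, axis=1).mean()

def hamming_loss(y_true, y_pred):
    # Fraction of individual label assignments that are wrong,
    # averaged over all samples and all labels.
    return np.not_equal(y_true, y_pred).mean()

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

print(exact_match_ratio(y_true, y_pred))  # 0.6: 3 of 5 rows match exactly
print(hamming_loss(y_true, y_pred))       # 0.2: 2 of 10 label slots differ
```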
Examples
```python
from multilabel_eval_metrics import *
import numpy as np

if __name__ == "__main__":
    # Binary indicator matrices: one row per sample, one column per label.
    y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
    y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])
    print(y_true)
    print(y_pred)
    # Compute all supported metrics and print a summary.
    result = MultiLabelMetrics(y_true, y_pred).get_metric_summary(show=True)
```
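If scikit-learn is installed, the example-based numbers can be cross-checked against its built-in multi-label metrics; for multi-label indicator arrays, `accuracy_score` computes subset accuracy, which is the same quantity as the Exact Match Ratio. This is an optional sanity check, not part of the toolkit itself:

```python
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, precision_score

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

print(accuracy_score(y_true, y_pred))  # 0.6 (subset accuracy == Exact Match Ratio)
print(hamming_loss(y_true, y_pred))    # 0.2

# Macro averaging takes the mean of per-label precisions;
# micro averaging pools true/false positives across all labels.
print(precision_score(y_true, y_pred, average="macro"))  # 0.875
print(precision_score(y_true, y_pred, average="micro"))  # ~0.857 (6 TP / 7 predicted positives)
```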
License
The multilabel-eval-metrics toolkit is provided by Donghua Chen under the MIT License.
Download files
Source Distribution

Hashes for multilabel-eval-metrics-0.0.1.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 462ca11ff5e89d5831a233f3dd1edc7f9e38457aa341b0552b3ff37d657f1551 |
| MD5 | 54925c82b0db2936267ae00646d248b8 |
| BLAKE2b-256 | d98ee32ac78225c50cc7194f6174f0c637113e3243f82ddcc5af4949850f3b98 |
Built Distribution

Hashes for multilabel_eval_metrics-0.0.1-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9b50c3d0a3df318f5067b11356963e39a44e4e6aba98636571cc64cd2b486c8f |
| MD5 | 957039f0b9b836d3b1747b1742b5c7e5 |
| BLAKE2b-256 | cd4442203135c243e2d634ca6e0a2718c76b3c3c0350ee17bba4d547f2745680 |