# MultiLabel Classifier Evaluation Metrics

Quickly evaluate multi-label classifiers with a variety of metrics.

This toolkit provides a range of evaluation metrics for measuring the performance of a multi-label classifier.
## Intro

Evaluation metrics for multi-label classification fall into two broad categories:

- Example-based evaluation metrics
- Label-based evaluation metrics
## Metrics

- Example-Based Metrics
  - Exact Match Ratio (EMR)
  - 1/0 Loss
  - Hamming Loss
  - Example-Based Accuracy
  - Example-Based Precision
- Label-Based Metrics
  - Macro-Averaged Accuracy
  - Macro-Averaged Precision
  - Macro-Averaged Recall
  - Micro-Averaged Accuracy
  - Micro-Averaged Precision
  - Micro-Averaged Recall
- α-Evaluation Score
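A few of these metrics can be sketched directly with NumPy; this is a from-scratch illustration of how example-based and label-based averaging differ, not the toolkit's own implementation:

```python
import numpy as np

# Binary indicator matrices: 5 samples, 2 labels
y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

# Exact Match Ratio: fraction of samples whose entire label set is predicted exactly
emr = np.mean(np.all(y_true == y_pred, axis=1))

# Hamming Loss: fraction of individual label assignments that are wrong
hamming_loss = np.mean(y_true != y_pred)

# Micro-averaged precision: pool true/false positives over all labels, then divide
tp = np.sum((y_true == 1) & (y_pred == 1))
fp = np.sum((y_true == 0) & (y_pred == 1))
micro_precision = tp / (tp + fp)

# Macro-averaged precision: compute precision per label, then average the labels
tp_per_label = np.sum((y_true == 1) & (y_pred == 1), axis=0)
pred_pos_per_label = np.sum(y_pred == 1, axis=0)
macro_precision = np.mean(tp_per_label / pred_pos_per_label)
```

For these toy matrices the exact match ratio is 0.6 (three of five samples match fully), the Hamming loss is 0.2 (two of ten label slots are wrong), and micro and macro precision differ because micro averaging weights labels by their positive-prediction counts.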
## Examples

```python
import numpy as np

from multilabel_eval_metrics import *

if __name__ == "__main__":
    y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
    y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])
    print(y_true)
    print(y_pred)
    result = MultiLabelMetrics(y_true, y_pred).get_metric_summary(show=True)
```
## License

The multilabel-eval-metrics toolkit is provided by Donghua Chen under the MIT License.
### Hashes for multilabel-eval-metrics-0.0.2.tar.gz (source distribution)

| Algorithm | Hash digest |
|---|---|
| SHA256 | f410224fa78c7026e82c522391b3e7cd2e166d970c0827a9161e2f75e89f4d32 |
| MD5 | f37be3cbe898223261fffa4c6a16380d |
| BLAKE2b-256 | 08ad3a85e01344f5c0cf5a14d79474d0120526b6c45dc88a444a4405c59702a3 |
### Hashes for multilabel_eval_metrics-0.0.2-py3-none-any.whl (built distribution)

| Algorithm | Hash digest |
|---|---|
| SHA256 | 15865bc1f2ef573423ce08e2ea3a487d365e9080283cca0180be07d2efec48c2 |
| MD5 | 11049a9ef4d2ae7ff40609c0d2376574 |
| BLAKE2b-256 | 86df8fb9ceee947b3dbd3880ea8a6767562d0446340c3eededc0eb582ba2f275 |