Quickly evaluate multi-label classifiers with a variety of metrics

Project description

Evaluation metrics for multi-label classification models

This toolkit focuses on the different evaluation metrics that can be used to assess the performance of a multi-label classifier.

Intro

The evaluation metrics for multi-label classification fall into two broad categories, contrasted in the sketch after this list:

  • Example-Based Evaluation Metrics
  • Label-Based Evaluation Metrics
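
To make the distinction concrete, here is a minimal NumPy sketch (independent of this toolkit; the variable names are illustrative) that scores the same predictions both ways: example-based metrics score each sample first and then average over samples, while label-based metrics score each label first and then average over labels.

import numpy as np

y_true = np.array([[0, 1], [1, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 0]])

# Example-based accuracy: score each sample (row) by the Jaccard overlap
# of its true and predicted label sets, then average over samples.
example_accuracy = np.mean(
    [np.sum(t & p) / np.sum(t | p) for t, p in zip(y_true, y_pred)]
)

# Label-based (macro-averaged) accuracy: score each label (column) by the
# fraction of samples on which it is predicted correctly, then average
# over labels.
label_accuracy = np.mean(
    [np.mean(y_true[:, j] == y_pred[:, j]) for j in range(y_true.shape[1])]
)

print(example_accuracy, label_accuracy)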

Metrics

Example-based metrics:

  • Exact Match Ratio (EMR)
  • 1/0 Loss
  • Hamming Loss
  • Example-Based Accuracy
  • Example-Based Precision

Label-based metrics:

  • Macro Averaged Accuracy
  • Macro Averaged Precision
  • Macro Averaged Recall
  • Micro Averaged Accuracy
  • Micro Averaged Precision
  • Micro Averaged Recall

Additionally, the α-Evaluation Score is supported.
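
As a quick illustration of the first two example-based metrics, the following self-contained NumPy sketch (it does not use this toolkit) computes the Exact Match Ratio and the Hamming Loss on the same toy data used in the example below:

import numpy as np

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

# Exact Match Ratio (subset accuracy): fraction of samples whose entire
# label vector is predicted exactly.
emr = np.mean(np.all(y_true == y_pred, axis=1))

# Hamming Loss: fraction of individual label assignments that are wrong.
hamming = np.mean(y_true != y_pred)

print(f"EMR: {emr:.2f}, Hamming Loss: {hamming:.2f}")  # EMR: 0.60, Hamming Loss: 0.20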

Examples

import numpy as np

from multilabel_eval_metrics import MultiLabelMetrics

if __name__ == "__main__":
    # Binary indicator matrices of shape (n_samples, n_labels)
    y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
    y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])
    print(y_true)
    print(y_pred)

    # Compute a summary of the supported metrics (show=True prints it)
    result = MultiLabelMetrics(y_true, y_pred).get_metric_summary(show=True)
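
If scikit-learn is installed, the overlapping metrics can be cross-checked against its implementations. This is a sketch assuming the standard definitions; note that scikit-learn's subset accuracy corresponds to the Exact Match Ratio:

import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, precision_score, recall_score

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

print("Exact Match Ratio:", accuracy_score(y_true, y_pred))  # subset accuracy
print("Hamming Loss:", hamming_loss(y_true, y_pred))
print("Macro Precision:", precision_score(y_true, y_pred, average="macro"))
print("Micro Recall:", recall_score(y_true, y_pred, average="micro"))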

License

The multilabel-eval-metrics toolkit is provided by Donghua Chen under the MIT License.
