
Quickly evaluate multi-label classifiers with a variety of metrics


MultiLabel Classifier Evaluation Metrics

This toolkit provides a collection of evaluation metrics for assessing the performance of a multi-label classifier.

Intro

The evaluation metrics for multi-label classification can be broadly classified into two categories:

  • Example-Based Evaluation Metrics
  • Label-Based Evaluation Metrics

Example-based metrics score each sample's predicted label set against its true label set and average over samples, whereas label-based metrics compute a score per label and aggregate it across labels, either macro- or micro-averaged.

Metrics

Example-Based Metrics

  • Exact Match Ratio (EMR)
  • 1/0 Loss
  • Hamming Loss
  • Example-Based Accuracy
  • Example-Based Precision
  • α-Evaluation Score

Label-Based Metrics

  • Macro-Averaged Accuracy
  • Macro-Averaged Precision
  • Macro-Averaged Recall
  • Micro-Averaged Accuracy
  • Micro-Averaged Precision
  • Micro-Averaged Recall
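
These metrics follow their standard definitions from the multi-label classification literature. For reference, here is a minimal NumPy sketch of a few of them; it is illustrative only, not the toolkit's own implementation:

import numpy as np

def exact_match_ratio(y_true, y_pred):
    # Fraction of samples whose entire label vector is predicted exactly.
    return np.all(y_true == y_pred, axis=1).mean()

def hamming_loss(y_true, y_pred):
    # Fraction of individual label assignments that are wrong.
    return (y_true != y_pred).mean()

def example_based_accuracy(y_true, y_pred):
    # Per sample: |true ∩ predicted| / |true ∪ predicted|, averaged over samples.
    intersection = ((y_true == 1) & (y_pred == 1)).sum(axis=1)
    union = ((y_true == 1) | (y_pred == 1)).sum(axis=1)
    return np.mean(intersection / np.maximum(union, 1))

def macro_precision(y_true, y_pred):
    # Precision computed per label, then averaged so every label counts equally.
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=0)
    return np.mean(tp / np.maximum(tp + fp, 1))

def micro_precision(y_true, y_pred):
    # True/false positive counts pooled across all labels before dividing.
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    fp = ((y_true == 0) & (y_pred == 1)).sum()
    return tp / max(tp + fp, 1)

Note the difference in averaging: macro-averaging weights every label equally, while micro-averaging pools the counts, so frequent labels dominate the score.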

Examples
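
To run the example below, first install the package from PyPI:

pip install multilabel-eval-metrics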

from multilabel_eval_metrics import MultiLabelMetrics
import numpy as np

if __name__ == "__main__":
    # Binary indicator matrices: rows are samples, columns are labels
    y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
    y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])
    print(y_true)
    print(y_pred)
    # Compute the full metric summary; show=True also prints it
    result = MultiLabelMetrics(y_true, y_pred).get_metric_summary(show=True)
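
Assuming show=True behaves as its name suggests, the call prints the metric summary; result also holds the returned summary for further processing.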

License

The multilabel-eval-metrics toolkit is provided by Donghua Chen under the MIT License.

Reference

Evaluation Metrics for Multi-Label Classification
