
Quickly evaluate multi-label classifiers with a variety of metrics

Project description

MultiLabel Classifier Evaluation Metrics

This toolkit provides a range of evaluation metrics for assessing the performance of a multi-label classifier.

Intro

Evaluation metrics for multi-label classification fall broadly into two categories, illustrated by the short sketch after the list below:

  • Example-Based Evaluation Metrics
  • Label-Based Evaluation Metrics
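
To make the distinction concrete, here is a minimal NumPy sketch (not the toolkit's implementation; the exact definitions used by MultiLabelMetrics may differ) contrasting the two views on a small label-indicator matrix:

import numpy as np

# Toy label-indicator matrices: rows are examples, columns are labels.
y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

# Example-based view: score each example (row) first, then average over examples.
# Here: accuracy of example i = |Y_i ∩ Z_i| / |Y_i ∪ Z_i|.
# (Assumes every example has a non-empty union of true and predicted labels.)
intersection = np.logical_and(y_true, y_pred).sum(axis=1)
union = np.logical_or(y_true, y_pred).sum(axis=1)
example_based_accuracy = np.mean(intersection / union)

# Label-based view: score each label (column) first, then average over labels.
# Here: precision of label j = TP_j / (TP_j + FP_j), macro-averaged over labels.
# (Assumes every label is predicted at least once.)
tp = np.logical_and(y_true, y_pred).sum(axis=0)
predicted = y_pred.sum(axis=0)
macro_precision = np.mean(tp / predicted)

print(example_based_accuracy, macro_precision)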

Metrics

  • Exact Match Ratio (EMR)
  • 1/0 Loss
  • Hamming Loss
  • Example-Based Accuracy
  • Example-Based Precision
  • Label-Based Metrics
  • Macro-Averaged Accuracy
  • Macro-Averaged Precision
  • Macro-Averaged Recall
  • Micro-Averaged Accuracy
  • Micro-Averaged Precision
  • Micro-Averaged Recall
  • α-Evaluation Score
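
As a rough guide to what a few of these quantities measure, the sketch below computes the Exact Match Ratio, Hamming Loss, and micro-averaged precision/recall directly with NumPy. It is for illustration only and may differ from the toolkit's own implementation in edge-case handling (e.g. empty label sets):

import numpy as np

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

# Exact Match Ratio: fraction of examples whose entire label set is predicted exactly.
emr = np.all(y_true == y_pred, axis=1).mean()

# Hamming Loss: fraction of individual label assignments that are wrong.
hamming = (y_true != y_pred).mean()

# Micro-averaged precision/recall: pool true positives, false positives and
# false negatives across all labels before forming the ratios.
tp = np.logical_and(y_true == 1, y_pred == 1).sum()
fp = np.logical_and(y_true == 0, y_pred == 1).sum()
fn = np.logical_and(y_true == 1, y_pred == 0).sum()
micro_precision = tp / (tp + fp)
micro_recall = tp / (tp + fn)

print(emr, hamming, micro_precision, micro_recall)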

Examples

from multilabel_eval_metrics import MultiLabelMetrics
import numpy as np

if __name__ == "__main__":
    # Ground-truth and predicted label-indicator matrices
    # (rows are examples, columns are labels).
    y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
    y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])
    print(y_true)
    print(y_pred)
    # Compute all supported metrics and print a summary table.
    result = MultiLabelMetrics(y_true, y_pred).get_metric_summary(show=True)
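
As a sanity check, several of the listed metrics have well-known counterparts in scikit-learn (which this toolkit does not require): on multi-label indicator matrices, accuracy_score returns the subset accuracy (Exact Match Ratio), and hamming_loss, precision_score and recall_score support macro/micro averaging:

import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, precision_score, recall_score

y_true = np.array([[0, 1], [1, 1], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 0]])

print("Exact Match Ratio:", accuracy_score(y_true, y_pred))   # subset accuracy
print("Hamming loss:", hamming_loss(y_true, y_pred))
print("Macro precision:", precision_score(y_true, y_pred, average="macro"))
print("Micro recall:", recall_score(y_true, y_pred, average="micro"))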

License

The multilabel-eval-metrics toolkit is provided by Donghua Chen under the MIT License.

Reference

Evaluation Metrics for Multi-Label Classification

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

multilabel-eval-metrics-0.0.2.tar.gz (10.3 kB)

Uploaded Source

Built Distribution

multilabel_eval_metrics-0.0.2-py3-none-any.whl (8.4 kB)

Uploaded Python 3

File details

Details for the file multilabel-eval-metrics-0.0.2.tar.gz.

File metadata

  • Download URL: multilabel-eval-metrics-0.0.2.tar.gz
  • Upload date:
  • Size: 10.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.3 pkginfo/1.7.1 requests/2.27.1 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.6

File hashes

Hashes for multilabel-eval-metrics-0.0.2.tar.gz
  • SHA256: f410224fa78c7026e82c522391b3e7cd2e166d970c0827a9161e2f75e89f4d32
  • MD5: f37be3cbe898223261fffa4c6a16380d
  • BLAKE2b-256: 08ad3a85e01344f5c0cf5a14d79474d0120526b6c45dc88a444a4405c59702a3

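If you want to check a downloaded archive against the SHA256 digest listed above, a short Python snippet (assuming the file sits in the current working directory) is enough:

import hashlib

# Expected SHA256 digest for multilabel-eval-metrics-0.0.2.tar.gz, copied from the list above.
EXPECTED_SHA256 = "f410224fa78c7026e82c522391b3e7cd2e166d970c0827a9161e2f75e89f4d32"

with open("multilabel-eval-metrics-0.0.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED_SHA256 else "Hash mismatch!")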

File details

Details for the file multilabel_eval_metrics-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: multilabel_eval_metrics-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 8.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.3 pkginfo/1.7.1 requests/2.27.1 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.6

File hashes

Hashes for multilabel_eval_metrics-0.0.2-py3-none-any.whl
  • SHA256: 15865bc1f2ef573423ce08e2ea3a487d365e9080283cca0180be07d2efec48c2
  • MD5: 11049a9ef4d2ae7ff40609c0d2376574
  • BLAKE2b-256: 86df8fb9ceee947b3dbd3880ea8a6767562d0446340c3eededc0eb582ba2f275

