
Evaluation module for aicard.

Project description

AICard-Eval

This package was created under AI-CODE and is part of the transparency services for AI model cards. Its purpose is to provide a single tool for evaluating AI models. The output is standardized and meant (but not restricted) to be used as input for the aicard package.

Notice: This is an alpha version; some functions might not work as intended. Supported cases are text and image classification (binary, multi-class, and multi-label) and object detection.

⚡ Quickstart

To install, use:

conda create -n aicard-eval python=3.11
conda activate aicard-eval
pip install aicard-eval

or, if you have cloned this repo:

pip install -e .

Follow the script below; aicard-eval will choose the correct metrics for your case. For more examples, see the examples/ folder.

You can use datasets and models from service providers (e.g. Hugging Face), or you can use your local models and datasets. Supported dataset types are .csv, .tsv, .json, .jsonl, .xml, .yml, .yaml, .parquet, .feather, and .pickle; supported image types are .jpg, .jpeg, .png, .gif, .bmp, .tiff, and .tif.

import aicard_eval
from datasets import load_dataset
from transformers import pipeline

# 1) Load your model
classifier = pipeline(task="text-classification", model="SamLowe/roberta-base-go_emotions", top_k=None)

# 2) Load your dataset
dataset = load_dataset("google-research-datasets/go_emotions", split='test')
class_names = dataset.features["labels"].feature.names


# 3) Define a function to handle the dataset
def pipeline(data):
    sentences = list(data['text'])
    model_outputs = classifier(sentences)  # one list of {label, score} dicts per sentence
    out = []
    for sample in model_outputs:
        flat = {d['label']: d['score'] for d in sample}   # map label name -> score
        out.append([flat[name] for name in class_names])  # scores ordered by class index
    return out

# 4) Call the aicard-eval evaluate function
metrics = aicard_eval.evaluate(
    data=dataset,
    pipeline=pipeline,
    task=aicard_eval.tasks.vision.image_classification,
    batch_size=32)

print(metrics)
# {'package version': '0.1.0', 
# 'datetime': '2025-Nov-14 14:54', 
# 'task': 'Text Classification', 
# 'metrics': {
#     'precision_macro': 0.5090416420534856, 
#     'precision_micro': 0.5741662060070021, 
#     'recall_macro': 0.46497245851260965, 
#     'recall_micro': 0.5741662060070021, 
#     'top1_acc_micro': 0.5741662060070021, 
#     'top1_acc_macro': 0.5741662060070021, 
#     'top1_acc_weighted': 0.5741662060070021, 
#     'f1_macro': 0.4661938061623436, 
#     'f1_micro': 0.5741662060070021, 
#     'auc_roc_macro': 0.9286682043487104, 
#     'auc_roc_weighted': 0.9099991445506153}, 
# 'batch_size': 32, 
# 'hardware': 'CPU: AMD Ryzen 7 7800X3D 8-Core Processor, RAM: 15.62 GB, CUDA: | NVIDIA-SMI 580.102.01 Driver Version: 581.57  CUDA Version: 13.0|', 
# 'execution_time': 'inference: 34.68s, metrics: 45.76ms', 
# 'num_classes': 28}
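
Since the result is a plain dictionary, it can be stored or passed on directly. For example, one simple way to persist it for later use, e.g. as input for the aicard package (the file name below is only illustrative):

import json

# Write the standardized output to disk so it can be reused later.
with open("evaluation_metrics.json", "w") as f:
    json.dump(metrics, f, indent=2)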

💡 Pipeline Instructions

The pipeline function is the inference loop that the evaluate function calls to generate the model's predictions. It is completely abstract, meaning it can contain whatever logic the user wants. There are only two rules to follow when constructing the pipeline:

  1. It must have a single function parameter def pipeline(data)
  2. It must return a specific format depending on the task.

The package supports several return formats for each task, but until they are thoroughly tested, here is a list you can follow:

Task                          Return format      Example
Binary Classification        list[int]           [0, 1, 0, 0]
Multi-class Classification   list[int]           [2, 9, 3, 0]
Multi-label Classification   list[list[int]]     [[2], [9, 3], [3, 0, 1], [0]]
Object Detection             list[dict]          see below

[{"boxes": [[25, 27, 37, 54], [119, 111, 40, 67]],
  "labels": [0, 1],
  "scores": [0.88, 0.70]},
 {"boxes": [[64, 111, 64, 58]],
  "labels": [0],
  "scores": [0.71]}]
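
For instance, an object-detection pipeline producing this format could look like the sketch below. Here detector stands in for any per-image model call and is purely hypothetical, and the column name 'image' is only an assumption about the dataset; only the return format matters to aicard-eval.

# Minimal sketch of an object-detection pipeline (illustrative only).
def pipeline(data):
    out = []
    for image in data['image']:
        pred = detector(image)             # hypothetical inference call
        out.append({
            "boxes": pred["boxes"],        # [[x, y, w, h], ...]
            "labels": pred["labels"],      # [class_index, ...]
            "scores": pred["scores"],      # [confidence, ...]
        })
    return out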

On the other hand, data is the dataset the user imported, split into batches of size batch_size. The evaluate function calls the pipeline in a loop until all batches have been processed. Each batch is a dictionary of lists (dict[str, list]). For example, if we import this .csv:

name, age
Alice, 30
Bob, 25
Charlie, 35

with batch_size=3, the data passed to the pipeline looks like this:

>>> data['name']
['Alice', 'Bob', 'Charlie']
>>> data['age'][0]
30
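
The sketch below gives an illustrative, plain-Python view of that batching loop. It is not the package's actual implementation; iterate_batches and full_data are hypothetical names used only for illustration, and pipeline is the user-defined function described above.

# Illustrative only: how a dict-of-lists dataset is split into batches
# and fed to the pipeline, one batch per call.
def iterate_batches(full_data, batch_size):
    columns = list(full_data.keys())
    n_rows = len(full_data[columns[0]])
    for start in range(0, n_rows, batch_size):
        yield {col: full_data[col][start:start + batch_size] for col in columns}

predictions = []
for batch in iterate_batches(full_data, batch_size=3):
    predictions.extend(pipeline(batch))   # pipeline receives one dict[str, list] per call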



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

aicard_eval-0.1.1.tar.gz (15.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

aicard_eval-0.1.1-py3-none-any.whl (16.2 kB)

Uploaded Python 3

File details

Details for the file aicard_eval-0.1.1.tar.gz.

File metadata

  • Download URL: aicard_eval-0.1.1.tar.gz
  • Upload date:
  • Size: 15.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for aicard_eval-0.1.1.tar.gz
Algorithm Hash digest
SHA256 152dd09d940096f38cd5e69e9528db04f9960674aa5c3297494098db5c54da46
MD5 09f20676d9ab333cced3e23967279f70
BLAKE2b-256 ec8d35c77ecf3ed74ed94e1cb903986303b64dd0643024ec2b2f5056fe6d7a66

See more details on using hashes here.

File details

Details for the file aicard_eval-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: aicard_eval-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 16.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for aicard_eval-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 a3d3272ea3a3e105308e86b8e8fb0c20f36074d56655f828eccc06e617f55980
MD5 4661c3bfeb728a8865554c257d5fb9c3
BLAKE2b-256 fa6f9c932b0fb2be692d52650dafd8cb7c2068d899b243fc4c69b598ef048128

See more details on using hashes here.
