A library for testing fastai learners using several evaluation techniques.

vision_models_evaluation

Install

To install the library, just run:

pip install vision_models_evaluation

How to use

This library provides a method that helps you evaluate your deep learning models using scikit-learn validation techniques.

In order to validate your model, you will need to build and train several versions of it (for example, using KFold validation with five splits, five different models are built and trained).
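As a minimal sketch of what that means in practice (using scikit-learn's KFold, just as the example below does), each fold holds out a different validation subset, so one model is trained per fold:

```python
from sklearn.model_selection import KFold

# Ten hypothetical item indices standing in for a list of image files
items = list(range(10))

technique = KFold(n_splits=5)
for fold, (train_idx, valid_idx) in enumerate(technique.split(items)):
    # Each fold holds out a different 20% of the items for validation
    print(fold, list(train_idx), list(valid_idx))
```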

To do so, you need to provide: the DataBlock hparams (hyperparameters), the DataLoader hparams, the technique used to split the data, the Learner construction hparams, the learning mode (fit_one_cycle to train from scratch, or finetune to fine-tune a pretrained model) and the Learner training hparams. So, the first step is to define them all:

# Imports assumed by the snippets below; get_y_fn, codes, path_images,
# TargetMaskConvertTransform and transformPipeline are user-defined.
from functools import partial
from fastai.vision.all import *
from sklearn.model_selection import KFold

# DataBlock hyperparameters
db_hparams = {
    "blocks": (ImageBlock, MaskBlock(codes)),
    "get_items": partial(get_image_files, folders=['train']),
    "get_y": get_y_fn,
    "item_tfms": [Resize((480, 640)), TargetMaskConvertTransform(), transformPipeline],
    "batch_tfms": Normalize.from_stats(*imagenet_stats)
}

# DataLoader hyperparameters
dl_hparams = {
    "source": path_images,
    "bs": 4
}

# Data-splitting technique
technique = KFold(n_splits=5)

# Learner construction hyperparameters
learner_hparams = {
    "arch": resnet18,
    "pretrained": True,
    "metrics": [DiceMulti()]
}

# Learner training hyperparameters
learning_hparams = {
    "epochs": 10,
    "base_lr": 0.001,
    "freeze_epochs": 1
}
learning_mode = "finetune"

Then, call the evaluate method with those defined hparams. After the execution, the method will return a dictionary of results: for each metric used to test the model, the list of values obtained in each fold.

r = evaluate(
    db_hparams,
    dl_hparams,
    technique,
    learner_hparams,
    learning_hparams,
    learning_mode
)

Finally, you can plot the metrics using a boxplot from pandas, for example:

import pandas as pd

# One column per metric, one row per fold
df = pd.DataFrame(r)
df.boxplot("DiceMulti");

print(
    df["DiceMulti"].mean(),
    df["DiceMulti"].std()
)


You can use this method to evaluate a single model, but you can also use it to compare several models with distinct hparams: get the results for each of them and then plot the average of their metrics.
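For example, a minimal sketch of that comparison step with pandas, assuming you have already collected one results dictionary per configuration (the configuration names and per-fold scores below are hypothetical placeholders, not real measurements):

```python
import pandas as pd

# Hypothetical per-fold DiceMulti scores for two configurations,
# standing in for the dictionaries returned by evaluate(...)
results = {
    "resnet18": {"DiceMulti": [0.80, 0.82, 0.79, 0.81, 0.83]},
    "resnet34": {"DiceMulti": [0.84, 0.83, 0.85, 0.82, 0.86]},
}

# Average each metric across folds: one row per configuration
summary = pd.DataFrame(
    {name: {metric: sum(vals) / len(vals) for metric, vals in res.items()}
     for name, res in results.items()}
).T
print(summary)
```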

