A library to test fastai learners using scikit-learn validation techniques.
Project description
vision_models_evaluation
Install
To install the library, just run:
pip install vision_models_evaluation
How to use
This library provides a method that helps you evaluate your models: using scikit-learn validation techniques, you can validate your deep learning models.
To validate a model this way, you need to build and train several versions of it (for example, a KFold validation with five splits requires building and training five different models).
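To make the splitting mechanism concrete, here is a minimal sketch using scikit-learn directly (plain KFold over a dummy item list, not this library's API):

# Minimal sketch of the underlying splitting mechanism (plain scikit-learn,
# not this library's API): KFold yields train/validation index pairs, and
# each pair corresponds to one model version to build and train.
from sklearn.model_selection import KFold

items = list(range(20))  # stand-in for your list of image files
kf = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, valid_idx) in enumerate(kf.split(items)):
    print(f"fold {fold}: {len(train_idx)} train / {len(valid_idx)} valid items")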
To build those versions with this library, you need to provide: the DataBlock hparams (hyperparameters), the DataLoader hparams, the technique used to split the data, the Learner construction hparams, the learning mode (whether to use a pretrained model or not: fit_one_cycle or finetune) and the Learner training hparams. So, the first step is to define them all:
# Imports assumed by this example: the fastai vision API, functools.partial
# and scikit-learn's KFold. codes, get_y_fn, TargetMaskConvertTransform,
# transformPipeline and path_images are user-defined for your own dataset.
from functools import partial
from fastai.vision.all import *
from sklearn.model_selection import KFold

# DataBlock construction hparams
db_hparams = {
    "blocks": (ImageBlock, MaskBlock(codes)),
    "get_items": partial(get_image_files, folders=['train']),
    "get_y": get_y_fn,
    "item_tfms": [Resize((480, 640)), TargetMaskConvertTransform(), transformPipeline],
    "batch_tfms": Normalize.from_stats(*imagenet_stats)
}

# DataLoaders hparams
dl_hparams = {
    "source": path_images,
    "bs": 4
}

# Data-splitting technique: a scikit-learn splitter
technique = KFold(n_splits=5)

# Learner construction hparams
learner_hparams = {
    "arch": resnet18,
    "pretrained": True,
    "metrics": [DiceMulti()]
}

# Learner training hparams
learning_hparams = {
    "epochs": 10,
    "base_lr": 0.001,
    "freeze_epochs": 1
}

# Learning mode: "finetune" for a pretrained model, otherwise fit_one_cycle
learning_mode = "finetune"
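Since the technique is a plain scikit-learn splitter, other splitters should presumably work as well (assuming the library only needs the split interface), for example:

# Assumption: any scikit-learn splitter exposing .split() can be passed as the
# technique; RepeatedKFold is shown purely as an illustration.
from sklearn.model_selection import RepeatedKFold

technique = RepeatedKFold(n_splits=5, n_repeats=2, random_state=42)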
Then, you need to call the evaluate method with those hparams. After execution, the method returns a dictionary of results: for each metric used to test the model, the values obtained in each fold.
r = evaluate(
    db_hparams,
    dl_hparams,
    technique,
    learner_hparams,
    learning_hparams,
    learning_mode
)
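The returned dictionary is expected to map each metric name to its per-fold values, so it can be inspected directly (the numbers in the comment are illustrative, not real results):

# r maps each metric name to its per-fold values, e.g. something like
# {"DiceMulti": [0.81, 0.79, 0.83, 0.80, 0.82]} (illustrative numbers only).
for metric, fold_values in r.items():
    print(metric, fold_values)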
Finally, you can plot the metrics using a boxplot from pandas, for example:
import pandas as pd

df = pd.DataFrame(r)
df.boxplot("DiceMulti");

print(
    df["DiceMulti"].mean(),
    df["DiceMulti"].std()
)
You can use this method to evaluate a single model, but also to compare several models with distinct hparams: get the results for each of them and then plot the averages of their metrics.
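For instance, building on the snippets above, a two-backbone comparison could look like the following sketch (resnet34 is an illustrative assumption, not part of the original example):

# Sketch: compare two backbones by calling evaluate once per architecture and
# averaging their per-fold DiceMulti scores. Reuses the hparams defined above;
# resnet34 is an illustrative assumption.
from torchvision.models import resnet34

mean_scores = {}
for arch in (resnet18, resnet34):
    hp = {**learner_hparams, "arch": arch}  # swap only the backbone
    res = evaluate(db_hparams, dl_hparams, KFold(n_splits=5),
                   hp, learning_hparams, learning_mode)
    mean_scores[arch.__name__] = pd.DataFrame(res)["DiceMulti"].mean()

print(mean_scores)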