KappaBenchmark
Utilities for benchmarking PyTorch applications.
Setup
pip install kappabenchmark
Dataloading
import kappabenchmark as kbm

dataloader = ...
result = kbm.benchmark_dataloading(
    dataloader=dataloader,
    num_epochs=...,
)
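As a complete, runnable sketch, the placeholders above might be filled in as follows. The toy TensorDataset and the DataLoader settings are illustrative assumptions; only kbm.benchmark_dataloading and its dataloader/num_epochs arguments come from the snippet above.

import torch
from torch.utils.data import DataLoader, TensorDataset

import kappabenchmark as kbm

# toy dataset/dataloader (illustrative values, not part of kappabenchmark)
dataset = TensorDataset(torch.randn(1024, 3, 32, 32))
dataloader = DataLoader(dataset, batch_size=256, num_workers=4)

# measure how long it takes to iterate over the dataloader
result = kbm.benchmark_dataloading(
    dataloader=dataloader,
    num_epochs=2,
)
print(result)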
Predefined benchmarks
For example, run the imagefolder benchmark over a grid of dataloader configurations:
python main_benchmark_grid.py --benchmark imagefolder --root ROOT --num_epochs 5 --batch_size 256 --num_workers 8,16 --num_fetch_workers 0,2,4
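The comma-separated values passed to --num_workers and --num_fetch_workers are presumably expanded into a grid, so the command above would run 2 x 3 = 6 configurations (one per num_workers/num_fetch_workers combination).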
Register your own benchmark
Write a script run_mybenchmark.py:
import torch
from torch.utils.data import TensorDataset

from kappabenchmark.dataloading_benchmarks import DATALOADING_BENCHMARKS, DataloadingBenchmark
from kappabenchmark.scripts.main_benchmark_grid import parse_args, main


def mybenchmark(root):
    # root is an (optional) path to a directory that is passed via --root;
    # for this toy dataset it is not needed
    return DataloadingBenchmark(dataset=TensorDataset(torch.randn(1024)))


if __name__ == "__main__":
    DATALOADING_BENCHMARKS["mybenchmark"] = mybenchmark
    main(**parse_args())
Then run it, e.g.:
python run_mybenchmark.py --benchmark mybenchmark [--root ROOT] --num_epochs 5 --batch_size 256 --num_workers 8,16 --num_fetch_workers 0,2,4
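If a benchmark needs data from disk, the root argument can be used to locate it. A minimal sketch, assuming DataloadingBenchmark only needs a dataset (as in the toy example above); torchvision's ImageFolder is used purely for illustration:

from torchvision.datasets import ImageFolder
from torchvision.transforms import ToTensor

from kappabenchmark.dataloading_benchmarks import DataloadingBenchmark


def my_imagefolder_benchmark(root):
    # root is the directory passed via --root
    return DataloadingBenchmark(dataset=ImageFolder(root, transform=ToTensor()))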