torch compatibility layer
Project description
pip install torchcompat
Features
Provides a superset implementation of the PyTorch device interface so that code can run seamlessly across different accelerators.
Uniquely identify devices:
import torch
import torchcompat.core as accelerator

# on cuda  accelerator == torch.cuda
# on rocm  accelerator == torch.cuda
# on xpu   accelerator == torch.xpu
# on gaudi accelerator == ...

assert accelerator.is_available()
assert accelerator.device_name in ("cuda", "xpu", "hpu")  # rocm is seen as cuda by pytorch
assert accelerator.device_string(0) in ("cuda:0", "xpu:0", "hpu:0")
assert accelerator.fetch_device(0) == torch.device("cuda:0")  # on a cuda system

accelerator.set_enable_tf32(True)  # toggle the right flags for each backend
Example
A minimal sketch of a device-agnostic training step, assuming only the accelerator API shown above; the model, data, and optimizer are illustrative placeholders:
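import torch
import torch.nn as nn
import torchcompat.core as accelerator

# Pick the local accelerator without hard-coding "cuda", "xpu", or "hpu".
device = accelerator.fetch_device(0)
accelerator.set_enable_tf32(True)

# Illustrative model and data; any module/tensor pair works the same way.
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

inputs = torch.randn(8, 16, device=device)
targets = torch.randn(8, 4, device=device)

for _ in range(10):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(f"final loss on {accelerator.device_string(0)}: {loss.item():.4f}")

The same script runs unchanged on CUDA, ROCm, XPU, or Gaudi systems because the backend-specific setup is handled by torchcompat.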