Torch compatibility layer
pip install torchcompat
Features
Provide a superset implementation of the PyTorch device interface so the same code runs seamlessly across different accelerators.
Uniquely identify devices
import torch
import torchcompat.core as accelerator

# on cuda accelerator == torch.cuda
# on rocm accelerator == torch.cuda
# on xpu accelerator == torch.xpu
# on gaudi accelerator == ...
assert accelerator.is_available()
assert accelerator.device_name in ("cuda", "xpu", "hpu")  # rocm is seen as cuda by pytorch
assert accelerator.device_string(0) in ("cuda:0", "xpu:0", "hpu:0")
assert accelerator.fetch_device(0) == torch.device(accelerator.device_string(0))
accelerator.set_enable_tf32(True)  # toggle the right flags for each backend
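Note that ROCm builds of PyTorch expose AMD GPUs through the torch.cuda namespace, which is why torchcompat reports them as cuda; code written against accelerator needs no AMD-specific branches.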
Example
A minimal end-to-end sketch, assuming a PyTorch install with at least one supported accelerator present; the tiny linear model and random batch below are illustrative placeholders, not part of torchcompat.
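import torch
import torchcompat.core as accelerator

device = accelerator.fetch_device(0)  # e.g. torch.device("cuda:0")
accelerator.set_enable_tf32(True)     # enable TF32 where the backend supports it

# Placeholder model and batch, just to show device placement.
model = torch.nn.Linear(16, 4).to(device)
batch = torch.randn(8, 16, device=device)
output = model(batch)
print(accelerator.device_name, output.shape)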