torch compatibility layer
Project description
```bash
pip install torchcompat
```
Features
- Provides a superset implementation of the PyTorch device interface, enabling the same code to run seamlessly across different accelerators.
- Uniquely identifies the available device.
```python
import torch
import torchcompat.core as accelerator

# on cuda:  accelerator == torch.cuda
# on rocm:  accelerator == torch.cuda (ROCm is seen as CUDA by PyTorch)
# on xpu:   accelerator == torch.xpu
# on gaudi: accelerator == ...

assert accelerator.is_available()
assert accelerator.device_name in ("cuda", "xpu", "hpu")
assert accelerator.device_string(0) in ("cuda:0", "xpu:0", "hpu:0")
assert accelerator.fetch_device(0) == torch.device("cuda:0")  # on a CUDA machine

accelerator.set_enable_tf32(True)  # toggles the right flags for each backend
```
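On CUDA, a call like `set_enable_tf32(True)` is roughly equivalent to flipping PyTorch's two standard TF32 switches. The sketch below is an assumption about the mechanism, not torchcompat's actual implementation; the helper name `set_enable_tf32_cuda` is hypothetical, but the two `torch.backends` flags are the standard PyTorch TF32 controls:

```python
import torch

def set_enable_tf32_cuda(enabled: bool) -> None:
    """Hypothetical CUDA-only helper; torchcompat may toggle additional flags."""
    # Allow TF32 in matmuls and cuDNN convolutions (standard PyTorch switches).
    torch.backends.cuda.matmul.allow_tf32 = enabled
    torch.backends.cudnn.allow_tf32 = enabled
```

TF32 trades a few mantissa bits for substantially faster matmul and convolution kernels on Ampere-and-newer GPUs, which is why a single cross-backend toggle is convenient.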
Example
A minimal device-agnostic sketch, using only the calls documented above (the model and batch are placeholders):
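```python
import torch
import torch.nn as nn
import torchcompat.core as accelerator

# Resolve whichever accelerator is present (cuda, rocm-as-cuda, xpu, hpu).
device = accelerator.fetch_device(0)
accelerator.set_enable_tf32(True)

# Toy model and batch; the same code runs unmodified on any supported backend.
model = nn.Linear(128, 10).to(device)
batch = torch.randn(32, 128, device=device)

loss = model(batch).sum()
loss.backward()
print(f"ran on {accelerator.device_string(0)}")
```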
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution

torchcompat-0.0.1.tar.gz (5.3 kB)
Built Distribution

torchcompat-0.0.1-py3-none-any.whl
Hashes for torchcompat-0.0.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 1b98424a3ebdff4930906818b1fb317a41fba96c81864101eeedca7d454296d5
MD5 | 8c526c384606783637fcbf45cef676ed
BLAKE2b-256 | 6c1d002d09888199005916d9b44872bd62a67116645017b33fdff22ff33589a8