DDPW
Distributed Data Parallel Wrapper (DDPW) is a lightweight wrapper that scaffolds PyTorch's DistributedDataParallel.
DDPW is written in Python 3.10. See the DDPW documentation for details on how to use this package.
Overview
Installation
conda install -c tvsujal ddpw # with conda
pip install ddpw # with pip from PyPI
Usage
from ddpw import Platform, Wrapper
# some task
def task(global_rank, local_rank, group, args):
    print(f'This is GPU {global_rank}(G)/{local_rank}(L); args = {args}')
# platform (e.g., 4 GPUs)
platform = Platform(device='gpu', n_gpus=4)
# wrapper
wrapper = Wrapper(platform=platform)
# start
wrapper.start(task, ('example',))
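To clarify what the task callable receives, here is a hypothetical sketch (it does not use ddpw itself and the `group` placeholder is an assumption): the wrapper conceptually invokes the task once per spawned process, passing that process's global rank, local rank, process group, and the user arguments. On a single 4-GPU node, the global rank equals the local rank.

```python
# Hypothetical illustration (not ddpw's implementation): simulate invoking the
# task once per process on a single 4-GPU node, where global rank == local rank.
def task(global_rank, local_rank, group, args):
    # same signature as the usage example above; return instead of print
    return f'This is GPU {global_rank}(G)/{local_rank}(L); args = {args}'

# one call per simulated process; `group` is a stand-in placeholder here
outputs = [task(rank, rank, None, ('example',)) for rank in range(4)]
print(outputs[0])  # This is GPU 0(G)/0(L); args = ('example',)
```

In actual use, `wrapper.start(task, ('example',))` performs this fan-out across processes for you, with `group` supplied by the platform setup.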
Download files
Source distribution: ddpw-5.2.0.tar.gz (10.7 kB)
Built distribution: ddpw-5.2.0-py3-none-any.whl (11.1 kB)