DDPW
Distributed Data Parallel Wrapper (DDPW) is a lightweight wrapper that scaffolds PyTorch's Distributed Data Parallel (DDP) setup.
This package is written in Python 3.10. The DDPW documentation contains details on how to use this package.
Overview
Installation
conda install -c tvsujal ddpw  # with conda
pip install ddpw               # with pip from PyPI
Usage
from ddpw import Platform, Wrapper

# some job to run on every process
def run(global_rank, local_rank, args):
    print(f'This is process {global_rank} on device {local_rank}; args = {args}')

# platform configuration (e.g., 4 GPUs)
platform = Platform(device='gpu', n_gpus=4)

# wrapper
wrapper = Wrapper(platform=platform)

# start the job, passing the given arguments through to it
wrapper.start(run, (0, 'example', [0.1, 0.2], {'a': 0}))
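The global_rank and local_rank passed to the job identify each process: the local rank indexes a device within a node, while the global rank indexes the process across all nodes. A minimal sketch of this rank arithmetic, using a hypothetical helper that is not part of the ddpw API:

```python
# Hypothetical helper illustrating the rank bookkeeping a DDP wrapper
# manages internally; not part of the ddpw API.
def ranks(n_nodes: int, gpus_per_node: int):
    """Yield (global_rank, local_rank) for every spawned process."""
    for node in range(n_nodes):
        for local_rank in range(gpus_per_node):
            global_rank = node * gpus_per_node + local_rank
            yield global_rank, local_rank

# 2 nodes x 4 GPUs per node -> 8 processes with global ranks 0..7
print(list(ranks(2, 4)))
# [(0, 0), (1, 1), (2, 2), (3, 3), (4, 0), (5, 1), (6, 2), (7, 3)]
```

On a single machine, as in the 4-GPU example above, the global and local ranks coincide.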
Download files
Source distribution: ddpw-5.1.0.tar.gz (9.5 kB)
Built distribution: ddpw-5.1.0-py3-none-any.whl (10.2 kB)