
A lightweight wrapper that scaffolds PyTorch's Distributed Data Parallel setup.

Project description

DDPW

Distributed Data Parallel Wrapper (DDPW) is a lightweight Python wrapper for PyTorch users.

DDPW handles basic logistical tasks such as spawning processes on GPUs or SLURM nodes and setting up inter-process communication. It also provides simple default utilities to move modules to devices and to obtain dataset samplers, letting the user focus on the main aspects of the task. It is written in Python 3.10. The documentation contains details on how to use this package.
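For illustration, the kind of logistics a distributed dataset sampler handles can be sketched in plain Python: each of `world_size` processes reads a disjoint, strided slice of the dataset indices, so every sample is seen by exactly one process per epoch. This mirrors the behaviour of `torch.utils.data.DistributedSampler`; the helper below is a standalone sketch for intuition, not DDPW's actual API.

```python
def shard_indices(dataset_len: int, world_size: int, rank: int) -> list[int]:
    """Strided partition of dataset indices across processes.

    Rank r of `world_size` processes sees indices r, r + world_size, ...
    (Real samplers additionally shuffle and pad so all shards are equal-sized.)
    """
    return list(range(rank, dataset_len, world_size))

# 10 samples split across 4 processes: each index lands on exactly one rank
shards = [shard_indices(10, 4, r) for r in range(4)]
# e.g. rank 0 gets [0, 4, 8], rank 1 gets [1, 5, 9]
```

Together the shards cover the whole dataset with no overlap, which is what allows each process to compute gradients on its own slice before they are averaged.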

Overview

Installation


conda install ddpw -c tvsujal # with conda
pip install ddpw # with pip from PyPI

Usage

from ddpw import Platform, Wrapper

# some task
def task(global_rank, local_rank, group, args):
    print(f'This is GPU {global_rank}(G)/{local_rank}(L); args = {args}') 

# platform (e.g., 4 GPUs)
platform = Platform(device='gpu', n_gpus=4)

# wrapper
wrapper = Wrapper(platform=platform)

# start
wrapper.start(task, ('example',))
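The task above receives both a global and a local rank. Assuming a homogeneous cluster of nodes with the same number of GPUs each (the standard convention, sketched here with illustrative names rather than DDPW's internals), the two are related by `global_rank = node_index * gpus_per_node + local_rank`:

```python
def ranks(node_index: int, gpus_per_node: int) -> list[tuple[int, int]]:
    """Map each GPU on one node to its (global_rank, local_rank) pair.

    On a homogeneous cluster, the global rank is unique across all nodes,
    while the local rank restarts from 0 on every node.
    """
    return [
        (node_index * gpus_per_node + local, local)
        for local in range(gpus_per_node)
    ]

# two nodes with 2 GPUs each: global ranks 0..3, local ranks 0..1 per node
print(ranks(0, 2))  # [(0, 0), (1, 1)]
print(ranks(1, 2))  # [(2, 0), (3, 1)]
```

The local rank is what you would typically use to pick the CUDA device on a node, while the global rank identifies the process across the whole job.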

Status

Status badges: Publish to Anaconda · Publish to PyPI · Publish documentation

Download files

Source distribution: ddpw-5.3.0.tar.gz (12.2 kB)

Built distribution: ddpw-5.3.0-py3-none-any.whl (12.6 kB)
