
Log memory usage and automatically choose a device for machine learning model inference

Project description

soco-device

A package for logging and saving memory usage and automatically choosing a device during the model inference stage.

Usage

  1. Install the package

    pip install soco-device

  2. Example usage

import torch

from soco_device import DeviceCheck

dc = DeviceCheck()
# Returns a device name ('cpu'/'cuda') and a list of GPU ids, if any
model_name_or_path = '<model name>'
n_gpu_needed = 2
device_name, device_ids = dc.get_device_by_model(model_name_or_path, n_gpu=n_gpu_needed)

# If exactly one GPU was assigned, set device_name to 'cuda:gpu_id'
# For the CPU or multi-GPU case, keep device_name as 'cpu'/'cuda'
device_name = '{}:{}'.format(device_name, device_ids[0]) if len(device_ids) == 1 else device_name

device = torch.device(device_name)

# ......
# Set up multi-GPU execution with DataParallel if available
if len(device_ids) > 1:
    model = torch.nn.DataParallel(model, device_ids=device_ids)


# Log the GPU memory needed during the inference stage
dc.log_start()
# ......
# run one inference pass here
dc.log_end()
dc.save(model_name_or_path)
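The device-name rule above (single GPU becomes `'cuda:<id>'`, while CPU and multi-GPU keep the bare `'cpu'`/`'cuda'` string) can be isolated into a small helper. This is a minimal sketch, not part of the `soco_device` API; `format_device_name` is a hypothetical function that mirrors the ternary expression in the example:

```python
def format_device_name(device_name, device_ids):
    """Mirror the rule from the usage example: append the GPU id only
    when exactly one GPU was assigned; otherwise keep the bare name."""
    if len(device_ids) == 1:
        return '{}:{}'.format(device_name, device_ids[0])
    return device_name

print(format_device_name('cuda', [0]))     # cuda:0
print(format_device_name('cuda', [0, 1]))  # cuda  (DataParallel handles the ids)
print(format_device_name('cpu', []))       # cpu
```

The bare `'cuda'` string in the multi-GPU case is intentional: `torch.nn.DataParallel` receives the full `device_ids` list and dispatches replicas itself, so the model only needs to start on the default CUDA device.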

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

soco_device-0.0.5.2-py3-none-any.whl (12.2 kB)

Uploaded Python 3
