Log memory usage and automatically choose a device for machine learning model inference
Project description
soco-device
A package for logging and saving GPU memory usage, and for automatically choosing a device, during the model inference stage.
Usage

- Install the package:

```shell
pip install soco-device
```

- Example usage:
```python
import torch

from soco_device import DeviceCheck

dc = DeviceCheck()

# Returns a device name ('cpu'/'cuda') and a list of GPU ids, if any
model_name_or_path = <model name>
n_gpu_needed = 2
device_name, device_ids = dc.get_device_by_model(model_name_or_path, n_gpu=n_gpu_needed)

# If exactly one GPU was assigned, set device_name to 'cuda:gpu_id';
# for the CPU and multi-GPU cases, keep device_name as 'cpu'/'cuda'
device_name = '{}:{}'.format(device_name, device_ids[0]) if len(device_ids) == 1 else device_name
device = torch.device(device_name)

# ......

# Set up multi-GPU training if available
if len(device_ids) > 1:
    model = torch.nn.DataParallel(model, device_ids=device_ids)

# Log the GPU memory needed during the inference stage
dc.log_start()
# ......
# run model inference once here
dc.log_end()
dc.save(model_name_or_path)
```
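The one-liner that builds the final device string can be factored into a small helper. A minimal sketch in plain Python (the `format_device` name is hypothetical, not part of soco-device):

```python
def format_device(device_name, device_ids):
    """Return a torch-style device string.

    A single assigned GPU becomes 'cuda:<id>' so tensors land on that
    exact card; zero or multiple GPUs keep the bare 'cpu'/'cuda' name
    (in the multi-GPU case, DataParallel consumes the id list itself).
    """
    if len(device_ids) == 1:
        return '{}:{}'.format(device_name, device_ids[0])
    return device_name


print(format_device('cuda', [3]))     # -> 'cuda:3'
print(format_device('cuda', [0, 1]))  # -> 'cuda'
print(format_device('cpu', []))       # -> 'cpu'
```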
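The `log_start`/`log_end`/`save` calls bracket a single inference pass. The same start/end snapshot pattern can be sketched with Python's built-in `tracemalloc`; this is an illustration of the idea only, not soco-device's implementation (it traces host allocations, whereas the package logs GPU memory, and the `MemoryLogger` class here is hypothetical):

```python
import json
import tracemalloc


class MemoryLogger:
    """Snapshot-style logger: record peak memory between start and end."""

    def __init__(self):
        self.peak_bytes = None

    def log_start(self):
        # Begin tracing Python allocations
        tracemalloc.start()

    def log_end(self):
        # Capture the peak usage since log_start, then stop tracing
        _, self.peak_bytes = tracemalloc.get_traced_memory()
        tracemalloc.stop()

    def save(self, name, path='memory_log.json'):
        # Persist the measurement keyed by model name
        with open(path, 'w') as f:
            json.dump({name: self.peak_bytes}, f)


logger = MemoryLogger()
logger.log_start()
buf = [0] * 100_000  # stand-in for one inference pass
logger.log_end()
logger.save('my-model')
```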
Project details
Built Distribution
File details
Details for the file soco_device-0.0.5.2-py3-none-any.whl.
File metadata
- Download URL: soco_device-0.0.5.2-py3-none-any.whl
- Upload date:
- Size: 12.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/46.1.3 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.6.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0c398cda97fe5c89c98dc51d942f1a2caeac2c160273532e74e8bacb93ac85b8
MD5 | 5b6cb91a0e6b50df79f5a48071c6976a
BLAKE2b-256 | 1220615c73765ea8f2bf22141d6b398ba749589c7b3e8758d3de0d79fd6c5d49