# Choose GPU (supports NVIDIA CUDA and Apple Silicon MPS)
Automatically configure GPU usage for PyTorch and TensorFlow. Supports both NVIDIA CUDA GPUs and Apple Silicon (M series) MPS.
## Overview
choosegpu helps you manage GPU device selection and configuration before importing deep learning frameworks. It:
- On NVIDIA systems: Automatically detects available CUDA GPUs and sets `CUDA_VISIBLE_DEVICES` to use free GPUs
- On Apple Silicon (Mac M series): Configures PyTorch to use the Metal Performance Shaders (MPS) backend
- Respects your preferences for which GPUs to use
- Can enable memory pooling on NVIDIA GPUs (requires RMM and CuPy)
Important behavioral difference between platforms:
- NVIDIA CUDA: When you call `configure_gpu(enable=False)`, it sets `CUDA_VISIBLE_DEVICES="-1"`, which makes `torch.cuda.is_available()` return `False`. The GPU is truly "hidden" from PyTorch/TensorFlow.
- Apple Silicon MPS: When you call `configure_gpu(enable=False)`, the setting is recorded but the GPU is not actually disabled at the PyTorch level, because the hardware is always visible. Your code must check `choosegpu.is_gpu_enabled()` (or the raw `choosegpu.get_gpu_config()`) to determine whether to use it.
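To make this concrete, here is a minimal sketch of a device-selection helper that respects the choosegpu configuration on both platforms; the `select_device` function is our own illustration, not part of the choosegpu API:

```python
import choosegpu

choosegpu.configure_gpu(enable=False)  # request CPU-only mode

import torch

def select_device() -> torch.device:
    # Trust choosegpu's recorded intent first: on Apple Silicon,
    # torch.backends.mps.is_available() stays True even after
    # configure_gpu(enable=False), so torch alone cannot tell us.
    if choosegpu.is_gpu_enabled():
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
    return torch.device("cpu")

device = select_device()  # torch.device("cpu") here, on both platforms
```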
## Installation

```bash
pip install choosegpu
```
This installs choosegpu with all dependencies, including nvitop for NVIDIA GPU detection. On Mac Silicon, nvitop will be installed but not used (the library automatically detects the platform and uses MPS instead).
### Optional: Memory Pooling (NVIDIA only)

If you want to use `memory_pool=True` on NVIDIA GPUs, also install RMM and CuPy:

```bash
pip install rmm-cu11 cupy-cuda11x  # Adjust CUDA version as needed
```
## Usage

### Basic Usage
```python
import choosegpu

# Enable GPU (uses MPS on Mac Silicon, CUDA on NVIDIA)
choosegpu.configure_gpu(enable=True)

# Now import your deep learning framework
import torch

# The appropriate GPU backend will be configured
```
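Continuing the snippet above, a small sketch of putting a tensor on whichever backend PyTorch reports after configuration (note that on Mac Silicon this check mirrors hardware availability, so it is appropriate here only because the GPU was enabled above):

```python
# Pick the backend PyTorch sees after configure_gpu(enable=True)
device = torch.device(
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
x = torch.ones(8, 8, device=device)  # lives on the GPU if one was selected
```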
### Disable GPU (only effective on NVIDIA, not on Mac Silicon)

```python
import choosegpu

# Disable GPU - use CPU only
choosegpu.configure_gpu(enable=False)

import torch

# On Mac Silicon: check choosegpu.is_gpu_enabled() to see if GPU should be used
```
### Check GPU Configuration

```python
import choosegpu

choosegpu.configure_gpu(enable=True)

# Check if GPU is enabled (recommended)
gpu_enabled = choosegpu.is_gpu_enabled()
# Returns: True if GPU is enabled
# Returns: False if GPU is disabled (CPU mode)
# Returns: None if not yet configured

if gpu_enabled:
    print("GPU is enabled")
elif gpu_enabled is False:
    print("GPU is disabled (CPU mode)")
else:
    print("GPU not yet configured")

# Advanced: Get the raw GPU configuration
gpu_config = choosegpu.get_gpu_config()
# Returns: ["mps"] on Mac Silicon, or ["GPU-UUID"] on NVIDIA
# Returns: ["-1"] when disabled
# Returns: None if not configured
```
### Check Hardware Availability (PyTorch)

```python
import choosegpu

choosegpu.configure_gpu(enable=False)  # Disable GPU

# Check configuration state
print(f"GPU enabled: {choosegpu.is_gpu_enabled()}")  # False

# Check hardware availability (requires PyTorch)
print(f"GPU hardware available: {choosegpu.check_if_gpu_libraries_see_gpu()}")
# On Mac Silicon: True (hardware cannot be hidden)
# On NVIDIA: False (CUDA_VISIBLE_DEVICES="-1" hides hardware)
```
### Advanced: Prefer Specific GPUs (NVIDIA only)

```python
import choosegpu

# Prefer specific GPU IDs if they're available
choosegpu.configure_gpu(enable=True, gpu_device_ids=[2, 3])

# Or set a global preference
choosegpu.preferred_gpu_ids = [2, 3]
choosegpu.configure_gpu(enable=True)
```
### Memory Pooling (NVIDIA only)

```python
import choosegpu

# Enable memory pooling with RMM (requires rmm and cupy)
choosegpu.configure_gpu(enable=True, memory_pool=True)
```
## Platform Support

| Platform | GPU Backend | Hardware Detection | GPU Disable Behavior |
|---|---|---|---|
| Apple Silicon (M series) | MPS (Metal Performance Shaders) | Always available if hardware supports it | Sets a configuration flag, but `torch.backends.mps.is_available()` remains `True` |
| NVIDIA | CUDA | Uses nvitop to detect available GPUs | Sets `CUDA_VISIBLE_DEVICES="-1"`, making `torch.cuda.is_available()` return `False` |
## API Reference

### `configure_gpu(enable=True, desired_number_of_gpus=1, memory_pool=False, gpu_device_ids=None, overwrite_existing_configuration=True)`
Configure GPU usage. Must be called before importing PyTorch/TensorFlow.
Parameters:
- `enable` (bool): Enable or disable GPU
- `desired_number_of_gpus` (int): Number of GPUs to use (NVIDIA only, ignored on Mac Silicon)
- `memory_pool` (bool): Enable memory pooling with RMM (NVIDIA only)
- `gpu_device_ids` (list): Preferred GPU IDs to use if available (NVIDIA only)
- `overwrite_existing_configuration` (bool): Whether to overwrite an existing GPU configuration
Returns: List of GPU device identifiers (e.g., `["mps"]` or `["GPU-UUID-123"]`)
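For illustration, a hedged example exercising several parameters at once; it assumes an NVIDIA machine where GPUs 0 and 1 are free (on Apple Silicon the NVIDIA-only parameters are ignored and `["mps"]` is returned):

```python
import choosegpu

devices = choosegpu.configure_gpu(
    enable=True,
    desired_number_of_gpus=2,                # NVIDIA only
    gpu_device_ids=[0, 1],                   # preferred IDs, if free (NVIDIA only)
    overwrite_existing_configuration=False,  # keep any earlier configuration
)
print(devices)  # e.g. ["GPU-UUID-...", "GPU-UUID-..."], or ["mps"] on Mac Silicon
```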
### `is_gpu_enabled()`
Check if GPU is enabled. This is the recommended way to check GPU configuration status.
Returns:
- `True`: GPU is enabled (configured to use GPU)
- `False`: GPU is disabled (configured to use CPU only)
- `None`: GPU configuration has not been set yet
### `check_if_gpu_libraries_see_gpu()`
Check if PyTorch can see GPU hardware (CUDA or MPS). This requires PyTorch to be installed.
Important platform differences:
- Mac Silicon: Returns `True` if PyTorch MPS is available, **regardless** of `configure_gpu()` settings. Mac hardware cannot be "hidden" the way CUDA can.
- NVIDIA: Returns `True` if CUDA is available AND not disabled by `configure_gpu()`. When `configure_gpu(enable=False)` is called, `CUDA_VISIBLE_DEVICES="-1"` makes `torch.cuda.is_available()` return `False`.
Returns: `True` if GPU hardware is available to PyTorch, `False` otherwise (including when PyTorch is not installed)
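A short sketch of combining the configuration check with the hardware check so the same guard works on both platforms (a suggested pattern, not part of the choosegpu API):

```python
import choosegpu

choosegpu.configure_gpu(enable=True)

# "Should we use the GPU?" = configured intent AND hardware visibility.
# On NVIDIA the two agree; on Mac Silicon the hardware check can stay
# True even when the configuration says CPU-only.
use_gpu = bool(choosegpu.is_gpu_enabled()) and choosegpu.check_if_gpu_libraries_see_gpu()
```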
### `get_gpu_config()`

Get the current GPU configuration. For most use cases, prefer `is_gpu_enabled()` instead.
Returns:
- `["mps"]` if MPS is enabled on Mac Silicon
- `["-1"]` if GPU is disabled
- `["GPU-UUID", ...]` for NVIDIA GPUs
- `None` if not configured
### `are_gpu_settings_configured()`
Check if GPU settings have been configured by this library.
Returns: `True` if `configure_gpu()` has been called, `False` otherwise
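One plausible use, sketched here as our own example rather than prescribed usage: guard against reconfiguring in a module that may be imported more than once.

```python
import choosegpu

# Configure only on first import; later imports leave the settings alone.
if not choosegpu.are_gpu_settings_configured():
    choosegpu.configure_gpu(enable=True)
```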
## Development

Submit PRs against the `develop` branch, then make a release pull request to `master`.
```bash
# Install requirements
pip install --upgrade pip wheel
pip install -r requirements_dev.txt

# Install local package
pip install -e .

# Install pre-commit
pre-commit install

# Run tests
make test

# Run lint
make lint

# Bump version before submitting a PR against master (all master commits are deployed)
bump2version patch  # possible: major / minor / patch
# Also make sure CHANGELOG.md is updated
```
## Changelog

### 0.0.1

- First release on PyPI.