Utilities and helpers for optimal usage of the MUFASA HPC cluster at Politecnico di Milano

Project description

MUFASA

A Python utility module for CPU core management and GPU memory optimization, particularly useful for machine learning workflows.

Installation

You can install MUFASA directly from PyPI:

pip install mufasa-polimi

Features

  • CPU core detection and optimization for SLURM environments
  • Automated GPU memory management and cleanup
  • Detailed memory usage reporting

Usage

Core Management Functions

from mufasa import getCoreAffinity, setOptimalWorkers

# Get available CPU cores
cpu_count = getCoreAffinity()
print(f"Available CPU cores: {cpu_count}")

# Set optimal number of worker processes
workers = setOptimalWorkers()
print(f"Optimal worker count: {workers}")

getCoreAffinity()

Detects the number of available CPU cores, taking into account SLURM job allocations if running in a SLURM environment. Returns the minimum between SLURM-allocated CPUs and system-available CPUs, or the total system CPU count if not in a SLURM environment.
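
The behavior described above can be approximated with the standard library alone; the following is a minimal sketch of that logic (illustrative only, not the package's actual implementation):

import os

def slurm_aware_core_count() -> int:
    # Illustrative sketch: respect SLURM allocations, fall back to the full machine
    system_cpus = os.cpu_count() or 1
    slurm_cpus = os.environ.get("SLURM_CPUS_PER_TASK")  # set by SLURM inside a job
    if slurm_cpus is not None:
        return min(int(slurm_cpus), system_cpus)
    return system_cpus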

setOptimalWorkers()

Similar to getCoreAffinity(), but defaults to 1 if no SLURM environment is detected. Useful for setting worker counts in parallel processing scenarios.
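
For example, the returned value can be passed anywhere a worker count is expected; the DataLoader below is purely illustrative:

import torch
from torch.utils.data import DataLoader, TensorDataset
from mufasa import setOptimalWorkers

dataset = TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,)))
loader = DataLoader(
    dataset,
    batch_size=32,
    num_workers=setOptimalWorkers(),  # 1 outside SLURM, allocation-aware inside SLURM
)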

GPU Memory Management

from mufasa import gpuClean

# Basic cleanup
freed_count, freed_memory = gpuClean()

# Detailed cleanup with verbose output
freed_count, freed_memory = gpuClean(
    exclude_vars=['model', 'optimizer'],  # Variables to preserve
    verbose=True  # Enable detailed reporting
)

gpuClean(local_vars=None, exclude_vars=None, verbose=False)

Automatically detects and frees GPU memory by cleaning up tensor variables.

Parameters:

  • local_vars (dict, optional): Dictionary of local variables to clean. If None, uses the calling frame's locals.
  • exclude_vars (list, optional): List of variable names to exclude from cleanup.
  • verbose (bool): Whether to print detailed information about cleaned variables.

Returns:

  • tuple: (freed_count, freed_memory_mb)
    • freed_count: Number of tensors freed
    • freed_memory_mb: Approximate memory freed in MB

Features:

  • Cleans up PyTorch tensors in local scope
  • Handles nested tensors in dictionaries and lists
  • Provides detailed memory usage reports when verbose=True
  • Allows excluding specific variables from cleanup
  • Automatically triggers garbage collection and GPU memory cache clearing
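
To make those mechanics concrete, here is a simplified sketch of this kind of scope-level cleanup; the function below is illustrative and is not the library's actual code:

import gc
import torch

def naive_gpu_clean(local_vars: dict, exclude_vars=None, verbose=False):
    # Illustrative sketch of scope-level GPU tensor cleanup (not mufasa's implementation)
    exclude = set(exclude_vars or [])
    freed_count, freed_mb = 0, 0.0
    for name in list(local_vars):
        value = local_vars[name]
        if name in exclude or not (torch.is_tensor(value) and value.is_cuda):
            continue
        size_mb = value.element_size() * value.nelement() / (1024 ** 2)
        if verbose:
            print(f"freeing {name}: shape={tuple(value.shape)}, {size_mb:.1f} MB")
        del local_vars[name], value  # drop both references to the tensor
        freed_count += 1
        freed_mb += size_mb
    gc.collect()  # collect the now-unreferenced Python objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached GPU blocks to the driver
    return freed_count, freed_mb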

Example with Verbose Output:

import torch
from mufasa import gpuClean

# Create some example tensors
tensor1 = torch.randn(1000, 1000).cuda()
tensor2 = torch.randn(2000, 2000).cuda()

# Clean up with detailed output
freed_count, freed_memory = gpuClean(verbose=True)

The verbose output includes:

  • Table of cleaned tensors with their shapes and sizes
  • Total number of tensors freed
  • Total memory freed
  • Current GPU memory allocation status
  • List of excluded variables (if any)

Notes

  • SLURM-specific features require a SLURM environment
  • GPU cleaning functions require PyTorch and a CUDA-capable GPU
  • Memory sizes are reported in MB or GB depending on the size (see the formatting sketch after this list)
  • The module uses the rich library for formatted console output in verbose mode
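
As an illustration of that MB/GB convention, a hypothetical formatting helper (not part of the package's public API) could look like this:

def format_size(size_mb: float) -> str:
    # Hypothetical helper: report in MB below 1 GB, otherwise in GB
    if size_mb < 1024:
        return f"{size_mb:.1f} MB"
    return f"{size_mb / 1024:.2f} GB"

print(format_size(512.0))   # 512.0 MB
print(format_size(2048.0))  # 2.00 GB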

Download files

Download the file for your platform.

Source Distribution

mufasa_polimi-0.0.5.tar.gz (16.8 kB)

Uploaded Source

Built Distribution


mufasa_polimi-0.0.5-py3-none-any.whl (17.4 kB)

Uploaded Python 3

File details

Details for the file mufasa_polimi-0.0.5.tar.gz.

File metadata

  • Download URL: mufasa_polimi-0.0.5.tar.gz
  • Upload date:
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for mufasa_polimi-0.0.5.tar.gz:

  • SHA256: 61d2d8052fa3c1b4aeafee939888f4f5c6969598872757e41c0fbe1445f5b50a
  • MD5: 255bee11626d8f3cfabadeedf63071e5
  • BLAKE2b-256: 2c3a48a0d26b01fce0e278eb6b33fc5b0345c32e51a0355354f50d25612fa087


Provenance

The following attestation bundles were made for mufasa_polimi-0.0.5.tar.gz:

Publisher: python-publish.yml on alberto-rota/mufasa

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mufasa_polimi-0.0.5-py3-none-any.whl.

File metadata

  • Download URL: mufasa_polimi-0.0.5-py3-none-any.whl
  • Upload date:
  • Size: 17.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for mufasa_polimi-0.0.5-py3-none-any.whl:

  • SHA256: 9e1fd1d3550aa6460b1c2179730b827710ec499acdca2790cbd2902244fe922d
  • MD5: ff2d21b1698833c297692b50fa56a8dc
  • BLAKE2b-256: a76f9f94b792b9103eadb3c775574775f7ef4fd991dfb093edbf12784ffa3bb3


Provenance

The following attestation bundles were made for mufasa_polimi-0.0.5-py3-none-any.whl:

Publisher: python-publish.yml on alberto-rota/mufasa

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
