
📊 ProcStats

A powerful Python package for monitoring CPU, RAM, and GPU resources with adaptive sampling and noise reduction. ProcStats provides robust tools to track system resource usage for any given process or function — with smart stabilization and optional NVIDIA GPU monitoring via pynvml.


🚀 Features

  • Tracks All Child Processes: Monitors resource usage across parent and child processes — ideal for multi-processing applications.
  • 🧠 Smart Stabilization: Adaptive sampling and noise filtering (moving averages + outlier rejection) yield stable, accurate metrics.
  • 💻 Cross-Platform: Works on macOS, Windows, Linux, and Unix-like systems.
  • ⏱️ Timeout Support: Enforce max runtime limits with graceful shutdowns.
  • 🎮 Optional NVIDIA GPU Monitoring: Auto-detects and includes GPU stats if pynvml is available.
  • 🧪 Non-Intrusive Monitoring: Uses multiprocessing to isolate the monitor from the target workload.
  • 📈 High Data Quality: Reports max, average, 95th percentile, and a custom data quality score.
  • 🧪 Code Coverage: >90% test coverage across CPU, RAM, and GPU features (see codecov).
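The stabilization bullet above can be illustrated in a few lines of plain Python: average over a trailing window after rejecting outliers. The median/MAD rejection rule here is an assumption chosen for the sketch, not necessarily ProcStats' actual filter.

```python
from statistics import mean, median

def stabilize(samples, window=5, k=3.0):
    """Moving-average smoothing with outlier rejection.

    For each point, take the trailing window, drop values more than
    k median-absolute-deviations from the window median, and average
    what remains. Illustrative sketch only, not ProcStats' internals.
    """
    smoothed = []
    for i in range(len(samples)):
        win = samples[max(0, i - window + 1): i + 1]
        med = median(win)
        mad = median(abs(v - med) for v in win)
        # Keep values close to the median; fall back to the median
        # itself if everything was rejected.
        kept = [v for v in win if abs(v - med) <= k * mad] or [med]
        smoothed.append(mean(kept))
    return smoothed

# A noisy CPU trace with one sampling spike: the spike is rejected
# instead of dragging the average up.
print(stabilize([50, 52, 51, 400, 53, 52]))
```

With the spike at index 3 rejected from each window that contains it, the smoothed trace stays near the true ~51% load.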

📦 Installation

Install with pip:

pip install procstats

To enable GPU monitoring, also install:

pip install pynvml
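Since pynvml is optional, a quick probe tells you up front whether the GPU path will activate. This probe is illustrative, not part of the procstats API:

```python
def gpu_probe() -> str:
    """Report whether NVIDIA GPU monitoring is possible on this machine."""
    try:
        import pynvml  # optional dependency; may be absent
        pynvml.nvmlInit()
        count = pynvml.nvmlDeviceGetCount()
        pynvml.nvmlShutdown()
        return f"GPU monitoring available: {count} device(s)"
    except Exception as exc:  # ImportError, or NVMLError if no driver
        return f"GPU monitoring unavailable ({type(exc).__name__}); CPU/RAM only"

print(gpu_probe())
```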

🧰 Usage

▶️ Monitor a Python Function

from procstats import monitor_function_resources
from procstats.scripts.cpu_test_lib import burn_cpu_accurate

result = monitor_function_resources(
    burn_cpu_accurate,
    kwargs={"cpu_percent": 150, "duration": 5},
    base_interval=0.05,
    timeout=10.0
)
print(result)
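The returned result aggregates raw samples into the statistics procstats reports (max, average, 95th percentile). A rough standard-library sketch with made-up sample values; note that with only a handful of samples, a nearest-rank 95th percentile coincides with the max, as it does in the example run further down:

```python
from statistics import mean

def p95(values):
    """Nearest-rank 95th percentile (one simple convention; procstats
    may use a different interpolation)."""
    ordered = sorted(values)
    return ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]

# Illustrative CPU samples (percent, summed over all child processes).
samples = [110.0, 230.5, 372.4, 198.2, 225.1]
summary = {
    "cpu_max": max(samples),
    "cpu_avg": round(mean(samples), 1),
    "cpu_p95": p95(samples),  # equals the max when samples are few
}
print(summary)
```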

💻 CLI Support

Save the following script as test_lib.py, then run it under the procstats CLI:

import argparse
import os
import multiprocessing as mp
import torch

from procstats.scripts.cpu_test_lib import burn_cpu_accurate

def gpu_workload(gpu_id: int = 1):
    print(f"[Child] PID: {os.getpid()} using GPU {gpu_id}")
    try:
        device = f"cuda:{gpu_id}"
        a = torch.randn(5000, 5000, device=device)
        for _ in range(2000):
            b = torch.matmul(a, a.T)
    except RuntimeError as e:
        print(f"[Child] GPU task on cuda:{gpu_id} failed: {e}")


def heavy_gpu_task():
    print(f"[Parent] PID: {os.getpid()}")

    # Pick GPU 1 for parent, GPU 0 for child (customize as needed)
    parent_gpu = 1
    child_gpu = 0

    # Start child process
    p = mp.Process(target=gpu_workload, args=(child_gpu,))
    p.start()

    # Run the same logic in the parent process
    gpu_workload(parent_gpu)

    p.join()

    return 10


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run CPU burn workload with optional GPU usage")
    parser.add_argument("--cpu_percent", type=int, default=350, help="Total CPU percent to consume (e.g., 350)")
    parser.add_argument("--duration", type=int, default=10, help="Duration of the workload in seconds")

    args = parser.parse_args()

    burn_cpu_accurate(cpu_percent=args.cpu_percent, duration=args.duration)

Monitor test_lib.py with procstats:

procstats test_lib.py --cpu_percent 350 --duration 5

Example output:

(virenv1) (base) lehoangviet@lehoangviet-MS-7D99:~/Desktop/python_projects/ProcStats-CPP$ procstats test_lib.py --cpu_percent 350 --duration 5
2025-06-07 22:29:18,175 - INFO - Monitoring script: test_lib.py
2025-06-07 22:29:18,175 - INFO - Script arguments: --cpu_percent 350 --duration 5
2025-06-07 22:29:18,175 - INFO - Procstats config - Interval: 0.05s, Timeout: 12.0s
2025-06-07 22:29:18,177 - INFO - Started subprocess with PID: 25511
2025-06-07 22:29:18,285 - INFO - NVIDIA ML initialized successfully
2025-06-07 22:29:18,286 - INFO - NVIDIA ML initialized successfully
2025-06-07 22:29:18,286 - INFO - Starting GPU monitoring for PID 25511 (include_children=True) on 2 GPU(s) with interval 0.05s and timeout 12.0s
[Monitor] Parent process 25511 terminated
2025-06-07 22:29:24,450 - INFO - No processes to monitor (original PID: 25511)
2025-06-07 22:29:24,452 - INFO - Monitoring completed. Tracked 5 PIDs: [25511, 25539, 25540, 25541, 25542]
2025-06-07 22:29:25,430 - ERROR - Failed to load function output: No module named 'test_burn_cpu'
============================================================
PROCSTATS MONITORING RESULTS
============================================================

📊 EXECUTION SUMMARY
Duration: 6.18 seconds
Timeout reached: No
Measurements taken: 35
Data quality score: 50.00

🔄 PROCESS INFORMATION
Max processes: 5
Tracked PIDs: 5

🖥️  CPU USAGE
Max CPU: 372.4%
Average CPU: 227.2%
95th percentile CPU: 372.4%
CPU cores: 12

💾 MEMORY USAGE
Max RAM: 1277.9 MB
Average RAM: 812.2 MB
95th percentile RAM: 1277.9 MB

🎮 GPU USAGE
GPU 0 - Max utilization: 0.0%
GPU 0 - Mean utilization: 0.0%
GPU 0 - Max VRAM: 0.0 MB
GPU 0 - Mean VRAM: 0.0 MB
GPU 1 - Max utilization: 0.0%
GPU 1 - Mean utilization: 0.0%
GPU 1 - Max VRAM: 0.0 MB
GPU 1 - Mean VRAM: 0.0 MB

📤 STDOUT
[Subprocess] Running with PID: 25511
pid: 25511
System: 12 CPU cores (theoretical max 1200%)
Target: 350% CPU for 5s
Strategy: 4 processes
  Process 0: 100%
  Process 1: 100%
  Process 2: 100%
  Process 3: 50%
Process 0: Starting 100% CPU burn for 5s
Process 1: Starting 100% CPU burn for 5s
Process 2: Starting 100% CPU burn for 5s
Process 3: Starting 50% CPU burn for 5s

============================================================
...
============================================================
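The "Strategy: 4 processes" breakdown in the output (three processes at 100% plus one at 50% for a 350% target) follows a simple fill-to-cap split. A minimal sketch; split_cpu_target is a hypothetical helper, not part of the procstats API:

```python
def split_cpu_target(total_percent, per_process_cap=100):
    """Split a total CPU target into per-process loads, capping each
    process at 100% (one fully busy core)."""
    loads = []
    remaining = total_percent
    while remaining > 0:
        load = min(per_process_cap, remaining)
        loads.append(load)
        remaining -= load
    return loads

print(split_cpu_target(350))  # → [100, 100, 100, 50]
```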

📋 Requirements

  • Python: >= 3.8
  • Required:
    • psutil>=5.9.0
  • Optional:
    • pynvml>=11.0.0 (for GPU support)
    • torch>=2.0.0 (for demo workloads)

🤝 Contributing

Contributions are welcome! Open an issue or submit a pull request: 👉 github.com/Mikyx-1/ProcStats


📜 License

Licensed under the MIT License. See the LICENSE file.


👤 Author



Download files

Download the file for your platform.

Source Distribution

procstats-0.2.0.tar.gz (38.5 kB, source)

Built Distribution

procstats-0.2.0-py3-none-any.whl (44.2 kB, Python 3)

File details

Details for the file procstats-0.2.0.tar.gz.

File metadata

  • Download URL: procstats-0.2.0.tar.gz
  • Size: 38.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for procstats-0.2.0.tar.gz:
  • SHA256: 9a54cc17f571df1166396793feb4d2fdcdbceb6cdc78fbfe7603c0ede64f5adb
  • MD5: f3235a78aecb62bdc818abf1cb0bdacf
  • BLAKE2b-256: f695e4815c53c55eca3334b32a879e5d5b3cd17fc9a8c9e1ee44357710ef0406

File details

Details for the file procstats-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: procstats-0.2.0-py3-none-any.whl
  • Size: 44.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for procstats-0.2.0-py3-none-any.whl:
  • SHA256: 7bcecb3df7d2eaa930083ff3f1689e67bc5d43cd55bd670930c9f5d040ff90a5
  • MD5: 602d0fd59643b99f3de935bc4b340709
  • BLAKE2b-256: 00ed49b46471ca6f0855a9c86bede458aec1b3560403fc80cafb0f6779b076c1
