
PyTorch Model Benchmarking Tool

This tool provides a comprehensive set of utilities for benchmarking PyTorch models, including performance metrics, memory usage, and model statistics.

Features

  • Measure inference latency on both CPU and GPU
  • Track GPU memory usage
  • Calculate model size and number of parameters
  • Compute MACs (Multiply-Accumulate operations)
  • Calculate model sparsity
  • Generate visualizations of parameter distributions and weight distributions
  • Provide formatted output of benchmark results

Installation

You can install the package using pip:

pip install pytorch-bench

Example

import torch
from torchvision.models import resnet50, ResNet50_Weights
from pytorch_bench import benchmark

# Load model and example input
model = resnet50(weights=ResNet50_Weights.DEFAULT)
example_input = torch.randn(1, 3, 224, 224)

# Run benchmark
results = benchmark(model, example_input)
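If you just want a rough latency number without the package, a minimal warmup-then-time loop (a generic sketch, not pytorch_bench's actual implementation) looks like this:

```python
import time

def measure_latency(fn, warmup=10, iters=100):
    """Time a callable: run warmup iterations first, then average over iters."""
    for _ in range(warmup):              # warmup excludes one-time setup costs
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0      # average latency in milliseconds

# With a model you would pass: measure_latency(lambda: model(example_input))
latency_ms = measure_latency(lambda: sum(range(1000)))
print(f"Average latency: {latency_ms:.4f} ms")
```

Note that on GPU a loop like this must also synchronize the device before reading the clock, which is why the package uses CUDA events for GPU timing.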

You can run example.py to see the output in your terminal and experiment with the different functions.

Benchmarking YOLO Models

You can also benchmark YOLO models with the same benchmark function. The tool detects YOLO models automatically and uses the appropriate benchmarking method:

from ultralytics import YOLO
from pytorch_bench import benchmark

# Load a YOLO model
model = YOLO('yolo11n.pt')  # or any other YOLO model

# Run benchmark with image size 640
results = benchmark(model, 640)

# The results will include:
# - Number of layers and parameters
# - FLOPs
# - GPU memory usage
# - FPS and inference time
# - Energy consumption metrics

You can run example_yolo.py to see the output in your terminal and experiment with the different functions.
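FPS and per-image inference time are two views of the same measurement (FPS = 1000 / latency in ms). A quick conversion helper for interpreting the reported numbers:

```python
def fps_from_latency_ms(latency_ms):
    """Convert per-image inference time in milliseconds to frames per second."""
    return 1000.0 / latency_ms

print(fps_from_latency_ms(8.0))  # 8 ms per image -> 125.0 FPS
```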

Advanced Usage

Tracking GPU memory for a PyTorch model

from pytorch_bench import track_gpu_memory

with track_gpu_memory():
    # Your GPU operations here
    pass

max_memory = track_gpu_memory.max_memory
current_memory = track_gpu_memory.current_memory
print(f"Max GPU memory used: {max_memory:.2f} MB")
print(f"Current GPU memory used: {current_memory:.2f} MB")
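The context manager above exposes its peak and final readings as attributes after the block exits. One way such an interface can be structured (a simplified sketch with a pluggable `probe` callable standing in for the real PyNVML/CUDA queries; this is not the package's actual implementation):

```python
class MemoryTracker:
    """Context manager that records peak and final values from a probe()."""

    def __init__(self, probe):
        self.probe = probe          # callable returning current memory in MB
        self.max_memory = 0.0
        self.current_memory = 0.0

    def _sample(self):
        value = self.probe()
        self.max_memory = max(self.max_memory, value)
        self.current_memory = value

    def __enter__(self):
        self._sample()              # baseline reading on entry
        return self

    def __exit__(self, *exc):
        self._sample()              # final reading on exit
        return False                # do not swallow exceptions

# Usage with a fake probe that pretends memory grew from 100 MB to 512 MB:
readings = iter([100.0, 512.0])
with MemoryTracker(lambda: next(readings)) as tracker:
    pass
print(tracker.max_memory, tracker.current_memory)  # 512.0 512.0
```

A real probe could wrap `torch.cuda.memory_allocated()` or a PyNVML query; sampling only on entry and exit misses peaks inside the block, which is why real trackers use `torch.cuda.max_memory_allocated()` or driver-side counters instead.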

Getting info about GPU memory

from pytorch_bench import detailed_memory_info

detailed_memory_info()

Calculating model sparsity

from pytorch_bench import get_model_sparsity, get_layer_sparsity

sparsity = get_model_sparsity(model)
print(f"Model sparsity: {sparsity:.2f}")

get_layer_sparsity(model)
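Sparsity here means the fraction of weights that are exactly zero. The underlying arithmetic, shown on a plain list (the package computes the same ratio over a model's weight tensors):

```python
def sparsity(weights):
    """Fraction of entries that are exactly zero."""
    return sum(1 for w in weights if w == 0) / len(weights)

print(sparsity([0.0, 0.0, 0.3, -1.2]))  # 2 zeros out of 4 -> 0.5
```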

Visualizations

When plot=True is set in the benchmark function, two plots will be generated:

  1. num_parameters_distribution.png: Bar chart showing the number of parameters in each layer.
  2. weight_distribution.png: Histograms of weight distributions for each layer.

These plots can provide insights into the model's architecture and weight patterns.
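The per-layer parameter counts behind the first plot are just the product of each weight tensor's shape. A standalone sketch (the layer names and shapes below are illustrative, not taken from any specific model):

```python
from math import prod

# Hypothetical layer-name -> weight-shape mapping
layer_shapes = {
    "conv1.weight": (64, 3, 7, 7),   # out_channels, in_channels, kH, kW
    "fc.weight": (1000, 2048),       # out_features, in_features
}

# Parameter count per layer = product of the shape dimensions
param_counts = {name: prod(shape) for name, shape in layer_shapes.items()}
print(param_counts)  # {'conv1.weight': 9408, 'fc.weight': 2048000}
```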

Notes

  • Ensure you have a CUDA-capable GPU for GPU benchmarking.
  • The tool uses CUDA events for precise GPU timing.
  • Memory usage is tracked using PyNVML.
  • MACs calculation requires the torchprofile package.

Contributing

This project started as a personal tool to simplify the process of benchmarking models on EdgeAI resources. It's designed to be a lightweight, easy-to-use solution that can be quickly installed and utilized.

While this is primarily a personal project, I'm open to suggestions and improvements. If you have ideas or find any issues, feel free to:

  1. Open an issue on the GitHub repository to report bugs or suggest enhancements.
  2. Submit pull requests for minor fixes or improvements.

If you find this tool helpful, feel free to star the repository or share it with others who might benefit from it. Thanks for your interest!
