
Accelera - Memory-Efficient Matrix Operations Framework

A framework for performing large matrix operations on memory-constrained GPUs through intelligent chunking and CPU-GPU memory management.

Python 3.8+ · PyTorch · CUDA

Problem Statement

When working with large matrices on GPUs with limited VRAM, operations like matrix multiplication can cause Out-of-Memory (OOM) errors. Accelera solves this by:

  • Breaking large operations into smaller chunks
  • Intelligently offloading intermediate results to CPU/RAM
  • Dynamically managing GPU memory
  • Providing a seamless API for large matrix operations
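The chunking idea behind the first two points can be sketched in a few lines, here with plain NumPy standing in for the GPU kernels (the function name and fixed row-chunk size are illustrative, not Accelera's actual API):

```python
import numpy as np

def chunked_matmul(a, b, chunk_rows=256):
    """Compute a @ b one row-block at a time.

    Only one (chunk_rows x b.shape[1]) slab of work is live at once; in
    the GPU setting each slab would be computed on-device and the
    finished rows offloaded back to CPU RAM.
    """
    out = np.empty((a.shape[0], b.shape[1]), dtype=a.dtype)
    for start in range(0, a.shape[0], chunk_rows):
        stop = min(start + chunk_rows, a.shape[0])
        out[start:stop] = a[start:stop] @ b  # one small multiply per chunk
    return out
```

The peak working set now scales with `chunk_rows` rather than with the full output matrix, which is the whole trick.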

Features

  • Automatic chunking for matrix operations
  • Dynamic memory management between GPU and CPU
  • CUDA-optimized for NVIDIA GPUs
  • Configurable chunk sizes based on available VRAM
  • Progress tracking for long-running operations
  • Memory usage monitoring
  • Multiple input types (PyTorch tensors, NumPy arrays)

๐Ÿƒโ€โ™‚๏ธ Quick Start

import accelera as acc

# Initialize with automatic VRAM detection
engine = acc.MatrixEngine(auto_detect_memory=True)

# Perform large matrix multiplication that might cause OOM on small GPUs
A = acc.Matrix.random((10000, 8000))  # 10k x 8k matrix (~305 MB)
B = acc.Matrix.random((8000, 12000))  # 8k x 12k matrix (~366 MB)

# This will automatically chunk and manage memory
C = engine.matmul(A, B)  # Result: 10k x 12k matrix (~458 MB)

print(f"Success! Result shape: {C.shape}")
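How large a row chunk fits follows directly from the available VRAM. A back-of-the-envelope helper (my own, not part of Accelera's API), assuming B stays resident on the GPU while row slabs of A and C stream through:

```python
def rows_per_chunk(k, n, vram_budget_bytes, itemsize=4):
    """Largest row-chunk of A (m x k) for C = A @ B with B (k x n) resident.

    Per chunk of r rows we hold r*k floats of A and r*n floats of C,
    on top of the fixed k*n floats of B.
    """
    fixed = k * n * itemsize          # B, kept on the GPU
    per_row = (k + n) * itemsize      # one row of A plus one row of C
    free = vram_budget_bytes - fixed
    if free <= 0:
        raise ValueError("B alone exceeds the VRAM budget")
    return max(1, free // per_row)

# Quick-start shapes on a 4 GiB budget: A is (10000, 8000), B is (8000, 12000)
print(rows_per_chunk(8000, 12000, 4 * 1024**3))  # → 48887
```

Here the whole of A fits in one chunk; on a smaller budget the same arithmetic yields proportionally smaller chunks.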

Real-world Example

# Scenario: Training a large neural network layer on a 4GB GPU
import accelera as acc

engine = acc.MatrixEngine()

# Large weight matrix (would normally cause OOM)
weights = acc.Matrix.randn((20000, 15000))  # ~1.1 GB
inputs = acc.Matrix.randn((15000, 8000))    # ~457 MB

# Forward pass - automatically chunked if needed
output = engine.matmul(weights, inputs)     # ~610 MB result

# Check memory usage
memory_info = engine.get_memory_info()
print(f"GPU utilization: {memory_info['gpu_utilization']:.1f}%")
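To see why this example strains a 4 GB card, it helps to total up the tensor footprint of an unchunked multiply (`matmul_footprint_bytes` is a hypothetical helper for illustration, not engine API):

```python
def matmul_footprint_bytes(m, k, n, itemsize=4):
    """Bytes needed to hold A (m x k), B (k x n), and C (m x n) at once."""
    return (m * k + k * n + m * n) * itemsize

# weights (20000 x 15000) @ inputs (15000 x 8000)
gib = matmul_footprint_bytes(20000, 15000, 8000) / 2**30
print(f"{gib:.2f} GiB of tensors before any CUDA context or workspace overhead")
```

That is roughly 2.16 GiB of raw tensors; once the CUDA context, cuBLAS workspace, and any other resident allocations are counted, a 4 GB card has little headroom left, which is where chunking earns its keep.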

Installation

Requirements

  • Python 3.8+
  • PyTorch 2.0+ with CUDA support
  • NVIDIA GPU with CUDA drivers
  • Sufficient CPU RAM for temporary storage

Install

# Clone the repository
git clone https://github.com/maifeeulasad/accelera
cd accelera

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .

# Verify installation
make verify

Usage Examples

Basic Operations

import accelera as acc
import numpy as np

# Initialize engine
engine = acc.MatrixEngine(auto_detect_memory=True, enable_progress=True)

# Matrix multiplication
A = acc.Matrix.randn((5000, 4000))
B = acc.Matrix.randn((4000, 6000))
C = engine.matmul(A, B)

# Element-wise operations
X = acc.Matrix.randn((3000, 4000))
Y = acc.Matrix.randn((3000, 4000))

# Addition
Z1 = engine.add(X, Y)

# Element-wise multiplication  
Z2 = engine.multiply(X, Y)

# Works with NumPy arrays and PyTorch tensors too!
A_np = np.random.randn(1000, 800).astype(np.float32)
B_np = np.random.randn(800, 1200).astype(np.float32)
C_from_numpy = engine.matmul(A_np, B_np)
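Accepting both NumPy arrays and PyTorch tensors implies an input-normalization step somewhere in the engine. A sketch of the kind of coercion presumably involved (`to_matrix` is my name; the real engine most likely normalizes toward torch tensors rather than NumPy):

```python
import numpy as np

def to_matrix(x):
    """Coerce lists, NumPy arrays, or torch tensors to a 2-D float32 array.

    Torch tensors are detected by duck typing so torch never has to be
    imported here.
    """
    if hasattr(x, "detach"):  # walks and quacks like a torch tensor
        x = x.detach().cpu().numpy()
    arr = np.asarray(x, dtype=np.float32)
    if arr.ndim != 2:
        raise ValueError(f"expected a 2-D matrix, got ndim={arr.ndim}")
    return arr
```

Normalizing early keeps every downstream chunking routine working against a single dtype and layout.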

Advanced Configuration

# Custom chunking strategy
engine = acc.MatrixEngine(
    chunking_strategy='adaptive',  # 'row', 'tile', 'adaptive'
    chunk_size=1024,               # Manual chunk size
    enable_progress=True           # Show progress bars
)

# Manual memory management
engine.set_chunk_size(512)                    # Smaller chunks for limited memory
engine.enable_auto_memory_detection(False)    # Disable auto-detection
engine.cleanup()                              # Force GPU memory cleanup

# Memory monitoring
memory_info = engine.get_memory_info()
print(f"GPU Memory: {memory_info['gpu_available_gb']:.2f}GB available")
print(f"CPU Memory: {memory_info['cpu_available_gb']:.2f}GB available")
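Where 'row' chunking streams horizontal slabs, the 'tile' strategy presumably partitions the output in both dimensions. A NumPy sketch of that idea (my own implementation, under the assumption that this is what the strategy name means):

```python
import numpy as np

def tiled_matmul(a, b, tile=512):
    """'Tile' chunking: accumulate each (tile x tile) output block from
    matching tiles of a and b, so peak memory scales with tile**2 rather
    than with any full matrix dimension."""
    m, k = a.shape
    _, n = b.shape
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out
```

An 'adaptive' strategy would then amount to picking between row and tile decompositions (and the tile size) based on the shapes and the memory budget.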

Performance Comparison

Run the benchmark to see how Accelera performs on your system:

# Run full benchmark suite
make benchmark

# Test specific matrix size
python examples/benchmark.py --custom-size 4000 3000 5000

# Quick demo
make demo

Project Structure

accelera/
├── accelera/                  # Core framework
│   ├── __init__.py            # Main package exports
│   ├── engine.py              # MatrixEngine - main API
│   ├── matrix.py              # Matrix wrapper class
│   ├── memory_manager.py      # GPU/CPU memory management
│   ├── chunking.py            # Chunking strategies
│   └── config.py              # Configuration and logging
├── examples/                  # Usage examples
│   ├── basic_usage.py         # Basic operations demo
│   ├── advanced_usage.py      # Advanced features demo
│   └── benchmark.py           # Performance benchmarking
├── tests/                     # Unit tests
│   └── test_accelera.py       # Comprehensive test suite
├── DOCUMENTATION.md           # Detailed documentation
├── requirements.txt           # Python dependencies
├── setup.py                   # Package setup
└── Makefile                   # Development commands

Running Examples

# Basic usage example
python examples/basic_usage.py

# Advanced features demonstration
python examples/advanced_usage.py

# Performance benchmarking
python examples/benchmark.py

# Or use make commands
make examples
make benchmark

Development

# Install development dependencies
make dev-install

# Run tests
make test

# Run linting
make lint

# Format code
make format

# Clean build artifacts
make clean

Documentation

See DOCUMENTATION.md in the repository for detailed documentation.

Use Cases

  • Deep Learning: Training large neural networks on consumer GPUs
  • Scientific Computing: Large matrix operations in research
  • Data Processing: Batch processing of large datasets
  • Computer Graphics: Large transformation matrices
  • Financial Modeling: Risk calculations with large covariance matrices

System Requirements

  • NVIDIA GPU with CUDA drivers (required for GPU acceleration; see Requirements under Installation)
  • A CUDA version supported by your PyTorch 2.0+ build

Contributing

Contributions should follow the guidelines in claude.md:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Follow the coding standards: Small commits, clear intent, boring solutions
  4. Add tests for new functionality
  5. Submit a pull request with clear description

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • PyTorch team for the excellent tensor library
  • NVIDIA for CUDA and GPU computing
  • Community feedback and contributions

Pro Tip: Start with the basic example, then explore advanced features. The framework is designed to be simple by default but powerful when needed!

# Get started in 3 lines
import accelera as acc
engine = acc.MatrixEngine()
result = engine.matmul(large_matrix_A, large_matrix_B)
