Oven-Tensor
A PyTorch-style tensor library with GPU acceleration using CUDA kernels compiled by oven-compiler.
Features
- PyTorch-like Interface: familiar tensor operations with .to(), .cpu(), and .gpu() methods
- Automatic Kernel Compilation: Python kernels compiled to PTX using oven-compiler
- Smart Caching: Compiled kernels cached for fast subsequent loads
- CPU/GPU Hybrid: Seamless switching between NumPy (CPU) and CUDA (GPU)
- Extensible: Easy to add custom kernels
Installation
pip install oven-tensor
Requirements:
- Python 3.7+
- CUDA-capable GPU
- oven-compiler in PATH
- PyCUDA
Quick Start
import oven_tensor as ot
# Create tensors
x = ot.tensor([1.0, 2.0, 3.0, 4.0])
y = ot.tensor([2.0, 3.0, 4.0, 5.0])
# CPU operations (NumPy)
z_cpu = x + y
print(z_cpu) # Tensor([3. 5. 7. 9.], device=cpu)
# GPU operations (CUDA)
x_gpu = x.gpu()
y_gpu = y.gpu()
z_gpu = x_gpu + y_gpu
print(z_gpu.cpu()) # Tensor([3. 5. 7. 9.], device=cpu)
Operations
Tensor Creation
ot.tensor([1, 2, 3]) # From data
ot.zeros((2, 3)) # Zero tensor
ot.ones((2, 3)) # Ones tensor
ot.randn((2, 3)) # Random normal
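On CPU these constructors are backed by NumPy, so shapes and values behave as in NumPy. A minimal sketch of the equivalent NumPy calls (illustrative only; oven_tensor wraps the results in its Tensor type):

```python
import numpy as np

# NumPy equivalents of the CPU-side constructors (illustrative):
data = np.array([1, 2, 3], dtype=np.float32)      # ot.tensor([1, 2, 3])
zeros = np.zeros((2, 3), dtype=np.float32)        # ot.zeros((2, 3))
ones = np.ones((2, 3), dtype=np.float32)          # ot.ones((2, 3))
rand = np.random.randn(2, 3).astype(np.float32)   # ot.randn((2, 3))

print(data.shape, zeros.shape, ones.sum(), rand.shape)
```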
Unary Operations
x.sigmoid(), x.exp(), x.sqrt(), x.abs()
x.sin(), x.cos(), x.log(), x.tanh()
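Each of these applies elementwise. As a reference for the semantics, here is the standard formula sigmoid evaluates per element (a plain-Python sketch, not oven_tensor's internal code):

```python
import math

def sigmoid(x):
    """Elementwise logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

# Applied per element, as x.sigmoid() would be on a tensor:
values = [0.0, 2.0]
print([round(sigmoid(v), 4) for v in values])  # [0.5, 0.8808]
```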
Binary Operations
x + y, x - y, x * y, x / y, x ** y, x % y
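On CPU these are elementwise NumPy operations. A quick sketch of the semantics using the same values as the Quick Start (plain NumPy, shown for reference):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 3.0, 4.0, 5.0])

print(x + y)   # elementwise add: 3, 5, 7, 9
print(x * y)   # elementwise multiply: 2, 6, 12, 20
print(x ** y)  # elementwise power: 1, 8, 81, 1024
print(x % y)   # elementwise modulo: 1, 2, 3, 4
```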
Device Management
x.gpu() # Move to GPU
x.cpu() # Move to CPU
x.to(ot.device('gpu')) # Explicit device
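The CPU/GPU switch can be pictured as a thin wrapper that keeps data in NumPy on CPU and in a CUDA buffer on GPU. A toy sketch of that device-tracking pattern (illustrative only, not oven_tensor's actual implementation; the real transfer between host and CUDA memory is stubbed out):

```python
class Tensor:
    """Toy device-tracking wrapper; real host/device copies omitted."""

    def __init__(self, data, device="cpu"):
        self.data = data
        self.device = device

    def to(self, device):
        if device == self.device:
            return self  # already on the requested device
        # A real implementation would copy between host and CUDA memory here.
        return Tensor(self.data, device)

    def cpu(self):
        return self.to("cpu")

    def gpu(self):
        return self.to("gpu")

t = Tensor([1.0, 2.0])
print(t.gpu().device)        # gpu
print(t.gpu().cpu().device)  # cpu
```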
Cache Management
# Command-line tool
oven-tensor-cache list # List functions
oven-tensor-cache clear # Clear cache
oven-tensor-cache info # Show cache info
# Python API
ot.clear_kernel_cache()
ot.reload_kernels()
ot.list_available_functions()
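The cache described above can be pictured as a table keyed by a hash of the kernel source, so compilation is skipped whenever the source is unchanged. A hedged sketch of that content-addressed pattern (illustrative; not oven_tensor's actual cache layout, and the oven-compiler invocation is stubbed out):

```python
import hashlib

class KernelCache:
    """Toy content-addressed cache: source hash -> compiled artifact."""

    def __init__(self):
        self._store = {}
        self.compile_count = 0

    def get_or_compile(self, source: str) -> str:
        key = hashlib.sha256(source.encode()).hexdigest()
        if key not in self._store:
            self.compile_count += 1
            # Stand-in for invoking oven-compiler to produce PTX.
            self._store[key] = f"PTX<{key[:8]}>"
        return self._store[key]

    def clear(self):
        self._store.clear()

cache = KernelCache()
cache.get_or_compile("def add(...): ...")
cache.get_or_compile("def add(...): ...")  # cache hit, no recompile
print(cache.compile_count)  # 1
```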
Custom Kernels
Add kernels in oven_tensor/kernels/:
# my_kernel.py
import oven.language as ol
def my_function(x_ptr: ol.ptr, y_ptr: ol.ptr):
    idx = ol.get_global_id()
    x_val = ol.load(x_ptr, idx)
    y_val = x_val * 2.0 + 1.0
    ol.store(y_val, y_ptr, idx)
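For reference, this kernel computes y = 2x + 1 for each element. A plain-Python equivalent of its effect (useful as a CPU check when writing tests; this is not what runs on the GPU):

```python
def my_function_cpu(x):
    """CPU reference for the my_function kernel: y[i] = x[i] * 2 + 1."""
    return [v * 2.0 + 1.0 for v in x]

print(my_function_cpu([1.0, 2.0, 3.0]))  # [3.0, 5.0, 7.0]
```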
Testing
# Run all tests
./run_tests.sh
# Run specific test categories
pytest tests/ -m "not gpu" # Skip GPU tests
pytest tests/ -m "not slow" # Skip slow tests
pytest tests/ --cov=oven_tensor # With coverage
# Run specific test files
pytest tests/test_tensor_basic.py
pytest tests/test_kernel_cache.py
License
MIT License
Download files
Source Distributions
No source distribution files are available for this release.
Built Distribution
File details
Details for the file oven_tensor-0.1.0-py3-none-any.whl.
File metadata
- Download URL: oven_tensor-0.1.0-py3-none-any.whl
- Upload date:
- Size: 10.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5d7085a6e3b325a80d5eb57158365bba514ef7e25297f7de6fcd35fe2c510fc5 |
| MD5 | 3760b692a2bc0f9ecb19103c3ff9d4f0 |
| BLAKE2b-256 | 8783371d42bc77d1784363d582cdcbceb1d40427440476d9b997057872a5557d |