rasptorch
Experimental PyTorch-like autograd engine with an optional Vulkan compute backend, focused on the Raspberry Pi 5.
rasptorch is an experimental deep learning library inspired by PyTorch, built around a simple goal: make neural networks practical on a Raspberry Pi 5 while taking advantage of its GPU.
It ships with three layers that work together:
- A NumPy-backed autograd engine and nn module for CPU execution.
- An experimental Vulkan backend for GPU tensor operations and GPU-focused training paths.
- A chat-style CLI and Streamlit UI for interactive model building, training, persistence, and inspection.
What’s Included
- Tensor ops: elementwise math, matmul, reductions, indexing, reshaping, stacking, and broadcasting.
- Neural network layers: Linear, MLP, CNN, GRU, Transformer, normalization layers, activations, pooling, embeddings, and attention.
- Training tools: optimizers, learning-rate schedulers, gradient clipping, regularization helpers, and reusable training loops.
- Persistence: save and load rasptorch checkpoints without requiring torch.
- Interfaces: rasptorch chat for REPL-style interaction and rasptorch ui for the Streamlit dashboard.
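Since the CPU engine is NumPy-backed, its elementwise math and broadcasting follow NumPy's rules. A quick illustration in plain NumPy (not the rasptorch API itself):

```python
import numpy as np

# Broadcasting: a (3, 1) column and a (4,) row combine into a (3, 4) result,
# the same rule a NumPy-backed autograd engine relies on.
a = np.arange(3).reshape(3, 1)   # shape (3, 1)
b = np.arange(4)                 # shape (4,)
c = a + b                        # shape (3, 4)
print(c.shape)                   # -> (3, 4)

# Matmul with a leading batch dimension: (2, 3, 4) @ (4, 5) -> (2, 3, 5)
x = np.ones((2, 3, 4))
w = np.ones((4, 5))
print((x @ w).shape)             # -> (2, 3, 5)
```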
The Vulkan path uses real compute shaders compiled to SPIR-V. The supported kernels live in rasptorch/shaders/.
Installation
From PyPI
Base install:
pip install rasptorch
This installs the Python Vulkan bindings used by the GPU backend, but it does not by itself guarantee that the Pi 5 GPU path is available.
The optional gpu extra currently maps to the same Vulkan Python dependency.
Development install:
pip install -e ".[dev]"
Requirements for GPU mode
- Raspberry Pi 5 with working Vulkan drivers.
- glslc on your PATH.
- Use a GPU-capable device flag (--device gpu, or --device auto when Vulkan is working).
If the drivers or glslc are missing, GPU mode will fail clearly instead of silently pretending to work.
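A quick preflight check for the shader compiler (the apt package name is an assumption; on Debian-based systems glslc typically ships in the glslc or shaderc packaging):

```shell
# Verify glslc (the GLSL-to-SPIR-V compiler) is on PATH before using GPU mode.
if command -v glslc >/dev/null 2>&1; then
    echo "glslc: found"
else
    echo "glslc: missing (try: sudo apt install glslc)"
fi
```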
Quick Start
Run the chat REPL:
uv run rasptorch chat
Launch the UI:
uv run rasptorch ui
Open the Streamlit app at http://localhost:8501 unless you pass a different port.
Show the top-level CLI help:
uv run rasptorch --help
Execution Modes
main.py exposes three training/runtime modes:
- cpu: NumPy autograd on the CPU.
- gpu: explicit Vulkan training with purpose-built kernels.
- gpu-autograd: experimental GPU autograd for a growing set of ops.
Examples:
uv run main.py --device cpu --epochs 10
uv run main.py --device gpu --epochs 50 --batch-size 32 --lr 0.1
uv run main.py --device gpu-autograd --epochs 50 --batch-size 32 --lr 0.1
CLI Examples
Create tensors:
uv run rasptorch tensor random --shape 2,3,4
uv run rasptorch tensor zeros --shape 3,4
uv run rasptorch tensor ones --shape 5,10
Build models:
uv run rasptorch model linear --input-size 10 --hidden-sizes "32,16" --output-size 2
uv run rasptorch model mlp --layers "64,32,16,2"
uv run rasptorch model cnn --in-channels 3 --out-channels "32,64,128"
uv run rasptorch model transformer --vocab-size 1000 --d-model 128 --num-heads 4 --num-layers 2
Manage models:
uv run rasptorch model list
uv run rasptorch model remove --model-id <model-id>
uv run rasptorch model save --model-id <model-id> --path model.pth
uv run rasptorch model load --path model.pth
uv run rasptorch model combine <model-a> <model-b>
Train a model:
uv run rasptorch model train --model-id <model-id> --epochs 10 --lr 0.001 --batch-size 32
Use JSON output for scripting or agents:
uv run rasptorch --json tensor zeros --shape 3,4
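With the --json flag the CLI can be driven from a script. A minimal wrapper sketch; the exact output schema is not documented here, so the sample payload at the bottom is made up for illustration:

```python
import json
import shlex
import subprocess

def run_rasptorch_json(cmd: str) -> dict:
    """Run a rasptorch CLI command with the global --json flag and parse
    its stdout. Inspect the returned dict before relying on specific keys,
    since the schema is not documented in this README."""
    argv = ["uv", "run", "rasptorch", "--json", *shlex.split(cmd)]
    proc = subprocess.run(argv, check=True, capture_output=True, text=True)
    return json.loads(proc.stdout)

# Offline illustration of the parsing step (hypothetical payload):
sample = '{"shape": [3, 4], "dtype": "float32"}'
print(json.loads(sample)["shape"])  # -> [3, 4]
```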
See rasptorch/CLI/rasptorch CLI.md for the full CLI reference.
UI Overview
The Streamlit UI is organized around a few focused pages:
- Models: browse, inspect, load, save, delete, and visualize model structure.
- Build & Train: create models, combine models, run training, and launch hyperparameter search.
- Dashboard: compare training runs and loss curves.
- Chat/REPL: use the same chat-style commands as the terminal CLI.
- Logs: review session events and actions.
The UI also includes dataset validation and preprocessing controls, a persistent log viewer, and a lightweight explainability preview for uploaded images.
GPU Notes
The fast path is compute-only: keep tensors on GPU and avoid unnecessary .numpy() readbacks.
Useful checks:
uv run gpu_demo.py --smoke-only
uv run main.py --device gpu --epochs 1 --save model.pth
The first command validates the Vulkan backend. The second produces a rasptorch checkpoint without needing torch.
Training Utilities
rasptorch.train provides a small reusable training loop with optional validation and metrics.
Common pieces:
- rasptorch.train.fit(...)
- rasptorch.train.Accuracy()
- rasptorch.train.classification_target_one_hot(...)
Example:
from rasptorch import functional as F
from rasptorch.optim import SGD
from rasptorch.train import Accuracy, classification_target_one_hot, fit
model = ...
opt = SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
fit(
    model,
    opt,
    train_loader,
    loss_fn=F.cross_entropy,
    device="gpu",
    epochs=10,
    val_loader=val_loader,
    target_transform=classification_target_one_hot(num_classes=10, device="gpu"),
    metrics=[Accuracy()],
)
PyTorch Bridge
rasptorch.torch_bridge can convert supported PyTorch inference models so compatible layers run on GPU and unsupported layers fall back to CPU.
from rasptorch.torch_bridge import convert_torch_model
import torch
torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
).eval()
rasp_model = convert_torch_model(torch_model, device="gpu")
Project Layout
- rasptorch/ - core library modules.
- rasptorch/CLI/ - Click CLI and chat REPL.
- rasptorch/ui/app.py - Streamlit UI.
- rasptorch/shaders/ - Vulkan compute shaders.
- tests/ - CPU, CLI, UI, and backend tests.
Development
Run the test suite:
pytest
Run the Vulkan smoke test:
uv run gpu_demo.py --smoke-only
Build packaging artifacts:
python -m pip install -U build twine
python -m build
Upload to PyPI:
python -m twine upload --non-interactive dist/*
Limitations
- Vulkan support is experimental and focused on the Raspberry Pi 5.
- GPU autograd coverage is growing, but it is not complete.
- Some higher-level features still fall back to CPU paths depending on the operation and shape.
License
MIT. See LICENSE.
File details
Details for the file rasptorch-3.3.1.tar.gz.
File metadata
- Download URL: rasptorch-3.3.1.tar.gz
- Size: 207.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ba59b132884b6435b65f74fd27a92d3f0b699462df1a174b5f66963b6f0b23a1 |
| MD5 | c2671bacebd5b90020ac40d60deaf683 |
| BLAKE2b-256 | 49f2f70083ccb81bf529220e61f5f2e9d4000a8f04f846575e338698b7758572 |
File details
Details for the file rasptorch-3.3.1-py3-none-any.whl.
File metadata
- Download URL: rasptorch-3.3.1-py3-none-any.whl
- Size: 235.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 22ff096ba70d795bcf59d5833a950803de19589a32075061e26d388328b4e37a |
| MD5 | 7c579960e3e675930c2118a884d2b223 |
| BLAKE2b-256 | 654184e617ed9ef85ae6954977ac325dd12326a05c7e69b1ffec75aa80ed0288 |