# SynapX

A deep learning library powered by a C++ autograd engine, featuring a PyTorch-like API and cross-platform support.

## What is this project?
SynapX is a deep learning library that implements its core functionality (autograd and tensor operations) in C++ and exposes it to Python via bindings. The goal was to leverage the computational power of C++ while keeping Python's ease of use. This project uses libtorch as its main dependency to serve as the backend for tensor operations across multiple devices (CPU, CUDA, ROCm, etc).
The project is designed to be compatible with Windows and Linux (tested on Windows 10 and Ubuntu 22.04) and implements a PyTorch-like API to make it familiar and easy to use - if you know PyTorch, you know SynapX.
## Why I built this
The aim was to create an autograd engine in C++, implement Python bindings, build a Deep Learning library on top of it, and package everything into a cross-platform Python package. It's essentially an exploration of how automatic differentiation works under the hood, combined with the practical challenge of bridging C++ performance with Python usability.
Any contributions or ideas are more than welcome!
Note: This project builds on my previous exploration synapgrad, which implemented similar autograd concepts purely in Python using numpy for tensor operations.
## Quick Start

```python
import synapx

w = synapx.randn((3, 4), requires_grad=True)
x = synapx.randn((2, 3), requires_grad=True)
b = synapx.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)

# Matrix multiplication and broadcasting (addmm or nn.functional.linear could also be used here)
y = synapx.matmul(x, w) + b  # Shape: (2, 4)

# Slice operations
y_slice = y[:, 1:3]  # Take columns 1-2

# Unbind along dimension 0 (split into individual tensors)
y1, y2 = synapx.unbind(y_slice, dim=0)

# Compute loss
loss = (y1 * y2).sum()

# Gradients are computed automatically
loss.backward()

print(f"w.grad shape: {w.grad.shape}")
print(f"x.grad shape: {x.grad.shape}")
print(f"b.grad: {b.grad}")

# Use no_grad context for inference
with synapx.no_grad():
    inference_result = synapx.addmm(b, x, w)
    print(f"Inference (no gradients): {inference_result}")
```
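Context managers like `no_grad()` are commonly implemented by toggling a thread-local flag that tensor operations consult before recording graph nodes. Here is a minimal pure-Python sketch of that pattern (an illustration only, not SynapX's actual C++ implementation):

```python
import threading

# Thread-local state so disabling gradients in one thread doesn't leak into others.
_state = threading.local()

def grad_enabled() -> bool:
    """Return whether operations should currently record graph nodes."""
    return getattr(_state, "enabled", True)

class no_grad:
    """Minimal context manager that disables gradient tracking while active."""
    def __enter__(self):
        self._prev = grad_enabled()
        _state.enabled = False
        return self

    def __exit__(self, *exc):
        # Restore the previous value so nested contexts unwind correctly.
        _state.enabled = self._prev
        return False

print(grad_enabled())      # True by default
with no_grad():
    print(grad_enabled())  # False inside the context
print(grad_enabled())      # True again afterwards
```

Restoring the previous value (rather than unconditionally re-enabling) is what makes nesting safe.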
Simple neural network:

```python
import synapx
import synapx.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(784, 128)
        self.linear2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.linear1(x))
        return self.linear2(x)

model = SimpleNet()
x = synapx.randn((32, 784))
y = model(x)
print(y.shape)  # (32, 10)
```
## Installation

```bash
pip install synapx
```

The package automatically detects your PyTorch installation and uses its bundled libtorch as the backend, so make sure you have PyTorch installed:

```bash
pip install torch
```
## Project Structure

The project is organized into four main parts:

### libsynapx/
This is the C++ library that powers everything. Here's where the critical components live:
- Tensor class: A wrapper around libtorch tensors
- Autograd engine: Implements backward functions, backpropagation algorithm, and graph components
- Dynamic computation graph: Builds the DAG as operations are chained
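The core idea behind these components can be illustrated in a few lines of pure Python: each operation records its inputs and a local backward rule, building the DAG as it runs, and `backward()` replays the chain rule in reverse topological order. This is only a scalar-valued sketch of reverse-mode autodiff, not SynapX's actual C++ implementation:

```python
class Scalar:
    """Tiny autograd node: records parent nodes and a local backward rule."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self._parents = parents
        self._backward_rule = None  # propagates self.grad to the parents

    def __add__(self, other):
        out = Scalar(self.value + other.value, (self, other))
        def rule():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_rule = rule
        return out

    def __mul__(self, other):
        out = Scalar(self.value * other.value, (self, other))
        def rule():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        out._backward_rule = rule
        return out

    def backward(self):
        # Topologically sort the dynamic DAG, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(node):
            if node not in seen:
                seen.add(node)
                for p in node._parents:
                    visit(p)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            if node._backward_rule:
                node._backward_rule()

a, b = Scalar(2.0), Scalar(3.0)
loss = a * b + a        # d(loss)/da = b + 1 = 4, d(loss)/db = a = 2
loss.backward()
print(a.grad, b.grad)   # 4.0 2.0
```

The accumulation with `+=` is what makes gradients correct when a tensor (here `a`) is used more than once in the graph.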
Dependencies: The main dependencies are pybind11, libtorch, spdlog, and VCPKG. Both pybind11 and libtorch can be installed with pip and will be automatically detected during the CMake build process. SynapX links to the shared libraries of libtorch from your Python torch installation.
Compatibility note: building SynapX against the CPU-only libtorch is sufficient to support both CPU-only and CUDA installs of PyTorch; building against the CUDA version does not work the other way around.
### synapx/
Once the C++ code is compiled and installed, this directory contains the complete Python package with all components linked together: the Deep Learning library, tensor operations, and autograd engine.
The package is compiled for each supported libtorch minor version. A build compiled against libtorch 2.3.X won't necessarily work with 2.4.X, and the same applies across Python versions. The correct C++ library is loaded dynamically at runtime by checking your installed PyTorch version.
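The runtime dispatch described above can be pictured as follows. The `synapx_torch…` library naming in this sketch is invented for illustration; only the idea of keying the compiled binary off the torch version and the running interpreter comes from the text:

```python
import sys

def select_backend_library(torch_version: str) -> str:
    """Pick a compiled-extension name from the installed torch version.

    The naming scheme here is hypothetical; it only illustrates dispatching
    on the libtorch minor version and the Python version at import time.
    """
    # Strip any local suffix like '+cu121', then keep major.minor only.
    major, minor = torch_version.split("+")[0].split(".")[:2]
    # Tag for the running interpreter, e.g. 'cp311'.
    py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return f"synapx_torch{major}.{minor}-{py_tag}"

print(select_backend_library("2.5.1+cu121"))
```

In the real package, `torch.__version__` would supply the version string before the matching shared library is loaded.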
### examples/

Contains practical examples showcasing the library in action. You'll need additional dependencies for these:

```bash
pip install scikit-learn numpy matplotlib pkbar
```
The examples include simple problems like make_moons and MNIST classification. You can easily switch between PyTorch and SynapX backends to compare performance and functionality.
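Because SynapX mirrors the PyTorch API, switching backends can be as simple as importing a different module. A sketch of the idea, using a hypothetical `get_backend` helper (not part of either library), demonstrated with a stdlib module since neither backend may be installed where this runs:

```python
import importlib

def get_backend(name: str):
    """Load a tensor backend module by name, e.g. 'torch' or 'synapx'.

    Example scripts written against the returned module then run unchanged
    on either backend, since the two expose the same API surface.
    """
    return importlib.import_module(name)

# Stand-in demo with a stdlib module:
backend = get_backend("math")
print(backend.sqrt(9.0))  # 3.0
```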
Performance note: there is still a significant performance gap between PyTorch and SynapX, even though both run the same underlying libtorch operations. This is an early version focused on getting everything working correctly; the optimization phase comes next, and there are known bottlenecks and excessive memory consumption issues that can be addressed.
### tests/

Contains tests for almost every tensor operation supported in SynapX, as well as tests for layers, activations, and other components.

Run all tests:

```bash
pip install pytest
python -m pytest tests
```

To compare PyTorch and SynapX performance for each implemented operation:

```bash
python -m pytest ./tests/test_ops.py -s
```
## Building from Source

### Setting up CMake presets

Create a CMakeUserPresets.json file in the libsynapx/ directory.
Windows example:

```json
{
  "version": 3,
  "configurePresets": [
    {
      "name": "vs2022-release",
      "inherits": "windows-release",
      "displayName": "Visual Studio 2022 Windows Release",
      "generator": "Visual Studio 17 2022",
      "environment": {
        "VCPKG_ROOT": "C:\\Users\\<user>\\vcpkg"
      },
      "cacheVariables": {
        "VCPKG_TARGET_TRIPLET": "x64-windows",
        "CMAKE_CXX_COMPILER": "cl",
        "BUILD_PYTHON_BINDINGS": "ON",
        "BUILD_EXAMPLES": "OFF"
      }
    }
  ],
  "buildPresets": [
    {
      "name": "vs2022-release",
      "configurePreset": "vs2022-release",
      "displayName": "Visual Studio 2022 Windows Release Build",
      "configuration": "Release"
    }
  ]
}
```
Linux example:

```json
{
  "version": 3,
  "configurePresets": [
    {
      "name": "ninja-release",
      "inherits": "linux-release",
      "displayName": "Ninja g++ Release",
      "generator": "Ninja",
      "environment": {
        "VCPKG_ROOT": "/home/<user>/vcpkg"
      },
      "cacheVariables": {
        "VCPKG_TARGET_TRIPLET": "x64-linux",
        "CMAKE_C_COMPILER": "/usr/bin/gcc-12",
        "CMAKE_CXX_COMPILER": "/usr/bin/g++-12",
        "CMAKE_CUDA_HOST_COMPILER": "/usr/bin/gcc-12",
        "BUILD_PYTHON_BINDINGS": "ON",
        "BUILD_EXAMPLES": "OFF"
      }
    }
  ]
}
```
Note: Explicitly specify CMAKE_CUDA_HOST_COMPILER when building against libtorch+cuda libraries.
### Building

```bash
cd libsynapx

# Windows
make rebuild preset=vs2022-release target=install

# Linux
make rebuild preset=ninja-release target=install
```
### Generating Python stubs

After compilation, generate stub files with:

```bash
pip install pybind11-stubgen
python scripts/generate_pyi.py
```
## Supported Versions
SynapX is compiled for specific combinations of Python and libtorch versions. With each release, GitHub Actions automatically builds the necessary wheels and uploads them to PyPI. Currently supported versions:
| Python Version | PyTorch Versions | Status |
|---|---|---|
| 3.9 | 2.4.X, 2.5.X, 2.6.X, 2.7.X | ✅ |
| 3.10 | 2.4.X, 2.5.X, 2.6.X, 2.7.X | ✅ |
| 3.11 | 2.4.X, 2.5.X, 2.6.X, 2.7.X | ✅ |
| 3.12 | 2.4.X, 2.5.X, 2.6.X, 2.7.X | ✅ |
## Current Limitations and TODO

Things that need work and will probably be implemented in upcoming versions:

- Improve autograd speed: there are significant bottlenecks that can be optimized
- Add tensor hooks: for inspecting or modifying the backward pass from Python. The `retain_grad()` function doesn't work yet because it should be implemented as a tensor hook
- Fix `__iter__` in the Python tensor class: the current implementation works but is not optimal
- Add remaining conv functionality: 1D and 2D convolutions, maxpool, avgpool
- Add a CNN MNIST example
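Tensor hooks of the kind described in the list above usually follow a simple pattern: each tensor keeps a list of callables that the engine invokes when a gradient flows through it, and a hook's return value (if any) replaces that gradient. A hypothetical pure-Python sketch of the mechanism, not SynapX's planned API:

```python
class Tensor:
    """Minimal stand-in showing how backward hooks could be wired in."""
    def __init__(self):
        self.grad = None
        self._hooks = []

    def register_hook(self, fn):
        # fn receives the incoming gradient; returning a value replaces it,
        # returning None leaves it unchanged (inspection-only hooks).
        self._hooks.append(fn)
        return fn

    def _apply_hooks(self, grad):
        # Called by the engine when a gradient arrives for this tensor.
        for fn in self._hooks:
            out = fn(grad)
            if out is not None:
                grad = out
        return grad

t = Tensor()
seen = []
t.register_hook(lambda g: seen.append(g))  # inspect only (returns None)
t.register_hook(lambda g: g * 2.0)         # modify the gradient
print(t._apply_hooks(1.5))  # 3.0 (inspected, then doubled)
print(seen)                 # [1.5]
```

A `retain_grad()` built on this would simply be a hook that stores the incoming gradient on the tensor instead of discarding it.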
Future ideas (contributions welcome!):
- Visualize computation graphs with graphviz (forward and backward)
- Add PyNode support to let users define backward functions in Python that get called from the C++ engine
- Multi-backend support: Instead of relying only on libtorch, add the ability to switch between libtorch, xtensor, etc. This would require decoupling some logic from the current Tensor class and restructuring parts of the codebase, but it could be interesting for comparing the autograd engine with different tensor operation backends
The last point would be quite ambitious. Adding support for the wide range of operations needed for a complete autograd system using libraries not specifically designed for N-dimensional tensors with complex indexing and slicing (like Eigen, Blaze, or similar linear algebra libraries) would be a significant undertaking.
## Download files

Built distributions of synapx 0.1.0 (all uploaded with twine/6.1.0 from CPython/3.12.9, without Trusted Publishing; no source distribution was published):

| File | Size | Tags | SHA256 | MD5 | BLAKE2b-256 |
|---|---|---|---|---|---|
| synapx-0.1.0-cp312-cp312-win_amd64.whl | 1.9 MB | CPython 3.12, Windows x86-64 | 75e43515f2732c240e51941f670b583af134020599d4363820aae05e3e9d144e | d1c8084ada10c128020e89bdfce926f9 | 5ff7bc23ade6587958ecdbb0da11b81408994450ccbe77cecbfb411a0616ad2b |
| synapx-0.1.0-cp312-cp312-manylinux_2_34_x86_64.whl | 2.3 MB | CPython 3.12, manylinux: glibc 2.34+ x86-64 | 00812dfebe3c892ae67265ca68acfa1332ccf2c1d7ccadfca4557a910d976101 | 36f4781e072ebc9bd50f9d09c088aa82 | d51a156f9c67521b11372b8017ee0505bd9aaac5cdfb82b64bbdc3af9e38abc7 |
| synapx-0.1.0-cp311-cp311-win_amd64.whl | 1.8 MB | CPython 3.11, Windows x86-64 | 1a2b1ee578cd5f010735ceac6128c6c6f1b756616ed9351977e7fe24b5cd7116 | d5becf75ad67c5cc4f6d57de1f1f3eb3 | 8c981e6d9da86a039abe72b8718440d29bca681deb267d1f3b4585fac8d29d6f |
| synapx-0.1.0-cp311-cp311-manylinux_2_34_x86_64.whl | 2.3 MB | CPython 3.11, manylinux: glibc 2.34+ x86-64 | 0fe5c55cc19346619af59cc3bb5b52e92f7cecf5886bb7c1fc83a6f21f214fb5 | ada51b781b6f62bbfe6174c75634d902 | 5ff296f217387b68eaa0523a9cf6669975f02f84ca113b12155dc82e3f49c6eb |
| synapx-0.1.0-cp310-cp310-win_amd64.whl | 1.8 MB | CPython 3.10, Windows x86-64 | e9ad16556d62c4bcfd396e9486dcdf52c0d62ea454495e86c83e77397fac8920 | 287bda9a5b607b786c0efb5d622198cd | be3c7747a411da3873696e344caa69f6c961a4c8c23f71afb91d2a6b5c980548 |
| synapx-0.1.0-cp310-cp310-manylinux_2_34_x86_64.whl | 2.3 MB | CPython 3.10, manylinux: glibc 2.34+ x86-64 | 9731a626138790509c3d2709d0d9136b0bccf80faf222099c6c5dd15a1ab3336 | 9f4486cea707ead948f717cc8a97bffa | 39c219f3aac1f2db56b0a716a59ec1cd65d5587fb4b4f37ce0b81380e176fda5 |
| synapx-0.1.0-cp39-cp39-win_amd64.whl | 1.9 MB | CPython 3.9, Windows x86-64 | ef2339f19a72c07c1b76f80efa0fc919ceef240a67dd36bfdcd0ac3720c94fd3 | 2c92dfb68e868595ee0e81c6918bd1d4 | 8b6d4779adf879f8f6fa98d2615ce0b9257289e35fa0fcc24bdecaf2073b2ae8 |
| synapx-0.1.0-cp39-cp39-manylinux_2_34_x86_64.whl | 2.3 MB | CPython 3.9, manylinux: glibc 2.34+ x86-64 | 217bacae1c0368394a7db4e13f43e6bf61170b355776785149ac2ad43eb9ab6e | dada7f4dc8328680e92607ba6722cac2 | 7d2f12d224a617c21f402a3a77da392b03048417bceb0ea8bdf3d76e4138e360 |