TTorch
TTorch is a lightweight tensor and automatic differentiation library written in C++ with Python bindings.
The project explores the internal architecture of modern deep learning frameworks by implementing core components such as:
- tensor operations
- automatic differentiation (autograd)
- Python bindings
- modular compute kernels
The long-term goal is to build a minimal deep learning backend similar to PyTorch, while keeping the codebase simple and educational.
Key Features
- High-performance C++ core
- Automatic differentiation engine
- Python bindings for easy use
- Modular tensor operation system
- CMake-based build system
- Python package distribution via pyproject.toml
Project Structure
TTorch/
│
├── pyproject.toml
├── CMakeLists.txt
├── README.md
├── LICENSE
│
├── src/
│   ├── tensor.cpp
│   ├── autograd.cpp
│   └── bindings.cpp
│
├── include/
│   ├── tensor.h
│   └── autograd.h
│
├── python/
│   └── my_cpp_lib/
│       └── __init__.py
│
└── tests/
    └── test.cpp
Directory Overview
| Directory | Purpose |
|---|---|
| src/ | C++ source implementation |
| include/ | Public C++ headers |
| python/ | Python interface package |
| tests/ | Unit tests |
| pyproject.toml | Python build configuration |
| CMakeLists.txt | C++ build configuration |
Installation
Install from Source
Clone the repository:
git clone https://github.com/DuongAnh1212/TTorch.git
cd TTorch
Install build tools:
pip install build
Build the Python package:
python -m build
Install the generated wheel:
pip install dist/*.whl
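To check that the bindings import correctly, you can then run the following (the module name here is an assumption based on the python/my_cpp_lib/ package in the project structure; adjust it if the installed module is named differently):
python -c "import my_cpp_lib"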
Quick Example
#include "tensor.h"
// Create tensors
Tensor a = Tensor::form({2, 3}, {1, 2, 3, 4, 5, 6});
Tensor b = Tensor::ones({2, 3});
// Element-wise operations
Tensor c = a.add(b); // element-wise addition
Tensor d = a.multiply(b); // element-wise multiplication
Tensor e = a.add_int(10.0); // broadcast scalar addition
Tensor f = a.scale_int(2.0); // broadcast scalar multiply
// Matrix operations
Tensor t = a.transpose(); // or a.T()
Tensor ab = a.dot(a.T()); // matrix multiplication → (2,2)
// Reductions
Tensor s = a.sum(0); // sum along axis 0 → shape (3,)
Tensor m = a.mean(1); // mean along axis 1 → shape (2,)
// Shape manipulation
Tensor flat = a.flatten(); // → shape (6,)
Tensor r = a.reshape({3, 2}); // → shape (3,2)
// Print
a.print();
Autograd Example
#include "tensor.h"
#include "autograd.h"
Tensor x = Tensor::form({2, 2}, {1, 2, 3, 4});
x.requires_grad = true;
Tensor y = relu(x); // forward pass with grad tracking
y.backward(); // backpropagate gradients
x.grad->print(); // gradient of x
x.zero_grad(); // reset gradients
Development
Build with CMake
mkdir build
cd build
cmake ..
make
Run tests
pytest
Architecture Overview
The library follows a layered architecture similar to modern deep learning frameworks:
Python API
↓
Python Bindings (pybind11)
↓
C++ Core Library
↓
Tensor + Autograd Engine
↓
CPU / Future GPU Kernels
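The bindings layer is built with pybind11. As a rough illustration of how the C++ core can be exposed to Python, a minimal module might look like the sketch below. This is an assumption in the spirit of src/bindings.cpp, not its actual contents; the module name and the set of bound methods are illustrative.
// Hypothetical pybind11 bindings sketch; not the project's actual bindings.cpp.
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // converts std::vector arguments to/from Python lists
#include "tensor.h"

namespace py = pybind11;

PYBIND11_MODULE(my_cpp_lib, m) {
    m.doc() = "TTorch: lightweight tensor and autograd library";

    py::class_<Tensor>(m, "Tensor")
        .def_static("zeros", &Tensor::zeros)  // Tensor.zeros([2, 3]) from Python
        .def_static("ones", &Tensor::ones)
        .def_static("form", &Tensor::form)    // Tensor.form([2, 3], data)
        .def("add", &Tensor::add)
        .def("multiply", &Tensor::multiply)
        .def("dot", &Tensor::dot)
        .def("print", &Tensor::print);
}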
Tensor API
Static Constructors
| Method | Description |
|---|---|
| Tensor::zeros({rows, cols}) | Tensor filled with 0.0 |
| Tensor::ones({rows, cols}) | Tensor filled with 1.0 |
| Tensor::custom({rows, cols}, val) | Tensor filled with a custom scalar |
| Tensor::form({rows, cols}, data) | Tensor from a vector<double> |
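For example, following the signatures above:
Tensor z = Tensor::zeros({2, 2});        // [[0, 0], [0, 0]]
Tensor o = Tensor::ones({2, 2});         // [[1, 1], [1, 1]]
Tensor c = Tensor::custom({2, 2}, 3.5);  // every element set to 3.5
Tensor f = Tensor::form({2, 2}, {1, 2, 3, 4});  // from a vector<double>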
Shape & Access
| Method | Description |
|---|---|
| dims() | Returns the shape as a vector<int> |
| ndim() | Returns the number of dimensions |
| size(int dim) | Returns the size of a given dimension |
| at({i, j}) | Element access by index |
| value(data) | Sets tensor data from a vector<double> |
| slice(v, start, end) | Extracts a sub-range from a flat data vector |
| reshape(newshape) | Returns a reshaped tensor (same total element count) |
| view(newshape) | Alias for reshape that also prints the result |
| flatten() | Returns a 1D tensor |
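A quick illustration of the accessors, assuming row-major data layout (expected results shown as comments):
Tensor a = Tensor::form({2, 3}, {1, 2, 3, 4, 5, 6});
a.dims();      // {2, 3}
a.ndim();      // 2
a.size(1);     // 3
a.at({1, 2});  // 6.0 (row 1, column 2)
Tensor r = a.reshape({3, 2});  // same 6 elements, shape (3, 2)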
Math Operations
| Method | Description |
|---|---|
| add(Tensor) | Element-wise addition (same shape) |
| add_int(double) | Adds a scalar to every element |
| scale_int(double) | Multiplies every element by a scalar |
| multiply(Tensor) | Element-wise multiplication (same shape) |
| dot(Tensor) | Matrix multiplication; supports 1D and 2D tensors |
| transpose() / T() | Transposes a 2D tensor |
| sum(axis) | Sum along an axis (1D or 2D) |
| mean(axis) | Mean along an axis (1D or 2D) |
Autograd
| Method | Description |
|---|---|
| backward() | Backpropagates gradients from this tensor |
| zero_grad() | Resets the accumulated gradient to zero |
Display
| Method | Description |
|---|---|
| print() | Pretty-prints the tensor with nested brackets |
Autograd API
TTorch implements a dynamic computation graph. Each operation attaches a GradFn to the output tensor, enabling automatic gradient computation via backward().
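To make that concrete, the pattern can be sketched as follows. This is an illustrative reconstruction under assumed names, not the actual autograd.h:
// Illustrative sketch of the GradFn pattern (assumed interface, not the real autograd.h).
#include <vector>

struct Tensor;  // the library's tensor type

// Each operation attaches one of these to its output tensor.
struct GradFn {
    std::vector<Tensor*> inputs;                           // tensors the op consumed
    virtual void backward(const Tensor& grad_output) = 0;  // distributes the gradient
    virtual ~GradFn() = default;
};

// Node recorded by multiply(a, b): grad_a = grad * b, grad_b = grad * a.
struct MulBackward : GradFn {
    void backward(const Tensor& grad_output) override {
        // accumulate grad_output * b into a.grad and grad_output * a into b.grad;
        // accumulating with += lets one tensor feed several downstream ops
    }
};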
Gradient Functions
| GradFn | Forward op | Backward rule |
|---|---|---|
| AddBackward | add(a, b) | grad flows equally to a and b |
| AddScalarBackward | add_int(a, s) | grad flows to a unchanged |
| MulBackward | multiply(a, b) | grad_a = grad * b, grad_b = grad * a |
| ScaleBackward | scale_int(a, s) | grad_a = grad * s |
| DotBackward | dot(a, b) | grad_a = grad @ b.T, grad_b = a.T @ grad |
| TransposeBackward | transpose(a) | grad_a = grad.transpose() |
| FlattenBackward | flatten(a) | grad_a = grad.reshape(original_shape) |
| SumBackward | sum(a, axis) | broadcast grad back to the original shape |
| MeanBackward | mean(a, axis) | grad / N, broadcast back to the original shape |
| ReLUBackward | relu(a) | grad_a = grad * (a > 0) |
| SigmoidBackward | sigmoid(a) | grad_a = grad * sigmoid(a) * (1 - sigmoid(a)) |
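As a concrete check of the MulBackward rule (and assuming, as in the Autograd Example above, that backward() seeds the output gradient with ones), each input should end up holding the other's values as its gradient:
Tensor a = Tensor::form({2, 2}, {1, 2, 3, 4});
Tensor b = Tensor::form({2, 2}, {5, 6, 7, 8});
a.requires_grad = true;
b.requires_grad = true;

Tensor c = a.multiply(b);  // attaches MulBackward
c.backward();

a.grad->print();  // {5, 6, 7, 8}  (grad_a = grad * b, grad = ones)
b.grad->print();  // {1, 2, 3, 4}  (grad_b = grad * a)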
Activation Functions
| Function | Description |
|---|---|
| relu(Tensor& x) | Element-wise ReLU with gradient tracking |
| sigmoid(Tensor& x) | Element-wise sigmoid with gradient tracking |
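Both functions follow the same usage pattern as the relu call in the Autograd Example:
Tensor x = Tensor::form({1, 3}, {-1, 0, 2});
x.requires_grad = true;
Tensor y = sigmoid(x);   // forward pass, attaches SigmoidBackward
y.backward();            // applies grad * sigmoid(x) * (1 - sigmoid(x))
x.grad->print();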
Roadmap
Planned development milestones:
- Core tensor data structure
- Tensor math operations (add, multiply, dot, transpose, sum, mean)
- Autograd computation graph (GradFn architecture)
- Backpropagation engine (backward, zero_grad)
- Activation functions (relu, sigmoid)
- Tensor::backward() engine (build_topo + reverse walk; see the sketch after this list)
- Python bindings (pybind11)
- Neural network modules
- Optimizers
- GPU backend support
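The backward() engine noted above follows the standard reverse-mode recipe: build a topological order of the graph reachable from the output, then walk it in reverse. A minimal sketch of that idea, using an assumed node shape rather than the project's real classes:
// Sketch of the build_topo + reverse-walk pattern (assumed node shape).
#include <algorithm>
#include <memory>
#include <unordered_set>
#include <vector>

struct Tensor;
struct GradFn {
    std::vector<Tensor*> inputs;                           // tensors the op consumed
    virtual void backward(const Tensor& grad_output) = 0;  // routes grad to inputs
    virtual ~GradFn() = default;
};
struct Tensor {
    std::shared_ptr<GradFn> grad_fn;  // set by the op that produced this tensor
    std::shared_ptr<Tensor> grad;     // accumulated gradient
};

// Post-order DFS: every node is appended after all of its inputs.
void build_topo(Tensor* node, std::unordered_set<Tensor*>& visited,
                std::vector<Tensor*>& order) {
    if (node == nullptr || visited.count(node)) return;
    visited.insert(node);
    if (node->grad_fn)
        for (Tensor* input : node->grad_fn->inputs)
            build_topo(input, visited, order);
    order.push_back(node);
}

// Reverse walk: after seeding the root's grad with ones, apply each
// node's backward rule so gradients flow toward the leaves.
void backward_from(Tensor* root) {
    std::unordered_set<Tensor*> visited;
    std::vector<Tensor*> order;
    build_topo(root, visited, order);
    std::reverse(order.begin(), order.end());
    for (Tensor* node : order)
        if (node->grad_fn && node->grad)
            node->grad_fn->backward(*node->grad);
}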
Dependencies
Core dependencies:
- C++17
- CMake
- Python 3.8+
- pybind11
Additional dependencies may be introduced as the project evolves.
Contributing
Contributions are welcome.
Steps to contribute:
- Fork the repository
- Create a feature branch
- Implement your changes
- Submit a pull request
Please ensure that tests pass before submitting contributions.
License
This project is licensed under the terms described in the LICENSE file.
Status
This project is currently experimental and under active development.
APIs and internal design may change as the project evolves.
Inspiration
This project is inspired by the architecture of modern ML frameworks:
- PyTorch
- TensorFlow
- tinygrad
- NumPy
File details
Details for the file ttorch-0.1.0.tar.gz.
File metadata
- Download URL: ttorch-0.1.0.tar.gz
- Upload date:
- Size: 41.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 877e316bf1062d665cd493589e837ee3f1c7c96d4523324395a2e2f096a28465 |
| MD5 | ef4526541cabc62ce082950b6fdf0455 |
| BLAKE2b-256 | 8786d58622b7d88f1373577092b74f55891cbbfd28cf4a449c8a4530f30b9535 |
File details
Details for the file ttorch-0.1.0-cp313-cp313-macosx_26_0_arm64.whl.
File metadata
- Download URL: ttorch-0.1.0-cp313-cp313-macosx_26_0_arm64.whl
- Upload date:
- Size: 5.4 kB
- Tags: CPython 3.13, macOS 26.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1146556cd841cd326fe8592018401e18e7274d45b7eaf234fa8a3eeceb5cedfb |
| MD5 | 9d3f60fba13c241752e027a06602e58d |
| BLAKE2b-256 | b572108d3563886d809fae70943d9cc458f16be6e5487da0f437778f5f5170ea |