# log_signatures_pytorch

Simple PyTorch implementation of log-signatures: differentiable log-signature and signature kernels implemented in PyTorch, with both CPU-friendly and GPU-parallel execution paths.
## What you'll find

- Batched signature and log-signature computation for tensors shaped `(batch, length, dim)`, with optional streaming outputs at every step. For a single path, add a leading dimension via `unsqueeze(0)`.
- Sliding-window signatures and log-signatures that reuse streamed prefixes (Chen's identity) instead of recomputing each window independently.
- Hall-basis utilities (`hall_basis`, `logsigdim`, `logsigkeys`) plus Lyndon-words helpers (`lyndon_words`, `logsigdim_words`, `logsigkeys_words`) for inspecting dimensions and basis labels.
- Two log-signature coordinate systems: Signatory-style "words" (Lyndon, default) and Hall.
- Two log-signature backends: the default signature→log path, and an incremental sparse BCH implementation for depths up to 4 (falls back to the default otherwise).
- A signature implementation structured after keras_sig, but focused solely on PyTorch.
- Minimal dependencies.
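The Chen-identity reuse mentioned above can be illustrated with a small, self-contained sketch in plain Python (not the library's implementation): the depth-2 signature of a concatenated path equals the truncated tensor-algebra product of the two pieces' signatures, so a window's signature can be assembled from streamed prefixes instead of being recomputed from scratch.

```python
# Illustrative sketch of Chen's identity at depth 2 (plain Python, not the
# library's code). A depth-2 signature is stored as (level1, level2), where
# level1 is a d-vector and level2 a d x d matrix; the constant level 0 is
# implicit.

def seg_sig(delta):
    """Depth-2 signature of one straight segment: exp(delta), truncated."""
    d = len(delta)
    lvl1 = list(delta)
    lvl2 = [[delta[i] * delta[j] / 2.0 for j in range(d)] for i in range(d)]
    return lvl1, lvl2

def chen_mul(a, b):
    """Chen's identity: the signature of a concatenation is the truncated
    tensor-algebra product of the two signatures."""
    a1, a2 = a
    b1, b2 = b
    d = len(a1)
    lvl1 = [a1[i] + b1[i] for i in range(d)]
    lvl2 = [[a2[i][j] + b2[i][j] + a1[i] * b1[j] for j in range(d)]
            for i in range(d)]
    return lvl1, lvl2

def path_sig(points):
    """Depth-2 signature of a piecewise-linear path, one Chen step per segment."""
    d = len(points[0])
    sig = ([0.0] * d, [[0.0] * d for _ in range(d)])  # empty-path signature
    for p, q in zip(points, points[1:]):
        sig = chen_mul(sig, seg_sig([q[k] - p[k] for k in range(d)]))
    return sig

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, -1.0)]
full = path_sig(pts)
# Splitting the path and multiplying the halves reproduces the full signature.
glued = chen_mul(path_sig(pts[:3]), path_sig(pts[2:]))
```

Level 1 of the result is just the total displacement; the off-diagonal level-2 entries carry the area-like terms that make windowed reuse non-trivial.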
## Installation

Requires Python 3.13+ and PyTorch ≥ 2.9 (CPU and CUDA builds both work). To install from PyPI with pip:

```bash
pip install log-signatures-pytorch
```

From the repository root:

```bash
uv venv
source .venv/bin/activate
uv sync  # installs runtime deps + project in editable mode
```
Verify PyTorch is available:

```bash
python - <<'PY'
import torch
print("torch version:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
PY
```
## Quick start

### Signature and log-signature of a single path

```python
import torch
from log_signatures_pytorch import signature, log_signature, logsigdim_words

path = torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]).unsqueeze(0)

sig = signature(path, depth=2)
print(sig.shape)  # torch.Size([1, 6]) = sum(width**k for k in 1..depth)

log_sig = log_signature(path, depth=2)
print(log_sig.shape)  # torch.Size([1, 3]) = logsigdim_words(2, 2)
print("logsigdim_words:", logsigdim_words(2, 2))  # 3

# Lyndon words coordinates (Signatory-style)
log_sig_words = log_signature(path, depth=2, mode="words")
print(log_sig_words.shape)  # torch.Size([1, 3])
```
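The shapes above follow from two classical counting formulas, which can be checked independently of the library: the signature (empty word excluded) has `sum(width**k for k in 1..depth)` coordinates, and the Lyndon-basis log-signature has as many coordinates as there are Lyndon words of length up to `depth`, counted by Witt's necklace formula. A plain-Python sketch:

```python
# Plain-Python sketch of the dimension formulas (not the library's code).

def mobius(n):
    """Möbius function via trial-division factorisation."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def sig_dim(width, depth):
    """Number of signature coordinates, empty word excluded."""
    return sum(width ** k for k in range(1, depth + 1))

def logsig_dim(width, depth):
    """Number of Lyndon words of length <= depth (Witt's formula)."""
    total = 0
    for k in range(1, depth + 1):
        total += sum(mobius(d) * width ** (k // d)
                     for d in range(1, k + 1) if k % d == 0) // k
    return total
```

For `width=2, depth=2` this gives 6 signature coordinates and 3 log-signature coordinates, matching the printed shapes.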
### Batched computation and streaming outputs

```python
batch_paths = torch.tensor([
    [[0.0, 0.0], [1.0, 1.0]],
    [[0.0, 0.0], [2.0, 2.0]],
])

sig = signature(batch_paths, depth=2)
print(sig.shape)  # torch.Size([2, 6])

log_sig_stream = log_signature(batch_paths, depth=2, stream=True)
print(log_sig_stream.shape)  # torch.Size([2, 1, 3]) -> (batch, steps, logsigdim)

# Streaming for a single path returns one row per increment (batch=1)
sig_stream = signature(path, depth=2, stream=True)
print(sig_stream.shape)  # torch.Size([1, 2, 6])
```
### Sliding-window signatures and log-signatures

```python
import torch
from log_signatures_pytorch import windowed_signature, windowed_log_signature

path = torch.tensor([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [3.0, -1.0]]).unsqueeze(0)
width = path.shape[-1]
window_size = 4
hop_size = 2

win_sig = windowed_signature(path, depth=2, window_size=window_size, hop_size=hop_size)
print(win_sig.shape)  # torch.Size([batch, num_windows, 6])

win_logsig = windowed_log_signature(
    path, depth=2, window_size=window_size, hop_size=hop_size, mode="words"
)
print(win_logsig.shape)  # torch.Size([batch, num_windows, logsigdim_words(width, 2)])
```
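As a rough sanity check on the window axis, a common sliding-window convention (an assumption here, not verified against the library's exact boundary handling) gives `num_windows = (length - window_size) // hop_size + 1` for full, non-padded windows:

```python
def num_windows(length, window_size, hop_size):
    """Hypothetical window count under a full (non-padded) sliding-window
    convention; verify against your installed version's output shape."""
    if window_size > length:
        return 0
    return (length - window_size) // hop_size + 1

# For the 4-step path above with window_size=4, hop_size=2 this gives 1 window.
```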
### Hall basis helpers

```python
from log_signatures_pytorch import hall_basis, logsigkeys

basis = hall_basis(width=2, depth=2)
print(basis)  # [1, 2, (1, 2)]

keys = logsigkeys(width=2, depth=2)
print(keys)  # ['1', '2', '[1,2]'] (matches esig format)

# Lyndon words helpers
from log_signatures_pytorch import lyndon_words, logsigkeys_words

words = lyndon_words(width=2, depth=3)
print(words)  # [(1,), (2,), (1, 2), (1, 1, 2), (1, 2, 2)]
print(logsigkeys_words(width=2, depth=3))
```
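The Lyndon words printed above can be regenerated independently with Duval's algorithm. This plain-Python sketch (not the library's implementation) sorts the result by length and then lexicographically, matching the order shown:

```python
def duval_lyndon_words(width, depth):
    """All Lyndon words of length <= depth over the alphabet {1..width}
    (Duval's algorithm), sorted by length then lexicographically."""
    w, out = [1], []
    while w:
        out.append(tuple(w))
        m = len(w)
        # Extend w periodically up to the maximum length...
        while len(w) < depth:
            w.append(w[len(w) - m])
        # ...then strip trailing maximal letters and increment the last one.
        while w and w[-1] == width:
            w.pop()
        if w:
            w[-1] += 1
    return sorted(out, key=lambda word: (len(word), word))
```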
## Choosing computation mode

- `gpu_optimized`: defaults to `True` when the input tensor is on CUDA. Set it to `False` to force the CPU scan path.
- `method`: `log_signature(..., method="bch_sparse")` uses the incremental BCH routine for depths supported by `HallBCH` (depth ≤ 4); otherwise it falls back to the default path.
- `mode`: `log_signature(..., mode="words"|"hall")` chooses the coordinate basis. The default is `"words"`. The BCH backend currently requires `mode="hall"`; the default path supports both.

Signature outputs exclude the empty word (the dimension is `sum(width**k for k in 1..depth)`); use `logsigdim(width, depth)` to size log-signature outputs.
## GPU compile recommendations

- For fixed shapes with many repeated calls, `torch.compile` with mode `reduce-overhead` gives the fastest runtime for `_batch_signature_gpu` (~0.08–0.12 ms in our sweeps) and for `log_signature` (similarly sized speedups). First-call compile time can be large, especially for log-signatures, so cache compiled artifacts per shape if you need to reuse them.
- For workloads with many varying shapes, or when compile latency matters more than per-call speed, prefer `none` (no compile) or the lighter `default` mode. `default` is slower than `reduce-overhead` at runtime but compiles much faster; this tradeoff is more pronounced for log-signatures.
- The benchmark helper `benchmarks/benchmark_batch_signature_gpu.py` supports `--target signature|log_signature`, `--compile-modes none reduce-overhead default`, `--measure-compile-time`, and per-shape compile caching. CSVs land under `benchmarks/results/` by default.
## Testing and verification

- Run the suite: `pytest tests -q`
- Mathematical property checks are documented in `tests/mathematical_verification_guide.md`.
## Documentation

Documentation is built with MkDocs, using mkdocstrings and the Material theme. To build and serve it:

```bash
# Build static site
uv run mkdocs build

# Serve locally (with auto-reload on changes)
uv run mkdocs serve
```

The documentation will be available at http://127.0.0.1:8000 when serving locally.
## References

- esig: https://github.com/datasig-ac-uk/esig
- signatory: https://github.com/patrick-kidger/signatory
- keras-sig: https://github.com/remigenet/keras_sig
- Hall basis: M. Hall, "A basis for free Lie rings and higher commutators in free groups" (1950)
## License

MIT
## Citation

If you use this software in your research or in your project, please cite it as follows.

BibTeX:

```bibtex
@software{log_signatures_pytorch,
  author = {Aune, Erlend},
  title = {log-signatures-pytorch: Differentiable log-signature and signature kernels in PyTorch},
  version = {0.1.x},
  url = {https://github.com/froskekongen/log_signatures_pytorch},
  year = {2025},
  license = {MIT}
}
```

Plain text:

Aune, Erlend. (2025). log-signatures-pytorch: Differentiable log-signature and signature kernels in PyTorch (Version 0.1.x) [Computer software]. https://github.com/froskekongen/log_signatures_pytorch