Toroidal topology primitives for LLM coherence research (v7: replication update — inference-time bias null result, prompt hardening effective)
Topological Coherence
Toroidal attention constraints for reducing LLM hallucination
Sylvain Cormier | Paraxiom Research | 2026
Status (v7 — March 2026 Replication Update)
Inference-time logit bias: a comprehensive six-phase replication found that the v2 TruthfulQA improvements (+0.2pp to +2.8pp) fell within LLM-judge sampling variance. The exact T&I replication came in at −2.0pp (p = 0.22, not significant).
What works:
- Prompt hardening: −14pp hallucination reduction (p=0.05) — the active ingredient in Coherence Shield
- Training-time topology: 28x lower drift than random sparsity in a toy model (topology matters, not just sparsity)
- Library primitives: Tonnetz geometry, attention masks, spectral gap, drift measurement all valid
Recommended direction: training-time Karmonic spectral regularization (untested; promising based on the toy model)
- Paper: DOI 10.5281/zenodo.18516477
Installation
```bash
pip install topological-coherence

# With HuggingFace transformers support (quoted so shells like zsh
# don't interpret the square brackets)
pip install "topological-coherence[hf]"
```
Quick Start: Toroidal Logit Bias
Drop-in logit processor for any HuggingFace model — no fine-tuning required:
```python
from topological_coherence import ToroidalLogitProcessor
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

processor = ToroidalLogitProcessor(grid_size=12, radius=2.0, alpha=0.3)

inputs = tokenizer("The quantum nature of", return_tensors="pt")
outputs = model.generate(
    **inputs,
    logits_processor=[processor],
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0]))
```
Core API
Tonnetz Geometry
```python
from topological_coherence import Tonnetz, distance_matrix

# Create a 12x12 torus topology
t = Tonnetz(grid_size=12)
t.distance(0, 5)   # L1 toroidal distance with wraparound
t.spectral_gap()   # first nonzero eigenvalue of the torus Laplacian

# Vectorized distance matrix (numpy, fast)
dm = distance_matrix(n_tokens=64, grid_size=12)  # shape (64, 64)
```
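For intuition, the wraparound L1 metric these helpers expose can be reproduced in a few lines of plain numpy. This is an illustrative sketch, not the library's internals; in particular, the token-id-to-cell mapping (`id mod N²`, then row-major `divmod`) is an assumption:

```python
import numpy as np

def toroidal_l1(a, b, grid_size=12):
    """L1 distance between token cells on an N x N torus, with wraparound."""
    (r1, c1) = divmod(a % grid_size**2, grid_size)
    (r2, c2) = divmod(b % grid_size**2, grid_size)
    dr = min(abs(r1 - r2), grid_size - abs(r1 - r2))
    dc = min(abs(c1 - c2), grid_size - abs(c1 - c2))
    return dr + dc

def distance_matrix(n_tokens=64, grid_size=12):
    """Vectorized pairwise toroidal distances, shape (n_tokens, n_tokens)."""
    idx = np.arange(n_tokens) % grid_size**2
    r, c = idx // grid_size, idx % grid_size
    dr = np.abs(r[:, None] - r[None, :])
    dc = np.abs(c[:, None] - c[None, :])
    dr = np.minimum(dr, grid_size - dr)   # wrap rows
    dc = np.minimum(dc, grid_size - dc)   # wrap columns
    return dr + dc

dm = distance_matrix(n_tokens=64, grid_size=12)
```

Because of the wraparound minimum, no two cells on a 12x12 torus are ever more than 12 apart in this metric.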
Attention Masks (3 variants)
```python
from topological_coherence import ToroidalMask, sinkhorn_knopp

mask = ToroidalMask.hybrid(seq_len=64, radius=2.0, alpha=1.0)   # default
mask = ToroidalMask.hard_cutoff(seq_len=64, radius=2.0)         # binary
mask = ToroidalMask.soft_exponential(seq_len=64, alpha=1.0)     # smooth decay

tensor = mask.to_tensor()                 # torch.Tensor for attention
ds = sinkhorn_knopp(tensor, n_iters=50)   # project to doubly-stochastic
```
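Sinkhorn-Knopp itself is just alternating row and column normalization. A minimal numpy sketch of the idea (the library's `sinkhorn_knopp` operates on torch tensors and may differ in detail):

```python
import numpy as np

def sinkhorn_knopp(m, n_iters=50):
    """Drive a positive matrix toward doubly-stochastic form by
    alternately normalizing its rows and columns."""
    m = np.asarray(m, dtype=float)
    for _ in range(n_iters):
        m = m / m.sum(axis=1, keepdims=True)  # rows sum to 1
        m = m / m.sum(axis=0, keepdims=True)  # columns sum to 1
    return m

rng = np.random.default_rng(0)
ds = sinkhorn_knopp(rng.random((8, 8)) + 0.1, n_iters=200)
```

For strictly positive matrices the iteration converges to the unique doubly-stochastic scaling (Sinkhorn's theorem); since the loop ends on a column step, column sums are exactly 1 and row sums are 1 up to convergence tolerance.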
Drift Measurement
```python
from topological_coherence import DriftMeter

meter = DriftMeter(threshold=2, grid_size=12)
meter.record(pred=5, target=8)
meter.record(pred=5, target=100)
print(f"Drift rate: {meter.rate():.3f}")
```
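The drift rate is simply the fraction of recorded (pred, target) pairs whose toroidal distance exceeds the threshold. A standalone sketch, assuming the same L1 wraparound metric (the library's internals may differ):

```python
def toroidal_l1(a, b, grid_size=12):
    """L1 wraparound distance between token cells on an N x N torus."""
    (r1, c1) = divmod(a % grid_size**2, grid_size)
    (r2, c2) = divmod(b % grid_size**2, grid_size)
    dr = min(abs(r1 - r2), grid_size - abs(r1 - r2))
    dc = min(abs(c1 - c2), grid_size - abs(c1 - c2))
    return dr + dc

class SimpleDriftMeter:
    """Counts predictions landing farther than `threshold` from the target."""
    def __init__(self, threshold=2, grid_size=12):
        self.threshold, self.grid_size = threshold, grid_size
        self.records = []

    def record(self, pred, target):
        dist = toroidal_l1(pred, target, self.grid_size)
        self.records.append(dist > self.threshold)

    def rate(self):
        return sum(self.records) / len(self.records) if self.records else 0.0

meter = SimpleDriftMeter(threshold=2, grid_size=12)
meter.record(5, 8)   # distance 3 on the torus -> drift
meter.record(5, 6)   # distance 1 -> no drift
```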
Toroidal Attention (PyTorch)
```python
from topological_coherence import ToroidalAttention, TinyTransformer

# Drop-in attention replacement
attn = ToroidalAttention(d_model=64, n_heads=4, max_seq_len=64)

# Full demo transformer with swappable attention
model = TinyTransformer(
    vocab_size=144, d_model=64, n_heads=4,
    attention_type="toroidal",  # or "baseline", "random"
)
```
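Conceptually, toroidal attention is standard scaled dot-product attention with a toroidal mask added to the pre-softmax scores. A PyTorch sketch of that mechanism, using a 1-D ring as the simplest wraparound case (illustrative only, not the library's implementation):

```python
import torch
import torch.nn.functional as F

def toroidal_attention(q, k, v, mask):
    """Scaled dot-product attention with an additive toroidal mask.
    q, k, v: (batch, heads, seq, d_head); mask: (seq, seq) with 0 for
    allowed pairs and -inf for suppressed ones."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5  # (batch, heads, seq, seq)
    scores = scores + mask                     # broadcast over batch/heads
    return F.softmax(scores, dim=-1) @ v

# Hard-cutoff mask on a ring of 8 positions, radius 2
seq = 8
i = torch.arange(seq)
dist = (i[:, None] - i[None, :]).abs()
dist = torch.minimum(dist, seq - dist)         # wraparound distance
mask = torch.where(dist <= 2, 0.0, float("-inf"))

q = torch.randn(1, 2, seq, 4)
k = torch.randn(1, 2, seq, 4)
v = torch.randn(1, 2, seq, 4)
out = toroidal_attention(q, k, v, mask)
```

Each query position can only attend to the 5 positions within ring distance 2 of itself; everything else is zeroed out by the softmax over masked scores.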
Theory
Hallucination is a geometry problem. Unconstrained latent dynamics permit arbitrary drift through embedding space. Toroidal constraints provide a constant spectral gap that suppresses non-resonant modes:
λ₁ = 2 - 2cos(2π/N) = Θ(1) for fixed grid size N
This bounds semantic drift without reducing model capacity.
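The gap formula can be checked numerically against the graph Laplacian of the N-cycle; by the Cartesian-product structure of the torus C_N x C_N, the torus shares the same first nonzero eigenvalue:

```python
import numpy as np

N = 12
# Laplacian of the cycle graph C_N: degree 2 on the diagonal, -1 to neighbors
L = 2 * np.eye(N)
for i in range(N):
    L[i, (i + 1) % N] = -1
    L[i, (i - 1) % N] = -1

eigs = np.sort(np.linalg.eigvalsh(L))
lambda_1 = eigs[1]                         # first nonzero eigenvalue
closed_form = 2 - 2 * np.cos(2 * np.pi / N)
```

For N = 12 both evaluate to 2 − √3 ≈ 0.268, and the value stays bounded away from zero for any fixed N, which is the Θ(1) claim above.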
Hierarchy: mHC (Birkhoff) ⊂ ERLHS (Hamiltonian) ⊂ Karmonic (Toroidal + Spectral)
Links
- Paper (Zenodo)
- Toroidal Logit Bias Paper
- Live Demo (HuggingFace)
- Source (GitHub)
- Rust Crate (crates.io)
Citation
```bibtex
@misc{cormier2026topological,
  author    = {Cormier, Sylvain},
  title     = {Topological Constraints for Coherent Language Models},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.18187835}
}
```
License
Apache-2.0