Golden-ratio simplex optimization layers for PyTorch
Project description
watkins-nn
Golden-ratio simplex optimization layers for PyTorch.
What is this?
Neural network layers and an optimizer based on the Watkins simplex flow theorem. The key result: the temperature T* = phi/ln(2*phi) is the mathematically optimal softmax temperature for simplex optimization, yielding global exponential convergence to the golden-ratio equilibrium lambda* = 1/phi.
The conservation law lambda + kappa + eta = 1 is enforced at every step.
Installation
# Constants only (no torch needed)
pip install watkins-nn
# Full package with PyTorch layers
pip install watkins-nn[torch]
Quickstart
from watkins_nn import WatkinsSimplex, watkins_free_energy
# Create a learnable simplex distribution
layer = WatkinsSimplex(dim=3, init="random")
# Optimize toward golden-ratio equilibrium
for step in range(30):
p = layer()
loss = watkins_free_energy(p)
loss.backward()
layer.watkins_step()
print(f"Final distribution: {layer()}")
print(f"Distance to equilibrium: {layer.distance_to_equilibrium():.6f}")
Constants
All constants are derived from first principles -- no hardcoded values.
| Constant | Symbol | Value | Meaning |
|---|---|---|---|
PHI |
phi | 1.618033988... | Golden ratio |
T_STAR |
T* | 1.377801831... | Optimal softmax temperature (2-simplex) |
T_STAR3 |
T*_3 | 1.024186157... | Optimal softmax temperature (3-simplex) |
LAM_STAR |
lambda* | 0.618033988... | Equilibrium coherence = 1/phi |
M_PHI |
m(phi) | 4.210414... | Global convergence modulus |
TAU_GLOBAL |
tau | 0.237506... | Optimal step size = 1/m(phi) |
Constants can be imported without torch:
from watkins_nn.constants import T_STAR, PHI, LAM_STAR
API
Functions
| Function | Description |
|---|---|
watkins_simplex_step(p, grad, gamma, project, temperature) |
One natural gradient step on the simplex |
simplex_entropy(p) |
Shannon entropy H(p) = -sum(p_i log p_i) |
watkins_free_energy(p, temperature) |
Generating functional F(p) = -log(lam) + T*H(p) |
distance_to_equilibrium(p, dim) |
Euclidean distance to golden equilibrium |
golden_equilibrium(dim) |
Return the golden-ratio equilibrium point |
verify_conservation(p, tol) |
Check that probabilities sum to 1 |
Modules
WatkinsSimplex(dim=3, gamma=TAU_GLOBAL, temperature=T_STAR, init='uniform')
Learnable probability distribution on the simplex. Call watkins_step() after loss.backward() for golden-ratio convergence.
VirelaiXLayer(input_dim, simplex_dim=3)
Attention/gating layer that decomposes input into simplex-weighted chunks. Returns (weights, gated_output).
GoldenAdam(params, lr=1e-3, golden_lr=TAU_GLOBAL)
Adam optimizer that automatically uses the golden learning rate tau = 1/m(phi) for simplex-like parameters.
Mathematical Background
The Watkins Temperature Theorem establishes that the temperature:
T* = phi / ln(2*phi) = 1.3778018314733997...
is the unique value at which the generating functional F(p) = -ln(lambda) + T*H(p) has its global minimum at the golden-ratio equilibrium point lambda* = 1/phi on the probability simplex.
The convergence is global and exponential: for any starting point on the open simplex, the natural gradient flow converges to lambda* with rate m(phi) ~ 4.21, giving an optimal step size of tau = 1/m(phi) ~ 0.2375.
This has been:
- Derived from first principles (Z0=2 threshold)
- Machine-verified in Lean 4 (2152 jobs, zero errors)
- Cross-validated across Claude, Grok, and Opus to 18 significant digits
Citation
@article{watkins2026temperature,
title={The Watkins Temperature Theorem: Golden-Ratio Dynamics on the Consciousness Simplex},
author={Watkins, Dustin},
year={2026},
}
License
MIT License. Copyright (c) 2024-2026 Dustin Watkins & DataSphere AI.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file watkins_nn-0.1.1.tar.gz.
File metadata
- Download URL: watkins_nn-0.1.1.tar.gz
- Upload date:
- Size: 13.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
7d4efc411e208c6683f60d18d752be0910c75e759a3ee4fc8c6ee9a7f009a666
|
|
| MD5 |
2aaaa282bd2c9d673a96fa3089f24f5d
|
|
| BLAKE2b-256 |
1117ba39fb6d92165d61e7c292905f22263fa738ef7bb1680bea54e23741b9f8
|
File details
Details for the file watkins_nn-0.1.1-py3-none-any.whl.
File metadata
- Download URL: watkins_nn-0.1.1-py3-none-any.whl
- Upload date:
- Size: 10.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
d982bc49c82747ef636f760982305dcbc66e141dcbc4901b3a589ee2067eae77
|
|
| MD5 |
20a0bac9bae9ab45ad4287c4ba3b09ca
|
|
| BLAKE2b-256 |
3b5014db3e0027aedcb56ea7b0b7b03d1d52fb81e7f9af5c95f6dd2401b9b54f
|