eml-cost-torch
In-development PyTorch diagnostics: per-layer Pfaffian profile of a
torch.nn.Module, EML-graph curvature helpers, differential-Galois bounds,
and Schwarz 2F1 detection.
Status
In development. The package reports diagnostics and empirical summaries; it does not claim formal verification of a model.
Install
Base install is lightweight and does not install PyTorch:
pip install eml-cost-torch
Install the PyTorch-backed APIs with:
pip install 'eml-cost-torch[torch]'
Quick start
import torch.nn as nn
from eml_cost_torch import profile
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 32),
    nn.GELU(),
    nn.Linear(32, 10),
)
p = profile(model)
print(p.total_layers) # 5
print(p.total_pfaffian_depth) # 0 — all r=0 (ReLU=0, GELU=non-EML, Linear=0)
print(p.transcendental_layer_count) # 0
print(p.non_eml_layer_count) # 1 — GELU uses erf
print(p.estimated_pfaffian_width) # 0 — no softmax/attention
for layer in p.layers:
    print(f"  {layer.name:8s} {layer.activation:30s} r={layer.pfaffian_r}")
What it does
profile() walks the module graph statically (it does not execute the model), classifies each layer against an internal registry of activation/operator types, and returns a structured profile.
Registry covers ~50 standard torch.nn modules:
- Linear / Conv / Norm: r=0 (polynomial)
- ReLU family / Hard sigmoid / Hard swish: r=0
- Sigmoid, Tanh, Softplus, ELU, SiLU/Swish: r=1
- Mish: r=3
- GELU: flagged is_pfaffian_not_eml=True (uses erf, outside the EML class)
- Softmax / MultiheadAttention: contributes to estimated_pfaffian_width
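The classification above can be pictured as a lookup table keyed by module class, walked over named_modules() without a forward pass. A minimal, self-contained sketch of that idea — the R_TABLE here is illustrative and far smaller than the package's ~50-entry registry, and this is not the package's actual implementation:

```python
import torch.nn as nn

# Illustrative chain-order table; the real registry covers ~50 module types.
R_TABLE = {nn.Linear: 0, nn.ReLU: 0, nn.Sigmoid: 1, nn.Tanh: 1,
           nn.Softplus: 1, nn.Mish: 3}

def classify(model: nn.Module):
    """Statically classify leaf modules; never runs a forward pass."""
    rows = []
    for name, mod in model.named_modules():
        if list(mod.children()):          # skip containers, keep leaves only
            continue
        rows.append((name, type(mod).__name__, R_TABLE.get(type(mod))))
    return rows

net = nn.Sequential(nn.Linear(8, 4), nn.Tanh(), nn.Linear(4, 2))
for name, cls, r in classify(net):
    print(f"{name}: {cls} r={r}")
```

An unregistered class comes back with r=None here; the real registry instead flags such layers explicitly (e.g. GELU as is_pfaffian_not_eml).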
Why this matters for architecture search
Across the internal test set, neural-network training cost appeared to correlate with Pfaffian width. The profile therefore gives you a static diagnostic you can use as a search heuristic before committing to a training run.
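As a heuristic, candidate architectures can be ranked by a width count before any training. A self-contained sketch that counts softmax/attention modules directly — a stand-in for estimated_pfaffian_width when the package itself is not installed, not the package's own logic:

```python
import torch.nn as nn

def width_proxy(model: nn.Module) -> int:
    """Statically count parallel-chain contributors (softmax / attention)."""
    return sum(isinstance(m, (nn.Softmax, nn.MultiheadAttention))
               for m in model.modules())

mlp  = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
attn = nn.Sequential(nn.Linear(16, 16), nn.Softmax(dim=-1), nn.Linear(16, 4))

# Rank candidates by static width before training either of them.
candidates = sorted([("mlp", mlp), ("attn", attn)],
                    key=lambda kv: width_proxy(kv[1]))
print([name for name, _ in candidates])
```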
Library API
from eml_cost_torch import profile, ModelProfile, LayerProfile
p: ModelProfile = profile(model)
# Aggregate fields
p.layers # list[LayerProfile]
p.total_layers # int
p.total_pfaffian_depth # sum of r over all layers
p.total_eml_depth # sum of eml_depth over all layers
p.transcendental_layer_count # count of layers with r >= 1
p.non_eml_layer_count # count of layers using non-EML primitives (e.g., GELU)
p.estimated_pfaffian_width # parallel-chain count (softmax + attention)
p.total_params # parameter count
# Per-layer fields
layer.name # named_modules path
layer.cls_name # Python class name
layer.activation # friendly description
layer.pfaffian_r # chain order
layer.eml_depth # routing depth
layer.is_pfaffian_not_eml # True for GELU and similar
layer.n_params # parameters at this layer
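The aggregate fields are simple folds over the per-layer records. A toy stand-in illustrating the documented semantics — the Layer dataclass below is a hypothetical minimal stand-in, not the package's LayerProfile:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """Minimal stand-in for LayerProfile (illustrative only)."""
    name: str
    pfaffian_r: int
    is_pfaffian_not_eml: bool = False

layers = [Layer("0", 0), Layer("1", 0),   # e.g. Linear, ReLU
          Layer("2", 0, True),            # e.g. GELU: r=0 but flagged non-EML
          Layer("3", 1)]                  # e.g. a Tanh head

total_pfaffian_depth  = sum(l.pfaffian_r for l in layers)          # sum of r
transcendental_layers = sum(l.pfaffian_r >= 1 for l in layers)     # r >= 1
non_eml_layers        = sum(l.is_pfaffian_not_eml for l in layers)
print(total_pfaffian_depth, transcendental_layers, non_eml_layers)
```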
Curvature diagnostic (eml-graph)
Closed-form Gaussian and mean curvature of the eml surface
z = exp(x) - ln(y + 1). Use it as a diagnostic on training inputs
or intermediate activations to see where in input space the eml
geometry is steeply curved, typically a fingerprint of the regions
where finite-precision evaluation drift is largest.
import torch
from eml_cost_torch import gaussian_K, mean_H, curvature_summary
x = torch.randn(1000) * 0.5
y = torch.randn(1000) * 0.5 + 1.0 # keep y > -1 to stay off the branch cut
s = curvature_summary(x, y)
s.K_min, s.K_max # range of Gaussian curvature
s.K_log10_dynamic_range # orders of magnitude spanned
s.K_median, s.K_mean # central tendency
s.H_min, s.H_max, s.H_median # mean curvature stats
s.n_finite, s.n_nonfinite # samples that landed on y = -1 etc.
gaussian_K and mean_H are autograd-friendly torch ops; they
preserve the input tensor's dtype and device. Both K and H are
real-valued; inputs with y == -1 produce non-finite outputs (the
ln branch point) and are filtered out by curvature_summary.
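As a cross-check, curvature of any graph surface z = f(x, y) follows the standard Monge-patch formulas. A scalar sketch for f(x, y) = exp(x) - ln(y + 1) using those textbook formulas — independent of the package, and not guaranteed to match the sign conventions of gaussian_K / mean_H:

```python
import math

def curvatures(x: float, y: float):
    """Gaussian and mean curvature of z = exp(x) - ln(y + 1), Monge patch."""
    p, q = math.exp(x), -1.0 / (y + 1.0)               # f_x, f_y
    r, s, t = math.exp(x), 0.0, 1.0 / (y + 1.0) ** 2   # f_xx, f_xy, f_yy
    w = 1.0 + p * p + q * q
    K = (r * t - s * s) / w ** 2
    H = ((1 + q * q) * r - 2 * p * q * s + (1 + p * p) * t) / (2 * w ** 1.5)
    return K, H

K, H = curvatures(0.0, 0.0)   # at the origin: p=1, q=-1, r=t=1, s=0, w=3
print(K, H)                   # K = 1/9, H = 2/sqrt(27)
```

Since r*t - s^2 = exp(x) / (y + 1)^2 > 0 away from y = -1, K is strictly positive on the surface, consistent with the non-finite outputs appearing only at the ln branch point.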
Closed forms and the supporting derivation live in
monogate-research/exploration/C247_omega_differential_geometry/.
This is intentionally a diagnostic only, not a learning-rate
scheduler: the C-247b prototype showed that Adam absorbs the same
signal through its second-moment estimate, so explicit curvature
scaling does not improve training.
License
Apache-2.0. See LICENSE.
File details
Details for the file eml_cost_torch-0.5.2.tar.gz.
File metadata
- Download URL: eml_cost_torch-0.5.2.tar.gz
- Upload date:
- Size: 30.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5c6fe8c2b66038afee7740e448a83792ab14232e05fe289c29027f3250d3f813 |
| MD5 | af0a53d21824b95212efe10f4d1d95e1 |
| BLAKE2b-256 | a112d07c10702f92dbf9412354164e58ee96114093d0bb7fb22f31b5cc172fc8 |
File details
Details for the file eml_cost_torch-0.5.2-py3-none-any.whl.
File metadata
- Download URL: eml_cost_torch-0.5.2-py3-none-any.whl
- Upload date:
- Size: 20.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0d4f6a817512912aa5b9dde204da316600fe42f8abb261a17d36d7cb4f7f9d6b |
| MD5 | 7872d341ed84f56fbd54151c54bf0309 |
| BLAKE2b-256 | edeaa6d1c8943983893e66fb0a086fc6c99b29fecd0e0993cac1a879446f7c52 |