# Qriton HLM — Energy Language

Program neural networks by shaping energy landscapes.
The first tool that lets you directly program a neural network's behavior — not train, not fine-tune, not prompt-engineer. Program.
> "Energy minima as a programming language — in a completely new fashion." — John J. Hopfield, March 2026
```
$ qriton-hlm -c model.pt
hlm:model> capture 5 polite Thank you so much for your help
Captured L5: "Thank you so much for your help"
Energy: -12.34 | Basin: True (cos=0.97, 23 iters)
-> Added to concept 'polite' (1 samples)
hlm:model> capture 5 polite I really appreciate your patience
-> Added to concept 'polite' (2 samples)
hlm:model> inject-concept 5 polite 0.1
Before: 200 basins, concept is basin: False
After: 201 basins (+1), concept is basin: True
>> Concept successfully injected!
hlm:model> apply 5
hlm:model> generate Tell me about the weather
I'd be happy to share! The weather today is...
```
## Install

```
pip install qriton-hlm
```
## What is this?
Every AI framework today has one way to change model behavior: training. Qriton HLM adds a second: surgery.
This only works because HLM uses polynomial Hopfield dynamics that create multiple stable basins per layer. Transformers can't do this — their softmax attention collapses to a single attractor.
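The multiple-attractor idea can be seen even in a classical binary Hopfield network. The toy below (plain NumPy, not HLM's polynomial dynamics) stores two patterns as attractors and shows a noisy state settling back into the nearest basin:

```python
import numpy as np

# Two stored patterns = two attractors (toy illustration only)
patterns = np.array([
    [1, 1, 1, -1, -1, -1],    # attractor A
    [-1, -1, -1, 1, 1, 1],    # attractor B
])
# Hebbian weights; zero diagonal so states don't self-reinforce
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

def settle(x, steps=10):
    """Iterate the update rule until the state lands in a basin."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# A noisy version of pattern A falls back into A's basin
noisy = np.array([1, 1, -1, -1, -1, -1])
print(settle(noisy))   # settles back to pattern A
```

Polynomial Hopfield layers generalize this picture to many basins per layer in a continuous space; the surgery commands below edit those basins directly.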
## Operations — Energy Language

### Observe (read the landscape)

| Command | What it does |
|---|---|
| `survey <layer>` | Map all basins in a layer |
| `survey-all` | Survey every layer |
| `verify <layer> <seed>` | Check if a point is a basin |
| `energy <layer> <seed>` | Measure energy at a point |
| `probe <layer> [basin_idx]` | What tokens does a basin activate? (reverse capture) |
| `landscape <layer>` | Full energy map with population bars |
### Modify (change the landscape)

| Command | What it does |
|---|---|
| `inject <layer> <seed> [str]` | Create a new attractor |
| `remove <layer> <seed> [str]` | Destroy an attractor |
| `move <layer> <seed> [str]` | Relocate an attractor |
| `strengthen <layer> <seed> [f]` | Deepen an existing basin |
| `weaken <layer> <seed> [f]` | Make a basin shallower |
### Concept (semantic operations)

| Command | What it does |
|---|---|
| `capture <layer> <concept> <text>` | Extract what a concept looks like in the model |
| `inject-concept <layer> <concept> [s]` | Program a captured concept as a new attractor |
| `remove-concept <layer> <concept> [s]` | Remove a concept from the model |
| `blend <a> <b> <new> [ratio]` | Mix two concepts (e.g. 70% polite + 30% formal) |
| `concepts` | List all captured concepts |
| `export-concept <name> <path>` | Save a concept as a portable file |
| `import-concept <path>` | Load a concept from a file |
### Concept Algebra v0.10

| Command | What it does |
|---|---|
| `similarity <a> <b>` | Cosine similarity between two concept centroids |
| `add <a> <b> <new>` | Vector addition: A + B |
| `subtract <a> <b> <new>` | Vector subtraction: what's in A but not B |
| `analogy <a> <b> <c> <new>` | A is to B as C is to ? (king - man + woman = queen) |
| `compose <names> <weights> <new>` | Weighted sum of N concepts |
| `interpolate <a> <b> [steps]` | Measure similarity along the A-to-B path |
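Under the hood these commands are plain vector arithmetic on concept centroids. A toy sketch with made-up 3-dimensional centroids (the real centroids live in the model's hidden space):

```python
import numpy as np

# Made-up toy centroids standing in for captured concepts
man   = np.array([1.0, 0.0, 0.0])
king  = np.array([1.0, 1.0, 0.0])
woman = np.array([0.0, 0.0, 1.0])

def cosine(a, b):
    """Cosine similarity between two centroid vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# analogy: man is to king as woman is to ?
queen = king - man + woman            # -> array([0., 1., 1.])

# compose: weighted sum of several concepts
blend = 0.7 * king + 0.3 * woman

print(queen, round(cosine(king, queen), 2))   # [0. 1. 1.] 0.5
```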
### Control (flow & persistence)

| Command | What it does |
|---|---|
| `load <path>` | Load a checkpoint |
| `apply <layer>` | Write the modified W to the live model |
| `restore <layer>` | Undo modifications |
| `restore-all` | Restore all layers |
| `save <path>` | Save modified W matrices |
| `status` | Show which layers are modified |
| `set <param> <value>` | Set a parameter (beta, strength, ...) |
| `info` | Show model info |
### Causal (discovery & intervention)

| Command | What it does |
|---|---|
| `causal scan <layer> [threshold]` | Discover causal links between basins (systematic knockout) |
| `causal intervene <layer> <basin> [op]` | do(X) — intervene on a basin, measure downstream effects |
| `causal counterfactual <layer> <basin> [mod]` | "What if X was different?" (non-destructive) |
Each basin = a causal node. Surgery = the do-operator. The energy landscape becomes a programmable structural causal model. See Causal Programming Docs for the full framework.
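The knockout idea behind `causal scan` can be sketched in a few lines: treat each basin as a node, remove one, and flag every basin that drifts past a threshold. The dependency structure below is made up for illustration; HLM discovers it empirically:

```python
import numpy as np

# Toy structural model: basin 1's position depends on basin 0; basin 2 is independent.
def positions(basin0_present: bool):
    b0 = np.array([1.0, 0.0]) if basin0_present else np.zeros(2)
    b1 = 0.5 * b0 + np.array([0.0, 1.0])   # child of basin 0
    b2 = np.array([0.0, -1.0])             # unrelated basin
    return {1: b1, 2: b2}

base = positions(True)
knocked = positions(False)                  # do(remove basin 0)

threshold = 0.15
edges = [(0, k) for k in base
         if np.linalg.norm(base[k] - knocked[k]) > threshold]
print(edges)   # [(0, 1)] -- basin 1 drifted, so edge 0 -> 1 is discovered
```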
### Consolidation (Sleep Cycle) v0.10

| Command | What it does |
|---|---|
| `consolidate <layer> [str_f] [prune_t]` | Memory consolidation: strengthen popular basins, prune weak ones |
| `dream [cycles]` | Full sleep cycle across all layers, multiple passes |
Mirrors biological memory consolidation during slow-wave sleep. High-population basins (frequently visited = important) get strengthened. Low-population basins (rarely visited = unimportant) get pruned.
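As a sketch, the consolidation rule amounts to a filter-and-scale pass over (depth, visit-share) pairs. The numbers below are made up, and the real operation acts on the W matrix directly:

```python
# basin_id -> (depth, visit_share); made-up numbers for illustration
basins = {0: (1.0, 0.9), 1: (1.0, 0.5), 2: (1.0, 0.1)}

def consolidate(basins, strengthen_factor=1.5, prune_threshold=0.3):
    """Keep frequently visited basins and deepen them; drop the rest."""
    survivors = {}
    for bid, (depth, share) in basins.items():
        if share < prune_threshold:
            continue                                          # prune: rarely visited
        survivors[bid] = (depth * strengthen_factor, share)   # deepen the rest
    return survivors

print(consolidate(basins))   # basin 2 pruned; basins 0 and 1 deepened to 1.5
```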
### Watermark (IP Protection) v0.10

| Command | What it does |
|---|---|
| `watermark inject <key> [marks] [str]` | Embed invisible signature basins using a secret key |
| `watermark verify <key> [marks]` | Check if a model carries your watermark |
| `watermark strip-test <key> [marks]` | Test watermark robustness (self-attack) |
Uses SHA-256-derived seeds across multiple layers. Low strength (0.03-0.05) keeps the marks invisible to model quality. The same key always produces the same watermark positions; without the key, watermarks are indistinguishable from natural basins.
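The key property above, same key gives same positions, follows directly from hashing. A sketch of one possible derivation; the package's exact scheme is internal, so the function name and seed format here are assumptions:

```python
import hashlib

def watermark_seeds(key: str, num_marks: int = 5, dim: int = 8):
    """Derive deterministic +/-1 seed vectors from a secret key (assumed scheme)."""
    seeds = []
    for i in range(num_marks):
        # one SHA-256 digest per mark, keyed by "<key>:<mark index>"
        digest = hashlib.sha256(f"{key}:{i}".encode()).digest()
        # map digest bytes to +/-1 entries for a reproducible seed vector
        seeds.append([1 if b & 1 == 0 else -1 for b in digest[:dim]])
    return seeds

# Same key always reproduces the same positions
assert watermark_seeds("my-secret-key-2026") == watermark_seeds("my-secret-key-2026")
```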
### Safety (guards & audit)

| Command | What it does |
|---|---|
| `guard <type> <value>` | Set a guard (max-basins, min-basins, strength-cap, cosine-drift, perplexity-delta) |
| `guards` | Show active guards |
| `diff <layer>` | Show W matrix change stats |
| `benchmark` | Measure the perplexity impact of surgery |
| `history` | Show the operation log with OK/BLOCKED status |
Guards are pre-execution checks. If a guard would be violated, the operation does not start — weights are never touched. Override with `--force --reason "justification"` (logged permanently).
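The guard semantics, check first and never touch weights on violation, can be sketched like this. The guard names come from the table above; the function itself is illustrative, not the package's internals:

```python
guards = {"max-basins": 250, "strength-cap": 0.2}   # example guard settings

def check_guards(planned_basins: int, strength: float, force: bool = False):
    """Pre-execution check: return OK/BLOCKED before any weight is modified."""
    violations = [name for name, ok in [
        ("max-basins", planned_basins <= guards["max-basins"]),
        ("strength-cap", strength <= guards["strength-cap"]),
    ] if not ok]
    if violations and not force:
        return "BLOCKED", violations    # operation never starts
    return "OK", violations             # with --force, violations are only logged

print(check_guards(201, 0.1))   # ('OK', [])
print(check_guards(300, 0.5))   # ('BLOCKED', ['max-basins', 'strength-cap'])
```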
### Verify & Generate

| Command | What it does |
|---|---|
| `generate <prompt>` | Generate text with the current model |
### Multi-Model v0.10

| Python API | What it does |
|---|---|
| `surgeon.compare(other, layer)` | Compare basin structure: shared, unique-self, unique-other |
| `surgeon.transplant(source, layer, concept)` | Copy a concept from another model into this one |
### Database Sync

| Python API | What it does |
|---|---|
| `HLMSync.sync_row(table, row)` | Sync one DB row as a named concept basin |
| `HLMSync.sync_batch(table, rows)` | Sync multiple rows |
| `HLMSync.full_sync_table(table)` | Sync all rows from a table |
| `HLMSync.delete_row(table, row_id)` | Remove a row's concept from the landscape |
| `HLMSync.checkpoint()` | Save state to disk |
| `BasinSurgeon.save_checkpoint(path)` | Save W matrices + concepts + history |
| `BasinSurgeon.load_session(path)` | Restore the full session state |
## Scripts

Write `.hlm` scripts to chain operations:

```
# make_polite.hlm
load model.pt
capture 5 polite Thank you so much
capture 5 polite I really appreciate it
capture 5 polite That's very kind of you
inject-concept 5 polite 0.1
apply 5
benchmark
generate Tell me about the weather
restore 5
```

Run with: `qriton-hlm --script make_polite.hlm`
## Python API

```python
from qriton_hlm import BasinSurgeon

surgeon = BasinSurgeon.from_checkpoint("model.pt", device="cuda")

# Capture what a concept looks like in the model
surgeon.capture(layer=5, text="Thank you so much", concept_name="polite")
surgeon.capture(layer=5, text="I really appreciate it", concept_name="polite")

# Inject that concept as a new attractor
result = surgeon.inject_concept(layer=5, concept_name="polite", strength=0.1)
print(f"Concept injected: {result['exists_after']}")

# Blend two concepts
surgeon.blend("polite", "formal", "professional", ratio=0.6)

# Export concept for sharing
surgeon.export_concept("polite", "polite.concept")

# Transplant concept from another model
other = BasinSurgeon.from_checkpoint("other_model.pt")
surgeon.transplant(other, layer=5, concept_name="humor")

# Apply to live model and benchmark
surgeon.apply(layer=5)
result = surgeon.benchmark()
print(f"PPL after surgery: {result['perplexity']:.2f}")

# Save session (W matrices + concepts + history)
surgeon.save_checkpoint("session.pt")

# Load it back later
surgeon.load_session("session.pt")

# Probe: what does basin #3 represent?
probe = surgeon.probe(layer=5, basin_idx=3)
print(f"Top tokens: {probe['top_tokens']}")

# Compare basins between models
diff = surgeon.compare(other, layer=5)
print(f"Shared: {diff['shared']}, unique: {diff['only_self']}")

# --- Causal discovery ---

# Discover causal graph: which basins cause which?
graph = surgeon.causal_scan(layer=5, threshold=0.15)
for edge in graph['edges']:
    print(f"B{edge['source']} → B{edge['target']} drift={edge['drift']:.3f}")

# do(X) — intervene on basin 3, measure downstream effects
result = surgeon.causal_intervene(layer=5, basin_idx=3, operation='remove')
print(f"Affected: {result['num_affected']} basins")

# Counterfactual: what if basin 3 had been inverted? (non-destructive)
cf = surgeon.causal_counterfactual(layer=5, basin_idx=3, modification='invert')
print(f"Would affect: {cf['num_affected']} basins")

# --- Concept Algebra ---

# Similarity between concepts
sim = surgeon.concept_similarity("polite", "formal")
print(f"Similarity: {sim['similarity']:.4f}")

# Vector arithmetic on concepts
surgeon.concept_add("polite", "formal", "professional")      # A + B
surgeon.concept_subtract("polite", "casual", "formality")    # A - B
surgeon.concept_analogy("man", "king", "woman", "queen")     # A:B::C:?
surgeon.concept_compose(["polite", "formal", "warm"], [0.5, 0.3, 0.2], "ideal_tone")

# Interpolation path
path = surgeon.concept_interpolate("polite", "rude", num_steps=10)
for step in path['steps']:
    print(f"  ratio={step['ratio']:.1f} sim_A={step['similarity_to_a']:.3f}")

# --- Consolidation (Sleep Cycle) ---

# Single layer: strengthen important basins, prune weak ones
report = surgeon.consolidate(layer=5, strengthen_factor=1.5, prune_threshold=0.3)
print(f"Basins: {report['basins_before']} -> {report['basins_after']}")
print(f"Strengthened: {report['strengthened']}, Pruned: {report['pruned']}")

# Full sleep: all layers, multiple passes
dream = surgeon.dream(cycles=3)
print(f"Dream complete: +{dream['total_strengthened']} -{dream['total_pruned']}")

# --- Watermark (IP Protection) ---

# Sign your model with an invisible watermark
surgeon.watermark_inject("my-secret-key-2026", num_marks=5, strength=0.05)

# Verify on any model (e.g., checking a suspected leak)
result = surgeon.watermark_verify("my-secret-key-2026", num_marks=5)
print(f"Verified: {result['verified']} ({result['confidence']:.0%} confidence)")
# -> Verified: True (100% confidence)

# Test robustness — try stripping your own watermarks
strip = surgeon.watermark_strip_attempt("my-secret-key-2026", num_marks=5)
print(f"Removed: {strip['removed']}, Survived: {strip['survived']}")
```
## Database Sync — Neural Database Extension

Use HLM as an editable semantic layer on top of your SQL database:

```python
from qriton_hlm.db import HLMSync

with HLMSync.from_config("hlm_sync_config.json") as syncer:
    # Sync rows — each becomes a named concept basin
    syncer.sync_row("Products", {"id": 1, "name": "Widget", "price": 9.99})
    syncer.full_sync_table("Products")
    syncer.delete_row("Products", 1)
    print(syncer.stats)
    # {'synced': 5, 'deleted': 1, 'errors': 0}
```

Install with MSSQL support:

```
pip install qriton-hlm[db]
```

SQL stays the source of truth. HLM becomes the editable knowledge layer. See the Database Sync docs for configuration and sync strategies.
## Compatibility

Works with any PyTorch checkpoint that contains `hopfield.W` parameters:
- HLM2/HLM3 language models
- HLM-Spatial (LIDAR, Medical3D, Industrial3D)
- HLM-Audio (STT, TTS)
- Any model using PolyHopfieldLayer
## Requirements
- Python >= 3.9
- PyTorch >= 2.0
- NumPy
## License
Business Source License 1.1 — free for research, education, evaluation, and non-commercial use. Commercial use requires a license. Converts to Apache 2.0 on April 1, 2030.
See LICENSE and LICENSING.md for details.
Commercial licensing: license@qriton.com
Qriton Technologies S.R.L. — qriton.com
## File details

### qriton_hlm-0.10.0.tar.gz

File metadata:

- Download URL: qriton_hlm-0.10.0.tar.gz
- Upload date:
- Size: 56.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.17

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `594ba187c36d72e77dc53b17ad160b93ce9f6f6ad08ece9e613a3e3dc997c6cb` |
| MD5 | `b01d66271bb6247a19d575a5cc446546` |
| BLAKE2b-256 | `8fb2a9348fd7a075e9c2ae8a60bdd8235a5c4ed92c201c11566c83497cb30647` |
### qriton_hlm-0.10.0-py3-none-any.whl

File metadata:

- Download URL: qriton_hlm-0.10.0-py3-none-any.whl
- Upload date:
- Size: 48.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.17

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `de3878e819e29d1dd15c1f28266ec8da8301963ecf8b6b4911ba0b185bbc55b7` |
| MD5 | `f6d79667c35dcf71de778dfa1b4f339d` |
| BLAKE2b-256 | `f24afafb6fd8636b7d9c5512a028a504d2f5f5156d5db881bccff11640ed1e55` |