# Cognize

Programmable cognition for Python systems.

## Overview

Cognize is a lightweight cognition engine for Python. It tracks a system's belief (V) against reality (R), accumulates misalignment memory (E), and triggers rupture when drift exceeds a threshold (Θ). It is programmable at runtime: inject your own threshold, realignment, and collapse logic, or use the included safe presets.
## Features

- **Epistemic kernel**: `EpistemicState` (scalar & vector), tracking `V`, `R`, `Δ`, `Θ`, `E`, with rupture and step-capped updates.
- **Programmable policies**: inject custom `threshold`, `realign`, and `collapse` functions, or use safe presets (`cognize.policies`).
- **Perception adapter**: `Perception` fuses text/image/sensor inputs into a normalized vector; bring your own encoders.
- **Meta-policy selection**: `PolicyManager` with shadow evaluation, ε-greedy exploration, and safe promotion (`SAFE_SPECS`).
- **Epistemic graphs**: `EpistemicGraph` / `EpistemicProgrammableGraph` orchestrate states via directed, decay/cooldown-aware links with programmable edges (gate → influence → magnitude → target(slice) → nudge → damp).
- **Meta-learning bounds**: `ParamRange`, `ParamSpace`, `enable_evolution()`, and `enable_dynamic_evolution()` for bounded/static or provider-driven evolution.
- **Safety & telemetry by design**: step caps, oscillation damping, cooldowns; per-edge influence logs, cascade traces, `explain_last()`, CSV/JSON export.
- **Ergonomic helpers**: `make_simple_state`, `make_graph`, `demo_text_encoder` for fast setup.
- **Lightweight core**: NumPy is the only dependency; optional viz/dev extras.
## Use Cases

- **Drift & anomaly detection (streaming)**: compute `Δ`, `E`, `Θ`; trigger ruptures; emit CSV/JSON telemetry for dashboards.
- **Continual-learning guardrails**: under non-stationarity, apply reversible cooling (`Θ`↑, `k`↓ / LR↓) to reduce catastrophic forgetting.
- **Modulation for NNs (no retrain)**: runtime, slice-level nudges (attention logits, LayerNorm γ, MoE gates, temperatures) with caps & logs.
- **Multimodal arbitration (explainable fusion)**: gate/bias text–vision contributions when disagreement spikes; audit who influenced whom, and why.
- **Cognitive & adaptive agents**: systems that self-correct against misalignment, with interpretable state and policy switches.
- **Metacognitive mechanics**: self-monitoring, policy evaluation/evolution, and reflective control over when and how modules adapt.
- **Networked control**: orchestrate layers/heads/modules/sensors as nodes; propagate influence with decay/cooldowns for stable coordination.
- **Simulation & research**: explore rupture dynamics, policy A/B tests, and bounded evolution with reproducible logs.
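The "reversible cooling" guardrail mentioned above can be sketched in plain Python. This is a conceptual illustration only, not Cognize's API: raise the rupture threshold `Θ` and shrink the realignment step `k` for the duration of a block, then restore both exactly.

```python
from contextlib import contextmanager

class Controller:
    """Toy stand-in for an adaptive module: a rupture threshold and a step size."""
    def __init__(self, theta: float = 0.35, k: float = 0.3):
        self.theta = theta  # rupture threshold (Θ)
        self.k = k          # realignment step / learning rate (k)

@contextmanager
def cooled(ctrl: Controller, theta_up: float = 2.0, k_down: float = 0.5):
    """Reversibly cool the controller: Θ↑ makes rupture rarer, k↓ slows updates."""
    saved = (ctrl.theta, ctrl.k)
    ctrl.theta *= theta_up
    ctrl.k *= k_down
    try:
        yield ctrl
    finally:
        ctrl.theta, ctrl.k = saved  # restore: the intervention is fully reversible

ctrl = Controller()
with cooled(ctrl):
    print(f"cooled: theta={ctrl.theta}, k={ctrl.k}")
print(f"restored: theta={ctrl.theta}, k={ctrl.k}")
```

The point of the context-manager shape is that cooling is scoped and reversible; nothing about the module is permanently changed.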
## Install

```bash
pip install cognize
```
## Core primitives

| Symbol | Meaning |
|---|---|
| `V` | Belief / projection |
| `R` | Reality signal |
| `Δ` | Distortion (`R − V`) |
| `Θ` | Rupture threshold |
| `E` | Misalignment memory |
| `⊙` | Realignment operator |
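The table above implies a simple update loop. A minimal, self-contained sketch of the mechanics (a conceptual illustration, not Cognize's actual implementation) looks like:

```python
def step(V: float, R: float, E: float, theta: float, k: float = 0.3):
    """One epistemic update: compute distortion, accumulate memory, rupture or realign."""
    delta = R - V                     # Δ: distortion between reality and belief
    E += abs(delta)                   # E: misalignment memory accumulates
    ruptured = abs(delta) > theta     # rupture when drift exceeds Θ
    if ruptured:
        E = 0.0                       # collapse: reset memory after rupture
    V = V + k * delta                 # ⊙: realign belief a bounded step toward reality
    return V, E, ruptured

V, E = 0.5, 0.0
for R in [0.1, 0.3, 0.7, 0.9]:
    V, E, ruptured = step(V, R, E, theta=0.35)
    print(f"V={V:.3f} E={E:.3f} ruptured={ruptured}")
```

The real engine adds step caps, adaptive thresholds, and pluggable realign/collapse policies on top of this basic loop.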
## Examples

### 1) Quick start (scalar)

```python
from cognize import EpistemicState
from cognize.policies import threshold_adaptive, realign_tanh, collapse_soft_decay

state = EpistemicState(V0=0.5, threshold=0.35, realign_strength=0.3)
state.inject_policy(threshold=threshold_adaptive, realign=realign_tanh, collapse=collapse_soft_decay)

for r in [0.1, 0.3, 0.7, 0.9]:
    state.receive(r)

print(state.explain_last())  # human-readable step summary
print(state.summary())       # compact state snapshot
```
### 2) Multimodal in one pass (vector)

```python
import numpy as np
from cognize import EpistemicState, Perception

def toy_text_encoder(s: str) -> np.ndarray:
    return np.array([len(s), s.count(" "), s.count("a"), 1.0], dtype=float)

P = Perception(text_encoder=toy_text_encoder)
state = EpistemicState(V0=np.zeros(4), perception=P)
state.receive({"text": "hello world"})
print(state.last())  # includes Δ, Θ, ruptured, etc.
```
### 3) Meta-policy selection

```python
from cognize import EpistemicState, PolicyManager, PolicyMemory, ShadowRunner, SAFE_SPECS
from cognize.policies import threshold_adaptive, realign_tanh, collapse_soft_decay

s = EpistemicState(V0=0.0, threshold=0.35, realign_strength=0.3)
s.inject_policy(threshold=threshold_adaptive, realign=realign_tanh, collapse=collapse_soft_decay)
s.policy_manager = PolicyManager(
    base_specs=SAFE_SPECS, memory=PolicyMemory(), shadow=ShadowRunner(),
    epsilon=0.15, promote_margin=1.03, cooldown_steps=30,
)

for r in [0.2, 0.4, 0.5, 0.7, 0.6, 0.8]:
    s.receive(r)

print(s.summary())
```
### 4) CSV / JSON export & small stats

```python
from pathlib import Path
from statistics import mean
from cognize import EpistemicState
from cognize.policies import threshold_adaptive, realign_tanh, collapse_soft_decay

s = EpistemicState(V0=0.0, threshold=0.35, realign_strength=0.3)
s.inject_policy(threshold=threshold_adaptive, realign=realign_tanh, collapse=collapse_soft_decay)

for r in [0.1, 0.3, 0.9, 0.2, 0.8, 0.7]:
    s.receive(r)

out = Path("trace.csv")
s.export_csv(str(out))
print("ruptures:", s.summary()["ruptures"])
print("mean |Δ| (last 10):", mean(abs(h["∆"]) for h in s.history[-10:]))
```
### 5) Plain EpistemicGraph (coupling multiple states)

```python
from cognize import make_simple_state, EpistemicGraph

G = EpistemicGraph(damping=0.5, max_depth=2, max_step=1.0, rupture_only_propagation=True)
G.add("A", make_simple_state(0.0))
G.add("B", make_simple_state(0.0))
G.add("C", make_simple_state(0.0))

# A → B (pressure), B → C (delta)
G.link("A", "B", weight=0.8, mode="pressure", decay=0.9, cooldown=3)
G.link("B", "C", weight=0.5, mode="delta", decay=0.9, cooldown=2)

# Step node A with evidence; influence cascades per edge modes
G.step("A", 1.2)

print(G.stats())
print("hot edges:", G.top_edges(by="applied_ema", k=5))
print("last cascade:", G.last_cascade(5))
```
### 6) Programmable graph: register a strategy and link by reference

```python
from typing import Any, Dict, Optional
import numpy as np
from cognize import EpistemicProgrammableGraph, register_strategy

# Minimal programmable pieces (use defaults for the rest)
def gate_fn(src_st, dst_st, ctx: Dict[str, Any]) -> bool:
    # fire only on rupture for 'pressure'/'policy'; always for 'delta'
    mode = ctx["edge"]["mode"]
    rupt = bool(ctx["post_src"].get("ruptured", False))
    return mode not in ("pressure", "policy") or rupt

def influence_fn(src_st, post_src: Dict[str, Any], ctx: Dict[str, Any]) -> float:
    delta = float(post_src.get("∆", 0.0))
    theta = float(post_src.get("Θ", 0.0))
    return max(0.0, delta - theta)  # pressure

def target_fn(dst_st, edge_meta: Dict[str, Any], ctx: Dict[str, Any]) -> Optional[slice]:
    # take the middle half of a vector V if available
    if not isinstance(dst_st.V, np.ndarray):
        return None
    n = dst_st.V.shape[0]
    return slice(n // 4, 3 * n // 4)

register_strategy("cooling@1.0.0", gate_fn=gate_fn, influence_fn=influence_fn, target_fn=target_fn)

G = EpistemicProgrammableGraph(damping=0.6, max_depth=2)
G.add("X"); G.add("Y")

# Attach by reference; params are JSON-safe and persisted
G.link("X", "Y", mode="policy", weight=0.7, decay=0.9, cooldown=4,
       strategy_id="cooling@1.0.0", params={"bias_decay": 0.9})

# Drive X; the programmable edge applies a reversible Θ↑/k↓ bias on Y when X ruptures
G.step("X", 1.4)
print(G.last_cascade(3))

# Persist topology + strategy references (no code serialization)
G.save_graph("graph.json", include_strategies=True)

# Load later (rebinds strategies by ID from the registry)
H = EpistemicProgrammableGraph()
H.add("X"); H.add("Y")
H.load_graph("graph.json", strict_strategies=False)
```
### 7) Influence preview (what would be applied?)

```python
from cognize import EpistemicGraph, make_simple_state

G = EpistemicGraph()
G.add("A", make_simple_state(0.0))
G.add("B", make_simple_state(0.0))
G.link("A", "B", weight=0.8, mode="pressure", decay=0.9, cooldown=1)

# Pretend A just ruptured with Δ=1.0, Θ=0.3 (no state mutation)
postA = {"∆": 1.0, "Θ": 0.3, "ruptured": True}
print("predicted magnitude:", G.predict_influence("A", "B", post=postA))
```
### 8) Suspend propagation (isolate learning vs. coupling)

```python
from cognize import EpistemicGraph, make_simple_state

G = EpistemicGraph()
for n in ("A", "B"):
    G.add(n, make_simple_state(0.0))
G.link("A", "B", weight=1.0, mode="pressure")

with G.suspend_propagation():
    # A will update itself, but won't influence B during this block
    G.step("A", 2.0)

# Propagation resumes here
G.step("A", 2.2)
```
### 9) Tiny NN control-plane sketch (PyTorch, optional)

```python
# Pseudo-code: shows the observer → graph → nudge loop
import torch
from cognize import EpistemicProgrammableGraph

peg = EpistemicProgrammableGraph(max_depth=1, damping=0.5)
peg.add("L23"); peg.add("HEAD7")
peg.link("L23", "HEAD7", mode="policy", weight=0.6, decay=0.9, cooldown=3)

def entropy(x):  # simple example metric
    p = torch.softmax(x.flatten(), dim=0)
    return -(p * (p + 1e-9).log()).sum().item()

attn_logits_ref = {}  # cache the last logits tensor per step (illustrative only)

def hook_L23(module, inp, out):
    peg.step("L23", {"norm": out.norm().item(), "ruptured": False})  # you decide the R fields

def hook_HEAD7(module, inp, out):
    attn_logits_ref["HEAD7"] = out  # capture a handle to nudge later

# Attach forward hooks on your model (where it makes sense):
# layer23.register_forward_hook(hook_L23)
# head7.register_forward_hook(hook_HEAD7)

# After a forward pass:
# peg.step("HEAD7", {"entropy": entropy(attn_logits_ref["HEAD7"])})
# (peg runs propagation internally during step)
# Apply your bounded nudges here according to your edge strategies/logs.
```
## Citation

If you use Cognize, please cite the concept DOI (it always resolves to the latest version):

```bibtex
@software{pulikanti_cognize,
  author    = {Pulikanti, Sashi Bharadwaj},
  title     = {Cognize: Programmable cognition for Python systems},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.17042859},
  url       = {https://doi.org/10.5281/zenodo.17042859}
}
```
## License

Licensed under the Apache License 2.0.

© 2025 Pulikanti Sashi Bharadwaj