Cognitive Heteronomy for LLMs
Reasoning is not a property of the model — it is an emergent dynamic of external control.
An SDK that rejects the illusion of autonomous LLM reasoning. Instead of treating language models as cognitive agents, Meta-Reasoning introduces cognitive heteronomy: reasoning is governed, observed, and mutated from the outside.
The model doesn't think. It executes. The thinking happens in the architecture around it.
Core Thesis
LLMs are generative substrates, not minds. What is commonly called "reasoning" is pattern replay — not deliberation. This SDK externalizes all meta-cognitive functions into a Cognitive Controller that:
- Observes the form of reasoning (not its content)
- Measures trajectory, redundancy, stall, and premature convergence
- Mutates the reasoning process through formal constraint operators
- Records cognitive trajectories in an Epistemic Ledger
No self-reflection. No "think step by step". No autonomous agents.
Architecture
Level 1 — Generative Substrate (LLM)
Produces text and structures. Decides nothing. Stateless by design.
Level 2 — Cognitive Controller
The heart. Semantically blind — it doesn't evaluate truth, it evaluates cognitive form:
- Entropy of reasoning moves
- Strategy repetition index
- Depth without novelty
- Constraint violation rate
- Premature closure score
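Because the controller is semantically blind, every one of these signals can be computed from the shape of the trace alone. A minimal sketch of two of them — Shannon entropy over the move sequence and a simple repetition index — assuming moves arrive as plain strings; this is illustrative, not the SDK's actual implementation:

```python
import math
from collections import Counter

def move_entropy(moves: list[str]) -> float:
    """Shannon entropy (in bits) of the distribution of cognitive moves.
    Low entropy means the trajectory keeps replaying the same moves."""
    if not moves:
        return 0.0
    counts = Counter(moves)
    total = len(moves)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def repetition_index(moves: list[str]) -> float:
    """Fraction of consecutive move pairs that repeat the same move."""
    if len(moves) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(moves, moves[1:]) if a == b)
    return repeats / (len(moves) - 1)
```

Neither function ever inspects the generated content — only the move labels, which is exactly the controller's contract.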
Level 3 — Epistemic Ledger
Not RAG. Not content memory. A structural trace of:
- Cognitive transformations attempted
- Strategies that produced stall
- Failure maps that prevent regression
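A structural trace of this kind needs very little schema. The sketch below shows one plausible shape for ledger entries and a failure-map query; the field names are hypothetical, not the SDK's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    """One structural trace: which moves were tried and how the cycle ended.
    No content is stored -- only form."""
    cycle: int
    moves: list[str]
    outcome: str  # e.g. "progress", "stall", "collapse"

@dataclass
class EpistemicLedger:
    entries: list[LedgerEntry] = field(default_factory=list)

    def record(self, entry: LedgerEntry) -> None:
        self.entries.append(entry)

    def stalled_strategies(self) -> set[tuple[str, ...]]:
        """Move sequences that previously produced a stall -- the failure
        map the controller consults to prevent regression."""
        return {tuple(e.moves) for e in self.entries if e.outcome == "stall"}
```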
Key Concepts
Structured Output Protocol
Every LLM generation must include a formal reasoning trace:
{
  "content": "...",
  "reasoning_trace": {
    "moves": ["assumption", "deduction", "analogy"],
    "depth": 4,
    "confidence_markers": 2,
    "abstraction_level": "medium"
  }
}
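A protocol like this is only useful if it is enforced. One way to do that is a structural validator run on every generation before the controller sees it — a sketch, assuming the keys shown in the JSON above; the check logic is illustrative:

```python
# Trace keys mirroring the protocol example above.
REQUIRED_TRACE_KEYS = {"moves", "depth", "confidence_markers", "abstraction_level"}

def validate_output(payload: dict) -> list[str]:
    """Return a list of protocol violations (empty list means valid)."""
    errors = []
    if "content" not in payload:
        errors.append("missing 'content'")
    trace = payload.get("reasoning_trace")
    if not isinstance(trace, dict):
        errors.append("missing 'reasoning_trace'")
        return errors
    missing = REQUIRED_TRACE_KEYS - trace.keys()
    errors.extend(f"trace missing '{k}'" for k in sorted(missing))
    return errors
```

A generation that fails validation can simply be rejected and regenerated — the substrate decides nothing, so it gets no say.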
Cognitive Move Taxonomy
A finite, observable alphabet:
assumption · deduction · induction · abduction · analogy · contradiction · enumeration · compression · narrative_simulation
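Being finite, the alphabet maps naturally onto an enum. One plausible encoding (the SDK's own `types.py` may differ):

```python
from enum import Enum

class CognitiveMove(str, Enum):
    """The finite, observable alphabet of reasoning moves."""
    ASSUMPTION = "assumption"
    DEDUCTION = "deduction"
    INDUCTION = "induction"
    ABDUCTION = "abduction"
    ANALOGY = "analogy"
    CONTRADICTION = "contradiction"
    ENUMERATION = "enumeration"
    COMPRESSION = "compression"
    NARRATIVE_SIMULATION = "narrative_simulation"
```

Subclassing `str` lets raw labels from a reasoning trace round-trip through the enum without conversion code.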
Mutation Operators
The controller doesn't say "reason better". It says:
- BAN: "deduction is forbidden"
- REQUIRE: "you must use analogy"
- LIMIT_DEPTH: "max 2 reasoning steps"
- FORCE_COMPRESSION: "reduce to 2 concepts"
- INVERT_CAUSALITY: "reverse the causal direction"
- REQUIRE_CONTRADICTION: "find an internal contradiction"
Improvisation emerges from constraint, not freedom — like jazz.
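Operators like these are formal objects, but the substrate consumes text, so at some point each one must be rendered into an explicit constraint injected into the next prompt. A sketch of that rendering step, using the six operators above (function name and wording are illustrative):

```python
def render_mutation(op: str, arg) -> str:
    """Translate a mutation operator into the explicit constraint string
    injected into the substrate's next prompt."""
    templates = {
        "BAN": "The move '{}' is forbidden in your next trace.",
        "REQUIRE": "Your next trace must include the move '{}'.",
        "LIMIT_DEPTH": "Use at most {} reasoning steps.",
        "FORCE_COMPRESSION": "Reduce your answer to at most {} concepts.",
        "INVERT_CAUSALITY": "Reverse the causal direction of '{}'.",
        "REQUIRE_CONTRADICTION": "Find an internal contradiction in '{}'.",
    }
    return templates[op].format(arg)
```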
Failure as First-Class Output
The system does not optimize for correct answers. Failure is informative:
- Every collapsed trajectory is recorded
- Every stall enriches the ledger
- The system learns which cognitive spaces to avoid
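Classifying a cycle as stall or progress can itself stay semantically blind. A crude sketch, under the assumption that a stall is "no new move types and no depth gain" — the threshold logic is illustrative, not the SDK's actual rule:

```python
def classify_cycle(prev_moves: list[str], curr_moves: list[str], depth_gain: int) -> str:
    """Semantically blind outcome classifier: a cycle that introduces no
    new move types and gains no depth is a stall; anything else counts
    as progress. Thresholds are illustrative."""
    new_moves = set(curr_moves) - set(prev_moves)
    if not new_moves and depth_gain <= 0:
        return "stall"
    return "progress"
```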
Installation
pip install -e .
Or with dev dependencies:
pip install -e ".[dev]"
Quick Start
Without an API key (mock backend)
python -m examples.mock_example
With OpenAI
export OPENAI_API_KEY=<your-key>
python -m examples.openai_example
Programmatic usage
from meta_reasoning import CognitiveEngine

class MyBackend:
    def generate(self, messages):
        # Call your LLM here, return {"content": "..."}
        ...

engine = CognitiveEngine(backend=MyBackend(), max_cycles=5)
result = engine.run("Your task here")

for cycle in result.cycles:
    print(f"Cycle {cycle.cycle}: {cycle.outcome}")
    print(f"  Moves: {[m.value for m in cycle.output.reasoning_trace.moves]}")
    print(f"  Entropy: {cycle.metrics.entropy:.2f}")

# Save the epistemic ledger for analysis
engine.ledger.save("session.json")
Running Tests
pip install -e ".[dev]"
pytest tests/ -v
Project Structure
meta_reasoning/
├── __init__.py # Public API
├── types.py # Cognitive moves, traces, mutations, metrics
├── substrate.py # Level 1 — LLM interface
├── controller.py # Level 2 — Cognitive Controller
├── ledger.py # Level 3 — Epistemic Ledger
├── metrics.py # Semantically-blind cognitive metrics
├── mutations.py # Mutation operator generation
└── engine.py # The governed cognitive loop
Related Work & Philosophy
For a detailed comparison with Chain-of-Thought, Tree-of-Thoughts, Meta-Reasoning Prompting, Reflexion, Self-Refine, ReAct, and other approaches — including a comparative table — see the full Related Work page on the project website.
The short version: every existing approach keeps the LLM as the cognitive subject. We don't. The model is a substrate. The reasoning is governed from outside.
License
AGPL-3.0 -- See LICENSE for details.
File details
Details for the file meta_reasoning-0.0.1.tar.gz.
File metadata
- Download URL: meta_reasoning-0.0.1.tar.gz
- Upload date:
- Size: 26.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 74d95ca406b53ec6cc67210e9435b2a36a945f2747f953271ad7a0d7a082df06 |
| MD5 | 358ee441e0710701ca453cf60cafdb60 |
| BLAKE2b-256 | 18e1c9ca09b2f1488cba59280bc38883b27d2ea3f413d38c2066b54d637dcb6d |
File details
Details for the file meta_reasoning-0.0.1-py3-none-any.whl.
File metadata
- Download URL: meta_reasoning-0.0.1-py3-none-any.whl
- Upload date:
- Size: 26.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0f28cb8d98b484f25cee7eeddb09110800b8d980e0b9f65b1ea60cf885c7a060 |
| MD5 | 06d2addf56edc51c7a72525257282a93 |
| BLAKE2b-256 | 3f2b27ed464c1d361d77a557b43912e23ee73a56d003f7c3126247d839a2e555 |