
Meta-Reasoning

Cognitive Heteronomy for LLMs


Reasoning is not a property of the model — it is an emergent dynamic of external control.

An SDK that rejects the illusion of autonomous LLM reasoning. Instead of treating language models as cognitive agents, Meta-Reasoning introduces cognitive heteronomy: reasoning is governed, observed, and mutated from the outside.

The model doesn't think. It executes. The thinking happens in the architecture around it.


Core Thesis

LLMs are generative substrates, not minds. What is commonly called "reasoning" is pattern replay — not deliberation. This SDK externalizes all meta-cognitive functions into a Cognitive Controller that:

  • Observes the form of reasoning (not its content)
  • Measures trajectory, redundancy, stall, and premature convergence
  • Mutates the reasoning process through formal constraint operators
  • Records cognitive trajectories in an Epistemic Ledger

No self-reflection. No "think step by step". No autonomous agents.

Architecture

Level 1 — Generative Substrate (LLM)

Produces text and structures. Decides nothing. Stateless by design.
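As a sketch (the names are illustrative, not the SDK's actual interface), "decides nothing, stateless by design" can be pinned down as a minimal protocol the controller programs against:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Substrate(Protocol):
    """Level 1: a generative substrate. Produces output, decides
    nothing, and keeps no state between calls."""

    def generate(self, messages: list) -> dict:
        """Return a payload with 'content' and a 'reasoning_trace'."""
        ...
```

Any backend that exposes this single method can be governed; the substrate never sees the ledger or the metrics.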

Level 2 — Cognitive Controller

The heart of the system. It is semantically blind: it does not evaluate the truth of the reasoning, only its cognitive form:

  • Entropy of reasoning moves
  • Strategy repetition index
  • Depth without novelty
  • Constraint violation rate
  • Premature closure score
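Two of these signals can be sketched in a few lines (illustrative code, not the SDK's actual metrics module): Shannon entropy over the move labels, and a strategy repetition index over adjacent moves. Both are semantically blind by construction, since they see only move labels, never content.

```python
import math
from collections import Counter

def move_entropy(moves):
    """Shannon entropy (bits) of the cognitive-move distribution.
    Low entropy means the substrate keeps replaying the same move
    types; high entropy means varied cognitive form."""
    counts = Counter(moves)
    total = len(moves)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def repetition_index(moves):
    """Fraction of adjacent move pairs that repeat the same move."""
    if len(moves) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(moves, moves[1:]) if a == b)
    return repeats / (len(moves) - 1)
```

For example, four consecutive deductions give an entropy of 0.0 bits and a repetition index of 1.0: a trajectory the controller would flag as stalled form.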

Level 3 — Epistemic Ledger

Not RAG. Not content memory. A structural trace of:

  • Cognitive transformations attempted
  • Strategies that produced stall
  • Failure maps that prevent regression
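A minimal sketch of such a ledger (class and field names are hypothetical, not the SDK's API) shows why it is structural trace rather than content memory: each entry records which moves were attempted under which constraint and how the cycle ended, and nothing about the task itself.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class LedgerEntry:
    """One structural record: what was attempted, under which
    constraint, and how it ended. No task content is stored."""
    cycle: int
    moves: list              # cognitive move labels, e.g. ["deduction", "analogy"]
    mutation: Optional[str]  # constraint active during the cycle, if any
    outcome: str             # e.g. "progress", "stall", "collapse"

class EpistemicLedger:
    def __init__(self):
        self.entries = []

    def record(self, entry: LedgerEntry) -> None:
        self.entries.append(entry)

    def failure_map(self):
        """Move sequences that previously ended in stall or collapse:
        the regions of cognitive space the controller should avoid."""
        return {tuple(e.moves) for e in self.entries
                if e.outcome in ("stall", "collapse")}

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump([asdict(e) for e in self.entries], f, indent=2)
```

The failure map is what prevents regression: before issuing a mutation, the controller can check whether the resulting move sequence has already collapsed once.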

Key Concepts

Structured Output Protocol

Every LLM generation must include a formal reasoning trace:

{
  "content": "...",
  "reasoning_trace": {
    "moves": ["assumption", "deduction", "analogy"],
    "depth": 4,
    "confidence_markers": 2,
    "abstraction_level": "medium"
  }
}
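Because the protocol is mandatory, a generation without a well-formed trace can be rejected before the controller ever sees it. A validation sketch (function name and error messages are illustrative) against the fields shown above:

```python
REQUIRED_FIELDS = {"moves", "depth", "confidence_markers", "abstraction_level"}

def validate_trace(payload: dict) -> dict:
    """Reject a generation that does not carry the formal reasoning
    trace the structured output protocol requires."""
    trace = payload.get("reasoning_trace")
    if not isinstance(trace, dict):
        raise ValueError("generation carries no reasoning_trace")
    missing = REQUIRED_FIELDS - trace.keys()
    if missing:
        raise ValueError(f"reasoning_trace missing fields: {sorted(missing)}")
    return trace
```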

Cognitive Move Taxonomy

A finite, observable alphabet: assumption · deduction · induction · abduction · analogy · contradiction · enumeration · compression · narrative_simulation
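Because the alphabet is finite, it maps naturally onto an enumeration; a sketch (the class name is illustrative) that makes any move outside the taxonomy a hard error:

```python
from enum import Enum

class CognitiveMove(Enum):
    """The finite, observable alphabet of reasoning moves."""
    ASSUMPTION = "assumption"
    DEDUCTION = "deduction"
    INDUCTION = "induction"
    ABDUCTION = "abduction"
    ANALOGY = "analogy"
    CONTRADICTION = "contradiction"
    ENUMERATION = "enumeration"
    COMPRESSION = "compression"
    NARRATIVE_SIMULATION = "narrative_simulation"
```

Parsing a trace then becomes `CognitiveMove(label)`, which raises `ValueError` for any label the taxonomy does not contain.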

Mutation Operators

The controller doesn't say "reason better". It says:

  • BAN: "deduction is forbidden"
  • REQUIRE: "you must use analogy"
  • LIMIT_DEPTH: "max 2 reasoning steps"
  • FORCE_COMPRESSION: "reduce to 2 concepts"
  • INVERT_CAUSALITY: "reverse the causal direction"
  • REQUIRE_CONTRADICTION: "find an internal contradiction"

Improvisation emerges from constraint, not freedom — like jazz.
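One way such operators could be rendered into concrete constraint text for the substrate's next request (the operator names come from the list above; the function and templates are illustrative assumptions, not the SDK's actual implementation):

```python
# Hypothetical rendering of formal mutation operators into constraint text.
MUTATION_TEMPLATES = {
    "BAN": "The move '{arg}' is forbidden this cycle.",
    "REQUIRE": "You must use the move '{arg}'.",
    "LIMIT_DEPTH": "Use at most {arg} reasoning steps.",
    "FORCE_COMPRESSION": "Reduce your answer to {arg} concepts.",
    "INVERT_CAUSALITY": "Reverse the causal direction of your argument.",
    "REQUIRE_CONTRADICTION": "Find an internal contradiction before answering.",
}

def render_mutation(operator: str, arg=None) -> str:
    """Translate a formal mutation operator into the constraint text
    injected into the substrate's next generation request."""
    return MUTATION_TEMPLATES[operator].format(arg=arg)
```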

Failure as First-Class Output

The system does not optimize for correct answers. Failure is informative:

  • Every collapsed trajectory is recorded
  • Every stall enriches the ledger
  • The system learns which cognitive spaces to avoid

Installation

From a source checkout:

pip install -e .

Or with dev dependencies:

pip install -e ".[dev]"

Quick Start

Without an API key (mock backend)

python -m examples.mock_example

With OpenAI

export OPENAI_API_KEY=<your-key>
python -m examples.openai_example

Programmatic usage

from meta_reasoning import CognitiveEngine

class MyBackend:
    def generate(self, messages):
        # Call your LLM here. Per the structured output protocol,
        # return {"content": "...", "reasoning_trace": {...}}.
        ...

engine = CognitiveEngine(backend=MyBackend(), max_cycles=5)
result = engine.run("Your task here")

for cycle in result.cycles:
    print(f"Cycle {cycle.cycle}: {cycle.outcome}")
    print(f"  Moves: {[m.value for m in cycle.output.reasoning_trace.moves]}")
    print(f"  Entropy: {cycle.metrics.entropy:.2f}")

# Save the epistemic ledger for analysis
engine.ledger.save("session.json")

Running Tests

pip install -e ".[dev]"
pytest tests/ -v

Project Structure

meta_reasoning/
├── __init__.py        # Public API
├── types.py           # Cognitive moves, traces, mutations, metrics
├── substrate.py       # Level 1 — LLM interface
├── controller.py      # Level 2 — Cognitive Controller
├── ledger.py          # Level 3 — Epistemic Ledger
├── metrics.py         # Semantically-blind cognitive metrics
├── mutations.py       # Mutation operator generation
└── engine.py          # The governed cognitive loop

Related Work & Philosophy

For a detailed comparison with Chain-of-Thought, Tree-of-Thoughts, Meta-Reasoning Prompting, Reflexion, Self-Refine, ReAct, and other approaches — including a comparative table — see the full Related Work page on the project website.

The short version: every existing approach keeps the LLM as the cognitive subject. We don't. The model is a substrate. The reasoning is governed from outside.

License

AGPL-3.0 -- See LICENSE for details.
