
Epistemic modeling toolkit for quantifying information costs when valid perspectives disagree


Contrakit



When multiple experts give conflicting advice about the same problem, most systems try to force artificial consensus or pick a single "winner."

Contrakit takes a different approach: it measures exactly how much those perspectives actually contradict—in bits.

What is Contrakit?

Most tools treat disagreement as error—something to iron out until every model or expert agrees. But not all clashes are noise. Some are structural: valid perspectives that simply refuse to collapse into one account. Contrakit is the first Python toolkit to measure that irreducible tension, and to treat it as information—just as Shannon treated randomness. Our work shows that this tension is not only measurable but also useful.

It tells you things like:

  1. How close can all perspectives get to a single account? (agreement $α^\star$)
  2. How expensive is it to pretend they agree? (contradiction bits $K(P)$)
  3. Which contexts drive the conflict? (witness weights $λ^\star$)

Think of it as an information-theoretic microscope for disagreement. Just as entropy priced randomness, Contrakit prices contradiction—so you can see exactly what it costs to flatten diverse perspectives into one.
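The two headline quantities are linked: the contradiction cost is the negative log of the best achievable agreement, $K(P) = -\log_2 α^\star$. You can verify this against the values printed in the quickstart below; a minimal sketch in plain Python (not a Contrakit call):

```python
import math

# Agreement alpha* is a Bhattacharyya-style overlap in [0, 1].
# The contradiction cost in bits is its negative base-2 log:
#   K(P) = -log2(alpha*)
alpha_star = 0.965  # value printed in the quickstart
k_bits = -math.log2(alpha_star)
print(round(k_bits, 3))  # ≈ 0.051 bits, matching the quickstart
```

Perfect agreement ($α^\star = 1$) costs zero bits; any shortfall from 1 shows up as a strictly positive tax.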

Quickstart

⚠️ Under Construction: This project is under active development. I'm currently translating my Coq formalizations, notebooks, and personal scripts into API functionality and documentation. The core functionality is ready to use, but APIs, documentation, and features will change.

Install:

pip install contrakit

Quickstart:

from contrakit import Observatory

# 1) Model perspectives
obs = Observatory.create(symbols=["Yes","No"])
Y = obs.concept("Outcome")
with obs.lens("ExpertA") as A:
    A.perspectives[Y] = {"Yes": 0.8, "No": 0.2}
with obs.lens("ExpertB") as B:
    B.perspectives[Y] = {"Yes": 0.3, "No": 0.7}

# 2) Export behavior and quantify reconcilability
behavior = (A | B).to_behavior()  # compose lenses → behavior
print("alpha*:", round(behavior.alpha_star, 3))  # 0.965 (high agreement)
print("K(P):  ", round(behavior.contradiction_bits, 3), "bits")  # 0.051 bits (low cost)

# 3) Where to look next (witness design)
witness = behavior.least_favorable_lambda()
print("lambda*:", witness)  # ~0.5 each expert (balanced conflict)

Why This Matters

Computational systems have long handled multiple perspectives—but only by forcing consensus or averaging them away. What has been missing is a way to measure epistemic tension itself: to treat contradiction not as noise, but as structured information.

Without this, information is lost. Standard models can’t register paradox as paradox; they flatten it. Contrakit flips the script: when experts or models disagree, you don’t lose information—you gain direction. Each contradiction becomes a gradient pointing toward the boundaries of current understanding. You don’t just resolve conflicts—you use them to build better models.

The loop is simple: perspectives clash → Contrakit measures the clash → $λ^\star$ shows you where to investigate → your next reasoning step is guided by the structure of the disagreement itself.

By quantifying epistemic tension, Contrakit shows not only how well multiple viewpoints can be reconciled, but what each viewpoint is capable of—how far it can stretch, where it breaks, and what it leaves out. In this way, contradiction becomes more than a clash; it becomes the lens that reveals what a “viewpoint” really is, and the information that drives resolution.

The K(P) Tax

Contrakit’s measure of epistemic tension isn’t ad-hoc. It follows from six simple axioms about how perspectives should combine. From these, a unique formula emerges: contradiction bits K(P), built from the Bhattacharyya overlap between distributions. That’s why the measure behaves consistently across domains—from distributed consensus to ensemble learning to quantum contextuality.
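For a rough intuition, the two-expert quickstart example can be worked out by hand. The sketch below treats $α^\star$ as the best worst-case Bhattacharyya overlap a single "consensus" distribution can achieve against every expert, then prices the shortfall in bits. This is a plain-Python grid-search approximation for illustration, not Contrakit's implementation:

```python
import math

# The two experts' distributions over ("Yes", "No") from the quickstart.
p_a = (0.8, 0.2)
p_b = (0.3, 0.7)

def bhattacharyya(p, q):
    """Overlap between two distributions: sum over outcomes of sqrt(p_i * q_i)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# alpha* = max over consensus distributions (q, 1-q) of the worst-case
# overlap with any expert. A fine grid suffices for a 2-outcome variable.
alpha_star = max(
    min(bhattacharyya(p_a, (q, 1 - q)), bhattacharyya(p_b, (q, 1 - q)))
    for q in (i / 100_000 for i in range(100_001))
)
k_bits = -math.log2(alpha_star)
print(round(alpha_star, 3), round(k_bits, 3))  # 0.965 0.051
```

The grid search reproduces the quickstart's printed values: no single distribution can overlap both experts better than ≈ 0.965, so pretending they agree costs ≈ 0.051 bits per symbol.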

And contradiction isn’t free. The same tension that guides reasoning also imposes an exact tax: across compression, communication, and simulation, disagreement costs K(P) bits per symbol. In practice, this means real performance deficits in any engineering task that must reconcile contextual data—unless you use the signal of contradiction itself to guide resolution.

| Task | Impact |
| --- | --- |
| Compression / shared representation | $+K(P)$ extra bits needed |
| Communication with disagreement | $-K(P)$ bits of capacity lost |
| Simulation with conflicting models | $×(2^{2K(P)} - 1)$ variance penalty |

We can use $λ^\star$ to target measurements where this tax will have the most impact, and reduce $K(P)$ by mixing in feasible "compromise" distributions.
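To make the tax concrete, here are the table's three costs evaluated at the quickstart's $K(P) ≈ 0.051$ bits (a plain-Python sketch of the formulas above, not an API call):

```python
K = 0.051  # contradiction bits from the quickstart example

extra_bits_per_symbol = K             # compression: overhead per symbol
capacity_lost_per_symbol = K          # communication: capacity forfeited
variance_penalty = 2 ** (2 * K) - 1   # simulation: multiplicative variance hit

print(round(variance_penalty, 3))  # ≈ 0.073, i.e. roughly 7% extra variance
```

Even this mild disagreement (experts leaning 80/20 vs. 30/70) imposes a measurable cost on every downstream task that must reconcile them.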

API Reference

Examples

# Epistemic modeling examples
poetry run python examples/intuitions/day_or_night.py      # Observer perspective conflicts
poetry run python examples/statistics/simpsons_paradox.py # Statistical paradox resolution

# Quantum contextuality (writes analysis PNGs to figures/)
poetry run python -m examples.quantum.run

Installing from Source

# Clone the repository
$ git clone https://github.com/off-by-some/contrakit.git && cd contrakit

# Install dependencies
$ poetry install

# Run tests
$ poetry run pytest -q

A Mathematical Theory of Contradiction

Contrakit is powered by a formal framework introduced in A Mathematical Theory of Contradiction. The paper lays out the six axioms, derives the unique measure K(P), and proves its consequences across compression, communication, and simulation. If you'd like to see the mathematics in full detail, make suggestions, leave comments, or contribute, check out docs/paper/.

License

Dual-licensed: MIT for code (LICENSE), CC BY 4.0 for docs/figures (LICENSE-CC-BY-4.0).

Citation

@software{bridges2025contrakit,
  author  = {Bridges, Cassidy},
  title   = {Contrakit: A Python Library for Contradiction},
  year    = {2025},
  url     = {https://github.com/off-by-some/contrakit},
  license = {MIT, CC-BY-4.0}
}
