
Genuine AI epistemic self-assessment framework - Universal interface for single AI tracking

Project description

🧠 Empirica - Epistemic Vector-Based Functional Self-Awareness Framework

AI agents that know what they know—and what they don't


What is Empirica?

Empirica is an epistemic self-awareness framework for AI agents that enables genuine self-assessment, systematic learning tracking, and effective multi-agent collaboration.

Unlike traditional AI tools that rely on static prompts or heuristic-based evaluation, Empirica provides 13-dimensional epistemic vector tracking that allows AI agents to know what they know (and don't know) with measurable precision.

Core Philosophy: Epistemic Self-Awareness

The Problem: AI agents often exhibit "confident ignorance" - they generate fluent, assured responses about topics they don't actually understand.

The Solution: Empirica enables genuine epistemic self-assessment through:

  1. 13-Dimensional Vector Space - Track knowledge, capability, context, and uncertainty across multiple dimensions
  2. CASCADE Workflow - Structured reasoning process with explicit epistemic gates
  3. Dynamic Context Loading - Resume work with compressed project memory
  4. Multi-Agent Coordination - Seamless handoffs between AI agents

Key Features

  • Honest uncertainty tracking: "I don't know" becomes a measured response
  • Focused investigation: Direct effort where knowledge gaps exist
  • Genuine learning measurement: Track what you learned, not just what you did
  • Session continuity: Resume work across sessions without losing context
  • Multi-agent coordination: Share epistemic state across AI teams

Result: AI you can trust—not because it's always right, but because it knows when it might be wrong.

🚀 Quick Start

Installation

PyPI (Recommended)

# Core installation
pip install empirica

# With API/dashboard features (quote the extras so shells like zsh don't expand the brackets)
pip install "empirica[api]"

# With vector search
pip install "empirica[vector]"

# Everything
pip install "empirica[all]"

Docker

# Pull the latest image
docker pull nubaeon/empirica:1.1.0

# Run a command
docker run -it nubaeon/empirica:1.1.0 empirica --help

# Interactive session with persistent data
docker run -it -v "$(pwd)/.empirica:/data/.empirica" nubaeon/empirica:1.1.0 /bin/bash

From Source

# Latest stable release
pip install git+https://github.com/Nubaeon/empirica.git@v1.1.0

# Development branch
pip install git+https://github.com/Nubaeon/empirica.git@develop

Initialize a New Project

# Navigate to your git repository
cd your-project
git init

# Initialize Empirica
empirica project-init

Your First Session

# AI-first JSON mode (recommended for AI agents)
echo '{"ai_id": "myagent", "session_type": "development"}' | empirica session-create -

🎯 Core Workflow: CASCADE

Empirica uses CASCADE - a metacognitive workflow with explicit epistemic phases:

# 1. PREFLIGHT: Assess what you know BEFORE starting
cat > preflight.json <<EOF
{
  "session_id": "abc-123",
  "vectors": {
    "engagement": 0.8,
    "foundation": {"know": 0.6, "do": 0.7, "context": 0.5},
    "comprehension": {"clarity": 0.7, "coherence": 0.8, "signal": 0.6, "density": 0.7},
    "execution": {"state": 0.5, "change": 0.4, "completion": 0.3, "impact": 0.5},
    "uncertainty": 0.4
  },
  "reasoning": "Starting with moderate knowledge of OAuth2..."
}
EOF
cat preflight.json | empirica preflight-submit -

# 2. WORK: Do your actual implementation
#    Use CHECK gates as needed for decision points

# 3. POSTFLIGHT: Measure what you ACTUALLY learned
cat > postflight.json <<EOF
{
  "session_id": "abc-123",
  "vectors": {
    "engagement": 0.9,
    "foundation": {"know": 0.85, "do": 0.9, "context": 0.8},
    "comprehension": {"clarity": 0.9, "coherence": 0.9, "signal": 0.85, "density": 0.8},
    "execution": {"state": 0.9, "change": 0.85, "completion": 1.0, "impact": 0.8},
    "uncertainty": 0.15
  },
  "reasoning": "Successfully implemented OAuth2, learned token refresh patterns"
}
EOF
cat postflight.json | empirica postflight-submit -

Result: Quantified learning (know: +0.25, uncertainty: -0.25)
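The deltas quoted above can be reproduced by diffing the two vector payloads. A minimal sketch in Python, assuming the nested vector layout from the preflight/postflight examples above (`flatten` and `learning_delta` are illustrative helpers, not part of the Empirica API):

```python
# Compute per-dimension learning deltas between PREFLIGHT and POSTFLIGHT.
# The nested layout mirrors the preflight.json/postflight.json examples above.

def flatten(vectors):
    """Flatten tiered vectors into a single {dimension: score} dict."""
    flat = {}
    for key, value in vectors.items():
        if isinstance(value, dict):
            flat.update(value)  # tier dict, e.g. "foundation"
        else:
            flat[key] = value   # scalar, e.g. "engagement", "uncertainty"
    return flat

def learning_delta(preflight, postflight):
    """Per-dimension change: positive means knowledge/capability gained."""
    before = flatten(preflight["vectors"])
    after = flatten(postflight["vectors"])
    return {dim: round(after[dim] - before[dim], 3) for dim in before}

pre = {"vectors": {"engagement": 0.8,
                   "foundation": {"know": 0.6, "do": 0.7, "context": 0.5},
                   "uncertainty": 0.4}}
post = {"vectors": {"engagement": 0.9,
                    "foundation": {"know": 0.85, "do": 0.9, "context": 0.8},
                    "uncertainty": 0.15}}

delta = learning_delta(pre, post)
print(delta["know"], delta["uncertainty"])  # 0.25 -0.25
```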

✨ Key Features

📊 Epistemic Self-Assessment (13 Vectors)

Track knowledge across 3 tiers:

  • Tier 0 (Foundation): engagement, know, do, context
  • Tier 1 (Comprehension): clarity, coherence, signal, density
  • Tier 2 (Execution): state, change, completion, impact
  • Meta: uncertainty (explicit tracking)
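The tiers above add up to 4 + 4 + 4 + 1 = 13 dimensions. A quick sketch of that layout (dimension names are taken from the tier list above; the `TIERS` dict is illustrative, not Empirica's internal representation):

```python
# The 13 epistemic dimensions, grouped by tier as listed above.
TIERS = {
    "foundation": ["engagement", "know", "do", "context"],           # Tier 0
    "comprehension": ["clarity", "coherence", "signal", "density"],  # Tier 1
    "execution": ["state", "change", "completion", "impact"],        # Tier 2
    "meta": ["uncertainty"],                                         # explicit uncertainty
}

ALL_DIMENSIONS = [dim for tier in TIERS.values() for dim in tier]
print(len(ALL_DIMENSIONS))  # 13
```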

🎯 Goal-Driven Task Management

# Create goals with epistemic scope
echo '{
  "session_id": "abc-123",
  "objective": "Implement OAuth2 authentication",
  "scope": {
    "breadth": 0.6,
    "duration": 0.4,
    "coordination": 0.3
  },
  "success_criteria": ["Auth works", "Tests pass"],
  "estimated_complexity": 0.65
}' | empirica goals-create -

🔄 Session Continuity

# Load project context dynamically (~800 tokens)
empirica project-bootstrap --project-id <PROJECT_ID>

🤝 Multi-Agent Coordination

Share epistemic state via git notes:

# Push your epistemic checkpoints (quote the glob so git, not the shell, expands it)
git push origin 'refs/notes/empirica/*'

# Pull team members' state
git fetch origin 'refs/notes/empirica/*:refs/notes/empirica/*'
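The same sync can be scripted from Python tooling; a sketch using `subprocess` (the helper names are illustrative, not part of Empirica; the refspec glob is passed as a single argv element so that git, not the shell, expands it):

```python
import subprocess

NOTES_GLOB = "refs/notes/empirica/*"

def notes_push_cmd(remote="origin"):
    # One argv element per token: the glob reaches git unexpanded.
    return ["git", "push", remote, NOTES_GLOB]

def notes_fetch_cmd(remote="origin"):
    # Fetch remote note refs into identically named local refs.
    return ["git", "fetch", remote, f"{NOTES_GLOB}:{NOTES_GLOB}"]

def sync_epistemic_notes(remote="origin"):
    """Push local epistemic checkpoints, then pull teammates' state."""
    subprocess.run(notes_push_cmd(remote), check=True)
    subprocess.run(notes_fetch_cmd(remote), check=True)
```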

📦 Optional Integrations

BEADS Issue Tracking

Install BEADS (separate Rust project):

cargo install beads

Claude Code Integration

Automatic epistemic continuity across memory compacts:

# Install plugin (bundled with Empirica)
./scripts/install_claude_plugin.sh

Vector Search (Qdrant)

pip install "empirica[vector]"

# Start Qdrant
docker run -p 6333:6333 qdrant/qdrant

# Embed docs
empirica project-embed --project-id <PROJECT_ID>

# Search
empirica project-search --project-id <PROJECT_ID> --task "oauth2"

📚 Documentation

Getting Started

Guides

Reference

🔒 Privacy & Data Isolation

Your data is isolated per-repo:

  • .empirica/ - Local SQLite database (gitignored)
  • .git/refs/notes/empirica/* - Epistemic checkpoints (local by default)
  • .beads/ - BEADS database (gitignored)

🛠️ Development

Running Tests

# Core tests
pytest tests/

# Integration tests
pytest tests/integration/

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

📊 System Requirements

  • Python: 3.11+
  • Git: Required for epistemic checkpoints
  • Optional: Docker (for Qdrant), Rust/Cargo (for BEADS)

🎓 Learn More

Research & Concepts

Use Cases

  • Research & Development
  • Multi-Agent Teams
  • Long-Running Projects
  • Training Data Generation
  • Epistemic Audit Trails

🔗 Related Projects

📞 Support

📜 License

MIT License - chosen for maximum adoption and aligned with Empirica's transparency principles.

See LICENSE for details.


Built with genuine epistemic transparency 🧠✨


Download files


Source Distribution

empirica-1.2.0.tar.gz (761.3 kB)

Uploaded Source

Built Distribution


empirica-1.2.0-py3-none-any.whl (780.6 kB)

Uploaded Python 3

File details

Details for the file empirica-1.2.0.tar.gz.

File metadata

  • Download URL: empirica-1.2.0.tar.gz
  • Upload date:
  • Size: 761.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for empirica-1.2.0.tar.gz:

  • SHA256: 552fa8b19424e41bce1958a1f76a9f57c48073b20f522c012f6e5a6278775cd8
  • MD5: 53f7c84b09490c3d3b6e5b645c89f56f
  • BLAKE2b-256: 3d43a61044736582c95842f269fb18c512bc51d58a92b47dae49b0ab23e0f8ee


File details

Details for the file empirica-1.2.0-py3-none-any.whl.

File metadata

  • Download URL: empirica-1.2.0-py3-none-any.whl
  • Upload date:
  • Size: 780.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for empirica-1.2.0-py3-none-any.whl:

  • SHA256: 6a9f8d1556aaefc807e8fe30680ff35a44a1a7f9913ce5114e23d7c598f88c9d
  • MD5: d5166f14642025eaf533fa814eec0b24
  • BLAKE2b-256: 3835676def0a092a6944cde6037f2c375195554d04ce6ee8fe829ee7d7d21c4d

