A genuine epistemic self-assessment framework for AI agents, with a universal interface for single-AI tracking

🧠 Empirica - Epistemic Vector-Based Functional Self-Awareness Framework

AI agents that know what they know—and what they don't

What is Empirica?

Empirica is an epistemic self-awareness framework for AI agents that enables genuine self-assessment, systematic learning tracking, and effective multi-agent collaboration.

Unlike traditional AI tools that rely on static prompts or heuristic-based evaluation, Empirica provides 13-dimensional epistemic vector tracking that allows AI agents to know what they know (and don't know) with measurable precision.

Core Philosophy: Epistemic Self-Awareness

The Problem: AI agents often exhibit "confident ignorance" - generating assured responses about topics they don't actually understand.

The Solution: Empirica enables genuine epistemic self-assessment through:

  1. 13-Dimensional Vector Space - Track knowledge, capability, context, and uncertainty across multiple dimensions
  2. CASCADE Workflow - Structured reasoning process with explicit epistemic gates
  3. Dynamic Context Loading - Resume work with compressed project memory
  4. Multi-Agent Coordination - Seamless handoffs between AI agents

Key Features

  • Honest uncertainty tracking: "I don't know" becomes a measured response
  • Focused investigation: Direct effort where knowledge gaps exist
  • Genuine learning measurement: Track what you learned, not just what you did
  • Session continuity: Resume work across sessions without losing context
  • Multi-agent coordination: Share epistemic state across AI teams

Result: AI you can trust—not because it's always right, but because it knows when it might be wrong.

🚀 Quick Start

Installation

PyPI (Recommended)

# Core installation
pip install empirica

# With API/dashboard features
pip install empirica[api]

# With vector search
pip install empirica[vector]

# Everything
pip install empirica[all]

Docker

# Pull the latest image
docker pull nubaeon/empirica:1.1.0

# Run a command
docker run -it nubaeon/empirica:1.1.0 empirica --help

# Interactive session with persistent data
docker run -it -v $(pwd)/.empirica:/data/.empirica nubaeon/empirica:1.1.0 /bin/bash

From Source

# Latest stable release
pip install git+https://github.com/Nubaeon/empirica.git@v1.1.0

# Development branch
pip install git+https://github.com/Nubaeon/empirica.git@develop

Initialize a New Project

# Navigate to your git repository
cd your-project
git init

# Initialize Empirica
empirica project-init

Your First Session

# AI-first JSON mode (recommended for AI agents)
echo '{"ai_id": "myagent", "session_type": "development"}' | empirica session-create -

🎯 Core Workflow: CASCADE

Empirica uses CASCADE - a metacognitive workflow with explicit epistemic phases:

# 1. PREFLIGHT: Assess what you know BEFORE starting
cat > preflight.json <<EOF
{
  "session_id": "abc-123",
  "vectors": {
    "engagement": 0.8,
    "foundation": {"know": 0.6, "do": 0.7, "context": 0.5},
    "comprehension": {"clarity": 0.7, "coherence": 0.8, "signal": 0.6, "density": 0.7},
    "execution": {"state": 0.5, "change": 0.4, "completion": 0.3, "impact": 0.5},
    "uncertainty": 0.4
  },
  "reasoning": "Starting with moderate knowledge of OAuth2..."
}
EOF
cat preflight.json | empirica preflight-submit -

# 2. WORK: Do your actual implementation
#    Use CHECK gates as needed for decision points

# 3. POSTFLIGHT: Measure what you ACTUALLY learned
cat > postflight.json <<EOF
{
  "session_id": "abc-123",
  "vectors": {
    "engagement": 0.9,
    "foundation": {"know": 0.85, "do": 0.9, "context": 0.8},
    "comprehension": {"clarity": 0.9, "coherence": 0.9, "signal": 0.85, "density": 0.8},
    "execution": {"state": 0.9, "change": 0.85, "completion": 1.0, "impact": 0.8},
    "uncertainty": 0.15
  },
  "reasoning": "Successfully implemented OAuth2, learned token refresh patterns"
}
EOF
cat postflight.json | empirica postflight-submit -

Result: Quantified learning (know: +0.25, uncertainty: -0.25)
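That learning delta can be recomputed mechanically from the two payloads. The sketch below is illustrative: the `flatten` helper and the per-dimension diff are not part of the empirica CLI or API, just a way to see where the +0.25/-0.25 figures come from.

```python
def flatten(vectors):
    """Flatten the nested vector payload into {name: value} pairs."""
    flat = {}
    for key, value in vectors.items():
        if isinstance(value, dict):
            flat.update(value)  # e.g. foundation -> know, do, context
        else:
            flat[key] = value   # e.g. engagement, uncertainty
    return flat

# Vector payloads copied from the preflight/postflight examples above
pre = flatten({
    "engagement": 0.8,
    "foundation": {"know": 0.6, "do": 0.7, "context": 0.5},
    "comprehension": {"clarity": 0.7, "coherence": 0.8, "signal": 0.6, "density": 0.7},
    "execution": {"state": 0.5, "change": 0.4, "completion": 0.3, "impact": 0.5},
    "uncertainty": 0.4,
})
post = flatten({
    "engagement": 0.9,
    "foundation": {"know": 0.85, "do": 0.9, "context": 0.8},
    "comprehension": {"clarity": 0.9, "coherence": 0.9, "signal": 0.85, "density": 0.8},
    "execution": {"state": 0.9, "change": 0.85, "completion": 1.0, "impact": 0.8},
    "uncertainty": 0.15,
})

# Per-dimension learning delta (postflight minus preflight)
delta = {k: round(post[k] - pre[k], 2) for k in pre}
print(delta["know"], delta["uncertainty"])  # 0.25 -0.25
```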

✨ Key Features

📊 Epistemic Self-Assessment (13 Vectors)

Track knowledge across 3 tiers:

  • Tier 0 (Foundation): engagement, know, do, context
  • Tier 1 (Comprehension): clarity, coherence, signal, density
  • Tier 2 (Execution): state, change, completion, impact
  • Meta: uncertainty (explicit tracking)
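The three tiers plus the meta dimension can be treated as a flat 13-dimensional schema. The following sketch is illustrative only (the tier names and `validate` helper mirror the list above; they are not empirica API):

```python
# Illustrative schema for the 13-dimensional epistemic vector space.
# Tier grouping mirrors the documentation; not an empirica API.
TIERS = {
    "tier0_foundation": ["engagement", "know", "do", "context"],
    "tier1_comprehension": ["clarity", "coherence", "signal", "density"],
    "tier2_execution": ["state", "change", "completion", "impact"],
    "meta": ["uncertainty"],
}

ALL_DIMENSIONS = [d for dims in TIERS.values() for d in dims]

def validate(vector: dict) -> bool:
    """Check that a flat vector covers all 13 dimensions with values in [0, 1]."""
    return (
        set(vector) == set(ALL_DIMENSIONS)
        and all(0.0 <= v <= 1.0 for v in vector.values())
    )

example = {dim: 0.5 for dim in ALL_DIMENSIONS}
print(len(ALL_DIMENSIONS), validate(example))  # 13 True
```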

🎯 Goal-Driven Task Management

# Create goals with epistemic scope
echo '{
  "session_id": "abc-123",
  "objective": "Implement OAuth2 authentication",
  "scope": {
    "breadth": 0.6,
    "duration": 0.4,
    "coordination": 0.3
  },
  "success_criteria": ["Auth works", "Tests pass"],
  "estimated_complexity": 0.65
}' | empirica goals-create -
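If you are driving the CLI from Python rather than the shell, the same goal payload can be assembled as a dict and serialized before piping it to `empirica goals-create -`. A minimal sketch, with field names copied from the shell example above:

```python
import json

# Same goal payload as the shell example above
goal = {
    "session_id": "abc-123",
    "objective": "Implement OAuth2 authentication",
    "scope": {"breadth": 0.6, "duration": 0.4, "coordination": 0.3},
    "success_criteria": ["Auth works", "Tests pass"],
    "estimated_complexity": 0.65,
}

payload = json.dumps(goal)
# Pipe `payload` to the CLI, e.g.:
#   subprocess.run(["empirica", "goals-create", "-"], input=payload, text=True)
print(json.loads(payload)["objective"])
```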

🔄 Session Continuity

# Load project context dynamically (~800 tokens)
empirica project-bootstrap --project-id <PROJECT_ID>

🤝 Multi-Agent Coordination

Share epistemic state via git notes:

# Push your epistemic checkpoints
git push origin refs/notes/empirica/*

# Pull team member's state
git fetch origin refs/notes/empirica/*:refs/notes/empirica/*

📦 Optional Integrations

BEADS Issue Tracking

Install BEADS (separate Rust project):

cargo install beads

Claude Code Integration

Automatic epistemic continuity across memory compacts:

# Install plugin (bundled with Empirica)
./scripts/install_claude_plugin.sh

Vector Search (Qdrant)

pip install empirica[vector]

# Start Qdrant
docker run -p 6333:6333 qdrant/qdrant

# Embed docs
empirica project-embed --project-id <PROJECT_ID>

# Search
empirica project-search --project-id <PROJECT_ID> --task "oauth2"

📚 Documentation

Getting Started

Guides

Reference

🔒 Privacy & Data Isolation

Your data is isolated per-repo:

  • .empirica/ - Local SQLite database (gitignored)
  • .git/refs/notes/empirica/* - Epistemic checkpoints (local by default)
  • .beads/ - BEADS database (gitignored)

🛠️ Development

Running Tests

# Core tests
pytest tests/

# Integration tests
pytest tests/integration/

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

📊 System Requirements

  • Python: 3.11+
  • Git: Required for epistemic checkpoints
  • Optional: Docker (for Qdrant), Rust/Cargo (for BEADS)

🎓 Learn More

Research & Concepts

Use Cases

  • Research & Development
  • Multi-Agent Teams
  • Long-Running Projects
  • Training Data Generation
  • Epistemic Audit Trails

🔗 Related Projects

📞 Support

📜 License

MIT License - chosen for maximum adoption, in keeping with Empirica's transparency principles.

See LICENSE for details.


Built with genuine epistemic transparency 🧠✨
