Genuine AI epistemic self-assessment framework - Universal interface for single AI tracking
# 🧠 Empirica - Epistemic Vector-Based Functional Self-Awareness Framework

AI agents that know what they know, and what they don't.
## What's New in 1.2.3
- 🚦 Sentinel Safety Gates - Human-in-the-loop gates (PROCEED/HALT/BRANCH/REVISE) that bound AI autonomy
- 🤖 Multi-Agent Spawning - Spawn epistemic sub-agents with `agent-spawn`, aggregate findings with `agent-aggregate`
- 🎭 Persona System - AI identity profiles that emerge from successful epistemic patterns (`persona-list`, `persona-promote`)
- 🔧 108 CLI Commands - Full coverage including assessment, trajectory projection, and drift detection
- 🌗 3-Layer Signaling - Moon phase indicators (🌑→🌕) for epistemic health at a glance
- 📈 Trajectory Projection - Project epistemic learning curves with `trajectory-project`
- 🐢 Turtle Principle - Recursive grounding: every claim must trace to a verifiable source
## ⚡ Claude Code Quick Start

### What You Do vs What Claude Does
| You Do (Once) | Claude Does (Automatic) |
|---|---|
| `pip install empirica` | Runs `project-bootstrap` to load prior learnings |
| Add snippet to `~/.claude/CLAUDE.md` | Logs findings as it works |
| Work normally | Runs CHECK gates before risky actions |
| | Saves what it learned at session end |
### Setup (2 minutes)

#### Step 1: Install

```bash
pip install empirica
```
#### Step 2: Add to `~/.claude/CLAUDE.md`
Copy this to ~/.claude/CLAUDE.md (Claude Code) or your AI's system prompt:
# Empirica - Epistemic Self-Assessment

You have Empirica installed. Use it to track what you know and learn.

## Quick Start

```bash
# Start session
empirica session-create --ai-id claude-code --output json

# Before work: What do I know?
empirica preflight-submit -

# After work: What did I learn?
empirica postflight-submit -
```

## Core Vectors (0.0-1.0)

- know: Domain knowledge (0.7 or higher = ready)
- uncertainty: Doubt level (0.35 or lower = ready)
- context: Information access
- do: Execution capability

## Log As You Work

```bash
empirica finding-log --finding "Discovered X" --impact 0.7
empirica unknown-log --unknown "Need to investigate Y"
```

## When Uncertain

Run a CHECK gate: `empirica check-submit -`

Full docs: `empirica --help`
**Full setup guide:** [CLAUDE_CODE_SETUP.md](docs/guides/CLAUDE_CODE_SETUP.md) (hooks, MCP, troubleshooting)
**Full system prompt:** [CLAUDE.md](docs/system-prompts/CLAUDE.md) | [CANONICAL_CORE.md](docs/system-prompts/CANONICAL_CORE.md)
### Step 3: (Optional) MCP for Claude Desktop
Add to `claude_desktop_config.json`:
```json
{
"mcpServers": {
"empirica": {
"command": "empirica-mcp",
"env": { "EMPIRICA_AI_ID": "claude-desktop" }
}
}
}
```
#### Docker

```bash
docker pull nubaeon/empirica:1.2.3
```
## How It Works Day-to-Day
You don't need to type Empirica commands. Just talk to Claude normally:
| You Say (Natural Language) | Claude Does (Behind the Scenes) |
|---|---|
| "Continue working on the auth refactor" | Runs `project-bootstrap` → loads what it learned last session |
| "I'm not sure about this approach" | Runs `check-submit` → assesses if it knows enough to proceed |
| "Good work, let's wrap up" | Runs `postflight-submit` → saves learnings for next time |
### What is project-bootstrap?

When Claude starts a session, `project-bootstrap` loads ~800 tokens of structured context:

```
📊 Epistemic State: know=0.85, uncertainty=0.15
🎯 Active Goals: Refactor auth module (in_progress)
💡 Recent Findings: "Auth uses JWT with 15min expiry"
❓ Open Unknowns: "Token rotation mechanism unclear"
```

This replaces 200k tokens of conversation history with just the important bits.
### Will Claude ignore the commands?

Sometimes, especially mid-task. But after a memory compact (when context summarizes), Claude naturally looks for context; that's when bootstrap shines. The CLAUDE.md instructions make this reliable.
### Live Metacognitive Signal

With Claude Code hooks enabled, you see Claude's epistemic state in real time:

```
[empirica] ⚡79% │ ⚡ PRAXIC → POSTFLIGHT │ K:80% U:20% C:90% │ Δ K:+0.30 U:-0.30 C:+0.30 │ → stable
```
What this tells you:
- ⚡79% - Overall epistemic confidence
- PRAXIC - Claude is in action mode (vs NOETIC = investigation mode)
- POSTFLIGHT - Just completed a task and logged learnings
- K:80% U:20% - 80% knowledge, 20% uncertainty (a healthy state)
- Δ K:+0.30 - Gained 30% knowledge this session (learning delta)
- → stable - No epistemic drift detected
Why this matters: You can see when Claude is uncertain before it acts, when it's learning, and when it might be drifting from reality. No more guessing if Claude actually knows what it's doing.
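A status line like the one above is just string formatting over the current vectors. A minimal sketch, assuming the vectors are already available as floats (the `format_signal` helper is hypothetical, not part of the Empirica API, and the real hook may compose the line differently):

```python
def format_signal(confidence, mode, phase, know, unc, ctx,
                  d_know, d_unc, d_ctx, drift="stable"):
    """Assemble an Empirica-style status line from epistemic vectors."""
    delta = f"Δ K:{d_know:+.2f} U:{d_unc:+.2f} C:{d_ctx:+.2f}"
    return (f"[empirica] ⚡{confidence:.0%} │ {mode} → {phase} │ "
            f"K:{know:.0%} U:{unc:.0%} C:{ctx:.0%} │ {delta} │ → {drift}")

line = format_signal(0.79, "PRAXIC", "POSTFLIGHT",
                     0.80, 0.20, 0.90, 0.30, -0.30, 0.30)
print(line)
```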
## What is Empirica?
Empirica is an epistemic self-awareness framework for AI agents that enables genuine self-assessment, systematic learning tracking, and effective multi-agent collaboration.
Unlike traditional AI tools that rely on static prompts or heuristic-based evaluation, Empirica provides 13-dimensional epistemic vector tracking that allows AI agents to know what they know (and don't know) with measurable precision.
## Core Philosophy: Epistemic Self-Awareness
The Problem: AI agents often exhibit "confident ignorance" - they confidently generate responses about topics they don't actually understand.
The Solution: Empirica enables genuine epistemic self-assessment through:
- 13-Dimensional Vector Space - Track knowledge, capability, context, and uncertainty across multiple dimensions
- CASCADE Workflow - Structured reasoning process with explicit epistemic gates
- Dynamic Context Loading - Resume work with compressed project memory
- Multi-Agent Coordination - Seamless handoffs between AI agents
## Key Features

- ✅ Honest uncertainty tracking: "I don't know" becomes a measured response
- ✅ Focused investigation: Direct effort where knowledge gaps exist
- ✅ Genuine learning measurement: Track what you learned, not just what you did
- ✅ Session continuity: Resume work across sessions without losing context
- ✅ Multi-agent coordination: Share epistemic state across AI teams

Result: AI you can trust, not because it's always right, but because it knows when it might be wrong.
## 🚀 Quick Start
### Installation

#### PyPI (Recommended)

```bash
# Core installation
pip install empirica

# With API/dashboard features
pip install "empirica[api]"

# With vector search
pip install "empirica[vector]"

# Everything
pip install "empirica[all]"
```

(Quoting the extras keeps the commands portable to shells like zsh that expand square brackets.)
#### Docker

```bash
# Pull the latest image
docker pull nubaeon/empirica:1.2.3

# Run a command
docker run -it nubaeon/empirica:1.2.3 empirica --help

# Interactive session with persistent data
docker run -it -v $(pwd)/.empirica:/data/.empirica nubaeon/empirica:1.2.3 /bin/bash
```
#### From Source

```bash
# Latest stable release
pip install git+https://github.com/Nubaeon/empirica.git@v1.2.3

# Development branch
pip install git+https://github.com/Nubaeon/empirica.git@develop
```
### Initialize a New Project

```bash
# Navigate to your git repository
cd your-project
git init

# Initialize Empirica
empirica project-init
```
### Your First Session

```bash
# AI-first JSON mode (recommended for AI agents)
echo '{"ai_id": "myagent", "session_type": "development"}' | empirica session-create -
```
## 🎯 Core Workflow: CASCADE

Empirica uses CASCADE, a metacognitive workflow with explicit epistemic phases:
```bash
# 1. PREFLIGHT: Assess what you know BEFORE starting
cat > preflight.json <<EOF
{
  "session_id": "abc-123",
  "vectors": {
    "engagement": 0.8,
    "foundation": {"know": 0.6, "do": 0.7, "context": 0.5},
    "comprehension": {"clarity": 0.7, "coherence": 0.8, "signal": 0.6, "density": 0.7},
    "execution": {"state": 0.5, "change": 0.4, "completion": 0.3, "impact": 0.5},
    "uncertainty": 0.4
  },
  "reasoning": "Starting with moderate knowledge of OAuth2..."
}
EOF
cat preflight.json | empirica preflight-submit -

# 2. WORK: Do your actual implementation.
#    Use CHECK gates as needed for decision points.

# 3. POSTFLIGHT: Measure what you ACTUALLY learned
cat > postflight.json <<EOF
{
  "session_id": "abc-123",
  "vectors": {
    "engagement": 0.9,
    "foundation": {"know": 0.85, "do": 0.9, "context": 0.8},
    "comprehension": {"clarity": 0.9, "coherence": 0.9, "signal": 0.85, "density": 0.8},
    "execution": {"state": 0.9, "change": 0.85, "completion": 1.0, "impact": 0.8},
    "uncertainty": 0.15
  },
  "reasoning": "Successfully implemented OAuth2, learned token refresh patterns"
}
EOF
cat postflight.json | empirica postflight-submit -
```
Result: Quantified learning (know: +0.25, uncertainty: -0.25)
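The quantified learning figure is just elementwise subtraction of the preflight vectors from the postflight vectors. A quick sketch using the foundation values from the example above:

```python
pre  = {"know": 0.60, "do": 0.70, "context": 0.50, "uncertainty": 0.40}
post = {"know": 0.85, "do": 0.90, "context": 0.80, "uncertainty": 0.15}

# Positive deltas on knowledge and negative deltas on uncertainty
# indicate genuine learning during the session.
delta = {k: round(post[k] - pre[k], 2) for k in pre}
print(delta)
```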
## ✨ Key Features

### 📊 Epistemic Self-Assessment (13 Vectors)
Track knowledge across 3 tiers:
- Tier 0 (Foundation): engagement, know, do, context
- Tier 1 (Comprehension): clarity, coherence, signal, density
- Tier 2 (Execution): state, change, completion, impact
- Meta: uncertainty (explicit tracking)
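Concretely, the 13 vectors form the same nested structure submitted at preflight. A minimal sketch of that structure and a readiness check (the thresholds of know ≥ 0.7 and uncertainty ≤ 0.35 are the ones quoted in the quick-start snippet; `flatten` is an illustrative helper, not an Empirica API):

```python
vectors = {
    "engagement": 0.8,                                       # Tier 0
    "foundation": {"know": 0.6, "do": 0.7, "context": 0.5},  # Tier 0
    "comprehension": {"clarity": 0.7, "coherence": 0.8,
                      "signal": 0.6, "density": 0.7},        # Tier 1
    "execution": {"state": 0.5, "change": 0.4,
                  "completion": 0.3, "impact": 0.5},         # Tier 2
    "uncertainty": 0.4,                                      # Meta
}

def flatten(v):
    """Collapse the tiered structure into the flat 13-dimension view."""
    flat = {}
    for key, val in v.items():
        flat.update(val if isinstance(val, dict) else {key: val})
    return flat

flat = flatten(vectors)
ready = flat["know"] >= 0.7 and flat["uncertainty"] <= 0.35
print(len(flat), ready)   # 13 dimensions; not yet ready to execute
```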
### 🎯 Goal-Driven Task Management

```bash
# Create goals with epistemic scope
echo '{
  "session_id": "abc-123",
  "objective": "Implement OAuth2 authentication",
  "scope": {
    "breadth": 0.6,
    "duration": 0.4,
    "coordination": 0.3
  },
  "success_criteria": ["Auth works", "Tests pass"],
  "estimated_complexity": 0.65
}' | empirica goals-create -
```
### 🔄 Session Continuity

```bash
# Load project context dynamically (~800 tokens)
empirica project-bootstrap --project-id <PROJECT_ID>
```
### 🤖 Multi-Agent Coordination

Spawn epistemic sub-agents:

```bash
# Spawn a sub-agent for parallel investigation
empirica agent-spawn --session-id <ID> --task "Investigate auth patterns" --depth medium

# Sub-agent reports back
empirica agent-report --session-id <SUB_ID> --findings '[...]' --confidence 0.8

# Aggregate findings from multiple agents
empirica agent-aggregate --parent-session-id <ID> --merge-strategy weighted
```

Share epistemic state via git notes:

```bash
# Push your epistemic checkpoints
git push origin refs/notes/empirica/*

# Pull a team member's state
git fetch origin refs/notes/empirica/*:refs/notes/empirica/*
```
### 🚦 Sentinel Safety Gates

Bounded AI autonomy with human oversight:

```bash
# Check if operation is safe to proceed
empirica sentinel-check --operation '{"type": "code_generation", "scope": "high"}' --session-id <ID>
# Returns: PROCEED | HALT | BRANCH | REVISE

# Orchestrate multi-step workflow with gates
empirica sentinel-orchestrate --workflow workflow.json --session-id <ID>
```
Gate types:
- `PROCEED` - Safe to continue autonomously
- `HALT` - Requires human approval before continuing
- `BRANCH` - Spawn investigation before proceeding
- `REVISE` - Modify approach and resubmit
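To illustrate how the four outcomes relate to epistemic state, here is a toy decision policy. The thresholds and the `decide` function are hypothetical, for illustration only; the real sentinel policy is Empirica's own and is not reproduced here:

```python
from enum import Enum

class Gate(Enum):
    PROCEED = "PROCEED"
    HALT = "HALT"
    BRANCH = "BRANCH"
    REVISE = "REVISE"

def decide(know, uncertainty, scope):
    """Toy gate policy: illustrative thresholds only."""
    if scope == "high" and uncertainty > 0.5:
        return Gate.HALT        # risky and uncertain: ask a human
    if know < 0.5:
        return Gate.BRANCH      # spawn an investigation first
    if uncertainty > 0.35:
        return Gate.REVISE      # narrow the approach and resubmit
    return Gate.PROCEED         # safe to continue autonomously

print(decide(0.8, 0.2, "low").value)   # prints "PROCEED"
```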
### 🎭 Persona System

AI identity profiles that emerge from successful patterns:

```bash
# List available personas
empirica persona-list

# Find persona matching current task
empirica persona-find --task "security audit" --session-id <ID>

# Promote traits based on successful outcomes
empirica persona-promote --persona-id researcher --trait thoroughness --evidence "Found 3 critical bugs"
```
### 📈 Drift Detection & Trajectory

Monitor epistemic health and project learning curves:

```bash
# Check for behavioral drift
empirica check-drift --session-id <ID>

# Project epistemic trajectory
empirica trajectory-project --session-id <ID> --horizon 5

# Assess current epistemic state
empirica assess-state --session-id <ID> --include-history
```
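Conceptually, trajectory projection extrapolates the recent trend of a vector over a horizon of future sessions. A naive linear sketch, clamped to the valid vector range (the `project` helper is illustrative; the actual model behind `trajectory-project` is not documented here):

```python
def project(history, horizon):
    """Linearly extrapolate a vector's trend, clamped to [0.0, 1.0]."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [min(1.0, max(0.0, history[-1] + slope * i))
            for i in range(1, horizon + 1)]

know_history = [0.5, 0.6, 0.7]      # last three sessions
print(project(know_history, 5))     # rises ~0.1 per session, capped at 1.0
```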
Moon phase indicators for health at a glance:
- 🌑 Critical (coverage < 25%)
- 🌒 Low (25-50%)
- 🌓 Moderate (50-75%)
- 🌔 Good (75-90%)
- 🌕 Excellent (90%+)
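The banding maps directly to a glyph. A minimal sketch, assuming the standard new-moon-to-full-moon glyphs and the coverage thresholds listed above (the `moon` helper is illustrative, not the Empirica implementation):

```python
def moon(coverage):
    """Map epistemic coverage (0.0-1.0) to a moon-phase indicator."""
    if coverage < 0.25:
        return "🌑"   # critical
    if coverage < 0.50:
        return "🌒"   # low
    if coverage < 0.75:
        return "🌓"   # moderate
    if coverage < 0.90:
        return "🌔"   # good
    return "🌕"       # excellent

print(moon(0.80))   # prints "🌔" (good)
```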
## 📦 Optional Integrations
### BEADS Issue Tracking

Install BEADS (a separate Rust project):

```bash
cargo install beads
```
### MCP Server (Model Context Protocol)

For AI tools that support MCP:

```bash
# Install MCP server
pip install empirica-mcp

# Run server
empirica-mcp
```
Features: 57 tools including 9 Human Copilot tools for enhanced human oversight.
### Claude Code Integration

Automatic epistemic continuity across memory compacts:

```bash
# Install plugin (bundled with Empirica)
./scripts/install_claude_plugin.sh
```
### Vector Search (Qdrant)

```bash
pip install "empirica[vector]"

# Start Qdrant
docker run -p 6333:6333 qdrant/qdrant

# Embed docs
empirica project-embed --project-id <PROJECT_ID>

# Search
empirica project-search --project-id <PROJECT_ID> --task "oauth2"
```
## 📚 Documentation

### Getting Started
- First-Time Setup
- Empirica Explained Simply

### Guides
- CASCADE Workflow
- Epistemic Vectors

### Reference
- CLI Commands
- Database Schema
## 🔒 Privacy & Data Isolation

Your data is isolated per repo:

- ✅ `.empirica/` - Local SQLite database (gitignored)
- ✅ `.git/refs/notes/empirica/*` - Epistemic checkpoints (local by default)
- ✅ `.beads/` - BEADS database (gitignored)
## 🛠️ Development

### Running Tests

```bash
# Core tests
pytest tests/

# Integration tests
pytest tests/integration/
```
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
## 📋 System Requirements
- Python: 3.11+
- Git: Required for epistemic checkpoints
- Optional: Docker (for Qdrant), Rust/Cargo (for BEADS)
## 📖 Learn More
Research & Concepts
Use Cases
- Research & Development
- Multi-Agent Teams
- Long-Running Projects
- Training Data Generation
- Epistemic Audit Trails
## 🔗 Related Projects
- Empirica MCP - Model Context Protocol server for Empirica integration
- Empirica EPRE - Epistemic Pattern Recognition Engine
## 💬 Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs/
## 📄 License
MIT License - Maximum adoption, trust-aligned with Empirica's transparency principles.
See LICENSE for details.
Built with genuine epistemic transparency 🧠 ✨