AI collaboration framework with persistent memory, anticipatory intelligence, code inspection, and multi-agent orchestration
Project description
Empathy Framework
The AI collaboration framework that predicts problems before they happen.
pip install empathy-framework[full]
What's New in v3.2.x
Unified CLI & Developer Experience
- Unified Typer CLI — One empathy command with Rich output, subcommand groups, and a cheatsheet
- Dev Container Support — One-click VS Code dev environment with Docker Compose
- Python 3.13 Support — Test matrix now covers 3.10-3.13 across macOS, Linux, Windows
Documentation Overhaul
- Diátaxis Framework — Restructured docs into Tutorials, How-to, Explanation, Reference
- Improved Navigation — Clearer paths from learning to mastery
- Fixed Asset Loading — CSS now loads correctly on all documentation pages
Previous (v3.1.x)
- Smart Router — Natural language wizard dispatch: "Fix security in auth.py" → SecurityWizard
- Memory Graph — Cross-wizard knowledge sharing across sessions
- Auto-Chaining — Wizards automatically trigger related wizards
- Resilience Patterns — Retry, Circuit Breaker, Timeout, Health Checks
Previous (v3.0.x)
- Multi-Model Provider System — Anthropic, OpenAI, Ollama, or Hybrid mode
- 80-96% Cost Savings — Smart tier routing: cheap models detect, best models decide
- VSCode Dashboard — 10 integrated workflows with input history persistence
Quick Start (2 Minutes)
1. Install
pip install empathy-framework[full]
2. Configure Provider
# Auto-detect your API keys and configure
python -m empathy_os.models.cli provider
# Or set explicitly
python -m empathy_os.models.cli provider --set anthropic
python -m empathy_os.models.cli provider --set hybrid # Best of all providers
3. Use It
import asyncio
from empathy_os import EmpathyOS

async def main():
    empathy = EmpathyOS()
    result = await empathy.collaborate(
        "Review this code for security issues",
        context={"code": your_code}
    )
    print(result.current_issues)    # What's wrong now
    print(result.predicted_issues)  # What will break in 30-90 days
    print(result.prevention_steps)  # How to prevent it

asyncio.run(main())
Why Empathy?
| Feature | Empathy | SonarQube | GitHub Copilot |
|---|---|---|---|
| Predicts future issues | 30-90 days ahead | No | No |
| Persistent memory | Redis + patterns | No | No |
| Multi-provider support | Claude, GPT-4, Ollama | N/A | GPT only |
| Cost optimization | 80-96% savings | N/A | No |
| Your data stays local | Yes | Cloud | Cloud |
| Free for small teams | ≤5 employees | No | No |
Become a Power User
Level 1: Basic Usage
pip install empathy-framework
- Works out of the box with sensible defaults
- Auto-detects your API keys
Level 2: Cost Optimization
# Enable hybrid mode for 80-96% cost savings
python -m empathy_os.models.cli provider --set hybrid
| Tier | Model | Use Case | Cost |
|---|---|---|---|
| Cheap | GPT-4o-mini / Haiku | Summarization, simple tasks | $0.15-0.25/M |
| Capable | GPT-4o / Sonnet | Bug fixing, code review | $2.50-3.00/M |
| Premium | o1 / Opus | Architecture, complex decisions | $15/M |
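To see how tier routing produces the quoted 80-96% range, here is a back-of-the-envelope sketch using the per-million-token prices from the table above; the traffic mix is a hypothetical workload, not a measured one:

```python
# Illustrative cost arithmetic for smart tier routing.
# Prices per million input tokens, taken from the tier table above.
PRICE_PER_M = {"cheap": 0.25, "capable": 3.00, "premium": 15.00}

def monthly_cost(token_millions: float, mix: dict[str, float]) -> float:
    """Cost when `mix` fractions of the traffic go to each tier."""
    return token_millions * sum(PRICE_PER_M[t] * share for t, share in mix.items())

tokens = 100  # hypothetical 100M tokens/month
premium_only = monthly_cost(tokens, {"premium": 1.0})  # everything on the best model
routed = monthly_cost(tokens, {"cheap": 0.8, "capable": 0.15, "premium": 0.05})

savings = 1 - routed / premium_only
print(f"${premium_only:.0f} -> ${routed:.0f} ({savings:.0%} saved)")  # → $1500 -> $140 (91% saved)
```

With 80% of traffic handled by the cheap tier, the blended rate lands well inside the advertised savings band; the exact figure depends entirely on your workload mix.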
Level 3: Multi-Model Workflows
from empathy_llm_toolkit import EmpathyLLM
llm = EmpathyLLM(provider="anthropic", enable_model_routing=True)
# Automatically routes each request to the appropriate tier (call from within an async function)
await llm.interact(user_id="dev", user_input="Summarize this", task_type="summarize") # → Haiku
await llm.interact(user_id="dev", user_input="Fix this bug", task_type="fix_bug") # → Sonnet
await llm.interact(user_id="dev", user_input="Design system", task_type="coordinate") # → Opus
Level 4: VSCode Integration
Install the Empathy VSCode extension for:
- Real-time Dashboard — Health score, costs, patterns
- One-Click Workflows — Research, code review, debugging
- Visual Cost Tracking — See savings in real-time
- See also: docs/dashboard-costs-by-tier.md for interpreting the "By tier (7 days)" cost breakdown.
Level 5: Custom Agents
from empathy_os.agents import AgentFactory
# Create domain-specific agents with inherited memory
security_agent = AgentFactory.create(
    domain="security",
    memory_enabled=True,
    anticipation_level=4
)
CLI Reference
Provider Configuration
python -m empathy_os.models.cli provider # Show current config
python -m empathy_os.models.cli provider --set anthropic # Single provider
python -m empathy_os.models.cli provider --set hybrid # Best-of-breed
python -m empathy_os.models.cli provider --interactive # Setup wizard
python -m empathy_os.models.cli provider -f json # JSON output
Model Registry
python -m empathy_os.models.cli registry # Show all models
python -m empathy_os.models.cli registry --provider openai # Filter by provider
python -m empathy_os.models.cli costs --input-tokens 50000 # Estimate costs
Telemetry & Analytics
python -m empathy_os.models.cli telemetry # Summary
python -m empathy_os.models.cli telemetry --costs # Cost savings report
python -m empathy_os.models.cli telemetry --providers # Provider usage
python -m empathy_os.models.cli telemetry --fallbacks # Fallback stats
Memory Control
empathy-memory serve # Start Redis + API server
empathy-memory status # Check system status
empathy-memory stats # View statistics
empathy-memory patterns # List stored patterns
Code Inspection
empathy-inspect . # Run full inspection
empathy-inspect . --format sarif # GitHub Actions format
empathy-inspect . --fix # Auto-fix safe issues
empathy-inspect . --staged # Only staged changes
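The SARIF output above can feed GitHub code scanning. A minimal CI sketch, assuming empathy-inspect writes the SARIF report to stdout; the workflow layout is illustrative, and github/codeql-action/upload-sarif is GitHub's standard upload action:

```yaml
# Illustrative GitHub Actions steps (not an official Empathy workflow)
- name: Run Empathy inspection
  run: empathy-inspect . --format sarif > empathy.sarif
- name: Upload results to code scanning
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: empathy.sarif
```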
XML-Enhanced Prompts
Enable structured XML prompts for consistent, parseable LLM responses:
# .empathy/workflows.yaml
xml_prompt_defaults:
  enabled: false  # Set true to enable globally
workflow_xml_configs:
  security-audit:
    enabled: true
    enforce_response_xml: true
    template_name: "security-audit"
  code-review:
    enabled: true
    template_name: "code-review"
Built-in templates: security-audit, code-review, research, bug-analysis, perf-audit, refactor-plan, test-gen, doc-gen, release-prep, dependency-check
from empathy_os.prompts import get_template, XmlResponseParser, PromptContext
# Use a built-in template
template = get_template("security-audit")
context = PromptContext.for_security_audit(code="def foo(): pass")
prompt = template.render(context)
# Parse XML responses
parser = XmlResponseParser(fallback_on_error=True)
result = parser.parse(llm_response)
print(result.summary, result.findings, result.checklist)
Smart Router
Route natural language requests to the right wizard automatically:
from empathy_os.routing import SmartRouter
router = SmartRouter()
# Natural language routing
decision = router.route_sync("Fix the security vulnerability in auth.py")
print(f"Primary: {decision.primary_wizard}") # → security-audit
print(f"Also consider: {decision.secondary_wizards}") # → [code-review]
print(f"Confidence: {decision.confidence}")
# File-based suggestions
suggestions = router.suggest_for_file("requirements.txt") # → [dependency-check]
# Error-based suggestions
suggestions = router.suggest_for_error("NullReferenceException") # → [bug-predict, test-gen]
Memory Graph
Cross-wizard knowledge sharing - wizards learn from each other:
from empathy_os.memory import MemoryGraph, EdgeType
graph = MemoryGraph()
# Add findings from any wizard
bug_id = graph.add_finding(
    wizard="bug-predict",
    finding={
        "type": "bug",
        "name": "Null reference in auth.py:42",
        "severity": "high"
    }
)
# Connect related findings
fix_id = graph.add_finding(wizard="code-review", finding={"type": "fix", "name": "Add null check"})
graph.add_edge(bug_id, fix_id, EdgeType.FIXED_BY)
# Find similar past issues
similar = graph.find_similar({"name": "Null reference error"})
# Traverse relationships
related_fixes = graph.find_related(bug_id, edge_types=[EdgeType.FIXED_BY])
Auto-Chaining
Wizards automatically trigger related wizards based on findings:
# .empathy/wizard_chains.yaml
chains:
  security-audit:
    auto_chain: true
    triggers:
      - condition: "high_severity_count > 0"
        next: dependency-check
        approval_required: false
      - condition: "vulnerability_type == 'injection'"
        next: code-review
        approval_required: true
  bug-predict:
    triggers:
      - condition: "risk_score > 0.7"
        next: test-gen
templates:
  full-security-review:
    steps: [security-audit, dependency-check, code-review]
  pre-release:
    steps: [test-gen, security-audit, release-prep]
from empathy_os.routing import ChainExecutor
executor = ChainExecutor()
# Check what chains would trigger
result = {"high_severity_count": 5}
triggers = executor.get_triggered_chains("security-audit", result)
# → [ChainTrigger(next="dependency-check"), ...]
# Execute a template
template = executor.get_template("full-security-review")
# → ["security-audit", "dependency-check", "code-review"]
Prompt Engineering Wizard
Analyze, generate, and optimize prompts:
from coach_wizards import PromptEngineeringWizard
wizard = PromptEngineeringWizard()
# Analyze existing prompts
analysis = wizard.analyze_prompt("Fix this bug")
print(f"Score: {analysis.overall_score}") # → 0.13 (poor)
print(f"Issues: {analysis.issues}") # → ["Missing role", "No output format"]
# Generate optimized prompts
prompt = wizard.generate_prompt(
    task="Review code for security vulnerabilities",
    role="a senior security engineer",
    constraints=["Focus on OWASP top 10"],
    output_format="JSON with severity and recommendation"
)
# Optimize tokens (reduce costs)
result = wizard.optimize_tokens(verbose_prompt)
print(f"Reduced: {result.token_reduction:.0%}") # → 20% reduction
# Add chain-of-thought scaffolding
enhanced = wizard.add_chain_of_thought(prompt, "debug")
Install Options
# Recommended (all features)
pip install empathy-framework[full]
# Minimal
pip install empathy-framework
# Specific providers
pip install empathy-framework[anthropic]
pip install empathy-framework[openai]
pip install empathy-framework[llm] # Both
# Development
git clone https://github.com/Smart-AI-Memory/empathy-framework.git
cd empathy-framework && pip install -e .[dev]
What's Included
| Component | Description |
|---|---|
| Empathy OS | Core engine for human↔AI and AI↔AI collaboration |
| Smart Router | Natural language wizard dispatch with LLM classification |
| Memory Graph | Cross-wizard knowledge sharing (bugs, fixes, patterns) |
| Auto-Chaining | Wizards trigger related wizards based on findings |
| Multi-Model Router | Smart routing across providers and tiers |
| Memory System | Redis short-term + encrypted long-term patterns |
| 17 Coach Wizards | Security, performance, testing, docs, prompt engineering |
| 10 Cost-Optimized Workflows | Multi-tier pipelines with XML prompts |
| Healthcare Suite | SBAR, SOAP notes, clinical protocols (HIPAA) |
| Code Inspection | Unified pipeline with SARIF/GitHub Actions support |
| VSCode Extension | Visual dashboard for memory and workflows |
| Telemetry & Analytics | Cost tracking, usage stats, optimization insights |
The 5 Levels of AI Empathy
| Level | Name | Behavior | Example |
|---|---|---|---|
| 1 | Reactive | Responds when asked | "Here's the data you requested" |
| 2 | Guided | Asks clarifying questions | "What format do you need?" |
| 3 | Proactive | Notices patterns | "I pre-fetched what you usually need" |
| 4 | Anticipatory | Predicts future needs | "This query will timeout at 10k users" |
| 5 | Transformative | Builds preventing structures | "Here's a framework for all future cases" |
Empathy operates at Level 4 — predicting problems before they manifest.
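The ladder can also be expressed as a simple ordered type; this enum is purely illustrative and not part of the framework's public API:

```python
from enum import IntEnum

class EmpathyLevel(IntEnum):
    """The five levels of AI empathy, ordered by increasing initiative."""
    REACTIVE = 1        # responds when asked
    GUIDED = 2          # asks clarifying questions
    PROACTIVE = 3       # notices patterns
    ANTICIPATORY = 4    # predicts future needs
    TRANSFORMATIVE = 5  # builds preventing structures

# Because IntEnum preserves integer ordering, levels compare naturally:
assert EmpathyLevel.ANTICIPATORY > EmpathyLevel.PROACTIVE
```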
Environment Setup
# Required: At least one provider
export ANTHROPIC_API_KEY="sk-ant-..." # For Claude models
export OPENAI_API_KEY="sk-..." # For GPT models
# Optional: Redis for memory
export REDIS_URL="redis://localhost:6379"
# Or use a .env file (auto-detected)
echo 'ANTHROPIC_API_KEY=sk-ant-...' >> .env
Get Involved
- Star this repo if you find it useful
- Join Discussions — Questions, ideas, show what you built
- Read the Book — Deep dive into the philosophy
- Full Documentation — API reference, examples, guides
Project Evolution
For those interested in the development history and architectural decisions:
- Development Logs — Execution plans, phase completions, and progress tracking
- Architecture Docs — System design, memory architecture, and integration plans
- Marketing Materials — Pitch decks, outreach templates, and commercial readiness
- Guides — Publishing tutorials, MkDocs setup, and distribution policies
License
Fair Source License 0.9 — Free for students, educators, and teams ≤5 employees. Commercial license ($99/dev/year) for larger organizations. Details →
Built by Smart AI Memory · Documentation · Examples · Issues
File details
Details for the file empathy_framework-3.2.5.tar.gz.
File metadata
- Download URL: empathy_framework-3.2.5.tar.gz
- Size: 1.2 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 74d2c2b32bce6a95e96060fd6f54650c4439c22b95431fe39c1e9d71e8ad9795 |
| MD5 | f8f66e9e4159858bccd9c10665d81482 |
| BLAKE2b-256 | bde156af050522caa596b37bcd4a09d52ee6402e533f54657dc1fbb2a00f8b88 |
Provenance
The following attestation bundles were made for empathy_framework-3.2.5.tar.gz:
Publisher: publish-pypi.yml on Smart-AI-Memory/empathy-framework
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: empathy_framework-3.2.5.tar.gz
- Subject digest: 74d2c2b32bce6a95e96060fd6f54650c4439c22b95431fe39c1e9d71e8ad9795
- Sigstore transparency entry: 779253528
- Permalink: Smart-AI-Memory/empathy-framework@32d3adcf9401c4bb5bbaea225d3d8e702d552145
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Smart-AI-Memory
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@32d3adcf9401c4bb5bbaea225d3d8e702d552145
- Trigger Event: workflow_dispatch
File details
Details for the file empathy_framework-3.2.5-py3-none-any.whl.
File metadata
- Download URL: empathy_framework-3.2.5-py3-none-any.whl
- Size: 334.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 16e38645aedce2fa85cf78354a90b5c1d4004f54cc291b540e7d49f2edbb0d49 |
| MD5 | 3d64c53c62fbdfe4f1fd4a26c94e41b5 |
| BLAKE2b-256 | eec96ddb4d54a21d7ccec537db5f6c9c262ebbe6eaa0ba9b1caaef5ce08ef432 |
Provenance
The following attestation bundles were made for empathy_framework-3.2.5-py3-none-any.whl:
Publisher: publish-pypi.yml on Smart-AI-Memory/empathy-framework
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: empathy_framework-3.2.5-py3-none-any.whl
- Subject digest: 16e38645aedce2fa85cf78354a90b5c1d4004f54cc291b540e7d49f2edbb0d49
- Sigstore transparency entry: 779253533
- Permalink: Smart-AI-Memory/empathy-framework@32d3adcf9401c4bb5bbaea225d3d8e702d552145
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Smart-AI-Memory
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@32d3adcf9401c4bb5bbaea225d3d8e702d552145
- Trigger Event: workflow_dispatch