
AI collaboration framework with persistent memory, anticipatory intelligence, code inspection, and multi-agent orchestration

Project description

Empathy Framework

The AI collaboration framework that predicts problems before they happen.


pip install empathy-framework[full]

What's New in v3.5.x

Memory API Security Hardening (v3.5.0)

  • Input Validation — Pattern IDs, agent IDs, and classifications validated to prevent path traversal and injection attacks
  • API Key Authentication — Bearer token and X-API-Key header support with SHA-256 hash comparison
  • Rate Limiting — Per-IP sliding window rate limiting (100 req/min default)
  • HTTPS/TLS Support — Optional SSL certificate configuration for encrypted connections
  • CORS Restrictions — Configurable allowed origins (localhost-only by default)
  • Request Size Limits — 1MB body limit to prevent DoS attacks

Previous (v3.4.x)

  • Trust Circuit Breaker — Automatic degradation when model reliability drops
  • Pattern Catalog System — Searchable pattern library with similarity matching
  • Memory Control Panel — VSCode sidebar for Redis and pattern management

Previous (v3.3.x)

  • Formatted Reports — Every workflow includes formatted_report with consistent structure
  • Enterprise-Safe Doc-Gen — Auto-scaling tokens, cost guardrails, file export
  • Unified Typer CLI — One empathy command with Rich output
  • Python 3.13 Support — Test matrix covers 3.10-3.13 across all platforms

Previous (v3.1.x)

  • Smart Router — Natural language wizard dispatch: "Fix security in auth.py" → SecurityWizard
  • Memory Graph — Cross-wizard knowledge sharing across sessions
  • Auto-Chaining — Wizards automatically trigger related wizards
  • Resilience Patterns — Retry, Circuit Breaker, Timeout, Health Checks

Previous (v3.0.x)

  • Multi-Model Provider System — Anthropic, OpenAI, Google Gemini, Ollama, or Hybrid mode
  • 80-96% Cost Savings — Smart tier routing: cheap models detect, best models decide
  • VSCode Dashboard — 10 integrated workflows with input history persistence

Quick Start (2 Minutes)

1. Install

pip install empathy-framework[full]

2. Configure Provider

# Auto-detect your API keys and configure
python -m empathy_os.models.cli provider

# Or set explicitly
python -m empathy_os.models.cli provider --set anthropic
python -m empathy_os.models.cli provider --set hybrid  # Best of all providers

3. Use It

from empathy_os import EmpathyOS

empathy = EmpathyOS()  # avoid naming this "os" (it would shadow the stdlib module)
result = await empathy.collaborate(  # call from inside an async function
    "Review this code for security issues",
    context={"code": your_code}
)

print(result.current_issues)      # What's wrong now
print(result.predicted_issues)    # What will break in 30-90 days
print(result.prevention_steps)    # How to prevent it

Why Empathy?

| Feature | Empathy | SonarQube | GitHub Copilot |
|---|---|---|---|
| Predicts future issues | 30-90 days ahead | No | No |
| Persistent memory | Redis + patterns | No | No |
| Multi-provider support | Claude, GPT-4, Gemini, Ollama | N/A | GPT only |
| Cost optimization | 80-96% savings | N/A | No |
| Your data stays local | Yes | Cloud | Cloud |
| Free for small teams | ≤5 employees | No | No |

Become a Power User

Level 1: Basic Usage

pip install empathy-framework
  • Works out of the box with sensible defaults
  • Auto-detects your API keys

Level 2: Cost Optimization

# Enable hybrid mode for 80-96% cost savings
python -m empathy_os.models.cli provider --set hybrid
| Tier | Model | Use Case | Cost |
|---|---|---|---|
| Cheap | GPT-4o-mini / Haiku | Summarization, simple tasks | $0.15-0.25/M |
| Capable | GPT-4o / Sonnet | Bug fixing, code review | $2.50-3.00/M |
| Premium | o1 / Opus | Architecture, complex decisions | $15/M |
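Using the per-million-token rates in the tier table, a back-of-envelope estimate shows where the savings come from. The 80/15/5 traffic split below is an illustrative assumption, not the router's actual distribution:

```python
# $ per 1M input tokens, approximated from the tier table above.
RATES = {"cheap": 0.25, "capable": 3.00, "premium": 15.00}

def cost_usd(tier: str, tokens: int) -> float:
    """Cost of routing `tokens` input tokens to the given tier."""
    return RATES[tier] * tokens / 1_000_000

# Baseline: 1M tokens sent entirely to the premium tier.
all_premium = cost_usd("premium", 1_000_000)

# Hypothetical hybrid split: 80% cheap, 15% capable, 5% premium.
hybrid = (cost_usd("cheap", 800_000)
          + cost_usd("capable", 150_000)
          + cost_usd("premium", 50_000))

savings = 1 - hybrid / all_premium  # roughly 0.91 with these assumptions
```

With these rates a heavy skew toward the cheap tier lands the savings squarely in the advertised 80-96% band.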

Level 3: Multi-Model Workflows

from empathy_llm_toolkit import EmpathyLLM

llm = EmpathyLLM(provider="anthropic", enable_model_routing=True)

# Automatically routes to appropriate tier
await llm.interact(user_id="dev", user_input="Summarize this", task_type="summarize")     # → Haiku
await llm.interact(user_id="dev", user_input="Fix this bug", task_type="fix_bug")         # → Sonnet
await llm.interact(user_id="dev", user_input="Design system", task_type="coordinate")     # → Opus

Level 4: VSCode Integration

Install the Empathy VSCode extension for:

  • Real-time Dashboard — Health score, costs, patterns
  • One-Click Workflows — Research, code review, debugging
  • Visual Cost Tracking — See savings in real-time
    • See also: docs/dashboard-costs-by-tier.md for interpreting the By tier (7 days) cost breakdown.
  • Memory Control Panel (Beta) — Manage Redis and pattern storage
    • View Redis status and memory usage
    • Browse and export stored patterns
    • Run system health checks
    • Configure auto-start in empathy.config.yml
memory:
  enabled: true
  auto_start_redis: true

Level 5: Custom Agents

from empathy_os.agents import AgentFactory

# Create domain-specific agents with inherited memory
security_agent = AgentFactory.create(
    domain="security",
    memory_enabled=True,
    anticipation_level=4
)

CLI Reference

Provider Configuration

python -m empathy_os.models.cli provider                    # Show current config
python -m empathy_os.models.cli provider --set anthropic    # Single provider
python -m empathy_os.models.cli provider --set hybrid       # Best-of-breed
python -m empathy_os.models.cli provider --interactive      # Setup wizard
python -m empathy_os.models.cli provider -f json            # JSON output

Model Registry

python -m empathy_os.models.cli registry                    # Show all models
python -m empathy_os.models.cli registry --provider openai  # Filter by provider
python -m empathy_os.models.cli costs --input-tokens 50000  # Estimate costs

Telemetry & Analytics

python -m empathy_os.models.cli telemetry                   # Summary
python -m empathy_os.models.cli telemetry --costs           # Cost savings report
python -m empathy_os.models.cli telemetry --providers       # Provider usage
python -m empathy_os.models.cli telemetry --fallbacks       # Fallback stats

Memory Control

empathy-memory serve    # Start Redis + API server
empathy-memory status   # Check system status
empathy-memory stats    # View statistics
empathy-memory patterns # List stored patterns

Code Inspection

empathy-inspect .                     # Run full inspection
empathy-inspect . --format sarif      # GitHub Actions format
empathy-inspect . --fix               # Auto-fix safe issues
empathy-inspect . --staged            # Only staged changes
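For readers unfamiliar with the SARIF format that `--format sarif` targets, here is a minimal SARIF 2.1.0 log built with the standard library. Field names follow the public SARIF spec; this is not `empathy-inspect`'s actual emitter, and `make_sarif` is a hypothetical helper:

```python
import json

def make_sarif(tool_name: str, findings: list[dict]) -> str:
    """Build a minimal SARIF 2.1.0 log (field names per the SARIF spec)."""
    results = [
        {
            "ruleId": f["rule"],
            "level": f.get("level", "warning"),
            "message": {"text": f["message"]},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": f["file"]},
                    "region": {"startLine": f["line"]},
                }
            }],
        }
        for f in findings
    ]
    log = {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{"tool": {"driver": {"name": tool_name}}, "results": results}],
    }
    return json.dumps(log, indent=2)
```

GitHub code scanning ingests files in this shape via the `upload-sarif` action.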

XML-Enhanced Prompts

Enable structured XML prompts for consistent, parseable LLM responses:

# .empathy/workflows.yaml
xml_prompt_defaults:
  enabled: false  # Set true to enable globally

workflow_xml_configs:
  security-audit:
    enabled: true
    enforce_response_xml: true
    template_name: "security-audit"
  code-review:
    enabled: true
    template_name: "code-review"

Built-in templates: security-audit, code-review, research, bug-analysis, perf-audit, refactor-plan, test-gen, doc-gen, release-prep, dependency-check

from empathy_os.prompts import get_template, XmlResponseParser, PromptContext

# Use a built-in template
template = get_template("security-audit")
context = PromptContext.for_security_audit(code="def foo(): pass")
prompt = template.render(context)

# Parse XML responses
parser = XmlResponseParser(fallback_on_error=True)
result = parser.parse(llm_response)
print(result.summary, result.findings, result.checklist)
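The fallback behavior of `XmlResponseParser(fallback_on_error=True)` can be approximated with the standard library: try to parse structured XML, and degrade to raw text when the model's reply is malformed. This `parse_response` helper and its tag names are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

def parse_response(text: str) -> dict:
    """Parse an XML-structured LLM reply; fall back to raw text on bad XML."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        # Malformed XML: treat the whole reply as an unstructured summary.
        return {"summary": text.strip(), "findings": []}
    return {
        "summary": (root.findtext("summary") or "").strip(),
        "findings": [f.text.strip() for f in root.iter("finding") if f.text],
    }
```

Falling back rather than raising keeps a workflow usable even when a model occasionally ignores the XML template.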

Enterprise Doc-Gen

Generate comprehensive documentation for large projects with enterprise-safe defaults:

from empathy_os.workflows import DocumentGenerationWorkflow

# Enterprise-safe configuration
workflow = DocumentGenerationWorkflow(
    export_path="docs/generated",     # Auto-save to disk
    max_cost=5.0,                     # Cost guardrail ($5 default)
    chunked_generation=True,          # Handle large projects
    graceful_degradation=True,        # Partial results on errors
)

result = await workflow.execute(
    source_code=your_code,
    doc_type="api_reference",
    audience="developers"
)

# Access the formatted report
print(result.final_output["formatted_report"])

# Large outputs are chunked for display
if "output_chunks" in result.final_output:
    for chunk in result.final_output["output_chunks"]:
        print(chunk)

# Full docs saved to disk
print(f"Saved to: {result.final_output.get('export_path')}")
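The chunked-display idea above amounts to splitting a long report at natural boundaries. A minimal sketch, assuming paragraph-boundary splitting (the workflow's real chunking strategy may differ, and `chunk_output` is a hypothetical name):

```python
def chunk_output(text: str, max_chars: int = 2000) -> list[str]:
    """Split a long report into display-sized chunks at paragraph boundaries."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        # Start a new chunk when adding this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```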

Smart Router

Route natural language requests to the right wizard automatically:

from empathy_os.routing import SmartRouter

router = SmartRouter()

# Natural language routing
decision = router.route_sync("Fix the security vulnerability in auth.py")
print(f"Primary: {decision.primary_wizard}")      # → security-audit
print(f"Also consider: {decision.secondary_wizards}")  # → [code-review]
print(f"Confidence: {decision.confidence}")

# File-based suggestions
suggestions = router.suggest_for_file("requirements.txt")  # → [dependency-check]

# Error-based suggestions
suggestions = router.suggest_for_error("NullReferenceException")  # → [bug-predict, test-gen]

Memory Graph

Cross-wizard knowledge sharing — wizards learn from each other:

from empathy_os.memory import MemoryGraph, EdgeType

graph = MemoryGraph()

# Add findings from any wizard
bug_id = graph.add_finding(
    wizard="bug-predict",
    finding={
        "type": "bug",
        "name": "Null reference in auth.py:42",
        "severity": "high"
    }
)

# Connect related findings
fix_id = graph.add_finding(wizard="code-review", finding={"type": "fix", "name": "Add null check"})
graph.add_edge(bug_id, fix_id, EdgeType.FIXED_BY)

# Find similar past issues
similar = graph.find_similar({"name": "Null reference error"})

# Traverse relationships
related_fixes = graph.find_related(bug_id, edge_types=[EdgeType.FIXED_BY])
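Under the hood this is a directed graph of findings with typed edges. A minimal stand-in (`TinyGraph` is hypothetical; the real MemoryGraph adds persistence and similarity search):

```python
import itertools

class TinyGraph:
    """Minimal sketch of a findings graph: nodes are findings, edges are typed links."""

    def __init__(self):
        self.nodes: dict[int, dict] = {}
        self.edges: list[tuple[int, int, str]] = []
        self._ids = itertools.count(1)

    def add_finding(self, wizard: str, finding: dict) -> int:
        node_id = next(self._ids)
        self.nodes[node_id] = {"wizard": wizard, **finding}
        return node_id

    def add_edge(self, src: int, dst: int, edge_type: str) -> None:
        self.edges.append((src, dst, edge_type))

    def find_related(self, node_id: int, edge_type: str) -> list[dict]:
        """Follow outgoing edges of the given type and return target findings."""
        return [self.nodes[dst] for src, dst, et in self.edges
                if src == node_id and et == edge_type]
```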

Auto-Chaining

Wizards automatically trigger related wizards based on findings:

# .empathy/wizard_chains.yaml
chains:
  security-audit:
    auto_chain: true
    triggers:
      - condition: "high_severity_count > 0"
        next: dependency-check
        approval_required: false
      - condition: "vulnerability_type == 'injection'"
        next: code-review
        approval_required: true

  bug-predict:
    triggers:
      - condition: "risk_score > 0.7"
        next: test-gen

templates:
  full-security-review:
    steps: [security-audit, dependency-check, code-review]
  pre-release:
    steps: [test-gen, security-audit, release-prep]

from empathy_os.routing import ChainExecutor

executor = ChainExecutor()

# Check what chains would trigger
result = {"high_severity_count": 5}
triggers = executor.get_triggered_chains("security-audit", result)
# → [ChainTrigger(next="dependency-check"), ...]

# Execute a template
template = executor.get_template("full-security-review")
# → ["security-audit", "dependency-check", "code-review"]
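The trigger conditions in the YAML above are simple `field op literal` expressions. One safe way to evaluate them without `eval` (a sketch of the idea; the framework's actual condition parser is not shown here):

```python
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq, "!=": operator.ne}

def condition_met(condition: str, result: dict) -> bool:
    """Evaluate a condition like 'risk_score > 0.7' against a workflow result."""
    field, op, literal = condition.split(None, 2)
    value = result.get(field)
    if value is None:
        return False
    literal = literal.strip("'\"")
    try:
        literal = float(literal)  # numeric comparison when possible
    except ValueError:
        pass  # otherwise compare as a string
    return OPS[op](value, literal)
```

Keeping conditions declarative (rather than arbitrary Python) makes chain files safe to share and review.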

Prompt Engineering Wizard

Analyze, generate, and optimize prompts:

from coach_wizards import PromptEngineeringWizard

wizard = PromptEngineeringWizard()

# Analyze existing prompts
analysis = wizard.analyze_prompt("Fix this bug")
print(f"Score: {analysis.overall_score}")  # → 0.13 (poor)
print(f"Issues: {analysis.issues}")        # → ["Missing role", "No output format"]

# Generate optimized prompts
prompt = wizard.generate_prompt(
    task="Review code for security vulnerabilities",
    role="a senior security engineer",
    constraints=["Focus on OWASP top 10"],
    output_format="JSON with severity and recommendation"
)

# Optimize tokens (reduce costs)
result = wizard.optimize_tokens(verbose_prompt)
print(f"Reduced: {result.token_reduction:.0%}")  # → 20% reduction

# Add chain-of-thought scaffolding
enhanced = wizard.add_chain_of_thought(prompt, "debug")
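Prompt scoring of this kind boils down to checking for the ingredients of a good prompt. A toy heuristic (not the wizard's real rubric; the check names and weights are invented):

```python
def score_prompt(prompt: str) -> tuple[float, list[str]]:
    """Score a prompt 0-1 by counting which basic ingredients it contains."""
    text = prompt.lower()
    checks = {
        "Missing role": "you are" in text,
        "No output format": any(w in text for w in ("json", "markdown", "format")),
        "No constraints": any(w in text for w in ("must", "only", "focus")),
        "Too short": len(prompt.split()) >= 10,
    }
    issues = [name for name, passed in checks.items() if not passed]
    return 1 - len(issues) / len(checks), issues
```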

Install Options

# Recommended (all features)
pip install empathy-framework[full]

# Minimal
pip install empathy-framework

# Specific providers
pip install empathy-framework[anthropic]  # Claude
pip install empathy-framework[openai]     # GPT-4, Ollama (OpenAI-compatible)
pip install empathy-framework[google]     # Gemini
pip install empathy-framework[llm]        # All providers

# Development
git clone https://github.com/Smart-AI-Memory/empathy-framework.git
cd empathy-framework && pip install -e .[dev]

What's Included

| Component | Description |
|---|---|
| Empathy OS | Core engine for human↔AI and AI↔AI collaboration |
| Smart Router | Natural language wizard dispatch with LLM classification |
| Memory Graph | Cross-wizard knowledge sharing (bugs, fixes, patterns) |
| Auto-Chaining | Wizards trigger related wizards based on findings |
| Multi-Model Router | Smart routing across providers and tiers |
| Memory System | Redis short-term + encrypted long-term patterns |
| 17 Coach Wizards | Security, performance, testing, docs, prompt engineering |
| 10 Cost-Optimized Workflows | Multi-tier pipelines with formatted reports & XML prompts |
| Healthcare Suite | SBAR, SOAP notes, clinical protocols (HIPAA) |
| Code Inspection | Unified pipeline with SARIF/GitHub Actions support |
| VSCode Extension | Visual dashboard for memory and workflows |
| Telemetry & Analytics | Cost tracking, usage stats, optimization insights |

The 5 Levels of AI Empathy

| Level | Name | Behavior | Example |
|---|---|---|---|
| 1 | Reactive | Responds when asked | "Here's the data you requested" |
| 2 | Guided | Asks clarifying questions | "What format do you need?" |
| 3 | Proactive | Notices patterns | "I pre-fetched what you usually need" |
| 4 | Anticipatory | Predicts future needs | "This query will timeout at 10k users" |
| 5 | Transformative | Builds preventive structures | "Here's a framework for all future cases" |

Empathy operates at Level 4 — predicting problems before they manifest.


Environment Setup

# Required: At least one provider
export ANTHROPIC_API_KEY="sk-ant-..."   # For Claude models  # pragma: allowlist secret
export OPENAI_API_KEY="sk-..."          # For GPT models  # pragma: allowlist secret
export GOOGLE_API_KEY="..."             # For Gemini models  # pragma: allowlist secret

# Optional: Redis for memory
export REDIS_URL="redis://localhost:6379"

# Or use a .env file (auto-detected)
echo 'ANTHROPIC_API_KEY=sk-ant-...' >> .env
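If you're curious what ".env auto-detection" involves, a minimal reader is a few lines of standard library code. This sketch (with a hypothetical `load_dotenv` name) handles the common `KEY=VALUE` and comment cases, nothing more:

```python
from pathlib import Path

def load_dotenv(path: str = ".env") -> dict[str, str]:
    """Minimal .env reader: KEY=VALUE lines, '#' comments, optional quotes."""
    env: dict[str, str] = {}
    p = Path(path)
    if not p.exists():
        return env
    for line in p.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env
```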

Get Involved


Project Evolution

For those interested in the development history and architectural decisions:

  • Development Logs — Execution plans, phase completions, and progress tracking
  • Architecture Docs — System design, memory architecture, and integration plans
  • Marketing Materials — Pitch decks, outreach templates, and commercial readiness
  • Guides — Publishing tutorials, MkDocs setup, and distribution policies

License

Fair Source License 0.9 — Free for students, educators, and teams ≤5 employees. Commercial license ($99/dev/year) for larger organizations.


Built by Smart AI Memory · Documentation · Examples · Issues


Download files

Download the file for your platform.

Source Distribution

empathy_framework-3.5.3.tar.gz (1.3 MB)

Uploaded Source

Built Distribution


empathy_framework-3.5.3-py3-none-any.whl (339.4 kB)

Uploaded Python 3

File details

Details for the file empathy_framework-3.5.3.tar.gz.

File metadata

  • Download URL: empathy_framework-3.5.3.tar.gz
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for empathy_framework-3.5.3.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 055c6275e73a3aea8b61c6d231bb5edc436886b20d52e93aa7f3e0fe608fb527 |
| MD5 | 096c9e4af4b18ca77e17c9a8b6d75a91 |
| BLAKE2b-256 | ae51d8edb39644369f578bf0d2762d8b9698c2def8dc2b35a397a9bc4eedff1f |


Provenance

The following attestation bundles were made for empathy_framework-3.5.3.tar.gz:

Publisher: publish-pypi.yml on Smart-AI-Memory/empathy-framework

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file empathy_framework-3.5.3-py3-none-any.whl.

File metadata

File hashes

Hashes for empathy_framework-3.5.3-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4df7f49689505fcbe13fffe0cc497ef9dc978f94985d5a03a9f63bde94d71c0e |
| MD5 | 8006df285bd5c882f3c68c4094ca578e |
| BLAKE2b-256 | d42d12922c790d838bf6fe3e69258d297b1f3edb6362d68336015f23773d7562 |


Provenance

The following attestation bundles were made for empathy_framework-3.5.3-py3-none-any.whl:

Publisher: publish-pypi.yml on Smart-AI-Memory/empathy-framework

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
