
AI collaboration framework with persistent memory, anticipatory intelligence, code inspection, and multi-agent orchestration

Project description

Empathy Framework

The AI collaboration framework that predicts problems before they happen.


pip install empathy-framework
empathy-memory serve

Why Empathy?

Memory That Persists

  • Dual-layer architecture — Redis for millisecond short-term ops, pattern storage for long-term knowledge
  • AI that learns across sessions — Patterns discovered today inform decisions tomorrow
  • Cross-team knowledge sharing — What one agent learns, all agents can use
  • Git-native storage — Optimized for GitHub, works with any VCS (GitLab, Bitbucket, Azure DevOps, self-hosted)
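
To make the dual-layer idea concrete, here is a minimal sketch of such a memory, assuming nothing about the framework's actual internals: a TTL'd in-process dict stands in for the Redis short-term layer, and a JSON file stands in for long-term pattern storage. All names here are illustrative, not the package's API.

```python
import json
import time
from pathlib import Path

class DualLayerMemory:
    """Illustrative stand-in: a dict with TTLs for short-term ops
    (Redis in the real framework), a JSON file for long-term patterns."""

    def __init__(self, pattern_file="patterns.json", ttl_seconds=60):
        self._short = {}                    # key -> (value, expires_at)
        self._ttl = ttl_seconds
        self._pattern_file = Path(pattern_file)

    def remember(self, key, value):
        # Short-term layer: fast, expiring working memory
        self._short[key] = (value, time.monotonic() + self._ttl)

    def recall(self, key):
        item = self._short.get(key)
        if item and item[1] > time.monotonic():
            return item[0]
        return None                         # expired or never stored

    def learn_pattern(self, name, pattern):
        # Long-term layer: persisted so later sessions (and other
        # agents reading the same store) can reuse what was learned
        patterns = self.patterns()
        patterns[name] = pattern
        self._pattern_file.write_text(json.dumps(patterns))

    def patterns(self):
        if self._pattern_file.exists():
            return json.loads(self._pattern_file.read_text())
        return {}
```

A pattern learned through one instance survives the process: a fresh `DualLayerMemory` pointed at the same file sees it immediately, which is the cross-session property the bullets above describe.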

Enterprise-Ready

  • Your data stays local — Nothing leaves your infrastructure
  • Compliance built-in — HIPAA, GDPR, SOC2 patterns included
  • Automatic documentation — AI-first docs that serve humans and machines

Anticipatory Intelligence

  • Predicts 30-90 days ahead — Security vulnerabilities, performance degradation, compliance gaps
  • Prevents, not reacts — Eliminate entire categories of problems before they become urgent
  • 3-4x productivity gains — Not 20% faster; whole workflows disappear

Build Better Agents

  • Agent toolkit — Build custom agents that inherit memory, trust, and anticipation
  • 30+ production wizards — Security, performance, testing, docs—use or extend
  • 5-level progression built-in — Your agents evolve from reactive to anticipatory automatically

Human↔AI & AI↔AI Orchestration

  • Empathy OS — Manages trust, feedback loops, and collaboration state
  • Multi-agent coordination — Specialized agents working in concert
  • Conflict resolution — Principled negotiation when agents disagree
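
One toy way to picture conflict resolution between agents (the framework's actual negotiation strategy is not shown here; the types and policy below are made up for illustration): agents submit proposals with confidence scores, agreement is merged, and disagreement defers to the most confident agent.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    answer: str
    confidence: float  # 0.0 - 1.0

def resolve(proposals):
    """Toy policy: if all agents agree, take the shared answer;
    otherwise defer to the most confident proposal."""
    answers = {p.answer for p in proposals}
    if len(answers) == 1:
        return proposals[0].answer
    return max(proposals, key=lambda p: p.confidence).answer

winner = resolve([
    Proposal("security-agent", "block deploy", 0.9),
    Proposal("perf-agent", "allow deploy", 0.6),
])
print(winner)  # block deploy
```

A real resolver would weigh domain authority and past accuracy rather than a single confidence float, but the shape of the problem is the same.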

Performance & Cost

  • 80-96% LLM cost reduction — Smart routing: cheap models detect, best models decide
  • Sub-millisecond coordination — Redis-backed real-time signaling between agents
  • Works with any LLM — Claude, GPT-4, Ollama, or your own

Quick Example

import asyncio

from empathy_os import EmpathyOS

async def main():
    empathy = EmpathyOS()  # named to avoid shadowing the stdlib `os` module

    # Analyze code for current AND future issues
    result = await empathy.collaborate(
        "Review this deployment pipeline for problems",
        context={"code": pipeline_code, "team_size": 10},
    )

    # Get predictions, not just analysis
    print(result.current_issues)      # What's wrong now
    print(result.predicted_issues)    # What will break in 30-90 days
    print(result.prevention_steps)    # How to prevent it

asyncio.run(main())

Cost Optimization with ModelRouter

Save 80-96% on API costs by routing tasks to appropriate model tiers:

from empathy_llm_toolkit import EmpathyLLM

# Enable smart model routing
llm = EmpathyLLM(
    provider="anthropic",
    enable_model_routing=True
)

# Summarization → Haiku ($0.25/M tokens)
await llm.interact(user_id="dev", user_input="Summarize this", task_type="summarize")

# Bug fixing → Sonnet ($3/M tokens)
await llm.interact(user_id="dev", user_input="Fix this bug", task_type="fix_bug")

# Architecture → Opus ($15/M tokens)
await llm.interact(user_id="dev", user_input="Design the system", task_type="architectural_decision")
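
As a sanity check on the headline savings number, the routing arithmetic works out like this, using the per-million-token prices quoted above (Haiku $0.25, Sonnet $3, Opus $15). The task mix is a made-up assumption for illustration, not measured data:

```python
# Price per million tokens, from the examples above
PRICE = {"haiku": 0.25, "sonnet": 3.0, "opus": 15.0}

# Hypothetical workload: share of tokens handled by each tier
mix = {"haiku": 0.70, "sonnet": 0.25, "opus": 0.05}

routed_cost = sum(PRICE[m] * share for m, share in mix.items())
all_opus_cost = PRICE["opus"]
savings = 1 - routed_cost / all_opus_cost

print(f"routed: ${routed_cost}/M vs all-Opus: ${all_opus_cost}/M")
print(f"savings: {savings:.0%}")
```

With that mix the blended cost is $1.675/M against $15/M for running everything on Opus, i.e. roughly 89% saved, which sits inside the 80-96% range claimed above; the exact figure depends entirely on your task mix.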

The 5 Levels of AI Empathy

  • Level 1 — Reactive: responds when asked ("Here's the data you requested")
  • Level 2 — Guided: asks clarifying questions ("What format do you need?")
  • Level 3 — Proactive: notices patterns ("I pre-fetched what you usually need")
  • Level 4 — Anticipatory: predicts future needs ("This query will timeout at 10k users")
  • Level 5 — Transformative: builds preventing structures ("Here's a framework for all future cases")

Empathy operates at Level 4 - predicting problems before they manifest.
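
The progression can be modeled as a simple ordered type. This class is illustrative only (it mirrors the table above but is not part of the package API):

```python
from enum import IntEnum

class EmpathyLevel(IntEnum):
    REACTIVE = 1        # responds when asked
    GUIDED = 2          # asks clarifying questions
    PROACTIVE = 3       # notices patterns
    ANTICIPATORY = 4    # predicts future needs
    TRANSFORMATIVE = 5  # builds structures that prevent whole problem classes

def is_predictive(level: EmpathyLevel) -> bool:
    # Levels 4 and up act before a problem manifests
    return level >= EmpathyLevel.ANTICIPATORY

print(is_predictive(EmpathyLevel.ANTICIPATORY))  # True
print(is_predictive(EmpathyLevel.PROACTIVE))     # False
```

Because `IntEnum` values are ordered, "has this agent reached at least Level 4?" is a plain comparison, which is all the threshold check needs.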

Comparison

How Empathy differs from tools like SonarQube and GitHub Copilot:

  • Predicts future issues — ✅ 30-90 days ahead
  • Persistent memory — ✅ Redis + patterns
  • Cross-domain learning — ✅ Healthcare → Software
  • Multi-agent orchestration — ✅ Built-in
  • Source available — ✅ Fair Source 0.9
  • Data stays local — ✅ Your infrastructure (SonarQube: ❌ cloud; GitHub Copilot: ❌ cloud)
  • Free for small teams — ✅ ≤5 employees

Get Involved

Star this repo if you find it useful

💬 Join Discussions - Questions, ideas, show what you built

📖 Read the Book - Deep dive into the philosophy and implementation

📚 Full Documentation - API reference, examples, guides

Install Options

# Basic
pip install empathy-framework

# With all features (recommended)
pip install empathy-framework[full]

# Development
git clone https://github.com/Smart-AI-Memory/empathy.git
cd empathy && pip install -e .[dev]

What's Included

  • Empathy OS — Core engine for managing human↔AI and AI↔AI collaboration
  • Memory System — Redis short-term + encrypted long-term pattern storage
  • 30+ Production Wizards — Security, performance, testing, docs, accessibility, compliance
  • Healthcare Suite — SBAR, SOAP notes, clinical protocols (HIPAA compliant)
  • LLM Toolkit — Works with Claude, GPT-4, Ollama; smart model routing
  • Memory Control Panel — CLI (empathy-memory) and REST API for managing everything
  • IDE Plugins — VS Code extension for visual memory management

Memory Control Panel

Manage AI memory with a simple CLI:

# Start everything (Redis + API server)
empathy-memory serve

# Check system status
empathy-memory status

# View statistics
empathy-memory stats

# Run health check
empathy-memory health

# List stored patterns
empathy-memory patterns

The API server runs at http://localhost:8765 with endpoints for status, stats, patterns, and Redis control.

VS Code Extension: A visual panel for monitoring memory is available in vscode-memory-panel/.

Code Inspection Pipeline (New in v2.2.9)

Unified code quality with cross-tool intelligence:

# Run inspection
empathy-inspect .

# Multiple output formats
empathy-inspect . --format json       # For CI/CD
empathy-inspect . --format sarif      # For GitHub Actions
empathy-inspect . --format html       # Visual dashboard

# Filter targets
empathy-inspect . --staged            # Only staged changes
empathy-inspect . --changed           # Only modified files

# Auto-fix safe issues
empathy-inspect . --fix

# Suppress false positives
empathy-inspect . --baseline-init     # Create baseline file
empathy-inspect . --no-baseline       # Show all findings

Pipeline phases:

  1. Static Analysis (parallel) — Lint, security, debt, test quality
  2. Dynamic Analysis (conditional) — Code review, debugging
  3. Cross-Analysis — Correlate findings across tools
  4. Learning — Extract patterns for future use
  5. Reporting — Unified health score
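
A toy sketch of how phases 1 and 3 fit together: run the static analyzers concurrently, then correlate their findings by location. The two analyzers here are made-up stand-ins, not the tools `empathy-inspect` actually runs:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real analyzers (lint, security, debt, ...)
def lint(path):
    return [{"file": path, "line": 10, "tool": "lint", "issue": "unused import"}]

def security(path):
    return [{"file": path, "line": 10, "tool": "security", "issue": "eval() call"}]

def inspect(path):
    # Phase 1: static analyzers run in parallel
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda analyzer: analyzer(path), [lint, security])
    findings = [f for batch in results for f in batch]

    # Phase 3: cross-analysis — locations where multiple tools agree
    by_location = defaultdict(list)
    for f in findings:
        by_location[(f["file"], f["line"])].append(f["tool"])
    hotspots = {loc: tools for loc, tools in by_location.items() if len(tools) > 1}
    return findings, hotspots

findings, hotspots = inspect("app.py")
print(len(findings), len(hotspots))  # 2 1
```

The cross-analysis step is what distinguishes a unified pipeline from running each tool separately: a line flagged by both a lint and a security check carries more signal than either finding alone.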

GitHub Actions SARIF integration:

- run: empathy-inspect . --format sarif --output results.sarif
- uses: github/codeql-action/upload-sarif@v2
  with:
    sarif_file: results.sarif

Full documentation →

License

Fair Source License 0.9 - Free for students, educators, and teams ≤5 employees. Commercial license ($99/dev/year) for larger organizations. Details →


Built by Smart AI Memory · Documentation · Examples · Issues

Project details



Download files

Download the file for your platform.

Source Distribution

empathy_framework-2.3.0.tar.gz (1.1 MB)

Uploaded Source

Built Distribution


empathy_framework-2.3.0-py3-none-any.whl (294.4 kB)

Uploaded Python 3

File details

Details for the file empathy_framework-2.3.0.tar.gz.

File metadata

  • Download URL: empathy_framework-2.3.0.tar.gz
  • Upload date:
  • Size: 1.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for empathy_framework-2.3.0.tar.gz
Algorithm Hash digest
SHA256 07a0808acdbdf1b9885c1882059048b70d96a5517066c755727903d15a725a2f
MD5 b03874da30544cea59bcb6675111a783
BLAKE2b-256 12be8be1530f601e67a988eac823ca94eb07c18ec3d14478178fdd1350d316d5


File details

Details for the file empathy_framework-2.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for empathy_framework-2.3.0-py3-none-any.whl
Algorithm Hash digest
SHA256 954eab359b02d1d691c65bd5dda02ed6f0da0bf268ecea96638b5f50610285ab
MD5 54c1d5d981d6145ce87b0954d57fcf3a
BLAKE2b-256 5595d7edbf108c13b923b52787292ba2b68e78a59be4b6955545b479067753e9

