
AURORA: Adaptive Unified Reasoning and Orchestration Architecture with MCP Integration


   █████╗ ██╗   ██╗██████╗  ██████╗ ██████╗  █████╗
  ██╔══██╗██║   ██║██╔══██╗██╔═══██╗██╔══██╗██╔══██╗
  ███████║██║   ██║██████╔╝██║   ██║██████╔╝███████║
  ██╔══██║██║   ██║██╔══██╗██║   ██║██╔══██╗██╔══██║
  ██║  ██║╚██████╔╝██║  ██║╚██████╔╝██║  ██║██║  ██║
  ╚═╝  ╚═╝ ╚═════╝ ╚═╝  ╚═╝ ╚═════╝ ╚═╝  ╚═╝╚═╝  ╚═╝
Code-aware memory and intelligence for AI coding assistants.

Python 3.10+ · License: MIT · Available on PyPI


What Aurora Does

Aurora indexes your codebase, tracks what's hot (recently/frequently used), warm, or cool (decaying), and gives you LSP-powered code intelligence — dead code detection, impact analysis, usage tracking — as MCP tools your AI assistant can call.

It's not an orchestration framework. It's the memory and code intelligence layer that makes any AI coding assistant smarter about your specific codebase.

  • Code-aware memory — Indexes code as structured chunks (classes, methods, functions). Ranks by recency, frequency, and relevance using ACT-R activation decay. What you use stays hot. What you don't fades.
  • LSP intelligence — Dead code detection, impact analysis, usage tracking, import mapping. Exposed as MCP tools — your AI assistant calls them directly.
  • Private and local — No API keys for core features. No data leaves your machine. SQLite storage.
  • Works with any AI tool — Claude Code, Cursor, or anything that supports MCP.
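The activation ranking follows the ACT-R base-level learning equation, B = ln(Σ t_j^-d): each past access contributes activation that decays as a power law of its age. The function below is an illustrative sketch of that equation, not Aurora's actual implementation:

```python
import math

def base_level_activation(access_ages_days, decay=0.5):
    """ACT-R base-level learning: B = ln(sum(t_j ** -d)).

    access_ages_days: age (in days) of each past access to a chunk.
    decay: d, the power-law decay rate (0.5 is the classic ACT-R default).
    Recent, frequent accesses yield high activation ("hot");
    old, rare accesses decay toward "cool".
    """
    return math.log(sum(t ** -decay for t in access_ages_days))

# A chunk touched repeatedly in the last two days outranks one
# last touched months ago.
hot = base_level_activation([0.5, 1, 2])    # recent and frequent
cool = base_level_activation([90, 120])     # stale
```

With these inputs, `hot` comes out positive and `cool` negative, which is the "what you use stays hot, what you don't fades" behavior described above.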
pip install aurora-actr

Memory

aur mem — Index your codebase and search it with activation-based ranking.

What gets indexed:

  • Code — Tree-sitter parses Python, JS/TS, Go, Java into class/method/function chunks
  • Git signals — Recent changes rank higher
  • LSP enrichment — Usage count, complexity, risk level per symbol
  • Docs — Markdown files indexed for semantic search
# Index your project
aur mem index .

# Search with activation-based ranking
aur mem search "soar reasoning" --show-scores
Found 5 results for 'soar reasoning'

┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━┳━━━━━━━━━┓
┃ Type   ┃ File                   ┃ Name                 ┃ Lines      ┃ Risk   ┃ Git ┃   Score ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━╇━━━━━━━━━┩
│ code   │ core.py                │ generate_goals_json  │ 1091-1175  │ MED    │ 8d  │   0.619 │
│ code   │ soar.py                │ <chunk>              │ 1473-1855  │ -      │ 1d  │   0.589 │
│ code   │ orchestrator.py        │ SOAROrchestrator._c… │ 2141-2257  │ HIGH   │ 1d  │   0.532 │
│ code   │ test_goals_startup_pe… │ TestGoalsCommandSta… │ 190-273    │ LOW    │ 1d  │   0.517 │
│ code   │ goals.py               │ <chunk>              │ 437-544    │ -      │ 7d  │   0.486 │
└────────┴────────────────────────┴──────────────────────┴────────────┴────────┴─────┴─────────┘
Avg scores: Activation 0.916 | Semantic 0.867 | Hybrid 0.801

Score breakdown — each result shows exactly why it ranked where it did:

┌─ core.py | code | generate_goals_json (Lines 1091-1175) ─────────────────────────────────────┐
│ Final Score: 0.619                                                                           │
│  ├─ BM25:       0.895 (exact keyword match on 'goals')                                       │
│  ├─ Semantic:   0.865 (high conceptual relevance)                                            │
│  ├─ Activation: 0.014 (accessed 7x, 7 commits, last used 1 week ago)                         │
│  ├─ Git:        7 commits, modified 8d ago, 1769419365                                       │
│  ├─ Files:      core.py, test_goals_json.py                                                  │
│  └─ Used by:    2 files, 2 refs, complexity 44%, risk MED                                    │
└──────────────────────────────────────────────────────────────────────────────────────────────┘
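The breakdown combines three signals into one final score. A weighted blend is the simplest way to picture this; the weights below are illustrative placeholders, not Aurora's actual tuning:

```python
def hybrid_score(bm25, semantic, activation, weights=(0.4, 0.4, 0.2)):
    """Blend keyword (BM25), embedding (semantic), and usage
    (activation) signals into a single ranking score.

    All inputs are assumed pre-normalized to [0, 1]; the weights
    are hypothetical, not Aurora's real ones.
    """
    w_bm25, w_sem, w_act = weights
    return w_bm25 * bm25 + w_sem * semantic + w_act * activation

# Component scores from the generate_goals_json hit above:
score = hybrid_score(bm25=0.895, semantic=0.865, activation=0.014)
```

The design point is that a strong keyword and semantic match can rank well even when activation is near zero, while frequent recent use nudges otherwise-comparable chunks ahead.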

Code Intelligence (MCP)

Aurora exposes code intelligence as MCP tools. Your AI assistant calls these directly — no terminal needed.

Tool         Action     Speed   What it does
lsp          check      ~1s     How many things depend on this symbol? Check before editing.
lsp          impact     ~2s     Full impact analysis — who calls it, from where, risk level
lsp          deadcode   2-20s   Find unused symbols in a directory
lsp          imports    <1s     What files import this module?
lsp          related    ~50ms   What does this function call? (outgoing dependencies)
mem_search   -          <1s     Semantic search with LSP enrichment

Risk levels: LOW (0-2 refs) | MED (3-10) | HIGH (11+)
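Those thresholds map directly to a classifier over a symbol's incoming-reference count; a minimal sketch:

```python
def risk_level(ref_count: int) -> str:
    """Map a symbol's incoming-reference count to a risk level,
    using the thresholds documented above: LOW 0-2, MED 3-10, HIGH 11+."""
    if ref_count <= 2:
        return "LOW"
    if ref_count <= 10:
        return "MED"
    return "HIGH"

print(risk_level(7))  # MED
```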

When to use:

  • Before editing: lsp check to see what depends on it
  • Before refactoring: lsp impact to assess blast radius
  • After changes: lsp deadcode to clean up orphaned code
  • Finding code: mem_search for semantic results instead of grep

Language support: Python (full), JavaScript/TypeScript, Go, Java (LSP refs + tree-sitter indexing).

See Code Intelligence Guide for details.


Friction Analysis

aur friction — Analyze your coding sessions to find where you get stuck.

aur friction ~/.claude/projects
Per-Project:
my-app         56% BAD (40/72)  median: 16.0
api-service    40% BAD (2/5)    median: 0.5
web-client      0% BAD (0/1)    median: 0.0

Session Extremes:
WORST: aurora/0203-1630-11eb903a  peak=225  turns=127
BEST:  liteagents/0202-2121-8d8608e1  peak=0  turns=4

Verdict: USEFUL
Intervention predictability: 93%

Friction analysis identifies sessions where you got stuck and extracts learned rules to add to CLAUDE.md or your AI tool's instructions — preventing the same mistakes from recurring.


Goal Decomposition

aur goals — Break a goal into subgoals, matched to agents:

$ aur goals "improve the speed of aur mem search" -t claude
╭──────────────────────────────── Plan Decomposition Summary ─────────────────────────────────╮
│ Subgoals: 5                                                                                 │
│                                                                                             │
│   [++] Locate and identify the 'aur mem search' code: @code-developer                      │
│   [+] Analyze startup logic for bottlenecks: @code-developer (ideal: @performance-engineer) │
│   [++] Review architecture for lazy loading, caching: @system-architect                     │
│   [++] Implement optimizations: @code-developer                                             │
│   [++] Measure and validate with benchmarks: @quality-assurance                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
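Subgoal-to-agent matching of this kind can be sketched as keyword overlap against agent capability profiles. The profiles and matching rule below are hypothetical, not Aurora's real agent registry:

```python
# Hypothetical capability profiles; Aurora's actual registry and
# matching logic may differ.
AGENTS = {
    "@code-developer": {"locate", "identify", "implement", "code"},
    "@performance-engineer": {"bottleneck", "profile", "optimize"},
    "@system-architect": {"architecture", "caching", "design"},
    "@quality-assurance": {"benchmark", "validate", "measure", "test"},
}

def match_agent(subgoal: str) -> str:
    """Pick the agent whose capability keywords overlap the subgoal most."""
    words = set(subgoal.lower().split())
    return max(AGENTS, key=lambda a: len(AGENTS[a] & words))

agent = match_agent("measure and validate with benchmarks")
```

A scheme like this also explains the `(ideal: @performance-engineer)` annotation above: when the best-scoring agent isn't available, the decomposer can fall back to the next-best match while recording the ideal one.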

Quick Start

# Install
pip install aurora-actr

# Initialize project (once)
cd your-project/
aur init

# Index codebase
aur mem index .

# Search your code
aur mem search "authentication flow"

# Check usage before editing
# (via MCP in Claude Code / Cursor — aurora auto-registers as MCP server)

MCP setup: Aurora registers as an MCP server during aur init. Your AI tool can call lsp and mem_search directly.


Orchestration

Memory and code intelligence feed into Aurora's orchestration layer. Goals get decomposed into subgoals, routed to agents, and executed with circuit breakers and recovery. The SOAR pipeline runs eight phases — assess, retrieve, decompose, verify, collect, synthesize, record, respond — using your indexed codebase as context for every decision.

# Research with memory-aware orchestration
aur soar "what would break if I removed the retry module?" -t claude

# Execute task lists with guardrails
aur spawn tasks.md --verbose
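The eight-phase pipeline can be pictured as sequential stages threading a shared context. This is a structural sketch only — the phase names come from the description above, but the bodies are placeholders, not Aurora's code:

```python
# Phase names from the SOAR pipeline description; handlers are stubbed.
PHASES = ["assess", "retrieve", "decompose", "verify",
          "collect", "synthesize", "record", "respond"]

def run_pipeline(question: str) -> dict:
    """Thread a shared context dict through each phase in order."""
    context = {"question": question, "trace": []}
    for phase in PHASES:
        # A real implementation would dispatch to a handler per phase,
        # e.g. "retrieve" would query the indexed codebase for context.
        context["trace"].append(phase)
    return context

result = run_pipeline("what would break if I removed the retry module?")
```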

Works with Claude Code, Cursor, Aider, Cline, Windsurf, Gemini CLI, and 20+ other AI coding tools. Configuration is per-project:

aur init --tools=claude,cursor

Commands

Command                  What it does
aur init                 Initialize Aurora in project
aur doctor               Check installation and dependencies
aur mem index .          Index code and docs
aur mem search "query"   Search memory with activation ranking
aur goals "goal"         Decompose goal, match agents
aur soar "question"      Multi-agent research with memory context
aur spawn tasks.md       Execute task list with guardrails
aur friction <dir>       Analyze session friction patterns

License

MIT License — See LICENSE
