
Arc Memory - Local bi-temporal knowledge graph for code repositories

Project description

Arc: The Memory Layer for Engineering Teams



Arc is the memory layer for engineering teams — it records why every change was made, predicts the blast-radius of new code before you merge, and feeds that context to agents so they can handle long-range refactors safely.

What The Arc SDK Does

  1. Record the why. Arc's Temporal Knowledge Graph ingests commits, PRs, issues, and ADRs (architecture decision records) to preserve architectural intent and decision history—entirely on your machine.

  2. Model the system. From that history Arc derives a causal graph of services, data flows, and constraints—a lightweight world-model that stays in sync with the codebase.

  3. Capture causal relationships. Arc tracks decision → implication → code-change chains, enabling multi-hop reasoning to show why decisions were made and their predicted impact.

  4. Enhance PR reviews. Arc's GitHub Actions integration surfaces decision trails and blast-radius hints directly in PR comments, giving reviewers instant context before they hit "Approve."
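The decision → implication → code-change chains in step 3 can be pictured as a tiny linked graph that is walked hop by hop. The sketch below is illustrative only — the node kinds and field names are assumptions, not Arc's actual schema.

```python
# Hypothetical sketch of a decision -> implication -> code-change chain.
# Node kinds and field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str                                  # "decision", "implication", or "code_change"
    title: str
    links: list = field(default_factory=list)  # outgoing edges to downstream nodes

def trace(node, depth=0):
    """Walk a chain depth-first, yielding (depth, node) pairs."""
    yield depth, node
    for child in node.links:
        yield from trace(child, depth + 1)

change = Node("c1", "code_change", "Switch storage layer to SQLite")
impl = Node("i1", "implication", "No external DB dependency", [change])
decision = Node("d1", "decision", "Keep the graph fully local", [impl])

# Multi-hop reasoning amounts to following these edges from the decision down.
chain = [f"{n.kind}: {n.title}" for _, n in trace(decision)]
```

Each hop answers one "why": the code change traces back to an implication, which traces back to the original decision.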

How Arc Memory Differs

Arc Memory takes a fundamentally different approach from traditional code analysis tools:

Temporal Understanding

Unlike static code analysis tools, Arc captures why code evolved the way it did, preserving institutional knowledge even as teams change. The bi-temporal knowledge graph tracks not just what changed, but the reasoning and decisions behind those changes across time.
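"Bi-temporal" means every fact carries two timestamps: when it was true of the code (valid time) and when the graph learned it (transaction time). A minimal sketch of that bookkeeping, with hypothetical field names:

```python
# Illustrative bi-temporal record: valid time vs. transaction time.
# Field names are assumptions, not Arc's storage schema.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Fact:
    statement: str
    valid_from: date    # when this became true of the code
    recorded_at: date   # when the graph ingested it

history = [
    Fact("auth uses sessions", date(2023, 1, 5), date(2023, 1, 5)),
    Fact("auth uses JWT", date(2023, 9, 1), date(2023, 9, 3)),
]

def as_of(facts, valid, recorded):
    """What did the graph believe at `recorded` about the code at `valid`?"""
    candidates = [f for f in facts
                  if f.valid_from <= valid and f.recorded_at <= recorded]
    return max(candidates, key=lambda f: f.valid_from, default=None)

# Queried as of mid-2023, the graph still reports the session-based design:
then = as_of(history, date(2023, 6, 1), date(2023, 6, 1))
```

Keeping both axes is what lets the graph answer "what did we know, and when did we know it" even after the code has moved on.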

Predictive Insights

Arc predicts the blast radius of code modifications before you merge, reducing incidents and regressions. By analyzing the causal relationships between components, Arc can identify which parts of your system might be affected by a change, helping you make more informed decisions.
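At its core, a blast-radius estimate like this is graph reachability: start at the changed component and collect everything reachable along dependency edges. A minimal sketch, with an invented edge map standing in for the real graph:

```python
# Minimal blast-radius sketch as breadth-first reachability.
# The dependency map below is invented for illustration.
from collections import deque

# "X -> [components that depend on X]"
dependents = {
    "api/endpoints.py": ["billing/service.py", "web/frontend"],
    "billing/service.py": ["reports/monthly.py"],
}

def blast_radius(changed, graph):
    """Return every component transitively downstream of `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

affected = blast_radius("api/endpoints.py", dependents)
```

The real analysis weighs edges by causal strength rather than treating all dependencies equally, but the traversal shape is the same.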

Agent-Ready Architecture

Arc's knowledge graph powers intelligent agents that can review code with historical context, navigate incidents with causal understanding, and implement self-healing improvements. The framework-agnostic design treats agent interactions as function calls for maximum composability, allowing integration with any agent framework.
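"Agent interactions as function calls" means any framework that can invoke a plain Python callable can use the graph. A hedged sketch — `FakeArc` is a stand-in, though its `query` method mirrors the SDK call shown later on this page:

```python
# Sketch of exposing the knowledge graph to an agent as a plain function.
# FakeArc is a stand-in for the real SDK client.
class FakeArc:
    def query(self, question: str):
        return {"answer": f"(context for: {question})"}

arc = FakeArc()

def why_tool(question: str) -> str:
    """Framework-agnostic tool: plain string in, plain string out."""
    return arc.query(question)["answer"]

# Any agent framework can register why_tool like any other tool callable:
tools = {"arc_why": why_tool}
answer = tools["arc_why"]("Why does login use JWT?")
```

Because the tool surface is just a function, swapping agent frameworks changes the registration code, not the memory layer.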

Quick Start

# Install Arc Memory (quote the extra so shells like zsh don't expand the brackets)
pip install 'arc-memory[github]'

# Build a knowledge graph from your repository
cd /path/to/your/repo
arc build --github

Check out the example agents and demo applications to see Arc Memory in action.

Core Features

Knowledge Graph

# Build with GitHub and Linear data
arc build --github --linear

Decision Trails

# Show decision trail for a specific file and line
arc why file path/to/file.py 42

# Ask natural language questions
arc why query "What decision led to using SQLite instead of PostgreSQL?"

GitHub Actions Integration

# Export knowledge graph for GitHub Actions
arc export <commit-sha> export.json --compress
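A CI job would then load that exported graph and attach context to the PR. The sketch below shows only the loading step, and the export format (a JSON document, optionally gzip-compressed) is an assumption, not Arc's documented schema:

```python
# Hypothetical sketch of consuming an exported graph in CI.
# The JSON structure here is an assumption, not Arc's documented format.
import gzip
import json
import os
import tempfile

def load_export(path):
    """Load an export, transparently handling gzip compression."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        return json.load(fh)

# Simulate what `arc export ... --compress` might have produced:
graph = {"nodes": [{"id": "d1", "kind": "decision"}]}
path = os.path.join(tempfile.mkdtemp(), "export.json.gz")
with gzip.open(path, "wt") as fh:
    json.dump(graph, fh)

loaded = load_export(path)
```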

SDK for Developers

from arc_memory import Arc

# Initialize Arc with your repository path
arc = Arc(repo_path="./")

# Ask a question about your codebase
result = arc.query("What were the major changes in the last release?")
print(f"Answer: {result.answer}")

# Find out why a specific piece of code exists
decision_trail = arc.get_decision_trail("src/auth/login.py", 42)


Why It Matters

  • Faster onboarding for new team members
  • Reduced knowledge loss when developers leave
  • More efficient code reviews with contextual insights
  • Safer refactoring with impact prediction
  • Better agent coordination through shared memory

SDK for Developers and Agents

Arc Memory provides a clean, Pythonic SDK that enables both developers and AI agents to programmatically access the knowledge graph:

from arc_memory import Arc

# Initialize Arc with your repository path
arc = Arc(repo_path="./")

# Ask a question about your codebase
result = arc.query("What were the major changes in the last release?")
print(f"Answer: {result.answer}")

# Find out why a specific piece of code exists
decision_trail = arc.get_decision_trail("src/core/auth.py", 42)
for entry in decision_trail:
    print(f"Decision: {entry.title}")
    print(f"Rationale: {entry.rationale}")

# Analyze the potential impact of a change
impact = arc.analyze_component_impact("file:src/api/endpoints.py")
for component in impact:
    print(f"Affected: {component.title} (Impact score: {component.impact_score})")

The SDK follows a framework-agnostic design with adapters for popular frameworks like LangChain and OpenAI, making it easy to integrate Arc Memory into your development workflows or AI applications.
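In practice, an adapter mostly re-shapes SDK methods into whatever callable format the target framework expects. A hedged sketch of that idea, using a stub in place of the real client (the adapter shape is illustrative, not Arc's adapter API):

```python
# Illustrative adapter: re-shape SDK methods into a name -> callable map.
# StubArc stands in for the real client; the mapping shape is an assumption.
def make_tools(arc):
    """Expose SDK methods as framework-neutral callables."""
    return {
        "query": lambda q: arc.query(q),
        "decision_trail": lambda path, line: arc.get_decision_trail(path, line),
    }

class StubArc:
    def query(self, q):
        return f"answer({q})"

    def get_decision_trail(self, path, line):
        return [(path, line)]

tools = make_tools(StubArc())
```

A LangChain or OpenAI adapter does the same translation, just targeting each framework's tool interface instead of a plain dict.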

Privacy

Telemetry is disabled by default. Arc Memory respects your privacy and will only collect anonymous usage data if you explicitly opt in.

License

MIT

Download files

Download the file for your platform.

Source Distribution

arc_memory-0.7.0.tar.gz (11.6 MB)

Uploaded Source

Built Distribution


arc_memory-0.7.0-py3-none-any.whl (245.3 kB)

Uploaded Python 3

File details

Details for the file arc_memory-0.7.0.tar.gz.

File metadata

  • Download URL: arc_memory-0.7.0.tar.gz
  • Size: 11.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for arc_memory-0.7.0.tar.gz:

  • SHA256: 3c94bd59fb2b3634f83491a43b1bb2ea20edaa084f1b3c10d1867182baa0e460
  • MD5: 51f7c238f874d6470501bee1e226cc81
  • BLAKE2b-256: 14782af7d8dd15985ef63df73f75d06ff15ce68a5ff8a76e7b931781d81de5c2


File details

Details for the file arc_memory-0.7.0-py3-none-any.whl.

File metadata

  • Download URL: arc_memory-0.7.0-py3-none-any.whl
  • Size: 245.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for arc_memory-0.7.0-py3-none-any.whl:

  • SHA256: b406d2146dbbc396a3c95f4f64bd6143ebc21d5d6a89e27de987f565691ca9bd
  • MD5: 3413246e64419262b49342a693bc3bfb
  • BLAKE2b-256: 78f686f45ab125e649496729048663d07b3ab7283e3a049a4a6fa1968ead8ccd

