
Arc Memory - Local bi-temporal knowledge graph for code repositories


Arc Memory SDK



At Arc, we're building the foundational memory layer for modern software engineering. Our mission is simple but powerful: ensure engineering teams never lose the critical "why" behind their code. We bridge the gap between human decisions and machine understanding, becoming the temporal source-of-truth for every engineering team and their AI agents.

Overview

Arc Memory is our foundational SDK that embeds a local, bi-temporal knowledge graph (a temporal knowledge graph, or TKG) in every developer's workspace. It surfaces verifiable decision trails during code review and exposes the same provenance to any LLM-powered agent through VS Code's Agent Mode.

Arc Memory Ecosystem

The Arc Memory SDK is part of a broader ecosystem that connects your team's collective history to AI assistants:

Arc Memory Ecosystem Diagram

How It Works

  • Data Sources (G-Suite, Slack, Notion, GitHub, Jira) seamlessly feed into the Arc Memory SDK, which captures organizational decisions in a local-first Temporal Knowledge Graph (TKG).

  • The Arc MCP Server provides persistent, cross-repository decision context, enabling memory-aware multi-step reasoning for integrated tooling.

  • Through the VS Code Extension, developers interact directly with decision trails embedded into code reviews, and leverage verifiable citations and team-wide search across organizational decisions.

Together, these components form the foundation for organizational intelligence, empowering sophisticated reasoning and enabling AI agents to build robust world models from your team's collective history.

Arc Memory Features

  • Extensible Plugin Architecture - Easily add new data sources beyond Git, GitHub, and ADRs
  • Comprehensive Knowledge Graph - Build a local graph from Git commits, GitHub PRs, issues, and ADRs
  • Trace History Algorithm - Fast BFS algorithm to trace history from file+line to related entities
  • High Performance - Trace history queries complete in under 200ms (typically ~100μs)
  • Incremental Builds - Efficiently update the graph with only new data
  • Rich CLI - Command-line interface for building graphs and tracing history
  • MCP Integration - Connect to AI assistants via Anthropic's Model Context Protocol
  • Privacy-First - All data stays on your machine; no code or IP leaves your repo
  • CI Integration - Team-wide graph updates through CI workflows

Installation

Arc Memory requires Python 3.10 or higher; Python 3.10, 3.11, and 3.12 are supported.

pip install arc-memory

Or using UV:

uv pip install arc-memory

Quick Start

# Authenticate with GitHub
arc auth gh

# Build the full knowledge graph
arc build

# Or update incrementally
arc build --incremental

# Check the graph status
arc doctor

# Trace history for a specific file and line
arc trace file path/to/file.py 42

# Trace with more hops in the graph
arc trace file path/to/file.py 42 --max-hops 3
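The --incremental flag avoids reprocessing history the graph already contains. Conceptually, each source is filtered down to records newer than the previous build's high-water mark; the sketch below illustrates that idea with made-up commit records, and is not the SDK's internal logic:

```python
from datetime import datetime, timezone

def incremental_records(records, last_processed=None):
    """Keep only records newer than the last build's high-water mark.

    records: iterable of (timestamp, payload) pairs.
    last_processed: datetime of the previous build, or None for a full build.
    """
    if last_processed is None:
        return list(records)  # full build: take everything
    return [(ts, data) for ts, data in records if ts > last_processed]

# Illustrative commit stream
commits = [
    (datetime(2024, 1, 1, tzinfo=timezone.utc), "initial commit"),
    (datetime(2024, 3, 1, tzinfo=timezone.utc), "add trace command"),
]
# Incremental pass: only the March commit is newer than the marker
print(incremental_records(commits, datetime(2024, 2, 1, tzinfo=timezone.utc)))
```

A full build corresponds to `last_processed=None`; subsequent builds pass the stored marker and touch only new data.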

Documentation

CLI Commands

  • Authentication - GitHub authentication commands
  • Build - Building the knowledge graph
  • Trace - Tracing history for files and lines
  • Doctor - Checking graph status and diagnostics

  • Usage Examples
  • API Documentation

For additional documentation, visit arc.computer.

Architecture

Arc Memory consists of three components:

  1. arc-memory (this SDK) - Python SDK and CLI for graph building and querying

    • Plugin Architecture - Extensible system for adding new data sources
    • Trace History Algorithm - BFS-based algorithm for traversing the knowledge graph
    • CLI Commands - Interface for building graphs and tracing history
  2. arc-memory-mcp - MCP server exposing the knowledge graph to AI assistants

    • Available at github.com/Arc-Computer/arc-mcp-server
    • Implements Anthropic's Model Context Protocol (MCP) for standardized AI tool access
    • Provides tools like arc_trace_history, arc_get_entity_details, and more
  3. vscode-arc-hover - VS Code extension for displaying decision trails (in development)

    • Will integrate with the MCP server to display trace history
    • Will provide hover cards with decision trails
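The trace history traversal can be pictured as a bounded breadth-first search over the graph. The sketch below is a simplified model using a plain adjacency map and made-up node IDs; the actual SDK operates on its SQLite-backed graph and its own node and edge types:

```python
from collections import deque

def trace_history(graph, start, max_hops=2):
    """Breadth-first traversal from a starting entity, bounded by hop count.

    graph: dict mapping node id -> list of neighbouring node ids.
    Returns the set of node ids reachable within max_hops edges.
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand beyond the hop budget
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen

# Illustrative chain: file line -> commit -> PR -> issue
graph = {
    "file:app.py:42": ["commit:abc"],
    "commit:abc": ["pr:17"],
    "pr:17": ["issue:99"],
}
# With max_hops=2 the commit and PR are reached, but not the issue (3 hops away)
print(trace_history(graph, "file:app.py:42", max_hops=2))
```

Bounding the hop count is what keeps queries fast and the returned decision trail focused on nearby provenance.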

See our Architecture Decision Records for more details on design decisions.

Development

Setup

# Clone the repository
git clone https://github.com/arc-computer/arc-memory.git
cd arc-memory

# Create a virtual environment with UV
uv venv

# Activate the environment
source .venv/bin/activate  # On Unix/macOS
.venv\Scripts\activate     # On Windows

# Install dependencies
uv pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

Testing

# Run unit tests
python -m unittest discover

# Run integration tests
python -m unittest discover tests/integration

# Run performance benchmarks
python tests/benchmark/benchmark.py --repo-size small

Creating a Plugin

Arc Memory uses a plugin architecture to support additional data sources. To create a new plugin:

  1. Create a class that implements the IngestorPlugin protocol
  2. Register your plugin using entry points
  3. Package and distribute your plugin

For detailed instructions and examples, see the plugin documentation.

Basic example:

from typing import Any, Dict, List, Tuple

from arc_memory.plugins import IngestorPlugin
from arc_memory.schema.models import Node, Edge, NodeType, EdgeRel

class MyCustomPlugin(IngestorPlugin):
    def get_name(self) -> str:
        return "my-custom-source"

    def get_node_types(self) -> List[str]:
        return ["custom_node"]

    def get_edge_types(self) -> List[str]:
        return [EdgeRel.MENTIONS]

    def ingest(self, last_processed=None) -> Tuple[List[Node], List[Edge], Dict[str, Any]]:
        nodes: List[Node] = []
        edges: List[Edge] = []
        metadata: Dict[str, Any] = {}
        # Fetch data from your source, build Node and Edge objects,
        # and record a high-water mark in metadata for incremental builds.
        return nodes, edges, metadata

Register in pyproject.toml:

[project.entry-points."arc_memory.plugins"]
my-custom-source = "my_package.my_module:MyCustomPlugin"

Performance

Arc Memory is designed for high performance, with trace history queries completing in under 200ms (typically ~100μs). See our performance benchmarks for more details.
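To sanity-check query latency on your own machine, a generic best-of-N timing harness is enough; this is an illustrative sketch using a stand-in workload, not the project's benchmark suite:

```python
import time

def time_query(fn, *args, repeats=100):
    """Return the best-of-N wall-clock time for a single call, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in workload; substitute your own query function
elapsed = time_query(sorted, list(range(1000)))
print(f"best of 100 runs: {elapsed * 1e6:.1f} µs")
```

Taking the best of many runs filters out scheduler noise, which matters when the quantity being measured is in the microsecond range.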

License

MIT
