
ContextOS

A Graph-Theoretic Memory Kernel for Agentic AI Systems

"Beyond RAG: Stateful memory for AI agents that actually remembers."

Python 3.10+ · License: MIT


What is ContextOS?

ContextOS is a framework for building AI agents with persistent, structured memory. Unlike standard RAG (Retrieval-Augmented Generation) which treats documents as flat vectors, ContextOS models memory as a graph where:

  • Nodes are memories (semantic facts, episodic events, procedural rules)
  • Edges encode relationships (temporal, causal, associative)
  • Retrieval uses hybrid scoring: PageRank centrality + Vector similarity

This enables multi-hop reasoning that pure vector search cannot achieve.
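As a toy illustration of the multi-hop idea (a plain-dict sketch, not ContextOS's actual retrieval code; the node names are invented for the example), a fact that vector search would surface directly can pull in related facts through its outgoing edges:

```python
# Toy adjacency list standing in for the memory graph (illustrative only).
graph = {
    "user_prefers_dark_mode": ["user_has_light_sensitivity"],
    "user_has_light_sensitivity": ["avoid_bright_ui_themes"],
    "avoid_bright_ui_themes": [],
}

def hops(start: str, max_hops: int = 2) -> set:
    """Collect facts reachable from a seed node within max_hops edges."""
    frontier, seen = [start], set()
    for _ in range(max_hops):
        frontier = [nbr for n in frontier for nbr in graph[n] if nbr not in seen]
        seen.update(frontier)
    return seen

# Vector search alone returns only the seed node; following edges
# surfaces two further facts the query text never mentioned.
print(hops("user_prefers_dark_mode"))
```

Pure vector search scores each memory against the query in isolation, so `avoid_bright_ui_themes` would never match a query about dark mode; the graph traversal recovers it in two hops.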

Key Features

  • 🧠 CoALA Memory Architecture - Semantic, Episodic, and Procedural memory types
  • 🔗 Graph-Native Storage - NetworkX topology + ChromaDB vectors
  • Hybrid Retrieval - PageRank centrality × semantic similarity
  • 💾 Persistent by Default - Memory survives restarts
  • 🔌 LangChain Compatible - Works with any LLM provider

Installation

pip install agentic-memory

Or from source:

git clone https://github.com/ARYAN2302/ContextOS.git
cd ContextOS
pip install -e .

Quick Start

Simple API (Recommended)

from agentic_memory import ContextClient, MemoryType

# Initialize (loads existing memory if available)
client = ContextClient()

# Add memories
client.add_memory("User prefers dark mode", MemoryType.SEMANTIC)
client.add_memory("User asked about Python yesterday", MemoryType.EPISODIC)

# Compile context for a query
context = client.compile("What are the user's preferences?")
print(context)

Full Chat Loop

from agentic_memory import ContextClient

client = ContextClient()

def my_llm(system_prompt: str, user_query: str) -> str:
    # Your LLM call here (OpenAI, Groq, Anthropic, etc.).
    # `llm` is assumed to be an already-initialized LangChain chat model;
    # `.content` extracts the text from the returned message.
    return llm.invoke(system_prompt + user_query).content

# Run a full RAG loop with automatic memory logging
response = client.chat("What should I work on today?", llm_callable=my_llm)

Low-Level API

from agentic_memory import ContextGraph, ContextNode, ContextEdge, MemoryType, ContextCompiler

# Direct graph access
kernel = ContextGraph()
node1 = ContextNode(content="Important fact", type=MemoryType.SEMANTIC)
node2 = ContextNode(content="A related fact", type=MemoryType.SEMANTIC)
kernel.add_node(node1)
kernel.add_node(node2)

# Add relationships
edge = ContextEdge(source=node1.id, target=node2.id, relation="CAUSES")
kernel.add_edge(edge)

# Compile context
compiler = ContextCompiler(kernel)
context = compiler.compile("query", token_budget=500, alpha=50, beta=50)

Architecture

context_os/
├── client.py           # ContextClient - main entry point
├── core/
│   ├── schema.py       # Pydantic models (ContextNode, ContextEdge, MemoryType)
│   └── graph.py        # Hybrid storage (NetworkX + ChromaDB)
├── memory/
│   ├── ingestor.py     # LLM-powered memory classification
│   └── compiler.py     # PageRank + Vector hybrid retrieval
└── utils/
    └── text.py         # Text processing utilities

The Hybrid Scoring Formula

relevance(node, query) = (α × semantic_similarity) + (β × pagerank_centrality × time_decay)
  • α (alpha): Weight for semantic similarity (vector search)
  • β (beta): Weight for graph centrality (structural importance)
  • time_decay: Recency factor for episodic memories
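A minimal sketch of how such a score could be computed. The exponential decay form, the half-life, and the sample numbers are assumptions made for illustration, not ContextOS's internals:

```python
import math

def relevance(similarity: float, pagerank: float, age_hours: float,
              alpha: float = 50.0, beta: float = 50.0,
              half_life_hours: float = 24.0) -> float:
    # Assumed recency model: score halves every `half_life_hours`.
    time_decay = 0.5 ** (age_hours / half_life_hours)
    # The hybrid formula from above.
    return alpha * similarity + beta * pagerank * time_decay

# A recent, well-connected memory can outrank a slightly closer
# vector match that is old and structurally isolated:
recent = relevance(similarity=0.62, pagerank=0.08, age_hours=2)
stale = relevance(similarity=0.55, pagerank=0.01, age_hours=72)
print(recent > stale)
```

Tuning α upward favors literal query similarity; tuning β upward favors memories that sit at the center of the graph, weighted down as they age.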

Benchmarks

Needle-in-a-Haystack (NIAH)

ContextOS retrieves a planted "needle" fact from among 100+ distractor memories with 100% recall.

cd experiments && python niah_benchmark.py

Ablation Study

Configuration        Multi-Hop Accuracy
Vector Only (RAG)    50%
Graph Only           50%
ContextOS (Hybrid)   100%

Configuration

client = ContextClient(
    storage_path="my_memory.json",    # Graph persistence
    chroma_path="my_vectors/",         # Vector store
    auto_persist=True                  # Save on every change
)

# Retrieval tuning
context = client.compile(
    query="...",
    token_budget=1000,    # Max tokens in context
    alpha=50.0,           # Vector weight
    beta=50.0             # Graph weight
)

Requirements

  • Python 3.10+
  • NetworkX
  • ChromaDB
  • Sentence-Transformers
  • Pydantic
  • LangChain (optional, for LLM integration)

License

MIT License - See LICENSE for details.


Acknowledgments

Inspired by the CoALA architecture for cognitive agents.

Download files

Download the file for your platform.

Source Distribution

agentic_memory-0.1.2.tar.gz (12.8 kB)

Built Distribution


agentic_memory-0.1.2-py3-none-any.whl (11.9 kB)

File details

Details for the file agentic_memory-0.1.2.tar.gz.

File metadata

  • Download URL: agentic_memory-0.1.2.tar.gz
  • Size: 12.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for agentic_memory-0.1.2.tar.gz
Algorithm Hash digest
SHA256 ced9386f4e228e083567649bd472b4f46a0c0ed5c2902a1dca5df71f12032674
MD5 c30ffb1055f8c5602591ef6f48ea776e
BLAKE2b-256 a2d33c0483bf8995254faac1686bbd982ba195efe2032033b10b0378812a58b6


File details

Details for the file agentic_memory-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: agentic_memory-0.1.2-py3-none-any.whl
  • Size: 11.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for agentic_memory-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 e5a0d4842917e685ac97fd28e31bb19f003923fbd29cb285481d10b043903f25
MD5 eb52df86760566786de6664a84a37ecb
BLAKE2b-256 ef2cee9474e6a56de5283a2e5c5e60273ba29b2e65d56869e08be8a86a049b18

