ContextOS
A Graph-Theoretic Memory Kernel for Agentic AI Systems
"Beyond RAG: Stateful memory for AI agents that actually remembers."
What is ContextOS?
ContextOS is a framework for building AI agents with persistent, structured memory. Unlike standard RAG (Retrieval-Augmented Generation), which treats documents as flat vectors, ContextOS models memory as a graph where:
- Nodes are memories (semantic facts, episodic events, procedural rules)
- Edges encode relationships (temporal, causal, associative)
- Retrieval uses hybrid scoring: PageRank centrality + vector similarity
This enables multi-hop reasoning that pure vector search cannot achieve.
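To make the multi-hop point concrete, here is a tiny, framework-agnostic sketch (plain NetworkX, not ContextOS internals; the node contents are made up): a query whose vector match only finds node A can still reach node B through an edge, which a flat vector index cannot do.

import networkx as nx

g = nx.DiGraph()
g.add_node("A", content="The billing outage was caused by an expired TLS cert")
g.add_node("B", content="TLS certs are rotated by the infra-cron job")
g.add_edge("A", "B", relation="FIXED_BY")

# Suppose vector search for "billing outage" only matches node A.
vector_hits = ["A"]

# One hop of graph expansion also pulls in B: the fix, not just the symptom.
expanded = set(vector_hits) | {nbr for hit in vector_hits for nbr in g.successors(hit)}
print([g.nodes[n]["content"] for n in sorted(expanded)])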
Key Features
- 🧠 CoALA Memory Architecture - Semantic, Episodic, and Procedural memory types
- 🔗 Graph-Native Storage - NetworkX topology + ChromaDB vectors
- ⚡ Hybrid Retrieval - Weighted blend of PageRank centrality and semantic similarity
- 💾 Persistent by Default - Memory survives restarts
- 🔌 LangChain Compatible - Works with any LLM provider
Installation
pip install context-os
Or from source:
git clone https://github.com/adarshthakur/context-os.git
cd context-os
pip install -e .
Quick Start
Simple API (Recommended)
from context_os import ContextClient, MemoryType
# Initialize (loads existing memory if available)
client = ContextClient()
# Add memories
client.add_memory("User prefers dark mode", MemoryType.SEMANTIC)
client.add_memory("User asked about Python yesterday", MemoryType.EPISODIC)
# Compile context for a query
context = client.compile("What are the user's preferences?")
print(context)
Full Chat Loop
from context_os import ContextClient
client = ContextClient()
def my_llm(system_prompt: str, user_query: str) -> str:
    # Your LLM call here (OpenAI, Groq, Anthropic, etc.);
    # `llm` stands in for whatever chat-model client you use.
    return llm.invoke(system_prompt + user_query)
# Run a full RAG loop with automatic memory logging
response = client.chat("What should I work on today?", llm_callable=my_llm)
Low-Level API
from context_os import ContextGraph, ContextNode, ContextEdge, MemoryType, ContextCompiler
# Direct graph access
kernel = ContextGraph()
node1 = ContextNode(content="Important fact", type=MemoryType.SEMANTIC)
node2 = ContextNode(content="A consequence of that fact", type=MemoryType.SEMANTIC)
kernel.add_node(node1)
kernel.add_node(node2)
# Add relationships
edge = ContextEdge(source=node1.id, target=node2.id, relation="CAUSES")
kernel.add_edge(edge)
# Compile context
compiler = ContextCompiler(kernel)
context = compiler.compile("query", token_budget=500, alpha=50, beta=50)
Architecture
context_os/
├── client.py            # ContextClient - main entry point
├── core/
│   ├── schema.py        # Pydantic models (ContextNode, ContextEdge, MemoryType)
│   └── graph.py         # Hybrid storage (NetworkX + ChromaDB)
├── memory/
│   ├── ingestor.py      # LLM-powered memory classification
│   └── compiler.py      # PageRank + Vector hybrid retrieval
└── utils/
    └── text.py          # Text processing utilities
The Hybrid Scoring Formula
relevance(node, query) = (α × semantic_similarity) + (β × pagerank_centrality × time_decay)
- α (alpha): Weight for semantic similarity (vector search)
- β (beta): Weight for graph centrality (structural importance)
- time_decay: Recency factor for episodic memories
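As a rough illustration only, the sketch below shows one way the formula could be evaluated with NetworkX's PageRank and a similarity score taken from the vector store. The authoritative implementation lives in memory/compiler.py; the exponential half-life used here for time_decay is an assumption, not the library's definition.

import time
import networkx as nx

def hybrid_relevance(graph: nx.DiGraph, node_id: str, similarity: float,
                     created_at: float, alpha: float = 50.0, beta: float = 50.0,
                     half_life_s: float = 86_400.0) -> float:
    """relevance = alpha * similarity + beta * pagerank * time_decay (sketch)."""
    centrality = nx.pagerank(graph)             # structural importance per node
    age_s = time.time() - created_at
    time_decay = 0.5 ** (age_s / half_life_s)   # assumed exponential recency decay
    return alpha * similarity + beta * centrality[node_id] * time_decay

# Toy graph: a memory that several other memories link to scores higher.
g = nx.DiGraph()
g.add_edges_from([("note-1", "hub"), ("note-2", "hub"), ("note-3", "hub")])
print(hybrid_relevance(g, "hub", similarity=0.42, created_at=time.time() - 3_600))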
Benchmarks
Needle-in-a-Haystack (NIAH)
ContextOS retrieves a "needle" fact from 100+ distractor memories with 100% recall.
cd experiments && python niah_benchmark.py
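The repository's niah_benchmark.py is the reference harness; the sketch below only shows the general shape of such a test using the public client API (the memory contents and the storage_path are made up for illustration).

from context_os import ContextClient, MemoryType

client = ContextClient(storage_path="niah_test.json")  # fresh store for the run

# Bury one "needle" fact among many distractors.
for i in range(100):
    client.add_memory(f"Distractor note {i}: an unrelated observation", MemoryType.SEMANTIC)
client.add_memory("The staging API key rotates every Friday", MemoryType.SEMANTIC)

# Recall counts as a hit if the compiled context surfaces the needle.
context = client.compile("When does the staging API key rotate?")
print("every Friday" in context)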
Ablation Study
| Configuration | Multi-Hop Accuracy |
|---|---|
| Vector Only (RAG) | 50% |
| Graph Only | 50% |
| ContextOS (Hybrid) | 100% |
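Assuming the ablation rows map onto the compile() weights described under Configuration (an inference from this README, not a statement about the benchmark code), the three settings look roughly like this:

from context_os import ContextClient

client = ContextClient()
query = "Which service caused the outage, and who owns it?"

rag_only   = client.compile(query, alpha=50.0, beta=0.0)   # Vector Only (RAG)
graph_only = client.compile(query, alpha=0.0,  beta=50.0)  # Graph Only
hybrid     = client.compile(query, alpha=50.0, beta=50.0)  # ContextOS (Hybrid)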
Configuration
client = ContextClient(
    storage_path="my_memory.json",  # Graph persistence
    chroma_path="my_vectors/",      # Vector store
    auto_persist=True               # Save on every change
)

# Retrieval tuning
context = client.compile(
    query="...",
    token_budget=1000,  # Max tokens in context
    alpha=50.0,         # Vector weight
    beta=50.0           # Graph weight
)
Requirements
- Python 3.10+
- NetworkX
- ChromaDB
- Sentence-Transformers
- Pydantic
- LangChain (optional, for LLM integration)
License
MIT License - See LICENSE for details.
Acknowledgments
Inspired by the CoALA (Cognitive Architectures for Language Agents) architecture for cognitive agents.