hyperstack-langgraph
The Agent Provenance Graph for LangGraph agents — the only memory layer where agents can prove what they knew, trace why they knew it, and coordinate without an LLM in the loop. $0 per operation at any scale.
Timestamped facts. Auditable decisions. Deterministic trust. Developer-controlled knowledge graph memory with zero LLM cost and time-travel debugging.
Install
```shell
pip install hyperstack-langgraph langchain-core langgraph
```
Current version: v1.5.4
Quick Start (3 lines)
```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI

agent = create_memory_agent(ChatOpenAI(model="gpt-4o"))
```
That's it. Your agent now has persistent knowledge graph memory. It will:
- Search memory at the start of every conversation
- Store important facts when decisions are made (with user confirmation)
- Traverse the graph to answer "what depends on X?" or "who decided Y?"
Environment Variables
```shell
export HYPERSTACK_API_KEY=hs_your_key   # Get free at cascadeai.dev/hyperstack
export HYPERSTACK_WORKSPACE=default
export OPENAI_API_KEY=sk-...            # For your LLM
```
Usage: Add Memory Tools to Existing Agent
```python
from hyperstack_langgraph import create_hyperstack_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Create memory tools
memory_tools = create_hyperstack_tools()

# Add to your existing tools
my_tools = [my_calculator, my_web_search] + memory_tools

# Create agent with memory
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), my_tools)

# Use it
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What do we know about our auth setup?"}]},
    config={"configurable": {"thread_id": "session-1"}},
)
```
Usage: Full Agent with Session Memory
```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver

agent = create_memory_agent(
    ChatOpenAI(model="gpt-4o"),
    checkpointer=MemorySaver(),  # Session memory (optional)
)

config = {"configurable": {"thread_id": "project-alpha"}}

# First message — agent searches HyperStack for context
agent.invoke({"messages": [{"role": "user", "content": "Let's work on the auth system"}]}, config)

# Agent remembers within session (MemorySaver) AND across sessions (HyperStack)
agent.invoke({"messages": [{"role": "user", "content": "We decided to use Clerk"}]}, config)
```
Usage: Direct API Client
Use `HyperStackClient` (not `HyperStackMemory`).
```python
from hyperstack_langgraph import HyperStackClient

client = HyperStackClient()

# Store
client.store(
    "use-clerk", "Use Clerk for Auth", "Chose Clerk over Auth0",
    card_type="decision", keywords=["clerk", "auth"],
    links=[{"target": "alice", "relation": "decided"}],
)

# Search
client.search("authentication")

# Get
client.get("use-clerk")

# Graph traversal
client.graph("use-clerk", depth=2)

# Blockers / impact
client.blockers("deploy-prod")
client.impact("use-clerk")

# Time-travel (reconstruct at timestamp)
client.graph("use-clerk", depth=2, at="2026-02-01T00:00:00Z")

# Report success/failure (updates utility scores on edges)
client.feedback(card_slugs=["use-clerk", "auth-api"], outcome="success")

# Ingest conversation transcript
client.auto_remember("We decided to use Clerk. Alice owns auth.")

# Memory hub: working / semantic / episodic
cards = client.hs_memory(surface="semantic")

# Batch store
client.bulk_store([{"slug": "p1", "title": "Project A", "body": "..."}, ...])

# Agentic routing (deterministic, no LLM)
can_do = client.can("auth-api", action="deploy")
steps = client.plan("auth-api", goal="add 2FA")
```
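As a usage note on `bulk_store`: the batch is just a list of card dicts with the `slug`/`title`/`body` fields shown above. A minimal sketch that assembles one from plain tuples (the data is made up for illustration, and the client call is commented out because it needs a configured API key):

```python
# Build a batch of card dicts for bulk_store. The field names come from the
# docs above; the project data here is invented for the example.
rows = [
    ("p1", "Project A", "Auth migration"),
    ("p2", "Project B", "Billing rewrite"),
]
cards = [{"slug": s, "title": t, "body": b} for s, t, b in rows]

# client.bulk_store(cards)  # requires a configured HyperStackClient
print([c["slug"] for c in cards])  # → ['p1', 'p2']
```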
REST API
Always use the `X-API-Key` header, never `Authorization: Bearer`:
```shell
curl -X POST "https://hyperstack-cloud.vercel.app/api/cards" \
  -H "X-API-Key: hs_your_key" \
  -H "Content-Type: application/json" \
  -d '{"slug":"use-clerk","title":"Use Clerk","body":"Auth decision"}'
```
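For Python scripts that don't use the SDK, the same call can be built with the standard library. A sketch assuming only the documented endpoint and `X-API-Key` header (the send itself is commented out so it stays side-effect free):

```python
import json
import os
import urllib.request

# Build the same card-creation request as the curl example above.
# Note the X-API-Key header — not Authorization: Bearer.
def build_card_request(api_key: str) -> urllib.request.Request:
    payload = {"slug": "use-clerk", "title": "Use Clerk", "body": "Auth decision"}
    return urllib.request.Request(
        "https://hyperstack-cloud.vercel.app/api/cards",
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_card_request(os.environ.get("HYPERSTACK_API_KEY", "hs_demo"))
# urllib.request.urlopen(req)  # uncomment to actually send the request
print(req.get_method(), req.full_url)
```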
Card Fields
| Field | Description |
|---|---|
| `confidence` | 0.0–1.0 |
| `truthStratum` | `draft` \| `hypothesis` \| `confirmed` |
| `verifiedBy` | e.g. `"human:deeq"` |
| `verifiedAt` | Auto-set server-side |
| `memoryType` | `working` \| `semantic` \| `episodic` |
| `ttl` | Working memory expiry |
| `sourceAgent` | Auto-stamped after `identify()` |
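A card carrying these provenance fields might look like the dict below. The field names come from the table above; the exact REST schema and which fields are writable are assumptions for illustration:

```python
import json

# Hypothetical card payload using the provenance fields documented above.
card = {
    "slug": "use-clerk",
    "title": "Use Clerk for Auth",
    "body": "Chose Clerk over Auth0",
    "confidence": 0.9,            # 0.0–1.0
    "truthStratum": "confirmed",  # draft | hypothesis | confirmed
    "verifiedBy": "human:deeq",
    "memoryType": "semantic",     # working | semantic | episodic
}
print(json.dumps(card, indent=2))
```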
Tools Provided
| Tool | Description |
|---|---|
| `hyperstack_search` | Search memory for relevant context |
| `hyperstack_store` | Save a fact, decision, preference, or person |
| `hyperstack_graph` | Traverse knowledge graph (impact, blockers, decision trails) |
| `hyperstack_list` | List all stored memories |
| `hyperstack_delete` | Remove outdated memories |
Backend Features
- Conflict detection — structural, no LLM, auto-detects contradicting cards
- Staleness cascade — upstream changes mark dependents stale
- Three memory surfaces — working (TTL), semantic (permanent), episodic (30-day decay)
- Decision replay — reconstruct agent state at decision time + hindsight detection
- Time-travel — `graph()` with `at=timestamp`
- Self-hosting — Docker + `HYPERSTACK_BASE_URL` env var
Why HyperStack?
- Provenance tracking — timestamped facts, auditable decisions
- Decision replay — reconstruct what agents knew at decision time
- Deterministic trust — no LLM in the loop for coordination
- $0 per operation — Mem0/Zep charge ~$0.002 per op; HyperStack: $0
- 30-second setup — No Neo4j, no Docker, no OpenSearch. One API key, done.
Pricing
| Plan | Cards | Price |
|---|---|---|
| Free | 50 | $0 — ALL features including graph |
| Pro | 500+ | $29/mo |
| Team | 500, 5 API keys | $59/mo |
| Business | 2,000, 20 members | $149/mo |
Get a free API key at cascadeai.dev/hyperstack
License
MIT
Download files
File details
Details for the file hyperstack_langgraph-1.5.4.tar.gz.
- Size: 17.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `43085fd956590f04ffe36143684ca41924bf0cc9a203dd68824fa8501594080a` |
| MD5 | `b89cfa623aaa0f94ed1a437cf2e18d34` |
| BLAKE2b-256 | `4106b23326f8482dec2ca22781fde6516fac76ea976770e3e23e031374e6dad6` |
File details
Details for the file hyperstack_langgraph-1.5.4-py3-none-any.whl.
- Size: 14.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `70d0b3fb9f1dcfaa59cb43fe5529b431feaed396319261e7cc4a52beae6c2d45` |
| MD5 | `1749eb7595752164cd26b004dffc6d67` |
| BLAKE2b-256 | `040fd1b495bcc48c5fa737e833ad00564bc23e970bac7ae46c10938930402b98` |