# hyperstack-langgraph

Knowledge graph memory for LangGraph agents. Developer-controlled and portable across tools, with multi-agent coordination, zero LLM cost, and time-travel debugging.
## Install

```bash
pip install hyperstack-langgraph langchain-core langgraph
```
## Quick Start (3 lines)

```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI

agent = create_memory_agent(ChatOpenAI(model="gpt-4o"))
```
That's it. Your agent now has persistent knowledge graph memory. It will:
- Search memory at the start of every conversation
- Store important facts when decisions are made (with user confirmation)
- Traverse the graph to answer "what depends on X?" or "who decided Y?"
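The agent returned by `create_memory_agent` is invoked like any other LangGraph runnable: a `messages` payload plus a `thread_id` config, the same shapes the usage examples in this README pass to `invoke`. A minimal sketch that only builds those inputs as plain dicts, so it runs without any API keys; the helper name is ours, not part of the library:

```python
# Build the (state, config) inputs a LangGraph agent's invoke() expects.
# The helper name is illustrative; only the dict shapes matter.

def make_invoke_args(user_text: str, thread_id: str):
    """Return the state dict and config dict for agent.invoke(state, config)."""
    state = {"messages": [{"role": "user", "content": user_text}]}
    config = {"configurable": {"thread_id": thread_id}}
    return state, config

state, config = make_invoke_args("What did we decide about auth?", "session-1")
# agent.invoke(state, config)  # would run the real agent (needs API keys)
```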
## Environment Variables

```bash
export HYPERSTACK_API_KEY=hs_your_key   # Get a free key at cascadeai.dev/hyperstack
export HYPERSTACK_WORKSPACE=default
export OPENAI_API_KEY=sk-...            # For your LLM
```
## Usage: Add Memory Tools to Existing Agent

```python
from hyperstack_langgraph import create_hyperstack_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Create memory tools
memory_tools = create_hyperstack_tools()

# Add them to your existing tools
my_tools = [my_calculator, my_web_search] + memory_tools

# Create an agent with memory
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), my_tools)

# Use it
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What do we know about our auth setup?"}]},
    config={"configurable": {"thread_id": "session-1"}},
)
```
## Usage: Full Agent with Session Memory

```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver

agent = create_memory_agent(
    ChatOpenAI(model="gpt-4o"),
    checkpointer=MemorySaver(),  # Session memory (optional)
)

config = {"configurable": {"thread_id": "project-alpha"}}

# First message — agent searches HyperStack for context
agent.invoke({"messages": [{"role": "user", "content": "Let's work on the auth system"}]}, config)

# Agent remembers within the session (MemorySaver) AND across sessions (HyperStack)
agent.invoke({"messages": [{"role": "user", "content": "We decided to use Clerk"}]}, config)
```
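The two memory layers can be confusing at first: `MemorySaver` is scoped to a `thread_id`, while HyperStack is shared by every session. A toy model in plain Python (not the real classes) that illustrates the difference; the names and logic here are ours:

```python
# Toy stand-in for the two memory layers: session memory is keyed by
# thread_id (like a checkpointer), shared memory is global (like HyperStack).

session_memory = {}   # thread_id -> list of messages within one session
shared_memory = []    # facts visible to every session

def chat(thread_id, message, remember_fact=None):
    """Record a message in the session; optionally persist a fact globally."""
    session_memory.setdefault(thread_id, []).append(message)
    if remember_fact:
        shared_memory.append(remember_fact)

chat("project-alpha", "Let's work on the auth system")
chat("project-alpha", "We decided to use Clerk", remember_fact="Use Clerk for auth")

# A brand-new session has no conversation history...
assert "session-2" not in session_memory
# ...but the stored decision is still visible to it.
assert "Use Clerk for auth" in shared_memory
```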
## Usage: Direct API Client

```python
from hyperstack_langgraph import HyperStackClient

client = HyperStackClient()

# Store
client.store(
    "use-clerk", "Use Clerk for Auth", "Chose Clerk over Auth0",
    card_type="decision", keywords=["clerk", "auth"],
    links=[{"target": "alice", "relation": "decided"}],
)

# Search
client.search("authentication")

# Graph traversal
client.graph("use-clerk", depth=2)

# Time-travel (Pro+)
client.graph("use-clerk", depth=2, at="2026-02-01T00:00:00Z")
```
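To make `depth=2` concrete: a depth-limited traversal follows links up to two hops from the starting card. A self-contained sketch of that idea, assuming breadth-first semantics; the `"use-clerk" -> "alice"` link mirrors the `store()` example above, while the other links and the traversal code are purely illustrative, not the service's actual implementation:

```python
# Toy depth-limited BFS over a card-link graph, illustrating what
# graph("use-clerk", depth=2) conceptually returns.
from collections import deque

links = {
    "use-clerk": ["alice"],          # use-clerk --decided--> alice (from store() above)
    "alice": ["team-platform"],      # hypothetical extra hop
    "team-platform": ["acme-corp"],  # 3 hops out: beyond depth=2 from use-clerk
}

def graph(start, depth):
    """Return all cards reachable from `start` in at most `depth` hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue  # don't expand past the depth limit
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

print(sorted(graph("use-clerk", depth=2)))  # → ['alice', 'team-platform', 'use-clerk']
```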
## Tools Provided

| Tool | Description |
|---|---|
| `hyperstack_search` | Search memory for relevant context |
| `hyperstack_store` | Save a fact, decision, preference, or person |
| `hyperstack_graph` | Traverse the knowledge graph (impact analysis, decision trails) |
| `hyperstack_list` | List all stored memories |
| `hyperstack_delete` | Remove outdated memories |
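To make the store/search/list/delete contracts above concrete, here is a toy in-memory stand-in in plain Python. The field names follow the `client.store` example earlier in this README; the matching logic and function bodies are our assumptions for illustration, not the real service:

```python
# Toy in-memory card store mirroring the tool contracts above.
cards = {}

def store(card_id, title, body, card_type="fact", keywords=()):
    """Save a card under an explicit, developer-chosen ID."""
    cards[card_id] = {"title": title, "body": body,
                      "type": card_type, "keywords": list(keywords)}

def search(query):
    """Return IDs of cards whose title or keywords match the query."""
    q = query.lower()
    return [cid for cid, c in cards.items()
            if q in c["title"].lower() or q in c["keywords"]]

def list_cards():
    """List all stored card IDs."""
    return sorted(cards)

def delete(card_id):
    """Remove an outdated card."""
    cards.pop(card_id, None)

store("use-clerk", "Use Clerk for Auth", "Chose Clerk over Auth0",
      card_type="decision", keywords=["clerk", "auth"])
```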
## Why HyperStack?

- **You control the graph.** No LLM auto-extraction. No phantom relationships. Your agent explicitly defines cards and links.
- **Zero LLM cost per memory op.** Mem0/Zep charge ~$0.002 per operation. HyperStack: $0.
- **Time-travel debugging.** See the graph as it existed at any point in time. "Git blame for agent memory."
- **30-second setup.** No Neo4j, no Docker, no OpenSearch. One API key, done.
## Pricing

| Plan | Cards | Graph | Price |
|---|---|---|---|
| Free | 10 | ❌ | $0 |
| Pro | 100 | ✅ + time-travel | $29/mo |
| Team | 500 | ✅ | $59/mo |
| Business | 2,000 | ✅ | $149/mo |

Get a free API key at cascadeai.dev/hyperstack.
## License

MIT
## File details

### hyperstack_langgraph-1.1.0.tar.gz

- Upload date:
- Size: 9.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4420f0acdd3e71ecb702cfe98e41a76475182614fd0a88ba651ef74c8d7bec3b` |
| MD5 | `69265623e5fbdd82c2bda84c750b3d73` |
| BLAKE2b-256 | `dc38646d4db7ecdbc2f2bba2a61eb37b33e3e117b2a97739a77431f47ab029fd` |

### hyperstack_langgraph-1.1.0-py3-none-any.whl

- Upload date:
- Size: 9.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10

| Algorithm | Hash digest |
|---|---|
| SHA256 | `253fad0c9b50be4be00c40d70cd781edd3bd91971b6131bc7fa21ad8829b7e16` |
| MD5 | `67910b02847a40ddd322378ba1e964ec` |
| BLAKE2b-256 | `f18a8af215f2f1b6789b900d8dd8ba4eb9c5487df25d29cd38e227b921544591` |