
langchain-substrate

SUBSTRATE cognitive memory integration for LangChain and LangGraph.

Use SUBSTRATE as a persistent memory store for LangGraph agents or as a retriever in LangChain RAG pipelines. SUBSTRATE provides causal memory, semantic search, knowledge graphs, emotion state, identity verification, and 61 cognitive capability layers.

Installation

pip install langchain-substrate

Or install from source:

cd integrations/langchain
pip install -e ".[dev]"

Quick Start

Environment Setup

import os
os.environ["SUBSTRATE_API_KEY"] = "sk-sub-..."

As a LangGraph Memory Store

Use SubstrateStore as the backing store for any LangGraph agent. This gives your agent persistent, semantically searchable memory across conversations.

from langchain_substrate import SubstrateStore, SubstrateClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Create the SUBSTRATE-backed store
client = SubstrateClient(api_key=os.environ["SUBSTRATE_API_KEY"])
store = SubstrateStore(client=client)

# Create a LangGraph agent with SUBSTRATE memory
model = ChatOpenAI(model="gpt-4o")
agent = create_react_agent(model, tools=[], store=store)

# The agent can now read and write long-term memory in SUBSTRATE
config = {"configurable": {"thread_id": "conversation-1"}}
response = agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that my favorite color is blue."}]},
    config=config,
)

Store Operations

# Store a value
store.put(("user", "alice"), "preferences", {"theme": "dark", "language": "en"})

# Retrieve by key
item = store.get(("user", "alice"), "preferences")
print(item.value)  # {"theme": "dark", "language": "en"}

# Semantic search across memory
results = store.search(("user", "alice"), query="color preferences", limit=5)
for item in results:
    print(item.key, item.value)

# Delete is a no-op (SUBSTRATE memory is append-only)
store.delete(("user", "alice"), "preferences")

Multi-Tenant Isolation

# Use namespace_prefix for tenant isolation
store = SubstrateStore(
    client=client,
    namespace_prefix="myapp.prod",
)
# All operations are scoped under "myapp.prod.*"
store.put(("user", "bob"), "state", {"step": 3})

As a LangChain Retriever (RAG)

Use SubstrateRetriever in any LangChain RAG chain. It uses SUBSTRATE's hybrid search (semantic + keyword) to find relevant memories.

from langchain_substrate import SubstrateRetriever
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Create the retriever
retriever = SubstrateRetriever(
    api_key=os.environ["SUBSTRATE_API_KEY"],
    top_k=5,
)

# Build a RAG chain
prompt = ChatPromptTemplate.from_template(
    "Answer based on the following context:\n{context}\n\nQuestion: {question}"
)
model = ChatOpenAI(model="gpt-4o")

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

answer = chain.invoke("What are the entity's core values?")

Retriever with Namespace Scoping

retriever = SubstrateRetriever(
    api_key=os.environ["SUBSTRATE_API_KEY"],
    namespace="app.conversations",
    top_k=10,
)

Free Tier Fallback

hybrid_search requires the Pro tier. On the free tier, fall back to memory_search:

retriever = SubstrateRetriever(
    api_key=os.environ["SUBSTRATE_API_KEY"],
    search_tool="memory_search",
)
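If you don't know in advance which tier a given API key is on, one generic pattern is to attempt the Pro-tier search and fall back on failure. The sketch below is not part of the package API; the `pro`/`free` stubs stand in for `hybrid_search` and `memory_search` calls, and the exception type your client raises on a tier rejection may differ:

```python
from typing import Callable, List

def search_with_fallback(
    primary: Callable[[str], List[str]],
    fallback: Callable[[str], List[str]],
    query: str,
) -> List[str]:
    """Try the Pro-tier search first; on any error (e.g. an HTTP 402/403
    tier rejection), fall back to the free-tier search."""
    try:
        return primary(query)
    except Exception:
        return fallback(query)

# Stubs standing in for hybrid_search / memory_search:
def pro(q):
    raise PermissionError("Pro tier required")

def free(q):
    return [f"memory match for {q!r}"]

print(search_with_fallback(pro, free, "color preferences"))
```

Wrapping the fallback once like this keeps the rest of your chain tier-agnostic.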

Async Support

All store operations have async counterparts (aput, aget, asearch) for use in async LangGraph workflows:

import asyncio
from langchain_substrate import SubstrateStore, SubstrateClient

async def main():
    client = SubstrateClient(api_key="sk-sub-...")
    store = SubstrateStore(client=client)

    await store.aput(("user", "alice"), "mood", {"current": "happy"})
    item = await store.aget(("user", "alice"), "mood")
    print(item.value)

    results = await store.asearch(("user",), query="emotional state")
    for r in results:
        print(r.value)

asyncio.run(main())

Architecture

LangGraph Agent / RAG Chain
        |
   SubstrateStore / SubstrateRetriever
        |
   SubstrateClient (httpx)
        |
   SUBSTRATE MCP Server (JSON-RPC over HTTP)
        |
   Causal Memory + Knowledge Graph + 61 Layers
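Under the hood, SubstrateClient speaks JSON-RPC to the MCP server. Assuming the server follows the standard MCP `tools/call` request shape, a single tool invocation on the wire might look like this (an illustrative sketch of the protocol, not code from the package):

```python
import json

# A JSON-RPC 2.0 request invoking the memory_search tool, following the
# standard MCP "tools/call" method shape (assumed here for illustration).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",
        "arguments": {"query": "color preferences", "limit": 5},
    },
}
payload = json.dumps(request)
print(payload)
```

The server's response carries the tool result in the matching JSON-RPC `result` field, keyed by the same `id`.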

Namespace Encoding

LangGraph uses tuple namespaces like ("user", "alice", "prefs"). SUBSTRATE uses flat string keys. The store encodes namespaces as dot-separated prefixes:

| LangGraph Namespace      | SUBSTRATE Prefix |
| ------------------------ | ---------------- |
| ("user", "alice")        | user.alice       |
| ("app", "v2", "state")   | app.v2.state     |
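The encoding above can be sketched as a small helper. This is illustrative only; the store performs this mapping internally, and `encode_namespace` is a hypothetical name, not a public function of the package:

```python
def encode_namespace(namespace: tuple, prefix: str = "") -> str:
    """Join a LangGraph tuple namespace into a flat, dot-separated
    SUBSTRATE key prefix, optionally under a tenant namespace_prefix."""
    parts = (prefix,) + namespace if prefix else namespace
    return ".".join(parts)

print(encode_namespace(("user", "alice")))               # user.alice
print(encode_namespace(("app", "v2", "state")))          # app.v2.state
print(encode_namespace(("user", "bob"), "myapp.prod"))   # myapp.prod.user.bob
```

The last line shows how a `namespace_prefix` (as in the multi-tenant example above) composes with the tuple namespace.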

Tool Mapping

| Store Operation   | SUBSTRATE Tool | Tier |
| ----------------- | -------------- | ---- |
| put()             | respond        | Free |
| get()             | memory_search  | Free |
| search()          | hybrid_search  | Pro  |
| list_namespaces() | N/A (limited)  | --   |
| delete()          | No-op          | --   |

SUBSTRATE MCP Tools Available

| Tool                  | Description                      | Tier |
| --------------------- | -------------------------------- | ---- |
| respond               | Send a message, get a response   | Free |
| memory_search         | Search causal memory episodes    | Free |
| hybrid_search         | Semantic + keyword search        | Pro  |
| get_emotion_state     | Affective state vector           | Free |
| verify_identity       | Cryptographic identity check     | Free |
| knowledge_graph_query | Query knowledge graph            | Pro  |
| get_values            | Core value architecture          | Free |
| theory_of_mind        | User model                       | Free |
| get_trust_state       | Trust scores                     | Pro  |

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run with coverage
pytest --cov=langchain_substrate --cov-report=term-missing

# Lint
ruff check src/ tests/

# Type check
mypy src/

License

MIT -- Garmo Labs
