

Project description

coralbricks-langchain

Use CoralBricks as the memory backend for your LangChain applications: retriever, chat history, agent memory tools, and RAG middleware — all backed by the CoralBricks Memory API.

  • Drop-in CoralBricksRetriever for any LCEL chain or RAG pipeline.
  • @dynamic_prompt RAG middleware — automatically injects retrieved context before every LLM call.
  • Three agent tools (store, search, delete) for persistent memory across turns.
  • CoralBricksChatMessageHistory — persistent chat history backed by the CoralBricks chat API.
  • Share memory across agents via project_id / session_id scoping.

Installation

pip install coralbricks-langchain

Requires Python 3.10+ and LangChain >= 1.0.


API key and base URL

  • API key: get a CoralBricks API key from the CoralBricks web app.
  • Base URL: https://cw.coralbricks.ai

Set both in your shell:

export CORALBRICKS_API_KEY="your_coralbricks_api_key"
export CORAL_MEMORY_BASE_URL="https://cw.coralbricks.ai"
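In application code, these can be read from the environment before constructing the client. A minimal sketch (the helper name is illustrative, not part of the package; the fallback mirrors the documented base URL):

```python
import os

def load_coralbricks_config(env=os.environ) -> dict:
    """Read CoralBricks settings from the environment.

    CORAL_MEMORY_BASE_URL falls back to the documented default.
    """
    api_key = env.get("CORALBRICKS_API_KEY")
    if not api_key:
        raise RuntimeError("CORALBRICKS_API_KEY is not set")
    return {
        "api_key": api_key,
        "base_url": env.get("CORAL_MEMORY_BASE_URL", "https://cw.coralbricks.ai"),
    }
```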

Quick start

from coralbricks_langchain import CoralBricksClient, CoralBricksMemory

client = CoralBricksClient(
    api_key="your_coralbricks_api_key",
    base_url="https://cw.coralbricks.ai",
)

memory = CoralBricksMemory(
    client=client,
    project_id="langchain:my-app",
    session_id="user-123",
)

# Save a memory (embed + store)
mem_id = memory.save_memory("Pro plan costs $199/month with unlimited ops.")

# Search by meaning
hits = memory.search_memory("What does the Pro plan cost?", top_k=3)
for h in hits:
    print(h.get("score"), h.get("text"))
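When ingesting many notes, it can help to de-duplicate before calling save_memory so near-identical items don't crowd search results. A small illustrative helper (not part of the package):

```python
def unique_memories(texts):
    """De-duplicate memory strings (whitespace- and case-insensitive),
    preserving first-seen order. Illustrative helper, not part of the package."""
    seen = set()
    out = []
    for t in texts:
        key = " ".join(t.split()).lower()
        if key and key not in seen:
            seen.add(key)
            out.append(t.strip())
    return out

# Then store each surviving item, e.g.:
# ids = [memory.save_memory(t) for t in unique_memories(raw_notes)]
```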

RAG with @dynamic_prompt middleware

In the LangChain >= 1.0 middleware pattern, context is retrieved from CoralBricks and injected into the system prompt automatically before every model call:

from langchain.agents import create_agent
from langchain.agents.middleware import ModelRequest, dynamic_prompt
from langchain_openai import ChatOpenAI

from coralbricks_langchain import CoralBricksClient, CoralBricksMemory, CoralBricksRetriever

client = CoralBricksClient(api_key="your_coralbricks_api_key")
memory = CoralBricksMemory(client=client, project_id="my-kb", session_id="docs-v1")
retriever = CoralBricksRetriever(memory=memory, top_k=5)

@dynamic_prompt
def rag_context(request: ModelRequest) -> str:
    query = request.messages[-1].content if request.messages else ""
    docs = retriever.invoke(query)
    context = "\n\n".join(f"[{i+1}] {d.page_content}" for i, d in enumerate(docs))
    return (
        "Answer the user's question using ONLY the context below.\n\n"
        f"Context:\n{context}"
    )

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_agent(model, middleware=[rag_context])

result = agent.invoke({"messages": [{"role": "user", "content": "What is the Pro plan price?"}]})
print(result["messages"][-1].content)
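Context windows are finite, so in practice you may want to cap how much retrieved text the middleware injects. A sketch of a character-budget variant of the numbering scheme used above (the helper and its budget are illustrative assumptions, not part of the package):

```python
def build_context(chunks, max_chars=4000):
    """Join retrieved text chunks into a numbered context block,
    stopping before the character budget would be exceeded.
    Illustrative helper, not part of the package."""
    parts, used = [], 0
    for i, text in enumerate(chunks, start=1):
        entry = f"[{i}] {text}"
        if used + len(entry) > max_chars:
            break
        parts.append(entry)
        used += len(entry) + 2  # account for the joining blank line
    return "\n\n".join(parts)
```

Inside the middleware, you would pass `[d.page_content for d in docs]` instead of joining the documents directly.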

Agent memory tools

Give your agent tools to store, search, and delete memories across turns:

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

from coralbricks_langchain import (
    CoralBricksClient,
    CoralBricksMemory,
    get_tools,
    set_global_memory,
)

client = CoralBricksClient(api_key="your_coralbricks_api_key")
memory = CoralBricksMemory(client=client, project_id="my-app", session_id="user-123")

set_global_memory(memory)
tools = get_tools()  # [store_coralbricks_memory, search_coralbricks_memory, delete_coralbricks_memory]

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_agent(
    model,
    tools=tools,
    system_prompt=(
        "You are a helpful assistant with persistent memory (CoralBricks). "
        "Before answering, always search memory for relevant context. "
        "When you learn something important, store it."
    ),
)

result = agent.invoke({"messages": [{"role": "user", "content": "Remember: Alex is on the Enterprise plan."}]})
print(result["messages"][-1].content)
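For intuition, the set_global_memory / get_tools pairing can be thought of as a module-level registry with tool closures bound to it. A rough, simplified sketch of that pattern (names and behavior here are illustrative; the package's actual implementation may differ):

```python
_GLOBAL_MEMORY = None

def set_memory(mem):
    """Register the memory instance the tool functions will use."""
    global _GLOBAL_MEMORY
    _GLOBAL_MEMORY = mem

def make_tools():
    """Return tool callables bound to the registered memory instance."""
    def store(text: str) -> str:
        """Embed and store a memory item; returns its ID."""
        return _GLOBAL_MEMORY.save_memory(text)

    def search(query: str, top_k: int = 3):
        """Semantic search over stored memories."""
        return _GLOBAL_MEMORY.search_memory(query, top_k=top_k)

    return [store, search]
```

Because the tools close over a single shared instance, every agent in the process reads and writes the same project_id / session_id scope.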

Retriever (LCEL chains)

CoralBricksRetriever implements BaseRetriever and drops into any LCEL chain:

from coralbricks_langchain import CoralBricksClient, CoralBricksMemory, CoralBricksRetriever

client = CoralBricksClient(api_key="your_coralbricks_api_key")

retriever = CoralBricksRetriever(
    memory=CoralBricksMemory(client=client, project_id="my-kb"),
    top_k=5,
)

docs = retriever.invoke("What is the cancellation policy?")
for doc in docs:
    print(doc.page_content)

Chat message history

Persistent chat history backed by the CoralBricks chat API:

from coralbricks_langchain import CoralBricksClient, CoralBricksChatMessageHistory

client = CoralBricksClient(api_key="your_coralbricks_api_key")
history = CoralBricksChatMessageHistory(client=client, conversation_id="conv-001")

history.add_user_message("Hello!")
history.add_ai_message("Hi, how can I help?")

for msg in history.messages:
    print(msg.type, msg.content)
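For unit tests that should not hit the network, a tiny in-memory stand-in with the same surface as used above (add_user_message, add_ai_message, messages) is often enough. An illustrative sketch, not part of the package:

```python
from types import SimpleNamespace

class InMemoryChatHistory:
    """In-memory stand-in mirroring the chat-history surface used above.
    Useful in tests that should not touch the CoralBricks API."""
    def __init__(self):
        self.messages = []

    def add_user_message(self, content: str) -> None:
        self.messages.append(SimpleNamespace(type="human", content=content))

    def add_ai_message(self, content: str) -> None:
        self.messages.append(SimpleNamespace(type="ai", content=content))
```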

Public API

Symbol                          Description
CoralBricksClient               Low-level HTTP client for the CoralBricks Memory API
CoralBricksMemory               High-level helper: save_memory, search_memory, delete_memory
CoralBricksRetriever            LangChain BaseRetriever for LCEL RAG pipelines
CoralBricksChatMessageHistory   LangChain BaseChatMessageHistory backed by CoralBricks chat storage
store_coralbricks_memory        Agent tool: embed and store a memory item
search_coralbricks_memory       Agent tool: semantic search over memories
delete_coralbricks_memory       Agent tool: delete memories by ID
set_global_memory               Configure the global CoralBricksMemory instance used by the tools
get_tools                       Factory returning all three memory tools

Conventions

Field        Example                Purpose
project_id   langchain:my-app       App/use-case namespace (shared across agents)
session_id   user-123, conv-001     Conversation or user scope
metadata     {"source": "policy"}   Optional metadata stored with each item
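Following these conventions, scoped IDs can be built with a one-line helper. Purely illustrative, not part of the package:

```python
def scoped_ids(app: str, user: str) -> dict:
    """Build project_id / session_id per the conventions above.
    Illustrative helper, not part of the package."""
    if not app or not user:
        raise ValueError("app and user must be non-empty")
    return {"project_id": f"langchain:{app}", "session_id": f"user-{user}"}
```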

License

Apache-2.0.

