langchain-onebrain


LangChain integration for OneBrain — a persistent AI memory layer for humans and agents. This package provides chat message history, a retriever, and agent tools that connect LangChain to OneBrain via the official Python SDK.

Architecture: This package wraps onebrain-sdk (not raw HTTP). If OneBrain's API changes, only the SDK needs updating — this package stays stable.


Installation

pip install langchain-onebrain

Requires Python 3.9+.

Dependencies:

  • langchain-core>=0.3,<1
  • onebrain-sdk>=1.0,<2

Configuration

Set your OneBrain API key as an environment variable or pass it directly:

export ONEBRAIN_API_KEY="ob_your_prefix:your_secret_here"

All components accept api_key and base_url parameters to override the defaults.

  • ONEBRAIN_API_KEY: Your OneBrain API key (required)
  • ONEBRAIN_BASE_URL: Base URL for self-hosted instances (default: https://onebrain.rocks/api/eu)
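
The precedence between explicit parameters and environment variables can be sketched as follows. This is a minimal illustration, not the package's internal code; `resolve_base_url` is a hypothetical helper:

```python
import os

# Default endpoint documented above.
DEFAULT_BASE_URL = "https://onebrain.rocks/api/eu"

def resolve_base_url(explicit=None):
    """Resolve the base URL using the usual precedence:
    explicit argument > ONEBRAIN_BASE_URL env var > default."""
    return explicit or os.environ.get("ONEBRAIN_BASE_URL") or DEFAULT_BASE_URL
```

The same pattern applies to `api_key` with the `ONEBRAIN_API_KEY` variable.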

Getting an API Key

  1. Sign in at onebrain.rocks/dashboard.
  2. Navigate to Settings > API Keys.
  3. Click Create API Key and copy the full key (ob_prefix:secret).
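
Since a truncated copy-paste is a common failure mode, a quick client-side sanity check on the documented `ob_<prefix>:<secret>` shape can catch mistakes early. This is a hedged sketch; the exact character set allowed in the prefix is an assumption, not documented here:

```python
import re

def looks_like_onebrain_key(key):
    """Loose check for the documented ob_<prefix>:<secret> key shape.
    The allowed prefix characters are an assumption."""
    return re.fullmatch(r"ob_[A-Za-z0-9_-]+:.+", key or "") is not None
```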

Quick Start

Chat Message History

Store and retrieve conversation messages as OneBrain memories:

from langchain_onebrain import OneBrainChatMessageHistory
from langchain_core.messages import HumanMessage, AIMessage

# Create a history backed by OneBrain
history = OneBrainChatMessageHistory(
    api_key="ob_xxx:secret",
    session_id="session-abc-123",
)

# Add messages
history.add_user_message("What is the weather like?")
history.add_ai_message("I don't have real-time weather data.")

# Retrieve all messages
for msg in history.messages:
    print(f"{msg.type}: {msg.content}")

# Clear (archives messages, does not delete)
history.clear()

With ConversationChain

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
from langchain_onebrain import OneBrainChatMessageHistory

llm = ChatOpenAI(model="gpt-4")
memory_history = OneBrainChatMessageHistory(session_id="user-42")

chain = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory(
        chat_memory=memory_history,
        return_messages=True,
    ),
)

response = chain.predict(input="Tell me about quantum computing.")

Retriever

Search OneBrain memories and return them as LangChain documents:

from langchain_onebrain import OneBrainRetriever

retriever = OneBrainRetriever(
    api_key="ob_xxx:secret",
    top_k=5,
    search_mode="hybrid",  # "keyword", "vector", or "hybrid"
)

# Use directly
docs = retriever.invoke("user preferences for dark mode")
for doc in docs:
    print(doc.page_content)
    print(doc.metadata)  # id, type, confidence, score, source_type, created_at

With RetrievalQA

from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI
from langchain_onebrain import OneBrainRetriever

llm = ChatOpenAI(model="gpt-4")
retriever = OneBrainRetriever(top_k=10, search_mode="hybrid")

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    chain_type="stuff",
)

answer = qa.invoke("What are the user's coding preferences?")
print(answer["result"])

Agent Tools

Four tools for use with LangChain agents:

from langchain_onebrain import (
    OneBrainSearchTool,
    OneBrainWriteTool,
    OneBrainContextTool,
    OneBrainEntityTool,
)

# Create tools
search = OneBrainSearchTool(api_key="ob_xxx:secret")
write = OneBrainWriteTool(api_key="ob_xxx:secret")
context = OneBrainContextTool(api_key="ob_xxx:secret")
entity = OneBrainEntityTool(api_key="ob_xxx:secret")

# Use with an agent
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

llm = ChatOpenAI(model="gpt-4")
tools = [search, write, context, entity]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to the user's memory."),
    MessagesPlaceholder("chat_history", optional=True),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

result = executor.invoke({"input": "What do you know about my work projects?"})
print(result["output"])

Tool Reference

  • OneBrainSearchTool (onebrain_search): Search memories with keyword/vector/hybrid modes
  • OneBrainWriteTool (onebrain_write): Store new memories (facts, preferences, goals, etc.)
  • OneBrainContextTool (onebrain_context): Get optimized user context (brief/assistant/project/deep)
  • OneBrainEntityTool (onebrain_entity): List entities (people, organizations, tools)

Self-Hosted Setup

OneBrain supports self-hosted deployments. Point the SDK to your instance:

from langchain_onebrain import OneBrainRetriever

retriever = OneBrainRetriever(
    api_key="ob_xxx:secret",
    base_url="https://brain.your-company.com/api/v1",
)

Or set the environment variable:

export ONEBRAIN_BASE_URL="https://brain.your-company.com/api/v1"

API Reference

OneBrainChatMessageHistory

Parameters:

  • api_key (str | None, default None): OneBrain API key (falls back to the ONEBRAIN_API_KEY env var)
  • session_id (str, default "default"): Unique session identifier
  • base_url (str | None, default None): API base URL override

Methods:

  • messages — Property returning List[BaseMessage]
  • add_message(message) — Store a single message
  • add_messages(messages) — Store multiple messages
  • clear() — Archive all session messages
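
The archive-on-clear semantics can be illustrated with a minimal in-memory stand-in. Plain tuples replace BaseMessage here; this sketches the documented interface shape, not the package's implementation:

```python
class InMemoryHistorySketch:
    """Stand-in mirroring the documented interface: clear() archives
    messages rather than deleting them."""

    def __init__(self):
        self._messages = []
        self._archived = []

    @property
    def messages(self):
        # Return a copy so callers cannot mutate internal state.
        return list(self._messages)

    def add_message(self, message):
        self._messages.append(message)

    def add_messages(self, messages):
        self._messages.extend(messages)

    def clear(self):
        # Archive instead of delete, matching the documented behavior.
        self._archived.extend(self._messages)
        self._messages = []
```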

OneBrainRetriever

Parameters:

  • api_key (str | None, default None): OneBrain API key
  • top_k (int, default 10): Maximum results
  • search_mode (str, default "hybrid"): keyword, vector, or hybrid
  • base_url (str | None, default None): API base URL override
  • alpha (float | None, default None): Keyword/vector balance (0.0-1.0)

Methods:

  • invoke(query) / _get_relevant_documents(query) — Returns List[Document]
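
How alpha trades off the two signals is not specified here; a common convention in hybrid search is weighted score fusion, where alpha = 1.0 is purely vector and alpha = 0.0 purely keyword. The blend below is an illustrative assumption, not OneBrain's documented formula:

```python
def hybrid_score(vector_score, keyword_score, alpha):
    """Blend a semantic (vector) score with a lexical (keyword) score.
    alpha = 1.0 -> vector only; alpha = 0.0 -> keyword only."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be within [0.0, 1.0]")
    return alpha * vector_score + (1.0 - alpha) * keyword_score
```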

OneBrainSearchTool

Parameters:

  • query (str, required): Search query
  • top_k (int, default 10): Maximum results
  • mode (str, default "hybrid"): Search mode

OneBrainWriteTool

Parameters:

  • title (str, required): Memory title
  • body (str, required): Memory content
  • memory_type (str, default "fact"): One of fact, preference, decision, goal, experience, skill
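
Validating memory_type locally before a write avoids a round trip on typos. The set below comes from the types listed above; the helper itself is hypothetical, not part of the package:

```python
VALID_MEMORY_TYPES = {"fact", "preference", "decision", "goal", "experience", "skill"}

def check_memory_type(memory_type):
    """Raise early on an unknown memory_type instead of failing server-side."""
    if memory_type not in VALID_MEMORY_TYPES:
        raise ValueError(
            f"unknown memory_type {memory_type!r}; "
            f"expected one of {sorted(VALID_MEMORY_TYPES)}"
        )
    return memory_type
```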

OneBrainContextTool

Parameters:

  • scope (str, default "deep"): One of brief, assistant, project, deep

OneBrainEntityTool

Parameters:

  • entity_type (str | None, default None): Filter by entity type
  • limit (int, default 20): Maximum results

Development

git clone https://github.com/azappnew/langchain-onebrain.git
cd langchain-onebrain
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# With coverage
pytest tests/ -v --cov=langchain_onebrain --cov-report=term-missing

# Lint
ruff check src/ tests/

# Type check
mypy src/

License

MIT License. See LICENSE for details.

Copyright (c) 2026 AZapp One.

