# onebrain-llama-index

LlamaIndex integration for OneBrain -- the persistent AI memory layer for humans and agents.

This package wraps the `onebrain-sdk` Python client and exposes OneBrain functionality as native LlamaIndex components: a Reader, a Retriever, and a ToolSpec.
## Installation

```bash
pip install onebrain-llama-index
```
## Quick Start

### Set up your API key

```bash
export ONEBRAIN_API_KEY="ob_your_key:your_secret"
```
Or pass it directly to any component constructor.
### Reader -- Load memories as Documents

```python
from llama_index_onebrain import OneBrainReader

reader = OneBrainReader(api_key="ob_xxx:secret")

# Search for specific memories
docs = reader.load_data(query="project deadlines")

# Or list all memories of a given type
docs = reader.load_data(memory_type="fact", limit=50)

# Use with any LlamaIndex index
from llama_index.core import VectorStoreIndex

index = VectorStoreIndex.from_documents(docs)
query_engine = index.as_query_engine()
response = query_engine.query("What are the upcoming deadlines?")
```
### Retriever -- Plug into query engines

```python
from llama_index_onebrain import OneBrainRetriever
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.llms.openai import OpenAI

retriever = OneBrainRetriever(
    api_key="ob_xxx:secret",
    top_k=5,
    search_mode="hybrid",
)

query_engine = RetrieverQueryEngine.from_args(
    retriever=retriever,
    llm=OpenAI(model="gpt-4"),
)

response = query_engine.query("What are my preferences?")
print(response)
```
### ToolSpec -- Use with ReAct agents

```python
from llama_index_onebrain import OneBrainToolSpec
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

spec = OneBrainToolSpec(api_key="ob_xxx:secret")

agent = ReActAgent.from_tools(
    spec.to_tool_list(),
    llm=OpenAI(model="gpt-4"),
    verbose=True,
)

# The agent can now search, write, and retrieve context
response = agent.chat("What do you remember about my work projects?")
print(response)

# Write new memories
response = agent.chat("Remember that I prefer dark mode in all apps")
print(response)
```
## Components

### OneBrainReader

Loads OneBrain memories as LlamaIndex `Document` objects.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `None` | API key (falls back to `ONEBRAIN_API_KEY` env var) |
| `base_url` | `str` | `None` | Custom API base URL |
`load_data()` parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | `str` | `None` | Search query (uses `memory.search` when set) |
| `memory_type` | `str` | `None` | Filter by type: `fact`, `preference`, `decision`, `goal`, `experience`, `skill` |
| `limit` | `int` | `100` | Maximum number of memories to load |
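The conversion from raw memory records to `Document` objects happens inside the Reader; conceptually it might look like the sketch below. The record field names (`id`, `type`, `content`) and the plain `dataclass` standing in for LlamaIndex's `Document` are assumptions for illustration, not the package's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Stand-in for llama_index.core.Document."""
    text: str
    metadata: dict = field(default_factory=dict)

def records_to_documents(records, memory_type=None, limit=100):
    """Filter raw memory records by type and convert them to Documents."""
    selected = [r for r in records if memory_type is None or r.get("type") == memory_type]
    return [
        Document(
            text=r["content"],
            metadata={"memory_type": r.get("type"), "id": r.get("id")},
        )
        for r in selected[:limit]
    ]

records = [
    {"id": "m1", "type": "fact", "content": "Project X ships on June 1."},
    {"id": "m2", "type": "preference", "content": "Prefers dark mode."},
]
docs = records_to_documents(records, memory_type="fact")
```

Keeping the memory type and id in `metadata` means downstream indexes and query engines can filter or cite individual memories.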
### OneBrainRetriever

LlamaIndex `BaseRetriever` implementation backed by OneBrain search.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `str` | `None` | API key (falls back to `ONEBRAIN_API_KEY` env var) |
| `top_k` | `int` | `10` | Maximum number of results |
| `search_mode` | `str` | `"hybrid"` | Search mode: `hybrid`, `keyword`, or `vector` |
| `base_url` | `str` | `None` | Custom API base URL |
| `callback_manager` | `CallbackManager` | `None` | LlamaIndex callback manager |
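To make the three `search_mode` values concrete: `keyword` matches query terms lexically, `vector` compares embeddings, and `hybrid` blends the two. The actual scoring happens server-side in OneBrain; the toy functions below are purely illustrative of the distinction:

```python
import math

def keyword_score(query: str, text: str) -> float:
    """Lexical match: fraction of query terms appearing in the text."""
    terms = query.lower().split()
    return sum(t in text.lower() for t in terms) / len(terms)

def vector_score(q_vec: list[float], d_vec: list[float]) -> float:
    """Semantic match: cosine similarity between embedding vectors."""
    dot = sum(a * b for a, b in zip(q_vec, d_vec))
    norm = math.sqrt(sum(a * a for a in q_vec)) * math.sqrt(sum(b * b for b in d_vec))
    return dot / norm if norm else 0.0

def hybrid_score(kw: float, vec: float, alpha: float = 0.5) -> float:
    """Hybrid mode: weighted blend of keyword and vector scores."""
    return alpha * kw + (1 - alpha) * vec
```

Hybrid search is a sensible default because keyword matching catches exact identifiers (names, project codes) that embeddings can miss, while vector search catches paraphrases that share no terms with the query.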
### OneBrainToolSpec

Exposes four tools for LlamaIndex agents:

| Tool | Description |
|---|---|
| `search_memory(query, top_k=10)` | Search memories by semantic query |
| `write_memory(content, memory_type="fact")` | Write a new memory |
| `get_context(scope="assistant")` | Retrieve user context |
| `list_entities(entity_type=None)` | List stored entities |
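A ReAct agent works by letting the LLM pick one of these tools by name and arguments at each step. The stand-in sketch below shows that dispatch loop with local stubs in place of the real OneBrain-backed tools (the stub bodies and return shapes are invented for illustration):

```python
# Local stubs standing in for the real OneBrain-backed tools.
def search_memory(query, top_k=10):
    """Pretend semantic search over stored memories."""
    return [f"memory matching {query!r}"][:top_k]

def write_memory(content, memory_type="fact"):
    """Pretend write; echoes what would be stored."""
    return {"status": "stored", "type": memory_type, "content": content}

TOOLS = {"search_memory": search_memory, "write_memory": write_memory}

def dispatch(tool_name, **kwargs):
    """A ReAct-style agent resolves the tool the LLM chose and calls it."""
    return TOOLS[tool_name](**kwargs)

result = dispatch("write_memory", content="Prefers dark mode", memory_type="preference")
```

In the real integration, `spec.to_tool_list()` performs this name-to-function wiring for you, with each tool's docstring telling the LLM when to use it.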
## Configuration

### Environment Variables

| Variable | Description |
|---|---|
| `ONEBRAIN_API_KEY` | Your OneBrain API key |
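The "explicit argument first, environment variable as fallback" behavior described above is a common constructor pattern; a minimal sketch of how such resolution typically works (the function name and error message are illustrative, not the package's API):

```python
import os

def resolve_api_key(api_key=None):
    """Prefer an explicitly passed key; otherwise fall back to the environment."""
    key = api_key or os.environ.get("ONEBRAIN_API_KEY")
    if not key:
        raise ValueError("Set ONEBRAIN_API_KEY or pass api_key explicitly")
    return key

os.environ["ONEBRAIN_API_KEY"] = "ob_demo:secret"
```

This lets notebooks pass keys inline while production deployments keep credentials out of source code.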
### Self-Hosted Setup

If you run a self-hosted OneBrain instance, pass the `base_url` parameter:

```python
reader = OneBrainReader(
    api_key="ob_xxx:secret",
    base_url="https://your-instance.example.com/api/eu",
)

retriever = OneBrainRetriever(
    api_key="ob_xxx:secret",
    base_url="https://your-instance.example.com/api/eu",
)

spec = OneBrainToolSpec(
    api_key="ob_xxx:secret",
    base_url="https://your-instance.example.com/api/eu",
)
```
## Development

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Run with coverage
pytest tests/ -v --cov=llama_index_onebrain --cov-report=term-missing

# Type checking
mypy src/

# Linting
ruff check src/ tests/
```
## License

MIT