
AgentBay Python SDK

Persistent memory for AI agents. Store, recall, and share knowledge across sessions. 3 lines to give your agent a brain.

Install

pip install agentbay

Quick Start -- Auto-Memory (Recommended)

The chat() method wraps your LLM call with automatic memory. No manual store/recall needed.

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key", project_id="your-project-id")

# Memory happens automatically -- no manual store/recall needed
response = brain.chat([
    {"role": "user", "content": "fix the auth session expiry bug"}
])

# brain.chat() automatically:
# 1. Recalls relevant memories about auth and sessions
# 2. Injects them into the LLM context
# 3. Gets the response from Claude
# 4. Extracts learnings and stores them for next time
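Conceptually, the auto-memory loop can be sketched like this. This is a simplified illustration with stub functions, not the SDK's actual internals; the stub memory store, `call_llm`, and the learning-extraction line are all placeholders:

```python
# Hypothetical sketch of the recall -> inject -> call -> store loop.
# None of these stubs are part of the agentbay SDK.

def recall(query):
    # Pretend memory store keyed by topic keywords.
    memories = {"auth": "PITFALL: JWT tokens were not refreshed on expiry."}
    return [m for k, m in memories.items() if k in query.lower()]

def call_llm(messages):
    # Stand-in for the real provider call (Anthropic/OpenAI).
    return f"Considered {len(messages)} messages."

stored = []

def store(learning):
    # Stand-in for persisting an extracted learning.
    stored.append(learning)

def chat(messages):
    query = messages[-1]["content"]
    context = recall(query)                  # 1. recall relevant memories
    injected = [{"role": "system", "content": "\n".join(context)}] + messages  # 2. inject
    response = call_llm(injected)            # 3. call the model
    store(f"learned from: {query}")          # 4. extract + store for next time
    return response

print(chat([{"role": "user", "content": "fix the auth session expiry bug"}]))
# -> Considered 2 messages.
```

The point of the sketch is that the caller only ever sees the plain `messages` list; the recall and store steps happen around the provider call.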

Using OpenAI

response = brain.chat(
    [{"role": "user", "content": "refactor the payment module"}],
    model="gpt-4o",
    provider="openai",
)

Passing extra LLM parameters

response = brain.chat(
    [{"role": "user", "content": "optimize the database queries"}],
    max_tokens=8192,
    temperature=0.7,
)

Disabling auto-memory

# Recall only (don't store new learnings)
response = brain.chat(messages, auto_store=False)

# Store only (don't inject recalled memories)
response = brain.chat(messages, auto_recall=False)

# No memory at all (just use as a plain LLM wrapper)
response = brain.chat(messages, auto_recall=False, auto_store=False)

Mem0-Compatible API

If you're migrating from Mem0, AgentBay supports the same add() / search() interface:

brain = AgentBay("ab_live_your_key", project_id="your-project-id")

# Store with automatic type detection
brain.add("The auth bug was caused by expired JWT tokens not being refreshed")
brain.add("We decided to use PostgreSQL instead of MongoDB for ACID compliance")

# Search
results = brain.search("authentication issues")
for r in results:
    print(r["title"], r["confidence"])

Manual Memory Control

For full control, use store() and recall() directly:

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key", project_id="your-project-id")
brain.store("Next.js 16 + Prisma + PostgreSQL", title="Project stack")
results = brain.recall("What stack does this project use?")

Or create a new brain on the fly:

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key")
brain.setup_brain("My Agent's Memory")
brain.store("Always use UTC timestamps", title="Convention", type="PREFERENCE")

Core API

Method                                         What it does
brain.chat(messages, model, provider, ...)     LLM call with automatic memory
brain.add(data)                                Store with auto-detection (Mem0-compatible)
brain.search(query)                            Search memories (Mem0-compatible alias)
brain.store(content, title, type, tier, tags)  Save a memory (full control)
brain.recall(query, limit, tier, tags)         Search memories (semantic + keyword)
brain.forget(knowledge_id)                     Archive a memory
brain.verify(knowledge_id)                     Confirm a memory is still accurate
brain.health()                                 Get memory stats
brain.setup_brain(name, description)           Create a new Knowledge Brain
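A common maintenance pattern is to periodically re-check recalled memories: confirm the strong ones with verify() and archive the weak ones with forget(). The triage helper below is a hypothetical sketch built on the "confidence" field shown in the search example; the 0.3 threshold is an illustrative choice, not an SDK default:

```python
# Hypothetical triage helper: split recall()/search() results (dicts with
# a "confidence" field) into ones worth re-verifying and ones to archive.

def triage(memories, floor=0.3):
    keep, drop = [], []
    for m in memories:
        (keep if m["confidence"] >= floor else drop).append(m)
    return keep, drop

results = [
    {"id": "k1", "title": "JWT refresh fix", "confidence": 0.92},
    {"id": "k2", "title": "Old MongoDB note", "confidence": 0.12},
]
keep, drop = triage(results)
# for m in keep: brain.verify(m["id"])   # confirm still accurate
# for m in drop: brain.forget(m["id"])   # archive stale memories
```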

Memory Types

  • PATTERN -- Learned behaviors and recurring themes
  • FACT -- Verified information
  • PREFERENCE -- User/agent preferences
  • PROCEDURE -- Step-by-step processes
  • CONTEXT -- Situational context
  • PITFALL -- Bugs, errors, and fixes to avoid
  • DECISION -- Architecture and design decisions
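To give a feel for how content might map onto these types, here is a naive keyword heuristic. The real auto-detection behind add() is not documented here and this sketch is purely illustrative; the keyword lists are invented:

```python
# Illustrative keyword heuristic for mapping free text to a memory type.
# This is NOT the SDK's actual detection logic, just a sketch.

TYPE_HINTS = {
    "PITFALL": ("bug", "error", "crash", "fix"),
    "DECISION": ("decided", "instead of", "chose"),
    "PREFERENCE": ("prefer", "always", "never"),
    "PROCEDURE": ("step", "first", "then"),
}

def detect_type(text, default="FACT"):
    lowered = text.lower()
    for mem_type, words in TYPE_HINTS.items():
        if any(w in lowered for w in words):
            return mem_type
    return default

print(detect_type("The auth bug was caused by expired JWT tokens"))   # PITFALL
print(detect_type("We decided to use PostgreSQL instead of MongoDB")) # DECISION
```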

With CrewAI

pip install "agentbay[crewai]"

from crewai import Agent
from agentbay.integrations.crewai import AgentBayCrewAIMemory

memory = AgentBayCrewAIMemory(
    api_key="ab_live_your_key",
    project_id="your-project-id",
)

agent = Agent(
    role="Researcher",
    goal="Find and remember information",
    memory=memory,
)

With LangChain

pip install "agentbay[langchain]"

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentbay.integrations.langchain import AgentBayMemoryTool

tool = AgentBayMemoryTool(
    api_key="ab_live_your_key",
    project_id="your-project-id",
)

llm = ChatOpenAI()
agent = initialize_agent(
    tools=[tool],
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
)
agent.run("Remember that deploys happen every Tuesday at 2pm UTC")

Error Handling

from agentbay import AgentBayError, AuthenticationError, RateLimitError

try:
    results = brain.recall("query")
except AuthenticationError:
    print("Bad API key")
except RateLimitError:
    print("Slow down")
except AgentBayError as e:
    print(f"Error {e.status_code}: {e}")

Environment Variables

For chat(), set your LLM provider API key:

# For Anthropic (default provider)
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

Or pass it directly:

response = brain.chat(messages, api_key="sk-ant-...")
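A common SDK convention, which this resolution likely follows, is that an explicit api_key argument wins over the environment variable. The sketch below illustrates that order; it is an assumed convention, not confirmed SDK internals:

```python
import os

# Hypothetical sketch of provider key resolution for chat():
# an explicit api_key argument wins, otherwise fall back to the
# provider's environment variable.

ENV_VARS = {"anthropic": "ANTHROPIC_API_KEY", "openai": "OPENAI_API_KEY"}

def resolve_key(provider="anthropic", api_key=None):
    if api_key:
        return api_key
    key = os.environ.get(ENV_VARS[provider])
    if not key:
        raise RuntimeError(f"Set {ENV_VARS[provider]} or pass api_key=")
    return key
```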
