
AgentBay Python SDK

Persistent memory for AI agents: store, recall, and share knowledge across sessions. Three lines to give your agent a brain.

Install

pip install agentbay

Quick Start -- Auto-Memory (Recommended)

The chat() method wraps your LLM call with automatic memory. No manual store/recall needed.

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key", project_id="your-project-id")

# Memory happens automatically -- no manual store/recall needed
response = brain.chat([
    {"role": "user", "content": "fix the auth session expiry bug"}
])

# brain.chat() automatically:
# 1. Recalled relevant memories about auth and sessions
# 2. Injected them into the LLM context
# 3. Got the response from Claude
# 4. Extracted learnings and stored them for next time

Using OpenAI

response = brain.chat(
    [{"role": "user", "content": "refactor the payment module"}],
    model="gpt-4o",
    provider="openai",
)

Passing extra LLM parameters

response = brain.chat(
    [{"role": "user", "content": "optimize the database queries"}],
    max_tokens=8192,
    temperature=0.7,
)

Disabling auto-memory

# Recall only (don't store new learnings)
response = brain.chat(messages, auto_store=False)

# Store only (don't inject recalled memories)
response = brain.chat(messages, auto_recall=False)

# No memory at all (just use as a plain LLM wrapper)
response = brain.chat(messages, auto_recall=False, auto_store=False)

Mem0-Compatible API

If you're migrating from Mem0, AgentBay supports the same add() / search() interface:

brain = AgentBay("ab_live_your_key", project_id="your-project-id")

# Store with automatic type detection
brain.add("The auth bug was caused by expired JWT tokens not being refreshed")
brain.add("We decided to use PostgreSQL instead of MongoDB for ACID compliance")

# Search
results = brain.search("authentication issues")
for r in results:
    print(r["title"], r["confidence"])

Manual Memory Control

For full control, use store() and recall() directly:

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key", project_id="your-project-id")
brain.store("Next.js 16 + Prisma + PostgreSQL", title="Project stack")
results = brain.recall("What stack does this project use?")

Or create a new brain on the fly:

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key")
brain.setup_brain("My Agent's Memory")
brain.store("Always use UTC timestamps", title="Convention", type="PREFERENCE")

Core API

  • brain.chat(messages, model, provider, ...) -- LLM call with automatic memory
  • brain.add(data) -- Store with auto-detection (Mem0-compatible)
  • brain.search(query) -- Search memories (Mem0-compatible alias)
  • brain.store(content, title, type, tier, tags) -- Save a memory (full control)
  • brain.recall(query, limit, tier, tags) -- Search memories (semantic + keyword)
  • brain.forget(knowledge_id) -- Archive a memory
  • brain.verify(knowledge_id) -- Confirm a memory is still accurate
  • brain.health() -- Get memory stats
  • brain.setup_brain(name, description) -- Create a new Knowledge Brain
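
The maintenance methods compose naturally with recall(). A rough sketch based on the signatures above (the "id" field on recall results and the tag values are assumptions, not documented behavior):

from agentbay import AgentBay

brain = AgentBay("ab_live_your_key", project_id="your-project-id")

# Narrow a search with the optional recall() filters
results = brain.recall("deployment process", limit=5, tags=["ops"])

for r in results:
    knowledge_id = r["id"]  # assumed key; only "title" and "confidence" are shown above
    if r["confidence"] < 0.5:
        brain.forget(knowledge_id)   # archive memories that look stale
    else:
        brain.verify(knowledge_id)   # confirm the memory is still accurate

print(brain.health())  # overall memory stats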

Memory Types

  • PATTERN -- Learned behaviors and recurring themes
  • FACT -- Verified information
  • PREFERENCE -- User/agent preferences
  • PROCEDURE -- Step-by-step processes
  • CONTEXT -- Situational context
  • PITFALL -- Bugs, errors, and fixes to avoid
  • DECISION -- Architecture and design decisions
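
Each store() call can carry one of these types. A quick sketch (the tags format shown here is an assumption, not a requirement):

brain.store(
    "JWT refresh tokens must be rotated on every login",
    title="Token rotation",
    type="PROCEDURE",
)
brain.store(
    "Don't run schema migrations during peak traffic",
    title="Migration timing",
    type="PITFALL",
    tags=["database"],  # illustrative tag
)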

With CrewAI

pip install agentbay[crewai]

from crewai import Agent
from agentbay.integrations.crewai import AgentBayCrewAIMemory

memory = AgentBayCrewAIMemory(
    api_key="ab_live_your_key",
    project_id="your-project-id",
)

agent = Agent(
    role="Researcher",
    goal="Find and remember information",
    memory=memory,
)
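
From there, the agent drops into a normal crew. A minimal sketch using standard CrewAI classes (the task wording is illustrative):

from crewai import Crew, Task

task = Task(
    description="Summarize what we already know about the payment service",
    expected_output="A short summary drawn from stored memories",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()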

With LangChain

pip install agentbay[langchain]

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentbay.integrations.langchain import AgentBayMemoryTool

tool = AgentBayMemoryTool(
    api_key="ab_live_your_key",
    project_id="your-project-id",
)

llm = ChatOpenAI()
agent = initialize_agent(
    tools=[tool],
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
)
agent.run("Remember that deploys happen every Tuesday at 2pm UTC")

Error Handling

from agentbay import AgentBayError, AuthenticationError, RateLimitError

try:
    results = brain.recall("query")
except AuthenticationError:
    print("Bad API key")
except RateLimitError:
    print("Slow down")
except AgentBayError as e:
    print(f"Error {e.status_code}: {e}")
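
If you hit RateLimitError in a tight loop, a simple backoff usually suffices. A sketch using only the exceptions above (the retry count and delays are arbitrary):

import time

from agentbay import RateLimitError

def recall_with_retry(brain, query, attempts=3):
    """Retry recall() with exponential backoff when rate-limited."""
    for attempt in range(attempts):
        try:
            return brain.recall(query)
        except RateLimitError:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, ... between attempts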

Environment Variables

For chat(), set your LLM provider API key:

# For Anthropic (default provider)
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

Or pass it directly:

response = brain.chat(messages, api_key="sk-ant-...")
