Agentic Learning SDK
Add persistent memory to any LLM agent with one line of code. The SDK automatically captures conversations, manages context, and enables agents to remember information across sessions.
with learning(agent="my_agent"):
    response = client.chat.completions.create(...)  # Memory handled automatically
Features
- Drop-in Integration - Works with Anthropic, Claude Agents SDK, OpenAI (Chat Completions & Responses), and Gemini
- Persistent Memory - Conversations automatically saved and recalled across sessions
- Zero Configuration - No prompt engineering or manual context management required
- Streaming Support - Full support for streaming responses
- Memory Search - Query past conversations with semantic search
- Flexible Modes - Auto-inject memory, capture-only, or hybrid approaches
Quick Start
Installation
pip install agentic-learning
Basic Usage
# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add memory to your agent with one line
with learning(agent="my_assistant"):
    # Your LLM call - conversation is automatically captured
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"
That's it. The SDK automatically:
- Captures all conversations
- Injects relevant memory into prompts
- Saves to persistent storage (Letta)
- Recalls information across sessions (see the sketch below)
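Because conversations persist in Letta, memory also survives process restarts. A minimal sketch, assuming the same agent name is reused in a fresh Python process after the Quick Start above:

# A later run, in a completely new process
from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

with learning(agent="my_assistant"):
    # Memory captured in earlier sessions is injected automatically
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Remind me what my name is."}]
    )
    # The agent can answer from persisted memory: "Your name is Alice"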
Supported Providers
| Provider | Package | Status | Python Example | TypeScript Example |
|---|---|---|---|---|
| Anthropic | anthropic | Stable | anthropic_example.py | anthropic_example.ts |
| Claude Agents SDK | claude-agent-sdk | Stable | claude_example.py | claude_example.ts |
| OpenAI Chat Completions | openai | Stable | openai_example.py | openai_example.ts |
| OpenAI Responses API | openai | Stable | openai_responses_example.py | openai_responses_example.ts |
| Gemini | google-generativeai | Stable | gemini_example.py | gemini_example.ts |
| Vercel AI SDK | ai-sdk | Experimental | — | vercel_example.ts |
See examples/README.md for detailed documentation.
Core Concepts
Learning Context
Wrap any LLM calls in a learning() context to enable conversation capture and dynamic memory:
with learning(agent="agent_name"):
    # All SDK calls inside this block have memory enabled
    response = llm_client.generate(...)
Note: Memory is scoped by agent name. Each agent maintains its own isolated memory, so agent="sales_bot" and agent="support_bot" have separate conversation histories and context.
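For example, a short sketch of this isolation (using the same OpenAI client as above):

# Each agent name gets its own isolated memory
with learning(agent="sales_bot"):
    client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Our discount code is SPRING25"}]
    )

with learning(agent="support_bot"):
    # support_bot cannot see sales_bot's conversations
    client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What discount codes do we have?"}]
    )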
Memory Injection
The SDK automatically retrieves relevant memory and injects it into your prompts:
# First session
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "I'm interested in Product X"}]
    )

# Later session - agent remembers any information related to "customer"
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Tell me more about that product"}]
    )
    # Agent knows you're asking about Product X
Capture-Only Mode
Store conversations without injecting memory (useful for logging or background processing):
with learning(agent="agent_name", capture_only=True):
    # Conversations saved but not injected into prompts
    response = client.chat.completions.create(...)

# Later, list the entire conversation history
from agentic_learning import AgenticLearning

learning_client = AgenticLearning()
messages = learning_client.messages.list("agent_name")
Memory Search
Query past conversations with semantic search:
# Search for relevant conversations
messages = learning_client.memory.search(
    agent="agent_name",
    query="What are my project requirements?"
)
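The exact result shape depends on the client version; as a sketch, assuming search() returns message-like objects with role and content fields:

# Inspect the matching messages (field names are an assumption)
for message in messages:
    print(message.role, message.content)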
How It Works
The SDK uses automatic interception of LLM SDK calls:
- Intercepts - Patches LLM SDK methods to capture conversations
- Enriches - Retrieves relevant memory and injects into prompts
- Stores - Saves conversations to Letta for persistent storage
- Recalls - Automatically loads relevant context in future sessions
┌──────────────────┐
│    Your Code     │
│ client.create()  │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ Agentic Learning │ ← Intercepts call
│   Interceptor    │ ← Injects memory
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│     LLM API      │ ← Sees enriched prompt
│  (OpenAI, etc)   │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│   Letta Server   │ ← Stores conversation
│ (Persistent DB)  │ ← Memory update
└──────────────────┘
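Conceptually, the interception step is ordinary method patching. A simplified sketch of the idea (not the SDK's actual internals), shown for the OpenAI Chat Completions client:

import functools

def patch_create(client, inject_memory, capture_turn):
    """Wrap client.chat.completions.create to enrich requests and capture responses."""
    original = client.chat.completions.create

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        # Enrich: prepend retrieved memory to the outgoing messages
        kwargs["messages"] = inject_memory(kwargs.get("messages", []))
        response = original(*args, **kwargs)
        # Store: persist the completed conversation turn
        capture_turn(kwargs["messages"], response)
        return response

    client.chat.completions.create = wrapper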
Architecture
Interceptors
The SDK provides interceptors for different integration patterns:
- API-Level Interceptors (OpenAI, Anthropic, Gemini) - Patch HTTP API methods
- Transport-Level Interceptors (Claude Agent SDK) - Patch subprocess transport layer
All interceptors share common logic through BaseAPIInterceptor, making it easy to add new providers.
Client Architecture
AgenticLearning()
├── agents                 # Agent management
│   ├── create()
│   ├── update()
│   ├── retrieve()
│   ├── list()
│   ├── delete()
│   └── sleeptime          # Background memory processing
├── memory                 # Memory block management
│   ├── create()
│   ├── upsert()
│   ├── retrieve()
│   ├── list()
│   ├── search()           # Semantic search
│   ├── remember()         # Store memories
│   └── context            # Memory context retrieval
└── messages               # Message history
    ├── capture()          # Save conversation turn
    ├── list()
    └── create()           # Send message to LLM
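A usage sketch exercising these namespaces (restricted to the calls shown in this README; other signatures may differ):

from agentic_learning import AgenticLearning

learning_client = AgenticLearning()

# Agent management
agents = learning_client.agents.list()

# Semantic search over an agent's memory
results = learning_client.memory.search(
    agent="my_agent",
    query="open action items"
)

# Full message history for an agent
history = learning_client.messages.list("my_agent")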
Requirements
- Python 3.9+
- Letta API key (sign up at letta.com)
- At least one LLM SDK:
  - openai>=1.0.0
  - anthropic>=0.18.0
  - google-generativeai>=0.3.0
  - claude-agent-sdk>=0.1.0
Local Development (Optional)
For local development, you can run Letta server locally:
# Install Letta
pip install letta
# Start server (default: http://localhost:8283)
letta server
See Letta documentation for more details.
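To point the SDK at a local server rather than the hosted Letta API, pass its URL via base_url (the same option shown under Advanced Usage below):

from agentic_learning import AgenticLearning

# Default local server address from `letta server`
learning_client = AgenticLearning(base_url="http://localhost:8283")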
Development Setup
# Clone repository
git clone https://github.com/letta-ai/agentic_learning_sdk.git
cd agentic_learning_sdk
# Install in development mode
pip install -e python/
# Run examples
cd examples
python3 openai_example.py
Advanced Usage
Custom Letta Server URL
learning_client = AgenticLearning(base_url="http://custom-host:8283")
Agent Configuration
# Create agent with custom memory blocks
agent = learning_client.agents.create(
    agent="my_agent",
    memory=["human", "persona", "project_context"],
    model="anthropic/claude-sonnet-4-20250514"
)

# Create custom memory block
learning_client.memory.create(
    agent="my_agent",
    label="user_preferences",
    value="Prefers concise technical responses"
)
Async Support
from agentic_learning import learning_async, AsyncAgenticLearning

async_client = AsyncAgenticLearning()

async with learning_async(agent="my_agent", client=async_client):
    response = await async_llm_client.generate(...)
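For instance, with the async OpenAI client (a sketch; any supported async SDK works the same way):

import asyncio
from openai import AsyncOpenAI
from agentic_learning import learning_async, AsyncAgenticLearning

async def main():
    openai_client = AsyncOpenAI()
    async_client = AsyncAgenticLearning()

    async with learning_async(agent="my_agent", client=async_client):
        response = await openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )

asyncio.run(main())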
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Adding a New Provider
- Create a new interceptor in python/src/agentic_learning/interceptors/
- Extend BaseAPIInterceptor (for API-level) or BaseInterceptor (for transport-level)
- Implement SDK-specific methods: extract_user_messages(), extract_assistant_message(), inject_memory_context(), _build_response_from_chunks()
- Register in __init__.py
- Add example to examples/
See existing interceptors for reference implementations.
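As a skeletal sketch of those steps (a hypothetical "acme" provider; the real method bodies depend on that provider's request and response shapes):

from agentic_learning.interceptors import BaseAPIInterceptor

class AcmeInterceptor(BaseAPIInterceptor):
    """Hypothetical interceptor for an 'acme' LLM SDK."""

    def extract_user_messages(self, kwargs):
        # Pull user-authored turns out of the provider's request kwargs
        return [m for m in kwargs.get("messages", []) if m["role"] == "user"]

    def extract_assistant_message(self, response):
        # Pull the assistant reply out of the provider's response object
        return response.choices[0].message.content

    def inject_memory_context(self, kwargs, context):
        # Prepend retrieved memory as a system message
        kwargs["messages"] = [{"role": "system", "content": context}] + kwargs.get("messages", [])
        return kwargs

    def _build_response_from_chunks(self, chunks):
        # Reassemble a streamed response into a single assistant message
        return "".join(chunks)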
License
Apache 2.0 - See LICENSE for details.
Links
- Homepage
- Examples
- Issue Tracker
- Letta Discord
- Letta Documentation
Acknowledgments
Built with Letta - the leading platform for building stateful AI agents with long-term memory.