Drop-in SDK that adds an AI memory layer to any application. Works with OpenAI, Anthropic, Gemini, and the Claude Agent SDK.
Learning SDK - AI Memory Layer for Any Application
Add continual learning and long-term memory to any LLM agent with one line of code. This SDK enables agents to learn from every conversation and recall context across sessions—making any agent across any platform stateful.
```python
from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

with learning(agent="my_agent"):
    response = client.chat.completions.create(...)  # LLM is now stateful!
```
Installation
```bash
pip install agentic-learning
```
Quick Start
```bash
# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"
```

```python
from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add continual learning with one line
with learning(agent="my_assistant"):
    # All LLM calls inside this block have learning enabled
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"
```
That's it - this SDK automatically:
- ✅ Learns from every conversation
- ✅ Recalls relevant context when needed
- ✅ Remembers across sessions
- ✅ Works with your existing LLM code
Supported Providers
| Provider | Package | Status | Example |
|---|---|---|---|
| OpenAI Chat | `openai>=1.0.0` | ✅ Stable | `openai_example.py` |
| OpenAI Responses | `openai>=1.0.0` | ✅ Stable | `openai_responses_example.py` |
| Anthropic | `anthropic>=0.18.0` | ✅ Stable | `anthropic_example.py` |
| Claude Agent SDK | `@anthropic-ai/claude-agent-sdk>=0.1.0` | ✅ Stable | `claude_example.py` |
| Gemini | `google-generativeai>=0.3.0` | ✅ Stable | `gemini_example.py` |
Create an issue to request support for another provider, or contribute a PR.
How It Works
This SDK adds stateful memory to your existing LLM code with zero architectural changes.
Benefits:
- 🔌 Drop-in integration - Works with your existing LLM Provider SDK code
- 🧠 Automatic memory - Relevant context retrieved and injected into prompts
- 💾 Persistent across sessions - Conversations remembered even after restarts
- 💰 Cost-effective - Only relevant context injected, reducing token usage
- ⚡ Fast retrieval - Semantic search powered by Letta's optimized infrastructure
- 🏢 Production-ready - Built on Letta's proven memory management platform
Architecture:
```
1. 🎯 Wrap         2. 📝 Capture       3. 🔍 Retrieve     4. 🤖 Respond
   your code          conversations       relevant           with full
   in learning        automatically       memories           context

┌─────────────┐
│  Your Code  │
│  learning() │
└──────┬──────┘
       │
       ▼
┌─────────────┐     ┌──────────────┐
│ Interceptor │───▶│ Letta Server │  (Stores conversations,
│  (Inject)   │◀───│   (Memory)   │   retrieves context)
└──────┬──────┘     └──────────────┘
       │
       ▼
┌─────────────┐
│   LLM API   │  (Sees enriched prompts)
│ OpenAI/etc  │
└─────────────┘
```
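Conceptually, the interceptor step can be sketched as a context manager that temporarily patches the provider's completion method: it prepends retrieved memories to the outgoing messages and records the exchange afterward. The sketch below is an illustrative stand-in only; `MemoryStore`, `FakeClient`, and the injection format are hypothetical, not the SDK's actual internals:

```python
import contextlib

# Hypothetical in-process stand-in for the Letta server.
class MemoryStore:
    def __init__(self):
        self.by_agent = {}  # agent name -> list of remembered lines

    def retrieve(self, agent):
        return self.by_agent.get(agent, [])

    def capture(self, agent, user_msg):
        self.by_agent.setdefault(agent, []).append(f"user said: {user_msg}")

store = MemoryStore()

@contextlib.contextmanager
def learning(agent, client):
    """Patch client.create so calls inside the block see injected memories."""
    original = client.create

    def patched(messages):
        memories = store.retrieve(agent)
        if memories:
            # Inject retrieved context as a system message (hypothetical format).
            messages = [{"role": "system", "content": "\n".join(memories)}] + messages
        reply = original(messages)
        store.capture(agent, messages[-1]["content"])
        return reply

    client.create = patched
    try:
        yield
    finally:
        client.create = original  # always restore the unpatched method on exit

# Minimal fake provider client: reports how many messages it received.
class FakeClient:
    def create(self, messages):
        return f"saw {len(messages)} message(s)"

client = FakeClient()
with learning("demo", client):
    print(client.create([{"role": "user", "content": "My name is Alice"}]))  # no memories yet
    print(client.create([{"role": "user", "content": "What's my name?"}]))   # system msg injected
```

Outside the `with` block the client behaves exactly as before, which is what makes the pattern a drop-in.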
Key Features
Memory Across Sessions
```python
# First session
with learning(agent="sales_bot"):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "I'm interested in Product X"}]
    )

# Later session - agent remembers automatically
with learning(agent="sales_bot"):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me more about that product"}]
    )
    # Agent knows you're asking about Product X
```
Search Agent Memory
```python
from agentic_learning import AgenticLearning

learning_client = AgenticLearning()

# Search past conversations
messages = learning_client.memory.search(
    agent="my_agent",
    query="What are my project requirements?"
)
```
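Search results can also be fed back into a model call. One pattern (assuming, hypothetically, that the search returns message dicts with `role` and `content` keys) is to fold them into a single context string for a follow-up prompt:

```python
# Hypothetical shape of what a memory search might return.
messages = [
    {"role": "user", "content": "The project must support Python 3.9+."},
    {"role": "assistant", "content": "Noted: Python 3.9+ is a hard requirement."},
]

# Fold retrieved messages into one context block for a follow-up prompt.
context = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
prompt = (
    f"Given this prior conversation:\n{context}\n\n"
    "Summarize the project requirements."
)
print(prompt)
```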
Advanced Features
Capture-Only Mode
```python
# Store conversations without injecting memory (useful for logging)
with learning(agent="my_agent", capture_only=True):
    response = client.chat.completions.create(...)
```
Custom Memory Blocks
```python
# Configure which memory blocks to use
with learning(agent="sales_bot", memory=["customer", "product_preferences"]):
    response = client.chat.completions.create(...)
```
Local Development
Using Local Letta Server
```python
from agentic_learning import AgenticLearning, learning

# Connect to local server
learning_client = AgenticLearning(base_url="http://localhost:8283")

with learning(agent="my_agent", client=learning_client):
    response = client.chat.completions.create(...)
```
Run Letta locally with Docker:
```bash
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_key" \
  letta/letta:latest
```
See the self-hosting guide for more options.
Development Setup
```bash
# Clone repository
git clone https://github.com/letta-ai/agentic-learning-sdk.git
cd agentic-learning-sdk/python

# Install in development mode
pip install -e .

# Run tests
pytest tests/ -v

# Run specific provider tests
pytest tests/ -m openai -v
pytest tests/ -m anthropic -v
```
Examples
See the examples/ directory for complete working examples:
```bash
cd ../examples
pip install -r requirements.txt
python openai_example.py
```
Documentation
- 📖 Full Documentation - Complete SDK documentation
- 🧪 Test Suite - 36/36 tests passing (100%)
- 🎯 Examples - Working examples for all providers
- 💬 Letta Discord - Community support
- 📚 Letta Docs - Letta platform documentation
Requirements
- Python 3.9+
- Letta API key (sign up at letta.com)
- At least one LLM provider SDK
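A quick way to sanity-check these requirements before installing is to verify the Python version and see which provider SDKs are importable. The module names below are the import names of the packages listed in the provider table; this snippet is a convenience check, not part of the SDK:

```python
import importlib.util
import sys

# Require Python 3.9 or newer.
assert sys.version_info >= (3, 9), "Python 3.9+ is required"

# Check which provider SDKs are importable (at least one is needed).
providers = ["openai", "anthropic", "google.generativeai"]
installed = []
for name in providers:
    try:
        if importlib.util.find_spec(name) is not None:
            installed.append(name)
    except ModuleNotFoundError:
        pass  # parent package (e.g. "google") is not installed

print("installed provider SDKs:", installed or "none")
```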
License
Apache 2.0 - See LICENSE for details.
Built with Letta - the leading platform for building stateful AI agents with long-term memory.