
Drop-in SDK for adding persistent memory and learning to any agent.


Agentic Learning SDK

Add persistent memory to any LLM agent with one line of code. The Agentic Learning SDK automatically captures conversations, manages context, and enables agents to remember information across sessions.

with learning(agent="my_agent"):
    response = client.chat.completions.create(...)  # Memory handled automatically


Features

  • ๐Ÿ”Œ Drop-in Integration - Works with Anthropic, Claude Agents SDK, OpenAI (Chat Completions & Responses), and Gemini
  • ๐Ÿ’พ Persistent Memory - Conversations automatically saved and recalled across sessions
  • ๐ŸŽฏ Zero Configuration - No prompt engineering or manual context management required
  • โšก Streaming Support - Full support for streaming responses
  • ๐Ÿ” Memory Search - Query past conversations with semantic search
  • ๐ŸŽ›๏ธ Flexible Modes - Auto-inject memory, capture-only, or hybrid approaches

Quick Start

Installation

pip install agentic-learning

Basic Usage

# Set your API keys (shell)
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add memory to your agent with one line
with learning(agent="my_assistant"):
    # Your LLM call - conversation is automatically captured
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"

That's it. The SDK automatically:

  • โœ… Captures all conversations
  • โœ… Injects relevant memory into prompts
  • โœ… Saves to persistent storage (Letta)
  • โœ… Recalls information across sessions

Supported Providers

| Provider                | Package             | Status         | Py Example                  | TS Example                  |
|-------------------------|---------------------|----------------|-----------------------------|-----------------------------|
| Anthropic               | anthropic           | ✅ Stable       | anthropic_example.py        | anthropic_example.ts        |
| Claude Agents SDK       | claude-agent-sdk    | ✅ Stable       | claude_example.py           | claude_example.ts           |
| OpenAI Chat Completions | openai              | ✅ Stable       | openai_example.py           | openai_example.ts           |
| OpenAI Responses API    | openai              | ✅ Stable       | openai_responses_example.py | openai_responses_example.ts |
| Gemini                  | google-generativeai | ✅ Stable       | gemini_example.py           | gemini_example.ts           |
| Vercel AI SDK           | ai-sdk              | ✅ Experimental | -                           | vercel_example.ts           |

See examples/README.md for detailed documentation.

Core Concepts

Learning Context

Wrap any LLM calls in a learning() context to enable conversation capture and dynamic memory:

with learning(agent="agent_name"):
    # All SDK calls inside this block have memory enabled
    response = llm_client.generate(...)

Note: Memory is scoped by agent name. Each agent maintains its own isolated memory, so agent="sales_bot" and agent="support_bot" have separate conversation histories and context.
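The scoping rule can be illustrated with a toy in-memory store (purely hypothetical code; the real SDK persists to Letta):

```python
# Toy illustration of per-agent memory isolation (not the SDK's implementation).
class ToyMemoryStore:
    def __init__(self):
        self._by_agent = {}  # agent name -> list of conversation turns

    def save(self, agent, message):
        self._by_agent.setdefault(agent, []).append(message)

    def history(self, agent):
        # Each agent only ever sees its own turns.
        return list(self._by_agent.get(agent, []))

store = ToyMemoryStore()
store.save("sales_bot", "I'm interested in Product X")
store.save("support_bot", "My login is broken")

print(store.history("sales_bot"))    # only sales_bot's turns
print(store.history("support_bot"))  # only support_bot's turns
```

Because the stores never overlap, nothing said to one agent can leak into another agent's prompts.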

Memory Injection

The SDK automatically retrieves relevant memory and injects it into your prompts:

# First session
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "I'm interested in Product X"}]
    )

# Later session - agent remembers any information related to "customer"
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me more about that product"}]
    )
    # Agent knows you're asking about Product X

Capture-Only Mode

Store conversations without injecting memory (useful for logging or background processing):

with learning(agent="agent_name", capture_only=True):
    # Conversations saved but not injected into prompts
    response = client.chat.completions.create(...)

# Later, list the entire conversation history
from agentic_learning import AgenticLearning

learning_client = AgenticLearning()
messages = learning_client.messages.list("agent_name")

Memory Search

Query past conversations with semantic search:

# Search for relevant conversations
messages = learning_client.memory.search(
    agent="agent_name",
    query="What are my project requirements?"
)
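As a rough illustration of the query-to-ranked-results shape (the SDK performs embedding-based search on the Letta server; this word-overlap toy only mimics the interface):

```python
# Toy stand-in for semantic search: rank stored messages by word overlap.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity over bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(messages, query, top_k=2):
    q = Counter(query.lower().split())
    scored = [(cosine(Counter(m.lower().split()), q), m) for m in messages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for score, m in scored[:top_k] if score > 0]

history = [
    "the project requirements are python 3.9 and a letta api key",
    "my favourite colour is green",
    "requirements also include an openai account",
]
results = search(history, "what are my project requirements")
print(results)  # most relevant turn first
```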

How It Works

The SDK uses automatic interception of LLM SDK calls:

  1. Intercepts - Patches LLM SDK methods to capture conversations
  2. Enriches - Retrieves relevant memory and injects into prompts
  3. Stores - Saves conversations to Letta for persistent storage
  4. Recalls - Automatically loads relevant context in future sessions

┌──────────────────┐
│     Your Code    │
│  client.create() │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ Agentic Learning │  ← Intercepts call
│   Interceptor    │  ← Injects memory
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│     LLM API      │  ← Sees enriched prompt
│  (OpenAI, etc.)  │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│   Letta Server   │  ← Stores conversation
│  (Persistent DB) │  ← Memory update
└──────────────────┘
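The intercept/enrich/store loop can be sketched with plain monkey-patching of a dummy client. All names below are illustrative, not the SDK's actual internals:

```python
# Minimal sketch of SDK-call interception via monkey-patching.
from contextlib import contextmanager

class DummyClient:
    """Stands in for an LLM SDK client."""
    def create(self, messages):
        return f"echo: {messages[-1]}"

captured = []  # stands in for the Letta store

@contextmanager
def learning(client, memory):
    original = client.create  # keep the unpatched method

    def patched(messages):
        enriched = memory + messages           # 2. inject memory into the prompt
        response = original(enriched)          # 3. forward to the real API
        captured.append((messages, response))  # 4. persist the turn
        return response

    client.create = patched                    # 1. intercept the SDK method
    try:
        yield
    finally:
        client.create = original               # restore on exit

client = DummyClient()
with learning(client, memory=["User's name is Alice"]):
    reply = client.create(["What's my name?"])

print(reply)     # the model saw the enriched prompt
print(captured)  # the turn was recorded for storage
```

Outside the `with` block the original method is restored, so calls made there are neither enriched nor captured.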

Architecture

Interceptors

The SDK provides interceptors for different integration patterns:

  • API-Level Interceptors (OpenAI, Anthropic, Gemini) - Patch HTTP API methods
  • Transport-Level Interceptors (Claude Agent SDK) - Patch subprocess transport layer

All interceptors share common logic through BaseAPIInterceptor, making it easy to add new providers.

Client Architecture

AgenticLearning()
├── agents          # Agent management
│   ├── create()
│   ├── update()
│   ├── retrieve()
│   ├── list()
│   ├── delete()
│   └── sleeptime   # Background memory processing
├── memory          # Memory block management
│   ├── create()
│   ├── upsert()
│   ├── retrieve()
│   ├── list()
│   ├── search()    # Semantic search
│   ├── remember()  # Store memories
│   └── context     # Memory context retrieval
└── messages        # Message history
    ├── capture()   # Save conversation turn
    ├── list()
    └── create()    # Send message to LLM

Requirements

  • Python 3.9+
  • Letta API key (sign up at letta.com)
  • At least one LLM SDK:
    • openai>=1.0.0
    • anthropic>=0.18.0
    • google-generativeai>=0.3.0
    • claude-agent-sdk>=0.1.0

Local Development (Optional)

For local development, you can run a Letta server locally:

# Install Letta
pip install letta

# Start server (default: http://localhost:8283)
letta server

See Letta documentation for more details.

Development Setup

# Clone repository
git clone https://github.com/letta-ai/agentic_learning_sdk.git
cd agentic_learning_sdk

# Install in development mode
pip install -e python/

# Run examples
cd examples
python3 openai_example.py

Advanced Usage

Custom Letta Server URL

learning_client = AgenticLearning(base_url="http://custom-host:8283")

Agent Configuration

# Create agent with custom memory blocks
agent = learning_client.agents.create(
    agent="my_agent",
    memory=["human", "persona", "project_context"],
    model="anthropic/claude-sonnet-4-20250514"
)

# Create custom memory block
learning_client.memory.create(
    agent="my_agent",
    label="user_preferences",
    value="Prefers concise technical responses"
)

Async Support

from agentic_learning import learning_async, AsyncAgenticLearning

async_client = AsyncAgenticLearning()

async with learning_async(agent="my_agent", client=async_client):
    response = await async_llm_client.generate(...)
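The async path mirrors the sync lifecycle: capture is enabled on entry and turns are flushed on exit. A minimal sketch with `contextlib` (hypothetical, no real SDK calls):

```python
# Toy sketch of an async learning context (illustrative only).
import asyncio
from contextlib import asynccontextmanager

events = []  # stands in for capture/flush side effects

@asynccontextmanager
async def learning_async(agent):
    events.append(f"{agent}:capture-on")   # enter: enable capture
    try:
        yield
    finally:
        events.append(f"{agent}:flushed")  # exit: flush turns to storage

async def main():
    async with learning_async(agent="my_agent"):
        # stands in for: response = await async_llm_client.generate(...)
        await asyncio.sleep(0)

asyncio.run(main())
print(events)
```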

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Adding a New Provider

  1. Create a new interceptor in python/src/agentic_learning/interceptors/
  2. Extend BaseAPIInterceptor (for API-level) or BaseInterceptor (for transport-level)
  3. Implement SDK-specific methods:
    • extract_user_messages()
    • extract_assistant_message()
    • inject_memory_context()
    • _build_response_from_chunks()
  4. Register in __init__.py
  5. Add example to examples/

See existing interceptors for reference implementations.
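Putting the steps together, a new provider might look like this skeleton (illustrative bodies built around the method names above, not the SDK's real classes):

```python
# Skeleton of a provider interceptor (illustrative, not the SDK's real code).
from abc import ABC, abstractmethod

class BaseAPIInterceptor(ABC):
    """Shared capture/inject pipeline; subclasses adapt one provider's format."""

    @abstractmethod
    def extract_user_messages(self, request): ...

    @abstractmethod
    def extract_assistant_message(self, response): ...

    @abstractmethod
    def inject_memory_context(self, request, memory): ...

    @abstractmethod
    def _build_response_from_chunks(self, chunks): ...

    def process(self, request, response, memory):
        # Common logic shared by all providers.
        enriched = self.inject_memory_context(request, memory)
        return {
            "user": self.extract_user_messages(request),
            "assistant": self.extract_assistant_message(response),
            "enriched": enriched,
        }

class ChatCompletionsInterceptor(BaseAPIInterceptor):
    """Adapter for an OpenAI-style messages list."""

    def extract_user_messages(self, request):
        return [m["content"] for m in request["messages"] if m["role"] == "user"]

    def extract_assistant_message(self, response):
        return response["choices"][0]["message"]["content"]

    def inject_memory_context(self, request, memory):
        system = {"role": "system", "content": "\n".join(memory)}
        return {**request, "messages": [system, *request["messages"]]}

    def _build_response_from_chunks(self, chunks):
        # Reassemble a streamed response into the non-streaming shape.
        return {"choices": [{"message": {"content": "".join(chunks)}}]}

interceptor = ChatCompletionsInterceptor()
turn = interceptor.process(
    request={"messages": [{"role": "user", "content": "What's my name?"}]},
    response={"choices": [{"message": {"content": "Your name is Alice"}}]},
    memory=["The user's name is Alice"],
)
print(turn["user"], turn["assistant"])
```

Only the four provider-specific methods need to change per SDK; `process()` stays in the base class.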

License

Apache 2.0 - See LICENSE for details.


Acknowledgments

Built with Letta - the leading platform for building stateful AI agents with long-term memory.
