
Drop-in SDK that adds an AI memory layer to any application. Works with OpenAI, Anthropic, Gemini, and the Claude Agent SDK.


Agentic Learning SDK - AI Memory Layer for Any Application

Add continual learning to any LLM agent with one line of code. This SDK enables agents to learn from every conversation and recall context across sessions—making your agents truly stateful.

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

with learning(agent="my_agent"):
    response = client.chat.completions.create(...)  # LLM is now stateful!


Installation

pip install agentic-learning

Quick Start

# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add continual learning with one line
with learning(agent="my_assistant"):
    # All LLM calls inside this block have learning enabled
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"

That's it - this SDK automatically:

  • ✅ Learns from every conversation
  • ✅ Recalls relevant context when needed
  • ✅ Remembers across sessions
  • ✅ Works with your existing LLM code

Supported Providers

Provider           Package                                 Status     Example
OpenAI Chat        openai>=1.0.0                           ✅ Stable   openai_example.py
OpenAI Responses   openai>=1.0.0                           ✅ Stable   openai_responses_example.py
Anthropic          anthropic>=0.18.0                       ✅ Stable   anthropic_example.py
Claude Agent SDK   @anthropic-ai/claude-agent-sdk>=0.1.0   ✅ Stable   claude_example.py
Gemini             google-generativeai>=0.3.0              ✅ Stable   gemini_example.py

Create an issue to request support for another provider, or contribute a PR.

How It Works

This SDK adds stateful memory to your existing LLM code with zero architectural changes:

Benefits:

  • 🔌 Drop-in integration - Works with your existing LLM Provider SDK code
  • 🧠 Automatic memory - Relevant context retrieved and injected into prompts
  • 💾 Persistent across sessions - Conversations remembered even after restarts
  • 💰 Cost-effective - Only relevant context injected, reducing token usage
  • ⚡ Fast retrieval - Semantic search powered by Letta's optimized infrastructure
  • 🏢 Production-ready - Built on Letta's proven memory management platform

Architecture:

1. 🎯 Wrap      2. 📝 Capture       3. 🔍 Retrieve   4. 🤖 Respond
   your code       conversations      relevant         with full
   in learning     automatically      memories         context

┌─────────────┐
│  Your Code  │
│  learning() │
└──────┬──────┘
       │
       ▼
┌─────────────┐    ┌──────────────┐
│ Interceptor │───▶│ Letta Server │  (Stores conversations,
│  (Inject)   │◀───│  (Memory)    │   retrieves context)
└──────┬──────┘    └──────────────┘
       │
       ▼
┌─────────────┐
│  LLM API    │  (Sees enriched prompts)
│ OpenAI/etc  │
└─────────────┘
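The interceptor step in the diagram above can be sketched in plain Python: a context manager temporarily wraps a client's completion method so that calls inside the block get stored memories prepended and new turns captured afterwards. This is an illustrative toy only, not the SDK's actual implementation; `FakeClient`, `MemoryStore`, and the in-process `_stores` dict are invented for the sketch (the real SDK stores memory on a Letta server).

```python
from contextlib import contextmanager

class MemoryStore:
    """Toy stand-in for the Letta server: stores and retrieves messages."""
    def __init__(self):
        self.messages = []

    def retrieve(self):
        # Real retrieval is semantic; here we simply return everything stored.
        return list(self.messages)

    def capture(self, message):
        self.messages.append(message)

class FakeClient:
    """Toy stand-in for an LLM provider client."""
    def create(self, messages):
        return f"echo: {messages[-1]['content']} (saw {len(messages)} messages)"

_stores = {}  # one store per agent name, surviving across blocks

@contextmanager
def learning(agent, client):
    """Wrap client.create so calls inside the block see injected memory."""
    store = _stores.setdefault(agent, MemoryStore())
    original = client.create

    def wrapped(messages):
        enriched = store.retrieve() + messages  # 3. retrieve + inject context
        response = original(enriched)           # 4. call the real provider
        for m in messages:
            store.capture(m)                    # 2. capture the new turns
        return response

    client.create = wrapped                     # 1. wrap the client
    try:
        yield
    finally:
        client.create = original                # restore on exit

client = FakeClient()
with learning("demo", client):
    client.create([{"role": "user", "content": "My name is Alice"}])
    reply = client.create([{"role": "user", "content": "What's my name?"}])
print(reply)  # the second call sees the first turn via injected memory
```

The key design point the sketch mirrors is that the wrapping is scoped: the provider client behaves normally again as soon as the `with` block exits.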

Key Features

Memory Across Sessions

# First session
with learning(agent="sales_bot"):
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "I'm interested in Product X"}]
    )

# Later session - agent remembers automatically
with learning(agent="sales_bot"):
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Tell me more about that product"}]
    )
    # Agent knows you're asking about Product X
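The cross-session behavior above can be illustrated with a minimal on-disk sketch: history keyed by agent name is written out after a session and read back before the next one, so earlier turns survive a restart. This is an assumption-free toy, not how Letta stores memory (which lives server-side); `save_session`, `load_history`, and the JSON file are made up for illustration.

```python
import json
import os
import tempfile

def save_session(path, agent, messages):
    """Append this session's messages to the agent's on-disk history."""
    history = {}
    if os.path.exists(path):
        with open(path) as f:
            history = json.load(f)
    history.setdefault(agent, []).extend(messages)
    with open(path, "w") as f:
        json.dump(history, f)

def load_history(path, agent):
    """Load everything the agent has seen in earlier sessions."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f).get(agent, [])

path = os.path.join(tempfile.gettempdir(), "toy_agent_memory.json")
if os.path.exists(path):
    os.remove(path)

# "First session"
save_session(path, "sales_bot",
             [{"role": "user", "content": "I'm interested in Product X"}])

# "Later session" (could be a new process): earlier turns are available
prior = load_history(path, "sales_bot")
print(prior[0]["content"])  # prints "I'm interested in Product X"
```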

Search Agent Memory

from agentic_learning import AgenticLearning

learning_client = AgenticLearning()

# Search past conversations
messages = learning_client.memory.search(
    agent="my_agent",
    query="What are my project requirements?"
)
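To make the search call above concrete, here is a toy keyword-overlap scorer over a message history. The real `memory.search` uses semantic search on the Letta server; `search_memory` and the sample `history` below are invented purely to show the shape of query-in, ranked-messages-out.

```python
def search_memory(messages, query):
    """Rank messages by naive keyword overlap with the query (toy only)."""
    terms = set(query.lower().replace("?", "").split())
    scored = []
    for msg in messages:
        words = set(msg["content"].lower().split())
        overlap = len(terms & words)
        if overlap:
            scored.append((overlap, msg))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [msg for _, msg in scored]

history = [
    {"role": "user", "content": "the project requirements include a REST API"},
    {"role": "user", "content": "I prefer dark mode"},
]
hits = search_memory(history, "What are my project requirements?")
print(hits[0]["content"])  # the requirements message ranks first
```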

Advanced Features

Capture-Only Mode

# Store conversations without injecting memory (useful for logging)
with learning(agent="my_agent", capture_only=True):
    response = client.chat.completions.create(...)

Custom Memory Blocks

# Configure which memory blocks to use
with learning(agent="sales_bot", memory=["customer", "product_preferences"]):
    response = client.chat.completions.create(...)

Local Development

Using Local Letta Server

from openai import OpenAI
from agentic_learning import AgenticLearning, learning

client = OpenAI()

# Connect to a local Letta server instead of the hosted service
learning_client = AgenticLearning(base_url="http://localhost:8283")

with learning(agent="my_agent", client=learning_client):
    response = client.chat.completions.create(...)

Run Letta locally with Docker:

docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_key" \
  letta/letta:latest

See the self-hosting guide for more options.

Development Setup

# Clone repository
git clone https://github.com/letta-ai/agentic-learning-sdk.git
cd agentic-learning-sdk/python

# Install in development mode
pip install -e .

# Run tests
pytest tests/ -v

# Run specific provider tests
pytest tests/ -m openai -v
pytest tests/ -m anthropic -v

Examples

See the examples/ directory for complete working examples:

cd ../examples
pip install -r requirements.txt
python openai_example.py

Requirements

  • Python 3.9+
  • Letta API key (sign up at letta.com)
  • At least one LLM provider SDK

License

Apache 2.0 - See LICENSE for details.

Built with Letta - the leading platform for building stateful AI agents with long-term memory.
