
A lightweight TOML-based memory system for AI agents


TOMLDiary

Memory, Simplified: TOML-Driven, Agent-Approved.

TOMLDiary is a dead-simple, customizable memory system for agentic applications. It stores data in human-readable TOML files so your agents can keep a tidy diary of only the useful stuff.

Key Benefits

  • Human-readable TOML storage – easy to inspect, debug and manage.
  • Fully customizable – define your own memory schema with simple Pydantic models.
  • Smart deduplication – prevents duplicate preferences with FuzzyWuzzy similarity detection (70% threshold).
  • Enhanced limit enforcement – visual indicators and pre-flight checking prevent failed operations.
  • Force creation mechanism – bypass similarity detection when needed with id="new" parameter.
  • Minimal overhead – lightweight design, backend agnostic and easy to integrate.
  • Atomic, safe writes – ensures data integrity with proper file locking.
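The deduplication check can be sketched with a similarity ratio. A minimal stand-in using the standard library's difflib (the real implementation uses FuzzyWuzzy; the 70% threshold comes from the feature list above, and the function names here are illustrative, not the library's API):

```python
from difflib import SequenceMatcher

THRESHOLD = 70  # percent, matching the documented similarity cutoff


def similarity(a: str, b: str) -> int:
    """Return a 0-100 similarity score, comparable to FuzzyWuzzy's ratio()."""
    return round(SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100)


def find_duplicates(new_text: str, existing: dict[str, str]) -> list[tuple[str, int]]:
    """Return (id, score) pairs for stored preferences at or above the threshold."""
    scored = [(pid, similarity(new_text, text)) for pid, text in existing.items()]
    return [(pid, score) for pid, score in scored if score >= THRESHOLD]


existing = {"pref001": "black blazers for work", "pref003": "enjoys hiking"}
print(find_duplicates("black blazers", existing))  # pref001 matches above 70%
```

Anything scoring at or above the threshold is reported as a near-duplicate instead of being stored again.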

Installation

Requires Python 3.11+

uv add tomldiary pydantic-ai

Quick Start

from pydantic import BaseModel
from typing import Dict
from tomldiary import Diary, PreferenceItem
from tomldiary.backends import LocalBackend

# Be as specific as possible in your preference schema: it is passed to the system prompt of the agent that extracts the data!
# Think of the fields as the "slots" to organize facts into; they tell the agent what to remember.
class MyPrefTable(BaseModel):
    """
    likes    : What the user enjoys
    dislikes : Things user avoids
    allergies: Substances causing reactions
    routines : User’s typical habits
    biography: User’s personal details
    """

    likes: Dict[str, PreferenceItem] = {}
    dislikes: Dict[str, PreferenceItem] = {}
    allergies: Dict[str, PreferenceItem] = {}
    routines: Dict[str, PreferenceItem] = {}
    biography: Dict[str, PreferenceItem] = {}


diary = Diary(
    backend=LocalBackend(path="./memories"),
    pref_table_cls=MyPrefTable,
    max_prefs_per_category=100,
    max_conversations=50,
)

user_id, session_id = "alice", "chat_123"  # example identifiers

await diary.ensure_session(user_id, session_id)
await diary.update_memory(
    user_id,
    session_id,
    user_msg="I'm allergic to walnuts.",
    assistant_msg="I'll remember you're allergic to walnuts.",
)

TOML Memory Example

[_meta]
version = "0.3"
schema_name = "MyPrefTable"

[allergies.walnuts]
text = "allergic to walnuts"
contexts = ["diet", "health"]
_count = 1
_created = "2024-01-01T00:00:00Z"
_updated = "2024-01-01T00:00:00Z"

Conversations File (alice_conversations.toml)

[_meta]
version = "0.3"
schema_name = "MyPrefTable"

[conversations.chat_123]
_created = "2024-01-01T00:00:00Z"
_turns = 5
summary = "Discussed food preferences and dietary restrictions"
keywords = ["food", "allergy", "italian"]

Advanced Usage

Custom Preference Categories

Create your own preference schema:

class DetailedPrefTable(BaseModel):
    """
    dietary     : Food preferences and restrictions
    medical     : Health conditions and medications
    interests   : Hobbies and topics of interest
    goals       : Personal objectives and aspirations
    family      : Family members and relationships
    work        : Professional information
    """
    dietary: Dict[str, PreferenceItem] = {}
    medical: Dict[str, PreferenceItem] = {}
    interests: Dict[str, PreferenceItem] = {}
    goals: Dict[str, PreferenceItem] = {}
    family: Dict[str, PreferenceItem] = {}
    work: Dict[str, PreferenceItem] = {}
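Since the schema docstring is what the extraction agent sees, it is worth checking how the category descriptions read. A small sketch (not part of the library) that parses the `name : description` docstring format used above, applied to a plain stand-in class:

```python
import inspect


class ExampleSchema:
    """
    dietary     : Food preferences and restrictions
    medical     : Health conditions and medications
    """


def category_descriptions(cls) -> dict[str, str]:
    """Parse 'name : description' lines from a schema docstring."""
    out: dict[str, str] = {}
    for line in inspect.getdoc(cls).splitlines():
        if ":" in line:
            name, _, desc = line.partition(":")
            out[name.strip()] = desc.strip()
    return out


print(category_descriptions(ExampleSchema))
```

Keeping each description short and concrete gives the agent clear slots to file facts into.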

Smart Preference Management

The system includes enhanced tools for intelligent preference management:

# The extraction agent uses these enhanced tools automatically:
# - list_preferences(category) - shows limits with visual indicators (✅/⚠️/❌)  
# - upsert_preference() with smart workflows:
#   * Similarity detection prevents duplicates
#   * Auto-increment counts on updates  
#   * Force creation with id="new" when needed
#   * Intelligent error messages with match percentages

# Examples of enhanced error messages:
# "❌ Similar preferences found:
#   • likes/pref001: 'black blazers for work' (85% match)
#   • likes/pref003: 'dark blazers' (72% match)
# 
# To update existing: upsert_preference('likes', id='pref001')
# To force create anyway: upsert_preference('likes', id='new', text='black blazers')"

Backend Options

The library supports different storage backends:

# Local filesystem (default)
from tomldiary.backends import LocalBackend
backend = LocalBackend(path="./memories")

# S3 backend (implement S3Backend)
# backend = S3Backend(bucket="my-memories")

# Redis backend (implement RedisBackend)  
# backend = RedisBackend(host="localhost")

Memory Writer Configuration

# Configure the background writer (import path assumed to be the package root)
from tomldiary import MemoryWriter

writer = MemoryWriter(
    diary=diary,
    workers=3,        # Number of background workers
    qsize=100,        # Queue size
    retry_limit=3,    # Max retries on failure
    retry_delay=1.0   # Delay between retries
)
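MemoryWriter itself ships with the library; for intuition, a stripped-down version of the pattern it implements (bounded queue, worker tasks, retries) might look like:

```python
import asyncio


async def worker(queue: asyncio.Queue, results: list,
                 retry_limit: int = 3, retry_delay: float = 0.01) -> None:
    """Pull jobs off the queue, retrying each a few times before giving up."""
    while True:
        job = await queue.get()
        for _ in range(retry_limit):
            try:
                results.append(job())  # stand-in for the actual memory write
                break
            except Exception:
                await asyncio.sleep(retry_delay)
        queue.task_done()


async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)  # bounded, like qsize=100
    results: list = []
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(3)]
    for text in ("I'm allergic to walnuts.", "I like hiking."):
        await queue.put(lambda t=text: t.upper())  # toy job instead of a write
    await queue.join()  # wait until every queued job is processed
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    return results


done = asyncio.run(main())
print(sorted(done))
```

The real writer queues `submit(...)` calls the same way, so the chat loop never blocks on disk or LLM latency.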

API Reference

Diary

Main class for memory operations:

  • preferences(user_id): Get user preferences as TOML string
  • last_conversations(user_id, limit): Get last N conversation summaries
  • ensure_session(user_id, session_id): Create session if needed
  • update_memory(user_id, session_id, user_msg, assistant_msg): Process and store memory

MemoryWriter

Background queue for non-blocking writes:

  • submit(user_id, session_id, user_message, assistant_response): Queue memory update
  • close(): Graceful shutdown
  • failed_count(): Number of failed operations

Models

  • PreferenceItem: Single preference with text, contexts, and metadata
  • ConversationItem: Conversation with summary, keywords, and turn count
  • MemoryDeps: Container for preferences and conversations
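The TOML example earlier suggests the shape of PreferenceItem. As a plain-Python sketch of that shape (field names taken from the example file; the actual model is a Pydantic class and may carry more metadata):

```python
from dataclasses import dataclass, field


@dataclass
class PreferenceItemSketch:
    """Mirror of the fields visible in the TOML example, not the real model."""

    text: str
    contexts: list[str] = field(default_factory=list)


item = PreferenceItemSketch(text="allergic to walnuts", contexts=["diet", "health"])
print(item.text)
```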

Examples

See the examples/ directory for:

  • simple_example.py: Basic usage with educational agent (no LLM required)
  • example_cooking_show.py: Advanced AI-powered cooking show with celebrity chef interviews
  • culinary_prefs.py: Custom preference schema for culinary applications

Note: Examples use custom agents for educational purposes. The built-in extraction agent automatically uses the enhanced smart deduplication and limit enforcement tools described above.

Development

# Install dev dependencies
uv sync --group dev

# Run tests
pytest

# Format code
ruff format .

# Lint code
ruff check .

License

MIT License - see LICENSE file for details.
