
A lightweight TOML-based memory system for AI agents


TOMLDiary

Memory, Simplified: TOML-Driven, Agent-Approved.

TOMLDiary is a dead-simple, customizable memory system for agentic applications. It stores data in human-readable TOML files so your agents can keep a tidy diary of only the useful stuff.

Key Benefits

  • Human-readable TOML storage – easy to inspect, debug and manage.
  • Fully customizable – define your own memory schema with simple Pydantic models.
  • Smart deduplication – prevents duplicate preferences with FuzzyWuzzy similarity detection (70% threshold).
  • Enhanced limit enforcement – visual indicators and pre-flight checks prevent failed operations.
  • Force creation mechanism – bypass similarity detection when needed with id="new" parameter.
  • Minimal overhead – lightweight design, backend agnostic and easy to integrate.
  • Atomic, safe writes – ensures data integrity with proper file locking.

Installation

Requires Python 3.11+

uv add tomldiary pydantic-ai

Quick Start

from pydantic import BaseModel
from typing import Dict
from tomldiary import Diary, PreferenceItem
from tomldiary.backends import LocalBackend

# Be as specific as possible in your preference schema; it is passed to the system prompt
# of the agent that extracts the data!
# Think of the fields as the "slots" to organize facts into: they tell the agent what to remember.
class MyPrefTable(BaseModel):
    """
    likes    : What the user enjoys
    dislikes : Things user avoids
    allergies: Substances causing reactions
    routines : User’s typical habits
    biography: User’s personal details
    """

    likes: Dict[str, PreferenceItem] = {}
    dislikes: Dict[str, PreferenceItem] = {}
    allergies: Dict[str, PreferenceItem] = {}
    routines: Dict[str, PreferenceItem] = {}
    biography: Dict[str, PreferenceItem] = {}


diary = Diary(
    backend=LocalBackend(path="./memories"),
    pref_table_cls=MyPrefTable,
    max_prefs_per_category=100,
    max_conversations=50,
)

# Example IDs; in practice these come from your application.
user_id, session_id = "alice", "chat_123"

# These calls are coroutines, so run them inside an async function / event loop.
await diary.ensure_session(user_id, session_id)
await diary.update_memory(
    user_id,
    session_id,
    user_msg="I'm allergic to walnuts.",
    assistant_msg="I'll remember you're allergic to walnuts.",
)

TOML Memory Example

[_meta]
version = "0.3"
schema_name = "MyPrefTable"

[allergies.walnuts]
text = "allergic to walnuts"
contexts = ["diet", "health"]
_count = 1
_created = "2024-01-01T00:00:00Z"
_updated = "2024-01-01T00:00:00Z"

Conversations File (alice_conversations.toml)

[_meta]
version = "0.3"
schema_name = "MyPrefTable"

[conversations.chat_123]
_created = "2024-01-01T00:00:00Z"
_turns = 5
summary = "Discussed food preferences and dietary restrictions"
keywords = ["food", "allergy", "italian"]

Advanced Usage

Custom Preference Categories

Create your own preference schema:

class DetailedPrefTable(BaseModel):
    """
    dietary     : Food preferences and restrictions
    medical     : Health conditions and medications
    interests   : Hobbies and topics of interest
    goals       : Personal objectives and aspirations
    family      : Family members and relationships
    work        : Professional information
    """
    dietary: Dict[str, PreferenceItem] = {}
    medical: Dict[str, PreferenceItem] = {}
    interests: Dict[str, PreferenceItem] = {}
    goals: Dict[str, PreferenceItem] = {}
    family: Dict[str, PreferenceItem] = {}
    work: Dict[str, PreferenceItem] = {}

Smart Preference Management

The system includes enhanced tools for intelligent preference management:

# The extraction agent uses these enhanced tools automatically:
# - list_preferences(category) - shows limits with visual indicators (✅/⚠️/❌)  
# - upsert_preference() with smart workflows:
#   * Similarity detection prevents duplicates
#   * Auto-increment counts on updates  
#   * Force creation with id="new" when needed
#   * Intelligent error messages with match percentages

# Examples of enhanced error messages:
# "❌ Similar preferences found:
#   • likes/pref001: 'black blazers for work' (85% match)
#   • likes/pref003: 'dark blazers' (72% match)
# 
# To update existing: upsert_preference('likes', id='pref001')
# To force create anyway: upsert_preference('likes', id='new', text='black blazers')"
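
The exact tool internals aren't shown here, but the idea behind the deduplication is simple: compare a candidate preference against existing entries with FuzzyWuzzy and surface anything above the 70% threshold instead of blindly inserting. A minimal sketch, assuming a hypothetical find_similar helper (not part of the library) and token_set_ratio as the scoring function:

# Illustrative only: a hypothetical helper mirroring TOMLDiary's 70% fuzzy-match rule.
# The choice of token_set_ratio is an assumption; the library may score differently.
from fuzzywuzzy import fuzz

def find_similar(candidate: str, existing: dict[str, str], threshold: int = 70) -> list[tuple[str, int]]:
    """Return (preference_id, score) pairs whose text matches the candidate above the threshold."""
    scored = [
        (pref_id, fuzz.token_set_ratio(candidate, text))
        for pref_id, text in existing.items()
    ]
    matches = [(pid, score) for pid, score in scored if score >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

# Example: the agent would be steered toward updating pref001 rather than creating a duplicate.
existing = {"pref001": "black blazers for work", "pref003": "dark blazers"}
print(find_similar("black blazers", existing))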

Backend Options

The library supports different storage backends:

# Local filesystem (default)
from pathlib import Path
from tomldiary.backends import LocalBackend
backend = LocalBackend(Path("./memories"))

# S3 backend (implement S3Backend)
# backend = S3Backend(bucket="my-memories")

# Redis backend (implement RedisBackend)  
# backend = RedisBackend(host="localhost")
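
The backend protocol itself isn't documented in this README, so the sketch below assumes a minimal read/write-text interface. Treat the class and method names as placeholders and match them to LocalBackend's actual signature before implementing:

# Hypothetical S3 backend sketch: the read_text/write_text interface is an assumption,
# not TOMLDiary's documented backend protocol.
import boto3

class S3Backend:
    def __init__(self, bucket: str, prefix: str = "memories/"):
        self._s3 = boto3.client("s3")
        self._bucket = bucket
        self._prefix = prefix

    def read_text(self, name: str) -> str:
        # Fetch a TOML document from S3 and decode it.
        obj = self._s3.get_object(Bucket=self._bucket, Key=self._prefix + name)
        return obj["Body"].read().decode("utf-8")

    def write_text(self, name: str, data: str) -> None:
        # Store the serialized TOML document back to S3.
        self._s3.put_object(Bucket=self._bucket, Key=self._prefix + name, Body=data.encode("utf-8"))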

Memory Writer Configuration

# Configure the background writer
# (import path assumed to follow the top-level package, as with Diary)
from tomldiary import MemoryWriter

writer = MemoryWriter(
    diary=diary,
    workers=3,        # Number of background workers
    qsize=100,        # Queue size
    retry_limit=3,    # Max retries on failure
    retry_delay=1.0,  # Delay between retries
)
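
With the writer configured, memory updates can be queued without blocking the request path. A short sketch using the submit/close methods listed in the API reference below (this assumes they are awaitable, which the README does not spell out):

# Sketch: queue an update and shut down cleanly.
# Assumes submit() and close() are coroutines; adjust if the writer exposes a sync API.
await writer.submit(
    user_id,
    session_id,
    "I love hiking on weekends.",          # user_message
    "Noted, you enjoy weekend hikes.",     # assistant_response
)

# ... later, during application shutdown:
await writer.close()
print(f"Failed writes: {writer.failed_count()}")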

API Reference

Diary

Main class for memory operations:

  • preferences(user_id): Get user preferences as TOML string
  • last_conversations(user_id, limit): Get last N conversation summaries
  • ensure_session(user_id, session_id): Create session if needed
  • update_memory(user_id, session_id, user_msg, assistant_msg): Process and store memory
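
For example, reading memory back for prompt construction might look like this. It is a sketch built only from the method names above; the calls are awaited here to mirror the Quick Start:

# Sketch: load stored memory to inject into an agent's system prompt.
prefs_toml = await diary.preferences(user_id)        # TOML string of all preferences
recent = await diary.last_conversations(user_id, 5)  # last 5 conversation summaries

system_prompt = (
    "You are a helpful assistant. Known user preferences:\n"
    f"{prefs_toml}\n"
    f"Recent conversations: {recent}"
)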

Automated Compaction Sweeps

Use CompactionConfig to schedule background clean-up passes that trim redundant preferences or stale conversation summaries. The configuration persists progress inside _meta.compaction so counters survive restarts.

from tomldiary.compaction import CompactionConfig

compaction = CompactionConfig(
    enabled=True,
    total_char_threshold=4000,      # trigger when serialized store exceeds N characters
    segment_char_threshold=600,     # or if any single block exceeds this size
    user_turn_interval=25,          # also run every 25 user turns
    cooldown_seconds=900,           # minimum gap between runs
    compact_preferences=True,       # target preference store
    compact_conversations=False,    # skip conversation summaries for this diary
)

diary = Diary(
    backend=backend,
    pref_table_cls=MyPrefTable,
    agent=extractor,                # your extraction agent instance (not shown here)
    compaction_config=compaction,
)

The compactor uses dedicated tools (list_preference_blocks, rewrite_*, delete_*) and will loop through every block during a sweep. When disabled, the diary still records char counts and turn statistics so triggers fire immediately once compaction is re-enabled.

MemoryWriter

Background queue for non-blocking writes:

  • submit(user_id, session_id, user_message, assistant_response): Queue memory update
  • close(): Graceful shutdown
  • failed_count(): Number of failed operations

Models

  • PreferenceItem: Single preference with text, contexts, and metadata
  • ConversationItem: Conversation with summary, keywords, and turn count
  • MemoryDeps: Container for preferences and conversations
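
A rough sketch of what a PreferenceItem carries, based on the fields visible in the TOML example above (the constructor signature is an assumption; metadata such as _count and the timestamps are managed by the library):

# Illustrative only: field names taken from the TOML example; the real constructor may differ.
item = PreferenceItem(
    text="allergic to walnuts",
    contexts=["diet", "health"],
)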

Examples

See the examples/ directory for:

  • simple_example.py: Basic usage with educational agent (no LLM required)
  • example_cooking_show.py: Advanced AI-powered cooking show with celebrity chef interviews
  • culinary_prefs.py: Custom preference schema for culinary applications

Note: Examples use custom agents for educational purposes. The built-in extraction agent automatically uses the enhanced smart deduplication and limit enforcement tools described above.

Development

# Install dev dependencies
uv sync --group dev

# Run tests
pytest

# Format code
ruff format .

# Lint code
ruff check .

License

MIT License - see LICENSE file for details.
