
Lightweight, file-based context management for AI agents. Your agent writes better abstracts than your RAG pipeline.


📚 ShelfAI

🚧 Experimental – ShelfAI is in active development. The core is battle-tested on a 25-agent production swarm, but the public API may change. We're releasing early for community feedback. Open an issue or join the conversation.

Most frameworks optimize retrieval and persistence. ShelfAI optimizes the source files they retrieve from.

Every agent framework treats skill files as flat text – you either load the whole thing or nothing. Retrieval systems find the right file, but nobody optimizes what's inside it. Skills accumulate with no pruning, no chunking, no internal structure for selective loading.

ShelfAI is the document-ops layer for agent context. It applies RAG architecture principles – abstracts, semantic chunking, titled sections – to agent skill files, so an LLM can skim an abstract, decide relevance, and load only the chunks it needs. An agent maintains the files over time, handling restructuring and pruning automatically.

The pattern comes from a medical RAG system we're building in partnership with the University of Coimbra – tiered retrieval, agent-written abstracts, structured chunking. We asked: why doesn't agent memory work this way? So we built ShelfAI.

```
pip install shelfai==0.2.0a3
```

Alpha release – this is an early, experimental version. The API may change between releases. Back up your agent memory and skill files before running write operations (--write). ShelfAI creates automatic backups (.pre-chunk, .pre-compact), but during alpha, keep your own copies too. We're publishing early to get feedback from the Hermes and OpenClaw communities. Please report issues – it helps a lot.



Where ShelfAI Sits

Your agent's memory architecture is sophisticated. The documents it remembers are not. Existing memory systems – Hermes, Honcho, SuperMemory, QMD – solve what to remember and when to retrieve it. ShelfAI solves how the documents are structured once you've decided to read one.

```
Raw skill file / knowledge document
        ↓
   [ ShelfAI ]  ← structures, chunks, titles, writes abstracts
        ↓
   Structured document with abstract + semantic chunks
        ↓
   QMD / SuperMemory / Hermes skills / OpenClaw ClawHub
```

ShelfAI is a preprocessing layer that makes retrieval systems work better, not a replacement for any of them.

| Layer | Tool | What It Does |
|---|---|---|
| Search | QMD | Finds the right files. BM25 + vector + LLM reranking, fully local. |
| Structure | ShelfAI | Curates what gets searched. Abstracts, chunks, learning loop. |
| Entity Memory | Honcho or SuperMemory | Remembers users, projects, facts that change over time. |

The Core Problem

Every agent framework today treats skill files as flat text:

Skill discovery – You either match the YAML description or you don't. There's no abstract to help make smarter routing decisions. In Hermes, for example, Level 0 gives you a name + description index (~3k tokens) and Level 1 gives you the full skill. There is no Level 0.5.

Skill loading – You read zero lines or all 500. No way to say "give me chunk 3 about error handling." Every irrelevant line burns tokens.

Skill creation – The agent writes a flat markdown file. No internal structure optimized for future retrieval. Skills accumulate indefinitely with no pruning, no contradiction detection, no internal navigation.

ShelfAI adds the missing gradient: abstracts for smarter routing, semantic chunks for selective loading, and a learning loop that improves both over time.


How It Works

The Shelf

Your agent's knowledge lives in a simple directory:

```
shelf/
├── index.md           ← One-line abstracts for everything
├── skills/            ← Agent capabilities and procedures
├── knowledge/         ← Domain-specific reference material
├── memory/            ← Learnings from past sessions
│   ├── user/          ← User preferences
│   └── agent/         ← Operational lessons and patterns
└── resources/         ← Reference materials
```

The Index

shelf/index.md is the first file your agent reads:

```markdown
# ShelfAI Index

## Skills
- **skills/seo_audit.md** – Use when a client requests a site audit. Covers
  technical crawl, Core Web Vitals, internal linking, and content gaps.
- **skills/lead_nurture.md** – Use when following up with a lead. Includes
  timing rules, email templates, and the 72-hour re-engagement trigger.

## Knowledge
- **knowledge/api_docs.md** – Payments API reference. Key gotcha: staging
  returns 200 with error body, don't trust status codes.

## Memory
- **memory/agent/lessons.md** – Staging needs VPN. Screaming Frog misses
  JS-rendered pages on Client B's site.
```

The agent reads the index, matches abstracts to the task, and loads only the matching files. This is the Level 0.5 – richer than a name/description pair, cheaper than loading the full file.
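The index-then-load flow can be sketched in a few lines. This is an illustrative sketch, not ShelfAI's actual parser or ranking: it assumes the entry format shown above ("- **path** – abstract") and uses naive word overlap as the relevance test.

```python
import re
from pathlib import Path

def parse_index(index_path):
    """Parse index.md into (file_path, abstract) pairs.

    Assumes the entry format shown above; wrapped abstracts
    only contribute their first line in this sketch.
    """
    text = Path(index_path).read_text()
    return [
        (m.group(1), m.group(2).strip())
        for m in re.finditer(r"- \*\*(.+?)\*\*\s*[–-]+\s*(.+)", text)
    ]

def match_entries(entries, task, min_overlap=2):
    """Naive relevance: count word overlap between task and abstract."""
    task_words = set(task.lower().split())
    scored = []
    for path, abstract in entries:
        overlap = len(task_words & set(abstract.lower().split()))
        if overlap >= min_overlap:
            scored.append((overlap, path))
    return [path for _, path in sorted(scored, reverse=True)]
```

In practice the matching is done by the LLM itself reading the index, or by QMD's reranker; the point is that the abstract carries enough signal to decide relevance without opening the file.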

The Learning Loop

After each conversation, ShelfAI's session agent:

  1. Analyzes the transcript
  2. Extracts operational lessons, workflow patterns, preference updates
  3. Deduplicates against existing knowledge
  4. Updates memory files and refines index abstracts
  5. QMD re-indexes – better abstracts mean better search next time

```
Session happens
    │
    ├─→ ShelfAI session agent extracts operational lessons
    │     → Updates memory files
    │     → Refines abstracts
    └─→ QMD re-indexes the updated shelf
          → Better abstracts = better reranking

Next session
    ├─→ QMD finds more relevant files
    └─→ ShelfAI provides richer, curated context

Agent performs better → richer sessions → better extractions → loop
```

The key insight: Your agent uses your context daily. It knows which details matter, which skills get called when, which gotchas keep tripping things up. An agent that uses the context writes better retrieval abstracts than a model that just summarizes it.


Agent File Chunking

Monolithic agent instruction files (150-400+ lines) waste tokens loading instructions irrelevant to the current task. ShelfAI's chunking system splits them into modular, selectively loaded chunks – reducing per-run token cost by ~60%.

Two-Layer Approach

  1. Heuristic pre-filter (free): shelfai chunk extracts soul/rules/read-order into always-loaded chunks. Handles the ~35% of chunking that's structurally obvious. Zero LLM cost, safe to run anytime.

  2. LLM semantic pass (~$0.01/agent): The session agent groups remaining sections by deliverable/workflow. Triggered on a weekly cadence. The LLM has full latitude to say "no change needed" when a chunk's size serves the deliverable.
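The heuristic pre-filter in step 1 is essentially structural: classify sections by their headings alone, no LLM required. A minimal sketch of the idea, with hypothetical heading keywords (ShelfAI's actual heuristics may differ):

```python
import re

# Hypothetical keyword lists marking always-loaded material;
# illustrative only, not ShelfAI's real classification rules.
ALWAYS_HEADINGS = {"mission", "role", "identity", "rules", "constraints",
                   "read order", "data sources"}

def prefilter(agent_md):
    """Split a monolithic agent file into (heading, body, chunk_class)
    triples using headings alone – the free, structurally obvious part."""
    sections = re.split(r"^## +(.+)$", agent_md, flags=re.M)
    # re.split with a capture group yields [preamble, h1, body1, h2, body2, ...]
    triples = []
    for heading, body in zip(sections[1::2], sections[2::2]):
        cls = ("always" if any(k in heading.lower() for k in ALWAYS_HEADINGS)
               else "task")
        triples.append((heading.strip(), body.strip(), cls))
    return triples
```

Anything the heuristic can't classify confidently is what the weekly LLM pass groups by deliverable.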

Chunk Structure

```
agents/{id}/
├── AGENT.md              # Thin router (~40 lines) – maps tasks to chunks
├── MEMORY.md             # Learned patterns
├── chunks/
│   ├── soul.md           # Always loaded – mission, role, identity
│   ├── rules.md          # Always loaded – hard constraints
│   ├── read-order.md     # Always loaded – system integration, data sources
│   ├── {task-1}.md       # Loaded when task matches
│   └── {task-2}.md       # Loaded when task matches
```

Chunking CLI

```
# Scan for agents that need chunking
shelfai chunk-scan ./agents

# Preview the pre-filter on a specific agent
shelfai chunk ./agents/18-efficiency/AGENT.md --dry-run

# Write chunk files (backs up original as AGENT.md.pre-chunk)
shelfai chunk ./agents/18-efficiency/AGENT.md --write
```

| Class | When to Load | Examples |
|---|---|---|
| always | Every run | soul, rules, read-order, MEMORY.md |
| task | Current task matches | tiktok, blog-article, daily-scorecard |
| schedule | Time-triggered | weekly-report (Mondays), monthly-review |
| reference | On demand or searched | scoring-formulas, tool-setup |
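The chunk classes amount to a per-run loading policy. A hypothetical sketch of that policy (the manifest format and field names here are invented for illustration; ShelfAI's real chunk metadata may differ):

```python
from datetime import date

# Hypothetical chunk manifest mirroring the class table above.
CHUNKS = {
    "soul.md": {"class": "always"},
    "rules.md": {"class": "always"},
    "read-order.md": {"class": "always"},
    "tiktok.md": {"class": "task", "match": "tiktok"},
    "blog-article.md": {"class": "task", "match": "blog"},
    "weekly-report.md": {"class": "schedule", "weekday": 0},  # Mondays
    "scoring-formulas.md": {"class": "reference"},
}

def chunks_to_load(task, today=None):
    """Return chunk filenames for this run: always-loaded chunks, task
    matches, and schedule-triggered chunks. 'reference' chunks stay out
    of the prompt until searched or requested."""
    today = today or date.today()
    selected = []
    for name, meta in CHUNKS.items():
        cls = meta["class"]
        if cls == "always":
            selected.append(name)
        elif cls == "task" and meta["match"] in task.lower():
            selected.append(name)
        elif cls == "schedule" and today.weekday() == meta["weekday"]:
            selected.append(name)
    return selected
```

The always set stays small (soul, rules, read-order), which is where the per-run token savings come from.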

Quick Start

```
# Install
pip install shelfai==0.2.0a3

# Initialize a shelf
shelfai init --template agent

# Add your agent's knowledge
shelfai add ./my_playbook.md --category skills
shelfai add ./api_docs.md --category knowledge

# Build the index (manual = best quality, auto = faster)
shelfai index --manual    # You write abstracts with retrieval hints
# OR
shelfai index             # Auto-generate abstracts with LLM (~$0.01)

# Register with QMD
qmd collection add ./shelf --name shelf
qmd embed

# After each conversation, extract learnings
shelfai session ./transcript.md
qmd embed                 # Re-index so QMD sees the updates
```

That's it. Your agent now has a knowledge base that improves after every conversation.

Use with Your Agent

```python
from shelfai import Shelf

shelf = Shelf("./shelf")

def run_task(task: str):
    # Find relevant context by matching index abstracts against the task
    relevant = shelf.index.search(task)
    context = "\n".join(shelf.read_file(e.file_path) for e in relevant)
    lessons = shelf.read_file("memory/agent/lessons.md", default="")

    # run_agent is your own runtime's entry point
    return run_agent(f"{context}\n\n{lessons}", task)
```

Post-Session Learning

```python
import subprocess

from shelfai import Shelf
from shelfai.agents.session import SessionManager
from shelfai.providers.anthropic import AnthropicProvider

shelf = Shelf("./shelf")
provider = AnthropicProvider()
manager = SessionManager(shelf, provider)

# After conversation ends
report = manager.process_file("transcript.md")
print(f"Extracted {report.extraction.total_items} learnings")

# Re-index so QMD sees the updates
subprocess.run(["qmd", "embed"])
```

Memory Compaction

Memory files grow as the session agent appends lessons after each conversation. Without compaction, they accumulate duplicates, superseded entries, and stale observations that dilute context quality.

shelfai compact consolidates memory files using heuristic dedup – no LLM needed, safe to run anytime.

```
# Scan all memory files (shelf + agent MEMORY.md)
shelfai compact --shelf ./shelf --agents ./agents --scan

# Preview compaction on a specific file
shelfai compact --file ./shelf/memory/agent/what-works.md

# Apply compaction (backs up originals as .pre-compact)
shelfai compact --shelf ./shelf --agents ./agents --write
```

Removes near-duplicate entries, strips placeholders, archives entries older than 90 days (configurable via --stale-days). Preserves file structure, headings, and tables. Backs up originals before writing.
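The near-duplicate removal can be approximated without an LLM. A rough sketch of the idea (assumed, not ShelfAI's actual algorithm): normalize each entry, then drop later entries that are too similar to one already kept.

```python
import re
from difflib import SequenceMatcher

def normalize(entry):
    """Lowercase, strip markdown markers and punctuation, collapse whitespace."""
    entry = re.sub(r"[*_`#>-]", " ", entry.lower())
    entry = re.sub(r"[^\w\s]", " ", entry)
    return " ".join(entry.split())

def compact_entries(entries, threshold=0.9):
    """Keep the first occurrence of each entry; drop later entries whose
    normalized text is >= threshold similar to one already kept."""
    kept = []
    for entry in entries:
        norm = normalize(entry)
        if not norm:
            continue  # empty after normalization, e.g. a bare bullet
        if any(SequenceMatcher(None, norm, normalize(k)).ratio() >= threshold
               for k in kept):
            continue
        kept.append(entry)
    return kept
```

Keeping the first occurrence preserves the original phrasing and heading placement, which matters since the compactor must not disturb file structure.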


Integrations

ShelfAI is framework-agnostic: it manages plain markdown files and works with any agent runtime.

| Framework | Integration | Status |
|---|---|---|
| Claude Code | Claude skill (skills/claude/) | ✅ Shipped |
| Hermes Agent | Post-run hook + skill structuring (examples/hermes_integration.py) | 📖 Example |
| OpenClaw | ClawHub skill packaging (examples/openclaw_integration.py) | 📖 Example |
| QMD | Direct – ShelfAI curates what QMD indexes | ✅ Shipped |
| Honcho | Complementary – ShelfAI handles ops knowledge, Honcho handles entity memory | Compatible |
| SuperMemory | Complementary – ShelfAI structures docs before ingestion | Compatible |

See examples/ for integration guides.


Built in Production

ShelfAI was built because we needed it. We run a 25-agent content swarm (17 pipeline + 8 oversight) and hit every failure mode:

  • Memory bloat: 37 memory files, 207 entries with 25 near-duplicates poisoning retrieval. shelfai compact cleaned them in one pass.
  • Monolithic configs: Agent files exceeding 400 lines, burning tokens on irrelevant instructions every run. shelfai chunk split them into task-specific modules. ~60% token savings.

CLI Reference

| Command | Description |
|---|---|
| shelfai init | Initialize a new shelf |
| shelfai add <file> | Add a file or URL to the shelf |
| shelfai index | Build/rebuild the index (generate abstracts) |
| shelfai session <file> | Run session agent on a transcript |
| shelfai search <query> | Test abstract matching |
| shelfai status | Show shelf health and stats |
| shelfai prune | Clean up stale memory entries |
| shelfai export | Export shelf as a single file |
| shelfai chunk-scan <dir> | Scan agents directory for chunking candidates |
| shelfai chunk <file> | Run heuristic pre-filter on a monolithic agent file |
| shelfai compact | Consolidate memory files (dedup, archive stale) |
| shelfai review | List or approve staged new-context proposals |

Cost

| Component | Cost |
|---|---|
| ShelfAI | $0 + ~$0.02/session for LLM calls |
| QMD | $0 – fully local |
| Honcho | $0 – open source (or hosted) |
| SuperMemory | Free tier or $19/mo |

Philosophy

  1. Files beat databases for human-scale knowledge. If you can ls it, you understand it.
  2. Agents write better indexes. The thing that uses the context should write the retrieval abstracts.
  3. Transparency beats magic. When retrieval fails, open the file and read why.
  4. Zero infrastructure is the default. Scale up when you need to, not because your tools demand it.

Project Status

v0.2.0-alpha (Experimental)

Core Features:

  • Core CLI (init, add, index, session, search, status, export, prune, review)
  • Session management agent (5-stage pipeline, schema validation, backups)
  • Auto-indexing with LLM providers (Anthropic, OpenAI)
  • Production hardening (path traversal protection, file locking, retry logic)
  • Agent file chunking (chunk-scan + chunk commands, two-layer architecture)
  • Memory compaction (heuristic dedup, stale archival, placeholder cleanup)
  • Integrations: Claude skill, Hermes Agent (example), OpenClaw (example)
  • 163 tests passing โœ“ Apache 2.0 licensed

Roadmap:

  • shelfai register --qmd (one-command QMD setup)
  • MCP server implementation
  • Watch mode (auto-index on file changes)
  • Shelf templates (customer support, content production, analysis, sales)

Contributing

We welcome contributions. ShelfAI is early-stage and there's a lot of surface area.

Areas where help is especially welcome: shelf templates for specific domains, real-world case studies, integration examples for your framework, benchmarks (token cost reductions, retrieval quality), and the shelfai register --qmd CLI command.

See CONTRIBUTING.md for guidelines. If you want to contribute before formal guidelines are up, just open an issue โ€” we're friendly.


License

Apache License 2.0 – see LICENSE for details.
