
llamaindex-soul 🧠

Markdown-native chat storage for LlamaIndex.

Your chat history shouldn't be a black box. llamaindex-soul stores everything in human-readable markdown files:

chat_memory/
├── user1.md
├── user2.md
└── session_abc.md

Human-readable. Git-versionable. Powered by soul-agent.

Install

pip install llamaindex-soul

Quick Start

from llamaindex_soul import SoulChatStore
from llama_index.core.memory import ChatMemoryBuffer

# Create markdown-based chat store
chat_store = SoulChatStore()

# Use with ChatMemoryBuffer
memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)

# Use with an agent
from llama_index.core.agent import FunctionAgent

agent = FunctionAgent(tools=tools, llm=llm)
await agent.run("Hello!", memory=memory)

After running, check chat_memory/user1.md:

# Chat History: user1

## 2026-03-06 19:30:15 UTC
**User:** Hello!

## 2026-03-06 19:30:17 UTC
**Assistant:** Hi there! How can I help you today?
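The storage format is simple enough to reproduce by hand. Here is a minimal stdlib sketch that emits the same layout (`append_entry` is a hypothetical helper for illustration, not the library's actual writer):

```python
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def append_entry(path: Path, role: str, content: str) -> None:
    """Append one chat turn using the markdown layout shown above."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    if not path.exists():
        # First write creates the file with its session heading.
        path.write_text(f"# Chat History: {path.stem}\n", encoding="utf-8")
    with path.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stamp}\n**{role.capitalize()}:** {content}\n")

log = Path(tempfile.mkdtemp()) / "user1.md"
append_entry(log, "user", "Hello!")
append_entry(log, "assistant", "Hi there!")
print(log.read_text(encoding="utf-8"))
```

Because each turn is an append-only markdown block with a timestamp heading, the files diff cleanly under git.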

Features

Multiple Sessions

# Each user/session gets its own file
from llama_index.core.llms import ChatMessage, MessageRole

chat_store = SoulChatStore(storage_dir="./chats")

chat_store.add_message("alice", ChatMessage(role=MessageRole.USER, content="Hi"))
chat_store.add_message("bob", ChatMessage(role=MessageRole.USER, content="Hello"))
# Creates: ./chats/alice.md, ./chats/bob.md

Semantic Search

# Find relevant past conversations
results = chat_store.recall("user1", "database recommendations")
for r in results:
    print(f"[{r['score']:.2f}] {r['content']}")
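Actual retrieval is delegated to soul-agent's hybrid RAG. As a rough mental model of the scored-result shape `recall` returns, here is a naive keyword-overlap ranking (a toy sketch only, not the real ranking):

```python
def naive_recall(history: list[str], query: str, limit: int = 5) -> list[dict]:
    """Score each stored turn by keyword overlap with the query (toy ranking)."""
    terms = set(query.lower().split())
    scored = []
    for content in history:
        words = set(content.lower().split())
        score = len(terms & words) / len(terms) if terms else 0.0
        if score > 0:
            scored.append({"score": score, "content": content})
    # Highest-scoring turns first, capped at `limit` results.
    return sorted(scored, key=lambda r: r["score"], reverse=True)[:limit]

history = [
    "PostgreSQL is a solid database for relational workloads",
    "The weather is nice today",
    "consider SQLite for database recommendations",
]
for r in naive_recall(history, "database recommendations"):
    print(f"[{r['score']:.2f}] {r['content']}")
```

The real implementation uses embedding-based retrieval rather than word overlap, but the returned records carry the same `score`/`content` fields shown here.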

Persistence

# Already persists on every write, but you can force it
chat_store.persist()

# Load from existing directory
chat_store = SoulChatStore.from_persist_path("./chat_memory")

API Reference

SoulChatStore

SoulChatStore(
    storage_dir="./chat_memory",  # Where to store chat files
    use_hybrid=True,              # Enable soul-agent RAG+RLM
    provider="anthropic",         # LLM provider for retrieval
)

Methods

| Method | Description |
| --- | --- |
| `add_message(key, message)` | Add a message to a session |
| `get_messages(key)` | Get all messages for a session |
| `set_messages(key, messages)` | Set messages (overwrites) |
| `delete_messages(key)` | Delete all messages for a session |
| `delete_last_message(key)` | Delete the last message |
| `delete_message(key, idx)` | Delete message at index |
| `get_keys()` | Get all session keys |
| `recall(key, query, limit=5)` | Semantic search over history |
| `persist()` | Force persist to disk |
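Given the file layout shown earlier, `get_messages` amounts to parsing the headings back into (role, content) pairs. A minimal stdlib sketch of that round-trip (`parse_history` is a hypothetical helper, not the package's code):

```python
import re

def parse_history(text: str) -> list[tuple[str, str]]:
    """Parse '**Role:** content' entries back into (role, content) pairs."""
    return [
        (m.group(1).lower(), m.group(2))
        for m in re.finditer(r"\*\*(\w+):\*\* (.+)", text)
    ]

sample = """# Chat History: user1

## 2026-03-06 19:30:15 UTC
**User:** Hello!

## 2026-03-06 19:30:17 UTC
**Assistant:** Hi there! How can I help you today?
"""
print(parse_history(sample))
```

Because the on-disk format is plain markdown, any tool that can read text files (grep, git diff, a pager) can inspect the history without going through the API.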

The Soul Ecosystem

| Package | Framework | PyPI |
| --- | --- | --- |
| `soul-agent` | Core library | `pip install soul-agent` |
| `crewai-soul` | CrewAI | `pip install crewai-soul` |
| `langchain-soul` | LangChain | `pip install langchain-soul` |
| `llamaindex-soul` | LlamaIndex | `pip install llamaindex-soul` |

License

MIT
