The soul ecosystem for LlamaIndex: persistent memory, identity, database schema intelligence, and SoulMate API integration.
llamaindex-soul 🧠
The soul ecosystem for LlamaIndex.
One package, full stack:
- Persistent Memory – Markdown-native, git-versionable, human-readable
- Hybrid Retrieval – RAG + RLM via soul-agent
- Database Intelligence – Auto-generated semantic layers via soul-schema
- Managed Cloud Option – SoulMate handles memory infrastructure for you
Choose Your Setup
| Setup | Best For | Storage |
|---|---|---|
| Local (default) | Development, git-tracked projects | Markdown files |
| SoulMate (managed) | Production, teams, zero-infra | Cloud API (we handle it) |
Both use the same soul-agent RAG+RLM under the hood.
Install
```shell
pip install llamaindex-soul
```
This automatically installs:
- soul-agent – Hybrid RAG+RLM memory
- soul-schema – Database semantic layer generator
Quick Start
Basic Memory
```python
from llama_index.core.agent import FunctionAgent
from llama_index.core.memory import ChatMemoryBuffer

from llamaindex_soul import SoulChatStore

# Create markdown-based chat storage with full RAG+RLM
chat_store = SoulChatStore()
memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)

# Use it with your agent (tools and llm defined elsewhere)
agent = FunctionAgent(tools=tools, llm=llm)
await agent.run("Hello!", memory=memory)
```
Your chat history is stored in markdown files – human-readable, git-versionable, no database required.
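To make the "markdown-native" idea concrete, here is a minimal, self-contained sketch of what markdown-backed chat persistence can look like. The file layout below (one `##` heading per message, appended to a per-user file) is an assumption for illustration only, not the actual format SoulChatStore uses:

```python
# Illustrative sketch only: the real SoulChatStore layout is defined by the
# package. One "## role" heading per message, appended to a per-user file.
import tempfile
from pathlib import Path

def append_message(store_dir: str, key: str, role: str, content: str) -> Path:
    """Append one chat message to a per-user markdown file."""
    path = Path(store_dir) / f"{key}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"## {role}\n\n{content}\n\n")
    return path

def load_messages(path: Path) -> list[tuple[str, str]]:
    """Parse the markdown file back into (role, content) pairs."""
    messages = []
    for block in path.read_text(encoding="utf-8").split("## ")[1:]:
        role, _, body = block.partition("\n")
        messages.append((role.strip(), body.strip()))
    return messages

store_dir = tempfile.mkdtemp()
path = append_message(store_dir, "user1", "user", "Hello!")
append_message(store_dir, "user1", "assistant", "Hi there!")
print(load_messages(path))
# → [('user', 'Hello!'), ('assistant', 'Hi there!')]
```

Because the store is plain text, the resulting `user1.md` can be inspected, diffed, and committed like any other file in the repository.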
Semantic Search
```python
# Search past conversations
results = chat_store.recall("user1", "What did we discuss about databases?")
for result in results:
    print(f"[{result['score']:.2f}] {result['content']}")
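The shape of those results (a similarity score plus the matched content) can be illustrated with a toy, self-contained ranker. This bag-of-words cosine similarity is a stand-in for demonstration only; the real soul-agent retrieval uses proper RAG+RLM machinery, and the `recall` function below is hypothetical:

```python
# Toy illustration of score-ranked recall. Not the package's implementation:
# it just shows the score + content result shape the recall() call returns.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(history: list[str], query: str, top_k: int = 2) -> list[dict]:
    """Score every past message against the query, return the best matches."""
    q = Counter(query.lower().split())
    scored = [
        {"score": cosine(Counter(msg.lower().split()), q), "content": msg}
        for msg in history
    ]
    return sorted(scored, key=lambda r: r["score"], reverse=True)[:top_k]

history = [
    "We compared Postgres and SQLite for the analytics database",
    "Lunch is at noon tomorrow",
    "The database migration is scheduled for Friday",
]
for r in recall(history, "what did we discuss about the database"):
    print(f"[{r['score']:.2f}] {r['content']}")
```

The two database-related messages rank first; the unrelated lunch message scores zero and drops out.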
Database Schema Intelligence
Give your agents understanding of database structure:
```python
from llamaindex_soul import SchemaMemory

# Connect to any SQLAlchemy-compatible database
schema = SchemaMemory("postgresql://user:pass@host/db")

# Auto-generate semantic descriptions using LLM
schema.generate()

# Get context for natural language queries
context = schema.context_for("Show me revenue by region")
```
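A "semantic layer" here means LLM-ready text describing the tables relevant to a question. The sketch below shows one plausible shape for such context; the table metadata, `context_for` helper, and output format are all hypothetical stand-ins, not soul-schema's actual output:

```python
# Hypothetical sketch of a generated semantic layer. The actual soul-schema
# output format is defined by the package; this only illustrates the concept.
tables = {
    "orders": {
        "columns": {"id": "integer", "region": "text", "amount": "numeric"},
        "description": "One row per customer order, with sale amount and region.",
    },
    "regions": {
        "columns": {"code": "text", "name": "text"},
        "description": "Lookup table mapping region codes to display names.",
    },
}

def context_for(tables: dict, question: str) -> str:
    """Render introspected table metadata as context for a NL query."""
    lines = [f"Question: {question}", "Relevant schema:"]
    for name, meta in tables.items():
        cols = ", ".join(f"{c} {t}" for c, t in meta["columns"].items())
        lines.append(f"- {name}({cols}): {meta['description']}")
    return "\n".join(lines)

print(context_for(tables, "Show me revenue by region"))
```

Prepending a block like this to a text-to-SQL prompt is what lets an agent answer "Show me revenue by region" without ever seeing the raw DDL.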
SoulMate: Managed Memory (Recommended for Production)
Don't want to manage files? SoulMate is the managed version – same RAG+RLM, zero infrastructure:
```python
from llama_index.core.memory import ChatMemoryBuffer

from llamaindex_soul import SoulMateChatStore

# Connect to SoulMate cloud
chat_store = SoulMateChatStore(api_key="your-key")
memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)

# Same interface, managed infrastructure
await agent.run("Hello!", memory=memory)
```
Factory Function
```python
from llamaindex_soul import create_chat_store

# Local file-based (default)
store = create_chat_store()

# SoulMate managed
store = create_chat_store("soulmate", api_key="...")
```
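The factory-dispatch pattern behind this API can be sketched with stand-in classes. `LocalStore` and `ManagedStore` below are placeholders, not the package's real classes, and this is not its actual implementation:

```python
# Sketch of the factory pattern described above, with stand-in classes.
# The real create_chat_store dispatches to SoulChatStore / SoulMateChatStore.
class LocalStore:
    """Placeholder for a file-backed store."""
    def __init__(self, **kwargs):
        self.kind = "local"

class ManagedStore:
    """Placeholder for a cloud-backed store that requires an API key."""
    def __init__(self, api_key: str, **kwargs):
        self.kind = "soulmate"
        self.api_key = api_key

def create_chat_store(backend: str = "local", **kwargs):
    """Map a backend name to a store class, forwarding keyword arguments."""
    backends = {"local": LocalStore, "soulmate": ManagedStore}
    if backend not in backends:
        raise ValueError(f"unknown backend: {backend!r}")
    return backends[backend](**kwargs)

print(create_chat_store().kind)                         # → local
print(create_chat_store("soulmate", api_key="k").kind)  # → soulmate
```

A string-keyed factory like this keeps calling code identical across backends: switching from local files to the managed service is a one-argument change.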
Architecture
```
┌──────────────────────────────────────────────────────────┐
│                     llamaindex-soul                      │
│                                                          │
│  ┌─────────────┐        ┌─────────────────┐              │
│  │SoulChatStore│        │SoulMateChatStore│              │
│  │   (local)   │        │    (managed)    │              │
│  └──────┬──────┘        └────────┬────────┘              │
│         │                        │                       │
│         ▼                        ▼                       │
│  ┌────────────────────────────────────────────────────┐  │
│  │               soul-agent (RAG + RLM)               │  │
│  └────────────────────────────────────────────────────┘  │
│                                                          │
│  ┌───────────────────┐    ┌──────────────────────┐       │
│  │    soul-schema    │    │     SoulMate API     │       │
│  │ (db intelligence) │    │   (managed memory)   │       │
│  └───────────────────┘    └──────────────────────┘       │
└──────────────────────────────────────────────────────────┘
```
Part of the Soul Ecosystem
- soul-agent – Core RAG + RLM library
- soul-schema – Database semantic layer generator
- crewai-soul – CrewAI integration
- langchain-soul – LangChain integration
- llamaindex-soul – LlamaIndex integration (this package)
- SoulMate – Managed cloud service
License
MIT