
Persistent identity and memory for any LLM agent — markdown-native, provider-agnostic


soul.py 🧠


Your AI forgets everything when the conversation ends. soul.py fixes that.

from hybrid_agent import HybridAgent

agent = HybridAgent()
agent.ask("My name is Prahlad and I'm building an AI research lab.")

# New process. New session. Memory persists.
agent = HybridAgent()
result = agent.ask("What do you know about me?")
print(result["answer"])
# → "You're Prahlad, building an AI research lab."

No database. No server. Just markdown files and smart retrieval.


▶ Live Demos

| Version | Demo | What it shows |
|---|---|---|
| v0.1 | soul.themenonlab.com | Memory persists across sessions |
| v1.0 | soulv1.themenonlab.com | Semantic RAG retrieval |
| v2.0 | soulv2.themenonlab.com | Auto query routing: RAG + RLM |
| Ask Darwin | soul-book.themenonlab.com | 📖 Book companion — watch routing decisions live |

Install

pip install soul-agent              # core package
pip install soul-agent[anthropic]   # with the Anthropic provider extra
pip install soul-agent[openai]      # with the OpenAI provider extra

Quickstart

soul init   # creates SOUL.md and MEMORY.md
# v0.1 — simple markdown memory (great starting point)
from soul import Agent
agent = Agent(provider="anthropic")
agent.ask("Remember this.")

# v2.0 — automatic RAG + RLM routing (this repo's default)
from hybrid_agent import HybridAgent
agent = HybridAgent()  # auto-detects best retrieval per query
result = agent.ask("What do you know about me?")
print(result["answer"])
print(result["route"])   # "RAG" or "RLM"

How it works

soul.py uses two markdown files as persistent state:

| File | Purpose |
|---|---|
| SOUL.md | Identity — who the agent is, how it behaves |
| MEMORY.md | Memory — timestamped log of every exchange |
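
For illustration, here is a minimal sketch of what the two files could look like; the exact layout soul.py writes may differ:

```markdown
<!-- SOUL.md: identity -->
# Soul
You are a concise research assistant for Prahlad's AI lab.

<!-- MEMORY.md: append-only log of exchanges -->
# Memory
## 2026-01-10 14:32
**User:** My name is Prahlad and I'm building an AI research lab.
**Agent:** Got it. I'll remember that.
```

Because both files are plain markdown, you can open, edit, or `git diff` them directly.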

v2.0 adds a query router that automatically dispatches to the right retrieval strategy:

Your query
    ↓
Router (fast LLM call)
    ├── FOCUSED  (~90%) → RAG — vector search, sub-second
    └── EXHAUSTIVE (~10%) → RLM — recursive synthesis, thorough

Architecture based on *RAG + RLM: The Complete Knowledge Base Architecture*.
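
The dispatch above can be sketched as follows. This is an illustrative heuristic only: the real soul.py router classifies the query with a fast LLM call, and the hint list and `route` function below are hypothetical names, shown just to make the dispatch shape concrete.

```python
# Illustrative routing heuristic (NOT soul.py's actual router, which
# uses a fast LLM call to classify each query).

EXHAUSTIVE_HINTS = ("summarize", "everything", "entire", "full history")

def route(query: str) -> str:
    """Return "RLM" for exhaustive queries, "RAG" for focused ones."""
    q = query.lower()
    if any(hint in q for hint in EXHAUSTIVE_HINTS):
        return "RLM"  # recursive synthesis over the whole memory log
    return "RAG"      # sub-second vector search over top-k chunks

print(route("What is my name?"))                        # → RAG
print(route("Summarize everything you know about me"))  # → RLM
```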


Branches

| Branch | Description | Best for |
|---|---|---|
| main | v2.0 — RAG + RLM hybrid (default) | Production use |
| v2.0-rag-rlm | Same as main, versioned | Pinning to v2 |
| v1.0-rag | RAG only, no RLM | Simpler setup |
| v0.1-stable | Pure markdown, zero deps | Learning / prototyping |

v2.0 API

result = agent.ask("What is my name?")

result["answer"]        # the response
result["route"]         # "RAG" or "RLM"
result["router_ms"]     # router latency
result["retrieval_ms"]  # retrieval latency
result["total_ms"]      # total latency
result["rag_context"]   # retrieved chunks (RAG path)
result["rlm_meta"]      # chunk stats (RLM path)
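
As an example of consuming these fields, the snippet below builds a one-line latency log from a mocked result dict shaped like the list above. The dict values are made up and `describe` is a hypothetical helper, not part of the package:

```python
# Mocked return value matching the documented v2.0 result fields.
result = {
    "answer": "You're Prahlad, building an AI research lab.",
    "route": "RAG",
    "router_ms": 42,
    "retrieval_ms": 180,
    "total_ms": 950,
    "rag_context": ["[2026-01-10] My name is Prahlad ..."],
    "rlm_meta": None,
}

def describe(result: dict) -> str:
    """One-line summary of where the time went, e.g. for logging."""
    return (f"{result['route']} answered in {result['total_ms']} ms "
            f"(router {result['router_ms']} ms, "
            f"retrieval {result['retrieval_ms']} ms)")

print(describe(result))
# → RAG answered in 950 ms (router 42 ms, retrieval 180 ms)
```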

v2.0 Setup

agent = HybridAgent(
    soul_path="SOUL.md",
    memory_path="MEMORY.md",
    mode="auto",                    # "auto" | "rag" | "rlm"
    qdrant_url="...",               # or set QDRANT_URL env var
    qdrant_api_key="...",           # or QDRANT_API_KEY
    azure_embedding_endpoint="...", # or AZURE_EMBEDDING_ENDPOINT
    azure_embedding_key="...",      # or AZURE_EMBEDDING_KEY
    k=5,                            # RAG retrieval count
)

Falls back to BM25 (keyword) retrieval if Qdrant/Azure are not configured.
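
That fallback condition can be expressed as a small check. This mirrors the documented behavior under the assumption that all four settings above are needed for vector search; `vector_search_available` is a hypothetical helper for illustration, not a function the package exports:

```python
def vector_search_available(env: dict) -> bool:
    """True only when every setting needed for Qdrant + Azure embeddings is present."""
    required = ("QDRANT_URL", "QDRANT_API_KEY",
                "AZURE_EMBEDDING_ENDPOINT", "AZURE_EMBEDDING_KEY")
    return all(env.get(name) for name in required)

# Partial configuration means the agent uses the BM25 keyword fallback.
print(vector_search_available({"QDRANT_URL": "https://my-qdrant.example"}))  # → False
```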


Why not LangChain / LlamaIndex / MemGPT?

Those are orchestration frameworks. soul.py is a primitive — persistent identity and memory you can drop into anything you're building.

  • No framework lock-in — works with any LLM provider
  • Human-readable — SOUL.md and MEMORY.md are plain text
  • Version-controllable — git diff your agent's memories
  • Composable — use just the parts you need

Roadmap

See ROADMAP.md for planned features and how to contribute.


License

MIT

Citation

@software{menon2026soul,
  author = {Menon, Prahlad G.},
  title  = {soul.py: Persistent Identity and Memory for LLM Agents},
  year   = {2026},
  url    = {https://github.com/menonpg/soul.py}
}
