Semantic memory search for markdown knowledge bases
memsearch
OpenClaw's memory, everywhere.
https://github.com/user-attachments/assets/31de76cc-81a8-4462-a47d-bd9c394d33e3
💡 Inspired by OpenClaw's memory system, memsearch brings the same markdown-first architecture to a standalone library: same chunking, same chunk ID format. Pluggable into any agent framework, backed by Milvus (local Milvus Lite → Milvus Server → Zilliz Cloud). See it in action with the included Claude Code plugin.
✨ Why memsearch?
- 📦 OpenClaw's memory, everywhere: OpenClaw has one of the best memory designs in open-source AI, with markdown as the single source of truth: simple, human-readable, git-friendly, zero vendor lock-in
- ⚡ Smart dedup: SHA-256 content hashing means unchanged content is never re-embedded (see the sketch after this list)
- 🔄 Live sync: the file watcher auto-indexes on changes and deletes stale chunks when files are removed
- 🧹 Memory compact: LLM-powered summarization compresses old memories, just like OpenClaw's compact cycle
- 🧩 Ready-made Claude Code plugin: a drop-in example of agent memory built on memsearch
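
The dedup step is simple to picture. Here is a minimal sketch of the idea (illustrative only, not memsearch's internal code): hash each chunk's text with SHA-256 and skip any hash the store already knows.

```python
import hashlib

# Illustrative sketch of hash-based dedup. The real pipeline stores the
# hash as the chunk_hash primary key in Milvus; a set stands in for it here.
seen: set[str] = set()

def needs_embedding(chunk_text: str) -> bool:
    chunk_hash = hashlib.sha256(chunk_text.encode("utf-8")).hexdigest()
    if chunk_hash in seen:
        return False   # unchanged content is never re-embedded
    seen.add(chunk_hash)
    return True        # new or edited content gets embedded and upserted
```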
🔍 How It Works

Markdown is the source of truth; the vector store is just a derived index, rebuildable anytime.
```
──── Search ──────────────────────────────────────────────────────────

  "how to configure Redis?"
        │
        ▼
  ┌───────────┐      ┌───────────────────┐      ┌────────────────────┐
  │   Embed   │─────▶│ Cosine similarity │─────▶│   Top-K results    │
  │   query   │      │     (Milvus)      │      │  with source info  │
  └───────────┘      └───────────────────┘      └────────────────────┘

──── Ingest ──────────────────────────────────────────────────────────

  MEMORY.md
  memory/2026-02-09.md       ┌────────────┐      ┌─────────────────┐
  memory/2026-02-08.md ─────▶│  Chunker   │─────▶│      Dedup      │
                             │ (heading,  │      │ (chunk_hash PK) │
                             │ paragraph) │      └────────┬────────┘
                             └────────────┘               │
                                          new chunks only │
                                                          ▼
                                                 ┌────────────────┐
                                                 │    Embed &     │
                                                 │ Milvus upsert  │
                                                 └────────────────┘

──── Watch ───────────────────────────────────────────────────────────

  File watcher (1500ms debounce) ──▶ auto re-index / delete stale

──── Compact ─────────────────────────────────────────────────────────

  Retrieve chunks ──▶ LLM summarize ──▶ write memory/YYYY-MM-DD.md
```
🔒 The entire pipeline runs locally by default: your data never leaves your machine unless you choose a remote Milvus backend or a cloud embedding provider.
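
Because the vector store is only a derived index, it can be deleted and rebuilt at any time. For example, with the default Milvus Lite backend (database path from the table in the Milvus Backend section below):

```bash
rm ~/.memsearch/milvus.db    # drop the derived index; markdown is untouched
memsearch index ./memory/    # rebuild the index from the markdown files
```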
🧩 Claude Code Plugin
memsearch ships with a Claude Code plugin: a real-world example of OpenClaw's memory running outside OpenClaw. It gives Claude automatic persistent memory across sessions: every session is summarized to markdown, every prompt triggers a semantic search, and a background watcher keeps the index in sync. No commands to learn, no manual saving; just install and go.
```bash
# 1. Install the memsearch CLI
pip install memsearch

# 2. Set your embedding API key (OpenAI is the default provider)
export OPENAI_API_KEY="sk-..."

# 3. In Claude Code, add the marketplace and install the plugin
/plugin marketplace add zilliztech/memsearch
/plugin install memsearch

# 4. Restart Claude Code for the plugin to take effect, then start chatting!
claude
```
🔧 Development mode: install from a local clone

```bash
git clone https://github.com/zilliztech/memsearch.git
pip install -e ./memsearch
claude --plugin-dir ./memsearch/ccplugin
```
```
Session start ──▶ start memsearch watch (singleton) ──▶ inject recent memories
      │
User prompt  ──▶ memsearch search ──▶ inject relevant memories
      │
Claude stops ──▶ haiku summary ──▶ write .memsearch/memory/YYYY-MM-DD.md
      │                                      │
Session end  ──▶ stop watch        watch auto-indexes ◀──┘
```
Under the hood: 4 shell hooks + 1 watch process, all calling the memsearch CLI. Memories are transparent .md files: human-readable, git-friendly, rebuildable. See ccplugin/README.md for the full architecture, hook details, progressive disclosure model, and a comparison with claude-mem.
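
As a rough sketch of the hook shape (illustrative only; the real hook scripts and field names live in ccplugin/), a prompt-time hook can read the prompt from the JSON payload Claude Code passes on stdin and shell out to the CLI:

```bash
#!/usr/bin/env bash
# Illustrative hook sketch, not the shipped implementation.
prompt=$(jq -r '.prompt')     # assumes the prompt field of the stdin JSON
memsearch search "$prompt"    # output is injected back into the session
```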
📦 Installation

```bash
pip install memsearch
```

Additional embedding providers:

```bash
pip install "memsearch[google]"   # Google Gemini
pip install "memsearch[voyage]"   # Voyage AI
pip install "memsearch[ollama]"   # Ollama (local)
pip install "memsearch[local]"    # sentence-transformers (local, no API key)
pip install "memsearch[all]"      # Everything
```
🐍 Python API: Build an Agent with Memory

The example below shows a complete agent loop with memory: save knowledge to markdown, index it, and recall it later via semantic search.
```python
import asyncio
from datetime import date
from pathlib import Path

from openai import OpenAI

from memsearch import MemSearch

MEMORY_DIR = "./memory"

llm = OpenAI()                      # your LLM client
ms = MemSearch(paths=[MEMORY_DIR])  # memsearch handles the rest


def save_memory(content: str):
    """Append a note to today's memory log (OpenClaw-style daily markdown)."""
    p = Path(MEMORY_DIR) / f"{date.today()}.md"
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "a") as f:
        f.write(f"\n{content}\n")


async def agent_chat(user_input: str) -> str:
    # 1. Recall: search past memories for relevant context
    memories = await ms.search(user_input, top_k=3)
    context = "\n".join(f"- {m['content'][:200]}" for m in memories)

    # 2. Think: call the LLM with memory context
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"You have these memories:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    answer = resp.choices[0].message.content

    # 3. Remember: save this exchange and index it
    save_memory(f"## {user_input}\n{answer}")
    await ms.index()
    return answer


async def main():
    # Seed some knowledge
    save_memory("## Team\n- Alice: frontend lead\n- Bob: backend lead")
    save_memory("## Decision\nWe chose Redis for caching over Memcached.")
    await ms.index()

    # The agent can now recall those memories
    print(await agent_chat("Who is our frontend lead?"))
    print(await agent_chat("What caching solution did we pick?"))


asyncio.run(main())
```
📘 Anthropic Claude example (click to expand)

```bash
pip install memsearch anthropic
```

```python
import asyncio
from datetime import date
from pathlib import Path

from anthropic import Anthropic

from memsearch import MemSearch

MEMORY_DIR = "./memory"

llm = Anthropic()
ms = MemSearch(paths=[MEMORY_DIR])


def save_memory(content: str):
    p = Path(MEMORY_DIR) / f"{date.today()}.md"
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "a") as f:
        f.write(f"\n{content}\n")


async def agent_chat(user_input: str) -> str:
    # 1. Recall
    memories = await ms.search(user_input, top_k=3)
    context = "\n".join(f"- {m['content'][:200]}" for m in memories)

    # 2. Think: call Claude with memory context
    resp = llm.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        system=f"You have these memories:\n{context}",
        messages=[{"role": "user", "content": user_input}],
    )
    answer = resp.content[0].text

    # 3. Remember
    save_memory(f"## {user_input}\n{answer}")
    await ms.index()
    return answer


async def main():
    save_memory("## Team\n- Alice: frontend lead\n- Bob: backend lead")
    await ms.index()
    print(await agent_chat("Who is our frontend lead?"))


asyncio.run(main())
```
🦙 Ollama example, fully local, no API key (click to expand)

```bash
pip install "memsearch[ollama]"
ollama pull nomic-embed-text   # embedding model
ollama pull llama3.2           # chat model
```

```python
import asyncio
from datetime import date
from pathlib import Path

from ollama import chat

from memsearch import MemSearch

MEMORY_DIR = "./memory"

ms = MemSearch(paths=[MEMORY_DIR], embedding_provider="ollama")


def save_memory(content: str):
    p = Path(MEMORY_DIR) / f"{date.today()}.md"
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "a") as f:
        f.write(f"\n{content}\n")


async def agent_chat(user_input: str) -> str:
    # 1. Recall
    memories = await ms.search(user_input, top_k=3)
    context = "\n".join(f"- {m['content'][:200]}" for m in memories)

    # 2. Think: call Ollama locally
    resp = chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": f"You have these memories:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    answer = resp.message.content

    # 3. Remember
    save_memory(f"## {user_input}\n{answer}")
    await ms.index()
    return answer


async def main():
    save_memory("## Team\n- Alice: frontend lead\n- Bob: backend lead")
    await ms.index()
    print(await agent_chat("Who is our frontend lead?"))


asyncio.run(main())
```
🗄️ Milvus Backend

memsearch supports three Milvus deployment modes; just change `milvus_uri` (see the sketch after the table):

| Mode | `milvus_uri` | Best for |
|---|---|---|
| Milvus Lite (default) | `~/.memsearch/milvus.db` | Personal use, dev (zero config) |
| Milvus Server | `http://localhost:19530` | Multi-agent, team environments |
| Zilliz Cloud | `https://in03-xxx.api.gcp-us-west1.zillizcloud.com` | Production, fully managed |
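
In the Python API this maps naturally to the constructor; a sketch, assuming `milvus_uri` is accepted as a `MemSearch` keyword argument:

```python
from memsearch import MemSearch

# Same code, different backend: swap the default Milvus Lite file
# for a Milvus Server URI (parameter name assumed from the table above).
ms = MemSearch(paths=["./memory"], milvus_uri="http://localhost:19530")
```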
📖 Code examples and setup details → Getting Started → Milvus Backends
🖥️ CLI Usage

```bash
memsearch index ./memory/                           # index markdown files
memsearch search "how to configure Redis caching"   # semantic search
memsearch watch ./memory/                           # auto-index on file changes
memsearch compact                                   # LLM-powered memory summarization
memsearch config init                               # interactive config wizard
memsearch stats                                     # show index statistics
```
📖 Full command reference with all flags and examples → CLI Reference
⚙️ Configuration

Settings are resolved in priority order (lowest → highest):

1. Built-in defaults
2. Global `~/.memsearch/config.toml`
3. Project `.memsearch.toml`
4. CLI flags

API keys for embedding/LLM providers are read from standard environment variables (`OPENAI_API_KEY`, `GOOGLE_API_KEY`, `VOYAGE_API_KEY`, `ANTHROPIC_API_KEY`, etc.).
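
A project-level `.memsearch.toml` might look like the sketch below. The key names here are illustrative; see the configuration docs for the real schema.

```toml
# Hypothetical .memsearch.toml; key names are illustrative.
[embedding]
provider = "openai"
model = "text-embedding-3-small"   # default model from the provider table

[milvus]
uri = "~/.memsearch/milvus.db"     # default Milvus Lite database
```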
📖 Config wizard, TOML examples, and all settings → Getting Started → Configuration
🔌 Embedding Providers

| Provider | Install | Default Model |
|---|---|---|
| OpenAI | `memsearch` (included) | `text-embedding-3-small` |
| Google | `memsearch[google]` | `gemini-embedding-001` |
| Voyage | `memsearch[voyage]` | `voyage-3-lite` |
| Ollama | `memsearch[ollama]` | `nomic-embed-text` |
| Local | `memsearch[local]` | `all-MiniLM-L6-v2` |
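
Provider selection follows the pattern shown in the Ollama example above; the fully offline sentence-transformers backend presumably works the same way:

```python
from memsearch import MemSearch

# Local sentence-transformers embeddings, no API key required.
# The "local" provider string is an assumption based on memsearch[local].
ms = MemSearch(paths=["./memory"], embedding_provider="local")
```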
📖 Provider setup and env vars → CLI Reference → Embedding Provider Reference
💾 OpenClaw Compatibility

memsearch is a drop-in memory backend for projects following OpenClaw's memory architecture: same memory layout, chunk ID format, dedup strategy, and compact cycle. If you're already using OpenClaw's memory directory layout, just point memsearch at it; no migration needed.
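
For example, if your agent already keeps an OpenClaw-style memory directory (the path below is illustrative):

```bash
# Index and search an existing OpenClaw-style memory layout in place.
memsearch index ~/my-agent/memory/
memsearch search "what did we decide about caching?"
```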
📖 Full compatibility matrix → Architecture → Inspired by OpenClaw
📄 License
MIT