
memsearch

Cross-platform semantic memory for AI coding agents.


memsearch demo

Why memsearch?

  • ๐ŸŒ All Platforms, One Memory โ€” memories flow across Claude Code, OpenClaw, OpenCode, and Codex CLI. A conversation in one agent becomes searchable context in all others โ€” no extra setup
  • ๐Ÿ‘ฅ For Agent Users, install a plugin and get persistent memory with zero effort; for Agent Developers, use the full CLI and Python API to build memory and harness engineering into your own agents
  • ๐Ÿ“„ Markdown is the source of truth โ€” inspired by OpenClaw. Your memories are just .md files โ€” human-readable, editable, version-controllable. Milvus is a "shadow index": a derived, rebuildable cache
  • ๐Ÿ” Progressive retrieval, hybrid search, smart dedup, live sync โ€” 3-layer recall (search โ†’ expand โ†’ transcript); dense vector + BM25 sparse + RRF reranking; SHA-256 content hashing skips unchanged content; file watcher auto-indexes in real time

🧑‍💻 For Agent Users

Pick your platform, install the plugin, and you're done. Each plugin captures conversations automatically and provides semantic recall with zero configuration.

For Claude Code Users

# Install
/plugin marketplace add zilliztech/memsearch
/plugin install memsearch
# Restart Claude Code to activate the plugin

After restarting, just chat with Claude Code as usual. The plugin captures every conversation turn automatically.

Verify it's working. After a few conversations, check your memory files:

ls .memsearch/memory/          # you should see daily .md files
cat .memsearch/memory/$(date +%Y-%m-%d).md

Recall memories. Two ways to trigger it:

/memory-recall what did we discuss about Redis?

Or just ask naturally; Claude auto-invokes the skill when it senses the question needs history:

We discussed Redis caching before, what was the TTL we chose?

📖 Claude Code Plugin docs · Troubleshooting

For Codex CLI Users

# Install
git clone --depth 1 https://github.com/zilliztech/memsearch.git
bash memsearch/plugins/codex/scripts/install.sh
codex --yolo  # needed for ONNX model network access

After installing, chat as usual. Hooks capture and summarize each turn.

Verify it's working:

ls .memsearch/memory/

Recall memories using the skill:

$memory-recall what did we discuss about deployment?

📖 Codex CLI Plugin docs

For OpenClaw Users

# Install from ClawHub
openclaw plugins install --force clawhub:memsearch
openclaw config set plugins.entries.memsearch.hooks.allowConversationAccess true
openclaw config set plugins.entries.memsearch.hooks.allowPromptInjection true
openclaw gateway restart

After installing, chat in TUI as usual. The plugin captures each turn automatically.

Verify it's working. Memory files are stored in your agent's workspace:

# For the main agent:
ls ~/.openclaw/workspace/.memsearch/memory/
# For other agents (e.g. work):
ls ~/.openclaw/workspace-work/.memsearch/memory/

Recall memories. Two ways to trigger it:

/memory-recall what was the batch size limit we set?

Or just ask naturally; the LLM auto-invokes memory tools when it senses the question needs history:

We discussed batch size limits before, what did we decide?

📖 OpenClaw Plugin docs · Browse on ClawHub

For OpenCode Users

// In ~/.config/opencode/opencode.json
{ "plugin": ["@zilliz/memsearch-opencode"] }

After installing, chat in TUI as usual. A background daemon captures conversations.

Verify it's working:

ls .memsearch/memory/    # daily .md files appear after a few conversations

Recall memories. Two ways to trigger it:

/memory-recall what did we discuss about authentication?

Or just ask naturally; the LLM auto-invokes memory tools when it senses the question needs history:

We discussed the authentication flow before, what was the approach?

📖 OpenCode Plugin docs

โš™๏ธ Configuration (all platforms)

All plugins share the same memsearch backend. Configure once, works everywhere.

Embedding

Defaults to ONNX bge-m3, which runs locally on CPU: no API key, no cost. On first launch the model (~558 MB) is downloaded from HuggingFace Hub.

memsearch config set embedding.provider onnx     # default: local, free
memsearch config set embedding.provider openai   # needs OPENAI_API_KEY
memsearch config set embedding.provider ollama   # local, any model

All providers and models: Configuration โ€” Embedding Provider

Milvus Backend

Just change milvus.uri (and optionally milvus.token) to switch between deployment modes:

Milvus Lite (default): zero config, single file. Great for getting started:

# Works out of the box, no setup needed
memsearch config get milvus.uri   # → ~/.memsearch/milvus.db

โญ Zilliz Cloud (recommended) โ€” fully managed, free tier available โ€” sign up ๐Ÿ‘‡:

memsearch config set milvus.uri "https://in03-xxx.api.gcp-us-west1.zillizcloud.com"
memsearch config set milvus.token "your-api-key"
โญ Sign up for a free Zilliz Cloud cluster

You can sign up on Zilliz Cloud to get a free cluster and API key.

Sign up and get API key

Self-hosted Milvus Server (Docker): for advanced users.

For multi-user or team environments with a dedicated Milvus instance. Requires Docker. See the official installation guide.

memsearch config set milvus.uri http://localhost:19530

📖 Full configuration guide: Configuration · Platform comparison

Capture Summarization Model

Each plugin keeps its default capture summarization model unless you override it explicitly:

memsearch config set plugins.codex.summarize.model gpt-5.1-codex-mini
memsearch config set plugins.opencode.summarize.model anthropic/claude-haiku

Plugin-specific summarize settings do not fall back to llm.model; leave them unset to keep each plugin's default.

What can you use it for?

  • Resume debugging threads: ask how a similar Redis, Docker, database, or deployment issue was fixed last time.
  • Recover decision rationale: find why the project chose one architecture, library, migration path, or API design over another.
  • Trace feature history: understand how a feature evolved across sessions, including the files changed and tradeoffs discussed.
  • Do code archaeology: ask when and why a module, config, or workflow was changed before touching it again.
  • Find the right session to resume: ask which previous conversation covered a topic, recover the relevant context, and continue from there.
  • Carry context across agents: keep Claude Code, Codex CLI, OpenClaw, and OpenCode working from the same project memory.

๐Ÿ› ๏ธ For Agent Developers

Beyond the ready-to-use plugins, memsearch provides a complete CLI and Python API for building memory into your own agents. Whether you're adding persistent context to a custom agent, building a memory-augmented RAG pipeline, or doing harness engineering, the same core engine that powers the plugins is available as a library.

๐Ÿ—๏ธ Architecture Overview

┌──────────────────────────────────────────────────────────────┐
│                  🧑‍💻 For Agent Users (Plugins)                │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐ ┌──────┐ │
│  │ Claude   │ │ OpenClaw │ │ OpenCode │ │ Codex  │ │ Your │ │
│  │ Code     │ │ Plugin   │ │ Plugin   │ │ Plugin │ │ App  │ │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └───┬────┘ └──┬───┘ │
│       └────────────┴────────────┴───────────┴────────┘      │
├────────────────────────────┬─────────────────────────────────┤
│  🛠️ For Agent Developers   │  Build your own with ↓          │
│  ┌─────────────────────────┴──────────────────────────────┐  │
│  │           memsearch CLI / Python API                   │  │
│  │      index · search · expand · watch · compact         │  │
│  └─────────────────────────┬──────────────────────────────┘  │
│  ┌─────────────────────────┴──────────────────────────────┐  │
│  │           Core: Chunker → Embedder → Milvus            │  │
│  │        Hybrid Search (BM25 + Dense + RRF)              │  │
│  └────────────────────────────────────────────────────────┘  │
├──────────────────────────────────────────────────────────────┤
│  📄 Markdown Files (Source of Truth)                         │
│  memory/2026-03-27.md · memory/2026-03-26.md · ...           │
└──────────────────────────────────────────────────────────────┘

Plugins sit on top of the CLI/API layer. The API handles indexing, searching, and Milvus sync. Markdown files are always the source of truth; Milvus is a rebuildable shadow index. Everything below the plugin layer is what you use as an agent developer.

How Plugins Work (Claude Code as example)

Capture runs after each conversation turn:

User asks question → Agent responds → Stop hook fires
                                          │
                     ┌────────────────────┘
                     ▼
              Parse last turn
                     │
                     ▼
         LLM summarizes (haiku)
         "- User asked about X."
         "- Claude did Y."
                     │
                     ▼
         Append to memory/2026-03-27.md
         with <!-- session:UUID --> anchor
                     │
                     ▼
         memsearch index → Milvus
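
In plain Python, that capture path boils down to: append the summarized turn to today's .md file with a session anchor, then reindex. A hedged sketch (the anchor format and file naming come from the diagram above; the helper itself is illustrative, not the plugin's source):

# Illustrative capture step: append a summarized turn, then reindex.
import subprocess
import uuid
from datetime import date
from pathlib import Path

def capture_turn(summary: str, memory_dir: str = ".memsearch/memory") -> None:
    session_id = uuid.uuid4()  # real plugins reuse the agent's session ID
    path = Path(memory_dir) / f"{date.today()}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "a") as f:
        f.write(f"\n<!-- session:{session_id} -->\n{summary}\n")
    subprocess.run(["memsearch", "index", memory_dir], check=True)

capture_turn("- User asked about X.\n- Claude did Y.")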

Recall is a 3-layer progressive search:

User: "What did we discuss about batch size?"
                     โ”‚
                     โ–ผ
  L1  memsearch search "batch size"    โ†’ ranked chunks
                     โ”‚ (need more?)
                     โ–ผ
  L2  memsearch expand <chunk_hash>    โ†’ full .md section
                     โ”‚ (need original?)
                     โ–ผ
  L3  parse-transcript <session.jsonl> โ†’ raw dialogue
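
Scripted, the same ladder looks roughly like this. mem.search() and the expand command are documented below; the chunk_hash key on each result is an assumption, so check the actual result fields first:

# Sketch of 3-layer recall: API search, then CLI expand for more context.
import asyncio
import subprocess
from memsearch import MemSearch

async def recall(question: str) -> None:
    mem = MemSearch(paths=["./memory"])
    chunks = await mem.search(question, top_k=5)   # L1: ranked chunks
    for c in chunks:
        print(f"{c['score']:.3f}  {c['content'][:80]}")
    best_hash = chunks[0]["chunk_hash"]            # assumed field name
    subprocess.run(["memsearch", "expand", best_hash], check=True)  # L2
    # L3: parse the original session transcript if the section isn't enough.

asyncio.run(recall("What did we discuss about batch size?"))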

📄 Markdown as Source of Truth

  Plugins append ──→  .md files  ←── human editable
                          │
                          ▼
                  memsearch watch (live watcher)
                          │
                  detects file change
                          │
                          ▼
                  re-chunk changed .md
                          │
                  hash each chunk (SHA-256)
                          │
              ┌───────────┴───────────┐
              ▼                       ▼
       hash unchanged?          hash is new/changed?
       → skip (no API call)     → embed → upsert to Milvus
              │                       │
              └───────────┬───────────┘
                          ▼
                ┌──────────────────┐
                │  Milvus (shadow) │
                │  always in sync  │
                │  rebuildable     │
                └──────────────────┘
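
The dedup step is plain content-addressing: hash each chunk and only embed hashes you haven't seen. A minimal illustration of the idea (not memsearch's actual code):

# SHA-256 dedup: embed a chunk only when its content hash is new.
import hashlib

seen: set[str] = set()  # stand-in for hashes already upserted to Milvus

def index_chunk(text: str) -> str:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in seen:
        return "skip (no API call)"   # unchanged content
    seen.add(digest)                  # embed → upsert to Milvus here
    return "embed + upsert"

print(index_chunk("We chose Redis for caching."))  # embed + upsert
print(index_chunk("We chose Redis for caching."))  # skip (no API call)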

📦 Installation

# Install as a global CLI tool โ€” recommended when you mainly use the
# `memsearch` command or any of the agent plugins (Claude Code, Codex,
# OpenClaw, OpenCode), which all shell out to the CLI.
uv tool install memsearch       # via uv
pipx install memsearch          # via pipx
pip install memsearch           # plain pip

# Install as a project dependency โ€” use this if you want to import
# `memsearch` from your own Python code (e.g. via the MemSearch class).
uv add memsearch                # via uv, adds to pyproject.toml
pip install memsearch           # into an activated venv

Optional embedding providers:
# As a CLI tool (recommended: local ONNX, no API key)
uv tool install "memsearch[onnx]"
pipx install "memsearch[onnx]"
pip install "memsearch[onnx]"

# As a project dependency
uv add "memsearch[onnx]"

# Other options: [openai], [google], [voyage], [jina], [mistral], [ollama], [local], [all]

๐Ÿ Python API โ€” Give Your Agent Memory

from memsearch import MemSearch

mem = MemSearch(paths=["./memory"])

await mem.index()                                      # index markdown files
results = await mem.search("Redis config", top_k=3)    # semantic search
scoped = await mem.search("pricing", top_k=3, source_prefix="./memory/product")
print(results[0]["content"], results[0]["score"])       # content + similarity

Full example: agent with memory (OpenAI)
import asyncio
from datetime import date
from pathlib import Path
from openai import OpenAI
from memsearch import MemSearch

MEMORY_DIR = "./memory"
llm = OpenAI()                                        # your LLM client
mem = MemSearch(paths=[MEMORY_DIR])                    # memsearch handles the rest

def save_memory(content: str):
    """Append a note to today's memory log (OpenClaw-style daily markdown)."""
    p = Path(MEMORY_DIR) / f"{date.today()}.md"
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "a") as f:
        f.write(f"\n{content}\n")

async def agent_chat(user_input: str) -> str:
    # 1. Recall โ€” search past memories for relevant context
    memories = await mem.search(user_input, top_k=3)
    context = "\n".join(f"- {m['content'][:200]}" for m in memories)

    # 2. Think โ€” call LLM with memory context
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"You have these memories:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    answer = resp.choices[0].message.content

    # 3. Remember โ€” save this exchange and index it
    save_memory(f"## {user_input}\n{answer}")
    await mem.index()

    return answer

async def main():
    # Seed some knowledge
    save_memory("## Team\n- Alice: frontend lead\n- Bob: backend lead")
    save_memory("## Decision\nWe chose Redis for caching over Memcached.")
    await mem.index()  # or mem.watch() to auto-index in the background

    # Agent can now recall those memories
    print(await agent_chat("Who is our frontend lead?"))
    print(await agent_chat("What caching solution did we pick?"))

asyncio.run(main())

Anthropic Claude example
pip install memsearch anthropic
import asyncio
from datetime import date
from pathlib import Path
from anthropic import Anthropic
from memsearch import MemSearch

MEMORY_DIR = "./memory"
llm = Anthropic()
mem = MemSearch(paths=[MEMORY_DIR])

def save_memory(content: str):
    p = Path(MEMORY_DIR) / f"{date.today()}.md"
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "a") as f:
        f.write(f"\n{content}\n")

async def agent_chat(user_input: str) -> str:
    # 1. Recall
    memories = await mem.search(user_input, top_k=3)
    context = "\n".join(f"- {m['content'][:200]}" for m in memories)

    # 2. Think โ€” call Claude with memory context
    resp = llm.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        system=f"You have these memories:\n{context}",
        messages=[{"role": "user", "content": user_input}],
    )
    answer = resp.content[0].text

    # 3. Remember
    save_memory(f"## {user_input}\n{answer}")
    await mem.index()
    return answer

async def main():
    save_memory("## Team\n- Alice: frontend lead\n- Bob: backend lead")
    await mem.index()
    print(await agent_chat("Who is our frontend lead?"))

asyncio.run(main())

Ollama (fully local, no API key)
pip install "memsearch[ollama]"
ollama pull nomic-embed-text          # embedding model
ollama pull llama3.2                  # chat model
import asyncio
from datetime import date
from pathlib import Path
from ollama import chat
from memsearch import MemSearch

MEMORY_DIR = "./memory"
mem = MemSearch(paths=[MEMORY_DIR], embedding_provider="ollama")

def save_memory(content: str):
    p = Path(MEMORY_DIR) / f"{date.today()}.md"
    p.parent.mkdir(parents=True, exist_ok=True)
    with open(p, "a") as f:
        f.write(f"\n{content}\n")

async def agent_chat(user_input: str) -> str:
    # 1. Recall
    memories = await mem.search(user_input, top_k=3)
    context = "\n".join(f"- {m['content'][:200]}" for m in memories)

    # 2. Think โ€” call Ollama locally
    resp = chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": f"You have these memories:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    answer = resp.message.content

    # 3. Remember
    save_memory(f"## {user_input}\n{answer}")
    await mem.index()
    return answer

async def main():
    save_memory("## Team\n- Alice: frontend lead\n- Bob: backend lead")
    await mem.index()
    print(await agent_chat("Who is our frontend lead?"))

asyncio.run(main())

📖 Full Python API reference: Python API docs

โŒจ๏ธ CLI Usage

Setup:

memsearch config init                              # interactive setup wizard
memsearch config set embedding.provider onnx       # switch embedding provider
memsearch config set milvus.uri http://localhost:19530  # switch Milvus backend

Index & Search:

memsearch index ./memory/                          # index markdown files
memsearch index ./memory/ ./notes/ --force         # re-embed everything
memsearch search "Redis caching"                   # hybrid search (BM25 + vector)
memsearch search "auth flow" --top-k 10 --json-output  # JSON for scripting
memsearch expand <chunk_hash>                      # show full section around a chunk
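
The --json-output flag makes search easy to drive from scripts. A small sketch (the shape of the JSON is not documented here, so the field access is an assumption; print the raw output first to check):

# Run a hybrid search and consume the JSON output from Python.
import json
import subprocess

proc = subprocess.run(
    ["memsearch", "search", "auth flow", "--top-k", "10", "--json-output"],
    capture_output=True, text=True, check=True,
)
results = json.loads(proc.stdout)   # assumed: a list of result objects
for r in results:
    print(r.get("score"), str(r.get("content", ""))[:80])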

Live Sync & Maintenance:

memsearch watch ./memory/                          # live file watcher (auto-index on change)
memsearch compact                                  # LLM-powered chunk summarization
memsearch stats                                    # show indexed chunk count
memsearch reset --yes                              # drop all indexed data and rebuild

📖 Full CLI reference with all flags: CLI docs

โš™๏ธ Configuration

Embedding and Milvus backend settings → Configuration (all platforms)

Settings priority: built-in defaults → ~/.memsearch/config.toml → .memsearch.toml → CLI flags (later entries override earlier ones).
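
That order is a plain last-writer-wins merge. Conceptually (a sketch of the precedence, not memsearch's actual loader):

# Conceptual settings merge: each later layer overrides the one before it.
import tomllib
from pathlib import Path

def load_layer(path: str) -> dict:
    p = Path(path).expanduser()
    return tomllib.loads(p.read_text()) if p.exists() else {}

settings = {"embedding": {"provider": "onnx"}}         # built-in defaults
for layer in (load_layer("~/.memsearch/config.toml"),  # global config
              load_layer(".memsearch.toml")):          # project config
    for section, values in layer.items():
        if isinstance(values, dict):
            settings.setdefault(section, {}).update(values)
        else:
            settings[section] = values
# ...CLI flags would be applied last, on top of the merged settings.
print(settings)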

📖 Full config guide: Configuration

🔗 Links

  • 📖 Documentation: full guides, API reference, and architecture details
  • 🔌 Platform Plugins: Claude Code, OpenClaw, OpenCode, Codex CLI
  • 💡 Design Philosophy: why markdown, why Milvus, competitor comparison
  • 🦞 OpenClaw: the memory architecture that inspired memsearch
  • 🗄️ Milvus | Zilliz Cloud: the vector database powering memsearch

๐Ÿค Contributing

Bug reports, feature requests, and pull requests are welcome! See the Contributing Guide for development setup, testing, and plugin development instructions. For questions and discussions, join us on Discord.

📄 License

MIT
