
Single-file AI memory system for Python. Store, search, and query documents with built-in RAG.

memvid-sdk

A single-file AI memory system for Python. Store documents, search with BM25 + vector ranking, and run RAG queries from a portable .mv2 file.

Built in Rust with PyO3 bindings. No database setup, no external services required.

Install

pip install memvid-sdk

For framework integrations:

pip install "memvid-sdk[langchain]"    # LangChain tools
pip install "memvid-sdk[llamaindex]"   # LlamaIndex query engine
pip install "memvid-sdk[openai]"       # OpenAI function schemas
pip install "memvid-sdk[full]"         # All integrations

Quick Start

from memvid_sdk import create

# Create a memory file
mv = create("notes.mv2")

# Store some documents
mv.put(
    title="Project Update",
    label="meeting",
    text="Discussed Q4 roadmap. Alice will handle the frontend refactor.",
    metadata={"date": "2024-01-15", "attendees": ["Alice", "Bob"]}
)

mv.put(
    title="Technical Decision",
    label="architecture",
    text="Decided to use PostgreSQL for the main database. Redis for caching.",
)

# Search by keyword
results = mv.find("database")
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

# Ask a question
answer = mv.ask("What database are we using?", model="openai:gpt-4o-mini")
print(answer["text"])

# Close the file
mv.seal()

Core API

Opening and Creating

from memvid_sdk import create, use

# Create a new memory file
mv = create("notes.mv2")

# Open an existing file
mv = use("basic", "notes.mv2", mode="open")

# Create or open (auto mode)
mv = use("basic", "notes.mv2", mode="auto")

# Open read-only
mv = use("basic", "notes.mv2", read_only=True)

# Context manager (auto-closes)
with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Content here")

Storing Documents

# Store text content
mv.put(
    title="Meeting Notes",
    label="meeting",
    text="Discussed the new API design.",
    metadata={"date": "2024-01-15", "priority": "high"},
    tags=["api", "design", "q1"]
)

# Store a file (PDF, DOCX, TXT, etc.)
mv.put(
    title="Q4 Report",
    label="reports",
    file="./documents/q4-report.pdf"
)

# Store with both text and file
mv.put(
    title="Contract Summary",
    label="legal",
    text="Key terms: 2-year agreement, auto-renewal clause.",
    file="./contracts/agreement.pdf"
)

Batch Ingestion

For large imports, put_many is significantly faster than calling put repeatedly:

documents = [
    {"title": "Doc 1", "label": "notes", "text": "First document content..."},
    {"title": "Doc 2", "label": "notes", "text": "Second document content..."},
    # ... thousands more
]

frame_ids = mv.put_many(documents)
print(f"Added {len(frame_ids)} documents")

Searching

# Lexical search (BM25 ranking)
results = mv.find("machine learning", k=10)

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

Search parameters:

Parameter      Type  Description
k              int   Number of results (default: 5)
snippet_chars  int   Snippet length (default: 240)
mode           str   "lex", "sem", or "auto"
scope          str   Filter by URI prefix
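Combining these in one call (scope is omitted here, since valid URI prefixes depend on how your documents were stored):

results = mv.find(
    "quarterly roadmap",
    k=10,               # return up to 10 hits
    snippet_chars=120,  # shorter snippets
    mode="auto",        # let the engine choose lexical vs. semantic
)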

Semantic Search

Semantic search requires embeddings. Generate them during ingestion:

# Using local embeddings (bge-small, nomic, etc.)
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="bge-small"
)

# Using OpenAI embeddings
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="openai-small"  # requires OPENAI_API_KEY
)

Then search semantically:

results = mv.find("neural networks", mode="sem")

Windows users: Local embedding models (bge-small, nomic, etc.) are not available on Windows due to ONNX runtime limitations. Use OpenAI embeddings instead by setting OPENAI_API_KEY.

Question Answering (RAG)

# Basic RAG query
answer = mv.ask("What did we decide about the database?")
print(answer["text"])

# With specific model
answer = mv.ask(
    "Summarize the meeting notes",
    model="openai:gpt-4o-mini",
    k=6  # number of documents to retrieve
)

# Get context only (no LLM synthesis)
context = mv.ask("What was discussed?", context_only=True)
print(context["context"])  # Retrieved document snippets
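Context-only mode is useful when you want to run synthesis through your own LLM client. A minimal sketch, relying only on the documented context field:

question = "What was discussed?"
ctx = mv.ask(question, context_only=True)

# Feed the retrieved snippets to any LLM of your choice
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{ctx['context']}\n\n"
    f"Question: {question}"
)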

Timeline and Stats

# Get recent entries
entries = mv.timeline(limit=20)

# Get statistics
stats = mv.stats()
print(f"Documents: {stats['frame_count']}")
print(f"Size: {stats['size_bytes']} bytes")

Closing

Always close the memory when done:

mv.seal()

Or use a context manager, which seals the file automatically on exit:
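with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Sealed automatically on exit")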

External Embeddings

For more control over embeddings, use external providers:

from memvid_sdk import create
from memvid_sdk.embeddings import OpenAIEmbeddings

# Create memory with vector index enabled
mv = create("knowledge.mv2", enable_vec=True, enable_lex=True)

# Initialize embedding provider
embedder = OpenAIEmbeddings(model="text-embedding-3-small")

# Prepare documents
documents = [
    {"title": "ML Basics", "label": "ai", "text": "Machine learning enables systems to learn from data."},
    {"title": "Deep Learning", "label": "ai", "text": "Deep learning uses neural networks with multiple layers."},
]

# Generate embeddings
texts = [doc["text"] for doc in documents]
embeddings = embedder.embed_documents(texts)

# Store documents with pre-computed embeddings
frame_ids = mv.put_many(documents, embeddings=embeddings)

# Search using external embeddings
query = "neural networks"
query_embedding = embedder.embed_query(query)
results = mv.find(query, k=3, query_embedding=query_embedding, mode="sem")

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['score']:.3f}")

Built-in providers:

  • OpenAIEmbeddings (requires OPENAI_API_KEY)
  • CohereEmbeddings (requires COHERE_API_KEY)
  • VoyageEmbeddings (requires VOYAGE_API_KEY)
  • NvidiaEmbeddings (requires NVIDIA_API_KEY)
  • GeminiEmbeddings (requires GOOGLE_API_KEY or GEMINI_API_KEY)
  • MistralEmbeddings (requires MISTRAL_API_KEY)
  • HuggingFaceEmbeddings (local, no API key)

Use the factory function for quick setup:

from memvid_sdk.embeddings import get_embedder

# Create any supported provider
embedder = get_embedder("openai")  # or "cohere", "voyage", "nvidia", "gemini", "mistral", "huggingface"
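The returned object exposes the same embed_documents and embed_query methods shown above, so it slots straight into the put_many flow. A quick sketch using the local HuggingFace provider (model selection is left to the provider's defaults):

embedder = get_embedder("huggingface")  # local, no API key needed

vectors = embedder.embed_documents(["First note", "Second note"])
query_vec = embedder.embed_query("note")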

Framework Integrations

LangChain

mv = use("langchain", "notes.mv2")
tools = mv.tools  # List of StructuredTool instances
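Since these follow LangChain's StructuredTool interface, you can inspect them before handing them to an agent (name and description below are standard StructuredTool attributes, not memvid-specific):

for tool in mv.tools:
    print(f"{tool.name}: {tool.description}")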

LlamaIndex

mv = use("llamaindex", "notes.mv2")
engine = mv.as_query_engine()
response = engine.query("What is the timeline?")

OpenAI Function Calling

mv = use("openai", "notes.mv2")
functions = mv.functions  # JSON schemas for tool_calls
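A sketch of wiring these schemas into the Chat Completions API. It assumes each entry in mv.functions is a bare function schema; if the SDK already returns complete tool objects, pass them through unchanged:

from openai import OpenAI

client = OpenAI()
mv = use("openai", "notes.mv2")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Find my notes about the Q4 roadmap"}],
    # Wrap each schema in the standard tool envelope (assumes bare schemas)
    tools=[{"type": "function", "function": fn} for fn in mv.functions],
)
print(response.choices[0].message.tool_calls)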

CrewAI

mv = use("crewai", "notes.mv2")
tools = mv.tools  # CrewAI-compatible tools

Error Handling

Typed exceptions for programmatic handling:

from memvid_sdk import CapacityExceededError, LockedError, EmbeddingFailedError

try:
    mv.put(title="Doc", text="Content")
except CapacityExceededError:
    print("Storage capacity exceeded")
except LockedError:
    print("File is locked by another process")
except EmbeddingFailedError:
    print("Embedding generation failed")

Common exceptions:

Code   Exception              Description
MV001  CapacityExceededError  Storage capacity exceeded
MV007  LockedError            File locked by another process
MV010  FrameNotFoundError     Frame not found
MV013  FileNotFoundError      File not found
MV015  EmbeddingFailedError   Embedding failed

Environment Variables

Variable           Description
OPENAI_API_KEY     For OpenAI embeddings and LLM synthesis
OPENAI_BASE_URL    Custom OpenAI-compatible endpoint
NVIDIA_API_KEY     For NVIDIA NIM embeddings
MEMVID_MODELS_DIR  Local embedding model cache directory
MEMVID_API_KEY     For capacity beyond the free tier
MEMVID_OFFLINE     Set to 1 to disable network features
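These can be set in the shell or from Python before opening a memory file; a small example (the cache directory is illustrative):

import os

os.environ["MEMVID_OFFLINE"] = "1"                      # disable network features
os.environ["MEMVID_MODELS_DIR"] = "/tmp/memvid-models"  # local model cache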

Platform Support

Platform  Architecture           Local Embeddings
macOS     ARM64 (Apple Silicon)  Yes
macOS     x64 (Intel)            Yes
Linux     x64 (glibc)            Yes
Windows   x64                    No (use OpenAI)

Requirements

  • Python 3.8 or later
  • For local embeddings: macOS or Linux (Windows requires OpenAI)

License

Apache-2.0

Download files

Download the file for your platform.

Source Distribution

memvid_sdk-2.0.152.tar.gz (7.3 MB)

Uploaded: Source

Built Distributions

memvid_sdk-2.0.152-cp38-abi3-win_amd64.whl (7.4 MB)

Uploaded: CPython 3.8+, Windows x86-64

memvid_sdk-2.0.152-cp38-abi3-manylinux_2_35_x86_64.whl (99.8 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.35+), x86-64

memvid_sdk-2.0.152-cp38-abi3-manylinux_2_28_aarch64.whl (14.7 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.28+), ARM64

memvid_sdk-2.0.152-cp38-abi3-macosx_11_0_arm64.whl (64.1 MB)

Uploaded: CPython 3.8+, macOS 11.0+, ARM64

memvid_sdk-2.0.152-cp38-abi3-macosx_10_12_x86_64.whl (66.3 MB)

Uploaded: CPython 3.8+, macOS 10.12+, x86-64

File details

Details for the file memvid_sdk-2.0.152.tar.gz.

File metadata

  • Download URL: memvid_sdk-2.0.152.tar.gz
  • Upload date:
  • Size: 7.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for memvid_sdk-2.0.152.tar.gz
Algorithm Hash digest
SHA256 7b8d0f828d1f4d585a1267beec38ea81af5160f86767b9bdc740353bd58a7cb3
MD5 860cbb01e23a581d26e53be6145fe82c
BLAKE2b-256 b94b9189654caedf1ce1195f2428154b7029d4c1696be8acf58341f8adc35f0f

File details

Details for the file memvid_sdk-2.0.152-cp38-abi3-win_amd64.whl.

File metadata

  • Download URL: memvid_sdk-2.0.152-cp38-abi3-win_amd64.whl
  • Upload date:
  • Size: 7.4 MB
  • Tags: CPython 3.8+, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for memvid_sdk-2.0.152-cp38-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 517285869547e7ce87dd64b910cdd7358faaa213a028f126ed1db82e6df7e92e
MD5 93d3115315c2a00d8182e15a40f04775
BLAKE2b-256 5578b2befddd4ace84f695bc8090f05aec1fcacaaa1def0df184a76c7a4facdc

File details

Details for the file memvid_sdk-2.0.152-cp38-abi3-manylinux_2_35_x86_64.whl.

File hashes

Hashes for memvid_sdk-2.0.152-cp38-abi3-manylinux_2_35_x86_64.whl
Algorithm Hash digest
SHA256 86feecd621a15a6b914f4d69b9e25bbfaaeb6a3cb4dea1f35708d6a24a45aac7
MD5 9c3b1af6ee9a9186aa4ed83e24b2eee3
BLAKE2b-256 9a992a307e528b8a2a6e0455094acf024c3686da989890383091b45cf6fdf5fc

File details

Details for the file memvid_sdk-2.0.152-cp38-abi3-manylinux_2_28_aarch64.whl.

File hashes

Hashes for memvid_sdk-2.0.152-cp38-abi3-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 5d235d0f6d52dca4aa16040497de0d70c22fe149d9274db2511e72061551b521
MD5 e328c23ce5053a744a9f0f7d03c1fc72
BLAKE2b-256 d1a5b6ff4b51a07ae24422f66f995e0e8dd70d0b5f9d7a3a9adf86565245f828

File details

Details for the file memvid_sdk-2.0.152-cp38-abi3-macosx_11_0_arm64.whl.

File hashes

Hashes for memvid_sdk-2.0.152-cp38-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 ee2de1b9fd0b13a969b0ba1350bf3c82b46cdbd7b22a8985cc059be70d724c08
MD5 c7acff93c69eb7bd3bb9473dc99436bb
BLAKE2b-256 86ae55e2c0269910d5eb03be4f1aefbbc581bbf1d4b6e6d61d5fb0ea31527556

File details

Details for the file memvid_sdk-2.0.152-cp38-abi3-macosx_10_12_x86_64.whl.

File hashes

Hashes for memvid_sdk-2.0.152-cp38-abi3-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 c9b1cccf542849774ec59224b636c3b95b1f2d59f9bb8a277a06aed1c0bd7fd0
MD5 270600501f94aa03cbbb466f049e3667
BLAKE2b-256 7400ac5a5ddcd0df22b49449b773cc504c0c957eb156601f4f31efea1e327e13
