
memvid-sdk

A single-file AI memory system for Python. Store documents, search with BM25 + vector ranking, and run RAG queries from a portable .mv2 file.

Built in Rust with PyO3 bindings. No database setup, no external services required.

Install

pip install memvid-sdk

For framework integrations:

pip install "memvid-sdk[langchain]"    # LangChain tools
pip install "memvid-sdk[llamaindex]"   # LlamaIndex query engine
pip install "memvid-sdk[openai]"       # OpenAI function schemas
pip install "memvid-sdk[full]"         # All integrations

Quick Start

from memvid_sdk import create

# Create a memory file
mv = create("notes.mv2")

# Store some documents
mv.put(
    title="Project Update",
    label="meeting",
    text="Discussed Q4 roadmap. Alice will handle the frontend refactor.",
    metadata={"date": "2024-01-15", "attendees": ["Alice", "Bob"]}
)

mv.put(
    title="Technical Decision",
    label="architecture",
    text="Decided to use PostgreSQL for the main database. Redis for caching.",
)

# Search by keyword
results = mv.find("database")
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

# Ask a question
answer = mv.ask("What database are we using?", model="openai:gpt-4o-mini")
print(answer["text"])

# Close the file
mv.seal()

Core API

Opening and Creating

from memvid_sdk import create, use

# Create a new memory file
mv = create("notes.mv2")

# Open an existing file
mv = use("basic", "notes.mv2", mode="open")

# Create or open (auto mode)
mv = use("basic", "notes.mv2", mode="auto")

# Open read-only
mv = use("basic", "notes.mv2", read_only=True)

# Context manager (auto-closes)
with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Content here")

Storing Documents

# Store text content
mv.put(
    title="Meeting Notes",
    label="meeting",
    text="Discussed the new API design.",
    metadata={"date": "2024-01-15", "priority": "high"},
    tags=["api", "design", "q1"]
)

# Store a file (PDF, DOCX, TXT, etc.)
mv.put(
    title="Q4 Report",
    label="reports",
    file="./documents/q4-report.pdf"
)

# Store with both text and file
mv.put(
    title="Contract Summary",
    label="legal",
    text="Key terms: 2-year agreement, auto-renewal clause.",
    file="./contracts/agreement.pdf"
)

Batch Ingestion

For large imports, put_many is significantly faster than calling put in a loop:

documents = [
    {"title": "Doc 1", "label": "notes", "text": "First document content..."},
    {"title": "Doc 2", "label": "notes", "text": "Second document content..."},
    # ... thousands more
]

frame_ids = mv.put_many(documents)
print(f"Added {len(frame_ids)} documents")

Searching

# Lexical search (BM25 ranking)
results = mv.find("machine learning", k=10)

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

Search parameters:

Parameter      Type  Description
k              int   Number of results (default: 5)
snippet_chars  int   Snippet length (default: 240)
mode           str   "lex", "sem", or "auto"
scope          str   Filter by URI prefix
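
A quick sketch combining these parameters (the "docs/" scope prefix is illustrative):

results = mv.find("postgres", k=3, snippet_chars=120, mode="lex", scope="docs/")
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")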

Semantic Search

Semantic search requires embeddings. Generate them during ingestion:

# Using local embeddings (bge-small, nomic, etc.)
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="bge-small"
)

# Using OpenAI embeddings
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="openai-small"  # requires OPENAI_API_KEY
)

Then search semantically:

results = mv.find("neural networks", mode="sem")

Windows users: Local embedding models (bge-small, nomic, etc.) are not available on Windows due to ONNX runtime limitations. Use OpenAI embeddings instead by setting OPENAI_API_KEY.
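
A minimal sketch of selecting an embedding model per platform, using the model names shown above:

import platform

# local ONNX models are unavailable on Windows, so fall back to OpenAI embeddings there
model = "openai-small" if platform.system() == "Windows" else "bge-small"

mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model=model
)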

Question Answering (RAG)

# Basic RAG query
answer = mv.ask("What did we decide about the database?")
print(answer["text"])

# With specific model
answer = mv.ask(
    "Summarize the meeting notes",
    model="openai:gpt-4o-mini",
    k=6  # number of documents to retrieve
)

# Get context only (no LLM synthesis)
context = mv.ask("What was discussed?", context_only=True)
print(context["context"])  # Retrieved document snippets
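
Context-only mode makes it easy to bring your own model for synthesis. A minimal sketch using the OpenAI Python client (any chat LLM would do; the prompt wording is illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
ctx = mv.ask("What was discussed?", context_only=True)

# hand the retrieved snippets to your own LLM call
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{ctx['context']}"},
        {"role": "user", "content": "What was discussed?"},
    ],
)
print(response.choices[0].message.content)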

Timeline and Stats

# Get recent entries
entries = mv.timeline(limit=20)

# Get statistics
stats = mv.stats()
print(f"Documents: {stats['frame_count']}")
print(f"Size: {stats['size_bytes']} bytes")

Closing

Always close the memory when done:

mv.seal()

Or use a context manager for automatic cleanup.

External Embeddings

For more control over embeddings, use external providers:

from memvid_sdk import create
from memvid_sdk.embeddings import OpenAIEmbeddings

# Create memory with vector index enabled
mv = create("knowledge.mv2", enable_vec=True, enable_lex=True)

# Initialize embedding provider
embedder = OpenAIEmbeddings(model="text-embedding-3-small")

# Prepare documents
documents = [
    {"title": "ML Basics", "label": "ai", "text": "Machine learning enables systems to learn from data."},
    {"title": "Deep Learning", "label": "ai", "text": "Deep learning uses neural networks with multiple layers."},
]

# Generate embeddings
texts = [doc["text"] for doc in documents]
embeddings = embedder.embed_documents(texts)

# Store documents with pre-computed embeddings
frame_ids = mv.put_many(documents, embeddings=embeddings)

# Search using external embeddings
query = "neural networks"
query_embedding = embedder.embed_query(query)
results = mv.find(query, k=3, query_embedding=query_embedding, mode="sem")

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['score']:.3f}")

Built-in providers:

  • OpenAIEmbeddings (requires OPENAI_API_KEY)
  • CohereEmbeddings (requires COHERE_API_KEY)
  • VoyageEmbeddings (requires VOYAGE_API_KEY)
  • NvidiaEmbeddings (requires NVIDIA_API_KEY)
  • GeminiEmbeddings (requires GOOGLE_API_KEY or GEMINI_API_KEY)
  • MistralEmbeddings (requires MISTRAL_API_KEY)
  • HuggingFaceEmbeddings (local, no API key)

Use the factory function for quick setup:

from memvid_sdk.embeddings import get_embedder

# Create any supported provider
embedder = get_embedder("openai")  # or "cohere", "voyage", "nvidia", "gemini", "mistral", "huggingface"

Framework Integrations

LangChain

mv = use("langchain", "notes.mv2")
tools = mv.tools  # List of StructuredTool instances
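
A minimal sketch of handing these tools to a LangChain chat model (the binding step is standard LangChain, assuming langchain-openai is installed; it is not memvid-specific):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools(mv.tools)  # StructuredTool instances bind directly
reply = llm_with_tools.invoke("Search my notes for the Q4 roadmap")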

LlamaIndex

mv = use("llamaindex", "notes.mv2")
engine = mv.as_query_engine()
response = engine.query("What is the timeline?")

OpenAI Function Calling

mv = use("openai", "notes.mv2")
functions = mv.functions  # JSON schemas for tool_calls
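
A minimal sketch of passing these schemas to the Chat Completions API, assuming mv.functions matches the shape the tools parameter expects:

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Find notes about the database decision"}],
    tools=mv.functions,  # JSON schemas exposed by the integration
)
# inspect any tool calls the model requested
print(response.choices[0].message.tool_calls)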

CrewAI

mv = use("crewai", "notes.mv2")
tools = mv.tools  # CrewAI-compatible tools
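
A minimal sketch of giving a CrewAI agent access to the memory (the role, goal, and backstory strings are illustrative):

from crewai import Agent

researcher = Agent(
    role="Research Assistant",
    goal="Answer questions from the team's memvid memory",
    backstory="You retrieve and summarize stored notes.",
    tools=mv.tools,
)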

Error Handling

Typed exceptions for programmatic handling:

from memvid_sdk import CapacityExceededError, LockedError, EmbeddingFailedError

try:
    mv.put(title="Doc", text="Content")
except CapacityExceededError:
    print("Storage capacity exceeded")
except LockedError:
    print("File is locked by another process")
except EmbeddingFailedError:
    print("Embedding generation failed")

Common exceptions:

Code   Exception              Description
MV001  CapacityExceededError  Storage capacity exceeded
MV007  LockedError            File locked by another process
MV010  FrameNotFoundError     Frame not found
MV013  FileNotFoundError      File not found
MV015  EmbeddingFailedError   Embedding failed

Environment Variables

Variable           Description
OPENAI_API_KEY     For OpenAI embeddings and LLM synthesis
OPENAI_BASE_URL    Custom OpenAI-compatible endpoint
NVIDIA_API_KEY     For NVIDIA NIM embeddings
MEMVID_MODELS_DIR  Local embedding model cache directory
MEMVID_API_KEY     For capacity beyond the free tier
MEMVID_OFFLINE     Set to 1 to disable network features
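
These are ordinary environment variables, so they can also be set in-process before the SDK is used. A minimal sketch (the paths and values are illustrative):

import os

os.environ["MEMVID_OFFLINE"] = "1"                      # disable network features
os.environ["MEMVID_MODELS_DIR"] = "/tmp/memvid-models"  # illustrative cache path

from memvid_sdk import create
mv = create("offline.mv2")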

Platform Support

Platform  Architecture           Local Embeddings
macOS     ARM64 (Apple Silicon)  Yes
macOS     x64 (Intel)            Yes
Linux     x64 (glibc)            Yes
Windows   x64                    No (use OpenAI)

Requirements

  • Python 3.8 or later
  • For local embeddings: macOS or Linux (Windows requires OpenAI)

License

Apache-2.0
