
memvid-sdk

A single-file AI memory system for Python. Store documents, search with BM25 + vector ranking, and run RAG queries from a portable .mv2 file.

Built in Rust with PyO3 bindings. No database setup, no external services required.

Install

pip install memvid-sdk

For framework integrations:

pip install "memvid-sdk[langchain]"    # LangChain tools
pip install "memvid-sdk[llamaindex]"   # LlamaIndex query engine
pip install "memvid-sdk[openai]"       # OpenAI function schemas
pip install "memvid-sdk[full]"         # All integrations

Quick Start

from memvid_sdk import create

# Create a memory file
mv = create("notes.mv2")

# Store some documents
mv.put(
    title="Project Update",
    label="meeting",
    text="Discussed Q4 roadmap. Alice will handle the frontend refactor.",
    metadata={"date": "2024-01-15", "attendees": ["Alice", "Bob"]}
)

mv.put(
    title="Technical Decision",
    label="architecture",
    text="Decided to use PostgreSQL for the main database. Redis for caching.",
)

# Search by keyword
results = mv.find("database")
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

# Ask a question
answer = mv.ask("What database are we using?", model="openai:gpt-4o-mini")
print(answer["text"])

# Close the file
mv.seal()

Core API

Opening and Creating

from memvid_sdk import create, use

# Create a new memory file
mv = create("notes.mv2")

# Open an existing file
mv = use("basic", "notes.mv2", mode="open")

# Create or open (auto mode)
mv = use("basic", "notes.mv2", mode="auto")

# Open read-only
mv = use("basic", "notes.mv2", read_only=True)

# Context manager (auto-closes)
with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Content here")

Storing Documents

# Store text content
mv.put(
    title="Meeting Notes",
    label="meeting",
    text="Discussed the new API design.",
    metadata={"date": "2024-01-15", "priority": "high"},
    tags=["api", "design", "q1"]
)

# Store a file (PDF, DOCX, TXT, etc.)
mv.put(
    title="Q4 Report",
    label="reports",
    file="./documents/q4-report.pdf"
)

# Store with both text and file
mv.put(
    title="Contract Summary",
    label="legal",
    text="Key terms: 2-year agreement, auto-renewal clause.",
    file="./contracts/agreement.pdf"
)

Batch Ingestion

For large imports, put_many is significantly faster than calling put in a loop:

documents = [
    {"title": "Doc 1", "label": "notes", "text": "First document content..."},
    {"title": "Doc 2", "label": "notes", "text": "Second document content..."},
    # ... thousands more
]

frame_ids = mv.put_many(documents)
print(f"Added {len(frame_ids)} documents")

Searching

# Lexical search (BM25 ranking)
results = mv.find("machine learning", k=10)

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

Search parameters:

Parameter      Type  Description
k              int   Number of results (default: 5)
snippet_chars  int   Snippet length (default: 240)
mode           str   "lex", "sem", or "auto"
scope          str   Filter by URI prefix
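
For example, a narrower search restricted to one part of the store (the scope prefix here is illustrative):

results = mv.find(
    "machine learning",
    k=10,
    snippet_chars=120,
    mode="auto",
    scope="notes/",  # hypothetical URI prefix; match your own data layout
)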

Semantic Search

Semantic search requires embeddings. Generate them during ingestion:

# Using local embeddings (bge-small, nomic, etc.)
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="bge-small"
)

# Using OpenAI embeddings
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="openai-small"  # requires OPENAI_API_KEY
)

Then search semantically:

results = mv.find("neural networks", mode="sem")

Windows users: Local embedding models (bge-small, nomic, etc.) are not available on Windows due to ONNX runtime limitations. Use OpenAI embeddings instead by setting OPENAI_API_KEY.

Question Answering (RAG)

# Basic RAG query
answer = mv.ask("What did we decide about the database?")
print(answer["text"])

# With specific model
answer = mv.ask(
    "Summarize the meeting notes",
    model="openai:gpt-4o-mini",
    k=6  # number of documents to retrieve
)

# Get context only (no LLM synthesis)
context = mv.ask("What was discussed?", context_only=True)
print(context["context"])  # Retrieved document snippets

Timeline and Stats

# Get recent entries
entries = mv.timeline(limit=20)

# Get statistics
stats = mv.stats()
print(f"Documents: {stats['frame_count']}")
print(f"Size: {stats['size_bytes']} bytes")

Closing

Always close the memory when done:

mv.seal()

Or use a context manager for automatic cleanup.

External Embeddings

For more control over embeddings, use external providers:

from memvid_sdk import create
from memvid_sdk.embeddings import OpenAIEmbeddings

# Create memory with vector index enabled
mv = create("knowledge.mv2", enable_vec=True, enable_lex=True)

# Initialize embedding provider
embedder = OpenAIEmbeddings(model="text-embedding-3-small")

# Prepare documents
documents = [
    {"title": "ML Basics", "label": "ai", "text": "Machine learning enables systems to learn from data."},
    {"title": "Deep Learning", "label": "ai", "text": "Deep learning uses neural networks with multiple layers."},
]

# Generate embeddings
texts = [doc["text"] for doc in documents]
embeddings = embedder.embed_documents(texts)

# Store documents with pre-computed embeddings
frame_ids = mv.put_many(documents, embeddings=embeddings)

# Search using external embeddings
query = "neural networks"
query_embedding = embedder.embed_query(query)
results = mv.find(query, k=3, query_embedding=query_embedding, mode="sem")

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['score']:.3f}")

Built-in providers:

  • OpenAIEmbeddings (requires OPENAI_API_KEY)
  • CohereEmbeddings (requires COHERE_API_KEY)
  • VoyageEmbeddings (requires VOYAGE_API_KEY)
  • NvidiaEmbeddings (requires NVIDIA_API_KEY)
  • GeminiEmbeddings (requires GOOGLE_API_KEY or GEMINI_API_KEY)
  • MistralEmbeddings (requires MISTRAL_API_KEY)
  • HuggingFaceEmbeddings (local, no API key)

Use the factory function for quick setup:

from memvid_sdk.embeddings import get_embedder

# Create any supported provider
embedder = get_embedder("openai")  # or "cohere", "voyage", "nvidia", "gemini", "mistral", "huggingface"

Framework Integrations

LangChain

mv = use("langchain", "notes.mv2")
tools = mv.tools  # List of StructuredTool instances
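
As a quick check of what an agent will see, each StructuredTool carries a name and description:

for tool in mv.tools:
    print(tool.name, ":", tool.description)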

LlamaIndex

mv = use("llamaindex", "notes.mv2")
engine = mv.as_query_engine()
response = engine.query("What is the timeline?")

OpenAI Function Calling

mv = use("openai", "notes.mv2")
functions = mv.functions  # JSON schemas for tool_calls
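
A minimal sketch of passing the schemas to a Chat Completions call, assuming mv.functions already matches the tools wire format (adapt if the SDK returns bare function schemas):

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What did we decide about the database?"}],
    tools=mv.functions,  # assumption: entries shaped like {"type": "function", ...}
)
print(response.choices[0].message)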

CrewAI

mv = use("crewai", "notes.mv2")
tools = mv.tools  # CrewAI-compatible tools
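
A minimal sketch of handing the tools to a CrewAI agent (role, goal, and backstory are illustrative):

from crewai import Agent

researcher = Agent(
    role="Research assistant",
    goal="Answer questions from stored notes",
    backstory="Consults the memvid memory file before answering.",
    tools=mv.tools,
)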

Error Handling

Typed exceptions for programmatic handling:

from memvid_sdk import CapacityExceededError, LockedError, EmbeddingFailedError

try:
    mv.put(title="Doc", text="Content")
except CapacityExceededError:
    print("Storage capacity exceeded")
except LockedError:
    print("File is locked by another process")
except EmbeddingFailedError:
    print("Embedding generation failed")

Common exceptions:

Code   Exception              Description
MV001  CapacityExceededError  Storage capacity exceeded
MV007  LockedError            File locked by another process
MV010  FrameNotFoundError     Frame not found
MV013  FileNotFoundError      File not found
MV015  EmbeddingFailedError   Embedding failed

Environment Variables

Variable           Description
OPENAI_API_KEY     For OpenAI embeddings and LLM synthesis
OPENAI_BASE_URL    Custom OpenAI-compatible endpoint
NVIDIA_API_KEY     For NVIDIA NIM embeddings
MEMVID_MODELS_DIR  Local embedding model cache directory
MEMVID_API_KEY     For capacity beyond the free tier
MEMVID_OFFLINE     Set to 1 to disable network features
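
For example, to force offline operation (a sketch; assumes the variable is read when the file is opened):

import os

os.environ["MEMVID_OFFLINE"] = "1"  # disable network features

from memvid_sdk import create  # import after setting the flag, in case it is read early
mv = create("notes.mv2")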

Platform Support

Platform  Architecture           Local Embeddings
macOS     ARM64 (Apple Silicon)  Yes
macOS     x64 (Intel)            Yes
Linux     x64 (glibc)            Yes
Windows   x64                    No (use OpenAI)

Requirements

  • Python 3.8 or later
  • For local embeddings: macOS or Linux (Windows requires OpenAI)

License

Apache-2.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

memvid_sdk-2.0.149.tar.gz (7.3 MB, Source)

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

memvid_sdk-2.0.149-cp38-abi3-win_amd64.whl (13.9 MB, CPython 3.8+, Windows x86-64)
memvid_sdk-2.0.149-cp38-abi3-manylinux_2_35_x86_64.whl (99.9 MB, CPython 3.8+, manylinux glibc 2.35+, x86-64)
memvid_sdk-2.0.149-cp38-abi3-macosx_11_0_arm64.whl (64.1 MB, CPython 3.8+, macOS 11.0+ ARM64)
memvid_sdk-2.0.149-cp38-abi3-macosx_10_12_x86_64.whl (66.5 MB, CPython 3.8+, macOS 10.12+ x86-64)

File details

Uploads were made via twine/6.2.0 (CPython 3.11.14); Trusted Publishing was not used.

Hashes for memvid_sdk-2.0.149.tar.gz

Algorithm    Hash digest
SHA256       3ea46a519efe7304775728011fcecd607ef7a98aee21920a944f09141c58896c
MD5          d818074585d43e8010b031c26ae7d992
BLAKE2b-256  6954102b81ce80bc26eab5f996222abe1d741210f869b39afc7930220737da5d

Hashes for memvid_sdk-2.0.149-cp38-abi3-win_amd64.whl

Algorithm    Hash digest
SHA256       bd02e9a87b4f9f17b9be9177a560bf7591dc2b62605d14891ec6d11471c838e8
MD5          6ec93cc6c77f95edd12388febd640872
BLAKE2b-256  3cd742c784484804cf1a25d312797e4dba27025b6d04bd02998852fa1356ca25

Hashes for memvid_sdk-2.0.149-cp38-abi3-manylinux_2_35_x86_64.whl

Algorithm    Hash digest
SHA256       597206ac8b26b612ae4a8850099cb03b88a103a089e178e3b1d3f81390eddd9c
MD5          9df8bcf460eb97fe85b208f43fa48da0
BLAKE2b-256  afe6aaad9aeedd83ad5abdb18c905bee31a36e11533bb023f11788bfd40fc7b6

Hashes for memvid_sdk-2.0.149-cp38-abi3-macosx_11_0_arm64.whl

Algorithm    Hash digest
SHA256       5a0bb53bc479b8def05beb92a1a36c0e91f8f41bf351d4a75edf8bba330e04d1
MD5          9b57f0a9fae15ae190549b4807f94754
BLAKE2b-256  286bed90fcce5b5f6742fe62ba6cd7d3477bdd30f81ff3f0942b071ee1e1b6f2

Hashes for memvid_sdk-2.0.149-cp38-abi3-macosx_10_12_x86_64.whl

Algorithm    Hash digest
SHA256       9e47073b46415ead9704a0fffe52125a694ee9150e749d1cb4375c3f242beb4d
MD5          49c322d84902349102ddcc58edf37ab0
BLAKE2b-256  194d9a2cbbbd57a1c74d31514523fa3621415cd9e1a45ce9f523051bc476e8a4
