
memvid-sdk

A single-file AI memory system for Python. Store documents, search with BM25 + vector ranking, and run RAG queries from a portable .mv2 file.

Built in Rust with PyO3 bindings. No database setup, no external services required.

Install

pip install memvid-sdk

For framework integrations:

pip install "memvid-sdk[langchain]"    # LangChain tools
pip install "memvid-sdk[llamaindex]"   # LlamaIndex query engine
pip install "memvid-sdk[openai]"       # OpenAI function schemas
pip install "memvid-sdk[full]"         # All integrations

Quick Start

from memvid_sdk import create

# Create a memory file
mv = create("notes.mv2")

# Store some documents
mv.put(
    title="Project Update",
    label="meeting",
    text="Discussed Q4 roadmap. Alice will handle the frontend refactor.",
    metadata={"date": "2024-01-15", "attendees": ["Alice", "Bob"]}
)

mv.put(
    title="Technical Decision",
    label="architecture",
    text="Decided to use PostgreSQL for the main database. Redis for caching.",
)

# Search by keyword
results = mv.find("database")
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

# Ask a question
answer = mv.ask("What database are we using?", model="openai:gpt-4o-mini")
print(answer["text"])

# Close the file
mv.seal()

Core API

Opening and Creating

from memvid_sdk import create, use

# Create a new memory file
mv = create("notes.mv2")

# Open an existing file
mv = use("basic", "notes.mv2", mode="open")

# Create or open (auto mode)
mv = use("basic", "notes.mv2", mode="auto")

# Open read-only
mv = use("basic", "notes.mv2", read_only=True)

# Context manager (auto-closes)
with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Content here")

Storing Documents

# Store text content
mv.put(
    title="Meeting Notes",
    label="meeting",
    text="Discussed the new API design.",
    metadata={"date": "2024-01-15", "priority": "high"},
    tags=["api", "design", "q1"]
)

# Store a file (PDF, DOCX, TXT, etc.)
mv.put(
    title="Q4 Report",
    label="reports",
    file="./documents/q4-report.pdf"
)

# Store with both text and file
mv.put(
    title="Contract Summary",
    label="legal",
    text="Key terms: 2-year agreement, auto-renewal clause.",
    file="./contracts/agreement.pdf"
)

Batch Ingestion

For large imports, put_many is significantly faster than repeated put calls:

documents = [
    {"title": "Doc 1", "label": "notes", "text": "First document content..."},
    {"title": "Doc 2", "label": "notes", "text": "Second document content..."},
    # ... thousands more
]

frame_ids = mv.put_many(documents)
print(f"Added {len(frame_ids)} documents")

Searching

# Lexical search (BM25 ranking)
results = mv.find("machine learning", k=10)

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

Search parameters:

Parameter      Type  Description
-------------  ----  ------------------------------
k              int   Number of results (default: 5)
snippet_chars  int   Snippet length (default: 240)
mode           str   "lex", "sem", or "auto"
scope          str   Filter results by URI prefix
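
A hedged example combining these parameters (the query string and the "reports/" scope prefix are illustrative, not part of the API):

# Top 10 lexical matches with short snippets, scoped to a URI prefix
results = mv.find(
    "quarterly revenue",
    k=10,
    snippet_chars=120,
    mode="lex",
    scope="reports/",   # hypothetical prefix
)
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")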

Semantic Search

Semantic search requires embeddings. Generate them during ingestion:

# Using local embeddings (bge-small, nomic, etc.)
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="bge-small"
)

# Using OpenAI embeddings
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="openai-small"  # requires OPENAI_API_KEY
)

Then search semantically:

results = mv.find("neural networks", mode="sem")

Windows users: local embedding models (bge-small, nomic, etc.) are not available on Windows due to ONNX Runtime limitations. Set OPENAI_API_KEY and use OpenAI embeddings instead.

Question Answering (RAG)

# Basic RAG query
answer = mv.ask("What did we decide about the database?")
print(answer["text"])

# With specific model
answer = mv.ask(
    "Summarize the meeting notes",
    model="openai:gpt-4o-mini",
    k=6  # number of documents to retrieve
)

# Get context only (no LLM synthesis)
context = mv.ask("What was discussed?", context_only=True)
print(context["context"])  # Retrieved document snippets

Timeline and Stats

# Get recent entries
entries = mv.timeline(limit=20)

# Get statistics
stats = mv.stats()
print(f"Documents: {stats['frame_count']}")
print(f"Size: {stats['size_bytes']} bytes")

Closing

Always seal the memory file when you're done:

mv.seal()

Or use a context manager for automatic cleanup.
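
For example, reusing the pattern from Opening and Creating, the file is sealed when the block exits:

from memvid_sdk import use

with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Sealed automatically on exit")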

External Embeddings

For more control over embeddings, use external providers:

from memvid_sdk import create
from memvid_sdk.embeddings import OpenAIEmbeddings

# Create memory with vector index enabled
mv = create("knowledge.mv2", enable_vec=True, enable_lex=True)

# Initialize embedding provider
embedder = OpenAIEmbeddings(model="text-embedding-3-small")

# Prepare documents
documents = [
    {"title": "ML Basics", "label": "ai", "text": "Machine learning enables systems to learn from data."},
    {"title": "Deep Learning", "label": "ai", "text": "Deep learning uses neural networks with multiple layers."},
]

# Generate embeddings
texts = [doc["text"] for doc in documents]
embeddings = embedder.embed_documents(texts)

# Store documents with pre-computed embeddings
frame_ids = mv.put_many(documents, embeddings=embeddings)

# Search using external embeddings
query = "neural networks"
query_embedding = embedder.embed_query(query)
results = mv.find(query, k=3, query_embedding=query_embedding, mode="sem")

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['score']:.3f}")

Built-in providers:

  • OpenAIEmbeddings (requires OPENAI_API_KEY)
  • CohereEmbeddings (requires COHERE_API_KEY)
  • VoyageEmbeddings (requires VOYAGE_API_KEY)
  • NvidiaEmbeddings (requires NVIDIA_API_KEY)
  • GeminiEmbeddings (requires GOOGLE_API_KEY or GEMINI_API_KEY)
  • MistralEmbeddings (requires MISTRAL_API_KEY)
  • HuggingFaceEmbeddings (local, no API key)

Use the factory function for quick setup:

from memvid_sdk.embeddings import get_embedder

# Create any supported provider
embedder = get_embedder("openai")  # or "cohere", "voyage", "nvidia", "gemini", "mistral", "huggingface"
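
The providers returned by the factory are assumed here to share the embed_documents/embed_query interface shown above, so they drop into the same pre-computed-embedding flow. A sketch, reusing mv and documents from the external-embeddings example:

embedder = get_embedder("huggingface")  # local provider, no API key needed

embeddings = embedder.embed_documents([doc["text"] for doc in documents])
frame_ids = mv.put_many(documents, embeddings=embeddings)

query_embedding = embedder.embed_query("neural networks")
results = mv.find("neural networks", k=3, query_embedding=query_embedding, mode="sem")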

Framework Integrations

LangChain

mv = use("langchain", "notes.mv2")
tools = mv.tools  # List of StructuredTool instances
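
A minimal sketch of handing those tools to a LangChain chat model (assumes langchain-openai is installed; the tool names printed are whatever the SDK registers):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools(mv.tools)  # expose memvid operations as tool calls

for tool in mv.tools:
    print(tool.name, "-", tool.description)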

LlamaIndex

mv = use("llamaindex", "notes.mv2")
engine = mv.as_query_engine()
response = engine.query("What is the timeline?")

OpenAI Function Calling

mv = use("openai", "notes.mv2")
functions = mv.functions  # JSON schemas for tool_calls
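
A hedged sketch of passing those schemas to the Chat Completions API (assumes each entry in mv.functions is a bare function schema that still needs the {"type": "function"} wrapper; adjust if the SDK already wraps them):

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What did we decide about the database?"}],
    tools=[{"type": "function", "function": fn} for fn in mv.functions],
)
print(response.choices[0].message.tool_calls)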

CrewAI

mv = use("crewai", "notes.mv2")
tools = mv.tools  # CrewAI-compatible tools

Error Handling

Typed exceptions for programmatic handling:

from memvid_sdk import CapacityExceededError, LockedError, EmbeddingFailedError

try:
    mv.put(title="Doc", text="Content")
except CapacityExceededError:
    print("Storage capacity exceeded")
except LockedError:
    print("File is locked by another process")
except EmbeddingFailedError:
    print("Embedding generation failed")

Common exceptions:

Code   Exception              Description
-----  ---------------------  ------------------------------
MV001  CapacityExceededError  Storage capacity exceeded
MV007  LockedError            File locked by another process
MV010  FrameNotFoundError     Frame not found
MV013  FileNotFoundError      File not found
MV015  EmbeddingFailedError   Embedding generation failed

Environment Variables

Variable           Description
-----------------  ---------------------------------------
OPENAI_API_KEY     For OpenAI embeddings and LLM synthesis
OPENAI_BASE_URL    Custom OpenAI-compatible endpoint
NVIDIA_API_KEY     For NVIDIA NIM embeddings
MEMVID_MODELS_DIR  Local embedding model cache directory
MEMVID_API_KEY     For capacity beyond the free tier
MEMVID_OFFLINE     Set to 1 to disable network features
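
For example, a fully offline session might look like this (a sketch; assumes any local models are already cached, and the models directory shown is hypothetical):

import os

os.environ["MEMVID_OFFLINE"] = "1"                      # disable network features
os.environ["MEMVID_MODELS_DIR"] = "/opt/memvid/models"  # hypothetical cache path

from memvid_sdk import use

with use("basic", "notes.mv2", mode="auto") as mv:
    print(mv.find("roadmap")["hits"])  # lexical search needs no network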

Platform Support

Platform  Architecture           Local Embeddings
--------  ---------------------  ----------------
macOS     ARM64 (Apple Silicon)  Yes
macOS     x64 (Intel)            Yes
Linux     x64 (glibc)            Yes
Windows   x64                    No (use OpenAI)

Requirements

  • Python 3.8 or later
  • For local embeddings: macOS or Linux (Windows requires OpenAI)

License

Apache-2.0
