
memvid-sdk

A single-file AI memory system for Python. Store documents, search with BM25 + vector ranking, and run RAG queries from a portable .mv2 file.

Built in Rust with PyO3 bindings. No database setup, no external services required.

Install

pip install memvid-sdk

For framework integrations:

pip install "memvid-sdk[langchain]"    # LangChain tools
pip install "memvid-sdk[llamaindex]"   # LlamaIndex query engine
pip install "memvid-sdk[openai]"       # OpenAI function schemas
pip install "memvid-sdk[full]"         # All integrations

Quick Start

from memvid_sdk import create

# Create a memory file
mv = create("notes.mv2")

# Store some documents
mv.put(
    title="Project Update",
    label="meeting",
    text="Discussed Q4 roadmap. Alice will handle the frontend refactor.",
    metadata={"date": "2024-01-15", "attendees": ["Alice", "Bob"]}
)

mv.put(
    title="Technical Decision",
    label="architecture",
    text="Decided to use PostgreSQL for the main database. Redis for caching.",
)

# Search by keyword
results = mv.find("database")
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

# Ask a question
answer = mv.ask("What database are we using?", model="openai:gpt-4o-mini")
print(answer["text"])

# Close the file
mv.seal()

Core API

Opening and Creating

from memvid_sdk import create, use

# Create a new memory file
mv = create("notes.mv2")

# Open an existing file
mv = use("basic", "notes.mv2", mode="open")

# Create or open (auto mode)
mv = use("basic", "notes.mv2", mode="auto")

# Open read-only
mv = use("basic", "notes.mv2", read_only=True)

# Context manager (auto-closes)
with use("basic", "notes.mv2") as mv:
    mv.put(title="Note", label="general", text="Content here")

Storing Documents

# Store text content
mv.put(
    title="Meeting Notes",
    label="meeting",
    text="Discussed the new API design.",
    metadata={"date": "2024-01-15", "priority": "high"},
    tags=["api", "design", "q1"]
)

# Store a file (PDF, DOCX, TXT, etc.)
mv.put(
    title="Q4 Report",
    label="reports",
    file="./documents/q4-report.pdf"
)

# Store with both text and file
mv.put(
    title="Contract Summary",
    label="legal",
    text="Key terms: 2-year agreement, auto-renewal clause.",
    file="./contracts/agreement.pdf"
)

Batch Ingestion

For large imports, put_many is significantly faster than calling put in a loop:

documents = [
    {"title": "Doc 1", "label": "notes", "text": "First document content..."},
    {"title": "Doc 2", "label": "notes", "text": "Second document content..."},
    # ... thousands more
]

frame_ids = mv.put_many(documents)
print(f"Added {len(frame_ids)} documents")

Searching

# Lexical search (BM25 ranking)
results = mv.find("machine learning", k=10)

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")

Search parameters:

Parameter       Type   Description
k               int    Number of results (default: 5)
snippet_chars   int    Snippet length in characters (default: 240)
mode            str    "lex", "sem", or "auto"
scope           str    Filter by URI prefix
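
For example, a lexical search combining these parameters might look like this (a sketch; the "reports/" scope prefix is illustrative):

# Top 10 lexical matches with longer snippets, restricted
# to documents whose URI starts with "reports/" (hypothetical prefix)
results = mv.find(
    "quarterly revenue",
    k=10,
    snippet_chars=400,
    mode="lex",
    scope="reports/",
)
for hit in results["hits"]:
    print(f"{hit['title']}: {hit['snippet']}")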

Semantic Search

Semantic search requires embeddings. Generate them during ingestion:

# Using local embeddings (bge-small, nomic, etc.)
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="bge-small"
)

# Using OpenAI embeddings
mv.put(
    title="Document",
    text="Content here...",
    enable_embedding=True,
    embedding_model="openai-small"  # requires OPENAI_API_KEY
)

Then search semantically:

results = mv.find("neural networks", mode="sem")

Windows users: Local embedding models (bge-small, nomic, etc.) are not available on Windows due to ONNX runtime limitations. Use OpenAI embeddings instead by setting OPENAI_API_KEY.

Question Answering (RAG)

# Basic RAG query
answer = mv.ask("What did we decide about the database?")
print(answer["text"])

# With specific model
answer = mv.ask(
    "Summarize the meeting notes",
    model="openai:gpt-4o-mini",
    k=6  # number of documents to retrieve
)

# Get context only (no LLM synthesis)
context = mv.ask("What was discussed?", context_only=True)
print(context["context"])  # Retrieved document snippets

Timeline and Stats

# Get recent entries
entries = mv.timeline(limit=20)

# Get statistics
stats = mv.stats()
print(f"Documents: {stats['frame_count']}")
print(f"Size: {stats['size_bytes']} bytes")

Closing

Always close the memory when done:

mv.seal()

Or use a context manager for automatic cleanup.

External Embeddings

For more control over embeddings, use external providers:

from memvid_sdk import create
from memvid_sdk.embeddings import OpenAIEmbeddings

# Create memory with vector index enabled
mv = create("knowledge.mv2", enable_vec=True, enable_lex=True)

# Initialize embedding provider
embedder = OpenAIEmbeddings(model="text-embedding-3-small")

# Prepare documents
documents = [
    {"title": "ML Basics", "label": "ai", "text": "Machine learning enables systems to learn from data."},
    {"title": "Deep Learning", "label": "ai", "text": "Deep learning uses neural networks with multiple layers."},
]

# Generate embeddings
texts = [doc["text"] for doc in documents]
embeddings = embedder.embed_documents(texts)

# Store documents with pre-computed embeddings
frame_ids = mv.put_many(documents, embeddings=embeddings)

# Search using external embeddings
query = "neural networks"
query_embedding = embedder.embed_query(query)
results = mv.find(query, k=3, query_embedding=query_embedding, mode="sem")

for hit in results["hits"]:
    print(f"{hit['title']}: {hit['score']:.3f}")

Built-in providers:

  • OpenAIEmbeddings (requires OPENAI_API_KEY)
  • CohereEmbeddings (requires COHERE_API_KEY)
  • VoyageEmbeddings (requires VOYAGE_API_KEY)
  • NvidiaEmbeddings (requires NVIDIA_API_KEY)
  • GeminiEmbeddings (requires GOOGLE_API_KEY or GEMINI_API_KEY)
  • MistralEmbeddings (requires MISTRAL_API_KEY)
  • HuggingFaceEmbeddings (local, no API key)

Use the factory function for quick setup:

from memvid_sdk.embeddings import get_embedder

# Create any supported provider
embedder = get_embedder("openai")  # or "cohere", "voyage", "nvidia", "gemini", "mistral", "huggingface"

Framework Integrations

LangChain

mv = use("langchain", "notes.mv2")
tools = mv.tools  # List of StructuredTool instances
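
The tools carry names and descriptions generated by the SDK; a quick way to inspect them (a sketch using the standard LangChain tool attributes):

# Each StructuredTool exposes .name and .description
for tool in mv.tools:
    print(tool.name, "-", tool.description)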

LlamaIndex

mv = use("llamaindex", "notes.mv2")
engine = mv.as_query_engine()
response = engine.query("What is the timeline?")

OpenAI Function Calling

mv = use("openai", "notes.mv2")
functions = mv.functions  # JSON schemas for tool_calls
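
A minimal sketch of wiring these schemas into a Chat Completions call, assuming they follow the OpenAI tools format (the model name and prompt here are illustrative):

from openai import OpenAI

client = OpenAI()

# Hand the SDK's JSON schemas to the model as callable tools
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What did we decide about the database?"}],
    tools=mv.functions,
)
print(response.choices[0].message.tool_calls)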

CrewAI

mv = use("crewai", "notes.mv2")
tools = mv.tools  # CrewAI-compatible tools
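
A sketch of attaching the tools to a CrewAI agent (the role, goal, and backstory values are illustrative):

from crewai import Agent

researcher = Agent(
    role="Research Assistant",
    goal="Answer questions from the team's memvid memory",
    backstory="Has access to the project's notes and decisions.",
    tools=mv.tools,  # memory search/store tools from the SDK
)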

Error Handling

Typed exceptions for programmatic handling:

from memvid_sdk import CapacityExceededError, LockedError, EmbeddingFailedError

try:
    mv.put(title="Doc", text="Content")
except CapacityExceededError:
    print("Storage capacity exceeded")
except LockedError:
    print("File is locked by another process")
except EmbeddingFailedError:
    print("Embedding generation failed")

Common exceptions:

Code   Exception              Description
MV001  CapacityExceededError  Storage capacity exceeded
MV007  LockedError            File locked by another process
MV010  FrameNotFoundError     Frame not found
MV013  FileNotFoundError      File not found
MV015  EmbeddingFailedError   Embedding generation failed

Environment Variables

Variable           Description
OPENAI_API_KEY     For OpenAI embeddings and LLM synthesis
OPENAI_BASE_URL    Custom OpenAI-compatible endpoint
NVIDIA_API_KEY     For NVIDIA NIM embeddings
MEMVID_MODELS_DIR  Local embedding model cache directory
MEMVID_API_KEY     For capacity beyond the free tier
MEMVID_OFFLINE     Set to 1 to disable network features
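
For example, to force fully offline operation with a local model cache (a sketch; set these before opening a memory file):

import os

os.environ["MEMVID_OFFLINE"] = "1"            # disable network features
os.environ["MEMVID_MODELS_DIR"] = "./models"  # local embedding model cache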

Platform Support

Platform  Architecture           Local Embeddings
macOS     ARM64 (Apple Silicon)  Yes
macOS     x64 (Intel)            Yes
Linux     x64 (glibc)            Yes
Windows   x64                    No (use OpenAI)

Requirements

  • Python 3.8 or later
  • For local embeddings: macOS or Linux (Windows requires OpenAI)

License

Apache-2.0
