Zvec MCP Server
A Model Context Protocol (MCP) server for Zvec, a high-performance embedded vector database by Alibaba.
Overview
This MCP server enables LLMs to interact with Zvec vector database through well-designed tools. It provides comprehensive functionality for:
- Collection Management: Create, open, and manage vector database collections
- Document Operations: Insert, update, delete, and fetch documents with full CRUD support
- Vector Search: Single-vector and multi-vector similarity search with re-ranking
- Index Management: Create and manage vector indexes (HNSW, IVF, FLAT) for fast retrieval
- AI Embedding: OpenAI-powered dense embedding with automatic text-to-vector conversion
Features
- 17 Comprehensive Tools: Full API coverage for common vector database operations
- AI-Powered Embedding: Built-in OpenAI embedding for semantic search
- Multiple Response Formats: Support both JSON and Markdown output formats
- Multi-Vector Search: Combine multiple embeddings with advanced re-ranking
- Hybrid Search: Combine vector similarity with scalar filters
- Session Management: Collection caching for efficient multi-operation workflows
- Type Safety: Full Pydantic v2 validation for all inputs
- Rich Documentation: Detailed tool descriptions with examples
- Tested: Comprehensive pytest test suite
Installation
Requirements
- Python 3.10 - 3.14
- Supported platforms: Linux (x86_64, ARM64), macOS (ARM64)
Using uv (Recommended)
uv is a fast Python package installer and resolver, 10-100x faster than pip.
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh
# Clone the repository
git clone https://github.com/zvec-ai/zvec-mcp-server.git
cd zvec-mcp-server
# Create virtual environment
uv venv
# Activate virtual environment
source .venv/bin/activate # On macOS/Linux
# .venv\Scripts\activate # On Windows
# Install the package
uv pip install -e .
# Install with development dependencies (includes pytest)
uv pip install -e ".[dev]"
Using pip
# Clone the repository
git clone https://github.com/zvec-ai/zvec-mcp-server.git
cd zvec-mcp-server
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
# Install the package
pip install -e .
# Install with development dependencies
pip install -e ".[dev]"
Quick Start
Running the Server
# Using the installed package
python -m zvec_mcp
# Or with uv
uv run python -m zvec_mcp
# Test with MCP Inspector
npx @modelcontextprotocol/inspector python -m zvec_mcp
Basic Usage Example
# 1. Create and open a collection
create_and_open_collection({
"path": "./my_vectors",
"collection_name": "docs_col",
"vector_fields": [
{
"name": "embedding",
"data_type": "VECTOR_FP32",
"dimension": 1536
}
],
"scalar_fields": [
{
"name": "title",
"data_type": "STRING",
"nullable": False
}
]
})
# 2. Insert documents with auto-generated embeddings (requires OPENAI_API_KEY)
embedding_write({
"collection_name": "docs_col",
"field_name": "embedding",
"documents": [
{
"id": "doc1",
"text": "This is a sample document about machine learning.",
"fields": {"title": "ML Introduction"}
}
]
})
# 3. Semantic search with natural language query
embedding_search({
"collection_name": "docs_col",
"field_name": "embedding",
"query_text": "artificial intelligence and neural networks",
"topk": 10
})
Available Tools
Collection Management (4 tools)
- create_and_open_collection - Create a new collection with schema and auto-create indexes
- open_collection - Open an existing collection into the session cache
- get_collection_info - Get schema and statistics
- destroy_collection - Permanently delete a collection
Document Operations (5 tools)
- insert_documents - Insert new documents (fail if exists)
- upsert_documents - Insert or update documents
- update_documents - Update existing documents
- delete_documents - Delete documents by ID
- fetch_documents - Retrieve documents by ID
Vector Search (2 tools)
- vector_query - Single-vector similarity search with optional filtering
- multi_vector_query - Multi-vector search with re-ranking (Weighted/RRF)
Index Management (3 tools)
- create_index - Create a vector index (HNSW/IVF/FLAT) or scalar index (INVERT)
- drop_index - Remove an index from a field
- optimize_collection - Optimize the collection for better performance
AI Embedding (3 tools)
- generate_dense_embedding - Generate an embedding for text using the OpenAI API
- embedding_write - Auto-embed text documents and upsert them to a collection
- embedding_search - Natural language semantic search with auto-embedding
Tool Details
Vector Data Types
- VECTOR_FP32, VECTOR_FP64, VECTOR_FP16 - Dense float vectors
- VECTOR_INT8 - Dense integer vectors
- SPARSE_VECTOR_FP32, SPARSE_VECTOR_FP16 - Sparse vectors (Dict[int, float])
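As an illustration of the sparse representation: a sparse vector maps dimension indices to non-zero values, and similarity is an inner product over the indices present in both vectors. A small sketch (illustrative only, not Zvec's implementation):

```python
def sparse_dot(a: dict[int, float], b: dict[int, float]) -> float:
    # Iterate the smaller dict and sum products over shared indices
    if len(a) > len(b):
        a, b = b, a
    return sum(v * b[i] for i, v in a.items() if i in b)

query = {1: 0.7, 5: 0.5}
doc = {1: 0.8, 5: 0.6, 10: 0.4}
print(sparse_dot(query, doc))  # 0.7*0.8 + 0.5*0.6 = 0.86
```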
Scalar Data Types
- INT32, INT64, UINT32, UINT64 - Integer types
- FLOAT, DOUBLE - Floating-point types
- STRING, BOOL - Text and boolean
Index Types
Vector Indexes:
- HNSW - Hierarchical Navigable Small World (recommended for most cases)
- IVF - Inverted File Index (good for large datasets)
- FLAT - Brute-force exact search (small datasets)
Scalar Indexes:
- INVERT - Inverted index for scalar fields with optional range optimization
Distance Metrics
- COSINE - Cosine similarity
- IP - Inner product
- L2 - Euclidean distance
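To make the metrics concrete, here is a minimal pure-Python sketch of how each score is conventionally defined (illustrative only; Zvec's internal implementation is optimized and may differ):

```python
import math

def inner_product(a, b):
    # IP: higher means more similar
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # COSINE: inner product scaled by both vector norms, in [-1, 1]
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return inner_product(a, b) / (norm_a * norm_b)

def l2_distance(a, b):
    # L2: Euclidean distance, lower means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a, b = [1.0, 0.0], [0.6, 0.8]
print(inner_product(a, b))      # 0.6
print(cosine_similarity(a, b))  # 0.6 (both vectors have unit norm)
print(l2_distance(a, b))        # sqrt(0.8) ≈ 0.894
```

Note that IP and COSINE are similarities (bigger is better) while L2 is a distance (smaller is better), which matters when normalizing scores for weighted re-ranking.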
Re-ranking Strategies (Multi-Vector Query)
- WEIGHTED - Weighted score fusion with custom weights per field
- RRF - Reciprocal Rank Fusion (rank-based fusion)
Architecture
Modular Structure
zvec-mcp-server/
├── src/
│   └── zvec_mcp/
│       ├── __init__.py      # Package entry point
│       ├── server.py        # MCP server implementation (17 tools)
│       ├── schemas.py       # Pydantic input validation models
│       ├── types.py         # Enums and type definitions
│       └── utils.py         # Helper functions and formatters
├── tests/
│   └── test_server.py       # Pytest test suite
├── pyproject.toml           # Project configuration
├── README.md                # This file
├── CONTRIBUTING.md          # Contribution guidelines
└── LICENSE                  # Apache 2.0 License
Session Management
The server maintains an in-memory cache of opened collections identified by collection_name. This allows:
- Multiple operations on the same collection without reopening
- Efficient workflow execution
- Clear separation between different collections
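Conceptually, the cache behaves like the sketch below. The names (`_collections`, `ZvecCollection`) are illustrative stand-ins, not the server's actual internals:

```python
# Illustrative session cache: collection_name -> opened collection handle.
class ZvecCollection:
    # Stand-in for the real Zvec collection object
    def __init__(self, path: str, name: str):
        self.path, self.name = path, name

_collections: dict[str, ZvecCollection] = {}

def open_collection(path: str, collection_name: str) -> ZvecCollection:
    # Reuse the cached handle if the collection is already open
    if collection_name not in _collections:
        _collections[collection_name] = ZvecCollection(path, collection_name)
    return _collections[collection_name]

first = open_collection("./my_vectors", "docs_col")
second = open_collection("./my_vectors", "docs_col")
assert first is second  # same cached handle, no reopen
```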
MCP Resources
The server exposes two MCP resources for introspection:
- zvec://collections - List all opened collections in the current session
- zvec://collection/{collection_name} - Get detailed schema and stats for a specific collection
Error Handling
All tools provide clear, actionable error messages:
- Resource not found errors with suggestions
- Validation errors from Pydantic v2
- Zvec API errors with context
Response Formats
Tools support two output formats:
- JSON: Structured data for programmatic processing
- Markdown: Human-readable formatted text with headers and lists
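As an illustration of the two formats, the sketch below renders the same result both ways (the server's actual output layout may differ):

```python
import json

def format_result(result: dict, output_format: str = "json") -> str:
    # Illustrative formatter: JSON for machines, Markdown for humans
    if output_format == "json":
        return json.dumps(result, indent=2)
    lines = [f"# {result['title']}"]
    lines += [f"- {key}: {value}" for key, value in result.items() if key != "title"]
    return "\n".join(lines)

result = {"title": "docs_col", "documents": 2, "indexes": 1}
print(format_result(result, "markdown"))
# # docs_col
# - documents: 2
# - indexes: 1
```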
Development
Running Tests
The project includes a comprehensive pytest test suite with 21 test cases covering all functionality.
# Install dev dependencies (includes pytest and pytest-asyncio)
uv pip install -e ".[dev]"
# Run all tests
pytest tests/test_server.py -v
# Run specific test class
pytest tests/test_server.py::TestMultiVectorQuery -v
# Run with coverage report
pytest tests/test_server.py --cov=zvec_mcp --cov-report=html
# Run tests with output
pytest tests/test_server.py -v -s
Testing with MCP Inspector
# Test the server interactively
npx @modelcontextprotocol/inspector python -m zvec_mcp
Code Quality
# Run linter
ruff check src/
# Format code
ruff format src/
Example Workflows
1. Quick Start with AI Embedding
# Set OPENAI_API_KEY before running
# Create a collection for 1536-dim OpenAI embeddings
create_and_open_collection({
"path": "./my_vectors",
"collection_name": "docs_col",
"vector_fields": [
{
"name": "embedding",
"data_type": "VECTOR_FP32",
"dimension": 1536
}
],
"scalar_fields": [
{"name": "title", "data_type": "STRING", "nullable": False},
{"name": "category", "data_type": "STRING", "nullable": True}
]
})
# Write documents with auto-generated embeddings
embedding_write({
"collection_name": "docs_col",
"field_name": "embedding",
"documents": [
{
"id": "doc1",
"text": "Machine learning is a subset of artificial intelligence...",
"fields": {"title": "ML Basics", "category": "AI"}
},
{
"id": "doc2",
"text": "Neural networks are inspired by biological neurons...",
"fields": {"title": "Neural Networks", "category": "AI"}
}
]
})
# Semantic search with natural language
embedding_search({
"collection_name": "docs_col",
"field_name": "embedding",
"query_text": "How do artificial neurons work?",
"topk": 5,
"filter": 'category == "AI"'
})
2. Filtered Semantic Search
# Search with scalar filters
embedding_search({
"collection_name": "docs_col",
"field_name": "embedding",
"query_text": "deep learning frameworks",
"topk": 10,
"filter": 'publish_year > 2020 AND category == "tech"'
})
3. Multi-Vector Search with Re-ranking
# Create collection with multiple vector fields
create_and_open_collection({
"path": "./multi_vectors",
"collection_name": "hybrid_col",
"vector_fields": [
{
"name": "dense_embedding",
"data_type": "VECTOR_FP32",
"dimension": 1536
},
{
"name": "sparse_embedding",
"data_type": "SPARSE_VECTOR_FP32",
"dimension": 250002
}
]
})
# Insert documents with multiple embeddings
insert_documents({
"collection_name": "hybrid_col",
"documents": [
{
"id": "doc1",
"vectors": {
"dense_embedding": [0.1, 0.2, ...],
"sparse_embedding": {1: 0.8, 5: 0.6, 10: 0.4}
}
}
]
})
# Multi-vector query with Weighted re-ranker
multi_vector_query({
"collection_name": "hybrid_col",
"vectors": [
{"field_name": "dense_embedding", "vector": [0.15, 0.25, ...]},
{"field_name": "sparse_embedding", "vector": {1: 0.7, 5: 0.5}}
],
"topk": 20,
"topn": 5,
"reranker_type": "weighted",
"weights": {"dense_embedding": 1.5, "sparse_embedding": 1.0},
"metric_type": "IP"
})
Multi-Vector Search Deep Dive
Why Multi-Vector Search?
Modern AI applications often use multiple embeddings for the same content:
- Dense + Sparse: Combines semantic understanding (dense) with keyword matching (sparse)
- Text + Image: Multi-modal search across different content types
- Multiple Models: Different embedding models capture different aspects
Re-ranking Strategies
Weighted Re-ranker
Combines normalized scores from each field using custom weights:
final_score = w1 * score1 + w2 * score2 + ...
Best for:
- When scores are comparable across fields
- You know the relative importance of each field
- Need fine-grained control over fusion
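A minimal sketch of the weighted fusion formula above, assuming per-field scores have already been normalized to a comparable range (the document IDs, scores, and weights are illustrative):

```python
def weighted_fusion(scores_per_field, weights):
    """Combine per-field similarity scores into one final score per document.

    scores_per_field: {field_name: {doc_id: normalized_score}}
    weights: {field_name: weight}
    """
    final = {}
    for field, scores in scores_per_field.items():
        w = weights[field]
        for doc_id, score in scores.items():
            final[doc_id] = final.get(doc_id, 0.0) + w * score
    # Highest combined score first
    return sorted(final.items(), key=lambda kv: kv[1], reverse=True)

scores = {
    "dense_embedding": {"doc1": 0.9, "doc2": 0.4},
    "sparse_embedding": {"doc1": 0.2, "doc2": 0.8},
}
fused = weighted_fusion(scores, {"dense_embedding": 1.5, "sparse_embedding": 1.0})
# doc1: 1.5*0.9 + 1.0*0.2 = 1.55; doc2: 1.5*0.4 + 1.0*0.8 = 1.40
print(fused)
```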
RRF (Reciprocal Rank Fusion)
Combines results based on rank positions:
rrf_score = sum(1 / (rank_constant + rank_i))
Best for:
- Different distance metrics across fields
- Scores not directly comparable
- Standard, parameter-free fusion (k=60 is typical)
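The RRF formula above can be sketched as follows, where each inner list is one field's ranking, best match first (document IDs are illustrative):

```python
def rrf_fusion(rankings, rank_constant=60):
    """rankings: list of ranked doc-id lists, one per vector field (best first)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Each appearance contributes 1 / (rank_constant + rank)
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (rank_constant + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

dense_ranking = ["doc1", "doc2", "doc3"]
sparse_ranking = ["doc3", "doc1", "doc2"]
# doc1 wins: ranked 1st by dense and 2nd by sparse
print(rrf_fusion([dense_ranking, sparse_ranking]))
```

Because only rank positions enter the formula, RRF needs no score normalization across fields.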
Parameters Explained
- topk: Number of candidates retrieved from each vector field
- topn: Final number of documents returned after re-ranking
- weights: Custom weights for each field (Weighted re-ranker only)
- rank_constant: RRF parameter, typically 60 (RRF re-ranker only)
- metric_type: Distance metric for normalization (Weighted re-ranker only)
Example Use Cases
Hybrid BM25 + Dense Search:
# Combine traditional keyword search (sparse) with semantic search (dense)
multi_vector_query({
"vectors": [
{"field_name": "bm25_sparse", "vector": bm25_vector},
{"field_name": "bert_dense", "vector": bert_embedding}
],
"reranker_type": "weighted",
"weights": {"bm25_sparse": 0.4, "bert_dense": 0.6}
})
Cross-Modal Image-Text Search:
# Search across image and text embeddings
multi_vector_query({
"vectors": [
{"field_name": "clip_image", "vector": image_embedding},
{"field_name": "clip_text", "vector": text_embedding}
],
"reranker_type": "rrf",
"rank_constant": 60
})
License
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Contributing
Please see CONTRIBUTING.md for guidelines on how to contribute to this project.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.