# LlamaIndex VelesDB Integration

VelesDB vector store integration for LlamaIndex. VelesDB is a local AI memory database built for microsecond RAG retrieval.
## Features
- 🚀 Sub-millisecond search — SIMD-optimized vector retrieval
- 📦 Self-contained — Single VelesDB binary, no external services required
- 🔒 Local-first — All data stays on your machine
- 🧠 RAG-ready — Built for Retrieval-Augmented Generation
- 🔀 Multi-Query Fusion — Native MQG support with RRF/Weighted strategies
## Installation

```bash
pip install llama-index-vector-stores-velesdb
```
## Quick Start

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llamaindex_velesdb import VelesDBVectorStore

# Create vector store
vector_store = VelesDBVectorStore(
    path="./velesdb_data",
    collection_name="my_docs",
    metric="cosine",
)

# Load and index documents
documents = SimpleDirectoryReader("./data").load_data()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What is VelesDB?")
print(response)
```
## Usage with Existing Index

```python
from llama_index.core import VectorStoreIndex
from llamaindex_velesdb import VelesDBVectorStore

# Connect to existing data
vector_store = VelesDBVectorStore(path="./existing_data")
index = VectorStoreIndex.from_vector_store(vector_store)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the key points")
```
## API Reference

### VelesDBVectorStore

```python
VelesDBVectorStore(
    path: str = "./velesdb_data",         # Database directory
    collection_name: str = "llamaindex",  # Collection name
    metric: str = "cosine",               # Distance metric
    storage_mode: str = "full",           # Storage / quantization mode
    search_quality: str | None = None,    # Quality preset (see below)
)
```
**Parameters:**

| Parameter | Type | Default | Description |
|---|---|---|---|
| `path` | `str` | `"./velesdb_data"` | Path to database directory |
| `collection_name` | `str` | `"llamaindex"` | Name of the collection |
| `metric` | `str` | `"cosine"` | Distance metric: `cosine`, `euclidean`, `dot` (aliases: `dotproduct`, `inner`, `ip`), `hamming`, `jaccard` |
| `storage_mode` | `str` | `"full"` | Storage mode: `full`/`f32`, `sq8`/`int8` (4× compression), `binary`/`bit` (32× compression), `pq` (8-32× compression), `rabitq` (32× with scalar correction) |
| `search_quality` | `str \| None` | `None` | Quality preset: `fast`, `balanced`, `accurate`, `perfect`, `autotune`, `custom:N`, `adaptive:MIN:MAX` |
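The compression factors quoted for `storage_mode` follow directly from the bits stored per dimension (32 for `f32`, 8 for `sq8`, 1 for `binary`). A rough sanity check, independent of VelesDB (pure Python, illustrative numbers, ignoring index and metadata overhead):

```python
def index_size_bytes(n_vectors: int, dims: int, bits_per_dim: float) -> float:
    """Raw vector storage for a collection, ignoring graph/metadata overhead."""
    return n_vectors * dims * bits_per_dim / 8

n, d = 1_000_000, 768
full = index_size_bytes(n, d, 32)    # f32: 32 bits per dimension
sq8 = index_size_bytes(n, d, 8)      # int8 scalar quantization
binary = index_size_bytes(n, d, 1)   # 1 bit per dimension

print(f"full:   {full / 2**30:.2f} GiB")
print(f"sq8:    {sq8 / 2**30:.2f} GiB ({full / sq8:.0f}x smaller)")
print(f"binary: {binary / 2**30:.2f} GiB ({full / binary:.0f}x smaller)")
```

The `pq` and `rabitq` ratios vary with configuration, which is why the table gives ranges rather than a single factor.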
**Methods:**

| Method | Description |
|---|---|
| **Core Operations** | |
| `add(nodes)` | Add nodes with embeddings |
| `add_bulk(nodes)` | Bulk insert (2-3× faster for large batches) |
| `delete(ref_doc_id)` | Delete by document ID |
| `get_nodes(node_ids)` | Retrieve nodes by their IDs |
| `flush()` | Flush pending changes to disk |
| **Search** | |
| `query(query)` | Query with vector |
| `batch_query(queries)` | Batch query multiple vectors in parallel |
| `multi_query_search(query_embeddings, ...)` | Multi-query fusion search ⭐ NEW |
| `hybrid_query(query_str, query_embedding, ...)` | Hybrid vector+BM25 search |
| `text_query(query_str, ...)` | Full-text BM25 search |
| `velesql(query_str, params)` | Execute VelesQL query |
| **Utilities** | |
| `get_collection_info()` | Get collection metadata |
| `is_empty()` | Check if collection is empty |
| `scroll(batch_size)` | Iterate all points in stable batches without a query vector |
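`scroll(batch_size)` is useful for exports and re-embedding jobs because iterating in a stable order avoids the duplicates and gaps that paging over search results can produce. A pure-Python sketch of the idea (the `points` dict and the helper below are illustrative, not VelesDB API):

```python
from typing import Iterator

def scroll(points: dict[str, dict], batch_size: int) -> Iterator[list[tuple[str, dict]]]:
    """Yield all points in stable batches, ordered by ID (no query vector needed)."""
    ordered = sorted(points.items())  # stable order => no duplicates or gaps
    for start in range(0, len(ordered), batch_size):
        yield ordered[start:start + batch_size]

points = {f"p{i:03d}": {"v": i} for i in range(7)}
batches = list(scroll(points, batch_size=3))
print([len(b) for b in batches])  # batch sizes: 3, 3, 1
```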
## Advanced Features

### Multi-Query Fusion (MQG)

Search with multiple query embeddings and fuse results using various strategies. Well suited to RAG pipelines that use Multiple Query Generation (MQG).
```python
from llamaindex_velesdb import VelesDBVectorStore

vector_store = VelesDBVectorStore(path="./velesdb_data")

# Basic usage with RRF (Reciprocal Rank Fusion)
results = vector_store.multi_query_search(
    query_embeddings=[emb1, emb2, emb3],  # Multiple query reformulations
    similarity_top_k=10,
    fusion="rrf",
    fusion_params={"k": 60},
)

# With weighted fusion (like SearchXP's scoring)
results = vector_store.multi_query_search(
    query_embeddings=[emb1, emb2],
    similarity_top_k=10,
    fusion="weighted",
    fusion_params={
        "avg_weight": 0.6,  # Average score weight
        "max_weight": 0.3,  # Maximum score weight
        "hit_weight": 0.1,  # Hit ratio weight
    },
)

for node in results.nodes:
    print(f"{node.metadata}: {node.text[:50]}...")
```
**Fusion Strategies:**

- `"rrf"` - Reciprocal Rank Fusion (default, robust to score scale differences)
- `"average"` - Mean score across all queries
- `"maximum"` - Maximum score from any query
- `"weighted"` - Custom combination of avg, max, and hit ratio
- `"relative_score"` - Linear blend of dense and sparse scores
```python
# Relative Score Fusion
results = vector_store.multi_query_search(
    query_embeddings=[emb1, emb2],
    similarity_top_k=10,
    fusion="relative_score",
    fusion_params={"dense_weight": 0.7, "sparse_weight": 0.3},
)
```
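Reciprocal Rank Fusion scores a document by summing `1 / (k + rank)` over each per-query result list, which is why it tolerates differing score scales: only ranks matter. A self-contained sketch of the formula (pure Python; `k=60` matches the default `fusion_params` shown above, and the ID lists are made up):

```python
from collections import defaultdict

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Fuse ranked ID lists: score(d) = sum over lists of 1 / (k + rank(d))."""
    scores: dict[str, float] = defaultdict(float)
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Doc "a" is top-ranked in two of three query reformulations
fused = rrf_fuse([["a", "b", "c"], ["b", "a"], ["a", "c"]])
print(fused[0][0])  # "a"
```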
## Advanced Search

### `search_quality` — Quality Presets

Control the recall/latency trade-off for all queries with a single parameter, set at construction time or overridden per call via `query()` kwargs.
```python
from llama_index.core.vector_stores.types import VectorStoreQuery
from llamaindex_velesdb import VelesDBVectorStore

# Set once on the store — applies to every query() call
vector_store = VelesDBVectorStore(
    path="./velesdb_data",
    search_quality="accurate",  # higher recall at the cost of latency
)

q = VectorStoreQuery(query_embedding=embedding, similarity_top_k=10)
results = vector_store.query(q)

# Override per-call via kwargs
results = vector_store.query(q, search_quality="fast")
```
**Accepted values:**

| Value | Description |
|---|---|
| `"fast"` | Lowest latency, reduced recall |
| `"balanced"` | Balanced latency/recall |
| `"accurate"` | Higher recall, higher latency |
| `"perfect"` | Exhaustive search, maximum recall |
| `"autotune"` | Runtime-adaptive quality |
| `"custom:N"` | Explicit `ef_search` (e.g. `"custom:256"`) |
| `"adaptive:MIN:MAX"` | Adaptive `ef` range (e.g. `"adaptive:32:512"`) |
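The `custom:` and `adaptive:` forms pack integers into the preset string. A small illustrative parser showing the expected formats (VelesDB parses these internally; this helper is not part of the API):

```python
def parse_search_quality(value: str) -> dict:
    """Decode a search_quality preset string into its parameters."""
    presets = {"fast", "balanced", "accurate", "perfect", "autotune"}
    if value in presets:
        return {"preset": value}
    if value.startswith("custom:"):
        return {"ef_search": int(value.split(":", 1)[1])}
    if value.startswith("adaptive:"):
        _, lo, hi = value.split(":")
        return {"ef_min": int(lo), "ef_max": int(hi)}
    raise ValueError(f"unknown search_quality: {value!r}")

print(parse_search_quality("custom:256"))       # {'ef_search': 256}
print(parse_search_quality("adaptive:32:512"))  # {'ef_min': 32, 'ef_max': 512}
```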
### `query_with_ef(query_embedding, ef_search, top_k)`

Search with an explicit HNSW `ef_search` parameter to trade query latency for recall.

```python
# Higher ef_search = better recall, slower query
results = vector_store.query_with_ef(
    query_embedding=embedding,
    ef_search=256,
    top_k=10,
)
```
### `query_ids(query_embedding, top_k)`

Search returning only node IDs and scores, with no payloads transferred. Faster than `query()` when only IDs are needed (e.g., for post-processing pipelines).

```python
hits = vector_store.query_ids(query_embedding=embedding, top_k=50)
# [{"id": "abc", "score": 0.92}, ...]
```
### Server Mode: URL Validation

When connecting to a remote `velesdb-server` by passing a URL as the `path` parameter, `validate_url` is called automatically during initialization to reject malformed URLs before any network request is issued.
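The exact checks performed by `validate_url` are not documented here; the sketch below shows the kind of pre-flight validation described, using only the Python standard library (the helper name is hypothetical):

```python
from urllib.parse import urlparse

def looks_like_valid_server_url(path: str) -> bool:
    """Reject obviously malformed server URLs before any network request."""
    parsed = urlparse(path)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(looks_like_valid_server_url("http://localhost:8080"))  # True
print(looks_like_valid_server_url("htp:/broken"))            # False
```

Failing fast here turns a confusing connection timeout into an immediate, actionable error at construction time.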
### Hybrid Search (Vector + BM25)

```python
from llamaindex_velesdb import VelesDBVectorStore

vector_store = VelesDBVectorStore(path="./velesdb_data")

# Hybrid search combining semantic and keyword matching
results = vector_store.hybrid_query(
    query_str="machine learning optimization",
    query_embedding=embedding_model.get_query_embedding("machine learning optimization"),
    similarity_top_k=10,
    vector_weight=0.7,  # 70% vector, 30% BM25
)

for node in results.nodes:
    print(node.text)
```
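With `vector_weight=0.7` the final score is a weighted blend of the two signals, and since cosine and BM25 scores live on different scales, some normalization is typically involved. A pure-Python sketch of min-max normalized blending (illustrative of the weighting, not necessarily VelesDB's exact internals):

```python
def minmax(scores: list[float]) -> list[float]:
    """Rescale scores to [0, 1] so vector and BM25 values are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def blend(vec_scores: list[float], bm25_scores: list[float],
          vector_weight: float = 0.7) -> list[float]:
    """score = w * normalized_vector + (1 - w) * normalized_bm25, per document."""
    v, b = minmax(vec_scores), minmax(bm25_scores)
    return [vector_weight * x + (1 - vector_weight) * y for x, y in zip(v, b)]

# Three documents: strong-vector, strong-keyword, weak-on-both
print(blend([0.9, 0.5, 0.1], [2.0, 8.0, 5.0]))
```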
### Full-Text Search (BM25)

```python
# Pure keyword-based search without embeddings
results = vector_store.text_query(
    query_str="VelesDB performance",
    similarity_top_k=5,
)
```
### Cross-Collection MATCH

Use the `velesql()` method with the `_collection` parameter to run MATCH queries that enrich results with data from other collections. Nodes annotated with `@collection` in the MATCH pattern have their payloads looked up from the named collection after traversal.

```python
results = vector_store.velesql(
    "MATCH (p:Product)-[:STORED_IN]->(inv:Inventory@inventory) "
    "RETURN p.name, inv.price, inv.stock LIMIT 20",
    params={"_collection": "catalog_graph"},
)

for row in results:
    print(row["p.name"], row["inv.price"])
```
## Performance

Measured on CI runners (ubuntu-latest, 2-core); local hardware will be faster. Source: `benchmarks/baseline.json`.
| Operation | Latency | Notes |
|---|---|---|
| Search (10K / 128D, k=10) | ~0.34 ms | HNSW + SIMD cosine |
| Hybrid (vector + filter) | ~0.27 ms | Filtered vector search |
| Batch insert (10K / 128D) | ~9 s total | Sequential HNSW build |
## Comparison with Other Stores
| Feature | VelesDB | Chroma | Pinecone |
|---|---|---|---|
| Deployment | Local binary | Docker | Cloud |
| Cost | Free | Free | $$$ |
| Offline | ✅ | ✅ | ❌ |
## License

MIT License (this integration).

VelesDB Core and Server are licensed under the VelesDB Core License 1.0 (source-available). See the root LICENSE for details.