Encrypted Vector Database for Secure and Fast ANN Searches
VectorX LlamaIndex Integration
This package provides an integration between VectorX (an encrypted vector database) and LlamaIndex, allowing you to use VectorX as a vector store backend for LlamaIndex.
Features
- Encrypted Vector Storage: Use VectorX's client-side encryption for your LlamaIndex embeddings
- Multiple Distance Metrics: Support for cosine, L2, and inner product distance metrics
- Metadata Filtering: Filter search results based on metadata
- High Performance: Optimized for speed and efficiency with encrypted data
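The three metrics can rank the same candidates differently. A small, dependency-free sketch (illustrative only, not part of the VectorX API) shows how each score is computed and why the choice matters:

```python
import math

def cosine_sim(a, b):
    # Cosine similarity: dot product of the two vectors, normalized by their lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def l2_dist(a, b):
    # Euclidean (L2) distance: smaller means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    # Inner product: larger means more similar; sensitive to vector magnitude
    return sum(x * y for x, y in zip(a, b))

query = [1.0, 0.0]
doc_a = [0.9, 0.1]   # close in direction and magnitude
doc_b = [10.0, 0.0]  # same direction, much larger magnitude

# Cosine ignores magnitude, so doc_b (perfectly aligned) scores highest
print(cosine_sim(query, doc_b) > cosine_sim(query, doc_a))          # True
# Inner product rewards magnitude
print(inner_product(query, doc_b) > inner_product(query, doc_a))    # True
# L2 penalizes magnitude differences, so doc_a is the nearer neighbor
print(l2_dist(query, doc_a) < l2_dist(query, doc_b))                # True
```

In short: pick cosine when only direction matters (typical for normalized text embeddings), L2 when absolute distance matters, and inner product when magnitude should boost relevance.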
Installation
```shell
pip install vecx-llamaindex
```
This installs the vecx-llamaindex package along with its dependencies (vecx and llama-index).
Quick Start
```python
import os

from llama_index.core.schema import TextNode
from llama_index.core.vector_stores.types import VectorStoreQuery
from vecx_llamaindex import VectorXVectorStore

# Configure your VectorX credentials
api_token = os.environ.get("VECTORX_API_TOKEN")
encryption_key = os.environ.get("VECTORX_ENCRYPTION_KEY")  # or generate a new one
index_name = "my_llamaindex_vectors"
dimension = 1536  # OpenAI ada-002 embedding dimension

# Initialize the vector store
vector_store = VectorXVectorStore.from_params(
    api_token=api_token,
    encryption_key=encryption_key,
    index_name=index_name,
    dimension=dimension,
    space_type="cosine",
)

# Create a node with an embedding
node = TextNode(
    text="This is a sample document",
    id_="doc1",
    embedding=[0.1, 0.2, 0.3, ...],  # Your embedding vector
    metadata={
        "doc_id": "doc1",
        "source": "example",
        "author": "VectorX",
    },
)

# Add the node to the vector store
vector_store.add([node])

# Query the vector store
query = VectorStoreQuery(
    query_embedding=[0.2, 0.3, 0.4, ...],  # Your query vector
    similarity_top_k=5,
)
results = vector_store.query(query)

# Process results
for node, score in zip(results.nodes, results.similarities):
    print(f"Node ID: {node.node_id}, Similarity: {score}")
    print(f"Text: {node.text}")
    print(f"Metadata: {node.metadata}")
```
Using with LlamaIndex
```python
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.embeddings.openai import OpenAIEmbedding
from vecx_llamaindex import VectorXVectorStore

# Initialize your nodes or documents
nodes = [...]  # Your nodes with text but no embeddings yet

# Set up the embedding model
embed_model = OpenAIEmbedding()  # Or any other embedding model

# Initialize the VectorX vector store
vector_store = VectorXVectorStore.from_params(
    api_token=api_token,
    encryption_key=encryption_key,
    index_name=index_name,
    dimension=1536,  # Must match your embedding model's output dimension
)

# Create a storage context backed by VectorX
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Build the vector index (embeddings are computed and stored in VectorX)
index = VectorStoreIndex(
    nodes,
    storage_context=storage_context,
    embed_model=embed_model,
)

# Query the index
query_engine = index.as_query_engine()
response = query_engine.query("Your query here")
print(response)
```
Configuration Options
The VectorXVectorStore constructor accepts the following parameters:
- api_token: Your VectorX API token
- encryption_key: Your encryption key for the index
- index_name: Name of the VectorX index
- dimension: Vector dimension (required when creating a new index)
- space_type: Distance metric, one of "cosine", "l2", or "ip" (default: "cosine")
- batch_size: Number of vectors to insert in a single API call (default: 100)
- text_key: Key to use for storing text in metadata (default: "text")
- remove_text_from_metadata: Whether to remove text from metadata (default: False)
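Taken together, a fully specified call might look like the following configuration sketch (values are illustrative; the defaults shown are those listed above, and `api_token` / `encryption_key` are assumed to hold your credentials):

```python
vector_store = VectorXVectorStore.from_params(
    api_token=api_token,              # VectorX API token
    encryption_key=encryption_key,    # client-side encryption key for this index
    index_name="my_llamaindex_vectors",
    dimension=1536,                   # required when the index does not exist yet
    space_type="cosine",              # "cosine", "l2", or "ip"
    batch_size=100,                   # vectors per insert API call
    text_key="text",                  # metadata key that stores node text
    remove_text_from_metadata=False,  # keep text alongside other metadata
)
```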
Project details
File details
Details for the file vecx_llamaindex-0.1.2.tar.gz.
File metadata
- Download URL: vecx_llamaindex-0.1.2.tar.gz
- Upload date:
- Size: 6.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | f2ec0ef886427d40fa5526c211de43cd89b6d69386be5664189a209e3ef7e8c3 |
| MD5 | 35377b42fa809db6754ef981b419b481 |
| BLAKE2b-256 | b686c55450a341eb1b6ef20ab4d890075447f700075a902b4951b01dad4b6e44 |
File details
Details for the file vecx_llamaindex-0.1.2-py3-none-any.whl.
File metadata
- Download URL: vecx_llamaindex-0.1.2-py3-none-any.whl
- Upload date:
- Size: 6.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | e5ded91e116686fb20fb56cee1cc2377053744d56b85d354f187ea48a5d5051f |
| MD5 | 02ed818bb583acfb88d6497eacfe567f |
| BLAKE2b-256 | 70cc02a0e03ec88d35924cfb562a791852e1a45d9148bc0cc0dcdd5c9960c2c6 |