Serverless vector database for big data
VectorLake
VectorLake is a robust vector database designed for low maintenance and cost, with efficient storage and querying of vector data of any size distributed across S3 files.
🏷 Features
- Inspired by the article "Which Vector Database Should I Use? A Comparison Cheatsheet".
- Native Big Data Support: built to handle large datasets, making it ideal for big data projects.
- Vector Data Handling: stores and queries high-dimensional vectors, commonly used for embedding storage in machine learning projects.
- Efficient Search: efficient nearest-neighbor search for finding similar vectors in high-dimensional spaces.
- Data Persistence: persists data to disk, network volumes, and S3, enabling long-term storage and retrieval of indexed data.
- Customizable Partitioning: a deliberate trade-off design that minimizes database maintenance and cost while supporting custom data partitioning strategies.
- Native support for LLM agents.
- Feature store for experimental data.
📦 Installation
To get started with VectorLake, simply install the package using pip:
pip install vector_lake
⛓️ Quick Start
import numpy as np

from vector_lake import VectorLake

# Initialize a VectorLake backed by S3, storing 5-dimensional vectors
# across roughly 243 shards.
db = VectorLake(location="s3://vector-lake", dimension=5, approx_shards=243)

N = 100  # number of example vectors
D = 5    # dimensionality of each vector
embeddings = np.random.rand(N, D)

for em in embeddings:
    db.add(em, metadata={}, document="some document")
db.persist()

# Re-initialize from the same location to verify persistence.
db = VectorLake(location="s3://vector-lake", dimension=5, approx_shards=243)
db.query([0.56325391, 0.1500543, 0.88579166, 0.73536349, 0.7719873])
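The Data Persistence feature also mentions disk and network volumes. Assuming the same constructor accepts a local filesystem path for `location` (an assumption, not something confirmed on this page), a local setup might look like the sketch below.

```python
import numpy as np

from vector_lake import VectorLake

# Assumption: `location` may also point at a local directory instead of S3.
db = VectorLake(location="./vector-lake-data", dimension=5, approx_shards=243)

embeddings = np.random.rand(10, 5)
for em in embeddings:
    db.add(em, metadata={"source": "local-demo"}, document="some document")
db.persist()

# Querying works the same way as with the S3-backed store.
db.query([0.56325391, 0.1500543, 0.88579166, 0.73536349, 0.7719873])
```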
Custom feature partition
Use a custom partition to group features by category:
import numpy as np

from vector_lake.core.index import Partition

if __name__ == "__main__":
    # Partition groups vectors under a custom partition key, here "feature".
    db = Partition(location="s3://vector-lake", partition_key="feature", dimension=5)

    N = 100  # number of example vectors
    D = 5    # dimensionality of each vector
    embeddings = np.random.rand(N, D)

    for em in embeddings:
        db.add(em, metadata={}, document="some document")
    db.persist()

    # Re-initialize from the same location to verify persistence.
    db = Partition(location="s3://vector-lake", partition_key="feature", dimension=5)
    db.buckets  # inspect the partition's buckets
    db.query([0.56325391, 0.1500543, 0.88579166, 0.73536349, 0.7719873])
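Assuming `partition_key` namespaces vectors within the same storage location (an assumption based on the example above), separate partitions can keep different feature categories apart. The category names below are purely illustrative.

```python
import numpy as np

from vector_lake.core.index import Partition

# Assumption: each partition_key keeps its vectors separate within the same
# location, so categories can be written and queried independently.
prices = Partition(location="s3://vector-lake", partition_key="price_features", dimension=5)
clicks = Partition(location="s3://vector-lake", partition_key="click_features", dimension=5)

for em in np.random.rand(50, 5):
    prices.add(em, metadata={"category": "price"}, document="price feature")
for em in np.random.rand(50, 5):
    clicks.add(em, metadata={"category": "click"}, document="click feature")

prices.persist()
clicks.persist()

# Queries run against one partition at a time.
prices.query([0.56325391, 0.1500543, 0.88579166, 0.73536349, 0.7719873])
```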
Why VectorLake?
VectorLake provides the functionality of a simple, resilient vector database with minimal setup and low operational overhead, giving you a lightweight, reliable distributed vector store.
VectorLake leverages Hierarchical Navigable Small World (HNSW) graphs to partition data across vector shards, so every modification to the system stays aligned with vector distance. You can learn more about the design here.
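To illustrate the general idea of distance-aligned sharding (this is not VectorLake's actual internal code), the sketch below routes each incoming vector to the shard whose centroid is nearest; the shard count mirrors the `approx_shards` parameter from the Quick Start.

```python
import numpy as np

# Illustrative sketch only: distance-aligned shard routing, not library internals.
rng = np.random.default_rng(0)

approx_shards = 243   # mirrors the approx_shards parameter used above
dimension = 5
centroids = rng.random((approx_shards, dimension))  # stand-in shard centroids

def shard_for(vector: np.ndarray) -> int:
    """Return the index of the shard whose centroid is nearest to `vector`."""
    distances = np.linalg.norm(centroids - vector, axis=1)
    return int(np.argmin(distances))

vector = rng.random(dimension)
print(f"vector routed to shard {shard_for(vector)}")
```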
Limitations
TBD
🛠️ Roadmap
👋 Contributing
Contributions to VectorLake are welcome! If you'd like to contribute, please follow these steps:
- Fork the repository on GitHub
- Create a new branch for your changes
- Commit your changes to the new branch
- Push your changes to the forked repository
- Open a pull request to the main VectorLake repository
Before contributing, please read the contributing guidelines.
License
VectorLake is released under the MIT License.