# vector-dataloader: Embedding Loader Package

Production-grade embedding loader for CSV data into vector stores, with multiple embedding providers.
vector-dataloader is a robust, extensible Python library for loading CSV data from S3 or local files into vector stores (Postgres, FAISS, Chroma) with embedding generation. It supports multiple embedding providers (AWS Bedrock, Google Gemini, Sentence-Transformers, OpenAI) and two embedding modes:
- Combined: Concatenated text with a single embedding.
- Separated: Individual embeddings per column.
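The difference between the two modes can be sketched with a toy embedding function. This is illustrative only: `toy_embed` and the shapes below are stand-ins, not the library's internals.

```python
# Illustrative sketch of the two embedding modes; toy_embed is a stand-in
# for a real provider and is NOT part of the vector-dataloader API.

def toy_embed(text: str) -> list[float]:
    # Deterministic 2-dim "embedding": character count and vowel count.
    return [float(len(text)), float(sum(c in "aeiou" for c in text.lower()))]

row = {"name": "Widget", "description": "A small part"}
columns = ["name", "description"]

# Combined mode: concatenate the selected columns, embed once.
combined_text = " ".join(row[c] for c in columns)
combined_embedding = toy_embed(combined_text)  # one vector per row

# Separated mode: embed each column on its own.
separated_embeddings = {c: toy_embed(row[c]) for c in columns}  # one vector per column

print(combined_embedding)
print(separated_embeddings)
```

Combined mode trades per-column granularity for a single, cheaper vector; separated mode lets you search each column independently.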
## 🚀 Features
- Data Loading: From S3 or local CSV files.
- Embedding Generation: Combined or separated modes.
- Embedding Providers: AWS Bedrock, Google Gemini, Sentence-Transformers, OpenAI.
- Vector Stores: Postgres (with pgvector), FAISS (in-memory), Chroma (persistent).
- Update Support: Detects new/updated/removed rows, handles soft deletes.
- Scalability: Batch operations, retries, connection pooling.
- Extensibility: Plugin-style for providers and stores.
- Validation: Schema, type, null checks.
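The update support described above boils down to diffing incoming rows against existing rows by primary key. The following is a minimal sketch of that idea, not the library's actual implementation:

```python
# Sketch: classify incoming rows as new / updated / removed by primary key.
# Mirrors the behavior described above, but is NOT vector-dataloader's code.

def diff_rows(existing: dict, incoming: dict):
    new = [k for k in incoming if k not in existing]
    updated = [k for k in incoming if k in existing and incoming[k] != existing[k]]
    removed = [k for k in existing if k not in incoming]  # candidates for soft delete
    return new, updated, removed

existing = {1: {"name": "a"}, 2: {"name": "b"}}
incoming = {2: {"name": "b2"}, 3: {"name": "c"}}

new, updated, removed = diff_rows(existing, incoming)
print(new, updated, removed)
```

With soft deletes, "removed" rows would be flagged rather than physically dropped from the store.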
---

## Setup Instructions

To work from a clone of this repo:

```bash
git clone <repo-url>
cd DataLoader
uv venv
.venv\Scripts\activate        # On Windows
# source .venv/bin/activate   # On Linux/macOS
uv pip install -r requirements.txt
uv pip install -e .[all,dev]
uv run main_local.py
# or
uv run main.py
```
## 📦 Installation

Install via pip or uv:

```bash
pip install vector-dataloader
# or
uv add vector-dataloader
```

Install optional dependencies for specific providers/stores:

```bash
pip install "vector-dataloader[chroma,gemini]"
# or
uv add "vector-dataloader[chroma,gemini]"
```

Available extras: `gemini`, `sentence-transformers`, `openai`, `faiss`, `chroma`, `all`.
## ⚙️ Usage

Below are example scripts for different combinations of vector stores and embedding providers. Save each as a separate file (e.g., `main_chroma_gemini.py`) and run it with `uv run <filename>.py` or `python <filename>.py`.

### Chroma with Gemini
```python
import asyncio

from dataload.infrastructure.vector_stores.chroma_store import ChromaVectorStore
from dataload.infrastructure.storage.loaders import LocalLoader
from dataload.application.services.embedding.gemini_provider import GeminiEmbeddingProvider
from dataload.application.use_cases.data_loader_use_case import dataloadUseCase

async def main():
    repo = ChromaVectorStore(mode='persistent', path='./my_chroma_db')
    embedding = GeminiEmbeddingProvider()
    loader = LocalLoader()
    use_case = dataloadUseCase(repo, embedding, loader)
    await use_case.execute(
        'data_to_load/sample.csv',
        'test_table',
        ['name', 'description'],
        ['id'],
        create_table_if_not_exists=True,
        embed_type='separated'
    )

if __name__ == '__main__':
    asyncio.run(main())
```
### Chroma with Sentence-Transformers

```python
import asyncio

from dataload.infrastructure.vector_stores.chroma_store import ChromaVectorStore
from dataload.infrastructure.storage.loaders import LocalLoader
from dataload.application.services.embedding.sentence_transformers_provider import SentenceTransformersProvider
from dataload.application.use_cases.data_loader_use_case import dataloadUseCase

async def main():
    repo = ChromaVectorStore(mode='persistent', path='./my_chroma_db')
    embedding = SentenceTransformersProvider()
    loader = LocalLoader()
    use_case = dataloadUseCase(repo, embedding, loader)
    await use_case.execute(
        'data_to_load/sample.csv',
        'test_table',
        ['name', 'description'],
        ['id'],
        create_table_if_not_exists=True,
        embed_type='separated'
    )

if __name__ == '__main__':
    asyncio.run(main())
```
### FAISS with Gemini

```python
import asyncio

from dataload.infrastructure.vector_stores.faiss_store import FaissVectorStore
from dataload.infrastructure.storage.loaders import LocalLoader
from dataload.application.services.embedding.gemini_provider import GeminiEmbeddingProvider
from dataload.application.use_cases.data_loader_use_case import dataloadUseCase

async def main():
    repo = FaissVectorStore()
    embedding = GeminiEmbeddingProvider()
    loader = LocalLoader()
    use_case = dataloadUseCase(repo, embedding, loader)
    await use_case.execute(
        'data_to_load/sample.csv',
        'test_table',
        ['name', 'description'],
        ['id'],
        create_table_if_not_exists=True,
        embed_type='separated'
    )

if __name__ == '__main__':
    asyncio.run(main())
```
### FAISS with Sentence-Transformers

```python
import asyncio

from dataload.infrastructure.vector_stores.faiss_store import FaissVectorStore
from dataload.infrastructure.storage.loaders import LocalLoader
from dataload.application.services.embedding.sentence_transformers_provider import SentenceTransformersProvider
from dataload.application.use_cases.data_loader_use_case import dataloadUseCase

async def main():
    repo = FaissVectorStore()
    embedding = SentenceTransformersProvider()
    loader = LocalLoader()
    use_case = dataloadUseCase(repo, embedding, loader)
    await use_case.execute(
        'data_to_load/sample.csv',
        'test_table',
        ['name', 'description'],
        ['id'],
        create_table_if_not_exists=True,
        embed_type='separated'
    )

if __name__ == '__main__':
    asyncio.run(main())
```
### Postgres with Gemini

```python
import asyncio

from dataload.infrastructure.db.db_connection import DBConnection
from dataload.infrastructure.db.data_repository import PostgresDataRepository
from dataload.infrastructure.storage.loaders import LocalLoader
from dataload.application.services.embedding.gemini_provider import GeminiEmbeddingProvider
from dataload.application.use_cases.data_loader_use_case import dataloadUseCase

async def main():
    db_conn = DBConnection()
    await db_conn.initialize()
    repo = PostgresDataRepository(db_conn)
    embedding = GeminiEmbeddingProvider()
    loader = LocalLoader()
    use_case = dataloadUseCase(repo, embedding, loader)
    await use_case.execute(
        'data_to_load/sample.csv',
        'test_table',
        ['name', 'description'],
        ['id'],
        create_table_if_not_exists=True,
        embed_type='separated'
    )

if __name__ == '__main__':
    asyncio.run(main())
```
### Postgres with Sentence-Transformers

```python
import asyncio

from dataload.infrastructure.db.db_connection import DBConnection
from dataload.infrastructure.db.data_repository import PostgresDataRepository
from dataload.infrastructure.storage.loaders import LocalLoader
from dataload.application.services.embedding.sentence_transformers_provider import SentenceTransformersProvider
from dataload.application.use_cases.data_loader_use_case import dataloadUseCase

async def main():
    db_conn = DBConnection()
    await db_conn.initialize()
    repo = PostgresDataRepository(db_conn)
    embedding = SentenceTransformersProvider()
    loader = LocalLoader()
    use_case = dataloadUseCase(repo, embedding, loader)
    await use_case.execute(
        'data_to_load/sample.csv',
        'test_table',
        ['name', 'description'],
        ['id'],
        create_table_if_not_exists=True,
        embed_type='separated'
    )

if __name__ == '__main__':
    asyncio.run(main())
```
## ⚙️ Configuring Environment Variables

dataload reads its configuration from environment variables, loaded from a `.env` file or the system environment.

### Example `.env`

```env
# Google Gemini API key
GOOGLE_API_KEY=your_google_api_key_here

# Local Postgres DB config
LOCAL_POSTGRES_HOST=localhost
LOCAL_POSTGRES_PORT=5432
LOCAL_POSTGRES_DB=your_db_name
LOCAL_POSTGRES_USER=postgres
LOCAL_POSTGRES_PASSWORD=your_password

# Optional AWS config (for Bedrock/S3)
AWS_REGION=ap-southeast-1
SECRET_NAME=your_secret_name
```
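For intuition, a `.env` file is just `KEY=value` lines; loaders such as python-dotenv parse it roughly like the sketch below (illustrative only, skipping quoting and escaping rules; `parse_env` is not part of the library):

```python
# Minimal .env parser sketch. Real loaders (e.g. python-dotenv) handle
# quoting, export prefixes, and escapes; this only covers the basic form.

def parse_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# Google Gemini API key
GOOGLE_API_KEY=your_google_api_key_here
LOCAL_POSTGRES_PORT=5432
"""
cfg = parse_env(sample)
print(cfg)
```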
### Notes

- The `.env` file should be at the project root.
- For AWS (Bedrock/S3), set `use_aws=True` in `DBConnection` to use AWS Secrets Manager.
- Ensure `data_to_load/sample.csv` exists with columns `id`, `name`, `description`.
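If you don't have a sample file yet, a matching one can be generated like this (the row contents are just example data):

```python
import csv
from pathlib import Path

# Create a small sample CSV with the columns the usage examples expect.
path = Path("data_to_load/sample.csv")
path.parent.mkdir(parents=True, exist_ok=True)

with path.open("w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "description"])
    writer.writerow([1, "Widget", "A small mechanical part"])
    writer.writerow([2, "Gadget", "An electronic helper"])
```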
## 📚 License
MIT License
Copyright (c) 2025 Shashwat Roy
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
## File details

### Source distribution: `vector_dataloader-1.2.0.tar.gz`

- Size: 28.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b7e5924e74ed7db6300cb2e36960c9db741fc1d79c8dc843085173b60d37e658` |
| MD5 | `93946e8efefe85c1f9efc0a4e0aba766` |
| BLAKE2b-256 | `17f8d79f846d9092f81cd0cae9c837845deb7d36ad5bceb7b382622452cae512` |

### Built distribution: `vector_dataloader-1.2.0-py3-none-any.whl`

- Size: 35.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.0

| Algorithm | Hash digest |
|---|---|
| SHA256 | `10910cd96111dd01c35e9e9e6e214342e20ecab1e274aa3c45f7f74a7b2451fc` |
| MD5 | `92fc0b1e7a2443352d2cd770f4780480` |
| BLAKE2b-256 | `c6bf5838106ceaaf9dd8c63d158ab3860da5ed0955d46a7341112ae08f64f1c5` |