Enterprise-grade vector database library for AI applications with ChromaDB, multi-modal support, and cloud integration
AI Prishtina VectorDB
🚀 Overview
AI Prishtina VectorDB is a comprehensive, production-ready Python library for building sophisticated vector database applications. Built on top of ChromaDB, it provides enterprise-grade features for semantic search, document processing, and AI-powered data management.
✨ Key Features
- 🔍 Advanced Vector Search: Semantic similarity search with multiple embedding models
- 📊 Multi-Modal Data Support: Text, images, audio, video, and documents
- ☁️ Cloud-Native: Native integration with AWS S3, Google Cloud, Azure, and MinIO
- 🔄 Streaming Processing: Efficient batch processing and real-time data streaming
- 🎯 Feature Extraction: Advanced text, image, and audio feature extraction
- 📈 Performance Monitoring: Built-in metrics collection and performance tracking
- 🐳 Docker Ready: Complete containerization support with Docker Compose
- 🔧 Extensible Architecture: Plugin-based system for custom embeddings and processors
📦 Installation
Quick Install
pip install ai-prishtina-vectordb
Development Install
git clone https://github.com/albanmaxhuni/ai-prishtina-chromadb-client.git
cd ai-prishtina-chromadb-client
pip install -e .
Docker Install
docker-compose up -d
🏃‍♂️ Quick Start
Basic Vector Search
```python
import asyncio

from ai_prishtina_vectordb import Database, DataSource

async def main():
    # Initialize database
    db = Database(collection_name="my_documents")

    # Load and add documents
    data_source = DataSource()
    data = await data_source.load_data(
        source="documents.csv",
        text_column="content",
        metadata_columns=["title", "author", "date"]
    )
    await db.add(
        documents=data["documents"],
        metadatas=data["metadatas"],
        ids=data["ids"]
    )

    # Perform semantic search
    results = await db.query(
        query_texts=["machine learning algorithms"],
        n_results=5
    )
    print(f"Found {len(results['documents'][0])} relevant documents")

asyncio.run(main())
```
Advanced Feature Extraction
```python
import asyncio

from ai_prishtina_vectordb.features import FeatureExtractor, FeatureConfig

async def main():
    # Configure feature extraction
    config = FeatureConfig(
        embedding_function="all-MiniLM-L6-v2",
        dimensionality_reduction=128,
        feature_scaling=True
    )

    # Extract features
    extractor = FeatureExtractor(config)
    features = await extractor.extract_text_features(
        "Advanced machine learning with neural networks"
    )

asyncio.run(main())
```
📚 Comprehensive Examples
1. Multi-Modal Document Processing
```python
import asyncio

from ai_prishtina_vectordb import Database, DataSource, EmbeddingModel
from ai_prishtina_vectordb.features import TextFeatureExtractor, ImageFeatureExtractor

async def process_multimodal_documents():
    # Initialize components
    db = Database(collection_name="multimodal_docs")
    data_source = DataSource()

    # Process text documents
    text_data = await data_source.load_data(
        source="research_papers.pdf",
        text_column="content",
        metadata_columns=["title", "authors", "year"]
    )

    # Process images
    image_data = await data_source.load_data(
        source="images/",
        source_type="image",
        metadata_columns=["filename", "category"]
    )

    # Add to database
    await db.add(
        documents=text_data["documents"] + image_data["documents"],
        metadatas=text_data["metadatas"] + image_data["metadatas"],
        ids=text_data["ids"] + image_data["ids"]
    )

    # Semantic search across modalities
    results = await db.query(
        query_texts=["neural network architecture"],
        n_results=10
    )
    return results

# Run the example
results = asyncio.run(process_multimodal_documents())
```
2. Cloud Storage Integration
```python
import os

from ai_prishtina_vectordb import DataSource

async def process_cloud_data():
    data_source = DataSource()

    # AWS S3 Integration
    s3_data = await data_source.load_data(
        source="s3://my-bucket/documents/",
        text_column="content",
        metadata_columns=["source", "timestamp"],
        aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
        aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY")
    )

    # Google Cloud Storage
    gcs_data = await data_source.load_data(
        source="gs://my-bucket/data/",
        text_column="text",
        metadata_columns=["category", "date"]
    )

    # Azure Blob Storage
    azure_data = await data_source.load_data(
        source="azure://container/path/",
        text_column="content",
        metadata_columns=["type", "version"]
    )

    return s3_data, gcs_data, azure_data
```
3. Real-time Data Streaming
```python
from ai_prishtina_vectordb import Database, DataSource
from ai_prishtina_vectordb.metrics import MetricsCollector

async def stream_processing_pipeline():
    db = Database(collection_name="streaming_data")
    data_source = DataSource()
    metrics = MetricsCollector()

    # Stream data in batches
    async for batch in data_source.stream_data(
        source="large_dataset.csv",
        batch_size=1000,
        text_column="content",
        metadata_columns=["category", "timestamp"]
    ):
        # Process batch
        start_time = metrics.start_timer("batch_processing")
        await db.add(
            documents=batch["documents"],
            metadatas=batch["metadatas"],
            ids=batch["ids"]
        )
        processing_time = metrics.end_timer("batch_processing", start_time)
        print(f"Processed batch of {len(batch['documents'])} documents in {processing_time:.2f}s")

        # Real-time analytics
        if len(batch["documents"]) > 0:
            sample_query = batch["documents"][0][:100]  # First 100 chars
            results = await db.query(query_texts=[sample_query], n_results=5)
            print(f"Found {len(results['documents'][0])} similar documents")
```
4. Custom Embedding Models
```python
import torch

from ai_prishtina_vectordb import EmbeddingModel, Database

async def custom_embeddings_example():
    # Initialize custom embedding model
    embedding_model = EmbeddingModel(
        model_name="sentence-transformers/all-mpnet-base-v2",
        device="cuda" if torch.cuda.is_available() else "cpu"
    )

    # Generate embeddings
    texts = [
        "Machine learning is transforming industries",
        "Deep learning models require large datasets",
        "Natural language processing enables text understanding"
    ]
    embeddings = await embedding_model.encode(texts, batch_size=32)

    # Use with database
    db = Database(collection_name="custom_embeddings")
    await db.add(
        embeddings=embeddings,
        documents=texts,
        metadatas=[{"source": "example", "index": i} for i in range(len(texts))],
        ids=[f"doc_{i}" for i in range(len(texts))]
    )
    return embeddings
```
🔧 Advanced Configuration
Database Configuration
```python
from ai_prishtina_vectordb import Database, DatabaseConfig

# Advanced database configuration
config = DatabaseConfig(
    persist_directory="./vector_db",
    collection_name="advanced_collection",
    embedding_function="all-MiniLM-L6-v2",
    distance_metric="cosine",
    index_params={
        "hnsw_space": "cosine",
        "hnsw_construction_ef": 200,
        "hnsw_m": 16
    }
)

db = Database(config=config)
```
Feature Extraction Configuration
```python
from ai_prishtina_vectordb.features import FeatureConfig, FeatureProcessor

config = FeatureConfig(
    normalize=True,
    dimensionality_reduction=256,
    feature_scaling=True,
    cache_features=True,
    batch_size=64,
    device="cuda",
    embedding_function="sentence-transformers/all-mpnet-base-v2"
)

processor = FeatureProcessor(config)
```
🐳 Docker Deployment
Quick Start with Docker Compose
```yaml
# docker-compose.yml
version: '3.8'

services:
  chromadb:
    image: chromadb/chroma:latest
    ports:
      - "8000:8000"
    volumes:
      - chroma_data:/chroma/chroma

  ai-prishtina-vectordb:
    build: .
    depends_on:
      - chromadb
    environment:
      - CHROMA_HOST=chromadb
      - CHROMA_PORT=8000
    volumes:
      - ./data:/app/data
      - ./logs:/app/logs

volumes:
  chroma_data:
```
```bash
# Start the services
docker-compose up -d

# Run tests
docker-compose run ai-prishtina-vectordb python -m pytest

# Run examples
docker-compose run ai-prishtina-vectordb python examples/basic_text_search.py
```
📊 Performance & Monitoring
Built-in Metrics Collection
```python
from ai_prishtina_vectordb.metrics import MetricsCollector, PerformanceMonitor

# Initialize metrics (snippet assumes an async context and a `db` from the Quick Start)
metrics = MetricsCollector()
monitor = PerformanceMonitor()

# Track operations
start_time = metrics.start_timer("database_query")
results = await db.query(query_texts=["example"], n_results=10)
query_time = metrics.end_timer("database_query", start_time)

# Performance monitoring
monitor.track_memory_usage()
monitor.track_cpu_usage()

# Get performance report
report = monitor.get_performance_report()
print(f"Query time: {query_time:.4f}s")
print(f"Memory usage: {report['memory_usage']:.2f}MB")
```
Logging Configuration
```python
from ai_prishtina_vectordb.logger import AIPrishtinaLogger

# Configure logging
logger = AIPrishtinaLogger(
    name="my_application",
    level="INFO",
    log_file="logs/app.log",
    log_format="json"  # or "standard"
)

# Logging methods are async
await logger.info("Application started")
await logger.debug("Processing batch of documents")
await logger.error("Failed to process document", extra={"doc_id": "123"})
```
🧪 Testing
Running Tests
```bash
# Run all tests
./run_tests.sh

# Run specific test categories
python -m pytest tests/test_database.py -v
python -m pytest tests/test_features.py -v
python -m pytest tests/test_integration.py -v

# Run with coverage
python -m pytest --cov=ai_prishtina_vectordb --cov-report=html

# Run performance tests
python -m pytest tests/test_integration.py::TestPerformanceIntegration -v
```
Docker-based Testing
```bash
# Run tests in Docker
docker-compose -f docker-compose.yml run test-runner

# Run integration tests
docker-compose -f docker-compose.yml run integration-tests

# Run with ChromaDB service
docker-compose up chromadb -d
docker-compose run ai-prishtina-vectordb python -m pytest tests/test_integration.py
```
📖 API Reference
Core Classes
| Class | Description | Key Methods |
|---|---|---|
| `Database` | Main vector database interface | `add()`, `query()`, `delete()`, `update()` |
| `DataSource` | Data loading and processing | `load_data()`, `stream_data()` |
| `EmbeddingModel` | Text embedding generation | `encode()`, `encode_batch()` |
| `FeatureExtractor` | Multi-modal feature extraction | `extract_text_features()`, `extract_image_features()` |
| `ChromaFeatures` | Advanced ChromaDB operations | `create_collection()`, `backup_collection()` |
Supported Data Sources
- Files: CSV, JSON, Excel, PDF, Word, Text, Images, Audio, Video
- Cloud Storage: AWS S3, Google Cloud Storage, Azure Blob, MinIO
- Databases: SQL databases via connection strings
- Streaming: Real-time data streams and batch processing
- APIs: REST APIs and web scraping
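Whatever the source, the streaming path above boils down to cutting a large input into fixed-size batches before insertion. A minimal, library-independent sketch of that batching logic (the `batch_iter` helper is illustrative, not part of the package API):

```python
from itertools import islice
from typing import Iterable, Iterator, List

def batch_iter(items: Iterable[str], batch_size: int) -> Iterator[List[str]]:
    """Yield successive fixed-size batches from any iterable."""
    it = iter(items)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Example: 2,500 documents processed in batches of 1,000
docs = [f"doc {i}" for i in range(2500)]
sizes = [len(b) for b in batch_iter(docs, 1000)]
print(sizes)  # [1000, 1000, 500]
```

Because `batch_iter` consumes any iterable lazily, the same loop works for in-memory lists and for row-by-row readers over files too large to load at once.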
Embedding Models
- Sentence Transformers: 400+ pre-trained models
- OpenAI: GPT-3.5, GPT-4 embeddings (API key required)
- Hugging Face: Transformer-based models
- Custom Models: Plugin architecture for custom embeddings
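Whichever model produces the vectors, semantic search ultimately ranks documents by a similarity between embeddings. A self-contained sketch of cosine similarity, the default metric in the configuration examples above (toy 3-dimensional vectors stand in for real embeddings):

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0, 1.0]
docs = {
    "doc_a": [0.9, 0.1, 0.8],  # points roughly the same way as the query
    "doc_b": [0.0, 1.0, 0.0],  # orthogonal to the query
}
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```

Real embedding models emit vectors with hundreds of dimensions, but the ranking principle is exactly this: sort candidates by similarity to the query vector and return the top `n_results`.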
🚀 Production Deployment
Environment Variables
```bash
# Core Configuration
CHROMA_HOST=localhost
CHROMA_PORT=8000
PERSIST_DIRECTORY=/data/vectordb

# Cloud Storage
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
AZURE_STORAGE_CONNECTION_STRING=your_connection_string

# Performance
MAX_BATCH_SIZE=1000
EMBEDDING_CACHE_SIZE=10000
LOG_LEVEL=INFO
```
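One way such variables might be consumed at application startup, using only the standard library (the `Settings` dataclass and defaults are illustrative, not part of the package):

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    chroma_host: str
    chroma_port: int
    max_batch_size: int
    log_level: str

def load_settings() -> Settings:
    """Read configuration from the environment, falling back to defaults."""
    return Settings(
        chroma_host=os.getenv("CHROMA_HOST", "localhost"),
        chroma_port=int(os.getenv("CHROMA_PORT", "8000")),
        max_batch_size=int(os.getenv("MAX_BATCH_SIZE", "1000")),
        log_level=os.getenv("LOG_LEVEL", "INFO"),
    )

settings = load_settings()
print(settings.chroma_host, settings.chroma_port)
```

Centralizing the reads in one function keeps the rest of the code ignorant of where configuration comes from, which makes the container and bare-metal deployments identical apart from the environment.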
Scaling Considerations
- Horizontal Scaling: Use multiple ChromaDB instances with load balancing
- Vertical Scaling: Optimize memory and CPU for large datasets
- Caching: Redis integration for embedding and query caching
- Monitoring: Prometheus metrics and Grafana dashboards
🤝 Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Setup
```bash
# Clone repository
git clone https://github.com/albanmaxhuni/ai-prishtina-chromadb-client.git
cd ai-prishtina-chromadb-client

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -r requirements.txt
pip install -r requirements-test.txt
pip install -e .

# Run tests
./run_tests.sh
```
Code Quality
```bash
# Format code
black src/ tests/
isort src/ tests/

# Lint code
flake8 src/ tests/
mypy src/

# Run security checks
bandit -r src/
```
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🆘 Support
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📧 Email: info@albanmaxhuni.com
🗺️ Roadmap
Version 0.2.0 (Q2 2024)
- Multi-modal search capabilities
- Advanced caching strategies
- Performance optimizations
- Enhanced monitoring and metrics
Version 0.3.0 (Q3 2024)
- Distributed deployment support
- Advanced query language
- Real-time collaboration features
- Enhanced security features
Version 1.0.0 (Q4 2024)
- Production-ready enterprise features
- Advanced analytics and reporting
- Multi-tenant support
- Comprehensive API documentation
📊 Benchmarks
| Operation | Documents | Time | Memory |
|---|---|---|---|
| Indexing | 100K docs | 45s | 2.1GB |
| Query | Top-10 | 12ms | 150MB |
| Batch Insert | 10K docs | 8s | 800MB |
| Similarity Search | 1M docs | 25ms | 1.2GB |
Benchmarks run on: Intel i7-10700K, 32GB RAM, SSD storage
🏆 Acknowledgments
- ChromaDB Team for the excellent vector database foundation
- Sentence Transformers for state-of-the-art embedding models
- Hugging Face for the transformers ecosystem
- Open Source Community for continuous inspiration and contributions
📝 Citation
If you use AI Prishtina VectorDB in your research or production systems, please cite:
```bibtex
@software{ai_prishtina_vectordb,
  author  = {Alban Maxhuni, PhD and AI Prishtina Team},
  title   = {AI Prishtina VectorDB: Enterprise-Grade Vector Database Library},
  year    = {2024},
  version = {0.1.0},
  url     = {https://github.com/albanmaxhuni/ai-prishtina-chromadb-client},
  doi     = {10.5281/zenodo.xxxxxxx}
}
```