
Project description

This library uses a universal format for vector datasets to easily export and import data from all vector databases.

See the Contributing section to add support for your favorite vector database.

Supported Vector Databases

(Request support for a VectorDB by voting/commenting here: https://github.com/AI-Northstar-Tech/vector-io/discussions/38)

Vector Database               | Import | Export
------------------------------|--------|-------
Pinecone                      |   ✅   |   ✅
Qdrant                        |   ✅   |   ✅
Milvus                        |   ✅   |   ✅
Azure AI Search               |   🔜   |   🔜
GCP Vertex AI Vector Search   |   🔜   |   🔜
KDB.AI                        |   🔜   |   🔜
Rockset                       |   🔜   |   🔜

Support has also been requested (but is not yet available) for: Vespa, Weaviate, MongoDB Atlas, Epsilla, txtai, Redis Search, OpenSearch, Activeloop Deep Lake, Anari AI, Apache Cassandra, ApertureDB, Chroma, ClickHouse, CrateDB, DataStax Astra DB, Elasticsearch, LanceDB, Marqo, Meilisearch, MyScale, Neo4j, Nuclia DB, OramaSearch, pgvector, Turbopuffer, Typesense, USearch, Vald, and Apache Solr.

Universal Vector Dataset Format (VDF) specification

  1. VDF_META.json: a JSON file with the following schema (a hypothetical example appears after this list):

interface Index {
  namespace: string;
  total_vector_count: number;
  exported_vector_count: number;
  dimensions: number;
  model_name: string;
  vector_columns: string[];
  data_path: string;
  metric: 'Euclid' | 'Cosine' | 'Dot';
}

interface VDFMeta {
  version: string;
  file_structure: string[];
  author: string;
  exported_from: 'pinecone' | 'qdrant'; // others when they are added
  indexes: {
    [key: string]: Index[];
  };
  exported_at: string;
}
  2. Parquet files/folders for metadata and vectors.
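
For illustration, here is a hypothetical VDF_META.json that follows the schema above; every value (author, index name, file paths, counts) is a made-up placeholder, and the model name simply reuses the one from the example command later on this page.

{
  "version": "0.0.12",
  "file_structure": ["my-index/1.parquet", "VDF_META.json"],
  "author": "example-user",
  "exported_from": "pinecone",
  "indexes": {
    "my-index": [
      {
        "namespace": "",
        "total_vector_count": 10000,
        "exported_vector_count": 10000,
        "dimensions": 768,
        "model_name": "hkunlp/instructor-xl",
        "vector_columns": ["vector"],
        "data_path": "my-index/1.parquet",
        "metric": "Cosine"
      }
    ]
  },
  "exported_at": "2024-01-01T00:00:00+00:00"
}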

Installation

Using pip

pip install vdf-io

From source

git clone https://github.com/AI-Northstar-Tech/vector-io.git
cd vector-io
pip install -r requirements.txt

Export Script

export_vdf --help

usage: export.py [-h] [-m MODEL_NAME] [--max_file_size MAX_FILE_SIZE]
                 [--push_to_hub | --no-push_to_hub]
                 {pinecone,qdrant} ...

Export data from a vector database to VDF

options:
  -h, --help            show this help message and exit
  -m MODEL_NAME, --model_name MODEL_NAME
                        Name of model used
  --max_file_size MAX_FILE_SIZE
                        Maximum file size in MB (default: 1024)
  --push_to_hub, --no-push_to_hub
                        Push to hub

Vector Databases:
  Choose the vectors database to export data from

  {pinecone,qdrant,vertexai_vectorsearch}
    pinecone                 Export data from Pinecone
    qdrant                   Export data from Qdrant
    vertexai_vectorsearch    Export data from Vertex AI Vector Search
export_vdf pinecone --help
usage: export_vdf pinecone [-h] [-e ENVIRONMENT] [-i INDEX]
                          [-s ID_RANGE_START]
                          [--id_range_end ID_RANGE_END]
                          [-f ID_LIST_FILE]
                          [--modify_to_search MODIFY_TO_SEARCH]

options:
  -h, --help            show this help message and exit
  -e ENVIRONMENT, --environment ENVIRONMENT
                        Environment of Pinecone instance
  -i INDEX, --index INDEX
                        Name of index to export
  -s ID_RANGE_START, --id_range_start ID_RANGE_START
                        Start of id range
  --id_range_end ID_RANGE_END
                        End of id range
  -f ID_LIST_FILE, --id_list_file ID_LIST_FILE
                        Path to id list file
  --modify_to_search MODIFY_TO_SEARCH
                        Allow modifying data to search
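
A hypothetical invocation (the index name and ID range are placeholders; the environment value mirrors the example in the Examples section below):

export_vdf pinecone --environment gcp-starter --index my-index --id_range_start 0 --id_range_end 100000
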
export_vdf_cli.py qdrant --help
usage: export.py qdrant [-h] [-u URL] [-c COLLECTIONS]

options:
  -h, --help            show this help message and exit
  -u URL, --url URL     Location of Qdrant instance
  -c COLLECTIONS, --collections COLLECTIONS
                        Names of collections to export
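
A hypothetical invocation (the URL assumes a local Qdrant instance on its default port; the collection name is a placeholder):

export_vdf qdrant --url http://localhost:6333 --collections my_collection
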
export_vdf_cli.py milvus --help
usage: export_vdf_cli.py milvus [-h] [-u URI] [-t TOKEN] [-c COLLECTIONS]

optional arguments:
  -h, --help            show this help message and exit
  -u URI, --uri URI     Milvus connection URI
  -t TOKEN, --token TOKEN
                        Milvus connection token
  -c COLLECTIONS, --collections COLLECTIONS
                        Names of collections to export
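
A hypothetical invocation (the URI assumes a local Milvus instance on its default port; the collection name is a placeholder):

export_vdf_cli.py milvus --uri http://localhost:19530 --collections my_collection
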
export_vdf_cli.py vertexai_vectorsearch --help
usage: export_vdf_cli.py vertexai_vectorsearch [-h] [-p PROJECT_ID] [-i INDEX]
                          [-c GCLOUD_CREDENTIALS_FILE]

options:
  -h, --help            show this help message and exit
  -p PROJECT_ID, --project-id PROJECT_ID
                        Google Cloud Project ID
  -i INDEX, --index INDEX
                        Name of index/indexes to export (comma-separated)
  -c GCLOUD_CREDENTIALS_FILE, --gcloud-credentials-file GCLOUD_CREDENTIALS_FILE
                        Google Cloud Service Account Credentials file
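
A hypothetical invocation (the project ID, index name, and credentials path are placeholders):

export_vdf_cli.py vertexai_vectorsearch --project-id my-gcp-project --index my-index --gcloud-credentials-file ./credentials.json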

Import Script

import_vdf_cli.py --help
usage: import_vdf_cli.py [-h] [-d DIR] {pinecone,qdrant} ...

Import data from VDF to a vector database

options:
  -h, --help         show this help message and exit
  -d DIR, --dir DIR  Directory to import

Vector Databases:
  Choose the vectors database to export data from

  {pinecone,qdrant}
    pinecone         Import data to Pinecone
    qdrant           Import data to Qdrant

import_vdf_cli.py pinecone --help
usage: import_vdf_cli.py pinecone [-h] [-e ENVIRONMENT]

options:
  -h, --help            show this help message and exit
  -e ENVIRONMENT, --environment ENVIRONMENT
                        Pinecone environment
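
A hypothetical invocation (the VDF directory name is a placeholder; the environment value mirrors the export example):

import_vdf_cli.py -d ./my_vdf_export pinecone -e gcp-starter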

import_vdf_cli.py qdrant --help
usage: import_vdf_cli.py qdrant [-h] [-u URL]

options:
  -h, --help         show this help message and exit
  -u URL, --url URL  Qdrant url
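
A hypothetical invocation (the directory name is a placeholder and the URL assumes a local Qdrant instance):

import_vdf_cli.py -d ./my_vdf_export qdrant -u http://localhost:6333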

import_vdf_cli.py vertexai_vectorsearch --help
usage: import_vdf_cli.py vertexai_vectorsearch [-h] [-p PROJECT_ID] [-l REGION]

options:
  -h, --help            show this help message and exit
  -p PROJECT_ID, --project-id PROJECT_ID
                        Google Cloud Project ID
  -l REGION, --location REGION
                        Google Cloud region hosting index
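
A hypothetical invocation (the directory, project ID, and region are placeholders):

import_vdf_cli.py -d ./my_vdf_export vertexai_vectorsearch -p my-gcp-project -l us-central1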

Re-embed Script

This Python script re-embeds a vector dataset: it takes a directory containing a dataset in the VDF format and re-embeds it using a new model. You can also specify the name of the column containing the text to be embedded.

reembed.py --help
usage: reembed.py [-h] -d DIR [-m NEW_MODEL_NAME]
                  [-t TEXT_COLUMN]

Reembed a vector dataset

options:
  -h, --help            show this help message and exit
  -d DIR, --dir DIR     Directory of vector dataset in
                        the VDF format
  -m NEW_MODEL_NAME, --new_model_name NEW_MODEL_NAME
                        Name of new model to be used
  -t TEXT_COLUMN, --text_column TEXT_COLUMN
                        Name of the column containing
                        text to be embedded
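
A hypothetical invocation (the directory, model name, and text column are placeholders):

reembed.py -d ./my_vdf_export -m sentence-transformers/all-MiniLM-L6-v2 -t text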

Examples

export_vdf -m hkunlp/instructor-xl --push_to_hub pinecone --environment gcp-starter

Follow the prompts to select the index and ID range to export.

Contributing

Adding a new vector database

If you wish to add support for a new vector database, you must implement both the import and the export side for that database. Please fork the repo and send a PR containing both scripts.

Steps to add a new vector database (ABC):

Export:

  1. Add a new subparser in export_vdf_cli.py for the new vector database, with database-specific arguments such as the database URL and any authentication tokens.

  2. Add a new file in src/vdf_io/export_vdf/ for the new vector database. This file should define a class ExportABC which inherits from ExportVDF (a minimal sketch appears after this list).

  3. Specify a DB_NAME_SLUG for the class

  4. The class should implement the get_data() function to download points (in a batched manner), with all their metadata, from the specified index of the vector database. The data should be stored in a series of Parquet files/folders, and the metadata should be stored in a JSON file with the schema above.

  5. Use the script to export data from an example index of the vector database and verify that the data is exported correctly.
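
A rough sketch of what the exporter class might look like. Only the ExportABC/ExportVDF names, the DB_NAME_SLUG attribute, and the get_data() method come from the steps above; the import path and everything else is an assumption:

# Hypothetical skeleton for a new exporter; import path is assumed, not confirmed.
from vdf_io.export_vdf import ExportVDF  # assumed import path

class ExportABC(ExportVDF):
    DB_NAME_SLUG = "abc"  # hypothetical slug used by the CLI subparser

    def get_data(self):
        # Connect to the ABC instance using the arguments collected by the subparser,
        # page through the chosen index in batches, write each batch of vectors and
        # metadata to Parquet files, and record the index details (counts, dimensions,
        # metric, data_path) so they can be written to VDF_META.json per the schema above.
        raise NotImplementedError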

Import:

  1. Add a new subparser in import_vdf_cli.py for the new vector database, with database-specific arguments such as the database URL and any authentication tokens.

  2. Add a new file in src/vdf_io/import_vdf/ for the new vector database. This file should define a class ImportABC which inherits from ImportVDF (a minimal sketch appears after this list). It should implement the upsert_data() function to upload points from a VDF dataset (in a batched manner), with all their metadata, to the specified index of the vector database. All metadata about the dataset should be read from the VDF_META.json file in the VDF folder.

  3. Use the script to import data from the example vdf dataset exported in the previous step and verify that the data is imported correctly.
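
A rough sketch of the importer class. Only the ImportABC/ImportVDF names and the upsert_data() method come from the steps above; the import path and everything else is an assumption:

# Hypothetical skeleton for a new importer; import path is assumed, not confirmed.
from vdf_io.import_vdf import ImportVDF  # assumed import path

class ImportABC(ImportVDF):
    def upsert_data(self):
        # Read VDF_META.json from the VDF folder to find each index's Parquet data_path,
        # then load the Parquet files and upsert the vectors and metadata in batches
        # into the target ABC index.
        raise NotImplementedError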

Changing the VDF specification

If you wish to change the VDF specification, please open an issue to discuss the change before sending a PR.

Efficiency improvements

If you wish to improve the efficiency of the import/export scripts, please fork the repo and send a PR.

Questions

If you have any questions, please open an issue on the repo or message Dhruv Anand on LinkedIn.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vdf_io-0.0.12.tar.gz (45.5 kB)

Uploaded Source

Built Distribution

vdf_io-0.0.12-py3-none-any.whl (75.9 kB)

Uploaded Python 3

File details

Details for the file vdf_io-0.0.12.tar.gz.

File metadata

  • Download URL: vdf_io-0.0.12.tar.gz
  • Upload date:
  • Size: 45.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for vdf_io-0.0.12.tar.gz:

  • SHA256: 2624e3ed3b0a196653d6ef2fa36f659b3397f990c16059d5ca745a2784dc2fa6
  • MD5: 1d4d485d2be13c68ffbe84f82a16c24e
  • BLAKE2b-256: 6827dfd4c3ac18ca9e73bcbc9d1959f044cbe849c30eeb179d97bd86ee57743f


File details

Details for the file vdf_io-0.0.12-py3-none-any.whl.

File metadata

  • Download URL: vdf_io-0.0.12-py3-none-any.whl
  • Upload date:
  • Size: 75.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.6

File hashes

Hashes for vdf_io-0.0.12-py3-none-any.whl:

  • SHA256: 8aef93ac662a929603d4260fc771324ee1dd168d8920470f648f930a73cf4959
  • MD5: 8dec4c62622dbbdeea9f4a8f9a6027a4
  • BLAKE2b-256: cd96cafa11bab045250089031b3b9eea86ddbdc47e8d76653e58f9edcc365093

