LibVQ
A Library For Dense Retrieval Oriented Vector Quantization
Introduction
Vector quantization (VQ) is widely adopted in ANN libraries such as FAISS, ScaNN, SPTAG, and DiskANN to facilitate real-time and memory-efficient dense retrieval. However, conventional vector quantization methods, such as IVF, PQ, and OPQ, are not optimized for retrieval quality. We present LibVQ, the first library developed for dense retrieval oriented vector quantization. LibVQ is highlighted for the following features:
- Knowledge Distillation. The knowledge-distillation-based learning process can be applied directly to off-the-shelf embeddings. It yields stronger retrieval performance than existing VQ-based ANN indexes (see the sketch of the objective right after this list).
- Flexible usage and input conditions. LibVQ flexibly supports different usages, e.g., training the VQ parameters alone or jointly adapting the query encoder. It is also designed for a wide range of input conditions: it can work with off-the-shelf embeddings alone, or leverage extra data, e.g., relevance labels and source queries, for further enhancement.
- Learning and Deployment. Learning is backed by PyTorch and can be easily configured for efficient training on different computation resources. The trained VQ parameters are wrapped into FAISS backend ANN indexes, e.g., IndexPQ, IndexIVFPQ, etc., which are directly deployable for large-scale dense retrieval applications.
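To make the distillation objective concrete, below is a minimal sketch, assuming a listwise KL formulation between teacher scores (from the off-the-shelf embeddings) and student scores (from the quantized reconstructions). It is our illustration of the general recipe, not LibVQ's exact code.

```python
# Minimal sketch of a retrieval-oriented distillation loss (an assumed
# formulation, not LibVQ's exact implementation).
import torch
import torch.nn.functional as F

def distill_loss(q, docs, docs_quantized, temperature=1.0):
    """KL(teacher || student) over per-query document-score distributions.

    q:               [batch, dim] query embeddings
    docs:            [batch, n_docs, dim] off-the-shelf doc embeddings (teacher)
    docs_quantized:  [batch, n_docs, dim] reconstructed PQ embeddings (student)
    """
    teacher_scores = torch.einsum('bd,bnd->bn', q, docs)
    student_scores = torch.einsum('bd,bnd->bn', q, docs_quantized)
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    student_logp = F.log_softmax(student_scores / temperature, dim=-1)
    # Gradients flow into the quantized reconstructions, i.e., into the
    # learnable codebooks (and optionally a jointly adapted query encoder).
    return F.kl_div(student_logp, teacher_probs, reduction='batchmean')

# Toy check with random tensors standing in for real embeddings.
q = torch.randn(4, 768)
docs = torch.randn(4, 16, 768)
docs_q = docs + 0.1 * torch.randn_like(docs)  # stand-in for PQ reconstructions
print(distill_loss(q, docs, docs_q).item())
```

Note that this objective needs no relevance labels, which is consistent with the `*_nolabel` configurations reported in the results below.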
Install
- From source
git clone https://github.com/staoxiao/LibVQ.git
cd LibVQ
pip install .
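The package is also published on PyPI, so `pip install LibVQ` should work as well (version 0.0.1 at the time of writing).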
Workflow
In LibVQ, users can construct an index and train it in a simple way. Please refer to our docs for more details. We also provide some examples below to illustrate the usage of LibVQ.
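As a point of reference for the deployment side, the sketch below builds the kind of FAISS IndexIVFPQ that LibVQ's trained parameters get wrapped into. It uses FAISS's own unsupervised training (k-means IVF centroids and PQ codebooks), not LibVQ's learned parameters, and every size here is an illustrative assumption; see the LibVQ docs for the library's actual training API.

```python
# Build and query a FAISS IndexIVFPQ, the backend index type that LibVQ wraps.
# All dimensions and hyperparameters are illustrative assumptions.
import numpy as np
import faiss

d, nlist, M, nbits = 768, 1024, 32, 8   # dim, IVF cells, PQ sub-vectors, bits/code
docs = np.random.rand(100_000, d).astype('float32')  # stand-in for doc embeddings

quantizer = faiss.IndexFlatL2(d)        # coarse quantizer for the IVF structure
index = faiss.IndexIVFPQ(quantizer, d, nlist, M, nbits)
index.train(docs)                       # fits IVF centroids and PQ codebooks
index.add(docs)

index.nprobe = 64                       # IVF cells scanned per query
queries = np.random.rand(8, d).astype('float32')
distances, ids = index.search(queries, 10)  # top-10 candidates per query
print(ids.shape)                        # (8, 10)
```

With M = 32 sub-vectors at 8 bits each, every document is stored in 32 bytes of codes, which matches the 96x compression ratio used in the MSMARCO experiments below.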
Examples
MSMARCO
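Assuming 768-dimensional float32 embeddings (3,072 bytes per vector; the dimensionality is our assumption based on common MSMARCO encoders), a compression ratio of 96 means 3,072 / 96 = 32 bytes of codes per document, and the ratio of 384 used for NQ below means 8 bytes per document.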
- IVFPQ (Compression Ratio = 96)
Methods | MRR@10 | Recall@10 | Recall@100 |
---|---|---|---|
Faiss-IVFPQ | 0.1380 | 0.2820 | 0.5617 |
Faiss-IVFOPQ | 0.3102 | 0.5593 | 0.8148 |
ScaNN | 0.1791 | 0.3499 | 0.6345 |
LibVQ(contrastive_index) | 0.3179 | 0.5724 | 0.8214 |
LibVQ(distill_index) | 0.3253 | 0.5765 | 0.8256 |
LibVQ(distill_index_nolabel) | 0.3234 | 0.5813 | 0.8269 |
LibVQ(contrastive_index-and-query-encoder) | 0.3192 | 0.5799 | 0.8427 |
LibVQ(distill_index-and-query-encoder) | 0.3311 | 0.5907 | 0.8429 |
LibVQ(distill_index-and-query-encoder_nolabel) | 0.3285 | 0.5875 | 0.8401 |
- PQ (Compression Ratio = 96)
Methods | MRR@10 | Recall@10 | Recall@100 |
---|---|---|---|
Faiss-PQ | 0.1145 | 0.2369 | 0.5046 |
Faiss-OPQ | 0.3268 | 0.5939 | 0.8651 |
ScaNN | 0.1795 | 0.3516 | 0.6409 |
LibVQ(distill_index) | 0.3435 | 0.6203 | 0.8825 |
LibVQ(distill_index_nolabel) | 0.3467 | 0.6180 | 0.8849 |
LibVQ(distill_index-and-query-encoder) | 0.3446 | 0.6201 | 0.8837 |
LibVQ(distill_index-and-two-encoders) | 0.3475 | 0.6223 | 0.8901 |
NQ
- IVFPQ (Compression Ratio = 384)
Methods | Recall@5 | Recall@10 | Recall@20 | Recall@100 |
---|---|---|---|---|
Faiss-IVFPQ | 0.1504 | 0.2052 | 0.2722 | 0.4523 |
Faiss-IVFOPQ | 0.3332 | 0.4279 | 0.5110 | 0.6817 |
ScaNN | 0.2526 | 0.3351 | 0.4144 | 0.6016 |
LibVQ(contrastive_index) | 0.3398 | 0.4415 | 0.5232 | 0.6911 |
LibVQ(distill_index) | 0.3952 | 0.4900 | 0.5667 | 0.7232 |
LibVQ(distill_index_nolabel) | 0.4066 | 0.4936 | 0.5759 | 0.7301 |
LibVQ(contrastive_index-and-query-encoder) | 0.3548 | 0.4470 | 0.5390 | 0.7120 |
LibVQ(distill_index-and-query-encoder) | 0.4725 | 0.5681 | 0.6429 | 0.7739 |
LibVQ(distill_index-and-query-encoder_nolabel) | 0.4977 | 0.5822 | 0.6484 | 0.7764 |
- PQ (Compression Ratio = 384)
Methods | Recall@5 | Recall@10 | Recall@20 | Recall@100 |
---|---|---|---|---|
Faiss-PQ | 0.1301 | 0.1861 | 0.2495 | 0.4188 |
Faiss-OPQ | 0.3166 | 0.4105 | 0.4961 | 0.6836 |
ScaNN | 0.2526 | 0.3351 | 0.4144 | 0.6013 |
LibVQ(distill_index) | 0.3817 | 0.4806 | 0.5681 | 0.7357 |
LibVQ(distill_index_nolabel) | 0.3880 | 0.4858 | 0.5819 | 0.7423 |
LibVQ(distill_index-and-query-encoder) | 0.4709 | 0.5689 | 0.6481 | 0.7930 |
LibVQ(distill_index-and-query-encoder_nolabel) | 0.4883 | 0.5903 | 0.6678 | 0.7914 |
LibVQ(distill_index-and-two-encoders) | 0.5637 | 0.6515 | 0.7171 | 0.8257 |
LibVQ(distill_index-and-two-encoders_nolabel) | 0.5285 | 0.6144 | 0.7296 | 0.8096 |
Related Work
- Distill-VQ: Unifies the learning of IVF and PQ within a knowledge distillation framework. Accepted as a full paper by SIGIR 2022.
- BiDR: Applies learnable PQ to a large-scale index and proposes progressively optimized document embeddings for better retrieval performance. Accepted as a full paper by WWW 2022.
- MoPQ: Identifies the limitation of using reconstruction-loss minimization as the training objective for learnable PQ and proposes the Multinoulli Contrastive Loss. Accepted as a full paper by EMNLP 2021.