
InPars

Inquisitive Parrots for Search
A toolkit for end-to-end synthetic data generation using LLMs for information retrieval (IR)

Installation

Use the pip package manager to install the InPars toolkit:

pip install inpars

Usage

To generate data for one of the BEIR datasets, you can use the following command:

python -m inpars.generate \
        --prompt="inpars" \
        --dataset="trec-covid" \
        --dataset_source="ir_datasets" \
        --base_model="EleutherAI/gpt-j-6B" \
        --output="trec-covid-queries.jsonl" 

Alternatively, you can use your own custom dataset by pointing the corpus and queries arguments to local files, as in the sketch below.
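For example, generating queries from a local corpus might look like the following. This is a hedged sketch: the file path is a placeholder, and the exact flag names and expected file formats should be confirmed via python -m inpars.generate --help.

python -m inpars.generate \
        --prompt="inpars" \
        --corpus="path/to/corpus.jsonl" \
        --base_model="EleutherAI/gpt-j-6B" \
        --output="custom-queries.jsonl"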

The generated queries can be noisy, so a filtering step is highly recommended:

python -m inpars.filter \
        --input="trec-covid-queries.jsonl" \
        --dataset="trec-covid" \
        --filter_strategy="scores" \
        --keep_top_k="10_000" \
        --output="trec-covid-queries-filtered.jsonl"

Two filtering strategies are currently available: scores, which uses the probability scores from the LLM itself, and reranker, which uses an auxiliary reranker to filter the queries, as introduced by InPars-v2.
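To filter with the reranker strategy instead, only the strategy flag changes (the command below assumes the same interface as the scores example; the auxiliary reranker checkpoint is whatever the toolkit uses by default):

python -m inpars.filter \
        --input="trec-covid-queries.jsonl" \
        --dataset="trec-covid" \
        --filter_strategy="reranker" \
        --keep_top_k="10_000" \
        --output="trec-covid-queries-filtered.jsonl"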

To prepare the training file, negative examples are mined by retrieving candidate documents with BM25 for each generated query and sampling negatives from those candidates. This is done with the following command:

python -m inpars.generate_triples \
        --input="trec-covid-queries-filtered.jsonl" \
        --dataset="trec-covid" \
        --output="trec-covid-triples.tsv"

With the generated triples file, you can train the model using the following command:

python -m inpars.train \
        --triples="trec-covid-triples.tsv" \
        --base_model="castorini/monot5-3b-msmarco-10k" \
        --output_dir="./reranker/" \
        --max_steps="156"

You can choose different base models, as well as any hyperparameters and training strategies supported by the HuggingFace Trainer.
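For example, assuming inpars.train forwards standard HuggingFace TrainingArguments flags (an assumption; run python -m inpars.train --help to confirm), you could adjust the learning rate and batch size like this:

python -m inpars.train \
        --triples="trec-covid-triples.tsv" \
        --base_model="castorini/monot5-3b-msmarco-10k" \
        --output_dir="./reranker/" \
        --max_steps="156" \
        --learning_rate="3e-5" \
        --per_device_train_batch_size="8"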

After fine-tuning the reranker, you can rerank prebuilt runs from the BEIR benchmark, or a custom run, using the following command:

python -m inpars.rerank \
        --model="./reranker/" \
        --dataset="trec-covid" \
        --output_run="trec-covid-run.txt"

Finally, you can evaluate the reranked run using the following command:

python -m inpars.evaluate \
        --dataset="trec-covid" \
        --run="trec-covid-run.txt"

Resources

Generated datasets

Synthetic datasets generated by InPars-v1 are available for download.

Finetuned models

Fine-tuned InPars-v2 models can be downloaded from the HuggingFace Hub.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

References

If you use this tool, please cite the original InPars paper published at SIGIR and/or the InPars-v2 paper:

@inproceedings{inpars,
  author = {Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Nogueira, Rodrigo},
  title = {{InPars}: Unsupervised Dataset Generation for Information Retrieval},
  year = {2022},
  isbn = {9781450387323},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3477495.3531863},
  doi = {10.1145/3477495.3531863},
  booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  pages = {2387–2392},
  numpages = {6},
  keywords = {generative models, large language models, question generation, synthetic datasets, few-shot models, multi-stage ranking},
  location = {Madrid, Spain},
  series = {SIGIR '22}
}

@misc{inparsv2,
  doi = {10.48550/ARXIV.2301.01820},
  url = {https://arxiv.org/abs/2301.01820},
  author = {Jeronymo, Vitor and Bonifacio, Luiz and Abonizio, Hugo and Fadaee, Marzieh and Lotufo, Roberto and Zavrel, Jakub and Nogueira, Rodrigo},
  title = {{InPars-v2}: Large Language Models as Efficient Dataset Generators for Information Retrieval},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}
