
Project description

AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark

Introduction | Documentation | Leaderboard | Citing

Introduction

Background & Motivation

Evaluation is crucial for the development of information retrieval models. In recent years, a series of milestone works have been introduced to the community, such as MSMARCO, Natural Questions (open-domain QA), MIRACL (multilingual retrieval), BEIR, and MTEB (general-domain zero-shot retrieval). However, the existing benchmarks are severely limited in the following respects.

  • Incapability of dealing with new domains. All of the existing benchmarks are static: they are established for pre-defined domains based on human-labeled data. Therefore, they cannot cover new domains that users are interested in.
  • Potential risk of over-fitting and data leakage. Existing retrievers are intensively fine-tuned to achieve strong performance on popular benchmarks like BEIR and MTEB. Although these benchmarks were originally designed for zero-shot, out-of-domain evaluation, their in-domain training data is widely used during fine-tuning. Worse still, since the evaluation datasets are publicly available, the testing data can accidentally leak into a retriever's training set.

Features of AIR-Bench

The new benchmark is highlighted by the following features.

  • Automated. The testing data is automatically generated by large language models without human intervention. As a result, new domains can be evaluated instantly and at very low cost, and the newly generated testing data is very unlikely to be covered by the training sets of any existing retrievers.
  • Heterogeneous and dynamic. The testing data is generated for diverse and continually expanding domains and languages (i.e., multi-domain and multilingual). As a result, it provides an increasingly comprehensive evaluation benchmark for community developers.
  • Retrieval- and RAG-oriented. The new benchmark is dedicated to the evaluation of retrieval performance. In addition to typical evaluation scenarios, such as open-domain question answering or paraphrase retrieval, it also incorporates a new setting called inner-document retrieval, which is closely related to today's LLM and RAG applications. In this setting, the model is expected to retrieve the relevant chunks of a very long document, which contain the critical information needed to answer the input question (see the sketch below).
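
The following is a minimal sketch of what the inner-document setting looks like in practice, assuming a naive chunking scheme and an off-the-shelf embedding model (BAAI/bge-small-en-v1.5, the chunk size, and the file path are illustrative assumptions): the long document is split into chunks, the chunks and the question are embedded, and the chunks are ranked by cosine similarity. This is not AIR-Bench's actual generation or evaluation code.

    # Illustrative sketch of the inner-document (long-doc) retrieval setting.
    # Model name, chunking, and file path are assumptions, not AIR-Bench internals.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    def chunk_text(text, chunk_size=200):
        """Naive fixed-size word chunking; the benchmark's real chunking may differ."""
        words = text.split()
        return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

    long_document = open("annual_report.txt").read()  # hypothetical long document
    question = "What was the company's revenue growth in 2023?"

    chunks = chunk_text(long_document)

    model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # any embedding model works
    chunk_emb = model.encode(chunks, normalize_embeddings=True)
    query_emb = model.encode([question], normalize_embeddings=True)[0]

    # With normalized vectors, cosine similarity is a plain dot product.
    scores = chunk_emb @ query_emb
    for rank, idx in enumerate(np.argsort(-scores)[:5], start=1):
        print(f"{rank}. score={scores[idx]:.4f}  chunk={chunks[idx][:80]}...")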

Documentation

  • 🏭 Pipeline: The data generation pipeline of AIR-Bench
  • 📋 Tasks: Overview of available tasks in AIR-Bench
  • 📈 Leaderboard: The interactive leaderboard of AIR-Bench
  • 🚀 Submit: Information related to how to submit a model to AIR-Bench
  • 🤝 Contributing: How to contribute to AIR-Bench

Available Evaluation Results

Detailed evaluation results are available here.

Analysis of the results:

  • AIR-Bench performance scales with model size. For example, multilingual-e5-large is better than multilingual-e5-base and multilingual-e5-base is better than multilingual-e5-small. This can also be observed in bge-large-en-v1.5, bge-base-en-v1.5 and bge-small-en-v1.5.
  • The generated dataset maintains good consistency with the human-labeled dataset. The Spearman correlation between the rankings on the original MSMARCO dataset and the generated MSMARCO dataset is 0.8945 (see the sketch after this list for how such a rank correlation can be computed).
  • Model performance varies across domains. For example, e5-mistral-7b-instruct is better than bge-m3 in the healthcare domain, but worse than bge-m3 in the law domain.
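
As a rough illustration of how such a rank correlation can be computed, the sketch below applies scipy.stats.spearmanr to two sets of per-model scores. The scores are made-up placeholders, not the actual leaderboard values behind the 0.8945 figure.

    # Sketch: Spearman correlation between model rankings on two datasets.
    # The scores below are made-up placeholders, not real AIR-Bench results.
    from scipy.stats import spearmanr

    # Retrieval metric (e.g. nDCG@10) for five models on the human-labeled
    # dataset vs. the generated dataset.
    scores_human_labeled = [0.42, 0.55, 0.61, 0.48, 0.70]
    scores_generated = [0.40, 0.57, 0.59, 0.45, 0.72]

    # spearmanr ranks the values internally, so raw metric scores can be passed in.
    corr, p_value = spearmanr(scores_human_labeled, scores_generated)
    print(f"Spearman correlation between rankings: {corr:.4f} (p={p_value:.4f})")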

Future Work

  • More datasets will be generated to cover more domains and languages in the future.

Acknowledgement

Citing

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distribution

airb-0.0.1-py3-none-any.whl (30.0 kB)

Uploaded Python 3

File details

Details for the file airb-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: airb-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 30.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.18

File hashes

Hashes for airb-0.0.1-py3-none-any.whl

  • SHA256: 0520078967b01d488de6beb3eaf4faeef2a5994530b8101885415d7b2e423dcd
  • MD5: 1f95c14d0c135ca5ec420a20187001a4
  • BLAKE2b-256: 4f320b720fc3583e5dc38601cc6a9f0c2212b5268135b1938bd221d2bbd52978

See more details on using hashes here.
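
As a minimal sketch of how the listed digests can be used, the snippet below computes the SHA256 hash of a locally downloaded wheel with Python's standard hashlib and compares it against the value listed above; the local file path is an assumption about where the wheel was saved.

    # Sketch: verify a downloaded wheel against the SHA256 digest listed above.
    # The local path is an assumption about where the file was saved.
    import hashlib

    wheel_path = "airb-0.0.1-py3-none-any.whl"
    expected_sha256 = "0520078967b01d488de6beb3eaf4faeef2a5994530b8101885415d7b2e423dcd"

    sha256 = hashlib.sha256()
    with open(wheel_path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            sha256.update(block)

    if sha256.hexdigest() == expected_sha256:
        print("OK: hash matches the published digest")
    else:
        print("WARNING: hash mismatch; the file may be corrupted or incomplete")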
