Dense Retriever
🦮 Golden Retriever
WIP: A distributed-compatible codebase is under development. Check the distributed branch for the latest updates.
How to use
Install the library from PyPI:
pip install goldenretriever-core
or from source:
git clone https://github.com/Riccorl/golden-retriever.git
cd golden-retriever
pip install -e .
Usage
How to run an experiment
Training
Here is a simple example of how to train a DPR-like retriever. First download a dataset in DPR format (the snippet below uses WebQuestions), then run the following code:
from goldenretriever.trainer import Trainer
from goldenretriever import GoldenRetriever
from goldenretriever.data.datasets import InBatchNegativesDataset

# create a retriever
retriever = GoldenRetriever(
    question_encoder="intfloat/e5-small-v2",
    passage_encoder="intfloat/e5-small-v2",
)

# create the training and validation datasets
train_dataset = InBatchNegativesDataset(
    name="webq_train",
    path="path/to/webq_train.json",
    tokenizer=retriever.question_tokenizer,
    question_batch_size=64,
    passage_batch_size=400,
    max_passage_length=64,
    shuffle=True,
)
val_dataset = InBatchNegativesDataset(
    name="webq_dev",
    path="path/to/webq_dev.json",
    tokenizer=retriever.question_tokenizer,
    question_batch_size=64,
    passage_batch_size=400,
    max_passage_length=64,
)

trainer = Trainer(
    retriever=retriever,
    train_dataset=train_dataset,
    val_dataset=val_dataset,
    max_steps=25_000,
    wandb_online_mode=True,
    wandb_project_name="golden-retriever-dpr",
    wandb_experiment_name="e5-small-webq",
    max_hard_negatives_to_mine=5,
)

# start training
trainer.train()
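The dataset's name hints at the underlying objective: each question in a batch is paired with its own positive passage, and every other passage in the same batch serves as a negative. The following is an illustrative sketch of that loss, not the library's actual implementation (`in_batch_negatives_loss` is a hypothetical helper):

```python
import math

def in_batch_negatives_loss(question_embs, passage_embs):
    """Cross-entropy loss where, for question i, passage i is the positive
    and every other passage in the batch acts as a negative."""
    losses = []
    for i, q in enumerate(question_embs):
        # dot-product similarity of this question against every passage
        scores = [sum(qd * pd for qd, pd in zip(q, p)) for p in passage_embs]
        # log-softmax over the batch; the "label" is the matching index i
        m = max(scores)
        log_norm = m + math.log(sum(math.exp(s - m) for s in scores))
        losses.append(-(scores[i] - log_norm))
    return sum(losses) / len(losses)

# toy batch of 3 aligned question/passage embedding pairs
qs = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
ps = [[1.0, 0.1], [0.1, 1.0], [0.6, 0.8]]
print(in_batch_negatives_loss(qs, ps))
```

Larger batches mean more (and harder) negatives per question for free, which is why the dataset exposes separate question and passage batch sizes.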
Evaluation
from goldenretriever.trainer import Trainer
from goldenretriever import GoldenRetriever
from goldenretriever.data.datasets import InBatchNegativesDataset

retriever = GoldenRetriever(
    question_encoder="",
    document_index="",
    device="cuda",
    precision="16",
)

test_dataset = InBatchNegativesDataset(
    name="test",
    path="",
    tokenizer=retriever.question_tokenizer,
    question_batch_size=64,
    passage_batch_size=400,
    max_passage_length=64,
)

trainer = Trainer(
    retriever=retriever,
    test_dataset=test_dataset,
    log_to_wandb=False,
    top_k=[20, 100],
)
trainer.test()
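The top_k=[20, 100] argument asks for retrieval quality at those cutoffs. A minimal sketch of such a recall@k metric, as an illustrative re-implementation rather than the library's code (`recall_at_k` is a hypothetical helper):

```python
def recall_at_k(retrieved_ids, gold_ids, k):
    """Fraction of queries whose gold passage appears in the top-k results."""
    hits = sum(
        1 for ranked, gold in zip(retrieved_ids, gold_ids) if gold in ranked[:k]
    )
    return hits / len(gold_ids)

# three queries with their ranked results and gold passage ids
retrieved = [["p1", "p7", "p3"], ["p9", "p2", "p4"], ["p5", "p6", "p8"]]
gold = ["p1", "p4", "p0"]
print(recall_at_k(retrieved, gold, 1))  # only the first query hits at k=1
print(recall_at_k(retrieved, gold, 3))  # the second query also hits at k=3
```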
Inference
from goldenretriever import GoldenRetriever

retriever = GoldenRetriever(
    question_encoder="path/to/question/encoder",
    passage_encoder="path/to/passage/encoder",
    document_index="path/to/document/index",
)

# retrieve documents
retriever.retrieve("What is the capital of France?", k=5)
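Conceptually, the call above embeds the query and ranks indexed passages by similarity to it. A toy sketch of that ranking step, assuming dot-product similarity over precomputed embeddings (`retrieve_top_k` and the tiny index below are hypothetical, not part of the library):

```python
def retrieve_top_k(query_emb, index, k):
    """Rank index entries by dot-product similarity to the query embedding."""
    scored = [
        (sum(q * d for q, d in zip(query_emb, emb)), doc_id)
        for doc_id, emb in index.items()
    ]
    scored.sort(reverse=True)  # highest similarity first
    return [doc_id for _, doc_id in scored[:k]]

# toy index: document id -> 2-d embedding
index = {"paris": [0.9, 0.1], "rome": [0.2, 0.8], "lyon": [0.7, 0.3]}
print(retrieve_top_k([1.0, 0.0], index, k=2))
```

Real document indexes use approximate nearest-neighbor search instead of this exhaustive scan, but the ranking idea is the same.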
Data format
Input data
The retriever expects a JSON file containing a list of examples, in the same format as DPR:
[
    {
        "question": "....",
        "answers": ["...", "...", "..."],
        "positive_ctxs": [{
            "title": "...",
            "text": "...."
        }],
        "negative_ctxs": ["..."],
        "hard_negative_ctxs": ["..."]
    },
    ...
]
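Before training, it can be worth checking that every record carries the keys the retriever expects. A small sketch of such a check (`validate_dpr_records` is a hypothetical helper, not part of the library):

```python
REQUIRED_KEYS = {
    "question", "answers", "positive_ctxs", "negative_ctxs", "hard_negative_ctxs"
}

def validate_dpr_records(records):
    """Raise ValueError if any record is missing a required DPR-format key."""
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
    return True

sample = [{
    "question": "What is the capital of France?",
    "answers": ["Paris"],
    "positive_ctxs": [{"title": "Paris", "text": "Paris is the capital of France."}],
    "negative_ctxs": [],
    "hard_negative_ctxs": [],
}]
print(validate_dpr_records(sample))
```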
Index data
The documents to index can be provided as either a jsonl file or a tsv file, similar to DPR:
- jsonl: each line is a JSON object with the keys id, text, and metadata
- tsv: each line is a tab-separated string with the id and text columns, followed by any other columns, which are stored in the metadata field
jsonl example (one JSON object per line):
{"id": "...", "text": "...", "metadata": {...}}
...
tsv example:
id \t text \t any other column
...
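Converting between the two index formats is straightforward. A sketch, assuming the extra tsv columns are folded into the metadata field as a list (`tsv_to_jsonl` is a hypothetical helper, not part of the library):

```python
import csv
import json
import os
import tempfile

def tsv_to_jsonl(tsv_path, jsonl_path):
    """Convert a DPR-style tsv index (id, text, extra columns) to jsonl,
    storing any extra columns in the metadata field."""
    with open(tsv_path, newline="") as src, open(jsonl_path, "w") as dst:
        for row in csv.reader(src, delimiter="\t"):
            doc_id, text, *extra = row
            dst.write(json.dumps({"id": doc_id, "text": text, "metadata": extra}) + "\n")

# round-trip a one-line example through a temporary directory
tmp = tempfile.mkdtemp()
tsv_path = os.path.join(tmp, "index.tsv")
jsonl_path = os.path.join(tmp, "index.jsonl")
with open(tsv_path, "w", newline="") as f:
    f.write("doc1\tParis is the capital of France.\tgeo\n")
tsv_to_jsonl(tsv_path, jsonl_path)
with open(jsonl_path) as f:
    rec = json.loads(f.readline())
print(rec)
```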
Download files
Source Distribution: goldenretriever_core-0.9.4.tar.gz (79.6 kB)
Built Distribution: goldenretriever_core-0.9.4-py3-none-any.whl
Hashes for goldenretriever_core-0.9.4.tar.gz
Algorithm | Hash digest
---|---
SHA256 | 50cfca608368f7856b92c896c3907b6cdea82d383dc91b6f2b3b53c0375473da
MD5 | 8f7d644b13da2d1eb3521cfc0642ced5
BLAKE2b-256 | 48f72f1e5056cc76e6aff5112036bdfb1130a1630e35bfa037569e5813072c21
Hashes for goldenretriever_core-0.9.4-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 507438dd70d51a547860d6692cb219494cf0d863c06ecb90d4f487556f9596c7
MD5 | 38e60c02d6d111f5cc19146db3cb6d18
BLAKE2b-256 | f3a8ed2033363af7039e413f2d8e68165c74eae61184279e4c377655625cce41