🦮 Golden Retriever
How to use
Install the library from PyPI:

```bash
pip install goldenretriever
```

or from source:

```bash
git clone https://github.com/Riccorl/golden-retriever.git
cd goldenretriever
pip install -e .
```
Usage
How to run an experiment
Training
Here is a simple example of how to train a DPR-like retriever. First, download a dataset in DPR format from [DPR]() (the snippet below points at WebQuestions files). Then run the following code:
```python
from goldenretriever.trainer import Trainer
from goldenretriever import GoldenRetriever
from goldenretriever.data.datasets import InBatchNegativesDataset

# create a retriever
retriever = GoldenRetriever(
    question_encoder="intfloat/e5-small-v2",
    passage_encoder="intfloat/e5-small-v2",
)

# create the training and validation datasets
train_dataset = InBatchNegativesDataset(
    name="webq_train",
    path="path/to/webq_train.json",
    tokenizer=retriever.question_tokenizer,
    question_batch_size=64,
    passage_batch_size=400,
    max_passage_length=64,
    shuffle=True,
)
val_dataset = InBatchNegativesDataset(
    name="webq_dev",
    path="path/to/webq_dev.json",
    tokenizer=retriever.question_tokenizer,
    question_batch_size=64,
    passage_batch_size=400,
    max_passage_length=64,
)

trainer = Trainer(
    retriever=retriever,
    train_dataset=train_dataset,
    val_dataset=val_dataset,
    max_steps=25_000,
    wandb_online_mode=True,
    wandb_project_name="golden-retriever-dpr",
    wandb_experiment_name="e5-small-webq",
    max_hard_negatives_to_mine=5,
)

# start training
trainer.train()
```
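As the name suggests, `InBatchNegativesDataset` trains with in-batch negatives: within a batch, each question's positive passage doubles as a negative for every other question. A minimal, library-independent sketch of the idea follows; the toy vectors and the plain softmax cross-entropy loss below are only illustrative, not Golden Retriever's actual implementation:

```python
import math

def in_batch_negatives_loss(question_vecs, passage_vecs):
    """Contrastive loss where passage j != i acts as a negative for question i."""
    losses = []
    for i, q in enumerate(question_vecs):
        # similarity of question i against every passage in the batch
        scores = [sum(a * b for a, b in zip(q, p)) for p in passage_vecs]
        # softmax cross-entropy with the matching passage (index i) as the target
        log_norm = math.log(sum(math.exp(s) for s in scores))
        losses.append(log_norm - scores[i])
    return sum(losses) / len(losses)

# toy batch: question i is aligned with passage i
questions = [[1.0, 0.0], [0.0, 1.0]]
passages = [[0.9, 0.1], [0.1, 0.9]]
loss = in_batch_negatives_loss(questions, passages)
```

Because the negatives come for free from the rest of the batch, larger question batches give harder training signals without extra encoding cost; the `max_hard_negatives_to_mine` option additionally mines explicit hard negatives on top of this.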
Evaluation
```python
from goldenretriever.trainer import Trainer
from goldenretriever import GoldenRetriever
from goldenretriever.data.datasets import InBatchNegativesDataset

retriever = GoldenRetriever(
    question_encoder="",
    document_index="",
    device="cuda",
    precision="16",
)

test_dataset = InBatchNegativesDataset(
    name="test",
    path="",
    tokenizer=retriever.question_tokenizer,
    question_batch_size=64,
    passage_batch_size=400,
    max_passage_length=64,
)

trainer = Trainer(
    retriever=retriever,
    test_dataset=test_dataset,
    log_to_wandb=False,
    top_k=[20, 100],
)

trainer.test()
```
Inference
```python
from goldenretriever import GoldenRetriever

retriever = GoldenRetriever(
    question_encoder="path/to/question/encoder",
    passage_encoder="path/to/passage/encoder",
    document_index="path/to/document/index",
)

# retrieve the top-5 documents for a query
retriever.retrieve("What is the capital of France?", k=5)
```
Data format
Input data
The retriever expects a JSON file in the same format as DPR, i.e. a list of entries like:
```json
[
  {
    "question": "....",
    "answers": ["...", "...", "..."],
    "positive_ctxs": [
      {
        "title": "...",
        "text": "...."
      }
    ],
    "negative_ctxs": ["..."],
    "hard_negative_ctxs": ["..."]
  },
  ...
]
```
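As a quick sanity check before training, here is a small stdlib-only sketch that builds one entry in this format and verifies the expected keys; the `validate` helper and `REQUIRED_KEYS` set are ours for illustration, not part of the library:

```python
import json

# one training entry in the DPR-style format shown above
record = {
    "question": "What is the capital of France?",
    "answers": ["Paris"],
    "positive_ctxs": [{"title": "Paris", "text": "Paris is the capital of France."}],
    "negative_ctxs": ["Lyon is a city in France."],
    "hard_negative_ctxs": ["Marseille is the second-largest city in France."],
}

REQUIRED_KEYS = {
    "question", "answers", "positive_ctxs", "negative_ctxs", "hard_negative_ctxs",
}

def validate(entry: dict) -> bool:
    """Check that an entry carries every DPR-style field and at least one positive."""
    return REQUIRED_KEYS <= entry.keys() and len(entry["positive_ctxs"]) > 0

# the training file is a JSON list of such entries
payload = json.dumps([record], indent=2)
```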
Index data
The documents to index can be provided either as a jsonl file or as a tsv file, similar to DPR:

- jsonl: each line is a JSON object with the following keys: `id`, `text`, `metadata`
- tsv: each line is a tab-separated string with the `id` and `text` columns, followed by any other columns, which will be stored in the `metadata` field
jsonl example:

```json
[
  {
    "id": "...",
    "text": "...",
    "metadata": ["{...}"]
  },
  ...
]
```
tsv example:

```
id \t text \t any other column
...
```
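Since both layouts carry the same information, moving between them is mechanical. A stdlib-only converter sketch, following the column handling described above (the function name is ours, not a library API):

```python
import csv
import io
import json

def tsv_to_jsonl(tsv_text: str) -> str:
    """Convert DPR-style tsv rows (id, text, extra columns) to jsonl lines.

    Columns beyond id and text end up in the metadata field.
    """
    lines = []
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        doc_id, text, *extra = row
        lines.append(json.dumps({"id": doc_id, "text": text, "metadata": extra}))
    return "\n".join(lines)

tsv = "doc1\tParis is the capital of France.\tgeo\ndoc2\tThe Seine flows through Paris.\tgeo"
jsonl = tsv_to_jsonl(tsv)
```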