llama-index packs RAFT Dataset paper implementation

Project description

RAFT: Adapting Language Model to Domain Specific RAG Llama Pack

This LlamaPack implements the paper RAFT: Adapting Language Model to Domain Specific RAG.

Retrieval Augmented FineTuning (RAFT) is a training recipe introduced in this paper that aims to improve the performance of large language models (LLMs) in open-book, in-domain question-answering tasks. Given a question and a set of retrieved documents, RAFT trains the LLM to identify and cite verbatim the most relevant sequences from the documents that help answer the question, while ignoring irrelevant or distracting information. By explicitly training the model to distinguish between relevant and irrelevant information and to provide evidence from the relevant documents, RAFT encourages the LLM to develop better reasoning and explanation abilities, ultimately improving its ability to answer questions accurately and rationally in scenarios where additional context or knowledge is available.

A key component of RAFT is how the dataset is generated for fine-tuning. Each QA pair also includes an "oracle" document from which the answer to the question can be deduced, as well as "distractor" documents that are irrelevant. During training, this forces the model to learn which information is relevant and which is not, and also to memorize domain knowledge.
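
For intuition, a single generated example pairs a question with a context that mixes the oracle document and distractors, plus a chain-of-thought answer grounded in the oracle. A rough sketch (the field names here are illustrative, not the pack's exact schema):

oracle_doc = "RAFT trains the model to quote the relevant document verbatim."
distractors = ["An unrelated paragraph.", "Another irrelevant paragraph."]

# Illustrative shape of one RAFT training example: the context mixes the
# oracle document with distractors, and the answer cites the oracle.
example = {
    "question": "How does RAFT teach the model to use retrieved context?",
    "context": distractors + [oracle_doc],  # order is typically shuffled
    "oracle_context": oracle_doc,
    "cot_answer": (
        "The documents state that RAFT trains the model to quote the "
        "relevant document verbatim, so the answer rests on that quote."
    ),
}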

We've implemented the dataset generation part in a LlamaPack. Check out our full notebook here.

Installation

pip install llama-index

CLI Usage

You can download LlamaPacks directly using llamaindex-cli, which is installed with the llama-index Python package:

llamaindex-cli download-llamapack RAFTDatasetPack --download-dir ./raft_dataset_pack

You can then inspect the files at ./raft_dataset_pack and use them as a template for your own project.

Code Usage

You can download the pack to the ./raft_dataset_pack directory:

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
RAFTDatasetPack = download_llama_pack("RAFTDatasetPack", "./raft_dataset_pack")

# file_path points to the source document to generate the dataset from
raft_dataset = RAFTDatasetPack(file_path)

From here, you can use the pack, or inspect and modify the pack in ./raft_dataset_pack.
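
Depending on the pack version, the constructor also accepts optional arguments such as the LLM and embedding model used during generation. A minimal sketch, assuming the argument names found in the pack source at the time of writing (inspect the code in ./raft_dataset_pack to confirm the exact signature):

from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# Assumed constructor arguments (llm, embed_model, num_distract_docs);
# verify them against the downloaded pack source before relying on them.
raft_dataset = RAFTDatasetPack(
    file_path="./data/source_document.txt",  # hypothetical example path
    llm=OpenAI(model="gpt-4"),               # LLM used to generate QA pairs
    embed_model=OpenAIEmbedding(),           # embeddings used for chunking
    num_distract_docs=3,                     # distractors per QA pair
)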

The run() function contains the core logic behind the RAFT: Adapting Language Model to Domain Specific RAG paper:

dataset = raft_dataset.run()

This returns the dataset, which can then be used for fine-tuning. Please refer to the original blog post for details on using the dataset for fine-tuning.
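
In the reference notebook, run() yields a Hugging Face datasets.Dataset. Assuming that return type, a minimal sketch for persisting the result (the output path is illustrative):

output_path = "./raft_training_data"  # hypothetical output location

# Save the dataset to disk and export JSONL for fine-tuning tooling.
# Both methods exist on datasets.Dataset; the return type is assumed here.
dataset = raft_dataset.run()
dataset.save_to_disk(output_path)
dataset.to_json(output_path + ".jsonl")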

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_index_packs_raft_dataset-0.3.0.tar.gz (5.6 kB)

Built Distribution

llama_index_packs_raft_dataset-0.3.0-py3-none-any.whl

File details

Details for the file llama_index_packs_raft_dataset-0.3.0.tar.gz.

File hashes

Hashes for llama_index_packs_raft_dataset-0.3.0.tar.gz:

SHA256: b315076775e7446e0bac275fc95ccc0d2f9b41dfd9fd5fc119bf0d94183c7442
MD5: 56847028976d34cac6b7407e6d9c2c32
BLAKE2b-256: 6cc176b2f26638323fecc2ed648df628d474cf06f6e0f7ce6199e2909380a178

File details

Details for the file llama_index_packs_raft_dataset-0.3.0-py3-none-any.whl.

File hashes

Hashes for llama_index_packs_raft_dataset-0.3.0-py3-none-any.whl:

SHA256: e18a4d2c1e4233ef622a48f9c216c94cf4e750ea082a04bf7dfc20eab6f20767
MD5: b00b02a5a34d5fd0492d727c16fbd9d9
BLAKE2b-256: 0c20143856ff42d7ac698a585c9f7271af193a0d34f802cfd0de2d54f1326110
