
llama-index-packs-searchain

Project description

LlamaIndex Packs Integration: Searchain

This LlamaPack implements SearChain, a framework that structures the interaction between a large language model (LLM) and information retrieval (IR) as a global reasoning chain called a Chain-of-Query (CoQ).

This follows the idea in the paper Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks.

Making content generated by large language models (LLMs) such as ChatGPT accurate, credible, and traceable is critical, especially for knowledge-intensive tasks. Introducing information retrieval (IR) to provide the LLM with external knowledge can help, but deciding where and how to introduce IR is a major challenge. SearChain addresses this with three operations. First, the LLM generates a global reasoning chain called a Chain-of-Query (CoQ), in which each node contains an IR-oriented query and the LLM's answer to that query. Second, IR verifies the answer at each node of the CoQ and, when it retrieves contradicting information with high confidence, corrects that answer, which improves credibility. Third, the LLM can mark knowledge it is missing in the CoQ, and IR can supply that knowledge to the LLM. Together, these operations improve the accuracy of the LLM on complex knowledge-intensive tasks in terms of both reasoning ability and knowledge. This pack implements the above 🤗!
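To make the verification step concrete, here is a minimal, self-contained sketch of the CoQ verification idea described above. The names (`CoQNode`, `verify_chain`, `retrieve`) are illustrative stand-ins, not the pack's actual API: each node holds a query and the LLM's answer, and retrieval only overrides an answer when it disagrees with high confidence.

```python
from dataclasses import dataclass

# Hypothetical sketch of Chain-of-Query verification -- not the pack's real API.

@dataclass
class CoQNode:
    query: str   # IR-oriented sub-question generated by the LLM
    answer: str  # the LLM's answer to that sub-question

def verify_chain(chain, retrieve, confidence_threshold=0.9):
    """Check each CoQ node against retrieved evidence; correct the answer
    only when retrieval disagrees with high confidence."""
    for node in chain:
        retrieved_answer, confidence = retrieve(node.query)
        if confidence >= confidence_threshold and retrieved_answer != node.answer:
            # High-confidence contradiction: replace the LLM's answer,
            # improving credibility as described above.
            node.answer = retrieved_answer
    return chain
```

Low-confidence retrievals leave the LLM's answer untouched, so IR only intervenes where it is likely to be right.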

You can see its use case in the examples folder.

This implementation is adapted from the author's implementation. You can find the official code repository here.

Code Usage

First, install the pack from PyPI (pip install llama-index-packs-searchain), or download it with the following code:

from llama_index.core.llama_pack import download_llama_pack

download_llama_pack("SearChainPack", "./searchain_pack")

Next, load and initialize a SearChainPack object:

from searchain_pack.base import SearChainPack

searchain = SearChainPack(
    data_path="data",
    dprtokenizer_path="dpr_reader_multi",
    dprmodel_path="dpr_reader_multi",
    crossencoder_name_or_path="Quora_cross_encoder",
)

Relevant data can be found here. You can then run SearChain as follows:

start_idx = 0
while start_idx != -1:
    start_idx = searchain.execute(
        "/hotpotqa/hotpot_dev_fullwiki_v1_line.json", start_idx=start_idx
    )
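The driver loop above implies a contract: execute returns the next start index, or -1 once every question in the dataset has been handled, which makes long runs resumable after an interruption. Below is a self-contained toy sketch of that pattern; this execute is a stand-in that walks a plain list in small batches, not the pack's real method.

```python
# Toy illustration of the resumable-run pattern -- the real
# searchain.execute processes a JSON dataset file; this stand-in
# just walks a list.

def execute(data, start_idx=0, batch_size=2):
    # All items consumed: signal completion with -1.
    if start_idx >= len(data):
        return -1
    for item in data[start_idx:start_idx + batch_size]:
        pass  # each item would be answered via CoQ reasoning here
    # Return where the next run should pick up.
    return start_idx + batch_size

# Resume-friendly driver loop, mirroring the usage shown above.
data = list(range(5))
start_idx = 0
while start_idx != -1:
    start_idx = execute(data, start_idx=start_idx)
```

Because the caller holds the index, a crashed run can restart from the last returned value instead of index 0.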



Download files


Source Distribution

llama_index_packs_searchain-0.3.0.tar.gz (5.5 kB)


Built Distribution

llama_index_packs_searchain-0.3.0-py3-none-any.whl

File details

Details for the file llama_index_packs_searchain-0.3.0.tar.gz.

File hashes

Algorithm    Hash digest
SHA256       5a53ff81e8dcdb1e8d62448d38fe97f04d67c57c6d64e2740068d8944063357c
MD5          c30dc676a2b62d4b3da6a4792ed8879d
BLAKE2b-256  ffc0dec4abe82f8c688f0c6b5ba1d4124c827302aeb61132f0cb376d12e2d759

File details

Details for the file llama_index_packs_searchain-0.3.0-py3-none-any.whl.

File hashes

Algorithm    Hash digest
SHA256       0665d26dba2f95fee0c12a51e4d255f67b5ec901b6574a89f5ad0267b9133563
MD5          543e2d6f691594bf2f6486cb637dfc70
BLAKE2b-256  a289f72e6070bc07890327149caeaea54a18be8f38e5ef5e59383aa4c2716711
