
LlamaIndex Postprocessor Integration: Rankllm-Rerank

RankLLM offers a suite of listwise rerankers, with a focus on open-source LLMs fine-tuned for the task. Currently, this integration supports two of these models: RankZephyr (model="zephyr") and RankVicuna (model="vicuna"). RankLLM also supports RankGPT usage (model="gpt", gpt_model="VALID_OPENAI_MODEL_NAME").

Run pip install llama-index-postprocessor-rankllm-rerank to install the RankLLM rerank package.

Parameters:

  • top_n: Top N nodes to return from reranking.
  • model: Reranker model name/class (zephyr, vicuna, or gpt).
  • with_retrieval [Optional]: Perform retrieval before reranking with Pyserini.
  • step_size [Optional]: Step size of the sliding window used to rerank large corpora.
  • gpt_model [Optional]: OpenAI model to use (e.g., gpt-3.5-turbo) when model="gpt".
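
As a quick illustration, a minimal reranker configuration might look like the following (the values here are only examples, not defaults):

from llama_index.postprocessor.rankllm_rerank import RankLLMRerank

# keep the 5 most relevant nodes, reranked with RankZephyr
reranker = RankLLMRerank(top_n=5, model="zephyr")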

💻 Example Usage

pip install llama-index-core
pip install llama-index-llms-openai
pip install llama-index-postprocessor-rankllm-rerank

First, build a vector store index with llama-index.
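
A documents list is needed first; one common way to create it (assuming your files live in a local data/ directory) is:

from llama_index.core import SimpleDirectoryReader

# load all files from a local "data/" directory into Document objects
documents = SimpleDirectoryReader("data").load_data()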

from llama_index.core import VectorStoreIndex

# build the index over the loaded documents
index = VectorStoreIndex.from_documents(
    documents,
)

To set up the retriever and reranker:

from llama_index.core import QueryBundle
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.postprocessor.rankllm_rerank import RankLLMRerank

# query_str, vector_top_k, reranker_top_n, model, with_retrieval,
# step_size, and gpt_model are defined by the caller
query_bundle = QueryBundle(query_str)

# configure retriever
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=vector_top_k,
)

# configure reranker
reranker = RankLLMRerank(
    top_n=reranker_top_n,
    model=model,
    with_retrieval=with_retrieval,
    step_size=step_size,
    gpt_model=gpt_model,
)
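
For example, the placeholder variables above could be set as follows (illustrative values only):

# illustrative settings for the placeholders used above
query_str = "What is RankLLM?"
vector_top_k = 50
reranker_top_n = 5
model = "zephyr"
with_retrieval = False
step_size = 10
gpt_model = "gpt-3.5-turbo"  # only used when model="gpt"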

To run retrieval+reranking:

# retrieve nodes
retrieved_nodes = retriever.retrieve(query_bundle)

# rerank nodes
reranked_nodes = reranker.postprocess_nodes(
    retrieved_nodes, query_bundle
)
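
The reranked nodes can then be inspected directly, for instance by printing each node's score and a short preview of its text (a minimal sketch):

# print the reranked results, highest-ranked first
for node_with_score in reranked_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])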

🔧 Dependencies

Currently, RankLLM rerankers require CUDA and the rank-llm package (pip install rank-llm). The built-in retriever, which uses Pyserini, requires JDK 11, PyTorch, and Faiss.
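
Because a CUDA-capable GPU is required, it can be worth verifying that PyTorch sees one before running the reranker (a quick check, assuming PyTorch is already installed):

import torch

# should print True on a machine with a working CUDA setup
print(torch.cuda.is_available())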

castorini/rank_llm

Repository for prompt-decoding using LLMs (GPT-3.5, GPT-4, Vicuna, and Zephyr).
Website: http://rankllm.ai
