
llama-index packs raptor integration

Project description

Raptor Retriever LlamaPack

This LlamaPack provides an implementation of RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) for use with llama-index.

RAPTOR works by recursively clustering and summarizing clusters in layers for retrieval.

There are two retrieval modes:

  • tree_traversal -- traverses the tree of clusters, performing top-k retrieval at each level of the tree.
  • collapsed -- treats the entire tree as a single pool of nodes and performs simple top-k retrieval.

See the paper for full algorithm details.

CLI Usage

You can download llamapacks directly using llamaindex-cli, which is installed with the llama-index Python package:

llamaindex-cli download-llamapack RaptorPack --download-dir ./raptor_pack

You can then inspect/modify the files at ./raptor_pack and use them as a template for your own project.

Code Usage

Alternatively, you can install the package directly:

pip install llama-index-packs-raptor

Then, you can import and initialize the pack! This will perform clustering and summarization over your data.

from llama_index.packs.raptor import RaptorPack

pack = RaptorPack(documents, llm=llm, embed_model=embed_model)
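For reference, here is a minimal end-to-end setup sketch showing one way to construct documents, llm, and embed_model. It assumes the optional llama-index-llms-openai and llama-index-embeddings-openai packages are installed, an OpenAI API key is configured, and your source documents live in a local ./data directory; all of these are illustrative choices, not requirements of the pack.

from llama_index.core import SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.packs.raptor import RaptorPack

# Load documents from a local folder (illustrative path).
documents = SimpleDirectoryReader("./data").load_data()

# Any llama-index LLM / embedding model works; OpenAI is used here as an example.
llm = OpenAI(model="gpt-4o-mini", temperature=0.1)
embed_model = OpenAIEmbedding(model="text-embedding-3-small")

# Clustering and summarization happen at construction time.
pack = RaptorPack(documents, llm=llm, embed_model=embed_model)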

The run() function is a light wrapper around retriever.retrieve().

nodes = pack.run(
    "query",
    mode="collapsed",  # or tree_traversal
)
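run() returns the retrieved nodes as a list of NodeWithScore objects from llama-index core, so you can inspect the text and similarity scores directly. A quick sketch:

# Print each retrieved node's score and the start of its text.
for node_with_score in nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])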

You can also use modules individually.

# get the retriever
retriever = pack.retriever
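For example, the retriever can be dropped into a standard query engine from llama-index core; this is a sketch, with the query text as a placeholder.

from llama_index.core.query_engine import RetrieverQueryEngine

# Wrap the RAPTOR retriever in a regular query engine for end-to-end QA.
query_engine = RetrieverQueryEngine.from_args(retriever, llm=llm)
response = query_engine.query("query")
print(response)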

Persistence

The RaptorPack comes with the RaptorRetriever, which supports saving and reloading the clustered index!

If you are using a remote vector database, just pass it in:

# Pack usage
pack = RaptorPack(..., vector_store=vector_store)

# RaptorRetriever usage
retriever = RaptorRetriever(..., vector_store=vector_store)

Then, to re-connect, just pass in the vector store again along with an empty list of documents:

# Pack usage
pack = RaptorPack([], ..., vector_store=vector_store)

# RaptorRetriever usage
retriever = RaptorRetriever([], ..., vector_store=vector_store)
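As a concrete sketch, here is what this might look like with Chroma as the backing vector store. It assumes the llama-index-vector-stores-chroma and chromadb packages are installed; the path and collection name are illustrative, and documents, llm, and embed_model come from the earlier setup.

import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.packs.raptor import RaptorPack

client = chromadb.PersistentClient(path="./raptor_db")
collection = client.get_or_create_collection("raptor")
vector_store = ChromaVectorStore(chroma_collection=collection)

# First run: build the RAPTOR tree and persist it in the vector store.
pack = RaptorPack(
    documents, llm=llm, embed_model=embed_model, vector_store=vector_store
)

# Later: re-connect by passing an empty document list and the same store.
pack = RaptorPack(
    [], llm=llm, embed_model=embed_model, vector_store=vector_store
)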

Check out the notebook here for complete details!

Configure Summary Module

Using the SummaryModule, you can configure how the Raptor Pack generates summaries and how many workers are applied to summarization.

You can configure the LLM.

You can configure summary_prompt. This changes the prompt sent to your LLM when summarizing your documents.

You can configure num_workers, which controls the number of async workers (semaphores) and therefore how many summaries are processed simultaneously. Be aware that this may affect rate limits for OpenAI or other LLM provider APIs.

from llama_index.packs.raptor.base import SummaryModule
from llama_index.packs.raptor import RaptorRetriever

summary_prompt = "As a professional summarizer, create a concise and comprehensive summary of the provided text, be it an article, post, conversation, or passage with as much detail as possible."

# With SummaryModule you can configure the summary prompt and the number of workers used for summarization.
summary_module = SummaryModule(
    llm=llm, summary_prompt=summary_prompt, num_workers=16
)

pack = RaptorPack(
    documents, llm=llm, embed_model=embed_model, summary_module=summary_module
)

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llama_index_packs_raptor-0.3.0.tar.gz (8.9 kB)

Uploaded Source

Built Distribution

llama_index_packs_raptor-0.3.0-py3-none-any.whl (8.4 kB)

Uploaded Python 3

File details

Details for the file llama_index_packs_raptor-0.3.0.tar.gz.

File metadata

  • Download URL: llama_index_packs_raptor-0.3.0.tar.gz
  • Upload date:
  • Size: 8.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0

File hashes

Hashes for llama_index_packs_raptor-0.3.0.tar.gz:

  • SHA256: 16e1e377c698782be273ec4a5ed7ae28aabb5812c97b8299c32d3ee5808e61cb
  • MD5: 90d52b2ad0d07bdc62b382052bbd4345
  • BLAKE2b-256: 0d2083b953c2dcf95ce8de8c878494266816cae75f632c16d41247cf8d089a22


File details

Details for the file llama_index_packs_raptor-0.3.0-py3-none-any.whl.

File hashes

Hashes for llama_index_packs_raptor-0.3.0-py3-none-any.whl:

  • SHA256: 80c57c92a9638a422969eea833b66c3fc7a737a7ddf6f156af5c684404bdf980
  • MD5: e9763c1d3ed867b95099a77794a4b728
  • BLAKE2b-256: 61fbff34c50e724bb82d299308a9f72bb2c7a35cf2a86484d7c943b62b27436e

