
ragpipe


Ragpipe: Iterate fast on your RAG pipelines.

Docs • Examples • Discord

Introduction

Ragpipe helps you extract insights from large document repositories quickly.

Ragpipe is lean and nimble, making it easy to iterate fast and tweak components of your RAG pipeline until you get the desired responses.

Yet another RAG framework? Although popular RAG frameworks make it easy to set up RAG pipelines, they lack the primitives that let you iterate and get to the desired responses quickly.

Watch a quick video intro.

Note: Under active development. Expect breaking changes.


Instead of the usual chunk-embed-match-rank flow, Ragpipe adopts a holistic, end-to-end view of the pipeline:

  • build a hierarchical document model,
  • decompose a complex query into sub-queries,
  • resolve the sub-queries and obtain responses,
  • aggregate the sub-query responses.

How do we resolve each sub-query?

  • choose representations for document parts relevant to a sub-query,
  • specify the bridges among those representations,
  • merge the retrieved documents across bridges to set up a context,
  • present the query and context to a language model to compute the final response.

The represent-bridge-merge pattern is very powerful and allows us to build and iterate over all kinds of complex retrieval pipelines, including those based on the traditional retrieve-rank-rerank pattern and more recent advanced RAG patterns. Evals can be attached to bridge or merge nodes to verify intermediate results.
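The decompose-resolve-aggregate flow above can be sketched in a few lines. This is a conceptual illustration only, with stand-in helpers (`decompose`, `resolve`, `aggregate` are hypothetical names, not ragpipe's API):

```python
# Conceptual sketch of the flow above (stand-in helpers, not ragpipe's API):
# decompose a query, resolve each sub-query against the repository, aggregate.

def decompose(query):
    # Stand-in: split on " and "; in practice an LLM does the decomposition.
    return [part.strip() for part in query.split(" and ")]

def resolve(sub_query, docs):
    # Stand-in retrieval: keep docs sharing any word with the sub-query.
    words = set(sub_query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def aggregate(query, responses):
    # Stand-in: collect evidence; an LLM would compose the final answer.
    return {"query": query, "evidence": responses}

docs = ["premium payment terms", "claim filing process"]
query = "premium terms and claim process"
responses = [resolve(sq, docs) for sq in decompose(query)]
answer = aggregate(query, responses)
print(answer["evidence"])  # one evidence list per sub-query
```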

Installation

Using pip.

pip install ragpipe

Alternatively, clone the repository and use pip to install dependencies.

git clone https://github.com/ekshaks/ragpipe; cd ragpipe
# create a new environment with python 3.10
conda create -n ragpipe python=3.10
# activate the environment
conda activate ragpipe
# install ragpipe dependencies
pip install -r requirements.txt

Note: For CUDA support on Windows/Linux, you may need to install a CUDA-enabled build of PyTorch. For instructions, follow https://pytorch.org/get-started/locally/

Querying with Ragpipe

To query over a data repository:

  1. Build a hierarchical data model over your data repositories, e.g., {"documents": [{"text": ...}, ...]}.

  2. In config.yml:

  • Specify which document fields will be represented and how.
  • Specify which representations to compute for the query.
  • Specify bridges: which pair of query and document field representations should be matched to find relevant documents.
  • Specify merges: how to combine multiple bridges, sequentially or in parallel, to yield the final ranked list of relevant documents.

  3. Specify how to generate a response to the query using the above ranked document list and a large language model.

  4. Iterate by making quick changes to steps (1), (2), or (3).
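A config.yml following these steps might look roughly as below. The field names here are illustrative only, not ragpipe's exact schema; consult the real configs in examples/*/*.yml:

```yaml
# Illustrative shape only -- see examples/*/*.yml for the real schema.
representations:
  query:
    dense_q: {encoder: fastembed}     # how to represent the query
  documents:
    dense_text:
      field: documents[].text         # which doc field to represent
      encoder: fastembed
bridges:
  b_dense:
    query: dense_q                    # match this query representation ...
    doc: dense_text                   # ... against this doc representation
merges:
  final:
    bridges: [b_dense]                # combine bridges (e.g., rank fusion)
```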

Quick Start

Examples are in the examples directory.

For instance, run examples/insurance:

examples/insurance/
|
|-- insurance.py
|-- insurance.yml

python -m examples.insurance.insurance

The default LLM provider is Groq; set GROQ_API_KEY in .env. Alternatively, OpenAI LLMs (set OPENAI_API_KEY) and Ollama-based local LLMs (ollama/.. or local/..) are also supported.

API Usage

Embed ragpipe into your Agents by delegating fine-grained retrieval to ragpipe.

def rag():
    from ragpipe.config import load_config
    config = load_config('examples/<project>/config.yml', show=True)  # see examples/*/*.yml

    query_text = config.queries[0]  # user-provided query
    D = build_data_model(config)  # D.docs.<> contain documents

    from ragpipe import Retriever
    docs_retrieved = Retriever(config).eval(query_text, D)
    for doc in docs_retrieved:
        doc.show()

    from ragpipe.llms import respond_to_contextual_query as respond
    result = respond(query_text, docs_retrieved, config.prompts['qa'], config.llm_models['default'])

    print(f'\nQuery: {query_text}')
    print('\nGenerated answer: ', result)

Tests

pytest examples/test_all.py

Key Ideas

Representations. Choose which query/document fields to represent and how, to aid similarity/relevance computation (bridges) over the entire document repository. Representations can be text strings, dense/sparse vector embeddings, or arbitrary data objects, and help bridge the gap between the query and the documents.

Bridges. Choose a pair of query and document representations to bridge. A bridge serves as a relevance indicator: one of several criteria for identifying the documents relevant to a query. In practice, several bridges together determine the degree to which a document is relevant to a query. A bridge is a ranker and a top-k selector rolled into one: computing a bridge yields a ranked list of documents with respect to its relevance criterion.
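As a concrete illustration of "ranker and top-k selector rolled into one", here is a minimal sketch (not ragpipe's API; the similarity function and vectors are toy assumptions):

```python
# Illustrative sketch (not ragpipe's API): a bridge scores every document
# representation against the query representation, ranks them, keeps top-k.

def dot(u, v):
    # Toy similarity: dot product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def bridge(query_rep, doc_reps, similarity=dot, k=2):
    """Rank documents by similarity to the query; return top-k (doc_id, score)."""
    scored = [(doc_id, similarity(query_rep, rep)) for doc_id, rep in doc_reps.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

doc_reps = {
    "d1": [1.0, 0.0, 0.5],
    "d2": [0.2, 0.9, 0.1],
    "d3": [0.9, 0.1, 0.4],
}
print(bridge([1.0, 0.0, 0.0], doc_reps, k=2))  # → [('d1', 1.0), ('d3', 0.9)]
```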

Merges. Specify how to combine bridges, e.g., fuse multiple ranked lists of documents into a single ranked list using rank fusion.
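One standard way to merge ranked lists is reciprocal rank fusion (RRF). The sketch below illustrates the idea only; it is not ragpipe's merge implementation, and the bridge outputs are toy data:

```python
# Illustrative sketch (not ragpipe's API): merge the ranked lists produced
# by several bridges via reciprocal rank fusion (RRF).

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse ranked lists of doc ids; each doc scores sum(1 / (k + rank + 1))."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    # Higher fused score means more relevant overall.
    return sorted(scores, key=scores.get, reverse=True)

bridge_a = ["d1", "d3", "d2"]  # e.g., a dense-embedding bridge
bridge_b = ["d3", "d2", "d1"]  # e.g., a BM25 bridge
print(reciprocal_rank_fusion([bridge_a, bridge_b]))  # → ['d3', 'd1', 'd2']
```

d3 wins because it ranks highly in both lists, even though neither bridge ranked it first everywhere.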

Data Model. A hierarchical data structure that consists of all the (nested) documents. The data model is created from the original document files and is retained over the entire pipeline. We compute representations for arbitrary nested fields of the data, without flattening the data tree.
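The sketch below illustrates addressing a nested field of such a tree with a dotted path, without flattening it. The path syntax and helper are hypothetical, for illustration only (not ragpipe's API):

```python
# Illustrative sketch (not ragpipe's API): resolve a dotted field path
# against a hierarchical data model; '*' fans out over a list.

def get_field(tree, path):
    """Resolve paths like 'documents.*.text' against a nested dict/list tree."""
    head, _, rest = path.partition(".")
    if head == "*":
        return [get_field(item, rest) for item in tree]
    node = tree[head]
    return get_field(node, rest) if rest else node

data = {"documents": [{"text": "policy terms"}, {"text": "claims process"}]}
print(get_field(data, "documents.*.text"))  # → ['policy terms', 'claims process']
```

Representations can then be computed over the values a path yields, while the tree itself stays intact.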

Key Dependencies

Ragpipe relies on

  • rank_bm25: for BM25 based retrieval
  • fastembed: dense and sparse embeddings
  • chromadb, qdrant-client: vector databases (more coming..)
  • litellm: interact with LLM APIs
  • jinja2: prompt formatting
  • LlamaIndex: for parsing documents

Contribute

Ragpipe is open-source and under active development. We welcome contributions:

  • Try out ragpipe on queries over your data. Open an issue or send a pull request.
  • Join us as an early contributor to build a new, powerful and flexible RAG framework.
  • Stuck on a RAG problem without progress? Share with us, iterate and overcome blockers.

Join discussion on our Discord channel.

Troubleshooting

  • If you encounter errors related to protocol buffers, use the following fix: export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
