# ai4rag

**Automatic and optimized RAG Pattern generator.** A RAG Templates Optimization Engine that initializes RAG Templates with optimal parameters.
## 🎯 What is ai4RAG?
ai4RAG is an optimization engine for RAG Templates that is LLM and vector database provider-agnostic.
It accepts a variety of RAG Templates and a search space definition, then returns an initialized RAG Template with optimal parameter values (called a RAG Pattern).
> [!IMPORTANT]
> ai4rag is designed to be provider-agnostic: users may provide their own implementations of the foundation model, embedding model, or vector store and use them in an experiment. Out of the box, ai4rag is designed to work with Llama Stack. To use the full capabilities of ai4rag, you'll need access to a Llama Stack server configured with at least one foundation model, one embedding model, and a vector database.
## Llama Stack
ai4RAG can run experiments using a Llama Stack server for embeddings, vector storage, and text generation. Use the official client and API docs to connect and extend:
- Client: `llama-stack-client` >= 0.7.0 (Python package used by ai4RAG; installs with this project).
- Server: Llama Stack >= 0.7.0.
- API reference: Llama Stack API docs (the HTTP API used by the client).
### Features used by ai4rag
When using the Llama Stack backend, ai4rag relies on:
- Embeddings — Text embeddings via the client (e.g. for indexing and query encoding). See Embeddings API in the docs.
- Vector stores — Create, retrieve, and delete vector store instances (e.g. Milvus) with a chosen embedding model and dimension. See Vector stores in the API docs.
- Vector IO — Insert document chunks (with embeddings) into a store and run similarity search (query) for retrieval. See Vector IO and insert/query endpoints.
- Chat / responses — Foundation model integration for answer generation (e.g. chat completions or responses API) when evaluating RAG patterns.
## Quick start
- Provide an instance of `llama-stack-client` to integrate with Llama Stack.
- Prepare your knowledge base documents for the experiment.
- Prepare `benchmark_data.json` with evaluation questions and answers.
- Define and constrain your search space.
- Configure the optimizer.
- Create and run the experiment.
### Prepare `llama-stack-client`
To enable full integration with Llama Stack, instantiate a `LlamaStackClient`.
This allows ai4rag to use the models and vector stores available on your Llama Stack server.
> [!TIP]
> Store your credentials securely in a `.env` file.
```python
import os

from dotenv import load_dotenv, find_dotenv
from llama_stack_client import LlamaStackClient

# Load BASE_URL and API_KEY from the .env file into the environment.
load_dotenv(find_dotenv())

client = LlamaStackClient(base_url=os.getenv("BASE_URL"), api_key=os.getenv("API_KEY"))
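Because `os.getenv` silently returns `None` for unset variables, a misconfigured `.env` file can surface only later as a confusing connection error. A small fail-fast check, shown below as a sketch (the `require_env` helper is hypothetical, not part of ai4rag, and the demo values are placeholders), makes the failure explicit:

```python
import os

def require_env(*names: str) -> dict[str, str]:
    """Return the requested environment variables, raising if any are unset."""
    missing = [name for name in names if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in names}

# Demo values only; in practice these come from your .env file.
os.environ["BASE_URL"] = "http://localhost:8321"
os.environ["API_KEY"] = "dev-key"
settings = require_env("BASE_URL", "API_KEY")
```

Call `require_env("BASE_URL", "API_KEY")` once at startup, before constructing the client, so a missing variable fails immediately with a clear message.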
### Prepare knowledge base documents
Prepare a set of documents to serve as the knowledge base for retrieval. These documents will be used to ground the LLM's responses and should be stored in a local directory.
> [!NOTE]
> If you are using the project locally, you can load documents using the `FileStore` class from the `dev_utils` module. Supported document formats can be found in the `FileStore` implementation.
```python
from pathlib import Path

from dev_utils.file_store import FileStore

documents_path = Path("<path to the documents folder>")
documents = FileStore(documents_path).load_as_documents()
```
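If you prefer not to depend on `dev_utils`, plain-text documents can be gathered with the standard library alone. The snippet below is a minimal sketch under the assumption that each file becomes a simple id/text record; the actual document objects `FileStore` returns may have a different structure:

```python
import tempfile
from pathlib import Path

def load_text_documents(folder: Path) -> list[dict]:
    """Read every .txt file in `folder` into a simple id/text record."""
    documents = []
    for path in sorted(folder.glob("*.txt")):
        documents.append({"document_id": path.stem, "text": path.read_text(encoding="utf-8")})
    return documents

# Demo on a throwaway directory.
tmp = Path(tempfile.mkdtemp())
(tmp / "intro.txt").write_text("ai4rag optimizes RAG Templates.", encoding="utf-8")
docs = load_text_documents(tmp)
```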
### Prepare `benchmark_data.json`

Create a `benchmark_data.json` file following this schema:
```json
[
  {
    "question": "<question_1>",
    "correct_answers": [
      "<answer 1 for question 1>",
      "<answer 2 for question 1>"
    ],
    "correct_answer_document_ids": ["<list of document ids based on which the correct answers were generated>"]
  },
  {
    "question": "<question_2>",
    "correct_answers": [
      "<answer 1 for question 2>",
      "<answer 2 for question 2>"
    ],
    "correct_answer_document_ids": ["<list of document ids based on which the correct answers were generated>"]
  }
]
```
All benchmark questions and answers must be derived from your knowledge base documents.
```python
from pathlib import Path

from dev_utils.utils import read_benchmark_from_json

benchmark_data_path = Path("<path to benchmark_data.json>")
benchmark_data = read_benchmark_from_json(benchmark_data_path)
```
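A malformed benchmark file typically fails deep inside an experiment run, so it is worth checking the structure up front. The validator below is a sketch based only on the schema shown above (it is not part of ai4rag):

```python
def validate_benchmark(records: list[dict]) -> None:
    """Raise ValueError if any record is missing the fields the schema requires."""
    required = {"question", "correct_answers", "correct_answer_document_ids"}
    for i, record in enumerate(records):
        missing = required - record.keys()
        if missing:
            raise ValueError(f"record {i} is missing fields: {sorted(missing)}")
        if not record["correct_answers"]:
            raise ValueError(f"record {i} has no correct answers")

# A well-formed sample record passes silently.
sample = [
    {
        "question": "What does ai4rag optimize?",
        "correct_answers": ["RAG Template parameters"],
        "correct_answer_document_ids": ["intro"],
    }
]
validate_benchmark(sample)
```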
### Define and constrain search space
The search space defines all possible parameter combinations, where each combination creates a unique RAG Pattern. During the experiment, the engine will optimize the RAG Pattern for the selected metric over the given search space, using an objective function to evaluate each configuration.
```python
from ai4rag.search_space.src.parameter import Parameter
from ai4rag.search_space.src.search_space import AI4RAGSearchSpace
from ai4rag.rag.foundation_models.llama_stack import LSFoundationModel
from ai4rag.rag.embedding.llama_stack import LSEmbeddingModel

search_space = AI4RAGSearchSpace(
    params=[
        Parameter(
            name="foundation_model",
            param_type="C",
            values=[LSFoundationModel(model_id="ollama/llama3.2:3b", client=client)],
        ),
        Parameter(
            name="embedding_model",
            param_type="C",
            values=[
                LSEmbeddingModel(
                    model_id="ollama/nomic-embed-text:latest",
                    client=client,
                    params={"embedding_dimension": 768, "context_length": 8192},
                )
            ],
        ),
    ]
)
```
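To get a feel for how a search space induces candidate RAG Patterns, note that categorical parameters multiply: every combination of values is one candidate configuration. The snippet below is a generic illustration with made-up parameter names, not the ai4rag API:

```python
from itertools import product

# Hypothetical categorical choices; each combination corresponds to one
# candidate RAG Pattern the optimizer could evaluate.
choices = {
    "chunk_size": [256, 512, 1024],
    "top_k": [3, 5],
    "embedding_model": ["nomic-embed-text"],
}
patterns = [dict(zip(choices, combo)) for combo in product(*choices.values())]
print(len(patterns))  # 3 * 2 * 1 = 6 candidate patterns
```

This is why constraining the search space matters: each extra value for a parameter multiplies the number of configurations the optimizer has to consider.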
> [!TIP]
> To run automatic model discovery with Llama Stack, you may use `prepare_search_space_with_llama_stack()` from `ai4rag.search_space.prepare_search_space`.
### Configure optimizer

You have full control over the optimization algorithm. Configure the `GAMOptimizer` by adjusting `GAMOptSettings`.
```python
from ai4rag.core.hpo.gam_opt import GAMOptSettings

optimizer_settings = GAMOptSettings(max_evals=10, n_random_nodes=4)
```
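Conceptually, `max_evals` caps how many configurations the optimizer may score with the objective function before returning the best one found. The sketch below shows that budgeted-search shape with plain random sampling and a toy objective; the real `GAMOptimizer` is model-based rather than random, so treat this only as an illustration of the evaluation budget:

```python
import random

def budgeted_search(objective, space: dict, max_evals: int, seed: int = 0):
    """Sample up to `max_evals` configurations, score each, keep the best."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(max_evals):
        config = {name: rng.choice(values) for name, values in space.items()}
        score = objective(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Toy objective: prefer mid-sized chunks and larger top_k.
space = {"chunk_size": [256, 512, 1024], "top_k": [3, 5]}
best, score = budgeted_search(
    lambda c: c["top_k"] - abs(c["chunk_size"] - 512) / 512, space, max_evals=10
)
```

Raising `max_evals` lets the optimizer explore more of the search space at the cost of more (potentially expensive) RAG evaluations.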
### Run the experiment
Using the information from the previous steps, create an experiment and run the ai4rag optimization engine.
> [!NOTE]
> For Llama Stack vector stores, use the `"ls_<provider_id>"` format, where `<provider_id>` matches your Llama Stack provider configuration (e.g., `"ls_milvus"`, `"ls_qdrant"`). To use ChromaDB in-memory, specify `"chroma"`.
```python
from ai4rag.core.experiment.experiment import AI4RAGExperiment
from ai4rag.utils.event_handler import LocalEventHandler

experiment = AI4RAGExperiment(
    client=client,
    documents=documents,
    benchmark_data=benchmark_data,
    search_space=search_space,
    vector_store_type="ls_milvus",
    optimizer_settings=optimizer_settings,
    event_handler=LocalEventHandler(output_path="<local-path-to-store-your-output-files>"),
)

experiment.search()

best_eval = experiment.results.get_best_evaluations(k=1)[0]
print(best_eval)
print(best_eval.rag_pattern.generate("What can ai4rag be used for?"))
```
> [!TIP]
> For production use, implement your own custom `EventHandler` to handle status changes and artifacts produced during the experiment. See the `BaseEventHandler` implementation for reference.
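The exact `BaseEventHandler` interface is not reproduced here, so the following is only a sketch of the shape such a handler might take; the method names and event payloads are assumptions for illustration, and you should mirror the real `BaseEventHandler` signatures when implementing your own:

```python
class ListEventHandler:
    """Hypothetical event handler that records experiment events in memory.

    The real contract is defined by ai4rag's BaseEventHandler; the method
    names below are illustrative assumptions, not the actual interface.
    """

    def __init__(self) -> None:
        self.events: list[tuple[str, dict]] = []

    def on_status_change(self, status: str, **details) -> None:
        # e.g. forward to a logger, metrics system, or progress UI.
        self.events.append(("status", {"status": status, **details}))

    def on_artifact(self, name: str, payload: dict) -> None:
        # e.g. upload the artifact to object storage instead of local disk.
        self.events.append(("artifact", {"name": name, **payload}))

handler = ListEventHandler()
handler.on_status_change("running", evaluation=1)
handler.on_artifact("best_pattern", {"score": 0.87})
```

A handler like this is useful in tests or dashboards, where you want to observe experiment progress without writing files to a local output path.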
## Contribution

Pull requests are very welcome! Make sure your patches are well tested. Ideally, create a topic branch for every separate change you make. For example:

- Fork the repo
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Added some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request

See more details in the contributing section.
## Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

- Source Distribution: `ai4rag-0.5.4.tar.gz`
- Built Distribution: `ai4rag-0.5.4-py3-none-any.whl`
## File details

Details for the file `ai4rag-0.5.4.tar.gz`.

### File metadata

- Download URL: ai4rag-0.5.4.tar.gz
- Upload date:
- Size: 70.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8488cdd3eacfe5b668f2b81af8bb5895246fed2010be926bb4cf50e5262f8c83` |
| MD5 | `753c9904cc2b0302e2a1a4c0a78f1cc4` |
| BLAKE2b-256 | `4b03b8879c43c22f75e990f58691c0f24f22e6e86d15aa9e1c44b7dfe4e15a1c` |
### Provenance

The following attestation bundles were made for `ai4rag-0.5.4.tar.gz`:

Publisher: publish-pypi.yml on IBM/ai4rag

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai4rag-0.5.4.tar.gz
- Subject digest: 8488cdd3eacfe5b668f2b81af8bb5895246fed2010be926bb4cf50e5262f8c83
- Sigstore transparency entry: 1295330903
- Sigstore integration time:
- Permalink: IBM/ai4rag@827dff216510842384007009fbdbc392add63651
- Branch / Tag: refs/heads/main
- Owner: https://github.com/IBM
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@827dff216510842384007009fbdbc392add63651
- Trigger Event: workflow_dispatch
## File details

Details for the file `ai4rag-0.5.4-py3-none-any.whl`.

### File metadata

- Download URL: ai4rag-0.5.4-py3-none-any.whl
- Upload date:
- Size: 93.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2dabe30595a7a71d90cff7a52937766a0aa542f65b37383535dcf26f491b9c82` |
| MD5 | `3d93279801a7b8bc3845c7d09c160761` |
| BLAKE2b-256 | `8d104a695d82a887c272a441e0ee29690ba8f6f855e39b902941efe0b1779f45` |
### Provenance

The following attestation bundles were made for `ai4rag-0.5.4-py3-none-any.whl`:

Publisher: publish-pypi.yml on IBM/ai4rag

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai4rag-0.5.4-py3-none-any.whl
- Subject digest: 2dabe30595a7a71d90cff7a52937766a0aa542f65b37383535dcf26f491b9c82
- Sigstore transparency entry: 1295330980
- Sigstore integration time:
- Permalink: IBM/ai4rag@827dff216510842384007009fbdbc392add63651
- Branch / Tag: refs/heads/main
- Owner: https://github.com/IBM
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@827dff216510842384007009fbdbc392add63651
- Trigger Event: workflow_dispatch