Project description
AutoRAG
Powering seamless retrieval and generation workflows for our internal AI systems
Overview
AutoRAG is a flexible and scalable solution for building Retrieval-Augmented Generation (RAG) systems.
This SDK provides out-of-the-box functionality for creating and managing retrieval-augmented generation workflows through a modular, highly configurable interface. It supports multiple vector stores and uses HTTP clients such as httpx for request handling, ensuring seamless integration.
Features
- Modular architecture: The SDK allows you to swap, extend, or customize components like retrieval models, vector stores, and response generation strategies.
- High scalability: Built to handle large-scale data retrieval and generation, enabling robust, production-ready applications.
- Celery for background tasks: Efficient background task processing with support for distributed task execution.
- Multi-flow support: Easily integrate various vector databases (e.g., Qdrant, Azure AI Search) with various language model providers (e.g., OpenAI, vLLM, Ollama) through standardized public methods for seamless development, as sketched below.
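As a minimal sketch of what these standardized methods make possible, the snippet below routes a question through whichever language model instance you hand it. Only OpenAILanguageModel and its generate() signature come from the usage examples further down; the helper function ask() is our own illustration, and any other provider class name would be an assumption until checked against the docs.

import os

from autorag.language_models.openai import OpenAILanguageModel

def ask(llm, question: str, model: str):
    # Works with any language model exposing the standardized generate()
    # method, so swapping OpenAI for vLLM or Ollama should not change
    # this call site.
    return llm.generate(
        message=[{"role": "user", "content": question}],
        model=model,
    )

llm = OpenAILanguageModel(api_key=os.environ.get("OPENAI_API_KEY"))
answer = ask(llm, "What is attention in ML?", model="gpt-4o")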
Installation
- Create a virtual environment; we recommend Miniconda for environment management:

conda create -n autorag python=3.12
conda activate autorag
- Install the package:
pip install autonomize-autorag
To install with optional dependencies such as Qdrant, Huggingface, OpenAI, or Modelhub, refer to the Installation Guide.
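For example, pulling in a single integration would look something like pip install "autonomize-autorag[qdrant]"; the exact extra names here are an assumption, so treat the Installation Guide as the source of truth.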
Usage
The full set of examples can be found in the examples directory.
Sync Usage
import os

from autorag.language_models.openai import OpenAILanguageModel

# Read the API key from the environment rather than hard-coding it.
llm = OpenAILanguageModel(
    api_key=os.environ.get("OPENAI_API_KEY"),
)

generation = llm.generate(
    message=[{"role": "user", "content": "What is attention in ML?"}],
    model="gpt-4o",
)
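Note that message takes a list of role/content dictionaries in the familiar OpenAI chat format, so a multi-turn conversation should be expressible by appending further entries to the same list.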
Async Usage
Simply use the sync methods with an "a" prefix and use await for each call. For example, client.generate(...) becomes await client.agenerate(...), and everything else remains the same.
import asyncio
import os

from autorag.language_models.openai import OpenAILanguageModel

llm = OpenAILanguageModel(
    api_key=os.environ.get("OPENAI_API_KEY"),
)

async def main():
    # Same call as the sync example, with the "a" prefix and await.
    generation = await llm.agenerate(
        message=[{"role": "user", "content": "What is attention in ML?"}],
        model="gpt-4o",
    )
    return generation

asyncio.run(main())
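The asyncio.run(main()) wrapper is only needed in a plain script; in an environment with a running event loop, such as a Jupyter notebook, you can await llm.agenerate(...) directly.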
Contribution
To contribute to our AutoRAG SDK, please refer to our Contribution Guidelines.
License
Copyright (C) Autonomize AI - All Rights Reserved
The contents of this repository cannot be copied and/or distributed without explicit permission from Autonomize.ai.
File details
Details for the file autonomize_autorag-0.1.22.tar.gz.
File metadata
- Download URL: autonomize_autorag-0.1.22.tar.gz
- Upload date:
- Size: 23.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.4 CPython/3.12.7 Linux/6.5.0-1025-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 83a48e6ccdc7914d0ba13ec592eb221ccfb4aeccb00958d208bb481bb8b1487a
MD5 | 8379988de0cc2856629fdbe1e43cf9b8
BLAKE2b-256 | f2a86b000d6d17f7d0c6ecd8be5e10eec81c93c53250f2c53ea3d8bf12b3836a
File details
Details for the file autonomize_autorag-0.1.22-py3-none-any.whl.
File metadata
- Download URL: autonomize_autorag-0.1.22-py3-none-any.whl
- Upload date:
- Size: 38.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.4 CPython/3.12.7 Linux/6.5.0-1025-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0d205135ebfd15a9e5ea6954f330782a303f8411802da216a5341c7b6d6cc1e7
MD5 | 3bc677347b98eaecbeb6b3bdf9a5202c
BLAKE2b-256 | 25468e2cc7561444ccc1ae7a774215668177386cdc0891b2970f1a0ef9dd71c1