Ragger
A package for general-purpose RAG applications.
Author(s):
- Dan Saattrup Nielsen (dan.nielsen@alexandra.dk)
- Anders Jess Pedersen (anders.j.pedersen@alexandra.dk)
Maintainer(s):
- Dan Saattrup Nielsen (dan.nielsen@alexandra.dk)
Installation
Installation with `pip`, `uv`, or `poetry`:
pip install alexandrainst_ragger
uv add alexandrainst_ragger
poetry add alexandrainst_ragger
You can also add additional extras to the installation, such as:

- `onprem_cpu` to use anything that requires an on-premises installation, running on a CPU.
- `onprem_gpu` to use anything that requires an on-premises installation, running on a GPU.
- `postgres` to use anything PostgreSQL-related.
- `keyword_search` to use the `BM25Retriever` for keyword-based retrieval. Note that this is also required when using `HybridRetriever` with the default configuration.
- `demo` to use the demo server.
Here is an example of how to install with the `onprem_cpu` extra, with `pip`, `uv`, and `poetry`, respectively:
pip install alexandrainst_ragger[onprem_cpu]
uv add alexandrainst_ragger --extra onprem_cpu
poetry add alexandrainst_ragger --extras onprem_cpu
Development Installation
If you want to install the package for development, you can do so as follows:
git clone git@github.com:alexandrainst/alexandrainst_ragger.git
cd alexandrainst_ragger
make install
Quick Start
Check out the `tutorial.ipynb` notebook for a more detailed guide.
Initialise a RAG system with default settings as follows:
from alexandrainst_ragger import RagSystem
rag_system = RagSystem()
rag_system.add_documents([
    "København er hovedstaden i Danmark.",
    "Danmark har 5,8 millioner indbyggere.",
    "Danmark er medlem af Den Europæiske Union."
])
answer, supporting_documents = rag_system.answer("Hvad er hovedstaden i Danmark?")
The `answer` is then the answer as a string, and `supporting_documents` is a list of `Document` objects that support the answer.
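For example, you can print the answer together with the text of the supporting documents (a minimal sketch, continuing from the snippet above and using the `id` and `text` fields of the `Document` data class described under "All Available Components" below):

```python
# Continuing from the snippet above
answer, supporting_documents = rag_system.answer("Hvad er hovedstaden i Danmark?")

# Print the generated answer followed by the documents it was based on
print(answer)
for document in supporting_documents:
    print(f"[{document.id}] {document.text}")
```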
You can also start a demo server as follows:
from alexandrainst_ragger import Demo
demo = Demo(rag_system=rag_system)
demo.launch()
Run RAG Demo With Docker
Ensure that your SSH keys are in your SSH agent by running `ssh-add -L`. If not, you can add them by running `ssh-add`.
You can run a CPU-based Docker container with a RAG demo with the following commands:
docker build --ssh default --build-arg config=<config-name> -t alexandrainst_ragger -f Dockerfile.cpu .
docker run --rm -p 7860:7860 alexandrainst_ragger
Here `<config-name>` is the name of a YAML or JSON file with the following format (here is a YAML example):
document_store:
  name: JsonlDocumentStore
  <key>: <value> # For any additional arguments to `JsonlDocumentStore`
retriever:
  name: EmbeddingRetriever
  embedder:
    name: OpenAIEmbedder
    <key>: <value> # For any additional arguments to `OpenAIEmbedder`
  embedding_store:
    name: NumpyEmbeddingStore
    <key>: <value> # For any additional arguments to `NumpyEmbeddingStore`
generator:
  name: OpenAIGenerator
  <key>: <value> # For any additional arguments to `OpenAIGenerator`
<key>: <value> # For any additional arguments to `RagSystem` or `Demo`
The config can also just be empty, in which case the defaults are used. This is typically not recommended, however, as you will probably need to at least specify the configuration of your stores. Note that if you change the configuration, you might need extra dependencies when building your image.
Note that some components need additional environment variables to be set, such as `OPENAI_API_KEY` for the `OpenAIEmbedder` and `OpenAIGenerator`. These can be set by including a `.env` file in the working directory when building the Docker image; it will be copied into the image and used both when building the image and when running the demo.
If you have any data on disk, you can simply mount it into the Docker container by adding the `-v` flag to the `docker run` command.
To run a GPU-based Docker container, first ensure that the NVIDIA Container Toolkit is installed and configured. Ensure that the CUDA version stated at the top of the Dockerfile matches the installed CUDA version (which you can check using `nvidia-smi`). After that, build the image as follows:
docker build --pull --ssh default --build-arg config=<config-name> -t alexandrainst_ragger -f Dockerfile.cuda .
docker run --gpus 1 --rm -p 7860:7860 alexandrainst_ragger
All Available Components
Ragger supports the following components:
Document Stores
These are the databases carrying all the documents. Documents are represented as objects of the `Document` data class, which has an `id` and a `text` field. The document stores can all be imported from `alexandrainst_ragger.document_store` (a short sketch of using one follows the list below).
- `JsonlDocumentStore`: A document store that reads from a JSONL file. (default)
- `SqliteDocumentStore`: A document store that uses a SQLite database to store documents.
- `PostgresDocumentStore`: A document store that uses a PostgreSQL database to store documents. This assumes that the PostgreSQL server is already running.
- `TxtDocumentStore`: A document store that reads documents from a single text file, separated by newlines.
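As a minimal sketch of swapping in a non-default document store, assuming here that `SqliteDocumentStore` can be constructed with its default arguments (check the class signature for options such as the database path, which are not spelled out in this README):

```python
from alexandrainst_ragger import RagSystem
from alexandrainst_ragger.document_store import SqliteDocumentStore

# Assumption: the default constructor arguments are usable; in practice you would
# likely point the store at a specific SQLite database file.
document_store = SqliteDocumentStore()
rag_system = RagSystem(document_store=document_store)
```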
Retrievers
Retrievers are used to retrieve documents related to a query. These can all be imported from `alexandrainst_ragger.retriever` (a short sketch of using one follows the list below).
- `EmbeddingRetriever`: A retriever that uses embeddings to retrieve documents. These embeddings are computed using an embedder, which can be one of the following:
  - `OpenAIEmbedder`: An embedder that uses the OpenAI Embeddings API. (default)
  - `E5Embedder`: An embedder that uses an E5 model.

  The embeddings are stored in an embedding store, which can be one of the following:
  - `NumpyEmbeddingStore`: An embedding store that stores embeddings in a NumPy array. (default)
  - `PostgresEmbeddingStore`: An embedding store that uses a PostgreSQL database to store embeddings, using the `pgvector` extension. This assumes that the PostgreSQL server is already running, and that the `pgvector` extension is installed. See here for more information on how to install the extension.
- `BM25Retriever`: A retriever that uses BM25 to retrieve documents. This is keyword-based and is thus more suitable for keyword-based queries.
- `HybridRetriever`: A retriever that fuses the results of multiple retrievers. This can for instance be used to combine the results of the `EmbeddingRetriever` and the `BM25Retriever` to get the best of both worlds (known as "Hybrid Retrieval").
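As a hedged sketch of using a non-default retriever, assuming that `RagSystem` accepts a `retriever` argument (analogous to the `document_store` argument shown in the custom-components example below) and that `BM25Retriever` can be constructed with its defaults (this requires the `keyword_search` extra):

```python
from alexandrainst_ragger import RagSystem
from alexandrainst_ragger.retriever import BM25Retriever

# Assumption: `RagSystem` takes a `retriever` keyword argument and the default
# `BM25Retriever` constructor is usable; check the class signatures before use.
rag_system = RagSystem(retriever=BM25Retriever())
rag_system.add_documents(["København er hovedstaden i Danmark."])
answer, supporting_documents = rag_system.answer("Hvad er hovedstaden i Danmark?")
```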
Generators
Generators are used to generate answers from the retrieved documents and the question.
These can all be imported from `alexandrainst_ragger.generator` (a short sketch follows the list below).
- `OpenAIGenerator`: A generator that uses the OpenAI Chat API. (default)
- `GGUFGenerator`: A generator that uses Llama.cpp to wrap any model from the Hugging Face Hub in GGUF format. Optimised for CPU generation.
- `VllmGenerator`: A generator that uses vLLM to wrap almost any model from the Hugging Face Hub. Note that this requires a GPU to run.
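As a hedged sketch of configuring the generator explicitly, assuming that `RagSystem` accepts a `generator` argument and that the default `OpenAIGenerator` picks up `OPENAI_API_KEY` from the environment (see the Docker section above):

```python
import os

from alexandrainst_ragger import RagSystem
from alexandrainst_ragger.generator import OpenAIGenerator

# Assumptions: `RagSystem` takes a `generator` keyword argument and `OpenAIGenerator`
# reads the API key from the environment; check the class signatures before use.
os.environ.setdefault("OPENAI_API_KEY", "<your-api-key>")
rag_system = RagSystem(generator=OpenAIGenerator())
```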
Custom Components
You can also create custom components by subclassing the following classes:
- `DocumentStore`
- `Retriever` (and by extension, also `Embedder` and `EmbeddingStore`)
- `Generator`

These can then simply be added to a `RagSystem`. Here is an example:
import typing

from alexandrainst_ragger import RagSystem, DocumentStore, Document, Index


class InMemoryDocumentStore(DocumentStore):
    """A document store that just keeps all documents in memory."""

    def __init__(self, documents: list[str]):
        self.documents = [
            Document(id=str(i), text=text) for i, text in enumerate(documents)
        ]

    def add_documents(self, documents: typing.Iterable[Document]):
        self.documents.extend(documents)

    def remove(self):
        self.documents = []

    def __getitem__(self, index: Index) -> Document:
        return self.documents[int(index)]

    def __contains__(self, index: Index) -> bool:
        return 0 <= int(index) < len(self.documents)

    def __iter__(self) -> typing.Generator[Document, None, None]:
        yield from self.documents

    def __len__(self) -> int:
        return len(self.documents)


document_store = InMemoryDocumentStore(documents=[
    "København er hovedstaden i Danmark.",
    "Danmark har 5,8 millioner indbyggere.",
    "Danmark er medlem af Den Europæiske Union."
])
rag_system = RagSystem(document_store=document_store)
answer, supporting_documents = rag_system.answer("Hvad er hovedstaden i Danmark?")