
Ragas as an External Provider for Llama Stack

About

This repository implements Ragas as an out-of-tree Llama Stack evaluation provider.

Features

The goal is to provide all of Ragas' evaluation functionality over Llama Stack's eval API, while leveraging Llama Stack's built-in APIs for inference (LLMs and embeddings), datasets, and benchmarks.

There are two versions of the provider:

  • inline: runs the Ragas evaluation in the same process as the Llama Stack server. This is always available with the base installation.
  • remote: runs the Ragas evaluation in a remote process, using Kubeflow Pipelines. Only available when remote dependencies are installed with pip install llama-stack-provider-ragas[remote].
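Which version runs is determined by the eval provider entry in the distribution's run.yaml. A minimal sketch of what such an entry might look like (the provider_type strings and empty config below are illustrative assumptions; see the sample distributions in this repository for the actual entries):

```yaml
# Illustrative sketch only: check distribution/run.yaml for the real
# provider_type strings and config schema registered by this package.
providers:
  eval:
    - provider_id: ragas
      provider_type: inline::ragas   # or remote::ragas for the KFP-backed version
      config: {}
```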

Prerequisites

Setup

  • Clone this repository

    git clone <repository-url>
    cd llama-stack-provider-ragas
    
  • Create and activate a virtual environment

    uv venv
    source .venv/bin/activate
    
  • Install (optionally as an editable package). There are distro, remote, and dev optional dependency groups for running the sample LS distribution and the KFP-enabled remote provider. Installing the dev dependencies also installs the distro and remote dependencies.

    uv pip install -e ".[dev]"
    
  • The sample LS distributions (one for the inline and one for the remote provider) are simple LS distributions that use Ollama for inference and embeddings. See the provider-specific sections below for setup and run commands.

Inline provider (default with base installation)

Create a .env file with the required environment variable:

EMBEDDING_MODEL=ollama/all-minilm:l6-v2

Run the server:

dotenv run uv run llama stack run distribution/run.yaml

Remote provider (requires optional dependencies)

First install the remote dependencies:

uv pip install -e ".[remote]"

Create a .env file with the following:

# Required for both inline and remote
EMBEDDING_MODEL=ollama/all-minilm:l6-v2

# Required for remote provider
KUBEFLOW_LLAMA_STACK_URL=<your-llama-stack-url>
KUBEFLOW_PIPELINES_ENDPOINT=<your-kfp-endpoint>
KUBEFLOW_NAMESPACE=<your-namespace>
KUBEFLOW_BASE_IMAGE=registry.access.redhat.com/ubi9/python-312:latest
KUBEFLOW_PIPELINES_TOKEN=<your-pipelines-token>
KUBEFLOW_RESULTS_S3_PREFIX=s3://my-bucket/ragas-results
KUBEFLOW_RESULTS_S3_ENDPOINT=<aws s3 endpoint or minio service url>
KUBEFLOW_S3_CREDENTIALS_SECRET_NAME=<secret-name>

Where:

  • KUBEFLOW_LLAMA_STACK_URL: The URL of the Llama Stack server that the remote provider will use to run the evaluation (LLM generations, embeddings, etc.). If you are running Llama Stack locally, you can use ngrok to expose it to the remote provider.
  • KUBEFLOW_PIPELINES_ENDPOINT: The Kubeflow Pipelines API endpoint. On OpenShift you can find it via oc get routes -A | grep -i pipeline.
  • KUBEFLOW_NAMESPACE: The name of the data science project where the Kubeflow Pipelines server is running.
  • KUBEFLOW_PIPELINES_TOKEN: Kubeflow Pipelines token with access to submit pipelines. If not provided, the token will be read from the local kubeconfig file.
  • KUBEFLOW_BASE_IMAGE: The base container image used to run the Ragas evaluation in the remote provider. Defaults to registry.access.redhat.com/ubi9/python-312:latest. The KFP components will automatically install llama-stack-provider-ragas[remote] and its dependencies on top of this base image. You can override this by setting the environment variable to use a custom image.
  • KUBEFLOW_RESULTS_S3_PREFIX: S3 location (bucket and prefix folder) where evaluation results will be stored, e.g., s3://my-bucket/ragas-results.
  • KUBEFLOW_RESULTS_S3_ENDPOINT: S3-compatible endpoint for results storage. Use your MinIO service URL when running on OpenShift.
  • KUBEFLOW_S3_CREDENTIALS_SECRET_NAME: Name of the Kubernetes secret containing AWS credentials with write access to the S3 bucket. Create with:
    oc create secret generic <secret-name> \
      --from-literal=AWS_ACCESS_KEY_ID=your-access-key \
      --from-literal=AWS_SECRET_ACCESS_KEY=your-secret-key \
      --from-literal=AWS_DEFAULT_REGION=us-east-1
    
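A missing variable may only surface once the provider tries to submit a pipeline, so it can help to fail fast at startup. A minimal sketch of such a check (the helper name is hypothetical, and treating KUBEFLOW_PIPELINES_TOKEN and KUBEFLOW_BASE_IMAGE as optional is an assumption based on the descriptions above, since the token falls back to the local kubeconfig and the base image has a documented default):

```python
import os

# Variables the remote provider requires, per the list above.
# KUBEFLOW_PIPELINES_TOKEN and KUBEFLOW_BASE_IMAGE are deliberately
# excluded: the token falls back to the local kubeconfig, and the
# base image has a documented default.
REQUIRED_REMOTE_VARS = [
    "EMBEDDING_MODEL",
    "KUBEFLOW_LLAMA_STACK_URL",
    "KUBEFLOW_PIPELINES_ENDPOINT",
    "KUBEFLOW_NAMESPACE",
    "KUBEFLOW_RESULTS_S3_PREFIX",
    "KUBEFLOW_RESULTS_S3_ENDPOINT",
    "KUBEFLOW_S3_CREDENTIALS_SECRET_NAME",
]


def missing_remote_vars(env=None):
    """Return the required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_REMOTE_VARS if not env.get(name)]
```

Calling missing_remote_vars() before starting the server and aborting if the result is non-empty gives a clearer error than a failed pipeline submission later.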

Run the server:

dotenv run uv run llama stack run distribution/run.yaml

Usage

See the demos in the demos directory.

