TrustyAI Garak (trustyai_garak): Out-of-Tree Llama Stack Eval Provider for Garak Red Teaming
About
This repository implements Garak as a Llama Stack out-of-tree eval provider for security testing and red teaming of Large Language Models, with optional shield integration for enhanced security testing. See the project's tutorial to get started.
What It Does
- Automated Security Testing: Detects prompt injection, jailbreaks, toxicity, and bias vulnerabilities
- Compliance Scanning: OWASP LLM Top 10, AVID taxonomy benchmarks
- Shield Testing: Compare LLM security with/without guardrails
- Scalable Deployment: Local or Kubernetes/Kubeflow execution
- Comprehensive Reporting: JSON, HTML, and detailed logs with vulnerability scores (0.0-1.0)
Installation
Production (Remote Execution - Default)
```bash
git clone https://github.com/trustyai-explainability/llama-stack-provider-trustyai-garak.git
cd llama-stack-provider-trustyai-garak
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
```
This installs the remote provider by default, which executes scans on Kubernetes/Kubeflow Pipelines. This is the recommended mode for production deployments with lightweight dependencies.
Development (With Inline Execution)
```bash
# Install with inline provider for local development/testing
pip install -e ".[inline]"
```
This adds support for inline execution (local scans), which requires heavier dependencies including garak and langchain.
Quick Start
1. Configure Environment
```bash
# Model serving endpoint
export VLLM_URL="http://your-model-endpoint/v1"
export INFERENCE_MODEL="your-model-name"

# Llama Stack endpoint (for inline: local, for remote: accessible from KFP pods)
export LLAMA_STACK_URL="http://localhost:8321"
```
2. Start Server
```bash
# Inline mode - local scanning (requires [inline] extra)
llama stack run run.yaml

# Inline mode with shields (requires [inline] extra)
llama stack run run-with-safety.yaml

# Remote mode - Kubernetes/KFP (default install)
llama stack run run-remote.yaml

# Remote mode with shields (default install)
llama stack run run-remote-safety.yaml
```
Server runs at http://localhost:8321
3. Run Security Scan
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Quick 5-minute scan
job = client.alpha.eval.run_eval(
    benchmark_id="trustyai_garak::quick",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "your-model-name",
            "sampling_params": {"max_tokens": 100}
        }
    }
)

# Check status
status = client.alpha.eval.jobs.status(job_id=job.job_id, benchmark_id="trustyai_garak::quick")
print(f"Status: {status.status}")

# Get results when complete
if status.status == "completed":
    results = client.alpha.eval.get_eval_job_result(job_id=job.job_id, benchmark_id="trustyai_garak::quick")
```
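Scans run asynchronously and the longer benchmarks take hours, so the one-off status check above is usually wrapped in a polling loop. A minimal sketch, not part of the provider API; the terminal states other than `completed`, and the `get_status` callable, are assumptions for illustration:

```python
import time

def wait_for_job(get_status, poll_interval=10, timeout=600):
    """Poll a zero-argument status callable until a terminal state or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_status()
        if state in ("completed", "failed", "cancelled"):
            return state
        time.sleep(poll_interval)
    raise TimeoutError(f"job still running after {timeout}s")

# Stubbed status sequence; with a real client this would be e.g.
# lambda: client.alpha.eval.jobs.status(job_id=job.job_id,
#                                       benchmark_id="trustyai_garak::quick").status
states = iter(["in_progress", "in_progress", "completed"])
print(wait_for_job(lambda: next(states), poll_interval=0))  # prints "completed"
```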
Available Benchmarks
Compliance Frameworks
| Benchmark ID | Framework | Duration |
|---|---|---|
| `trustyai_garak::owasp_llm_top10` | OWASP LLM Top 10 | ~8 hours |
| `trustyai_garak::avid_security` | AVID Security | ~8 hours |
| `trustyai_garak::avid_ethics` | AVID Ethics | ~30 minutes |
| `trustyai_garak::avid_performance` | AVID Performance | ~40 minutes |
Test Profiles
| Benchmark ID | Description | Duration |
|---|---|---|
| `trustyai_garak::quick` | Essential security checks (3 probes) | ~5 minutes |
| `trustyai_garak::standard` | Standard attack vectors (5 categories) | ~1 hour |

Duration estimates are based on Qwen2.5 7B served via vLLM.
Advanced Usage
Other Garak Probes
```python
client.benchmarks.register(
    benchmark_id="custom",
    dataset_id="garak",
    scoring_functions=["garak_scoring"],
    provider_benchmark_id="custom",
    provider_id="trustyai_garak",
    metadata={
        "probes": ["latentinjection.LatentJailbreak", "snowball.GraphConnectivity"],
        "timeout": 900
    }
)
```
Shield Testing
```python
# Test with input shield
client.benchmarks.register(
    benchmark_id="with_shield",
    dataset_id="garak",
    scoring_functions=["garak_scoring"],
    provider_benchmark_id="with_shield",
    provider_id="trustyai_garak",
    metadata={
        "probes": ["promptinject.HijackHateHumans"],
        "shield_ids": ["Prompt-Guard-86M"]  # Input shield only
    }
)

# Test with input/output shields
metadata={
    "probes": ["promptinject.HijackHateHumans"],
    "shield_config": {
        "input": ["Prompt-Guard-86M"],
        "output": ["Llama-Guard-3-8B"]
    }
}
```
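When scripting several shield scenarios, the two metadata shapes above can be produced by a small helper. This is illustrative glue code, not a provider API; it only encodes the convention shown above (`shield_ids` for input-only shields, `shield_config` when output shields are involved):

```python
def shield_metadata(probes, input_shields=None, output_shields=None):
    """Build benchmark metadata for the shield configurations shown above."""
    metadata = {"probes": list(probes)}
    if output_shields:
        # Input/output shields: use the direction-aware shield_config form.
        metadata["shield_config"] = {
            "input": list(input_shields or []),
            "output": list(output_shields),
        }
    elif input_shields:
        # Input shields only: the flat shield_ids form is enough.
        metadata["shield_ids"] = list(input_shields)
    return metadata

meta = shield_metadata(
    ["promptinject.HijackHateHumans"],
    input_shields=["Prompt-Guard-86M"],
    output_shields=["Llama-Guard-3-8B"],
)
```

The returned dict can then be passed as `metadata=` to `client.benchmarks.register` exactly as in the snippets above.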
Accessing Reports
```python
import requests

# Get report file IDs from job status
job_id = job.job_id
status = client.alpha.eval.jobs.status(job_id=job_id, benchmark_id="trustyai_garak::quick")

# File IDs are in metadata (for remote: prefixed with job_id)
scan_report_id = status.metadata.get(f"{job_id}_scan.report.jsonl") or status.metadata.get("scan.report.jsonl")
scan_html_id = status.metadata.get(f"{job_id}_scan.report.html") or status.metadata.get("scan.report.html")

# Download via Files API
content = client.files.content(scan_report_id)

# Or via HTTP
report = requests.get(f"http://localhost:8321/v1/files/{scan_html_id}/content")
```
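The prefixed-or-bare key lookup above can be factored into a helper. A hypothetical function (`report_file_id` is not part of the client) that mirrors the remote vs. inline metadata layout described in the comment:

```python
def report_file_id(metadata, job_id, filename):
    """Resolve a report's file ID from job-status metadata.

    Remote runs store file IDs under keys prefixed with the job ID,
    inline runs under the bare filename, so try both in that order.
    """
    return metadata.get(f"{job_id}_{filename}") or metadata.get(filename)

# Remote-style and inline-style metadata, using made-up file IDs:
remote_meta = {"job-42_scan.report.jsonl": "file-abc"}
inline_meta = {"scan.report.jsonl": "file-xyz"}
assert report_file_id(remote_meta, "job-42", "scan.report.jsonl") == "file-abc"
assert report_file_id(inline_meta, "job-42", "scan.report.jsonl") == "file-xyz"
```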
Remote Execution (Kubernetes/KFP)
Setup
```bash
# Llama Stack URL (must be accessible from Kubeflow pods - use ngrok if local)
export KUBEFLOW_LLAMA_STACK_URL="https://your-llama-stack-url.ngrok.io"

# Kubeflow Configuration
export KUBEFLOW_PIPELINES_ENDPOINT="https://your-kfp-endpoint"
export KUBEFLOW_NAMESPACE="your-namespace"
export KUBEFLOW_BASE_IMAGE="quay.io/rh-ee-spandraj/trustyai-lls-garak-provider-dsp:latest"
export KUBEFLOW_PIPELINES_TOKEN="" # Optional: If not set, uses kubeconfig

# Start server
llama stack run run-remote.yaml
```
Important notes:
- For remote execution, `KUBEFLOW_LLAMA_STACK_URL` must be accessible from KFP pods. If running locally, use ngrok.
- Results are stored via the configured Files API provider (S3, LocalFS, GCS, etc.).
- Both the server and KFP pods access the same Files API backend automatically.
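Because a remote run only fails once the pipeline is submitted, it can help to check the Kubeflow settings before starting the server. A minimal sketch; the set of required variables is taken from the setup snippet above, and treating an empty value as unset is an assumption:

```python
import os

# Required by the remote provider config; KUBEFLOW_PIPELINES_TOKEN is
# optional (the provider falls back to kubeconfig when it is unset).
REQUIRED_REMOTE_VARS = [
    "KUBEFLOW_LLAMA_STACK_URL",
    "KUBEFLOW_PIPELINES_ENDPOINT",
    "KUBEFLOW_NAMESPACE",
    "KUBEFLOW_BASE_IMAGE",
]

def missing_remote_vars(environ=None):
    """Return the remote-mode variables that are unset or empty."""
    env = os.environ if environ is None else environ
    return [name for name in REQUIRED_REMOTE_VARS if not env.get(name)]

if missing_remote_vars():
    print("Set these before `llama stack run run-remote.yaml`:",
          missing_remote_vars())
```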
Usage
```python
# Same API, runs as KFP pipeline
job = client.alpha.eval.run_eval(benchmark_id="trustyai_garak::owasp_llm_top10", ...)

# Monitor pipeline
status = client.alpha.eval.jobs.status(job_id=job.job_id, benchmark_id="trustyai_garak::owasp_llm_top10")
print(f"KFP Run ID: {status.metadata['kfp_run_id']}")
```
Configuration Reference
Inline Provider Config (run.yaml)
```yaml
providers:
  eval:
    - provider_id: trustyai_garak
      provider_type: inline::trustyai_garak
      config:
        llama_stack_url: ${env.LLAMA_STACK_URL:=http://localhost:8321}
        timeout: ${env.GARAK_TIMEOUT:=10800}
        max_concurrent_jobs: ${env.GARAK_MAX_CONCURRENT_JOBS:=5}
        max_workers: ${env.GARAK_MAX_WORKERS:=5}
```
Remote Provider Config (run-remote.yaml)
```yaml
providers:
  eval:
    - provider_id: trustyai_garak_remote
      provider_type: remote::trustyai_garak
      config:
        llama_stack_url: ${env.KUBEFLOW_LLAMA_STACK_URL}
        timeout: ${env.GARAK_TIMEOUT:=10800}
        kubeflow_config:
          pipelines_endpoint: ${env.KUBEFLOW_PIPELINES_ENDPOINT}
          namespace: ${env.KUBEFLOW_NAMESPACE}
          base_image: ${env.KUBEFLOW_BASE_IMAGE}
          pipelines_api_token: ${env.KUBEFLOW_PIPELINES_TOKEN:=}

  # Files provider (S3, LocalFS, or any other backend)
  files:
    - provider_id: s3
      provider_type: remote::s3
      config:
        bucket_name: ${env.S3_BUCKET_NAME}
        region: ${env.AWS_DEFAULT_REGION:=us-east-1}
        # ... S3 configuration
```
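The `${env.VAR}` and `${env.VAR:=default}` placeholders above are expanded by Llama Stack itself when it loads the config. Purely to illustrate the syntax, here is a minimal re-implementation; it may not match Llama Stack's exact edge-case behavior (e.g. for empty values):

```python
import os
import re

_ENV_PATTERN = re.compile(r"\$\{env\.([A-Za-z_][A-Za-z0-9_]*)(?::=([^}]*))?\}")

def substitute_env(text, environ=None):
    """Expand ${env.VAR} and ${env.VAR:=default} placeholders in a string."""
    env = os.environ if environ is None else environ

    def repl(match):
        name, default = match.group(1), match.group(2)
        if env.get(name):        # set and non-empty wins
            return env[name]
        if default is not None:  # the ':=' form supplies a fallback
            return default
        raise KeyError(f"required environment variable {name} is not set")

    return _ENV_PATTERN.sub(repl, text)

assert substitute_env("${env.GARAK_TIMEOUT:=10800}", {}) == "10800"
assert substitute_env("${env.KUBEFLOW_NAMESPACE}", {"KUBEFLOW_NAMESPACE": "team-a"}) == "team-a"
```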
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `LLAMA_STACK_URL` | `http://localhost:8321/v1` | Llama Stack API URL (inline mode) |
| `KUBEFLOW_LLAMA_STACK_URL` | - | Llama Stack URL accessible from KFP pods (remote mode) |
| `GARAK_TIMEOUT` | `10800` | Max scan timeout (seconds) |
| `GARAK_MAX_CONCURRENT_JOBS` | `5` | Max concurrent scans (inline only) |
| `GARAK_MAX_WORKERS` | `5` | Shield scanning parallelism |
| `GARAK_SCAN_DIR` | `/tmp/.cache/llama_stack_garak_scans` | Directory for scan files (must be writable) |
| `XDG_CACHE_HOME` | `/tmp/.cache` | XDG cache directory (auto-configured) |
Deployment Modes
| Mode | Config File | Provider Type | Dependencies |
|---|---|---|---|
| Inline | `run.yaml` | `inline::trustyai_garak` | Requires `[inline]` extra |
| Inline+Safety | `run-with-safety.yaml` | `inline::trustyai_garak` | Requires `[inline]` extra |
| Remote (Default) | `run-remote.yaml` | `remote::trustyai_garak` | Default install |
| Remote+Safety | `run-remote-safety.yaml` | `remote::trustyai_garak` | Default install |
Architecture
Remote-First Design
The provider uses a remote-first architecture:
- Default Install: Lightweight, includes only KFP dependencies
- Remote Provider: No garak on server, runs scans in Kubernetes pods
- Files API Integration: Portable across any Files backend (S3, LocalFS, GCS)
- Automatic Configuration: XDG directories auto-configured for writable paths
Result Interpretation
- Score Range: 0.0 (secure) to 1.0 (vulnerable)
- Threshold: Scores ≥ 0.5 indicate security concerns
- Reports: Available in JSON, HTML, and log formats via Files API
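The threshold above can be encoded directly when post-processing results. A tiny helper; the label names are illustrative, not part of the provider's report format:

```python
def interpret_score(score):
    """Classify a vulnerability score on the 0.0 (secure) to 1.0 (vulnerable)
    scale, flagging anything at or above the 0.5 threshold."""
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score out of range: {score}")
    return "security concern" if score >= 0.5 else "pass"

assert interpret_score(0.82) == "security concern"
assert interpret_score(0.1) == "pass"
```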
Installation Options
| Install Command | Providers Available | Use Case |
|---|---|---|
| `pip install -e .` | Remote only | Production (default) |
| `pip install -e ".[inline]"` | Remote + Inline | Development/Testing |
Examples & Demos
| Notebook | Description |
|---|---|
| 01-getting_started | Basic usage and custom probes (inline mode) |
| 02-scan_with_shields | Shield integration testing (inline mode) |
| 03-remote_garak | KFP remote execution |
File details

Details for the file llama_stack_provider_trustyai_garak-0.1.7.tar.gz.

File metadata
- Download URL: llama_stack_provider_trustyai_garak-0.1.7.tar.gz
- Upload date:
- Size: 59.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `00ff97b40efef2d56d204636f02431706526bb197aa45c5779775c56944e1bce` |
| MD5 | `24e2d8b89fee51098437da7d3778167b` |
| BLAKE2b-256 | `b4211161072f3552fc1c5f8e9e7dbe19307f619158ae18d2df2421d597104ac0` |
Provenance

The following attestation bundles were made for llama_stack_provider_trustyai_garak-0.1.7.tar.gz:

Publisher: build-and-publish.yaml on trustyai-explainability/llama-stack-provider-trustyai-garak

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llama_stack_provider_trustyai_garak-0.1.7.tar.gz
- Subject digest: 00ff97b40efef2d56d204636f02431706526bb197aa45c5779775c56944e1bce
- Sigstore transparency entry: 756769114
- Sigstore integration time:
- Permalink: trustyai-explainability/llama-stack-provider-trustyai-garak@3f31ebd3d802f1e241a6abeb4de362b985889307
- Branch / Tag: refs/tags/v0.1.7
- Owner: https://github.com/trustyai-explainability
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: build-and-publish.yaml@3f31ebd3d802f1e241a6abeb4de362b985889307
- Trigger Event: release
File details

Details for the file llama_stack_provider_trustyai_garak-0.1.7-py3-none-any.whl.

File metadata
- Download URL: llama_stack_provider_trustyai_garak-0.1.7-py3-none-any.whl
- Upload date:
- Size: 50.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f1969f8287bdba109407c7dc1c985d75e7bf0676ad72b2bddbc71611f68b96c1` |
| MD5 | `6388f31518c164da0926797c4421b92b` |
| BLAKE2b-256 | `cda71a6abcf6199c3319e37280b264b426f481bf5f07296df249e8412c35a550` |
Provenance

The following attestation bundles were made for llama_stack_provider_trustyai_garak-0.1.7-py3-none-any.whl:

Publisher: build-and-publish.yaml on trustyai-explainability/llama-stack-provider-trustyai-garak

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llama_stack_provider_trustyai_garak-0.1.7-py3-none-any.whl
- Subject digest: f1969f8287bdba109407c7dc1c985d75e7bf0676ad72b2bddbc71611f68b96c1
- Sigstore transparency entry: 756769142
- Sigstore integration time:
- Permalink: trustyai-explainability/llama-stack-provider-trustyai-garak@3f31ebd3d802f1e241a6abeb4de362b985889307
- Branch / Tag: refs/tags/v0.1.7
- Owner: https://github.com/trustyai-explainability
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: build-and-publish.yaml@3f31ebd3d802f1e241a6abeb4de362b985889307
- Trigger Event: release