
TrustyAI Garak (trustyai_garak): Out-of-Tree Llama Stack Eval Provider for Garak Red Teaming

About

This repository implements Garak as an out-of-tree Llama Stack eval provider for security testing and red-teaming of Large Language Models, with optional shield integration for enhanced security testing. See the project's tutorial to get started.

What It Does

  • Automated Security Testing: Detects prompt injection, jailbreaks, toxicity, and bias vulnerabilities
  • Compliance Scanning: OWASP LLM Top 10, AVID taxonomy benchmarks
  • Shield Testing: Compare LLM security with/without guardrails
  • Scalable Deployment: Local or Kubernetes/Kubeflow execution
  • Comprehensive Reporting: JSON, HTML, and detailed logs with vulnerability scores (0.0-1.0)

Installation

git clone https://github.com/trustyai-explainability/llama-stack-provider-trustyai-garak.git
cd llama-stack-provider-trustyai-garak
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
# For remote execution: pip install -e ".[remote]"

Quick Start

1. Configure Environment

# Model serving endpoint
export VLLM_URL="http://your-model-endpoint/v1"
export INFERENCE_MODEL="your-model-name"

# Llama Stack endpoint (for inline: local, for remote: accessible from KFP pods)
export LLAMA_STACK_URL="http://localhost:8321"

2. Start Server

# Basic mode (standard scanning)
llama stack run run.yaml

# Enhanced mode (with shield testing)
llama stack run run-with-safety.yaml

# Remote mode (Kubernetes/KFP)
llama stack run run-remote.yaml

Server runs at http://localhost:8321

3. Run Security Scan

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Quick 5-minute scan
job = client.alpha.eval.run_eval(
    benchmark_id="trustyai_garak::quick",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "your-model-name",
            "sampling_params": {"max_tokens": 100}
        }
    }
)

# Check status
status = client.alpha.eval.jobs.status(job_id=job.job_id, benchmark_id="trustyai_garak::quick")
print(f"Status: {status.status}")

# Get results when complete
if status.status == "completed":
    results = client.alpha.eval.get_eval_job_result(job_id=job.job_id, benchmark_id="trustyai_garak::quick")

Available Benchmarks

Compliance Frameworks

Benchmark ID                        Framework           Duration
trustyai_garak::owasp_llm_top10     OWASP LLM Top 10    ~8 hours
trustyai_garak::avid_security       AVID Security       ~8 hours
trustyai_garak::avid_ethics         AVID Ethics         ~30 minutes
trustyai_garak::avid_performance    AVID Performance    ~40 minutes

Test Profiles

Benchmark ID                Description                               Duration
trustyai_garak::quick       Essential security checks (3 probes)      ~5 minutes
trustyai_garak::standard    Standard attack vectors (5 categories)    ~1 hour

Duration estimates are based on Qwen2.5 7B served via vLLM.
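On a running server, the benchmarks actually registered can be checked by filtering the listed IDs on the provider prefix. A small sketch (the garak_benchmark_ids helper is ours, not part of the client; it assumes you have already extracted the identifier strings, e.g. [b.identifier for b in client.benchmarks.list()]):

```python
def garak_benchmark_ids(benchmark_ids, prefix="trustyai_garak::"):
    """Return the garak-provided benchmark IDs, sorted, from a list of
    benchmark identifier strings."""
    return sorted(b for b in benchmark_ids if b.startswith(prefix))
```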

Advanced Usage

Other Garak Probes

client.benchmarks.register(
    benchmark_id="custom",
    dataset_id="garak",
    scoring_functions=["garak_scoring"],
    provider_benchmark_id="custom",
    provider_id="trustyai_garak",
    metadata={
        "probes": ["latentinjection.LatentJailbreak", "snowball.GraphConnectivity"],
        "timeout": 900
    }
)

Shield Testing

# Test with input shield
client.benchmarks.register(
    benchmark_id="with_shield",
    dataset_id="garak",
    scoring_functions=["garak_scoring"],
    provider_benchmark_id="with_shield",
    provider_id="trustyai_garak",
    metadata={
        "probes": ["promptinject.HijackHateHumans"],
        "shield_ids": ["Prompt-Guard-86M"]  # Input shield only
    }
)

# Test with both input and output shields: register as above, with this metadata
metadata={
    "probes": ["promptinject.HijackHateHumans"],
    "shield_config": {
        "input": ["Prompt-Guard-86M"],
        "output": ["Llama-Guard-3-8B"]
    }
}
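Since scores run from 0.0 (secure) to 1.0 (vulnerable), the effect of a shield can be estimated by differencing per-probe scores between an unshielded and a shielded run of the same probes. A sketch, assuming you have already extracted each run's results into a probe-name-to-score dict (the exact layout of the results object is not shown here, so that extraction step is yours):

```python
def shield_effect(unshielded, shielded):
    """Per-probe score reduction (positive = shield helped) for probes
    present in both runs, given {probe_name: score} dicts."""
    return {
        probe: round(unshielded[probe] - shielded[probe], 4)
        for probe in unshielded
        if probe in shielded
    }
```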

Accessing Reports

# Get report file IDs from job status
scan_report_id = status.metadata["scan.report.jsonl"]
scan_html_id = status.metadata["scan.report.html"]

# Download via Files API
content = client.files.content(scan_report_id)

# Or via HTTP
import requests
report = requests.get(f"http://localhost:8321/v1/files/{scan_html_id}/content")
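The downloaded content can then be written to disk for inspection. A minimal sketch (the save_report helper is ours; it assumes the content arrives as bytes or str, which may differ between the Files API and the raw HTTP route):

```python
from pathlib import Path

def save_report(content, path):
    """Write report content (bytes or str) to path; return the byte count."""
    data = content.encode("utf-8") if isinstance(content, str) else content
    Path(path).write_bytes(data)
    return len(data)
```

For the HTTP example above, this would be save_report(report.content, "scan_report.html").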

Remote Execution (Kubernetes/KFP)

Setup

# Llama Stack URL (must be accessible from Kubeflow pods - use ngrok if local)
export LLAMA_STACK_URL="https://your-llama-stack-url.ngrok.io"

# Kubeflow Configuration
export KUBEFLOW_PIPELINES_ENDPOINT="https://your-kfp-endpoint"
export KUBEFLOW_NAMESPACE="your-namespace"
export KUBEFLOW_BASE_IMAGE="quay.io/rh-ee-spandraj/trustyai-lls-garak-provider-dsp:latest"
export KUBEFLOW_RESULTS_S3_PREFIX="s3://garak-results/scans"  # S3 path: bucket/prefix
export KUBEFLOW_S3_CREDENTIALS_SECRET_NAME="aws-connection-pipeline-artifacts"  # K8s secret name
export KUBEFLOW_PIPELINES_TOKEN=""  # Optional: If not set, uses kubeconfig

# S3 Configuration (for server-side S3 access to retrieve results)
# These are also stored in the Kubernetes secret specified above for pod access
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
export AWS_S3_ENDPOINT="https://your-s3-endpoint" # if using MinIO
export AWS_DEFAULT_REGION="us-east-1"

# Start server
llama stack run run-remote.yaml

Note: For remote execution, LLAMA_STACK_URL must be accessible from KFP pods. If running locally, use ngrok to create an accessible endpoint.

Usage

# Same API, runs as KFP pipeline
job = client.alpha.eval.run_eval(benchmark_id="trustyai_garak::owasp_llm_top10", ...)

# Monitor pipeline
status = client.alpha.eval.jobs.status(job_id=job.job_id, benchmark_id="trustyai_garak::owasp_llm_top10")
print(f"KFP Run ID: {status.metadata['kfp_run_id']}")

Configuration Reference

Provider Config (run.yaml)

providers:
  eval:
    - provider_id: trustyai_garak
      config:
        llama_stack_url: ${env.LLAMA_STACK_URL:=http://localhost:8321}
        timeout: ${env.GARAK_TIMEOUT:=10800}
        max_concurrent_jobs: ${env.GARAK_MAX_CONCURRENT_JOBS:=5}
        max_workers: ${env.GARAK_MAX_WORKERS:=5}

Environment Variables

Variable                     Default                     Description
LLAMA_STACK_URL              http://localhost:8321/v1    Llama Stack API URL
GARAK_TIMEOUT                10800                       Max scan timeout (seconds)
GARAK_MAX_CONCURRENT_JOBS    5                           Max concurrent scans (inline only)
GARAK_MAX_WORKERS            5                           Shield scanning parallelism

Deployment Modes

Mode             Config File               Features
Basic            run.yaml                  Standard scanning
Enhanced         run-with-safety.yaml      + Shield integration
Remote           run-remote.yaml           KFP execution
Remote+Safety    run-remote-safety.yaml    + Shield integration

Result Interpretation

  • Score Range: 0.0 (secure) to 1.0 (vulnerable)
  • Threshold: Scores ≥ 0.5 indicate security concerns
  • Reports: Available in JSON, HTML, and log formats via Files API
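Applying the 0.5 threshold programmatically is straightforward once per-probe scores have been extracted. The helper below is illustrative only (the exact layout of the results object is not documented here, so producing the {probe_name: score} dict is left to the caller):

```python
def flag_concerns(scores, threshold=0.5):
    """Return the {probe: score} entries at or above threshold, worst first."""
    flagged = {p: s for p, s in scores.items() if s >= threshold}
    return dict(sorted(flagged.items(), key=lambda kv: kv[1], reverse=True))
```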

Examples & Demos

Notebook                Description
01-getting_started      Basic usage and custom probes
02-scan_with_shields    Shield integration testing
03-remote_garak         KFP remote execution
