garak (LLM vulnerability scanner) - packaged by NVIDIA Eval Factory

Project description

NVIDIA NeMo Evaluator

The goal of NVIDIA NeMo Evaluator is to advance and refine state-of-the-art methodologies for model evaluation, and deliver them as modular evaluation packages (evaluation containers and pip wheels) that teams can use as standardized building blocks.

Quick start guide

NVIDIA NeMo Evaluator provides evaluation clients that are purpose-built to evaluate model endpoints using our Standard API.

Launching an evaluation for an LLM

  1. Install the package

    pip install nvidia-eval-factory-garak
    
  2. (Optional) If your API endpoint is protected, set a token:

    export MY_API_KEY="your_api_key_here"
    
  3. List the available evaluations:

    $ nemo-evaluator ls
    Available tasks:
    * garak (in garak)
    ...
    
  4. Run the evaluation:

    nemo-evaluator run_eval \
        --eval_type garak \
        --model_id microsoft/phi-4-mini-instruct \
        --model_url https://integrate.api.nvidia.com/v1/chat/completions \
        --model_type chat \
        --api_key_name MY_API_KEY \
        --output_dir /workspace/results
    
  5. Gather the results

    cat /workspace/results/results.yml
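The results.yml file lands in the directory given by --output_dir. A minimal sketch for picking it up from a script (the file's schema is not documented here, so this just returns the raw text):

```python
from pathlib import Path

def read_results(output_dir: str) -> str:
    """Return the raw results.yml text, or '' if the run produced none."""
    path = Path(output_dir) / "results.yml"
    return path.read_text() if path.exists() else ""

print(read_results("/workspace/results"))
```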
    

Command-Line Tool

Each package comes pre-installed with a set of command-line tools designed to simplify the execution of evaluation tasks. Below are the available commands and their usage for the garak package:

Commands

1. List Evaluation Types

nemo-evaluator ls

Displays the evaluation types available within the harness.

2. Run an evaluation

The nemo-evaluator run_eval command executes the evaluation process. Below are the flags and their descriptions:

Required flags

  • --eval_type <string> The type of evaluation to perform
  • --model_id <string> The name or identifier of the model to evaluate.
  • --model_url <url> The API endpoint where the model is accessible.
  • --model_type <string> The type of the model to evaluate, currently either "chat", "completions", or "vlm".
  • --output_dir <directory> The directory to use as the working directory for the evaluation. The results, including the results.yml output file, will be saved here. Make sure to use the absolute path.
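Since --output_dir must be an absolute path, a relative path can be resolved before invoking the tool; a minimal Python sketch:

```python
from pathlib import Path

# Resolve a relative directory to the absolute path that --output_dir expects.
output_dir = Path("results").resolve()

assert output_dir.is_absolute()
print(output_dir)
```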

Optional flags

  • --api_key_name <string> The name of the environment variable that stores the Bearer token for the API, if authentication is required.
  • --run_config <path> Specifies the path to a YAML file containing the evaluation definition.

Example

nemo-evaluator run_eval \
    --eval_type garak \
    --model_id my_model \
    --model_type chat \
    --model_url http://localhost:8000/v1/chat/completions \
    --output_dir /workspace/evaluation_results

If the model API requires authentication, set the API key in an environment variable and reference it using the --api_key_name flag:

export MY_API_KEY="your_api_key_here"

nemo-evaluator run_eval \
    --eval_type garak \
    --model_id my_model \
    --model_type chat \
    --model_url http://localhost:8000/v1/chat/completions \
    --api_key_name MY_API_KEY \
    --output_dir /workspace/evaluation_results
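Note that --api_key_name passes the *name* of the environment variable, not the secret itself; conceptually, the token is read from the environment at runtime and sent as a Bearer header. A sketch of that lookup (the exact header construction is an assumption about the client's behavior):

```python
import os

# Simulate what `export MY_API_KEY=...` set in the shell.
os.environ["MY_API_KEY"] = "your_api_key_here"

# The evaluation client receives only the variable name via
# --api_key_name and resolves the actual secret from the environment.
api_key_name = "MY_API_KEY"
token = os.environ[api_key_name]
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])  # Bearer your_api_key_here
```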

Configuring evaluations via YAML

Evaluations in NVIDIA NeMo Evaluator are configured using YAML files that define the parameters and settings required for the evaluation process. These configuration files follow a standard API, which ensures consistency across evaluations.

Example of a YAML config:

config:
  type: garak
  params:
    parallelism: 50
    limit_samples: 20
    extra:
      probes: atkgen.Tox
target:
  api_endpoint:
    model_id: microsoft/phi-4-mini-instruct
    type: chat
    url: https://integrate.api.nvidia.com/v1/chat/completions
    api_key: NVIDIA_API_KEY

The priority of overrides is as follows:

  1. command line arguments
  2. user config (as seen above)
  3. task defaults (defined per task type)
  4. framework defaults
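The layering above can be sketched as successive dictionary merges where later layers win; this is an illustrative model of the precedence, not the actual implementation:

```python
def merge(base, override):
    """Recursively merge `override` into `base`; override wins on conflicts."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# Hypothetical layers, lowest priority first.
framework_defaults = {"params": {"parallelism": 32, "temperature": 0.1}}
task_defaults = {"params": {"parallelism": 16}}
user_config = {"params": {"parallelism": 50, "limit_samples": 20}}
cli_args = {"params": {"limit_samples": 5}}

config = framework_defaults
for layer in (task_defaults, user_config, cli_args):
    config = merge(config, layer)

print(config["params"])
# parallelism comes from the user config (50), limit_samples from the
# CLI (5), temperature falls through from framework defaults (0.1).
```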

The --dry_run option prints the final run configuration and command without executing the evaluation.

Example:

nemo-evaluator run_eval \
    --eval_type garak \
    --model_id my_model \
    --model_type chat \
    --model_url http://localhost:8000/v1/chat/completions \
    --output_dir /workspace/evaluation_results \
    --dry_run

Output:

Rendered config:

command: "cat > garak_config.yaml << 'EOF'\nplugins:\n  {% if config.params.extra.probes\
  \ is not none %}probe_spec: {{config.params.extra.probes}}{% endif %}\n  extended_detectors:\
  \ true\n  model_type: {% if target.api_endpoint.type == \"completions\" %}nim.NVOpenAICompletion{%\
  \ elif target.api_endpoint.type == \"chat\" %}nim.NVOpenAIChat{% endif %}\n  model_name:\
  \ {{target.api_endpoint.model_id}}\n  generators:\n    nim:\n      uri: {{target.api_endpoint.url\
  \ | replace('/chat/completions', '') | replace('/completions', '')}}\n      {% if\
  \ config.params.temperature is not none %}temperature: {{config.params.temperature}}{%\
  \ endif %}\n      {% if config.params.top_p is not none %}top_p: {{config.params.top_p}}{%\
  \ endif %}\n      {% if config.params.max_new_tokens is not none %}max_tokens: {{config.params.max_new_tokens}}{%\
  \ endif %}\nsystem:\n  parallel_attempts: {{config.params.parallelism}}\n  lite:\
  \ false\nEOF\n{% if target.api_endpoint.api_key is not none %}\nexport NIM_API_KEY=${{target.api_endpoint.api_key}}\
  \ &&\n{% else %}\nexport NIM_API_KEY=dummy &&\n{% endif %}\nexport XDG_DATA_HOME={{config.output_dir}}\
  \ &&\ngarak --config garak_config.yaml --report_prefix=results\n"
framework_name: garak
pkg_name: garak
config:
  output_dir: /workspace/evaluation_results
  params:
    limit_samples: null
    max_new_tokens: 150
    max_retries: null
    parallelism: 32
    task: garak
    temperature: 0.1
    request_timeout: null
    top_p: 0.7
    extra:
      probes: null
  supported_endpoint_types:
  - chat
  - completions
  type: garak
target:
  api_endpoint:
    api_key: null
    model_id: my_model
    stream: null
    type: chat
    url: http://localhost:8000/v1/chat/completions


Rendered command:

cat > garak_config.yaml << 'EOF'
plugins:
  
  extended_detectors: true
  model_type: nim.NVOpenAIChat
  model_name: my_model
  generators:
    nim:
      uri: http://localhost:8000/v1
      temperature: 0.1
      top_p: 0.7
      max_tokens: 150
system:
  parallel_attempts: 32
  lite: false
EOF

export NIM_API_KEY=dummy &&

export XDG_DATA_HOME=/workspace/evaluation_results &&
garak --config garak_config.yaml --report_prefix=results
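Note how the rendered uri drops the /chat/completions suffix from --model_url: garak's nim generator takes the API base, and the template derives it with Jinja replace filters. The same transformation in plain Python:

```python
def to_base_uri(model_url: str) -> str:
    """Mirror the template's replace('/chat/completions', '') | replace('/completions', '')."""
    return model_url.replace("/chat/completions", "").replace("/completions", "")

print(to_base_uri("http://localhost:8000/v1/chat/completions"))  # http://localhost:8000/v1
print(to_base_uri("http://localhost:8000/v1/completions"))       # http://localhost:8000/v1
```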

FAQ

Deploying a model as an endpoint

NVIDIA NeMo Evaluator uses a client-server communication architecture to interact with the model. As a prerequisite, the model must be deployed as an endpoint with a NIM-compatible API.

Users have the flexibility to deploy their model using their own infrastructure and tooling.

Servers with APIs that conform to the OpenAI/NIM API standard are expected to work seamlessly out of the box.
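Here "conform to the OpenAI/NIM API standard" means, at minimum, accepting OpenAI-style chat-completions requests. A sketch of the kind of JSON body the endpoint should handle (field names from the OpenAI chat API; values are illustrative):

```python
import json

# Minimal OpenAI-style chat-completions request body that a
# NIM-compatible endpoint is expected to accept, e.g. via POST to
# http://localhost:8000/v1/chat/completions.
request_body = {
    "model": "my_model",
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_tokens": 150,
    "temperature": 0.1,
    "top_p": 0.7,
}

print(json.dumps(request_body, indent=2))
```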

