
llm-workers-evaluation

Evaluation framework for LLM Workers.

Overview

llm-workers-evaluation provides tools for running evaluation suites against LLM scripts and reporting scores.

  • llm-workers-evaluate: CLI tool for running evaluation suites

Installation

pip install llm-workers-evaluation

This will install llm-workers (core) as a dependency.

Usage

Running Evaluations

# Basic usage
llm-workers-evaluate my-script.yaml my-suite.yaml

# With custom iteration count
llm-workers-evaluate -n 5 my-script.yaml my-suite.yaml

# With verbose output
llm-workers-evaluate --verbose my-script.yaml my-suite.yaml

# With debug mode
llm-workers-evaluate --debug my-script.yaml my-suite.yaml

Evaluation Suite Format

Evaluation suites are YAML files defining tests that return scores between 0.0 and 1.0:

shared:
  data:
    expected: "hello"
  tools: []

iterations: 10

suites:
  basic:
    data: {}
    tools: []
    tests:
      always-pass:
        do:
          eval: 1.0
      always-fail:
        do:
          eval: 0.0
      conditional:
        data:
          value: "hello"
        do:
          eval: "${1.0 if value == expected else 0.0}"

Output Format

Results are output as YAML:

final_score: 0.75
per_suite:
  basic:
    final_score: 0.75
    per_test:
      always-pass: 1.0
      always-fail: 0.0
      conditional: 1.0
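Since the results are plain YAML, they can be post-processed with any YAML parser. A minimal sketch, assuming PyYAML is installed (this is not part of the llm-workers-evaluation API; `RESULTS` inlines the example output above for illustration):

```python
import yaml  # PyYAML

# Example output from llm-workers-evaluate, inlined for illustration.
RESULTS = """\
final_score: 0.75
per_suite:
  basic:
    final_score: 0.75
    per_test:
      always-pass: 1.0
      always-fail: 0.0
      conditional: 1.0
"""

results = yaml.safe_load(RESULTS)
print("overall:", results["final_score"])
for suite_name, suite in results["per_suite"].items():
    for test_name, score in suite["per_test"].items():
        print(f"  {suite_name}/{test_name}: {score}")
```

This makes it easy to feed scores into CI gates or dashboards.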

Score Handling

  • Tests are expected to return a float between 0.0 and 1.0
  • None results are treated as 0.0
  • Non-numeric results are treated as 0.0
  • Scores below 0.0 are clamped to 0.0
  • Scores above 1.0 are clamped to 1.0
  • Exceptions during test execution result in 0.0 for that iteration
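The normalization rules above can be sketched as a small Python function (`normalize_score` is a hypothetical helper for illustration, not the package's actual implementation):

```python
def normalize_score(result):
    """Coerce a raw test result into a score in [0.0, 1.0].

    Hypothetical sketch of the documented rules: None and non-numeric
    results count as 0.0, and out-of-range values are clamped.
    """
    if not isinstance(result, (int, float)):
        return 0.0  # None or non-numeric -> 0.0
    return min(1.0, max(0.0, float(result)))  # clamp into [0.0, 1.0]

print(normalize_score(None))    # -> 0.0
print(normalize_score("oops"))  # -> 0.0
print(normalize_score(-0.5))    # clamped to 0.0
print(normalize_score(1.7))     # clamped to 1.0
```

An exception raised inside a test is handled upstream: that iteration simply scores 0.0.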

Data and Tool Merging

Data and tools are merged in order: shared -> suite -> test

  • For data: later values override earlier ones
  • For tools: lists are concatenated
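The merge semantics can be sketched as follows (`merge_config` and the tool names are hypothetical illustrations, not the package's API):

```python
def merge_config(*layers):
    """Merge config layers in order: shared -> suite -> test (sketch)."""
    data, tools = {}, []
    for layer in layers:
        data.update(layer.get("data", {}))  # later values override earlier ones
        tools += layer.get("tools", [])     # tool lists are concatenated
    return {"data": data, "tools": tools}

# Hypothetical layers mirroring the suite format above.
shared = {"data": {"expected": "hello"}, "tools": ["search"]}
suite = {"data": {}, "tools": []}
test = {"data": {"value": "hello"}, "tools": ["calc"]}

merged = merge_config(shared, suite, test)
print(merged["data"])   # {'expected': 'hello', 'value': 'hello'}
print(merged["tools"])  # ['search', 'calc']
```

This is why a test-level `data` key can shadow a shared one, while every tool defined at any level remains available to the test.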

Documentation

Full documentation: https://mrbagheera.github.io/llm-workers/

License

See main repository for license information.

Project details


Download files

Source Distribution

llm_workers_evaluation-1.1.1.tar.gz (6.8 kB)

Built Distribution

llm_workers_evaluation-1.1.1-py3-none-any.whl (9.0 kB)

File details

llm_workers_evaluation-1.1.1.tar.gz

  • Size: 6.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.13.4 Darwin/23.6.0

Hashes:

Algorithm Hash digest
SHA256 b7d063dd01d2835d9ab4ad4cb230919bddb301745fb2ad95be747ef319f8fd20
MD5 b44cecb9bddf9ebf000865c0696fe95e
BLAKE2b-256 2b7f4e01c8744fc4252e2ed8e8751ff17b3a6f4f68d7b73797df0860e5b075f2

llm_workers_evaluation-1.1.1-py3-none-any.whl

Hashes:

Algorithm Hash digest
SHA256 5283fb69f59652b292bb4beeacb5a1efc0fff5961de5bce5677bfb1e62620d11
MD5 b3363bb1dffffb17f046bdf218a662df
BLAKE2b-256 1ae1ff05284d6b45b2c8892a0d7806e210757dec546967af7c40b4f98c7f0b55
