
Flow Benchmark Tools

Create and run LLM benchmarks.

Installation

Just the library:

pip install flow-benchmark-tools==1.5.0

Library + Example benchmarks (see below):

pip install "flow-benchmark-tools[examples]:1.5.0"

Usage

Running example RAG benchmarks

Two end-to-end benchmark examples are provided in the examples folder: a LangChain RAG application and an OpenAI Assistant agent.

To run the LangChain RAG benchmark:

python src/examples/langchain_rag_agent.py

To run the OpenAI Assistant benchmark:

python src/examples/openai_assistant_agent.py

The RAG benchmark cases are defined in data/rag_benchmark.jsonl.
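
Each line of that file is one benchmark case. The exact schema is defined by the library, so the field names in this sample are illustrative only:

    {"input": "What is retrieval-augmented generation?", "expected_output": "An approach that grounds LLM answers in retrieved documents."}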

The two examples follow the typical usage pattern of the library (a minimal sketch follows the list):

  • define an agent by implementing the BenchmarkAgent interface and overriding the run_benchmark_case method (you can also override the before and after methods, if needed),
  • create a set of benchmark cases, typically as a JSONL file such as data/rag_benchmark.jsonl,
  • use a BenchmarkRunner to run the benchmark.
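
The sketch below illustrates this pattern. The class names (BenchmarkAgent, BenchmarkCase, BenchmarkCaseResponse, BenchmarkRunner) and the run_benchmark_case method come from this README; the import path, the runner's run signature, and the input/output field names are assumptions for illustration, not the library's confirmed API:

    # Sketch only: the class names are taken from this README; the import
    # path and the signatures used here (runner.run, case.input, output=)
    # are assumptions for illustration.
    from flow_benchmark_tools import (
        BenchmarkAgent,
        BenchmarkCase,
        BenchmarkCaseResponse,
        BenchmarkRunner,
    )

    class MyRAGAgent(BenchmarkAgent):
        def run_benchmark_case(self, case: BenchmarkCase) -> BenchmarkCaseResponse:
            # Replace this echo with a call into your RAG pipeline.
            answer = f"echo: {case.input}"
            return BenchmarkCaseResponse(output=answer)

    if __name__ == "__main__":
        runner = BenchmarkRunner()
        runner.run(MyRAGAgent(), "data/rag_benchmark.jsonl")  # signature assumed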

Running example criteria benchmark

A criteria benchmark example is also provided in the examples folder: an application that assesses the quality of pre-computed LLM outputs against the criteria defined in each benchmark case.

To run the Criteria benchmark:

python src/examples/criteria_evaluation_agent.py

The criteria benchmark cases are defined in data/criteria_benchmark.jsonl.
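
Each line of that file is one case; as the list below notes, the case's "extra" dictionary carries a "criteria" string. Apart from those documented keys, the field names in this sample are illustrative:

    {"input": "Summarize the meeting notes.", "output": "The team agreed to ship v1.5 next week.", "extra": {"criteria": "The summary must state the agreed ship date."}}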

This example follows a different usage pattern of the library (a sketch follows the list):

  • define an agent implementing the BenchmarkAgent interface; in this application each case already contains the output we want to evaluate, so the run_benchmark_case method simply repackages each BenchmarkCase as a BenchmarkCaseResponse,
  • create a set of quality benchmark cases, typically as a JSONL file such as data/criteria_benchmark.jsonl; here each case's "extra" dictionary includes a "criteria" string,
  • use a custom CriteriaBenchmarkRunner that overrides the _execute_benchmark_case method and runs the benchmark with an evaluator inheriting from CriteriaEvaluator.
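
A compact sketch of this variation follows. Only the class and method names (BenchmarkAgent, BenchmarkCaseResponse, CriteriaBenchmarkRunner, _execute_benchmark_case, CriteriaEvaluator) appear in this README; the attribute names, constructor signatures, and evaluator wiring below are assumptions:

    # Sketch only: class and method names are from this README; attribute
    # names, constructor signatures, and the evaluator API are assumptions.
    from flow_benchmark_tools import (
        BenchmarkAgent,
        BenchmarkCase,
        BenchmarkCaseResponse,
        BenchmarkRunner,
    )

    class CriteriaEvaluationAgent(BenchmarkAgent):
        def run_benchmark_case(self, case: BenchmarkCase) -> BenchmarkCaseResponse:
            # Each case already holds the output to evaluate; no LLM call here.
            return BenchmarkCaseResponse(output=case.output)  # attribute assumed

    class CriteriaBenchmarkRunner(BenchmarkRunner):
        def __init__(self, evaluator):
            super().__init__()  # base constructor signature assumed
            self.evaluator = evaluator  # instance of a CriteriaEvaluator subclass

        def _execute_benchmark_case(self, agent, case):
            response = agent.run_benchmark_case(case)
            # Grade the response against the case's "criteria" string
            # (evaluator method name assumed).
            return self.evaluator.evaluate(case, response)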

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

flow_benchmark_tools-1.5.0.tar.gz (855.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

flow_benchmark_tools-1.5.0-py3-none-any.whl (26.9 kB)

Uploaded Python 3

File details

Details for the file flow_benchmark_tools-1.5.0.tar.gz.

File metadata

  • Download URL: flow_benchmark_tools-1.5.0.tar.gz
  • Upload date:
  • Size: 855.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.8

File hashes

Hashes for flow_benchmark_tools-1.5.0.tar.gz
  • SHA256: fd845e29e278a98e9099e7d3475ce14cf375a4aac7d7d6f39d7b65c454fdb6b4
  • MD5: f7de385f7499afcbb1a83b4950b28cad
  • BLAKE2b-256: 3b6ab73d1267120af320b8b10552fef68bc6e92bcb2a5c98d66c1cf9d525e799


File details

Details for the file flow_benchmark_tools-1.5.0-py3-none-any.whl.

File hashes

Hashes for flow_benchmark_tools-1.5.0-py3-none-any.whl
  • SHA256: 710dff2646a17326a346f739e3c56e671dc2e2b15e90e13d0be7c91082d1a149
  • MD5: 8791644cdf66a2dba4faf6a2531e6c4c
  • BLAKE2b-256: bcbb09ae6ddf9abdacf7325f9d80b625d0f7493d27de2dad458c0608647399be

