
A library containing LLM benchmarking tools

Project description

Flow Benchmark Tools

Create and run LLM benchmarks.

Installation

Just the library:

pip install flow-benchmark-tools==1.4.0

Library + example benchmarks (see below):

pip install "flow-benchmark-tools[examples]:1.3.0"

Usage

Running example RAG benchmarks

Two end-to-end benchmark examples are provided in the examples folder: a LangChain RAG application and an OpenAI Assistant agent.

To run the LangChain RAG benchmark:

python src/examples/langchain_rag_agent.py

To run the OpenAI Assistant benchmark:

python src/examples/openai_assistant_agent.py

The RAG benchmark cases are defined in data/rag_benchmark.jsonl.

The two examples follow the typical usage pattern of the library:

  • define an agent by implementing the BenchmarkAgent interface and overriding the run_benchmark_case method (you can also override the before and after methods, if needed),
  • create a set of benchmark cases, typically as a JSONL file such as data/rag_benchmark.jsonl,
  • use a BenchmarkRunner to run the benchmark, as in the sketch below.
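
In code, the pattern looks roughly like the following sketch. The class and method names come from this README; the import path, constructor arguments, field names, and signatures are assumptions, so consult the examples folder for the actual API:

# Hypothetical sketch of the usage pattern; names are from this README,
# everything else (signatures, fields, entry point) is an assumption.
from flow_benchmark_tools import (
    BenchmarkAgent,
    BenchmarkCase,
    BenchmarkCaseResponse,
    BenchmarkRunner,
)

def my_rag_pipeline(question: str) -> str:
    # Placeholder for a real retrieval-augmented generation pipeline.
    return "stub answer"

class MyRAGAgent(BenchmarkAgent):
    def run_benchmark_case(self, case: BenchmarkCase) -> BenchmarkCaseResponse:
        # Run the case's input through the pipeline and wrap the answer.
        answer = my_rag_pipeline(case.input)  # "input" field name assumed
        return BenchmarkCaseResponse(output=answer)

# Cases live one per line in a JSONL file such as data/rag_benchmark.jsonl.
runner = BenchmarkRunner()
runner.run(MyRAGAgent(), "data/rag_benchmark.jsonl")  # entry point assumed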

Running example criteria benchmark

A criteria benchmark example is also provided in the examples folder: an application that assesses the quality of pre-computed LLM outputs against the criteria defined in each benchmark case.

To run the Criteria benchmark:

python src/examples/criteria_evaluation_agent.py

The criteria benchmark cases are defined in data/criteria_benchmark.jsonl.

This example follows a different usage pattern of the library:

  • define an agent implementing the BenchmarkAgent interface. Here each case already contains the output we want to evaluate, so run_benchmark_case is overridden to simply repackage each BenchmarkCase as a BenchmarkCaseResponse,
  • create a set of quality benchmark cases, typically as a JSONL file such as data/criteria_benchmark.jsonl, where each case's "extra" dictionary includes a "criteria" string,
  • use a custom CriteriaBenchmarkRunner, which overrides the _execute_benchmark_case method to run the benchmark with an evaluator that inherits from CriteriaEvaluator (see the sketch below).
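
A rough sketch of this variation follows; only the names are taken from the description above, while the signatures, field names, and evaluator wiring are assumptions:

# Hypothetical sketch of the criteria pattern; signatures and wiring assumed.
from flow_benchmark_tools import (
    BenchmarkAgent,
    BenchmarkCase,
    BenchmarkCaseResponse,
    BenchmarkRunner,
)

class PrecomputedOutputAgent(BenchmarkAgent):
    def run_benchmark_case(self, case: BenchmarkCase) -> BenchmarkCaseResponse:
        # Each case already carries the LLM output to judge, so simply
        # repackage it instead of calling a model.
        return BenchmarkCaseResponse(output=case.output)  # field name assumed

class CriteriaBenchmarkRunner(BenchmarkRunner):
    def _execute_benchmark_case(self, agent, case):
        response = agent.run_benchmark_case(case)
        # Each case's "extra" dict includes a "criteria" string, e.g.
        # {"extra": {"criteria": "The answer must cite its sources."}}
        criteria = case.extra["criteria"]
        # self.evaluator is assumed to inherit from CriteriaEvaluator.
        return self.evaluator.evaluate(response, criteria)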

Download files

Download the file for your platform.

Source Distribution

flow_benchmark_tools-1.4.0.tar.gz (855.6 kB)


Built Distribution


flow_benchmark_tools-1.4.0-py3-none-any.whl (26.8 kB)


File details

Details for the file flow_benchmark_tools-1.4.0.tar.gz.

File metadata

  • Download URL: flow_benchmark_tools-1.4.0.tar.gz
  • Size: 855.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.8

File hashes

Hashes for flow_benchmark_tools-1.4.0.tar.gz:

  • SHA256: f0cf07f8348f77300326f140ffc3fbd0b245c278390b1bdc14bdc8aaa73228f6
  • MD5: 9f5416ef0e3c17735c24e5f9273fb1f3
  • BLAKE2b-256: 876622706b1934d66105d4321c3e4a55225be145e1ed545f704f1e2b69a7abf8

To verify a download, recompute its hash locally and compare it with the values above, for example with Python's hashlib:
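
# Compare the local file's SHA256 digest against the published value above.
import hashlib

expected = "f0cf07f8348f77300326f140ffc3fbd0b245c278390b1bdc14bdc8aaa73228f6"
with open("flow_benchmark_tools-1.4.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, "hash mismatch: do not install this file"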

File details

Details for the file flow_benchmark_tools-1.4.0-py3-none-any.whl.


File hashes

Hashes for flow_benchmark_tools-1.4.0-py3-none-any.whl:

  • SHA256: 1c6e8936420a75f13852b0e997674b7b76391a2b711cd9d6eecd640f899923a1
  • MD5: b198b7376961e11ede9430d64dee5473
  • BLAKE2b-256: 12f8cf5d492119eebc8d63375529b99150e5a31640273fc8b119663904c3b939

