Validate models for production
Project description
Bench
Bench is a tool for evaluating LLMs for production use cases. Whether you are comparing different LLMs, iterating on prompts, or tuning generation hyperparameters like temperature and the number of tokens, Bench provides one touch point for all of your LLM performance evaluation.
If you have encountered a need for any of the following in your LLM work, then Bench can help with your evaluation:
- to standardize the workflow of LLM evaluation with a common interface across tasks and use cases
- to test whether open-source LLMs can do as well as the top closed-source LLM API providers on your specific data
- to translate the rankings on LLM leaderboards and benchmarks into scores that you care about for your actual use case
Join the Bench community on Discord.
For bug fixes and feature requests, please file a GitHub issue.
Package installation
Install Bench into your Python environment with optional dependencies for serving results locally (recommended):
pip install 'arthur-bench[server]'
Alternatively, install Bench into your Python environment with minimal dependencies:
pip install arthur-bench
For further setup instructions, visit our installation guide.
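To quickly confirm the install, you can check the package version using only the Python standard library (a small sanity check; the distribution name on PyPI is arthur-bench):

from importlib.metadata import version

# Prints the installed version, e.g. "0.3.0";
# raises PackageNotFoundError if the install did not succeed.
print(version("arthur-bench"))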
Using Bench
For a more in-depth walkthrough of using Bench, visit our quickstart walkthrough and our test suite creation guide in our docs.
To make sure you can run test suites in Bench, run the following snippet to create a test suite and score a set of candidate outputs:
from arthur_bench.run.testsuite import TestSuite

suite = TestSuite(
    "bench_quickstart",
    "exact_match",
    input_text_list=["What year was FDR elected?", "What is the opposite of down?"],
    reference_output_list=["1932", "up"]
)
suite.run("quickstart_run", candidate_output_list=["1932", "up is the opposite of down"])
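As the name suggests, the "exact_match" scorer checks whether each candidate output matches its reference exactly, so the second candidate above ("up is the opposite of down") scores as a miss against the reference "up". A minimal sketch of the idea, illustrative only and not Bench's internal implementation:

# Illustrative sketch of an exact-match comparison (not Bench's source code).
def exact_match_score(candidate: str, reference: str) -> float:
    """Return 1.0 if the candidate matches the reference exactly, else 0.0."""
    return 1.0 if candidate == reference else 0.0

print(exact_match_score("1932", "1932"))                      # 1.0
print(exact_match_score("up is the opposite of down", "up"))  # 0.0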
Saved test suites can be loaded later to benchmark performance over time, without needing to re-prepare reference data:
existing_suite = TestSuite("bench_quickstart", "exact_match")
existing_suite.run("quickstart_new_run", candidate_output_list=["1936", "up"])
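Because runs are identified by name within a suite, you can also score several candidate models against the same reference data and compare the runs side by side. A sketch using only the API shown above, with hard-coded outputs standing in for two hypothetical models:

from arthur_bench.run.testsuite import TestSuite

# Load the saved suite; reference inputs and outputs come from disk.
suite = TestSuite("bench_quickstart", "exact_match")

# Hypothetical outputs from two different models or prompt variants.
outputs_model_a = ["1932", "up"]
outputs_model_b = ["1933", "up"]

# One named run per candidate makes the comparison easy to find in the UI.
suite.run("model_a_run", candidate_output_list=outputs_model_a)
suite.run("model_b_run", candidate_output_list=outputs_model_b)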
To view the results for these runs in the local UI that comes with the bench package, run bench from the command line (this requires the optional server dependencies to be installed):

bench
Running Bench from source
To launch Bench from source:
- Install the dependencies:
pip install -e '.[server]'
- Build the front end:
cd arthur_bench/server/js
npm i
npm run build
- Launch the server:
bench
Because the package was installed with pip install -e, local changes to the source will be picked up. However, the server needs to be restarted for those changes to take effect.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution: arthur_bench-0.3.0.tar.gz
Built Distribution: arthur_bench-0.3.0-py3-none-any.whl
File details
Details for the file arthur_bench-0.3.0.tar.gz.
File metadata
- Download URL: arthur_bench-0.3.0.tar.gz
- Upload date:
- Size: 5.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 9d85029715b208d3bacfd33e8753f637828cf2a74df57b369c2a4e5f235ad0f6
MD5 | ae9f0a11f1ce8b6f34ed4a632c922295
BLAKE2b-256 | 2ecc10062e19db1535f7552982007c85c8ade763c5f3b91240e608821f3cbfd5
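To check a downloaded archive against the SHA256 digest above, one option is the Python standard library (a sketch; the local file path is an assumption):

import hashlib

EXPECTED_SHA256 = "9d85029715b208d3bacfd33e8753f637828cf2a74df57b369c2a4e5f235ad0f6"

# Assumes the sdist was downloaded into the current working directory.
with open("arthur_bench-0.3.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, "SHA256 mismatch: re-download the file"
print("SHA256 verified")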
File details
Details for the file arthur_bench-0.3.0-py3-none-any.whl.
File metadata
- Download URL: arthur_bench-0.3.0-py3-none-any.whl
- Upload date:
- Size: 5.1 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.18
File hashes
Algorithm | Hash digest
---|---
SHA256 | 10a42f73ff3ab719798bc1a115de2e1cfc4dce90e9365a7c3764f2c572192997
MD5 | aee30290943f6f98c8861fa51f04993d
BLAKE2b-256 | ba82208abb7e25c3a952799496a3217cf97e5287b2ccef972d7b856cd7e15de6