Benchmark performance of **any Foundation Model (FM)** deployed on **any AWS Generative AI service**, be it **Amazon SageMaker**, **Amazon Bedrock**, **Amazon EKS**, or **Amazon EC2**. The FMs could be deployed on these platforms either directly through `FMBench`, or, if they are already deployed, they can be benchmarked through the **Bring your own endpoint** mode supported by `FMBench`.
Overview
Benchmark any Foundation Model (FM) on any AWS Generative AI service [Amazon SageMaker, Amazon Bedrock, Amazon EKS, Bring your own endpoint etc.]
A key challenge with FMs is the ability to benchmark their performance in terms of inference latency, throughput and cost, so as to determine which model, running on what combination of hardware and serving stack, provides the best price-performance for a given workload.
Stated as a business problem, the ask is: “What is the dollar cost per transaction for a given generative AI workload that serves a given number of users while keeping the response time under a target threshold?”
But to really answer this question, we need to answer an engineering question (an optimization problem, actually) corresponding to this business problem: “What is the minimum number of instances N, of the most cost optimal instance type T, that are needed to serve a workload W while keeping the average transaction latency under L seconds?”, where
W := {R transactions per minute, average prompt token length P, average generation token length G}
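To make the arithmetic behind this question concrete, here is a minimal, purely illustrative sketch (the instance count, hourly price and request rate below are placeholder assumptions, not AWS pricing) of how cost per transaction follows from N, the hourly price of instance type T, and the request rate R in W:

```python
# Illustrative only: dollar cost per transaction for N instances of type T
# serving R transactions per minute. All numbers are placeholders.
def cost_per_transaction(num_instances: int,
                         hourly_price_usd: float,
                         transactions_per_minute: float) -> float:
    hourly_fleet_cost = num_instances * hourly_price_usd
    transactions_per_hour = transactions_per_minute * 60
    return hourly_fleet_cost / transactions_per_hour

# e.g. 2 instances at an assumed $5.00/hour serving 60 transactions per minute
print(f"${cost_per_transaction(2, 5.00, 60):.4f} per transaction")  # -> $0.0028
```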
This foundation model benchmarking tool (a.k.a. `FMBench`) is a tool to answer the above engineering question and thus answer the original business question about how to get the best price-performance for a given workload. Here is one of the plots generated by `FMBench` to help answer the above question (the instance types in the legend have been blurred out on purpose; you can find them in the actual plot generated on running `FMBench`).
Models benchmarked
Configuration files are available in the configs folder for the following models in this repo.
Llama3 on Amazon SageMaker
Llama3 is now available on SageMaker (read blog post), and you can now benchmark it using `FMBench`. Here are the config files for benchmarking `Llama3-8b-instruct` and `Llama3-70b-instruct` on `ml.p4d.24xlarge`, `ml.inf2.24xlarge` and `ml.g5.12xlarge` instances.
- Config file for `Llama3-8b-instruct` on `ml.p4d.24xlarge` and `ml.g5.12xlarge`.
- Config file for `Llama3-70b-instruct` on `ml.p4d.24xlarge` and `ml.g5.48xlarge`.
- Config file for `Llama3-8b-instruct` on `ml.inf2.24xlarge` and `ml.g5.12xlarge`.
Full list of benchmarked models
Model | SageMaker g4dn/g5/p3 | SageMaker Inf2 | SageMaker P4 | SageMaker P5 | Bedrock On-demand throughput | Bedrock provisioned throughput |
---|---|---|---|---|---|---|
Anthropic Claude-3 Sonnet | ✅ | ✅ | ||||
Anthropic Claude-3 Haiku | ✅ | |||||
Mistral-7b-instruct | ✅ | ✅ | ✅ | ✅ | ||
Mistral-7b-AWQ | ✅ | |||||
Mixtral-8x7b-instruct | ✅ | |||||
Llama3-8b instruct | ✅ | ✅ | ✅ | ✅ | ✅ | |
Llama3-70b instruct | ✅ | ✅ | ✅ | ✅ | ||
Llama2-13b chat | ✅ | ✅ | ✅ | ✅ | ||
Llama2-70b chat | ✅ | ✅ | ✅ | ✅ | ||
Amazon Titan text lite | ✅ | |||||
Amazon Titan text express | ✅ | |||||
Cohere Command text | ✅ | |||||
Cohere Command light text | ✅ | |||||
AI21 J2 Mid | ✅ | |||||
AI21 J2 Ultra | ✅ | |||||
Gemma-2b | ✅ | |||||
Phi-3-mini-4k-instruct | ✅ | |||||
distilbert-base-uncased | ✅ |
New in this release
v1.0.49
- Streaming support for Amazon SageMaker and Amazon Bedrock.
- Per-token latency metrics such as time to first token (TTFT) and mean time per-output token (TPOT).
- Misc. bug fixes.
v1.0.48
- Faster result file download at the end of a test run.
- `Phi-3-mini-4k-instruct` configuration file.
- Tokenizer and misc. bug fixes.
v1.0.47
- Run `FMBench` as a Docker container.
- Bug fixes for GovCloud support.
- Updated README for EKS cluster creation.
Description
`FMBench` is a Python package for running performance benchmarks for any Foundation Model (FM) deployed on any AWS Generative AI service, be it Amazon SageMaker, Amazon Bedrock, Amazon EKS, or Amazon EC2. The FMs could be deployed on these platforms either directly through `FMBench`, or, if they are already deployed, they can be benchmarked through the Bring your own endpoint mode supported by `FMBench`.
Here are some salient features of `FMBench`:
- Highly flexible: it allows for using any combination of instance types (`g5`, `p4d`, `p5`, `Inf2`), inference containers (`DeepSpeed`, `TensorRT`, `HuggingFace TGI` and others) and parameters such as tensor parallelism, rolling batch etc., as long as those are supported by the underlying platform.
- Benchmark any model: it can be used to benchmark open-source models, third party models, and proprietary models trained by enterprises on their own data.
- Run anywhere: it can be run on any AWS platform where we can run Python, such as Amazon EC2, Amazon SageMaker, or even the AWS CloudShell. It is important to run this tool on an AWS platform so that internet round trip time does not get included in the end-to-end response time latency.
Workflow for FMBench
The workflow for `FMBench` is as follows:
    Create configuration file
        |
        |-----> Deploy model on SageMaker/Use models on Bedrock/Bring your own endpoint
        |
        |-----> Run inference against deployed endpoint(s)
        |
        |-----> Create a benchmarking report
- Create a dataset of different prompt sizes and select one or more such datasets for running the tests.

  - Currently `FMBench` supports datasets from LongBench and filters out individual items from the dataset based on their size in tokens (for example, prompts less than 500 tokens, between 500 and 1000 tokens, and so on). Alternatively, you can download the folder from this link to load the data.

- Deploy any model that is deployable on SageMaker on any supported instance type (`g5`, `p4d`, `Inf2`).

  - Models can either be available via SageMaker JumpStart (list available here) or not available via JumpStart but still deployable on SageMaker through the low-level boto3 (Python) SDK (Bring Your Own Script).
  - Model deployment is completely configurable in terms of the inference container to use, environment variables to set, `serving.properties` file to provide (for inference containers such as DJL that use it) and instance type to use.

- Benchmark FM performance in terms of inference latency, transactions per minute and dollar cost per transaction for any FM that can be deployed on SageMaker.

  - Tests are run for each combination of the configured concurrency levels (i.e. transactions, or inference requests, sent to the endpoint in parallel) and datasets. For example, run multiple datasets of, say, prompt sizes between 3000 and 4000 tokens at concurrency levels of 1, 2, 4, 6, 8 etc. to test how many transactions of what token length the endpoint can handle while still maintaining an acceptable inference latency. A minimal sketch of such a concurrency sweep is shown after this list.

- Generate a report that compares and contrasts the performance of the model over different test configurations and stores the reports in an Amazon S3 bucket.

  - The report is generated in Markdown format and consists of plots, tables and text that highlight the key results and provide an overall recommendation on the best combination of instance type and serving stack to use for the model under test for a dataset of interest.
  - The report is created as an artifact of reproducible research so that anyone having access to the model, instance type and serving stack can run the code and recreate the same results and report.

- Multiple configuration files that can be used as reference for benchmarking new models and instance types.
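To make the concurrency sweep concrete, here is a minimal, illustrative sketch of sending requests at a given concurrency level against a SageMaker endpoint and recording per-request latency. This is not FMBench's internal implementation; the endpoint name and payload shape are placeholder assumptions:

```python
# Illustrative only: run a small concurrency sweep against a SageMaker endpoint.
import json
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

smr = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "my-llama2-7b-endpoint"  # placeholder endpoint name

def one_request(prompt: str) -> float:
    """Send a single inference request and return its latency in seconds."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 100}}  # assumed shape
    start = time.perf_counter()
    smr.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                        ContentType="application/json",
                        Body=json.dumps(payload))
    return time.perf_counter() - start

def run_at_concurrency(concurrency: int) -> list[float]:
    """Send `concurrency` requests in parallel and collect their latencies."""
    prompts = ["Summarize: ..."] * concurrency
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(one_request, prompts))

for c in (1, 2, 4):
    latencies = run_at_concurrency(c)
    print(f"concurrency={c}, avg latency={sum(latencies)/len(latencies):.2f}s")
```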
Getting started
`FMBench` is available as a Python package on PyPI and is run as a command line tool once it is installed. All data, including metrics, reports and results, is stored in an Amazon S3 bucket.
While technically you can run `FMBench` on any AWS compute, practically speaking we either run it on a SageMaker Notebook or on EC2. Both these options are described below.
👉 The following sections discuss running `FMBench` the tool, as distinct from where the FM itself is deployed. For example, we could run `FMBench` on EC2 while the model being benchmarked is deployed on SageMaker or even Bedrock.
Quickstart
FMBench on a SageMaker Notebook
- Each `FMBench` run works with a configuration file that contains the information about the model, the deployment steps, and the tests to run. A typical `FMBench` workflow involves either directly using an already provided config file from the `configs` folder in the `FMBench` GitHub repo or editing an already provided config file as per your own requirements (say you want to try benchmarking on a different instance type, or a different inference container etc.).

  👉 A simple config file with key parameters annotated is included in this repo, see `config-llama2-7b-g5-quick.yml`. This file benchmarks performance of Llama2-7b on an `ml.g5.xlarge` instance and an `ml.g5.2xlarge` instance. You can use this config file as is for this Quickstart.
- Launch the AWS CloudFormation template included in this repository; launch buttons are provided for us-east-1 (N. Virginia), us-west-2 (Oregon) and us-gov-west-1 (GovCloud N. California). The CloudFormation template creates the following resources within your AWS account: Amazon S3 buckets, an Amazon IAM role and an Amazon SageMaker Notebook with this repository cloned. A read S3 bucket is created which contains all the files (configuration files, datasets) required to run `FMBench`, and a write S3 bucket is created which will hold the metrics and reports generated by `FMBench`. The CloudFormation stack takes about 5 minutes to create.
- Once the CloudFormation stack is created, navigate to SageMaker Notebooks and open the `fmbench-notebook`.

- On the `fmbench-notebook` open a Terminal and run the following commands.

      conda create --name fmbench_python311 -y python=3.11 ipykernel
      source activate fmbench_python311
      pip install -U fmbench
- Now you are ready to run `fmbench` with the following command line. We will use a sample config file placed in the S3 bucket by the CloudFormation stack for a quick first run.

  - We benchmark performance for the `Llama2-7b` model on an `ml.g5.xlarge` and an `ml.g5.2xlarge` instance type, using the `huggingface-pytorch-tgi-inference` inference container. This test would take about 30 minutes to complete and cost about $0.20.

  - It uses a simple relationship of 750 words equals 1000 tokens; to get a more accurate representation of token counts use the `Llama2 tokenizer` (instructions are provided in the next section). It is strongly recommended that for more accurate results on token throughput you use a tokenizer specific to the model you are testing rather than the default tokenizer (a short sketch illustrating the difference appears right after these Quickstart steps). See instructions provided later in this document on how to use a custom tokenizer.

        account=`aws sts get-caller-identity | jq .Account | tr -d '"'`
        region=`aws configure get region`
        fmbench --config-file s3://sagemaker-fmbench-read-${region}-${account}/configs/llama2/7b/config-llama2-7b-g5-quick.yml > fmbench.log 2>&1
- Open another terminal window and do a `tail -f` on the `fmbench.log` file to see all the traces being generated at runtime.

      tail -f fmbench.log

- 👉 For streaming support on SageMaker and Bedrock, check out the streaming config files included in the `configs` folder of the repo.
- The generated reports and metrics are available in the `sagemaker-fmbench-write-<replace_w_your_aws_region>-<replace_w_your_aws_account_id>` bucket. The metrics and report files are also downloaded locally into the `results` directory (created by `FMBench`) and the benchmarking report is available as a markdown file called `report.md` in the `results` directory. You can view the rendered Markdown report in the SageMaker notebook itself or download the metrics and report files to your machine for offline analysis.
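The tokenizer point above matters for reported token throughput; below is a small, illustrative comparison of the 750-words-per-1000-tokens rule of thumb against an actual tokenizer count (the tokenizer id is a stand-in; use the tokenizer of the model you are benchmarking):

```python
# Illustrative only: word-count heuristic vs. an actual tokenizer count.
from transformers import AutoTokenizer

text = "FMBench benchmarks foundation models for latency, throughput and cost."

# rule of thumb used by the quickstart config: 750 words ~= 1000 tokens
estimated_tokens = len(text.split()) * 1000 / 750

# stand-in tokenizer; swap in the tokenizer matching your model
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
actual_tokens = len(tokenizer.encode(text))

print(f"estimated: {estimated_tokens:.0f} tokens, actual: {actual_tokens} tokens")
```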
If you would like to understand what is being done under the hood by the CloudFormation template, see the DIY version with gory details.
FMBench on GovCloud
No special steps are required for running `FMBench` on GovCloud. The CloudFormation link for `us-gov-west-1` has been provided in the section above.
- Not all models available via Bedrock or other services may be available in GovCloud. The following commands show how to run `FMBench` to benchmark the Amazon Titan Text Express model in GovCloud. See the Amazon Bedrock GovCloud page for more details.
    account=`aws sts get-caller-identity | jq .Account | tr -d '"'`
    region=`aws configure get region`
    fmbench --config-file s3://sagemaker-fmbench-read-${region}-${account}/configs/bedrock/config-bedrock-titan-text-express.yml > fmbench.log 2>&1
Run FMBench on Amazon EC2
For some enterprise scenarios it might be desirable to run `FMBench` directly on an EC2 instance with no dependency on S3. Here are the steps to do this:
- Have a `t3.xlarge` (or larger) instance in the `Running` state. Make sure that the instance has at least 50GB of disk space and that the IAM role associated with your EC2 instance has the `AmazonSageMakerFullAccess` policy associated with it and `sagemaker.amazonaws.com` added to its Trust relationships.

      {
        "Effect": "Allow",
        "Principal": {
          "Service": "sagemaker.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
- Set up the `fmbench_python311` conda environment. This step requires conda to be installed on the EC2 instance; see instructions for downloading Anaconda.

      conda create --name fmbench_python311 -y python=3.11 ipykernel
      source activate fmbench_python311
      pip install -U fmbench
- Create the local directory structure needed for `FMBench` and copy all publicly available dependencies from the AWS S3 bucket for `FMBench`. This is done by running the `copy_s3_content.sh` script available as part of the `FMBench` repo.

      curl -s https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/copy_s3_content.sh | sh
- Run `FMBench` with a quickstart config file.

      fmbench --config-file /tmp/fmbench-read/configs/llama2/7b/config-llama2-7b-g5-quick.yml --local-mode yes > fmbench.log 2>&1
- Open a new Terminal, navigate to the `foundation-model-benchmarking-tool` directory and do a `tail` on `fmbench.log` to see a live log of the run.

      tail -f fmbench.log
- All metrics are stored in the `/tmp/fmbench-write` directory created automatically by the `fmbench` package. Once the run completes, all files are copied locally into a `results-*` folder as usual.
Results
Depending upon the experiments in the config file, the `FMBench` run may take a few minutes to several hours. Once the run completes, you can find the report and metrics in the local `results-*` folder in the directory from where `FMBench` was run. The report and metrics are also written to the write S3 bucket set in the config file.
Here is a screenshot of the `report.md` file generated by `FMBench`.
An internal FMBench website
You can create an internal `FMBench` website to view results from multiple runs in a single place. All `FMBench` reports are generated as Markdown files; these files can be rendered together in a website that is viewable in a web browser on your machine. We use `Quarto` to do this. The steps below describe the process you can follow.
[Prerequisites] If you have followed the Quickstart then these are already taken care of for you.

- You will need to clone the `FMBench` code repo from GitHub.
- The `results-*` folders that contain the reports and metrics from a run are present in the root folder of the `FMBench` code repo.
- Run the `render_fmbench_website.py` Python script using the following commands. This will generate a `_quarto.yml` file and render the website in the `fmbench-website` folder in the root directory of your `FMBench` repo. The website is rendered using the `Quarto` container downloaded from `registry.gitlab.com/quarto-forge/docker/quarto`.

      source activate fmbench_python311
      curl -s https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/render_fmbench_website.py -o render_fmbench_website.py
      python render_fmbench_website.py
- The website is created in the local directory `fmbench-website`. You can copy this folder onto a webserver that you have, OR the easiest option is to zip up this folder, download it to your local machine and use the Python3 `http.server` module to host the website.

      cd fmbench-website; zip -r9 ../fmbench-website.zip *; cd -
- Download `fmbench-website.zip` to your local machine. Extract the contents from the `fmbench-website.zip` file, navigate to the `fmbench-website` directory and run the Python3 webserver. This will start a local webserver. You should see traces being printed out on the console indicating that the webserver has started.

      python3 -m http.server 8080
- Open `http://localhost:8080/` in your browser and you should be able to see the `FMBench` website with all the reports that were present in the `results-*` folder in your `FMBench` installation. The following screenshot shows a picture of the `FMBench` website with links to multiple reports.
Benchmark models deployed on different AWS Generative AI services
`FMBench` comes packaged with configuration files for benchmarking models on different AWS Generative AI services.
Benchmark models on Bedrock
Choose any config file from the `bedrock` folder and either run these directly or use them as templates for creating new config files specific to your use-case. Here is an example for benchmarking the `Llama3` models on Bedrock.
fmbench --config-file https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/src/fmbench/configs/bedrock/config-bedrock-llama3.yml > fmbench.log 2>&1
Benchmark models on SageMaker
Choose any config file from the model specific folders, for example the `Llama3` folder for the `Llama3` family of models. These configuration files also include instructions for `FMBench` to first deploy the model on SageMaker using your configured instance type and inference parameters of choice, and then run the benchmarking. Here is an example for benchmarking the `Llama3-8b` model on an `ml.inf2.24xlarge` and an `ml.g5.12xlarge` instance.
fmbench --config-file https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/src/fmbench/configs/llama3/8b/config-llama3-8b-inf2-g5.yml > fmbench.log 2>&1
Benchmark models on EKS
You can use `FMBench` to benchmark models hosted on EKS. This can be done in one of two ways:

- Deploy the model on your EKS cluster independently of `FMBench` and then benchmark it through the Bring your own endpoint mode.
- Deploy the model on your EKS cluster through `FMBench` and then benchmark it.
The steps for deploying the model on your EKS cluster are described below.
👉 EKS cluster creation itself is not a part of the `FMBench` functionality; the cluster needs to exist before you run the following steps. Steps for cluster creation are provided in this file, but it would be best to consult the DoEKS repo on GitHub for comprehensive instructions.
- Add the following IAM policies to your existing `FMBench` role:

  - AmazonEKSClusterPolicy: This policy provides Kubernetes the permissions it requires to manage resources on your behalf.
  - AmazonEKS_CNI_Policy: This policy provides the Amazon VPC CNI Plugin (amazon-vpc-cni-k8s) the permissions it requires to modify the IP address configuration on your EKS worker nodes. This permission set allows the CNI to list, describe, and modify Elastic Network Interfaces on your behalf.
  - AmazonEKSWorkerNodePolicy: This policy allows Amazon EKS worker nodes to connect to Amazon EKS Clusters.
- Once the EKS cluster is available you can use either of the following two config files, or create your own config files using these as examples, for running benchmarking for these models. These config files require that the EKS cluster has been created as per the steps in these instructions.

  - config-llama3-8b-eks-inf2.yml: Deploy Llama3 on Trn1/Inf2 instances.
  - config-mistral-7b-eks-inf2.yml: Deploy Mistral 7b on Trn1/Inf2 instances.

  For more information about the blueprints used by FMBench to deploy these models, view: DoEKS docs gen-ai.
- Run the `Llama3-8b` benchmarking using the command below (replace the config file as needed for a different model). This will first deploy the model on your EKS cluster and then run benchmarking on the deployed model.

      fmbench --config-file https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/src/fmbench/configs/llama3/8b/config-llama3-8b-eks-inf2.yml > fmbench.log 2>&1
- As the model is getting deployed you might want to run the following `kubectl` commands to monitor the deployment progress. Set the model_namespace to `llama3`, `mistral` or a different model namespace as appropriate.

  - `kubectl get pods -n <model_namespace> -w`: Watch the pods in the model specific namespace.
  - `kubectl -n karpenter get pods`: Get the pods in the karpenter namespace.
  - `kubectl describe pod -n <model_namespace> <pod-name>`: Describe a specific pod in the model specific namespace to view the live logs.
Benchmark models on EC2
You can use `FMBench` to benchmark models hosted on EC2. This can be done in one of two ways:

- Deploy the model on your EC2 instance independently of `FMBench` and then benchmark it through the Bring your own endpoint mode.
- Deploy the model on your EC2 instance through `FMBench` and then benchmark it.
The steps for deploying the model on your EC2 instance are described below.
👉 In this configuration both the model being benchmarked and `FMBench` are deployed on the same EC2 instance.
- Create a new EC2 instance suitable for hosting an LMI (Large Model Inference container) as per the steps described here.
- Install `FMBench` on this instance and run benchmarking for a desired model using one of the config files included in the `FMBench` repo or create your own.

  - Connect to your instance using any of the options in EC2 (SSH/EC2 Connect) and run the following in the EC2 terminal. These commands install Anaconda on the instance, which is then used to create a new `conda` environment for `FMBench`.

        # see instructions for downloading anaconda from https://www.anaconda.com/download
        curl -O https://repo.anaconda.com/archive/Anaconda3-2023.09-0-Linux-x86_64.sh
        chmod +x Anaconda3-2023.09-0-Linux-x86_64.sh
        ./Anaconda3-2023.09-0-Linux-x86_64.sh
        export PATH=/home/ubuntu/anaconda3/bin:$PATH
  - Set up the `fmbench_python311` conda environment.

        conda create --name fmbench_python311 -y python=3.11 ipykernel
        source activate fmbench_python311
        pip install -U fmbench
  - Create the local directory structure needed for `FMBench` and copy all publicly available dependencies from the AWS S3 bucket for `FMBench`. This is done by running the `copy_s3_content.sh` script available as part of the `FMBench` repo.

        curl -s https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/copy_s3_content.sh | sh
  - To download the model files from Hugging Face, create a `hf_token.txt` file in the `/tmp/fmbench-read/scripts/` directory containing the Hugging Face token you would like to use. In the command below replace `hf_yourtokenstring` with your Hugging Face token.

        echo hf_yourtokenstring > /tmp/fmbench-read/scripts/hf_token.txt
  - Run `FMBench` with a packaged or a custom config file. This step will also deploy the model on the EC2 instance.

        # the --write-bucket parameter value is just a placeholder and an actual S3 bucket is not required
        fmbench --config-file /tmp/fmbench-read/configs/llama3/8b/config-ec2-llama3-8b.yml --local-mode yes --write-bucket placeholder > fmbench.log 2>&1
  - Open a new Terminal, navigate to the `foundation-model-benchmarking-tool` directory and do a `tail` on `fmbench.log` to see a live log of the run.

        tail -f fmbench.log
  - All metrics are stored in the `/tmp/fmbench-write` directory created automatically by the `fmbench` package. Once the run completes, all files are copied locally into a `results-*` folder as usual.
Run FMBench as a Docker container (⚠️ Experimental)
You can now run `FMBench` on any platform where you can run a Docker container, for example an EC2 VM, SageMaker Notebook etc. The advantage is that you do not have to install anything locally, so no `conda` installs are needed anymore. Here are the steps to do that.
- Create the local directory structure needed for `FMBench` and copy all publicly available dependencies from the AWS S3 bucket for `FMBench`. This is done by running the `copy_s3_content.sh` script available as part of the `FMBench` repo. You can place model specific tokenizers and any new configuration files you create in the `/tmp/fmbench-read` directory that is created after running the following command.

      curl -s https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/copy_s3_content.sh | sh
- That's it! You are now ready to run the container.

      # set the config file path to point to the config file of interest
      CONFIG_FILE=https://raw.githubusercontent.com/aws-samples/foundation-model-benchmarking-tool/main/src/fmbench/configs/llama2/7b/config-llama2-7b-g5-quick.yml
      docker run -v $(pwd)/fmbench:/app \
        -v /tmp/fmbench-read:/tmp/fmbench-read \
        -v /tmp/fmbench-write:/tmp/fmbench-write \
        aarora79/fmbench:v1.0.47 \
        "fmbench --config-file ${CONFIG_FILE} --local-mode yes --write-bucket placeholder > fmbench.log 2>&1"
- The above command will create an `fmbench` directory inside the current working directory. This directory contains the `fmbench.log` file and the `results-*` folder that is created once the run finishes.
Advanced functionality
Beyond running `FMBench` with the configuration files provided, you may want to try out bringing your own dataset or endpoint to `FMBench`.
Bring your own endpoint (a.k.a. support for external endpoints)
If you have an endpoint deployed on say `Amazon EKS` or `Amazon EC2`, or have your models hosted on a fully-managed service such as `Amazon Bedrock`, you can still bring your endpoint to `FMBench` and run tests against your endpoint. To do this you need to do the following:
- Create a derived class from the `FMBenchPredictor` abstract class and provide implementation for the constructor, the `get_predictions` method and the `endpoint_name` property. See `SageMakerPredictor` for an example. Save this file locally as say `my_custom_predictor.py` (a minimal sketch is shown at the end of this list).

- Upload your new Python file (`my_custom_predictor.py`) for your custom FMBench predictor to your `FMBench` read bucket and the scripts prefix specified in the `s3_read_data` section (`read_bucket` and `scripts_prefix`).

- Edit the configuration file you are using for your `FMBench` run as follows:

  - Skip the deployment step by setting the `2_deploy_model.ipynb` step under `run_steps` to `no`.
  - Set the `inference_script` under any experiment in the `experiments` section for which you want to use your new custom inference script to point to your new Python file (`my_custom_predictor.py`) that contains your custom predictor.
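Here is a minimal, illustrative sketch of such a custom predictor. The real `FMBenchPredictor` abstract class ships with the `FMBench` package; the import path, payload keys, return structure and REST call below are assumptions for illustration, not the package's actual contract:

```python
# my_custom_predictor.py -- illustrative sketch only.
# The import path, payload shape and return structure are assumptions;
# consult SageMakerPredictor in the FMBench repo for the real contract.
import time

import requests
from fmbench.scripts.fmbench_predictor import FMBenchPredictor  # assumed path


class MyCustomPredictor(FMBenchPredictor):
    def __init__(self, endpoint_name: str, inference_spec: dict | None = None):
        self._endpoint_name = endpoint_name          # e.g. a REST URL for your service
        self._inference_spec = inference_spec or {}

    def get_predictions(self, payload: dict) -> dict:
        prompt = payload["inputs"]                   # assumed payload key
        start = time.perf_counter()
        resp = requests.post(self._endpoint_name,
                             json={"prompt": prompt, **self._inference_spec},
                             timeout=300)
        latency = time.perf_counter() - start
        return {"generated_text": resp.text,         # assumed response fields
                "latency": latency}

    @property
    def endpoint_name(self) -> str:
        return self._endpoint_name
```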
Bring your own REST Predictor (data-on-eks version)
`FMBench` now provides an example of bringing your own endpoint as a `REST Predictor` for benchmarking. View this script as an example. This script is an inference file for the `NousResearch/Llama-2-13b-chat-hf` model deployed on an Amazon EKS cluster using Ray Serve. The model is deployed via `data-on-eks`, which is a comprehensive resource for scaling your data and machine learning workloads on Amazon EKS and unlocking the power of Gen AI. Using `data-on-eks`, you can harness the capabilities of AWS Trainium, AWS Inferentia and NVIDIA GPUs to scale and optimize your Gen AI workloads and benchmark those models on FMBench with ease.
Bring your own dataset
By default `FMBench` uses the `LongBench dataset` for testing the models, but this is not the only dataset you can test with. You may want to test with other datasets available on HuggingFace or use your own datasets for testing. You can do this by converting your dataset to the `JSON Lines` format. We provide a code sample for converting any HuggingFace dataset into JSON Lines format and uploading it to the S3 bucket used by `FMBench` in the `bring_your_own_dataset` notebook. Follow the steps described in the notebook to bring your own dataset for testing with `FMBench`.
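As a rough illustration of that conversion step (the maintained example lives in the `bring_your_own_dataset` notebook; the dataset id and column names below are placeholder assumptions), converting a HuggingFace dataset to JSON Lines can look like this:

```python
# Illustrative only: convert a HuggingFace dataset split to JSON Lines.
# Dataset id and column names are placeholders; adapt them to your data.
import json

from datasets import load_dataset  # pip install datasets

ds = load_dataset("rajpurkar/squad", split="validation")  # placeholder dataset

with open("squad_validation.jsonl", "w") as f:
    for row in ds:
        # keep only the fields your prompt template needs
        f.write(json.dumps({"question": row["question"], "context": row["context"]}) + "\n")
```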
Support for Open-Orca dataset
Support for the Open-Orca dataset and corresponding prompts for Llama3, Llama2 and Mistral is also included.
Building the FMBench Python package
If you would like to build a dev version of `FMBench` for your own development and testing purposes, the following steps describe how to do that.
- Clone the `FMBench` repo from GitHub.

- Make any code changes as needed.

- Install `poetry`.

      pip install poetry

- Change directory to the `FMBench` repo directory and run poetry build.

      poetry build

- The `.whl` file is generated in the `dist` folder. Install the `.whl` in your current Python environment.

      pip install dist/fmbench-X.Y.Z-py3-none-any.whl

- Run `FMBench` as usual through the `FMBench` CLI command.
Pending enhancements
View the ISSUES on GitHub and add any that you think would be a beneficial iteration to this benchmarking harness.
Security
See CONTRIBUTING for more information.
License
This library is licensed under the MIT-0 License. See the LICENSE file.
Star History