"EvalPlus for rigourous evaluation of LLM-synthesized code"
EvalPlus(📖) => 📚
🔥Quick Start • 💻LLM code • 📜Papers • 🔨Tools • 👷Development • 🙏Acknowledgement
Warning
🚨 Evaluating LLM-generated code over datasets with "3 test-cases" is **NOT** enough! 🚨
To address this, we started the EvalPlus project -- a rigorous evaluation framework for LLM4Code that:
- ✨ improves code benchmarks by adding up to thousands of new tests! (81x new tests for HumanEval!)
- ✨ crafts a set of utility tools to sanitize, visualize, and inspect LLM-generated code and evaluation results!
- ✨ accelerates LLM4Code research by open-sourcing LLM-generated samples for 14+ models -- no need to re-run the expensive benchmarks!
🔥 Quick Start
To get started, please first set up the environment:
pip install evalplus --upgrade
...Or you can try out the latest development version:
pip install "git+https://github.com/evalplus/evalplus.git" --upgrade
🤔 Want to use a local GitHub repo? :: click to expand ::
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
HumanEval+
Generate Code Samples
The usage mirrors the original HumanEval: you only need to implement the generate_one_completion function! (A minimal backend sketch follows the snippet below.)
from evalplus.data import get_human_eval_plus, write_jsonl
problems = get_human_eval_plus()
num_samples_per_task = 200
samples = [
dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
for task_id in problems
for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)
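For reference, here is a minimal sketch of what generate_one_completion could look like, assuming a HuggingFace transformers causal LM as the backend; the model name and decoding settings are placeholders, not part of EvalPlus.
# Hypothetical backend for generate_one_completion (illustrative only)
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "codellama/CodeLlama-7b-hf"  # placeholder; use your own model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

def generate_one_completion(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # EvalPlus expects only the completion, i.e., the text after the prompt
    return text[len(prompt):]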
🤔 What is in a `problem`? :: click to expand ::
- task_id is the identifier string for the task
- entry_point is the name of the function
- prompt is the function signature with docstring
- canonical_solution is the ground-truth implementation (re-implemented to fix bugs in HumanEval)
- base_input is the test inputs in the original HumanEval
- plus_input is the test inputs brought by EvalPlus
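To get a feel for these fields, you can inspect a single problem directly (the task id below is just an illustrative example):
from evalplus.data import get_human_eval_plus

problems = get_human_eval_plus()
task = problems["HumanEval/0"]  # illustrative task id
print(task["entry_point"])      # name of the function to implement
print(task["prompt"])           # function signature with docstring
print(len(task["base_input"]), len(task["plus_input"]))  # original vs. extra test inputs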
Evaluate Code Samples with HumanEval+
We strongly recommend using a sandbox such as Docker:
docker run -v $(pwd):/app ganler/evalplus:latest --dataset humaneval --samples samples.jsonl
...Or if you want to try it locally regardless of the risks ⚠️:
evalplus.evaluate --dataset humaneval --samples samples.jsonl
Warning ⚠️ Do you use a very slow machine?
LLM solutions are regarded as failed on timeout (and OOM, etc.). Specifically, we set the timeout to $T=\max(T_{base}, T_{gt}\times k)$, where:
- $T_{base}$ is the minimal timeout (configurable via --min-time-limit; defaults to 0.2s);
- $T_{gt}$ is the runtime of the ground-truth solutions (obtained via profiling);
- $k$ is a configurable factor --gt-time-limit-factor (defaults to 4).
If your machine is too slow and you are getting high-variance results, try using a larger $k$ and $T_{base}$ (see the small illustration after this note).
Additionally, you are NOT encouraged to over-stress your test bed while running the evaluation. For example, using --parallel 64 on a 4-core machine, or doing other heavy work during the evaluation, are bad ideas...
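To make the timeout rule concrete, here is a small plain-Python illustration of the formula above (not EvalPlus internals):
# A sample is treated as failed once it exceeds max(T_base, T_gt * k)
def effective_timeout(t_gt: float, t_base: float = 0.2, k: float = 4.0) -> float:
    return max(t_base, t_gt * k)

print(effective_timeout(0.01))  # fast ground truth -> the 0.2s floor applies
print(effective_timeout(0.5))   # slow ground truth -> 2.0s (= 0.5s * 4)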
🤔 Evaluate with a local GitHub repo? :: click to expand ::
export PYTHONPATH=$PYTHONPATH:$(pwd)
python evalplus/evaluate.py --dataset humaneval --samples samples.jsonl
⌨️ More command-line flags :: click to expand ::
- --parallel: by default, half of the cores
- --base-only (store_true): only run the base HumanEval tests
- --i-just-wanna-run: force a re-run
The output should look like the following (the example below is GPT-4 with greedy decoding):
Computing expected output...
Expected outputs computed in 15.18s
Reading samples...
164it [00:04, 37.79it/s]
Evaluating samples...
100%|██████████████████████████████████████████| 164/164 [00:03<00:00, 44.75it/s]
Base
{'pass@1': 0.8841463414634146}
Base + Extra
{'pass@1': 0.75}
- Base is the pass@k for the original HumanEval
- Base + Extra is the pass@k for our HumanEval+ (with extra tests)
- The "k" includes [1, 10, 100]; only k values <= the sample size are used
- A cache file named like samples_eval_results.jsonl will be written. Remove it to re-run the evaluation
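For reference, pass@k numbers like the above are conventionally computed with the unbiased estimator from the Codex paper (Chen et al., 2021); the snippet below is a plain-Python sketch of that formula, not necessarily EvalPlus's exact implementation:
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples per task, c: correct samples among them, k: the k in pass@k."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=200, c=150, k=1))  # 0.75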
🤔 How long would it take? :: click to expand ::
When running 200 samples x 164 tasks x ~700+ tests, it can take around 2-10 minutes with --parallel 64 and --test-details.
Here are some tips to speed up the evaluation:
- Use --parallel $(nproc)
- Do NOT use --test-details if you just want to quickly get pass@k, as --test-details will run all tests (700+ on average for each task), while without --test-details the testing for a sample stops immediately when it fails the first test.
- Use our pre-evaluated results (see LLM-generated code)
- Use HumanEval+ Mini
🚀 Try out HumanEvalPlus-Mini! It selects a minimal set of additional tests with the highest quality, achieving almost the same effectiveness as the full version. Just add a --mini flag and it can run 23+% faster! (even faster if you evaluate all tests without fail-stop via --test-details).
docker run -v $(pwd):/app ganler/evalplus:latest --dataset humaneval --samples samples.jsonl --mini
# ...Or locally ⚠️
# evalplus.evaluate --dataset humaneval --samples samples.jsonl --mini
MBPP+ (TBD)
💻 LLM-generated code
You can find the pre-generated LLM code samples attached to our v0.1.0 release.
Each sample set is packaged in a zip file named like ${model_name}_temp_${temperature}.zip. You can unzip it to a folder named like ${model_name}_temp_${temperature} and run the evaluation from scratch with:
evalplus.evaluate --dataset humaneval --samples ${model_name}_temp_${temperature}
📜 Papers
Read our paper for more detailed findings!
@article{evalplus,
title={Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation},
author={Jiawei Liu and Chunqiu Steven Xia and Yuyao Wang and Lingming Zhang},
journal={arXiv preprint arXiv:2305.01210},
year={2023},
}
🔨 Useful tools
To use these tools, please first install the repository from GitHub:
git clone https://github.com/evalplus/evalplus.git
cd evalplus
pip install -r requirements-tools.txt
Syntax checker for LLM-generated code
Check LLM-produced code and answer the following questions:
- Is generation complete for all samples / all problems in the dataset?
- Is the LLM-generated code compilable? (if not, something could be wrong and you should check)
python tools/checker.py --folder /path/to/[model]-[??]b_temp_[??] --dataset humaneval
Post code sanitizer
LLM-generated code may contain syntax errors, but some of them are easily fixable with simple post-processing. This tool makes LLM-generated code cleaner and more compilable by applying post-processing such as trimming at additional "magic" EOF markers and stripping garbage non-code tokens (a trimming sketch follows the command below).
python tools/sanitize.py --eof --folder /path/to/vicuna-[??]b_temp_[??]
# Sanitized code will be produced to `/path/to/vicuna-[??]b_temp_[??]-sanitized`
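As an illustration of the kind of trimming involved, the sketch below cuts a completion at a few common stop markers; the marker list is illustrative and not the sanitizer's actual configuration:
STOP_MARKERS = ["\ndef ", "\nclass ", "\nif __name__", "\nprint("]  # illustrative only

def trim_completion(code: str) -> str:
    # keep everything before the earliest stop marker, if any
    cut = len(code)
    for marker in STOP_MARKERS:
        idx = code.find(marker)
        if idx != -1:
            cut = min(cut, idx)
    return code[:cut].rstrip()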
Render pass@k results to rich and LaTeX tables
python tools/render.py --type /path/to/[model]-[??]b # NOTE: no `_temp_[??]`
Perform test input generation from scratch (TBD)
👷 Development
Before you start:
pip install pre-commit
pre-commit install
export PYTHONPATH=$PYTHONPATH:$(pwd)
Name convention
- evalplus is the package name.
- ${DATASET}_plus is the name of a dataset with evalplus applied.
🙏 Acknowledgement