"Evaluation package for BigCodeBench"

Project description

BigCodeBench

📰 News • 🔥 Quick Start • 🚀 Remote Evaluation • 💻 LLM-generated Code • 📜 Citation

📰 News

  • [2024-10-06] We are releasing bigcodebench==v0.2.0!
  • [2024-10-05] We have created a public code execution API on the Hugging Face Space.
  • [2024-10-01] We have evaluated 139 models on BigCodeBench-Hard so far. Take a look at the leaderboard!
  • [2024-08-19] To make the evaluation fully reproducible, we add a real-time code execution session to the leaderboard. It can be viewed here.
  • [2024-08-02] We release bigcodebench==v0.1.9.
More News:
  • [2024-07-18] We announce a subset of BigCodeBench, BigCodeBench-Hard, which includes 148 tasks that are more closely aligned with real-world programming tasks. The details are available in this blog post. The dataset is available here. The new release is bigcodebench==v0.1.8.
  • [2024-06-28] We release bigcodebench==v0.1.7.
  • [2024-06-27] We release bigcodebench==v0.1.6.
  • [2024-06-19] We start the Hugging Face BigCodeBench Leaderboard! The leaderboard is available here.
  • [2024-06-18] We release BigCodeBench, a new benchmark for code generation with 1140 software-engineering-oriented programming tasks. Preprint is available here. PyPI package is available here with the version 0.1.5.

🌸 About

BigCodeBench is an easy-to-use benchmark for solving practical and challenging tasks via code. It aims to evaluate the true programming capabilities of large language models (LLMs) in a more realistic setting. The benchmark is designed for HumanEval-like function-level code generation tasks, but with much more complex instructions and diverse function calls.

Why BigCodeBench?

BigCodeBench focuses on task automation via code generation with diverse function calls and complex instructions. It provides:

  • Precise evaluation & ranking: See our leaderboard for the latest LLM rankings before & after rigorous evaluation.
  • Pre-generated samples: BigCodeBench accelerates code intelligence research by open-sourcing LLM-generated samples for various models -- no need to re-run the expensive benchmarks!

🔥 Quick Start

To get started, please first set up the environment:

# By default, you will use the remote evaluation API to execute the output samples.
pip install bigcodebench --upgrade

# We suggest installing `flash-attn` for generating code samples.
pip install packaging ninja
pip install flash-attn --no-build-isolation
# Note: if you have installation problem, consider using pre-built
# wheels from https://github.com/Dao-AILab/flash-attention/releases
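
To confirm that the installation succeeded, a plain pip query is enough (this is standard pip, not a BigCodeBench-specific command):

pip show bigcodebench | grep -i version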
⏬ Install nightly version:
# Install to use bigcodebench.generate
pip install "git+https://github.com/bigcode-project/bigcodebench.git" --upgrade

🚀 Remote Evaluation

We use greedy decoding as an example to show how to evaluate the generated code samples via the remote API.

[!Warning]

To ease generation, we use batch inference by default. However, batch inference results may vary across batch sizes and versions, at least for the vLLM backend. If you want more deterministic results for greedy decoding, please set --bs to 1.

[!Note]

Remote execution typically takes 6-7 minutes on BigCodeBench-Full and 4-5 minutes on BigCodeBench-Hard.

bigcodebench.evaluate \
  --model meta-llama/Meta-Llama-3.1-8B-Instruct \
  --split [complete|instruct] \
  --subset [full|hard] \
  --backend [vllm|openai|anthropic|google|mistral|hf]
  • All resulting files will be stored in a folder named bcb_results.
  • The generated code samples will be stored in a file named [model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated.jsonl.
  • The evaluation results will be stored in a file named [model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_eval_results.json.
  • The pass@k results will be stored in a file named [model_name]--bigcodebench-[instruct|complete]--[backend]-[temp]-[n_samples]-sanitized_calibrated_pass_at_k.json.
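
As a concrete illustration, a greedy-decoding run on the instruct split of the hard subset with the vLLM backend could look like the following (the model name is just an example; --bs 1 follows the determinism note above):

bigcodebench.evaluate \
  --model meta-llama/Meta-Llama-3.1-8B-Instruct \
  --split instruct \
  --subset hard \
  --backend vllm \
  --bs 1

With these settings, the outputs land in bcb_results, with file names following the patterns listed above.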

[!Note]

BigCodeBench uses different prompts for base and chat models. By default, the mode is detected via tokenizer.chat_template when using hf/vllm as the backend. For other backends, only chat mode is supported.

Therefore, if your base model comes with a tokenizer.chat_template, please add --direct_completion to avoid it being evaluated in chat mode.
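
For example, a base model whose tokenizer ships a chat_template could be forced into completion mode like this (the model name is illustrative; --direct_completion is the flag described in the note above):

bigcodebench.evaluate \
  --model bigcode/starcoder2-7b \
  --split complete \
  --subset full \
  --backend vllm \
  --direct_completion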

Access OpenAI APIs from OpenAI Console

export OPENAI_API_KEY=<your_openai_api_key>

Access Anthropic APIs from Anthropic Console

export ANTHROPIC_API_KEY=<your_anthropic_api_key>

Access Mistral APIs from Mistral Console

export MISTRAL_API_KEY=<your_mistral_api_key>

Access Gemini APIs from Google AI Studio

export GOOGLE_API_KEY=<your_google_api_key>
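
Putting the pieces together, an evaluation against a hosted API backend only needs the corresponding key exported first (the model name shown is an example):

export OPENAI_API_KEY=<your_openai_api_key>
bigcodebench.evaluate \
  --model gpt-4o \
  --split complete \
  --subset hard \
  --backend openai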

💻 LLM-generated Code

We share pre-generated code samples from LLMs we have evaluated:

  • See the attachments of our v0.2.0 release; we include sanitized_samples_calibrated.zip for your convenience.
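
Once the archive has been downloaded from the release attachments, it can be unpacked and inspected with standard tools (the target directory is arbitrary):

unzip sanitized_samples_calibrated.zip -d pregenerated_samples
ls pregenerated_samples | head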

Advanced Usage

Please refer to the ADVANCED USAGE for more details.

📜 Citation

@article{zhuo2024bigcodebench,
  title={BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions},
  author={Zhuo, Terry Yue and Vu, Minh Chien and Chim, Jenny and Hu, Han and Yu, Wenhao and Widyasari, Ratnadira and Yusuf, Imam Nur Bani and Zhan, Haolan and He, Junda and Paul, Indraneil and others},
  journal={arXiv preprint arXiv:2406.15877},
  year={2024}
}

🙏 Acknowledgement

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bigcodebench-0.2.0.post1.tar.gz (69.4 kB)

Uploaded Source

Built Distribution

bigcodebench-0.2.0.post1-py3-none-any.whl (44.5 kB)

Uploaded Python 3

File details

Details for the file bigcodebench-0.2.0.post1.tar.gz.

File metadata

  • Download URL: bigcodebench-0.2.0.post1.tar.gz
  • Upload date:
  • Size: 69.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.1

File hashes

Hashes for bigcodebench-0.2.0.post1.tar.gz

  • SHA256: 06a9ef1c162e02abefbaeceb09da29487509040a8a825079fa8c365e4020084d
  • MD5: 01a8a3f75042974f573e670ebd5f0b45
  • BLAKE2b-256: b1e74778ee63ad7a46b72bc9a179217168152b52af45e298167dd8ec08f1bec4

See more details on using hashes here.
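
To check a downloaded archive against the digests above, a standard checksum utility is sufficient (sha256sum on Linux; shasum -a 256 on macOS):

sha256sum bigcodebench-0.2.0.post1.tar.gz
# expected: 06a9ef1c162e02abefbaeceb09da29487509040a8a825079fa8c365e4020084d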

File details

Details for the file bigcodebench-0.2.0.post1-py3-none-any.whl.

File metadata

File hashes

Hashes for bigcodebench-0.2.0.post1-py3-none-any.whl

  • SHA256: aad3decf080b5332a9710ac52a62173f5d2a691dfe7e988e26711f5c8d08df39
  • MD5: fd8b14abfd63ca332caf2b8c353c5709
  • BLAKE2b-256: ca1fe146e71338bc7af0d043ffe888f22adde225c664e9252b586f9935be19d7

See more details on using hashes here.
