The official SWE-bench package - a benchmark for evaluating LMs on software engineering
Project description
Code and data for our ICLR 2024 paper SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Please refer to our website for the public leaderboard, and to the change log for information on the latest updates to the SWE-bench benchmark.
👋 Overview
SWE-bench is a benchmark for evaluating large language models on real world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem.
🚀 Set Up
To build SWE-bench from source, follow these steps:
- Clone this repository locally.
- `cd` into the repository.
- Run `conda env create -f environment.yml` to create a conda environment named `swe-bench`.
- Activate the environment with `conda activate swe-bench`.
💽 Usage
You can download the SWE-bench dataset directly (dev, test sets) or from HuggingFace.
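Each dataset entry pairs a real GitHub issue with the codebase state it was filed against. As a rough illustration, a task instance might look like the sketch below; the field names and values here are assumptions for illustration, so check the dataset card on HuggingFace for the exact schema.

```python
import json

# Illustrative sketch of a SWE-bench task instance.
# Field names are assumptions based on the dataset description,
# and all values below are hypothetical.
task = {
    "instance_id": "example__repo-1234",   # hypothetical task identifier
    "repo": "example/repo",                # GitHub repository the issue came from
    "base_commit": "abc123",               # commit the issue applies to
    "problem_statement": "Issue title and body describing the bug.",
    "patch": "diff --git a/src/mod.py b/src/mod.py ...",       # gold fix
    "test_patch": "diff --git a/tests/test_mod.py ...",        # tests verifying the fix
}

print(json.dumps(task, indent=2))
```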
To use SWE-Bench, you can:
- Train your own models on our pre-processed datasets
- Run inference on existing models (either models you have on disk, like LLaMA, or models you access through an API, like GPT-4). In the inference step, the model is given a repository and an issue and tries to generate a fix.
- Evaluate models against SWE-bench. This is where you take a SWE-bench task and a model-proposed solution and evaluate the solution's correctness.
- Run SWE-bench's data collection procedure on your own repositories, to make new SWE-Bench tasks.
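To connect the inference and evaluation steps above, each model-proposed solution is recorded as a prediction entry pairing a task's identifier with the generated patch. The sketch below is a hedged illustration: the field names (`instance_id`, `model_name_or_path`, `model_patch`) and the single-file JSON layout are assumptions, so consult the evaluation documentation for the exact format expected.

```python
import json

# Hypothetical prediction entry: one model-generated patch for one task.
prediction = {
    "instance_id": "example__repo-1234",   # hypothetical task identifier
    "model_name_or_path": "my-model",      # hypothetical model name
    "model_patch": "diff --git a/src/mod.py b/src/mod.py\n...",
}

# Predictions for many tasks are collected in one JSON file,
# which is then handed to the evaluation step.
with open("predictions.json", "w") as f:
    json.dump([prediction], f, indent=2)
```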
⬇️ Downloads
🍎 Tutorials
We've also written the following blog posts on how to use different parts of SWE-bench. If you'd like to see a post about a particular topic, please let us know via an issue.
- [Nov 1, 2023] Collecting Evaluation Tasks for SWE-bench (🔗)
- [Nov 6, 2023] Evaluating on SWE-bench (🔗)
💫 Contributions
We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues! To do so, please either file a new pull request or issue and fill in the corresponding templates accordingly. We'll be sure to follow up shortly!
Contact person: Carlos E. Jimenez and John Yang (Email: {carlosej, jy1682}@princeton.edu).
✍️ Citation
If you find our work helpful, please use the following citation.
```bibtex
@inproceedings{jimenez2024swebench,
  title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
  author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=VTF8yNQM66}
}
```
🪪 License
MIT. See `LICENSE.md`.
Hashes for swebench-1.0.19-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | c90b2236f77470867c6718156e18bcaae3983dddfdd51d3dbe7c34882f5f981d |
| MD5 | 2ef9e53e863229b7e733e2ef12f10b1f |
| BLAKE2b-256 | 8927f2ee809ffa12977336910db21a9064d6c16b0ced6e6f48a5a571fbd5541a |