# Ai2 Soft-Verified Efficient Repository Agents (SERA) Claude Code Proxy

A translation proxy that enables the Claude Code CLI to work with SWE-agent-format models. This repo allows Claude Code to be used with the Ai2 Open Coding Agents SERA models.
You will need Claude Code and uv installed to set up the SERA CLI.
For more information about Open Coding Agents and SERA, see:
- sera-cli Demo on YouTube
- SERA Data Generation and Training Code
- Ai2 Open Coding Agents Blog Post
- SERA Technical Report
- Ai2 Open Coding Agents Hugging Face Collection
## Quick Start with Modal
The fastest way to try SERA is with Modal, which handles GPU provisioning, vLLM deployment, and model download automatically. The first run takes about 10 minutes while ~65 GB of model weights are downloaded; subsequent runs reuse the cached model and start up faster.
When you exit Claude Code, the Modal app will automatically get cleaned up.
```shell
# Install modal and sera globally
uv tool install modal
uv tool install ai2-sera-cli

# Set up Modal (this will prompt you to create an account)
modal setup

# Deploy SERA to Modal and launch Claude Code (uses allenai/SERA-32B by default)
sera --modal

# Use the allenai/SERA-8B model instead. Non-SERA models are untested
# and may not behave as expected.
sera --modal --model allenai/SERA-8B
```
## Using Existing Endpoints
If you have an existing vLLM endpoint for the SERA model (e.g., from a shared deployment or your own infrastructure):
```shell
# Install sera globally
uv tool install ai2-sera-cli

# Set the API key if your endpoint requires authentication
export SERA_API_KEY=<your API key>

# Run sera with your endpoint
sera --endpoint <endpoint URL>
```
## Shared Deployments with `deploy-sera`

For teams or multi-user setups, you can create a persistent vLLM deployment on Modal using `deploy-sera`. Unlike `sera --modal`, which creates an ephemeral deployment that stops when you exit, `deploy-sera` creates a persistent deployment that stays up until explicitly stopped.
```shell
# Deploy a persistent vLLM instance with your choice of model
deploy-sera --model allenai/SERA-32B
deploy-sera --model allenai/SERA-8B

# The command outputs an endpoint URL and API key;
# share these with your team members.

# Team members can then connect with:
SERA_API_KEY=<api-key> sera --endpoint <endpoint-url>

# Stop the deployment when done
deploy-sera --stop
```
### `deploy-sera` Options

| Option | Description |
|---|---|
| `--model MODEL` | HuggingFace model ID to deploy (default: `allenai/SERA-32B`) |
| `--num-gpus N` | Number of GPUs to use; also sets tensor parallelism (default: 1) |
| `--api-key KEY` | API key for authentication (auto-generated if not specified) |
| `--hf-secret NAME` | Modal secret containing `HF_TOKEN` for private/gated models |
| `--stop` | Stop the running deployment |
### Deploying Private Models

For private models (e.g., fine-tuned on a proprietary codebase), use `--hf-secret` to authenticate with HuggingFace:
```shell
# 1. Create a Modal secret with your HuggingFace token
modal secret create huggingface HF_TOKEN=hf_your_token_here

# 2. Deploy your private model
deploy-sera --model your-org/private-sera-model --hf-secret huggingface

# 3. Users connect with the provided endpoint and API key
SERA_API_KEY=<api-key> sera --endpoint <endpoint-url>
```
For ephemeral single-user deployments, the same `--hf-secret` flag works with `sera --modal`.
## Self-Hosted vLLM

You can also run SERA directly with vLLM on any cloud GPU provider or on your own hardware.
On the server:
```shell
python -m vllm.entrypoints.openai.api_server \
  --model allenai/SERA-32B \
  --host 0.0.0.0 \
  --port 8000 \
  --max-model-len 32768 \
  --tensor-parallel-size 2 \
  --trust-remote-code \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
On your dev machine:
```shell
uv tool install ai2-sera-cli
sera --endpoint http://your-server:8000/v1/chat/completions
```
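Before connecting, it can help to confirm the server is actually serving the model: vLLM's OpenAI-compatible server exposes a `/v1/models` listing alongside the chat-completions route. A small Python sketch (the helper and URLs are illustrative, not part of the CLI) that derives the models URL from the endpoint you pass to `sera`, which you can then `curl` or open in a browser:

```python
from urllib.parse import urlsplit, urlunsplit

def models_url(chat_endpoint: str) -> str:
    """Derive the sibling /v1/models URL from a /v1/chat/completions
    endpoint URL. Illustrative helper, not part of the sera CLI."""
    parts = urlsplit(chat_endpoint)
    path = parts.path.replace("/chat/completions", "/models")
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

print(models_url("http://your-server:8000/v1/chat/completions"))
# -> http://your-server:8000/v1/models
```

If that URL returns a model list containing `allenai/SERA-32B`, the endpoint is ready for `sera --endpoint`.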
## Configuration

### `sera` CLI Options
| Option | Description |
|---|---|
| `--endpoint URL` | vLLM endpoint URL (required unless `--modal` is used) |
| `--modal` | Deploy vLLM to Modal (ephemeral; auto-cleanup on exit) |
| `--port PORT` | Proxy server port (default: 8080) |
| `--model MODEL` | Model name/path |
| `--hf-secret NAME` | Modal secret name containing `HF_TOKEN` for private/gated models |
| `--proxy-only` | Start the proxy only; don't launch Claude Code |
### Environment Variables

| Variable | Description |
|---|---|
| `SERA_API_KEY` | API key for vLLM endpoint authentication |
| `SERA_MODEL` | Default model name (fallback for `--model`) |
| `SERA_HF_SECRET` | Default Modal secret name (fallback for `--hf-secret`) |
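The environment variables act as fallbacks for the corresponding flags. As a sketch of the documented precedence (explicit flag, then environment variable, then built-in default; this is illustrative, not the CLI's actual source):

```python
import os

DEFAULT_MODEL = "allenai/SERA-32B"  # default from the options table above

def resolve_model(cli_model=None):
    """Resolve the model name: --model flag, then SERA_MODEL, then default.
    Illustrative sketch of the documented fallback order."""
    return cli_model or os.environ.get("SERA_MODEL") or DEFAULT_MODEL

os.environ["SERA_MODEL"] = "allenai/SERA-8B"
print(resolve_model())                 # env var applies when no flag is given
print(resolve_model("my-org/custom"))  # explicit flag wins
```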
### API Key Authentication

The proxy supports API key authentication for vLLM endpoints:

- `sera --modal`: API key is auto-generated and managed in the background
- `deploy-sera`: API key is auto-generated and printed so it can be shared with team members
- Existing endpoints: set the `SERA_API_KEY` environment variable before running `sera`
- Self-hosted vLLM: start vLLM with `--api-key YOUR_KEY`, then set `SERA_API_KEY=YOUR_KEY`

The proxy includes the API key in the `Authorization: Bearer <api_key>` header when making requests.
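For illustration, the header construction looks like the following (a minimal sketch of standard Bearer authentication, not the proxy's actual implementation):

```python
def auth_headers(api_key=None):
    """Build request headers, adding a Bearer Authorization header when an
    API key is configured. Illustrative sketch, not the proxy's source."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return headers

print(auth_headers("sk-example"))
# -> {'Content-Type': 'application/json', 'Authorization': 'Bearer sk-example'}
```

With no key configured, the `Authorization` header is simply omitted, which is why unauthenticated endpoints also work.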
## Citation

```bibtex
@misc{shen2026serasoftverifiedefficientrepository,
      title={SERA: Soft-Verified Efficient Repository Agents},
      author={Ethan Shen and Danny Tormoen and Saurabh Shah and Ali Farhadi and Tim Dettmers},
      year={2026},
      eprint={2601.20789},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.20789},
}
```