# Rosetta
Multi-format bidirectional translation proxy for LLM APIs. Translates between OpenAI Chat Completions, OpenAI Responses, and Anthropic Messages formats — letting any client SDK talk to any provider regardless of which native API the provider speaks.
## Quick Start

### uvx (no install required)
```bash
# Create your config
mkdir -p ~/.rosetta-llm
cp config.example.jsonc ~/.rosetta-llm/config.json
# Edit ~/.rosetta-llm/config.json with your providers and API keys

# Run instantly
uvx rosetta-llm
```
Or point to a custom config:
```bash
uvx rosetta-llm --config /path/to/config.json

# Equivalent via env var:
ROSETTA_CONFIG=/path/to/config.json uvx rosetta-llm
```
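Each key in the config's `providers` section becomes a routing prefix (see "Model ID Format" below). A minimal sketch of what the file might contain; the `providers` key and `api_key_env` field are referenced elsewhere in this README, but the `format` and `base_url` field names are assumptions, so treat `config.example.jsonc` as the authoritative schema:

```jsonc
// Hypothetical sketch only; copy config.example.jsonc for the real schema.
// "providers" keys become model-ID prefixes; "api_key_env" names the env var
// holding each provider's credential. "format" and "base_url" are assumed names.
{
  "providers": {
    "anthropic": {
      "format": "anthropic",
      "base_url": "https://api.anthropic.com",
      "api_key_env": "ANTHROPIC_API_KEY"
    },
    "openai": {
      "format": "openai_chat",
      "base_url": "https://api.openai.com/v1",
      "api_key_env": "OPENAI_API_KEY"
    }
  }
}
```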
### uv tool install (persistent)
```bash
uv tool install rosetta-llm
rosetta-llm --help
rosetta-llm --config ~/my-config.json --port 9999
```
### Docker
```bash
docker run -p 7860:7860 \
  -v ~/.rosetta-llm/config.json:/app/config.json \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e OPENAI_API_KEY=sk-... \
  ghcr.io/lokesh-chimakurthi/rosetta-llm:main
```
### From source
```bash
git clone https://github.com/Lokesh-Chimakurthi/rosetta-llm.git
cd rosetta-llm
uv sync
python -m rosetta
```
## Features
- Three endpoint families: `/v1/chat/completions`, `/v1/responses`, `/v1/messages` — all with full streaming support
- Passthrough fast path: zero overhead when the inbound format matches the provider's native format
- Canonical IR translation: lossless cross-format translation, including thinking blocks, tool calls, and reasoning
- Claude Code gateway: full model picker integration — non-Anthropic models appear in `/model` via `claude-code/` prefixing
- Provider routing: `<provider>/<model>` prefix scheme (e.g., `anthropic/claude-opus-4-7`)
- Bearer-token auth: optional proxy-level API key authentication
- Structured JSON logging: request-scoped with configurable log levels
- Docker support: multi-stage build with `python:3.13-slim`
## Endpoints
| Method | Path | Purpose |
|---|---|---|
| POST | `/v1/messages` | Anthropic Messages |
| POST | `/v1/messages/count_tokens` | Local tiktoken token count |
| POST | `/v1/chat/completions` | OpenAI Chat Completions |
| POST | `/v1/responses` | OpenAI Responses |
| GET | `/v1/models` | Merged model list |
| GET | `/health` | Liveness check |
| GET | `/providers` | Provider status |
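The token counter is served locally, so it can be exercised without upstream credentials. A sketch, assuming an Anthropic-style `count_tokens` request body and using `requests` purely for illustration:

```python
# Sketch: local token count through the proxy. The body mirrors Anthropic's
# Messages count_tokens shape; the exact accepted fields are an assumption.
import requests

resp = requests.post(
    "http://localhost:7860/v1/messages/count_tokens",
    headers={"Authorization": "Bearer sk-proxy-XXXX"},  # only if proxy auth is enabled
    json={
        "model": "anthropic/claude-opus-4-7",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json())  # expect a token count, e.g. {"input_tokens": ...}
```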
## Model ID Format

Models are addressed as `<provider_key>/<model_name>`, where `provider_key` matches a key in your config's `providers` section.
Examples:
- `abc/kimi-k2.5` — routes to the "abc" provider with model "kimi-k2.5"
- `anthropic/claude-opus-4-7` — routes to the "anthropic" provider
- `openai/gpt-5.4` — routes to the "openai" provider
## Claude Code Integration

Rosetta is a fully compatible Claude Code LLM gateway. Point Claude Code at Rosetta and all configured providers appear in the `/model` picker — including non-Anthropic models.
### Setup
```bash
export ANTHROPIC_BASE_URL=http://localhost:7860
export ANTHROPIC_AUTH_TOKEN=sk-proxy-XXXX   # if proxy auth is enabled
```
Or in Claude Code settings (`~/.claude/settings.json`):
```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:7860",
    "ANTHROPIC_AUTH_TOKEN": "sk-proxy-XXXX"
  }
}
```
### Model picker

On startup, Claude Code queries `GET /v1/models` with its session headers. Rosetta detects Claude Code (via the `X-Claude-Code-Session-Id` header) and returns a model list tailored for the picker:
- Models already named `claude-*` or `anthropic/*` pass through unchanged
- All other models get a `claude-code/` prefix — this ensures they pass Claude Code's built-in model filter (which only shows models starting with `claude` or `anthropic`)
For example, if your config has an OpenAI provider with `gpt-5.4`, it appears in the picker as `claude-code/openai/gpt-5.4`. When selected, Rosetta strips the `claude-code/` prefix internally and routes to the correct provider.
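A quick way to observe this is to request the model list with and without the session header, as sketched below; the OpenAI-style `{"data": [{"id": ...}]}` response shape and presence-only detection are assumptions:

```python
# Sketch: compare the plain model list with the Claude Code tailored one.
# Assumes presence of the header triggers detection; the value is arbitrary.
import requests

base = "http://localhost:7860/v1/models"
plain = requests.get(base).json()
picker = requests.get(base, headers={"X-Claude-Code-Session-Id": "demo"}).json()

print([m["id"] for m in plain["data"]])   # e.g. ['openai/gpt-5.4', ...]
print([m["id"] for m in picker["data"]])  # e.g. ['claude-code/openai/gpt-5.4', ...]
```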
### Headers forwarded upstream

Rosetta forwards Claude Code's session headers (`anthropic-beta`, `anthropic-version`, `X-Claude-Code-Session-Id`) to every upstream call, preserving prompt caching and feature detection.
## Translation Matrix
The proxy automatically translates between formats:
| Client Endpoint | Provider Format | Path |
|---|---|---|
| `/v1/messages` | `anthropic` | passthrough |
| `/v1/messages` | `openai_chat` | translate via IR |
| `/v1/messages` | `openai_responses` | translate via IR |
| `/v1/chat/completions` | `openai_chat` | passthrough |
| `/v1/chat/completions` | `anthropic` | translate via IR |
| `/v1/chat/completions` | `openai_responses` | translate via IR |
| `/v1/responses` | `openai_responses` | passthrough |
| `/v1/responses` | `anthropic` | translate via IR |
| `/v1/responses` | `openai_chat` | translate via IR |
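As a concrete instance of a translate-via-IR row, an OpenAI Responses client can target an Anthropic-format provider. A sketch using the official OpenAI Python SDK, with the placeholder model and proxy key used throughout this README:

```python
# Sketch: a /v1/responses request routed to an Anthropic-format provider.
# Rosetta translates the Responses payload to Anthropic Messages via its IR.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:7860/v1", api_key="sk-proxy-XXXX")

resp = client.responses.create(
    model="anthropic/claude-opus-4-7",
    input="Hello!",
)
print(resp.output_text)  # SDK convenience accessor for the concatenated text output
```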
## Usage Examples

### Anthropic client -> OpenAI-backed model
```bash
curl http://localhost:7860/v1/messages \
  -H "Authorization: Bearer sk-proxy-XXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.4",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
### OpenAI client -> Anthropic-backed model
```bash
curl http://localhost:7860/v1/chat/completions \
  -H "Authorization: Bearer sk-proxy-XXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-opus-4-7",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
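The same calls work from an SDK. A sketch with the official OpenAI Python SDK pointed at the proxy, using the placeholder values from the curl examples above:

```python
# Sketch: OpenAI Chat Completions client streaming from an Anthropic-backed
# model through Rosetta, mirroring the curl example above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:7860/v1", api_key="sk-proxy-XXXX")

stream = client.chat.completions.create(
    model="anthropic/claude-opus-4-7",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```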
## Environment Variables
| Variable | Purpose |
|---|---|
| `ROSETTA_CONFIG` | Path to config.json (default: `~/.rosetta-llm/config.json`) |
| Provider-specific | Set via `api_key_env` in config (e.g., `ANTHROPIC_API_KEY`) |
## Development
```bash
uv sync --group dev
uv run pytest -q
uv run mypy src/
uv run ruff check src/ tests/
```