# linguafranca

LLM API format converter with a Rust core and Python bindings.

Converts requests, responses, and streaming events between:

- OpenAI Chat Completions
- Anthropic Messages
- Open Responses
## Installation

```shell
# Python (installs as 'martian-linguafranca', imports as 'linguafranca')
pip install martian-linguafranca
# or
uv add martian-linguafranca
```

```shell
# Rust
cargo add linguafranca
```
## Supported formats

| `FormatName` | API |
|---|---|
| `FormatName.OPENAI_CHAT_COMPLETIONS` | OpenAI Chat Completions |
| `FormatName.ANTHROPIC_MESSAGES` | Anthropic Messages |
| `FormatName.OPEN_RESPONSES` | Open Responses |
Every pair is supported in both directions for requests and responses.
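With three formats, "every pair in both directions" comes out to six directed conversions. A quick stdlib sketch (the string names stand in for the real `lf.FormatName` members):

```python
from itertools import permutations

# Illustrative format identifiers; the library exposes these as FormatName members.
FORMATS = ["openai_chat_completions", "anthropic_messages", "open_responses"]

# Every ordered pair of distinct formats is a supported conversion direction.
PAIRS = list(permutations(FORMATS, 2))

for source, target in PAIRS:
    print(f"{source} -> {target}")
# 3 formats yield 6 directed conversion pairs
```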
## Quick start

```python
import linguafranca as lf

# Convert a Chat Completions request to Anthropic Messages
result = lf.convert_request_json(
    {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hello"}]},
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

result.value     # converted dict
result.warnings  # list of lossy-conversion warnings (dropped/modified fields)
```
## Converting requests

```python
import linguafranca as lf

# OpenAI Chat Completions -> Anthropic Messages
result = lf.convert_request_json(
    {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.7,
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
# {"model": "gpt-4.1-mini", "max_tokens": 4096, "messages": [...], ...}

# Anthropic Messages -> OpenAI Chat Completions
result = lf.convert_request_json(
    {
        "model": "claude-3-5-sonnet",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
    },
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)
```
### Convenience wrappers

When you always target the same format, convenience wrappers save some typing:

```python
# Convert anything -> Anthropic Messages
result = lf.to_messages_request(
    openai_request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

# Convert anything -> OpenAI Chat Completions
result = lf.to_chat_completions_request(
    anthropic_request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
```
The same pattern works for responses with `to_messages_response` and `to_chat_completions_response`.
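Conceptually, each wrapper is just the general converter with the target format pinned. A stdlib sketch with a stub converter (`convert_stub` and the string format names are hypothetical, for illustration only):

```python
from functools import partial

def convert_stub(payload, *, source_format, target_format):
    # Stand-in for lf.convert_request_json; just records the direction taken.
    return {"payload": payload, "from": source_format, "to": target_format}

# A convenience wrapper is the converter with target_format pre-bound.
to_messages_request_stub = partial(convert_stub, target_format="anthropic_messages")

result = to_messages_request_stub(
    {"model": "gpt-4.1-mini"},
    source_format="openai_chat_completions",
)
print(result["to"])  # anthropic_messages
```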
## Converting responses

```python
result = lf.convert_response_json(
    {
        "id": "chatcmpl-abc123",
        "object": "chat.completion",
        "model": "gpt-4.1-mini",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
```
## Streaming

### Sync streaming with httpx

```python
import json

import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                  headers=headers, json=payload) as resp:
    stream = lf.convert_response_stream_json(
        parse_sse(resp),
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
    for event in stream:
        print(event)

# Check warnings after the stream is fully consumed
for w in stream.take_warnings():
    print(f"{w.field}: {w.message}")
```
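The SSE parsing logic above can be checked offline by feeding it raw lines instead of a live response; this standalone variant (hypothetical name `parse_sse_lines`) takes any iterable of strings:

```python
import json

def parse_sse_lines(lines):
    """Yield parsed JSON objects from raw SSE lines, skipping the [DONE] sentinel."""
    for line in lines:
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

# A canned SSE transcript (shape is illustrative, not a real provider payload)
sample = [
    'data: {"id": "chunk-1", "delta": "Hel"}',
    'data: {"id": "chunk-2", "delta": "lo"}',
    "",               # SSE events are separated by blank lines
    "data: [DONE]",   # terminal sentinel, not JSON
]
events = list(parse_sse_lines(sample))
print(events)  # two dicts; the blank line and sentinel are skipped
```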
### Async streaming with httpx

```python
import asyncio
import json

import httpx
import linguafranca as lf

async def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an async SSE stream."""
    async for line in response.aiter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

async def main():
    headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }
    async with httpx.AsyncClient() as client:
        async with client.stream("POST",
                                 "https://api.openai.com/v1/chat/completions",
                                 headers=headers, json=payload) as resp:
            stream = lf.aconvert_response_stream(
                parse_sse(resp),
                source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
                target_format=lf.FormatName.OPEN_RESPONSES,
            )
            async for event in stream:
                print(event)

asyncio.run(main())
```
## Typed payloads (recommended)

The package ships auto-generated `@dataclass` definitions for all three formats via `linguafranca.types`. Using them gives you IDE autocompletion and type checking, and catches mistakes before the payload hits the converter.

```python
import linguafranca as lf
from linguafranca.types import (
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
)

request = ChatCompletionsOpenAiRequest(
    model="gpt-4.1-mini",
    messages=[
        ChatCompletionsMessageUser(content="hello", role="user"),
    ],
    temperature=0.7,
)

result = lf.convert_request(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
```
The non-`_json` variants (`convert_request`, `convert_response`, `convert_response_stream`) accept any of:

- `linguafranca.types` dataclasses (recommended)
- plain dicts
- Pydantic models, serialised via `model.model_dump()`

The `_json` variants (`convert_request_json`, `convert_response_json`, `convert_response_stream_json`) accept and return plain dicts only.
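The dispatch the non-`_json` variants perform can be pictured as a small normalisation step. A stdlib sketch (`normalise` is hypothetical, not the library's actual code, which lives in the Rust core's bindings):

```python
import dataclasses

def normalise(payload):
    """Reduce a dataclass, Pydantic-style model, or plain dict to a dict."""
    if dataclasses.is_dataclass(payload) and not isinstance(payload, type):
        return dataclasses.asdict(payload)
    if hasattr(payload, "model_dump"):  # duck-typed Pydantic v2 model
        return payload.model_dump()
    if isinstance(payload, dict):
        return payload
    raise TypeError(f"unsupported payload type: {type(payload).__name__}")

@dataclasses.dataclass
class Req:  # toy stand-in for a linguafranca.types request dataclass
    model: str
    temperature: float

print(normalise(Req(model="gpt-4.1-mini", temperature=0.7)))
print(normalise({"model": "claude-3-5-sonnet"}))
```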
## Conversion config

Request conversions accept an optional `config` parameter to control conversion behavior.

### Stripping encrypted reasoning

When forwarding requests between providers, thinking/reasoning blocks carry provider-specific signatures that the target API will reject. Use `strip_encrypted_reasoning` to clean them:

```python
import linguafranca as lf

result = lf.convert_request_json(
    anthropic_request_with_thinking,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPEN_RESPONSES,
    config=lf.ConversionConfig(strip_encrypted_reasoning=True),
)
```

You can also pass a plain dict:

```python
result = lf.convert_request_json(
    ...,
    config={"strip_encrypted_reasoning": True},
)
```
When `strip_encrypted_reasoning` is enabled:

- Anthropic -> Open Responses: thinking blocks keep their summary text but `encrypted_content` is removed. Redacted thinking blocks (no summary) are dropped entirely.
- Open Responses -> Anthropic: all reasoning items are dropped from the message history.
- The reasoning/thinking config (whether the model should think) is always preserved.
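The Anthropic -> Open Responses rule can be illustrated with a plain-Python sketch over a list of content blocks (`strip_thinking` and the block shapes are illustrative; the real transformation happens inside the Rust core):

```python
def strip_thinking(blocks):
    """Apply the documented rule: thinking blocks keep their summary text
    but lose encrypted_content; redacted thinking blocks are dropped."""
    out = []
    for block in blocks:
        if block["type"] == "thinking":
            # Keep the readable summary, drop the provider-specific signature.
            out.append({"type": "thinking", "thinking": block["thinking"]})
        elif block["type"] == "redacted_thinking":
            continue  # no readable summary to keep; drop entirely
        else:
            out.append(block)
    return out

blocks = [
    {"type": "thinking", "thinking": "step 1...", "encrypted_content": "opaque"},
    {"type": "redacted_thinking", "data": "opaque-blob"},
    {"type": "text", "text": "Hello!"},
]
print(strip_thinking(blocks))
```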
## Warnings

Conversions between formats can be lossy: some fields exist in one format but not another. When this happens, the library returns warnings instead of failing:

```python
result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

for w in result.warnings:
    print(f"{w.field}: {w.message}")
# e.g. "frequency_penalty: field not supported in Anthropic Messages, dropped"
```
For streaming, call `stream.take_warnings()` after the stream is consumed.
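When logging, it can help to bucket warnings by field before printing. A small helper using a stand-in warning tuple (the real warning objects expose `.field` and `.message`, as in the examples above; `Warn` and `group_warnings` are hypothetical):

```python
from collections import defaultdict, namedtuple

# Stand-in for the library's warning objects.
Warn = namedtuple("Warn", ["field", "message"])

def group_warnings(warnings):
    """Bucket warning messages by field name for compact logging."""
    grouped = defaultdict(list)
    for w in warnings:
        grouped[w.field].append(w.message)
    return dict(grouped)

warnings = [
    Warn("frequency_penalty", "field not supported in Anthropic Messages, dropped"),
    Warn("logit_bias", "field not supported in Anthropic Messages, dropped"),
]
print(group_warnings(warnings))
```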
## Error handling

All errors inherit from `ConversionError`:

```python
import linguafranca as lf

# Invalid payload structure
try:
    lf.convert_request_json(
        {"not": "a valid request"},
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.ANTHROPIC_MESSAGES,
    )
except lf.SchemaValidationError as e:
    print(e)  # payload doesn't match the source format schema

# Unsupported conversion pair (streaming only)
try:
    lf.convert_response_stream_json(
        events,
        source_format=lf.FormatName.OPEN_RESPONSES,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
except lf.UnsupportedConversionError as e:
    print(e)
```
## All available types

All request, response, and streaming event types for each format are available under `linguafranca.types`:

```python
from linguafranca.types import (
    # OpenAI Chat Completions
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
    ChatCompletionsMessageSystem,
    ChatCompletionsMessageAssistant,
    ChatCompletionsResponse,
    ChatCompletionsStreamChunk,
    # Anthropic Messages
    AnthropicRequest,
    AnthropicMessage,
    AnthropicResponse,
    # Open Responses
    OpenResponsesRequest,
    OpenResponsesResponse,
    # ... and all nested types (content parts, tool calls, etc.)
)
```

These are standard `@dataclass` definitions generated from the Rust schemas. See Typed payloads for usage examples.
## License

MIT