
linguafranca

LLM API format converter with a Rust core and Python bindings.

Converts requests, responses, and streaming events between:

  • OpenAI Chat Completions
  • Anthropic Messages
  • Open Responses

Installation

# Python
pip install martian-linguafranca
# or
uv add martian-linguafranca
# Installs as 'martian-linguafranca', import as 'linguafranca'
# Rust
cargo add linguafranca

Supported formats

FormatName                           API
FormatName.OPENAI_CHAT_COMPLETIONS   OpenAI Chat Completions
FormatName.ANTHROPIC_MESSAGES        Anthropic Messages
FormatName.OPEN_RESPONSES            Open Responses

Every pair is supported in both directions for requests and responses.

Quick start

import linguafranca as lf

# Convert a Chat Completions request to Anthropic Messages
result = lf.convert_request_json(
    {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hello"}]},
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

result.value     # converted dict
result.warnings  # list of lossy conversion warnings (dropped/modified fields)

Converting requests

import linguafranca as lf

# OpenAI Chat Completions -> Anthropic Messages
result = lf.convert_request_json(
    {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.7,
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
# {"model": "gpt-4.1-mini", "max_tokens": 4096, "messages": [...], ...}

# Anthropic Messages -> OpenAI Chat Completions
result = lf.convert_request_json(
    {
        "model": "claude-3-5-sonnet",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
    },
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

Convenience wrappers

When you always target the same format, convenience wrappers save some typing:

# Convert anything -> Anthropic Messages
result = lf.to_messages_request(
    openai_request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

# Convert anything -> OpenAI Chat Completions
result = lf.to_chat_completions_request(
    anthropic_request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

The same pattern works for responses with to_messages_response and to_chat_completions_response.

Converting responses

result = lf.convert_response_json(
    {
        "id": "chatcmpl-abc123",
        "object": "chat.completion",
        "model": "gpt-4.1-mini",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

Streaming

Sync streaming with httpx

import json
import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                   headers=headers, json=payload) as resp:
    stream = lf.convert_response_stream_json(
        parse_sse(resp),
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
    for event in stream:
        print(event)

    # Check warnings after the stream is fully consumed
    for w in stream.take_warnings():
        print(f"{w.field}: {w.message}")
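Going the other way, e.g. when re-emitting converted events from a translation proxy, each event can be framed back into SSE lines. A minimal sketch (the `format_sse*` helpers are illustrative, not part of the library), assuming events are plain dicts:

```python
import json

def format_sse(event: dict) -> str:
    """Frame one event as a Server-Sent Events 'data:' line."""
    return f"data: {json.dumps(event)}\n\n"

def format_sse_stream(events):
    """Yield SSE-framed lines for each event, terminated by [DONE]."""
    for event in events:
        yield format_sse(event)
    yield "data: [DONE]\n\n"
```

Paired with convert_response_stream_json, this gives a simple proxy loop: parse the upstream SSE, convert each event, and re-frame it for the downstream client.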

Async streaming with httpx

import asyncio
import json
import httpx
import linguafranca as lf

async def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    async for line in response.aiter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

async def main():
    headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }

    async with httpx.AsyncClient() as client:
        async with client.stream("POST",
                                 "https://api.openai.com/v1/chat/completions",
                                 headers=headers, json=payload) as resp:
            stream = lf.aconvert_response_stream(
                parse_sse(resp),
                source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
                target_format=lf.FormatName.OPEN_RESPONSES,
            )
            async for event in stream:
                print(event)

asyncio.run(main())

Typed payloads (recommended)

The package ships auto-generated @dataclass definitions for all three formats in linguafranca.types. Using them gives you IDE autocompletion and type checking, and catches mistakes before the payload reaches the converter.

import linguafranca as lf
from linguafranca.types import (
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
)

request = ChatCompletionsOpenAiRequest(
    model="gpt-4.1-mini",
    messages=[
        ChatCompletionsMessageUser(content="hello", role="user"),
    ],
    temperature=0.7,
)

result = lf.convert_request(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

The non-_json variants (convert_request, convert_response, convert_response_stream) accept any of:

  • linguafranca.types dataclasses (recommended)
  • plain dicts
  • Pydantic models — serialised via model.model_dump()

The _json variants (convert_request_json, convert_response_json, convert_response_stream_json) accept and return plain dicts only.

Conversion config

Request conversions accept an optional config parameter to control conversion behavior.

Stripping encrypted reasoning

When forwarding requests between providers, thinking/reasoning blocks carry provider-specific signatures that the target API will reject. Use strip_encrypted_reasoning to remove them:

import linguafranca as lf

result = lf.convert_request_json(
    anthropic_request_with_thinking,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPEN_RESPONSES,
    config=lf.ConversionConfig(strip_encrypted_reasoning=True),
)

You can also pass a plain dict:

result = lf.convert_request_json(
    ...,
    config={"strip_encrypted_reasoning": True},
)

When strip_encrypted_reasoning is enabled:

  • Anthropic -> Open Responses: Thinking blocks keep their summary text but encrypted_content is removed. Redacted thinking blocks (no summary) are dropped entirely.
  • Open Responses -> Anthropic: All reasoning items are dropped from the message history.
  • The reasoning/thinking config (whether the model should think) is always preserved.

Warnings

Conversions between formats can be lossy — some fields exist in one format but not another. When this happens, the library returns warnings instead of failing:

result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

for w in result.warnings:
    print(f"{w.field}: {w.message}")
    # e.g. "frequency_penalty: field not supported in Anthropic Messages, dropped"

For streaming, call stream.take_warnings() after the stream is consumed.

Error handling

All errors inherit from ConversionError:

import linguafranca as lf

# Invalid payload structure
try:
    lf.convert_request_json(
        {"not": "a valid request"},
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.ANTHROPIC_MESSAGES,
    )
except lf.SchemaValidationError as e:
    print(e)  # payload doesn't match the source format schema

# Unsupported conversion pair (streaming only)
try:
    lf.convert_response_stream_json(
        events,
        source_format=lf.FormatName.OPEN_RESPONSES,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
except lf.UnsupportedConversionError as e:
    print(e)

All available types

All request, response, and streaming event types for each format are available under linguafranca.types:

from linguafranca.types import (
    # OpenAI Chat Completions
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
    ChatCompletionsMessageSystem,
    ChatCompletionsMessageAssistant,
    ChatCompletionsResponse,
    ChatCompletionsStreamChunk,
    # Anthropic Messages
    AnthropicRequest,
    AnthropicMessage,
    AnthropicResponse,
    # Open Responses
    OpenResponsesRequest,
    OpenResponsesResponse,
    # ... and all nested types (content parts, tool calls, etc.)
)

These are standard @dataclass definitions generated from the Rust schemas. See Typed payloads for usage examples.
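Because these are ordinary dataclasses, the standard library's dataclasses.asdict turns a typed payload, nested types included, into the plain dict that the _json variants expect. A sketch with stand-in classes, since the real ones live in linguafranca.types:

```python
from dataclasses import asdict, dataclass, field
from typing import List

# Stand-ins mirroring the shape of the generated classes.
@dataclass
class Message:
    role: str
    content: str

@dataclass
class Request:
    model: str
    messages: List[Message] = field(default_factory=list)
    temperature: float = 1.0

req = Request(model="gpt-4.1-mini", messages=[Message("user", "hello")])
payload = asdict(req)  # nested dataclasses become nested dicts
```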

License

MIT
