
linguafranca

LLM API format converter with a Rust core and Python bindings.

Converts requests, responses, and streaming events between:

  • OpenAI Chat Completions
  • Anthropic Messages
  • Open Responses

Installation

# Python
pip install martian-linguafranca
# or
uv add martian-linguafranca
# Installs as 'martian-linguafranca', import as 'linguafranca'
# Rust
cargo add linguafranca

Supported formats

FormatName                           API
FormatName.OPENAI_CHAT_COMPLETIONS   OpenAI Chat Completions
FormatName.ANTHROPIC_MESSAGES        Anthropic Messages
FormatName.OPEN_RESPONSES            Open Responses

Every pair is supported in both directions for requests and responses.
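With three formats and every ordered pair supported, the full matrix is six directed conversions. A quick stdlib sketch enumerating them (format names copied from the table above):

```python
from itertools import permutations

formats = ["OPENAI_CHAT_COMPLETIONS", "ANTHROPIC_MESSAGES", "OPEN_RESPONSES"]

# Every ordered (source, target) pair with source != target: 3 * 2 = 6 conversions
pairs = list(permutations(formats, 2))
for src, tgt in pairs:
    print(f"{src} -> {tgt}")
print(len(pairs))  # 6
```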

Quick start

import linguafranca as lf

# Convert a Chat Completions request to Anthropic Messages
result = lf.convert_request_json(
    {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hello"}]},
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

result.value     # converted dict
result.warnings  # list of lossy conversion warnings (dropped/modified fields)

Converting requests

import linguafranca as lf

# OpenAI Chat Completions -> Anthropic Messages
result = lf.convert_request_json(
    {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.7,
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
# {"model": "gpt-4.1-mini", "max_tokens": 4096, "messages": [...], ...}

# Anthropic Messages -> OpenAI Chat Completions
result = lf.convert_request_json(
    {
        "model": "claude-3-5-sonnet",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
    },
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

Convenience wrappers

When you always target the same format, convenience wrappers save some typing:

# Convert anything -> Anthropic Messages
result = lf.to_messages_request(
    openai_request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

# Convert anything -> OpenAI Chat Completions
result = lf.to_chat_completions_request(
    anthropic_request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

The same pattern works for responses with to_messages_response and to_chat_completions_response.

Converting responses

result = lf.convert_response_json(
    {
        "id": "chatcmpl-abc123",
        "object": "chat.completion",
        "model": "gpt-4.1-mini",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

Streaming

Sync streaming with httpx

import json
import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                   headers=headers, json=payload) as resp:
    stream = lf.convert_response_stream_json(
        parse_sse(resp),
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
    for event in stream:
        print(event)

    # Check warnings after the stream is fully consumed
    for w in stream.take_warnings():
        print(f"{w.field}: {w.message}")
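The `data:` filtering in parse_sse can be exercised without a network call. A minimal sketch on hand-written SSE lines (the chunk payloads are invented):

```python
import json

def parse_sse_lines(lines):
    """Yield parsed JSON objects from SSE 'data:' lines, skipping the [DONE] sentinel."""
    for line in lines:
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])  # len("data: ") == 6

sample = [
    'data: {"delta": "Hel"}',
    '',                        # SSE event separator, ignored
    'data: {"delta": "lo"}',
    'data: [DONE]',            # end-of-stream sentinel, ignored
]
chunks = list(parse_sse_lines(sample))
print(chunks)  # [{'delta': 'Hel'}, {'delta': 'lo'}]
```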

Async streaming with httpx

import json
import httpx
import linguafranca as lf

async def parse_sse(response: httpx.Response):
    async for line in response.aiter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

async def main():
    headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }

    async with httpx.AsyncClient() as client:
        async with client.stream("POST",
                                 "https://api.openai.com/v1/chat/completions",
                                 headers=headers, json=payload) as resp:
            stream = lf.aconvert_response_stream(
                parse_sse(resp),
                source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
                target_format=lf.FormatName.OPEN_RESPONSES,
            )
            async for event in stream:
                print(event)

Typed payloads (recommended)

The package ships auto-generated @dataclass definitions for all three formats via linguafranca.types. Using them gives you IDE autocompletion and type checking, and catches mistakes before the payload reaches the converter.

import linguafranca as lf
from linguafranca.types import (
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
)

request = ChatCompletionsOpenAiRequest(
    model="gpt-4.1-mini",
    messages=[
        ChatCompletionsMessageUser(content="hello", role="user"),
    ],
    temperature=0.7,
)

result = lf.convert_request(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

The non-_json variants (convert_request, convert_response, convert_response_stream) accept any of:

  • linguafranca.types dataclasses (recommended)
  • plain dicts
  • Pydantic models — serialized via model.model_dump()

The _json variants (convert_request_json, convert_response_json, convert_response_stream_json) accept and return plain dicts only.
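The acceptance rules above amount to a single normalization step. The following is an illustrative sketch, not the library's actual code; to_plain_dict is a hypothetical helper:

```python
import dataclasses

def to_plain_dict(payload):
    """Illustrative: coerce a dataclass, Pydantic model, or dict to a plain dict."""
    if dataclasses.is_dataclass(payload) and not isinstance(payload, type):
        return dataclasses.asdict(payload)
    if hasattr(payload, "model_dump"):  # Pydantic v2 models
        return payload.model_dump()
    return dict(payload)

@dataclasses.dataclass
class Message:
    role: str
    content: str

print(to_plain_dict(Message(role="user", content="hello")))
# {'role': 'user', 'content': 'hello'}
print(to_plain_dict({"role": "user", "content": "hello"}))
# {'role': 'user', 'content': 'hello'}
```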

Conversion config

Request conversions accept an optional config parameter to control conversion behavior.

Stripping encrypted reasoning

When forwarding requests between providers, thinking/reasoning blocks carry provider-specific signatures that the target API will reject. Use strip_encrypted_reasoning to clean them:

import linguafranca as lf

result = lf.convert_request_json(
    anthropic_request_with_thinking,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPEN_RESPONSES,
    config=lf.ConversionConfig(strip_encrypted_reasoning=True),
)

You can also pass a plain dict:

result = lf.convert_request_json(
    ...,
    config={"strip_encrypted_reasoning": True},
)

When strip_encrypted_reasoning is enabled:

  • Anthropic -> Open Responses: Thinking blocks keep their summary text but encrypted_content is removed. Redacted thinking blocks (no summary) are dropped entirely.
  • Open Responses -> Anthropic: All reasoning items are dropped from the message history.
  • The reasoning/thinking config (whether the model should think) is always preserved.
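The Anthropic-side behavior in the first bullet can be sketched on plain content blocks. The block and field names here (thinking, redacted_thinking, signature) follow Anthropic's block shapes but are illustrative only; the library's real logic lives in the Rust core:

```python
def strip_reasoning_sketch(blocks):
    """Sketch of the first bullet: keep the readable thinking text,
    drop the provider-specific signature, and drop redacted thinking
    blocks (which have no readable summary) entirely."""
    out = []
    for block in blocks:
        if block.get("type") == "redacted_thinking":
            continue  # nothing readable to keep, dropped entirely
        if block.get("type") == "thinking":
            block = {k: v for k, v in block.items() if k != "signature"}
        out.append(block)
    return out

blocks = [
    {"type": "thinking", "thinking": "Let me check...", "signature": "opaque-sig"},
    {"type": "redacted_thinking", "data": "opaque-blob"},
    {"type": "text", "text": "Hello!"},
]
print(strip_reasoning_sketch(blocks))
# [{'type': 'thinking', 'thinking': 'Let me check...'}, {'type': 'text', 'text': 'Hello!'}]
```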

Warnings

Conversions between formats can be lossy — some fields exist in one format but not another. When this happens, the library returns warnings instead of failing:

result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

for w in result.warnings:
    print(f"{w.field}: {w.message}")
    # e.g. "frequency_penalty: field not supported in Anthropic Messages, dropped"

For streaming, call stream.take_warnings() after the stream is consumed.

Error handling

All errors inherit from ConversionError:

import linguafranca as lf

# Invalid payload structure
try:
    lf.convert_request_json(
        {"not": "a valid request"},
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.ANTHROPIC_MESSAGES,
    )
except lf.SchemaValidationError as e:
    print(e)  # payload doesn't match the source format schema

# Unsupported conversion pair (streaming only)
try:
    lf.convert_response_stream_json(
        events,
        source_format=lf.FormatName.OPEN_RESPONSES,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
except lf.UnsupportedConversionError as e:
    print(e)

All available types

All request, response, and streaming event types for each format are available under linguafranca.types:

from linguafranca.types import (
    # OpenAI Chat Completions
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
    ChatCompletionsMessageSystem,
    ChatCompletionsMessageAssistant,
    ChatCompletionsResponse,
    ChatCompletionsStreamChunk,
    # Anthropic Messages
    AnthropicRequest,
    AnthropicMessage,
    AnthropicResponse,
    # Open Responses
    OpenResponsesRequest,
    OpenResponsesResponse,
    # ... and all nested types (content parts, tool calls, etc.)
)

These are standard @dataclass definitions generated from the Rust schemas. See Typed payloads for usage examples.

License

MIT
