
linguafranca

LLM API format converter with a Rust core and Python bindings.

Converts requests, responses, and streaming events between:

  • OpenAI Chat Completions
  • Anthropic Messages
  • Open Responses

Installation

# Python
pip install martian-linguafranca
# or
uv add martian-linguafranca
# Installs as 'martian-linguafranca', import as 'linguafranca'
# Rust
cargo add linguafranca

Supported formats

FormatName                           API
FormatName.OPENAI_CHAT_COMPLETIONS   OpenAI Chat Completions
FormatName.ANTHROPIC_MESSAGES        Anthropic Messages
FormatName.OPEN_RESPONSES            Open Responses

Every pair is supported in both directions for requests and responses.
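With three formats and every pair supported in both directions, the conversion matrix has six directed entries; a quick stdlib sketch (format names mirror the table above):

```python
from itertools import permutations

# The three supported formats; every ordered pair of distinct
# formats is a valid (source, target) conversion direction.
formats = ["OPENAI_CHAT_COMPLETIONS", "ANTHROPIC_MESSAGES", "OPEN_RESPONSES"]

pairs = list(permutations(formats, 2))
print(len(pairs))  # 6 directed conversions
```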

Quick start

import linguafranca as lf

# Convert a Chat Completions request to Anthropic Messages
result = lf.convert_request_json(
    {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hello"}]},
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

result.value     # converted dict
result.warnings  # list of lossy conversion warnings (dropped/modified fields)

Converting requests

import linguafranca as lf

# OpenAI Chat Completions -> Anthropic Messages
result = lf.convert_request_json(
    {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.7,
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
# {"model": "gpt-4.1-mini", "max_tokens": 4096, "messages": [...], ...}

# Anthropic Messages -> OpenAI Chat Completions
result = lf.convert_request_json(
    {
        "model": "claude-3-5-sonnet",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
    },
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

Convenience wrappers

When you always target the same format, convenience wrappers save some typing:

# Convert anything -> Anthropic Messages
result = lf.to_messages_request(
    openai_request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

# Convert anything -> OpenAI Chat Completions
result = lf.to_chat_completions_request(
    anthropic_request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

The same pattern works for responses with to_messages_response and to_chat_completions_response.

Converting responses

result = lf.convert_response_json(
    {
        "id": "chatcmpl-abc123",
        "object": "chat.completion",
        "model": "gpt-4.1-mini",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

Streaming

Sync streaming with httpx

import json
import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                   headers=headers, json=payload) as resp:
    stream = lf.convert_response_stream_json(
        parse_sse(resp),
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
    for event in stream:
        print(event)

    # Check warnings after the stream is fully consumed
    for w in stream.take_warnings():
        print(f"{w.field}: {w.message}")
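The SSE framing used above can be exercised without a live API call; this stand-alone sketch applies the same `data: ` / `[DONE]` rules to canned lines, as a provider's stream would emit them:

```python
import json

def parse_sse_lines(lines):
    """Yield parsed JSON objects from raw SSE lines (same rules as parse_sse)."""
    for line in lines:
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

events = list(parse_sse_lines([
    'data: {"id": "chunk-1"}',
    "",                # blank separator between SSE events
    "data: [DONE]",    # terminator, not a JSON payload
]))
print(events)  # [{'id': 'chunk-1'}]
```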

Async streaming with httpx

import asyncio
import json
import httpx
import linguafranca as lf

async def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    async for line in response.aiter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

async def main():
    headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }

    async with httpx.AsyncClient() as client:
        async with client.stream("POST",
                                 "https://api.openai.com/v1/chat/completions",
                                 headers=headers, json=payload) as resp:
            stream = lf.aconvert_response_stream(
                parse_sse(resp),
                source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
                target_format=lf.FormatName.OPEN_RESPONSES,
            )
            async for event in stream:
                print(event)

asyncio.run(main())

Typed payloads (recommended)

The package ships auto-generated @dataclass definitions for all three formats via linguafranca.types. Using them gives you IDE autocompletion and type checking, and catches mistakes before the payload reaches the converter.

import linguafranca as lf
from linguafranca.types import (
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
)

request = ChatCompletionsOpenAiRequest(
    model="gpt-4.1-mini",
    messages=[
        ChatCompletionsMessageUser(content="hello", role="user"),
    ],
    temperature=0.7,
)

result = lf.convert_request(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

The non-_json variants (convert_request, convert_response, convert_response_stream) accept any of:

  • linguafranca.types dataclasses (recommended)
  • plain dicts
  • Pydantic models — serialized via model.model_dump()

The _json variants (convert_request_json, convert_response_json, convert_response_stream_json) accept and return plain dicts only.
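The bridge from typed payloads to the plain dicts the _json variants expect is just dataclass serialization; a stdlib sketch with stand-in classes (not the library's own types, which live in linguafranca.types):

```python
from dataclasses import dataclass, asdict

# Stand-in dataclasses shaped like a Chat Completions request,
# for illustration only.
@dataclass
class Message:
    role: str
    content: str

@dataclass
class Request:
    model: str
    messages: list

request = Request(model="gpt-4.1-mini",
                  messages=[Message(role="user", content="hello")])

# asdict() recurses into nested dataclasses, yielding a plain-dict payload.
payload = asdict(request)
print(payload["messages"][0])  # {'role': 'user', 'content': 'hello'}
```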

Conversion config

Request conversions accept an optional config parameter to control conversion behavior.

Stripping encrypted reasoning

When forwarding requests between providers, thinking/reasoning blocks carry provider-specific signatures that the target API will reject. Use strip_encrypted_reasoning to clean them:

import linguafranca as lf

result = lf.convert_request_json(
    anthropic_request_with_thinking,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPEN_RESPONSES,
    config=lf.ConversionConfig(strip_encrypted_reasoning=True),
)

You can also pass a plain dict:

result = lf.convert_request_json(
    ...,
    config={"strip_encrypted_reasoning": True},
)

When strip_encrypted_reasoning is enabled:

  • Anthropic -> Open Responses: Thinking blocks keep their summary text but encrypted_content is removed. Redacted thinking blocks (no summary) are dropped entirely.
  • Open Responses -> Anthropic: All reasoning items are dropped from the message history.
  • The reasoning/thinking config (whether the model should think) is always preserved.
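The Anthropic -> Open Responses rules above can be illustrated on plain dicts; the field names here mirror the prose, not the library's internal representation:

```python
def strip_encrypted(items):
    """Remove encrypted_content from reasoning items; drop those with no summary."""
    out = []
    for item in items:
        if item.get("type") == "reasoning":
            if not item.get("summary"):           # redacted: nothing recoverable
                continue
            item = {k: v for k, v in item.items() if k != "encrypted_content"}
        out.append(item)
    return out

items = [
    {"type": "reasoning", "summary": "thought about it", "encrypted_content": "sig"},
    {"type": "reasoning", "summary": None, "encrypted_content": "sig"},
    {"type": "message", "content": "hello"},
]
print(strip_encrypted(items))
# the summary-bearing reasoning item survives without encrypted_content;
# the redacted one is gone; the message passes through untouched
```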

Warnings

Conversions between formats can be lossy — some fields exist in one format but not another. When this happens, the library returns warnings instead of failing:

result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

for w in result.warnings:
    print(f"{w.field}: {w.message}")
    # e.g. "frequency_penalty: field not supported in Anthropic Messages, dropped"

For streaming, call stream.take_warnings() after the stream is consumed.
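Warning objects expose field and message; when a conversion drops several fields it can help to group the warnings before logging, e.g. with a defaultdict (the Warning_ class below is a stand-in for the library's warning records):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Warning_:  # stand-in: real warnings come from result.warnings
    field: str
    message: str

warnings = [
    Warning_("frequency_penalty", "field not supported, dropped"),
    Warning_("frequency_penalty", "field not supported, dropped"),
    Warning_("logit_bias", "field not supported, dropped"),
]

by_field = defaultdict(list)
for w in warnings:
    by_field[w.field].append(w.message)

print(sorted(by_field))  # ['frequency_penalty', 'logit_bias']
```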

Error handling

All errors inherit from ConversionError:

import linguafranca as lf

# Invalid payload structure
try:
    lf.convert_request_json(
        {"not": "a valid request"},
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.ANTHROPIC_MESSAGES,
    )
except lf.SchemaValidationError as e:
    print(e)  # payload doesn't match the source format schema

# Unsupported conversion pair (streaming only)
try:
    lf.convert_response_stream_json(
        events,
        source_format=lf.FormatName.OPEN_RESPONSES,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
except lf.UnsupportedConversionError as e:
    print(e)

All available types

All request, response, and streaming event types for each format are available under linguafranca.types:

from linguafranca.types import (
    # OpenAI Chat Completions
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
    ChatCompletionsMessageSystem,
    ChatCompletionsMessageAssistant,
    ChatCompletionsResponse,
    ChatCompletionsStreamChunk,
    # Anthropic Messages
    AnthropicRequest,
    AnthropicMessage,
    AnthropicResponse,
    # Open Responses
    OpenResponsesRequest,
    OpenResponsesResponse,
    # ... and all nested types (content parts, tool calls, etc.)
)

These are standard @dataclass definitions generated from the Rust schemas. See Typed payloads for usage examples.

License

MIT
