linguafranca

LLM API format converter with a Rust core and Python bindings.

Converts requests, responses, and streaming events between:

  • OpenAI Chat Completions
  • Anthropic Messages
  • Open Responses

Also supports within-format conversions: collect a stream into a single response, or decompose a response into stream events.

Installation

# Python
pip install martian-linguafranca
# or
uv add martian-linguafranca
# Installs as 'martian-linguafranca', import as 'linguafranca'
# Rust
cargo add linguafranca

Supported formats

FormatName                            API
FormatName.OPENAI_CHAT_COMPLETIONS    OpenAI Chat Completions
FormatName.ANTHROPIC_MESSAGES         Anthropic Messages
FormatName.OPEN_RESPONSES             Open Responses

Every pair is supported in both directions for requests and responses. Within-format collect (stream → response) and decompose (response → stream) are supported for all three formats.
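For example, a single Chat Completions request can be converted to each of the other formats in turn. A minimal sketch using convert_request_json (introduced in Quick start below); the payload and loop are illustrative:

import linguafranca as lf

request = {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hi"}]}

for target in (lf.FormatName.ANTHROPIC_MESSAGES, lf.FormatName.OPEN_RESPONSES):
    result = lf.convert_request_json(
        request,
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=target,
    )
    print(target, result.value)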

Quick start

import linguafranca as lf

# Convert a Chat Completions request to Anthropic Messages
result = lf.convert_request_json(
    {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hello"}]},
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

result.value     # converted dict
result.warnings  # list of lossy conversion warnings (dropped/modified fields)

Converting requests

import linguafranca as lf

# OpenAI Chat Completions -> Anthropic Messages
result = lf.convert_request_json(
    {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.7,
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
# {"model": "gpt-4.1-mini", "max_tokens": 4096, "messages": [...], ...}

# Anthropic Messages -> OpenAI Chat Completions
result = lf.convert_request_json(
    {
        "model": "claude-3-5-sonnet",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
    },
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

Convenience wrappers

When you always target the same format, convenience wrappers save some typing:

# Convert anything -> Anthropic Messages
result = lf.to_messages_request(
    openai_request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

# Convert anything -> OpenAI Chat Completions
result = lf.to_chat_completions_request(
    anthropic_request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

The same pattern works for responses with to_messages_response and to_chat_completions_response.
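For instance, assuming the response wrappers take the same source_format keyword as their request counterparts:

# Convert any response -> OpenAI Chat Completions
result = lf.to_chat_completions_response(
    anthropic_response,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)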

Converting responses

result = lf.convert_response_json(
    {
        "id": "chatcmpl-abc123",
        "object": "chat.completion",
        "model": "gpt-4.1-mini",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
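# Roughly an Anthropic Messages-shaped response (illustrative; exact fields
# are up to the converter):
# {"id": ..., "type": "message", "role": "assistant",
#  "content": [{"type": "text", "text": "Hello!"}], "usage": {...}, ...}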

Streaming

Sync streaming with httpx

import json
import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                   headers=headers, json=payload) as resp:
    stream = lf.convert_response_stream_json(
        parse_sse(resp),
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
    for event in stream:
        print(event)

    # Check warnings after the stream is fully consumed
    for w in stream.take_warnings():
        print(f"{w.field}: {w.message}")

Async streaming with httpx

import asyncio
import json
import httpx
import linguafranca as lf

async def parse_sse(response: httpx.Response):
    async for line in response.aiter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

async def main():
    headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }

    async with httpx.AsyncClient() as client:
        async with client.stream("POST",
                                 "https://api.openai.com/v1/chat/completions",
                                 headers=headers, json=payload) as resp:
            stream = lf.aconvert_response_stream(
                parse_sse(resp),
                source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
                target_format=lf.FormatName.OPEN_RESPONSES,
            )
            async for event in stream:
                print(event)

asyncio.run(main())

Collecting & decomposing streams

Besides converting streams between formats, you can convert within a format: collect streaming events into a single response, or decompose a response into the stream events a server would have produced.

All three formats are supported.

Collect: stream → response

import json
import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                   headers=headers, json=payload) as resp:
    result = lf.collect_response_stream_json(
        parse_sse(resp),
        format_name=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    )
    print(result.value)     # complete response dict
    print(result.warnings)  # any issues during collection

The events iterable is consumed lazily — you can pass a generator, list, or any iterator.
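For instance, replaying pre-recorded chunks from a list also works. The chunk payloads below follow OpenAI's public chunk shape and are illustrative, not library output:

import linguafranca as lf

chunks = [
    {"id": "chatcmpl-1", "object": "chat.completion.chunk", "created": 0,
     "model": "gpt-4.1-mini",
     "choices": [{"index": 0, "delta": {"role": "assistant", "content": "Hel"},
                  "finish_reason": None}]},
    {"id": "chatcmpl-1", "object": "chat.completion.chunk", "created": 0,
     "model": "gpt-4.1-mini",
     "choices": [{"index": 0, "delta": {"content": "lo!"},
                  "finish_reason": "stop"}]},
]

result = lf.collect_response_stream_json(
    chunks,
    format_name=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)
print(result.value["choices"][0]["message"]["content"])  # expect "Hello!"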

Decompose: response → stream

import linguafranca as lf

result = lf.decompose_response_to_stream_json(
    response_dict,  # a complete Anthropic Messages response, e.g. from a prior request
    format_name=lf.FormatName.ANTHROPIC_MESSAGES,
)
for event in result.value:
    print(event["type"])  # message_start, content_block_start, ...

Async variants

import linguafranca as lf

# Async collect
result = await lf.acollect_response_stream_json(
    async_sse_events,
    format_name=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

Typed event variants

The non-_json variants (collect_response_stream, decompose_response_to_stream) accept dataclasses and Pydantic models in addition to plain dicts:

result = lf.collect_response_stream(
    typed_events,  # e.g. linguafranca.types event dataclasses or Pydantic models
    format_name=lf.FormatName.OPEN_RESPONSES,
)

Typed payloads (recommended)

The package ships auto-generated @dataclass definitions for all three formats via linguafranca.types. Using them gives you IDE autocompletion and type checking, and catches mistakes before the payload hits the converter.

import linguafranca as lf
from linguafranca.types import (
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
)

request = ChatCompletionsOpenAiRequest(
    model="gpt-4.1-mini",
    messages=[
        ChatCompletionsMessageUser(content="hello", role="user"),
    ],
    temperature=0.7,
)

result = lf.convert_request(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

The non-_json variants (convert_request, convert_response, convert_response_stream) accept any of:

  • linguafranca.types dataclasses (recommended)
  • plain dicts
  • Pydantic models, serialized via model.model_dump() (see the sketch below)

The _json variants (convert_request_json, convert_response_json, convert_response_stream_json) accept and return plain dicts only.
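For the Pydantic case, a minimal sketch; the two models below are hand-rolled stand-ins for illustration, not types shipped by the package:

import linguafranca as lf
from pydantic import BaseModel

class Msg(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    model: str
    messages: list[Msg]

# Serialized internally via model.model_dump() before conversion
result = lf.convert_request(
    ChatRequest(model="gpt-4.1-mini", messages=[Msg(role="user", content="hello")]),
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)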

Conversion config

Request conversions accept an optional config parameter to control conversion behavior.

Stripping encrypted reasoning

When forwarding requests between providers, thinking/reasoning blocks carry provider-specific signatures that the target API will reject. Use strip_encrypted_reasoning to clean them:

import linguafranca as lf

result = lf.convert_request_json(
    anthropic_request_with_thinking,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPEN_RESPONSES,
    config=lf.ConversionConfig(strip_encrypted_reasoning=True),
)

You can also pass a plain dict:

result = lf.convert_request_json(
    ...,
    config={"strip_encrypted_reasoning": True},
)

When strip_encrypted_reasoning is enabled (see the sketch after this list):

  • Anthropic -> Open Responses: Thinking blocks keep their summary text but encrypted_content is removed. Redacted thinking blocks (no summary) are dropped entirely.
  • Open Responses -> Anthropic: All reasoning items are dropped from the message history.
  • The reasoning/thinking config (whether the model should think) is always preserved.
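A sketch of the Anthropic -> Open Responses case; the thinking block below uses Anthropic's public shape, and the expected effect is described per the rules above rather than exact converter output:

import linguafranca as lf

request = {
    "model": "claude-3-5-sonnet",
    "max_tokens": 64,
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [
        {"role": "user", "content": "hello"},
        {"role": "assistant", "content": [
            {"type": "thinking", "thinking": "I should greet back.", "signature": "sig-abc123"},
            {"type": "text", "text": "Hi!"},
        ]},
        {"role": "user", "content": "and again"},
    ],
}

result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPEN_RESPONSES,
    config=lf.ConversionConfig(strip_encrypted_reasoning=True),
)
# Per the first rule above, the converted request keeps the thinking summary
# text but drops the provider-specific signature; the thinking config survives.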

Warnings

Conversions between formats can be lossy — some fields exist in one format but not another. When this happens, the library returns warnings instead of failing:

result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

for w in result.warnings:
    print(f"{w.field}: {w.message}")
    # e.g. "frequency_penalty: field not supported in Anthropic Messages, dropped"

For streaming, call stream.take_warnings() after the stream is consumed.

Error handling

All errors inherit from ConversionError:

import linguafranca as lf

# Invalid payload structure
try:
    lf.convert_request_json(
        {"not": "a valid request"},
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.ANTHROPIC_MESSAGES,
    )
except lf.SchemaValidationError as e:
    print(e)  # payload doesn't match the source format schema

# Unsupported conversion pair (streaming only)
try:
    lf.convert_response_stream_json(
        events,
        source_format=lf.FormatName.OPEN_RESPONSES,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
except lf.UnsupportedConversionError as e:
    print(e)
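Since both exceptions inherit from ConversionError, a single handler covers any conversion failure. A sketch of a fall-through wrapper; the fallback policy is up to you:

import linguafranca as lf

def safe_convert(payload: dict) -> dict:
    """Convert, falling back to the original payload on any conversion failure."""
    try:
        return lf.convert_request_json(
            payload,
            source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
            target_format=lf.FormatName.ANTHROPIC_MESSAGES,
        ).value
    except lf.ConversionError as e:
        print(f"conversion failed, forwarding unchanged: {e}")
        return payload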

All available types

All request, response, and streaming event types for each format are available under linguafranca.types:

from linguafranca.types import (
    # OpenAI Chat Completions
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
    ChatCompletionsMessageSystem,
    ChatCompletionsMessageAssistant,
    ChatCompletionsResponse,
    ChatCompletionsStreamChunk,
    # Anthropic Messages
    AnthropicRequest,
    AnthropicMessage,
    AnthropicResponse,
    # Open Responses
    OpenResponsesRequest,
    OpenResponsesResponse,
    # ... and all nested types (content parts, tool calls, etc.)
)

These are standard @dataclass definitions generated from the Rust schemas. See Typed payloads for usage examples.
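Since these are ordinary dataclasses, the standard library can introspect them; for example, listing the fields of a request type (the field names shown are illustrative):

from dataclasses import fields
from linguafranca.types import ChatCompletionsOpenAiRequest

print([f.name for f in fields(ChatCompletionsOpenAiRequest)])
# e.g. ['model', 'messages', 'temperature', ...]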

License

MIT
