
linguafranca

LLM API format converter with a Rust core and Python bindings.

Converts requests, responses, and streaming events between:

  • OpenAI Chat Completions
  • Anthropic Messages
  • Open Responses

Installation

# Python
pip install martian-linguafranca
# or
uv add martian-linguafranca
# Installs as 'martian-linguafranca', import as 'linguafranca'
# Rust
cargo add linguafranca

Supported formats

FormatName                           API
FormatName.OPENAI_CHAT_COMPLETIONS   OpenAI Chat Completions
FormatName.ANTHROPIC_MESSAGES        Anthropic Messages
FormatName.OPEN_RESPONSES            Open Responses

Every pair is supported in both directions for requests and responses.
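With three formats, that means six directed conversion pairs. A quick sketch enumerating them (format names shown as plain strings here for illustration, not the actual enum members):

```python
from itertools import permutations

# The three supported formats, as plain strings for illustration
FORMATS = ["OPENAI_CHAT_COMPLETIONS", "ANTHROPIC_MESSAGES", "OPEN_RESPONSES"]

# Every ordered (source, target) pair with distinct formats
pairs = list(permutations(FORMATS, 2))
for src, dst in pairs:
    print(f"{src} -> {dst}")

# 3 formats => 3 * 2 = 6 directed conversion pairs
assert len(pairs) == 6
```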

Quick start

import linguafranca as lf

# Convert a Chat Completions request to Anthropic Messages
result = lf.convert_request_json(
    {"model": "gpt-4.1-mini", "messages": [{"role": "user", "content": "hello"}]},
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

result.value     # converted dict
result.warnings  # list of lossy conversion warnings (dropped/modified fields)
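Every conversion returns a result object with those two attributes. A stand-in sketch of that shape (hypothetical dataclasses, not the library's actual definitions, but with the attribute names used throughout this README):

```python
from dataclasses import dataclass, field

@dataclass
class ConversionWarning:
    field: str    # the affected field name, e.g. "frequency_penalty"
    message: str  # what happened to it during conversion

@dataclass
class ConversionResult:
    value: dict                                   # converted payload
    warnings: list = field(default_factory=list)  # lossy-conversion notes

result = ConversionResult(
    value={"model": "claude-3-5-sonnet", "max_tokens": 4096},
    warnings=[ConversionWarning("frequency_penalty", "field not supported, dropped")],
)
assert result.value["max_tokens"] == 4096
assert result.warnings[0].field == "frequency_penalty"
```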

Converting requests

import linguafranca as lf

# OpenAI Chat Completions -> Anthropic Messages
result = lf.convert_request_json(
    {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "temperature": 0.7,
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)
# {"model": "gpt-4.1-mini", "max_tokens": 4096, "messages": [...], ...}

# Anthropic Messages -> OpenAI Chat Completions
result = lf.convert_request_json(
    {
        "model": "claude-3-5-sonnet",
        "max_tokens": 64,
        "messages": [{"role": "user", "content": "hello"}],
    },
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
    target_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

Convenience wrappers

When you always target the same format, convenience wrappers save some typing:

# Convert anything -> Anthropic Messages
result = lf.to_messages_request(
    openai_request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
)

# Convert anything -> OpenAI Chat Completions
result = lf.to_chat_completions_request(
    anthropic_request,
    source_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

The same pattern works for responses with to_messages_response and to_chat_completions_response.
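Conceptually, these wrappers behave like the general converter with target_format pinned. A stand-in sketch using functools.partial (the converter body below is a placeholder, not the library's real internals):

```python
from functools import partial

def convert_request_json(payload, *, source_format, target_format):
    # Stand-in for the real converter: just records the routing here.
    return {"payload": payload, "from": source_format, "to": target_format}

# A wrapper that always targets Anthropic Messages, like to_messages_request
to_messages_request = partial(convert_request_json, target_format="ANTHROPIC_MESSAGES")

result = to_messages_request(
    {"model": "gpt-4.1-mini"},
    source_format="OPENAI_CHAT_COMPLETIONS",
)
assert result["to"] == "ANTHROPIC_MESSAGES"
```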

Converting responses

result = lf.convert_response_json(
    {
        "id": "chatcmpl-abc123",
        "object": "chat.completion",
        "model": "gpt-4.1-mini",
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 5, "completion_tokens": 7, "total_tokens": 12},
    },
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

Streaming

Sync streaming with httpx

import json
import httpx
import linguafranca as lf

def parse_sse(response: httpx.Response):
    """Yield parsed JSON objects from an SSE stream."""
    for line in response.iter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
payload = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
}

with httpx.stream("POST", "https://api.openai.com/v1/chat/completions",
                   headers=headers, json=payload) as resp:
    stream = lf.convert_response_stream_json(
        parse_sse(resp),
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
    for event in stream:
        print(event)

    # Check warnings after the stream is fully consumed
    for w in stream.take_warnings():
        print(f"{w.field}: {w.message}")
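The parse_sse helper above only depends on line iteration, so the same filtering logic can be exercised against a canned SSE transcript without any network call (the chunk payloads below are made up):

```python
import json

def parse_sse_lines(lines):
    """Yield parsed JSON objects from an iterable of SSE lines."""
    for line in lines:
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

transcript = [
    'data: {"id": "chunk-1", "delta": "Hel"}',
    "",  # SSE events are separated by blank lines
    'data: {"id": "chunk-2", "delta": "lo"}',
    "",
    "data: [DONE]",  # terminal sentinel, skipped by the filter
]

events = list(parse_sse_lines(transcript))
assert [e["id"] for e in events] == ["chunk-1", "chunk-2"]
```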

Async streaming with httpx

import json
import httpx
import linguafranca as lf

async def parse_sse(response: httpx.Response):
    async for line in response.aiter_lines():
        if line.startswith("data: ") and line != "data: [DONE]":
            yield json.loads(line[6:])

async def main():
    headers = {"Authorization": "Bearer YOUR_KEY", "Content-Type": "application/json"}
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    }

    async with httpx.AsyncClient() as client:
        async with client.stream("POST",
                                 "https://api.openai.com/v1/chat/completions",
                                 headers=headers, json=payload) as resp:
            stream = lf.aconvert_response_stream(
                parse_sse(resp),
                source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
                target_format=lf.FormatName.OPEN_RESPONSES,
            )
            async for event in stream:
                print(event)

Typed payloads (recommended)

The package ships auto-generated @dataclass definitions for all three formats via linguafranca.types. Using them gives you IDE autocompletion and type checking, and catches mistakes before the payload hits the converter.

import linguafranca as lf
from linguafranca.types import (
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
)

request = ChatCompletionsOpenAiRequest(
    model="gpt-4.1-mini",
    messages=[
        ChatCompletionsMessageUser(content="hello", role="user"),
    ],
    temperature=0.7,
)

result = lf.convert_request(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)
print(result.value)

The non-_json variants (convert_request, convert_response, convert_response_stream) accept any of:

  • linguafranca.types dataclasses (recommended)
  • plain dicts
  • Pydantic models — serialised via model.model_dump()

The _json variants (convert_request_json, convert_response_json, convert_response_stream_json) accept and return plain dicts only.
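A plausible normalisation step such a dispatcher might perform, sketched with stand-in logic (this is not the library's actual code): dataclasses go through dataclasses.asdict, Pydantic-style models through model_dump(), and plain dicts pass through unchanged.

```python
import dataclasses

def normalize_payload(obj):
    """Coerce a dataclass, Pydantic-style model, or dict into a plain dict."""
    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
        return dataclasses.asdict(obj)
    if hasattr(obj, "model_dump"):  # Pydantic v2 models expose model_dump()
        return obj.model_dump()
    if isinstance(obj, dict):
        return obj
    raise TypeError(f"unsupported payload type: {type(obj).__name__}")

@dataclasses.dataclass
class FakeRequest:  # stand-in for a linguafranca.types dataclass
    model: str
    temperature: float

assert normalize_payload(FakeRequest("gpt-4.1-mini", 0.7)) == {
    "model": "gpt-4.1-mini",
    "temperature": 0.7,
}
assert normalize_payload({"model": "x"}) == {"model": "x"}
```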

Warnings

Conversions between formats can be lossy — some fields exist in one format but not another. When this happens, the library returns warnings instead of failing:

result = lf.convert_request_json(
    request,
    source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
    target_format=lf.FormatName.ANTHROPIC_MESSAGES,
)

for w in result.warnings:
    print(f"{w.field}: {w.message}")
    # e.g. "frequency_penalty: field not supported in Anthropic Messages, dropped"

For streaming, call stream.take_warnings() after the stream is consumed.

Error handling

All errors inherit from ConversionError:

import linguafranca as lf

# Invalid payload structure
try:
    lf.convert_request_json(
        {"not": "a valid request"},
        source_format=lf.FormatName.OPENAI_CHAT_COMPLETIONS,
        target_format=lf.FormatName.ANTHROPIC_MESSAGES,
    )
except lf.SchemaValidationError as e:
    print(e)  # payload doesn't match the source format schema

# Unsupported conversion pair (streaming only)
events = []  # placeholder: an iterable of source-format stream events
try:
    lf.convert_response_stream_json(
        events,
        source_format=lf.FormatName.OPEN_RESPONSES,
        target_format=lf.FormatName.OPEN_RESPONSES,
    )
except lf.UnsupportedConversionError as e:
    print(e)
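Because both exceptions share the ConversionError base, a single except clause can handle either. A stdlib-only sketch with stand-in classes mirroring that hierarchy (not the library's real definitions; the convert function below is a fake that always fails validation for payloads without a model field):

```python
class ConversionError(Exception):
    """Stand-in base class mirroring lf.ConversionError."""

class SchemaValidationError(ConversionError):
    """Stand-in for lf.SchemaValidationError."""

class UnsupportedConversionError(ConversionError):
    """Stand-in for lf.UnsupportedConversionError."""

def convert(payload):
    # Fake converter: reject any payload missing a "model" field
    if "model" not in payload:
        raise SchemaValidationError("payload doesn't match the source format schema")
    return payload

try:
    convert({"not": "a valid request"})
except ConversionError as e:  # catches SchemaValidationError too
    caught = type(e).__name__

assert caught == "SchemaValidationError"
```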

All available types

All request, response, and streaming event types for each format are available under linguafranca.types:

from linguafranca.types import (
    # OpenAI Chat Completions
    ChatCompletionsOpenAiRequest,
    ChatCompletionsMessageUser,
    ChatCompletionsMessageSystem,
    ChatCompletionsMessageAssistant,
    ChatCompletionsResponse,
    ChatCompletionsStreamChunk,
    # Anthropic Messages
    AnthropicRequest,
    AnthropicMessage,
    AnthropicResponse,
    # Open Responses
    OpenResponsesRequest,
    OpenResponsesResponse,
    # ... and all nested types (content parts, tool calls, etc.)
)

These are standard @dataclass definitions generated from the Rust schemas. See Typed payloads for usage examples.

License

MIT

Download files

Download the file for your platform.

Source Distribution

  • martian_linguafranca-0.1.6.tar.gz (170.5 kB)

Built Distributions

  • martian_linguafranca-0.1.6-cp310-abi3-win_amd64.whl (1.3 MB; CPython 3.10+, Windows x86-64)
  • martian_linguafranca-0.1.6-cp310-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (1.2 MB; CPython 3.10+, manylinux glibc 2.17+, ARM64)
  • martian_linguafranca-0.1.6-cp310-abi3-macosx_11_0_arm64.whl (1.1 MB; CPython 3.10+, macOS 11.0+, ARM64)
  • martian_linguafranca-0.1.6-cp310-abi3-macosx_10_12_x86_64.whl (1.2 MB; CPython 3.10+, macOS 10.12+, x86-64)
  • martian_linguafranca-0.1.6-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB; CPython 3.8, manylinux glibc 2.17+, x86-64)
