

Divyam LLM Interop

A minimal, provider‑agnostic library for interoperable AI model requests and responses. Divyam LLM Interop provides a unified interface for interacting with models across providers while maintaining consistent request and response semantics.

Installation

# Install from PyPI
pip install divyam-llm-interop


Usage

The primary API for text-based chat request and response translation is ChatTranslator.

Translate a chat request

from divyam_llm_interop.translate.chat.api_types import ModelApiType
from divyam_llm_interop.translate.chat.translate import ChatTranslator
from divyam_llm_interop.translate.chat.types import ChatRequest, Model

# Translate a gemini-1.5-pro Chat Completions API request into a
# gpt-4.1 Responses API request.
translator = ChatTranslator()
chat_request = ChatRequest(body={
    "model": "gemini-1.5-pro",
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a highly knowledgeable trivia assistant. "
                "Provide clear, accurate answers across history, geography, "
                "science, pop culture, and general knowledge. "
                "When explaining, keep it concise unless asked otherwise."
            )
        },
        {
            "role": "user",
            "content": "What is the capital of India?"
        }
    ],
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 100000,
    "presence_penalty": 0.5
})
source = Model(name="gemini-1.5-pro", api_type=ModelApiType.COMPLETIONS)
target = Model(name="gpt-4.1", api_type=ModelApiType.RESPONSES)
translated = translator.translate_request(chat_request, source, target)
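To give a sense of what a Chat Completions → Responses translation involves, the sketch below maps a few well-known fields by hand. This is an illustrative stand-in, not the library's implementation: the field names are OpenAI's public Chat Completions and Responses API parameters, and a real translator handles many more fields and edge cases.

```python
def to_responses_request(chat_completions_body: dict) -> dict:
    """Illustrative mapping from a Chat Completions request body
    to a Responses API request body."""
    body = {"model": chat_completions_body["model"]}
    messages = chat_completions_body.get("messages", [])
    # The Responses API takes system text as a top-level `instructions` field...
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    if system_parts:
        body["instructions"] = "\n".join(system_parts)
    # ...and the remaining conversation turns as `input`.
    body["input"] = [m for m in messages if m["role"] != "system"]
    # `max_tokens` is named `max_output_tokens` in the Responses API.
    if "max_tokens" in chat_completions_body:
        body["max_output_tokens"] = chat_completions_body["max_tokens"]
    # Sampling parameters that keep the same name carry over directly.
    for key in ("temperature", "top_p"):
        if key in chat_completions_body:
            body[key] = chat_completions_body[key]
    return body
```

Using ChatTranslator spares you from maintaining mappings like this for every provider pair.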

Translate a chat response

from divyam_llm_interop.translate.chat.api_types import ModelApiType
from divyam_llm_interop.translate.chat.translate import ChatTranslator
from divyam_llm_interop.translate.chat.types import ChatResponse, Model

# Translate a Responses API response to a Chat Completions API response.
translator = ChatTranslator()

# Response body, typically obtained from an LLM call.
chat_response = ChatResponse(body={
    "id": "resp_abc123",
    "object": "response",
    "model": "gpt-4.1",
    "created": 1733400000,
    "output": [
        {
            "role": "assistant",
            "content": [
                {
                    "type": "output_text",
                    "text": "The capital of India is New Delhi."
                }
            ]
        }
    ],
    "usage": {
        "input_tokens": 35,
        "output_tokens": 10,
        "total_tokens": 45
    },
    "metadata": {
        "temperature": 0.7,
        "top_p": 1.0,
        "presence_penalty": 0.5
    }
})

source = Model(name="gpt-4.1", api_type=ModelApiType.RESPONSES)
target = Model(name="gpt-4.1", api_type=ModelApiType.COMPLETIONS)
translated = translator.translate_response(chat_response, source, target)
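The reverse direction can be sketched the same way: a Responses API body nests assistant text under `output[].content[]` and reports usage as `input_tokens`/`output_tokens`, while a Chat Completions body uses `choices[].message.content` and `prompt_tokens`/`completion_tokens`. The function below is illustrative only, based on the two APIs' documented shapes, and is not this library's internal logic.

```python
def to_chat_completions_response(responses_body: dict) -> dict:
    """Illustrative mapping from a Responses API response body
    to a Chat Completions response body."""
    # Concatenate the text parts of every output message.
    text = "".join(
        part["text"]
        for item in responses_body.get("output", [])
        for part in item.get("content", [])
        if part.get("type") == "output_text"
    )
    usage = responses_body.get("usage", {})
    return {
        "id": responses_body["id"],
        "object": "chat.completion",
        "model": responses_body["model"],
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        # Responses usage fields map onto the Chat Completions names.
        "usage": {
            "prompt_tokens": usage.get("input_tokens"),
            "completion_tokens": usage.get("output_tokens"),
            "total_tokens": usage.get("total_tokens"),
        },
    }
```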

Development Environment Setup

Create a virtual environment

With Python virtualenv:

python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

With conda:

conda create -n .venv python=3.10 -y
conda activate .venv

Note: Make sure to activate the virtual environment before running any commands.

Install poetry

pip install poetry
poetry self update 

Install dependencies

On first setup, or whenever dependencies in pyproject.toml change, regenerate the Poetry lock file and install:

poetry lock
poetry install

Contributing

We welcome contributions to improve the library!

How to contribute

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-improvement
  3. Make your changes
  4. Run tests and linters (see below)
  5. Submit a pull request

Contribution guidelines

  • Follow existing code style
  • Write clear commit messages
  • Include tests when adding features or fixing bugs
  • Ensure documentation reflects changes

If you're unsure about a change, feel free to open a discussion or draft PR.

Code Quality Checks

Before submitting your PR, make sure the code passes all checks:

Format code

poetry run ruff format .

Check formatting (without modifying files)

poetry run ruff format --check .

Lint code

poetry run ruff check .

Auto-fix linting issues (where possible)

poetry run ruff check --fix .

Type check

poetry run pyright .

Run all checks at once

poetry run ruff format . && poetry run ruff check . && poetry run pyright .

Running Tests

poetry run pytest

With coverage report:

poetry run pytest --cov=. --cov-report=term-missing

License

This project is licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the LICENSE file for the full license text.


Copyright © 2025 DivyamAI Technologies Private Limited. All rights reserved.

