A lightweight, provider-neutral library for translating LLM requests and responses across model APIs.
# Divyam LLM Interop
A minimal, provider‑agnostic library for interoperable AI model requests and responses. Divyam LLM Interop provides a unified interface for interacting with models across providers while maintaining consistent request and response semantics.
## Installation

```shell
# Install from PyPI
pip install divyam-llm-interop
```
## Usage

The primary API for text-based chat request and response conversion is `ChatTranslator`.
### Translate a chat request

```python
from divyam_llm_interop.translate.chat.api_types import ModelApiType
from divyam_llm_interop.translate.chat.translate import ChatTranslator
from divyam_llm_interop.translate.chat.types import ChatRequest, Model

# Translate a gemini-1.5-pro Chat Completions API request to a gpt-4.1
# Responses API request.
translator = ChatTranslator()

chat_request = ChatRequest(body={
    "model": "gemini-1.5-pro",
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a highly knowledgeable trivia assistant. "
                "Provide clear, accurate answers across history, geography, "
                "science, pop culture, and general knowledge. "
                "When explaining, keep it concise unless asked otherwise."
            ),
        },
        {
            "role": "user",
            "content": "What is the capital of India?",
        },
    ],
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 100000,
    "presence_penalty": 0.5,
})

source = Model(name="gemini-1.5-pro", api_type=ModelApiType.COMPLETIONS)
target = Model(name="gpt-4.1", api_type=ModelApiType.RESPONSES)

translated = translator.translate_request(chat_request, source, target)
```
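The library performs this mapping internally. As a rough mental model only (not the library's actual implementation), a Chat Completions body maps onto a Responses body roughly as sketched below; the helper `to_responses_body` is hypothetical, and the field names follow OpenAI's public APIs:

```python
def to_responses_body(completions_body: dict) -> dict:
    """Illustrative sketch of a Chat Completions -> Responses mapping.

    This is a simplified approximation for intuition only; the library's
    real translation logic may differ.
    """
    messages = completions_body["messages"]
    # The Responses API takes the system prompt as `instructions` and the
    # remaining turns as `input`.
    instructions = next(
        (m["content"] for m in messages if m["role"] == "system"), None
    )
    body = {
        "model": completions_body["model"],
        "input": [m for m in messages if m["role"] != "system"],
        # `max_tokens` is renamed to `max_output_tokens`.
        "max_output_tokens": completions_body.get("max_tokens"),
    }
    if instructions is not None:
        body["instructions"] = instructions
    # Sampling parameters shared by both APIs carry over unchanged.
    for key in ("temperature", "top_p"):
        if key in completions_body:
            body[key] = completions_body[key]
    return body
```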
### Translate a chat response

```python
from divyam_llm_interop.translate.chat.api_types import ModelApiType
from divyam_llm_interop.translate.chat.translate import ChatTranslator
from divyam_llm_interop.translate.chat.types import ChatResponse, Model

# Translate a Responses API response to a Chat Completions API response.
translator = ChatTranslator()

# Response body, most likely obtained from an LLM call.
chat_response = ChatResponse(body={
    "id": "resp_abc123",
    "object": "response",
    "model": "gpt-4.1",
    "created": 1733400000,
    "output": [
        {
            "role": "assistant",
            "content": [
                {
                    "type": "output_text",
                    "text": "The capital of India is New Delhi.",
                }
            ],
        }
    ],
    "usage": {
        "input_tokens": 35,
        "output_tokens": 10,
        "total_tokens": 45,
    },
    "metadata": {
        "temperature": 0.7,
        "top_p": 1.0,
        "presence_penalty": 0.5,
    },
})

source = Model(name="gpt-4.1", api_type=ModelApiType.RESPONSES)
target = Model(name="gpt-4.1", api_type=ModelApiType.COMPLETIONS)

translated = translator.translate_response(chat_response, source, target)
```
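For intuition, the reverse direction can be sketched the same way. The helper `to_completions_body` below is hypothetical and not the library's code; it only illustrates how a Responses body's `output` and `usage` fields map onto the Chat Completions shape:

```python
def to_completions_body(responses_body: dict) -> dict:
    """Illustrative sketch of a Responses -> Chat Completions mapping.

    Simplified for intuition only; the library's real translation logic
    may differ.
    """
    # Concatenate the `output_text` parts of the assistant output items.
    text = "".join(
        part["text"]
        for item in responses_body["output"]
        for part in item.get("content", [])
        if part.get("type") == "output_text"
    )
    usage = responses_body.get("usage", {})
    return {
        "id": responses_body["id"],
        "object": "chat.completion",
        "model": responses_body["model"],
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        # Token-count fields are renamed between the two APIs.
        "usage": {
            "prompt_tokens": usage.get("input_tokens"),
            "completion_tokens": usage.get("output_tokens"),
            "total_tokens": usage.get("total_tokens"),
        },
    }
```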
## Development Environment Setup

### Create a virtual environment

With Python's built-in venv:

```shell
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

With conda:

```shell
conda create -n .venv python=3.10 -y
conda activate .venv
```

Note: Make sure to activate the virtual environment before running any of the commands below.
### Install Poetry

```shell
pip install poetry
poetry self update
```
### Install dependencies

The first time, or whenever dependencies in `pyproject.toml` change, regenerate the Poetry lock file and install:

```shell
poetry lock
poetry install
```
## Contributing

We welcome contributions to improve the library!

### How to contribute

- Fork the repository
- Create a feature branch: `git checkout -b feature/my-improvement`
- Make your changes
- Run tests and linters (see below)
- Submit a pull request
### Contribution guidelines

- Follow the existing code style
- Write clear commit messages
- Include tests when adding features or fixing bugs
- Ensure documentation reflects your changes
If you're unsure about a change, feel free to open a discussion or draft PR.
## Code Quality Checks

Before submitting your PR, make sure the code passes all checks.

### Format code

```shell
poetry run ruff format .
```

### Check formatting (without modifying files)

```shell
poetry run ruff format --check .
```

### Lint code

```shell
poetry run ruff check .
```

### Auto-fix linting issues (where possible)

```shell
poetry run ruff check --fix .
```

### Type check

```shell
poetry run pyright .
```

### Run all checks at once

```shell
poetry run ruff format . && poetry run ruff check . && poetry run pyright .
```
## Running Tests

```shell
poetry run pytest
```

With coverage report:

```shell
poetry run pytest --cov=. --cov-report=term-missing
```
## License
This project is licensed under the Apache License, Version 2.0. You may obtain a copy of the License at:
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the LICENSE file for the full license text.
Copyright © 2025 DivyamAI Technologies Private Limited. All rights reserved.
## File details

Details for the file `divyam_llm_interop-0.1.2.tar.gz`.

### File metadata

- Download URL: divyam_llm_interop-0.1.2.tar.gz
- Size: 52.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f1f758758881ced8b91e277e123bf57a085cda5430dc28c9011ddf30ba164704` |
| MD5 | `d084861e78b46ba18e319fd507a0151e` |
| BLAKE2b-256 | `871af41c62ecfe1d9c793a55331d834557ae99d5e0a99b2d0c87bd3d10f1753c` |
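A downloaded archive can be checked against a published SHA256 digest with Python's standard `hashlib` module. This is a generic verification sketch, not part of the library; the file path and expected digest are placeholders you would substitute:

```python
import hashlib


def sha256_hex(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the digest published in the table above, e.g.:
# assert sha256_hex("divyam_llm_interop-0.1.2.tar.gz") == "f1f75875..."
```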
### Provenance

The following attestation bundles were made for `divyam_llm_interop-0.1.2.tar.gz`:

Publisher: `release.yml` on Divyam-AI/divyam-llm-interop

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: divyam_llm_interop-0.1.2.tar.gz
- Subject digest: `f1f758758881ced8b91e277e123bf57a085cda5430dc28c9011ddf30ba164704`
- Sigstore transparency entry: 956272592
- Permalink: Divyam-AI/divyam-llm-interop@f55e4c658d870bb26a70e930aa816d4460445e50
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Divyam-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@f55e4c658d870bb26a70e930aa816d4460445e50
- Trigger Event: workflow_dispatch
## File details

Details for the file `divyam_llm_interop-0.1.2-py3-none-any.whl`.

### File metadata

- Download URL: divyam_llm_interop-0.1.2-py3-none-any.whl
- Size: 80.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b68a8ead08bb4c7bbde400b202efd4bd47511541796396bfe5721851183ac267` |
| MD5 | `8a126fd2d8cf1a3854b2885f23f43e3f` |
| BLAKE2b-256 | `882e32b027df38eeed2500d67fac58d27be7049d8316877e230e9fba772c105e` |
### Provenance

The following attestation bundles were made for `divyam_llm_interop-0.1.2-py3-none-any.whl`:

Publisher: `release.yml` on Divyam-AI/divyam-llm-interop

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: divyam_llm_interop-0.1.2-py3-none-any.whl
- Subject digest: `b68a8ead08bb4c7bbde400b202efd4bd47511541796396bfe5721851183ac267`
- Sigstore transparency entry: 956272593
- Permalink: Divyam-AI/divyam-llm-interop@f55e4c658d870bb26a70e930aa816d4460445e50
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Divyam-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@f55e4c658d870bb26a70e930aa816d4460445e50
- Trigger Event: workflow_dispatch