
Proxy server that converts LLM request formats from one provider to another

Project description

LLM Proxy

Overview

llm-proxy-gateway is a small FastAPI service that accepts requests in different LLM API formats and converts them into the OpenAI Responses API format.

Right now, the project supports these endpoint styles:

  • OpenAI Chat Completions API
  • OpenAI Responses API
  • Claude Messages API

The upstream OpenAI call is always made with streaming enabled. Based on the stream flag in the incoming request, the proxy then returns either a single buffered JSON response or a streamed response, as sketched below.
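
The sketch below illustrates this buffering pattern only; it is not the package's actual code. The route name and helper are hypothetical, and a real Responses API stream carries typed events that would need proper assembly into a final response object:

import json
import os

import httpx
from fastapi import FastAPI
from fastapi.responses import JSONResponse, StreamingResponse

app = FastAPI()
UPSTREAM = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1") + "/responses"
HEADERS = {"Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}"}

async def upstream_events(payload: dict):
    # Per the behavior described above, the upstream call always streams.
    body = {**payload, "stream": True}
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream("POST", UPSTREAM, json=body, headers=HEADERS) as resp:
            async for line in resp.aiter_lines():
                if line.startswith("data: "):
                    yield line[len("data: "):]

@app.post("/sketch/responses")  # hypothetical route, for illustration only
async def sketch_responses(payload: dict):
    if payload.get("stream"):
        # Client asked for streaming: relay upstream SSE events as they arrive.
        async def relay():
            async for event in upstream_events(payload):
                yield f"data: {event}\n\n"
        return StreamingResponse(relay(), media_type="text/event-stream")
    # Client asked for a normal response: drain the stream and return the last
    # event as one JSON body (simplified; real assembly is more involved).
    final = None
    async for event in upstream_events(payload):
        if event != "[DONE]":
            final = json.loads(event)
    return JSONResponse(final)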

Installation

  1. Make sure you have uv installed.
  2. Create a .env file in the project root, or export the variables the way startup.sh does.
  3. Add your configuration values.
  4. Install dependencies.
  5. Start the server.

Example .env file:

LLM_PROVIDER=OPENAI
API_SERVER_PORT=11434
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_MODEL=gpt-4
OPENAI_BASE_URL=https://api.openai.com/v1
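
For illustration, here is one way a service could load these variables; the package's actual settings handling may differ:

import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    llm_provider: str
    api_server_port: int
    openai_api_key: str
    openai_model: str
    openai_base_url: str

def load_settings() -> Settings:
    # Defaults mirror the example .env above; OPENAI_API_KEY has no safe default.
    env = os.environ
    return Settings(
        llm_provider=env.get("LLM_PROVIDER", "OPENAI"),
        api_server_port=int(env.get("API_SERVER_PORT", "11434")),
        openai_api_key=env["OPENAI_API_KEY"],
        openai_model=env.get("OPENAI_MODEL", "gpt-4"),
        openai_base_url=env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    )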

Install dependencies:

uv sync

Start the server:

./startup.sh

Or run directly:

uv run llm-proxy-gateway

Sample Requests

Health check:

GET http://localhost:11434/health
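
The same check from Python (any HTTP client works; httpx is used here only as an example):

import httpx

resp = httpx.get("http://localhost:11434/health")
print(resp.status_code, resp.text)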

OpenAI Chat Completions request:

POST http://localhost:11434/v1/chat/completions
Content-Type: application/json

{
  "model": "oca/gpt5",
  "stream": false,
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ]
}
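
Because this endpoint mirrors the OpenAI Chat Completions API, the official openai Python client should work when pointed at the proxy. A sketch; the api_key value is a placeholder, since whether the proxy validates it is not documented:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="placeholder")
resp = client.chat.completions.create(
    model="oca/gpt5",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(resp.choices[0].message.content)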

OpenAI Responses request:

POST http://localhost:11434/v1/responses
Content-Type: application/json

{
  "model": "oca/gpt5",
  "input": "Reply with the word OK if you can read this.",
  "stream": false
}
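
The same idea for the Responses endpoint, assuming a recent openai client that includes the Responses API:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="placeholder")
resp = client.responses.create(
    model="oca/gpt5",
    input="Reply with the word OK if you can read this.",
)
print(resp.output_text)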

Claude Messages request:

POST http://localhost:11434/v1/messages
Content-Type: application/json

{
  "model": "oca/gpt5",
  "max_tokens": 128,
  "messages": [
    {
      "role": "user",
      "content": "Hello Claude, reply with a short greeting."
    }
  ]
}
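
Since /v1/messages follows the Claude Messages API shape, the anthropic Python client should likewise work against the proxy; again, the api_key is a placeholder:

import anthropic

client = anthropic.Anthropic(base_url="http://localhost:11434", api_key="placeholder")
msg = client.messages.create(
    model="oca/gpt5",
    max_tokens=128,
    messages=[{"role": "user", "content": "Hello Claude, reply with a short greeting."}],
)
print(msg.content[0].text)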

You can also use the ready-made examples in test.http.

Download files

Download the file for your platform.

Source Distribution

llm_proxy_gateway-0.0.4.tar.gz (10.2 kB)

Built Distribution

llm_proxy_gateway-0.0.4-py3-none-any.whl (11.7 kB)

File details

Details for the file llm_proxy_gateway-0.0.4.tar.gz.

File metadata

  • Download URL: llm_proxy_gateway-0.0.4.tar.gz
  • Size: 10.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.7.5

File hashes

Hashes for llm_proxy_gateway-0.0.4.tar.gz

Algorithm    Hash digest
SHA256       d91f87ef3ed96a6c3c73146a5f61bbd74f4ce0ef5b70905d45ac5f1aae299a32
MD5          6d60b06bc98c2fd59e0e1760b10ca46c
BLAKE2b-256  d8707e2bc2c3ffeb0f595677c0d3090956d26dc6623d72171e5751d95a999ed9

File details

Details for the file llm_proxy_gateway-0.0.4-py3-none-any.whl.

File hashes

Hashes for llm_proxy_gateway-0.0.4-py3-none-any.whl

Algorithm    Hash digest
SHA256       8343ea6bcebb13ed4fec94b16c022eafb7ec61d3afccf55b7fb6336154f67fc0
MD5          dd592f0b44fca29e85d6b80f83270a0b
BLAKE2b-256  08e443e861709c55f34c7433bd67bc2bce56d790ab7da8b6ab757603111eea64
