Universal LLM API client for Python. Unified interface for streaming, tool calling, and provider routing across 143+ LLM providers. Rust-powered.

Project description

Universal LLM API client for Python. Access 143+ LLM providers — OpenAI, Anthropic, Groq, Mistral, and more — through a single unified interface. Native async/await support, streaming responses, tool calling, and type-safe API.

Installation

Package Installation

Install via pip:

pip install liter-llm

System Requirements

  • Python 3.10+ required
  • API keys via environment variables (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY)

Quick Start

Basic Chat

Send a message to any provider using the provider/model prefix:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())

Common Use Cases

Streaming Responses

Stream tokens in real time:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    async for chunk in await client.chat_stream(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Tell me a story"}],
    ):
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

asyncio.run(main())

Tool Calling

Define and invoke tools:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"},
                    },
                    "required": ["location"],
                },
            },
        }
    ]

    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
        tools=tools,
    )

    choice = response.choices[0]
    if choice.message.tool_calls:
        for call in choice.message.tool_calls:
            print(f"Tool: {call.function.name}, Args: {call.function.arguments}")

asyncio.run(main())
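
To act on the call, the tool result is sent back to the model so it can produce a final answer. The library's message types aren't documented on this page, so the sketch below assumes the OpenAI-style convention (an assistant turn carrying tool_calls with an id per call, then a "tool" role message with the matching tool_call_id) and uses a hypothetical get_weather implementation:

import asyncio
import json
import os
from liter_llm import LlmClient

def get_weather(location: str) -> str:
    # Hypothetical stand-in -- replace with a real weather lookup.
    return json.dumps({"location": location, "temperature_c": 18})

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }]
    messages = [{"role": "user", "content": "What is the weather in Berlin?"}]

    response = await client.chat(model="openai/gpt-4o", messages=messages, tools=tools)
    message = response.choices[0].message

    if message.tool_calls:
        # Echo the assistant turn back as a plain dict (OpenAI-style shape, assumed here).
        messages.append({
            "role": "assistant",
            "content": message.content,
            "tool_calls": [
                {
                    "id": call.id,
                    "type": "function",
                    "function": {
                        "name": call.function.name,
                        "arguments": call.function.arguments,
                    },
                }
                for call in message.tool_calls
            ],
        })
        # One "tool" message per call, carrying the executed result.
        for call in message.tool_calls:
            args = json.loads(call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": get_weather(**args),
            })
        final = await client.chat(model="openai/gpt-4o", messages=messages, tools=tools)
        print(final.choices[0].message.content)

asyncio.run(main())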

Features

Supported Providers (143+)

Route to any provider using the provider/model prefix convention:

Provider         Example Model
OpenAI           openai/gpt-4o, openai/gpt-4o-mini
Anthropic        anthropic/claude-3-5-sonnet-20241022
Groq             groq/llama-3.1-70b-versatile
Mistral          mistral/mistral-large-latest
Cohere           cohere/command-r-plus
Together AI      together/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
Fireworks        fireworks/accounts/fireworks/models/llama-v3p1-70b-instruct
Google Vertex    vertexai/gemini-1.5-pro
Amazon Bedrock   bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0

Complete Provider List

Key Capabilities

  • Provider Routing -- Single client for 143+ LLM providers via provider/model prefix

  • Local LLMs -- Connect to locally hosted models via Ollama, LM Studio, vLLM, llama.cpp, and other local inference servers

  • Unified API -- Consistent chat, chat_stream, embeddings, and list_models methods (see the sketch after this list)

  • Streaming -- Real-time token streaming via chat_stream

  • Tool Calling -- Function calling and tool use across all supporting providers

  • Type Safe -- Schema-driven types compiled from JSON schemas

  • Secure -- API keys never logged or serialized, managed via environment variables

  • Observability -- Built-in OpenTelemetry with GenAI semantic conventions

  • Error Handling -- Structured errors with provider context and retry hints
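
As a rough illustration of that unified surface, the sketch below touches list_models and embeddings. Only the method names come from the list above; the argument names and return shapes are assumptions modeled on the chat examples:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])

    # Discover available models (return shape assumed: an iterable of entries).
    models = await client.list_models()
    print(f"models available: {len(list(models))}")

    # Embed text (parameter names assumed by analogy with chat()).
    result = await client.embeddings(
        model="openai/text-embedding-3-small",
        input=["liter-llm exposes one interface for many providers."],
    )
    print(result)

asyncio.run(main())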

Performance

Built on a compiled Rust core for speed and safety:

  • Provider resolution at client construction -- zero per-request overhead (see the reuse sketch after this list)
  • Configurable timeouts and connection pooling
  • Zero-copy streaming with SSE and AWS EventStream support
  • API keys wrapped in secure memory, zeroed on drop
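
Because providers are resolved when the client is constructed, a single client can be shared across many concurrent requests with no per-request setup cost. A minimal reuse sketch, using only the chat API shown above:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    # Construct once: provider resolution happens here, not per request.
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    prompts = ["Summarize HTTP/2 in one line.", "Summarize HTTP/3 in one line."]

    # Fan out concurrent requests over the same client and its connection pool.
    responses = await asyncio.gather(
        *(
            client.chat(
                model="openai/gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            for prompt in prompts
        )
    )
    for response in responses:
        print(response.choices[0].message.content)

asyncio.run(main())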

Provider Routing

Route to 143+ providers using the provider/model prefix convention:

openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama-3.1-70b-versatile
mistral/mistral-large-latest

See the provider registry for the full list.
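
Switching providers is just a change of model string. This page only shows the client taking a single api_key, so whether one client can route to several providers at once is left aside here; the sketch below constructs one client per provider key, with keys taken from the environment as noted under System Requirements:

import asyncio
import os
from liter_llm import LlmClient

async def ask(client: LlmClient, model: str, question: str) -> str:
    response = await client.chat(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

async def main() -> None:
    question = "Name one prime number."
    # Same call shape for every provider; only the model prefix changes.
    openai_client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    anthropic_client = LlmClient(api_key=os.environ["ANTHROPIC_API_KEY"])
    print(await ask(openai_client, "openai/gpt-4o", question))
    print(await ask(anthropic_client, "anthropic/claude-3-5-sonnet-20241022", question))

asyncio.run(main())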

Proxy Server

liter-llm also ships as an OpenAI-compatible proxy server with Docker support:

docker run -p 4000:4000 -e LITER_LLM_MASTER_KEY=sk-your-key ghcr.io/kreuzberg-dev/liter-llm
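
Once the container is up, any OpenAI-compatible client can talk to it. A minimal sketch using the official openai Python package, assuming the proxy exposes the conventional /v1 routes and accepts the master key as a bearer token:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # the proxy started above
    api_key="sk-your-key",                # the LITER_LLM_MASTER_KEY value
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello through the proxy!"}],
)
print(response.choices[0].message.content)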

See the proxy server documentation for configuration, CLI usage, and MCP integration.

Documentation

Part of kreuzberg.dev.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Join our Discord community for questions and discussion.

License

MIT -- see LICENSE for details.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

liter_llm-1.2.0.tar.gz (254.4 kB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

liter_llm-1.2.0-cp310-abi3-win_amd64.whl (5.2 MB)

Uploaded: CPython 3.10+, Windows x86-64

liter_llm-1.2.0-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (5.2 MB)

Uploaded: CPython 3.10+, manylinux (glibc 2.17+) x86-64

liter_llm-1.2.0-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (4.9 MB)

Uploaded: CPython 3.10+, manylinux (glibc 2.17+) ARM64

liter_llm-1.2.0-cp310-abi3-macosx_11_0_arm64.whl (4.8 MB)

Uploaded: CPython 3.10+, macOS 11.0+ ARM64

File details

Details for the file liter_llm-1.2.0.tar.gz.

File metadata

  • Download URL: liter_llm-1.2.0.tar.gz
  • Size: 254.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for liter_llm-1.2.0.tar.gz
Algorithm Hash digest
SHA256 49e9791840c531f7f746af99f049e84caa450cf5c0f79bb4814122dbf421b413
MD5 e4a295cc3b0307ed60d8c259413845ef
BLAKE2b-256 5632256a6a9f07738341bdf9941f5e629ac711ed4d60a357410e543080709073

Provenance

The following attestation bundles were made for liter_llm-1.2.0.tar.gz:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.2.0-cp310-abi3-win_amd64.whl.

File metadata

  • Download URL: liter_llm-1.2.0-cp310-abi3-win_amd64.whl
  • Size: 5.2 MB
  • Tags: CPython 3.10+, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for liter_llm-1.2.0-cp310-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 49de611ac259d15743d29aa09a2d68e09f93ef6df48e53d0081025cdd5e71b0f
MD5 d5fede53cfa163eed5475624cfa5ee0d
BLAKE2b-256 22def5ba18d0b51cce342a8af3258687260e23ac3ea13ce03c48e30f81a5592b

Provenance

The following attestation bundles were made for liter_llm-1.2.0-cp310-abi3-win_amd64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.2.0-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.

File metadata

  • Download URL: liter_llm-1.2.0-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
  • Size: 5.2 MB
  • Tags: CPython 3.10+, manylinux: glibc 2.17+ x86-64
  • Uploaded using Trusted Publishing? Yes

File hashes

Hashes for liter_llm-1.2.0-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 dd8e02c283a2cdce18670f21c11b7e3239a65600bcabc5b4737d0c520ee3abb4
MD5 a0c141247b3f4876bdc7bc1b6ededadc
BLAKE2b-256 98dd236b2b0976f117a4e1762f505c1bb99defeb00dba05f7d4b1eba2e7e03d9

Provenance

The following attestation bundles were made for liter_llm-1.2.0-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.2.0-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl.

File metadata

  • Download URL: liter_llm-1.2.0-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
  • Size: 4.9 MB
  • Tags: CPython 3.10+, manylinux: glibc 2.17+ ARM64
  • Uploaded using Trusted Publishing? Yes

File hashes

Hashes for liter_llm-1.2.0-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
Algorithm Hash digest
SHA256 2939b9823ba863dca0b580291079a045112c5f75f43ec846f4384ac79eba89f9
MD5 eadf5ef08b0ba9613d727c31feaa12fc
BLAKE2b-256 18144d549857e1d2ff6287a1eb7a34228de69a7ef24f5026f4dd2a004066057f

Provenance

The following attestation bundles were made for liter_llm-1.2.0-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.2.0-cp310-abi3-macosx_11_0_arm64.whl.

File metadata

  • Download URL: liter_llm-1.2.0-cp310-abi3-macosx_11_0_arm64.whl
  • Size: 4.8 MB
  • Tags: CPython 3.10+, macOS 11.0+ ARM64
  • Uploaded using Trusted Publishing? Yes

File hashes

Hashes for liter_llm-1.2.0-cp310-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 01a6b94e6af29fc62e6d322bf7444ac10c29acce338c62e0a3c4b71efb176a2b
MD5 4a667c1cbb6714d2e054413c67903b4b
BLAKE2b-256 3378e813ab88686b6b0169be1267e352d6632ba16643f4d4218c7ad9103664ab

Provenance

The following attestation bundles were made for liter_llm-1.2.0-cp310-abi3-macosx_11_0_arm64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
