Universal LLM API client for Python. Unified interface for streaming, tool calling, and provider routing across 143+ LLM providers. Rust-powered.

Project description

Universal LLM API client for Python. Access 143+ LLM providers through a single unified interface. Native async/await support, streaming responses, tool calling, and type-safe API.

Installation

Package Installation

Install via pip:

pip install liter-llm

System Requirements

  • Python 3.10+ required
  • API keys via environment variables (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY)

Quick Start

Basic Chat

Send a message to any provider using the provider/model prefix:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())

Common Use Cases

Streaming Responses

Stream tokens in real time:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    async for chunk in await client.chat_stream(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Tell me a story"}],
    ):
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

asyncio.run(main())

Tool Calling

Define and invoke tools:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"},
                    },
                    "required": ["location"],
                },
            },
        }
    ]

    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
        tools=tools,
    )

    choice = response.choices[0]
    if choice.message.tool_calls:
        for call in choice.message.tool_calls:
            print(f"Tool: {call.function.name}, Args: {call.function.arguments}")

asyncio.run(main())
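
A typical next step is to execute the tool and send its result back for a final answer. The following is a minimal sketch, not liter-llm's documented API: it assumes the OpenAI-style message shapes used above, assumes the assistant message object can be appended back onto the messages list as-is, and get_weather here is a stand-in you would implement yourself.

import asyncio
import json
import os
from liter_llm import LlmClient

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

def get_weather(location: str) -> str:
    # Stand-in for a real weather lookup.
    return f"Sunny, 22 degrees C in {location}"

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    messages = [{"role": "user", "content": "What is the weather in Berlin?"}]

    first = await client.chat(model="openai/gpt-4o", messages=messages, tools=TOOLS)
    message = first.choices[0].message
    if not message.tool_calls:
        print(message.content)
        return

    # Append the assistant turn, then one "tool" message per call
    # (appending the response message object directly is an assumption).
    messages.append(message)
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": get_weather(**args),
        })

    # Second turn: the model answers using the tool result.
    final = await client.chat(model="openai/gpt-4o", messages=messages, tools=TOOLS)
    print(final.choices[0].message.content)

asyncio.run(main())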

Features

Supported Providers (143+)

Route to any provider using the provider/model prefix convention:

Provider         Example Model
---------------  -----------------------------------------------------------
OpenAI           openai/gpt-4o, openai/gpt-4o-mini
Anthropic        anthropic/claude-3-5-sonnet-20241022
Groq             groq/llama-3.1-70b-versatile
Mistral          mistral/mistral-large-latest
Cohere           cohere/command-r-plus
Together AI      together/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
Fireworks        fireworks/accounts/fireworks/models/llama-v3p1-70b-instruct
Google Vertex    vertexai/gemini-1.5-pro
Amazon Bedrock   bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0

See the documentation for the complete provider list.

Key Capabilities

  • Provider Routing -- Single client for 143+ LLM providers via provider/model prefix

  • Local LLMs -- Connect to locally-hosted models via Ollama, LM Studio, vLLM, llama.cpp, and other local inference servers

  • Unified API -- Consistent chat, chat_stream, embeddings, list_models interface (sketched below, after this list)

  • Streaming -- Real-time token streaming via chat_stream

  • Tool Calling -- Function calling and tool use across all supporting providers

  • Type Safe -- Schema-driven types compiled from JSON schemas

  • Secure -- API keys never logged or serialized, managed via environment variables

  • Observability -- Built-in OpenTelemetry with GenAI semantic conventions

  • Error Handling -- Structured errors with provider context and retry hints
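
For the unified-API bullet above, here is a rough sketch of how the non-chat methods might be called. Only the method names (embeddings, list_models) come from this list; the embeddings parameters and response fields below are assumptions modeled on common OpenAI-style clients, not confirmed signatures.

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])

    # Embeddings through the same provider/model prefix
    # (the "input" parameter and ".data" field are assumptions).
    emb = await client.embeddings(
        model="openai/text-embedding-3-small",
        input=["hello world"],
    )
    print(len(emb.data))

    # Enumerate the models the provider exposes.
    models = await client.list_models()
    print(models)

asyncio.run(main())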

Performance

Built on a compiled Rust core for speed and safety:

  • Provider resolution at client construction -- zero per-request overhead
  • Configurable timeouts and connection pooling
  • Zero-copy streaming with SSE and AWS EventStream support
  • API keys wrapped in secure memory, zeroed on drop
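
Because connections are pooled and provider routes are resolved up front, a single client instance can be reused across concurrent requests. A small illustration using only the chat API shown earlier:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    # One client, shared across concurrent requests, so pooled
    # connections and the resolved provider route are reused.
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    prompts = ["Summarize TCP in one sentence.", "Summarize UDP in one sentence."]
    responses = await asyncio.gather(*(
        client.chat(
            model="openai/gpt-4o-mini",
            messages=[{"role": "user", "content": p}],
        )
        for p in prompts
    ))
    for r in responses:
        print(r.choices[0].message.content)

asyncio.run(main())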

Provider Routing

Route to 143+ providers using the provider/model prefix convention:

openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama-3.1-70b-versatile
mistral/mistral-large-latest

See the provider registry for the full list.
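
In practice, switching providers is just a different model string. A minimal sketch, assuming each provider's API key is passed when the client is built (how keys are resolved for multiple providers may differ from this):

import asyncio
import os
from liter_llm import LlmClient

async def ask(model: str, api_key: str) -> str:
    client = LlmClient(api_key=api_key)
    response = await client.chat(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    return response.choices[0].message.content

async def main() -> None:
    # Same call shape; only the provider/model prefix changes.
    print(await ask("openai/gpt-4o", os.environ["OPENAI_API_KEY"]))
    print(await ask("anthropic/claude-3-5-sonnet-20241022", os.environ["ANTHROPIC_API_KEY"]))

asyncio.run(main())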

Proxy Server

liter-llm also ships as an OpenAI-compatible proxy server with Docker support:

docker run -p 4000:4000 -e LITER_LLM_MASTER_KEY=sk-your-key ghcr.io/kreuzberg-dev/liter-llm

See the proxy server documentation for configuration, CLI usage, and MCP integration.
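
Because the proxy is OpenAI-compatible, any OpenAI client can point at it. A sketch using the official openai Python package; the /v1 base path and using the master key as the API key are assumptions here, so check the proxy documentation for the exact conventions.

from openai import OpenAI

# Point an OpenAI client at the proxy started by the docker command above.
client = OpenAI(
    base_url="http://localhost:4000/v1",  # assumed base path
    api_key="sk-your-key",                # the LITER_LLM_MASTER_KEY value
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello through the proxy!"}],
)
print(response.choices[0].message.content)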

Documentation

Part of kreuzberg.dev.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Join our Discord community for questions and discussion.

License

MIT -- see LICENSE for details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

liter_llm-1.4.0rc3.tar.gz (306.8 kB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

liter_llm-1.4.0rc3-cp310-abi3-win_amd64.whl (3.2 MB)

Uploaded: CPython 3.10+, Windows x86-64

liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.2 MB)

Uploaded: CPython 3.10+, manylinux: glibc 2.17+ x86-64

liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (2.9 MB)

Uploaded: CPython 3.10+, manylinux: glibc 2.17+ ARM64

liter_llm-1.4.0rc3-cp310-abi3-macosx_11_0_arm64.whl (2.9 MB)

Uploaded: CPython 3.10+, macOS 11.0+ ARM64

File details

Details for the file liter_llm-1.4.0rc3.tar.gz.

File metadata

  • Download URL: liter_llm-1.4.0rc3.tar.gz
  • Upload date:
  • Size: 306.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for liter_llm-1.4.0rc3.tar.gz
Algorithm Hash digest
SHA256 328b0898b0c7087508e564af1f17440b000338e691ebac4481dcd72e5b4fd1d6
MD5 66852cc3da7121f1ebcaaadc4f5bcaa3
BLAKE2b-256 6197f0d5f417c3770b8a4953117840c5a561168f67d19ed8f9381e9f3f3a6177

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc3.tar.gz:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc3-cp310-abi3-win_amd64.whl.

File metadata

  • Download URL: liter_llm-1.4.0rc3-cp310-abi3-win_amd64.whl
  • Upload date:
  • Size: 3.2 MB
  • Tags: CPython 3.10+, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for liter_llm-1.4.0rc3-cp310-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 b6903e80b4ae286d1ea73f60607ec532e2dce9ad3b2a304228c92f8fdd87a637
MD5 a2d9b2eb23bf80d8d4ea2bfacf017a2d
BLAKE2b-256 90e68db1ce62c0d581bd2eee80152950b0049d9c1fc09c2e851308dabbec2278

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc3-cp310-abi3-win_amd64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.

File metadata

  • Download URL: liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
  • Size: 3.2 MB
  • Tags: CPython 3.10+, manylinux: glibc 2.17+ x86-64

File hashes

Hashes for liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 c2990338c9dd41103e221fa76e5930eb9e7766ff98edd9783b9a21d7cb91c629
MD5 7bf48aaa25a076d1cc40f0efdfda370f
BLAKE2b-256 1d186353a569bc528ce7eef854c0f89725c7a6454a4101f6a631c8b0197cb7dc

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl.

File metadata

  • Download URL: liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
  • Size: 2.9 MB
  • Tags: CPython 3.10+, manylinux: glibc 2.17+ ARM64

File hashes

Hashes for liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
Algorithm Hash digest
SHA256 8dcca539ade6342e1a7ff7499862987034c781d3b3de7010864c6deae01bbd5f
MD5 2319e762e435dd7451e723ad134a9d1b
BLAKE2b-256 0d0711886f0f649beac22bace81aee4fe6bdb89697ee76a4eca70a16e143e84d

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc3-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc3-cp310-abi3-macosx_11_0_arm64.whl.

File metadata

  • Download URL: liter_llm-1.4.0rc3-cp310-abi3-macosx_11_0_arm64.whl
  • Size: 2.9 MB
  • Tags: CPython 3.10+, macOS 11.0+ ARM64

File hashes

Hashes for liter_llm-1.4.0rc3-cp310-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 ecbf5dd9db44506c8a8b8007fc00878205888450f83ddf0217c9ea15c311f967
MD5 a9f05ab42895ba853077bad089260d66
BLAKE2b-256 dbe412c818016553a79e9fa19c364d6f7276511b53b605fa432bce4baad58821

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc3-cp310-abi3-macosx_11_0_arm64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
