Universal LLM API client for Python. Unified interface for streaming, tool calling, and provider routing across 143+ LLM providers. Rust-powered.

Project description

Universal LLM API client for Python. Access 143+ LLM providers through a single unified interface. Native async/await support, streaming responses, tool calling, and type-safe API.

Installation

Package Installation

Install via pip:

pip install liter-llm

System Requirements

  • Python 3.10+ required
  • API keys via environment variables (e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY)

Quick Start

Basic Chat

Send a message to any provider using the provider/model prefix:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())

Common Use Cases

Streaming Responses

Stream tokens in real time:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    async for chunk in await client.chat_stream(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "Tell me a story"}],
    ):
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()

asyncio.run(main())

Tool Calling

Define and invoke tools:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])

    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {"type": "string", "description": "City name"},
                    },
                    "required": ["location"],
                },
            },
        }
    ]

    response = await client.chat(
        model="openai/gpt-4o",
        messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
        tools=tools,
    )

    choice = response.choices[0]
    if choice.message.tool_calls:
        for call in choice.message.tool_calls:
            print(f"Tool: {call.function.name}, Args: {call.function.arguments}")

asyncio.run(main())
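
To complete the loop, the tool's result is typically appended to the conversation as a tool message and the chat is re-sent. Below is a hedged sketch in the same OpenAI-style message shapes used above; run_get_weather is a hypothetical local implementation, and the tool_call_id round-trip shape is an assumption rather than documented liter-llm API:

import asyncio
import json
import os
from liter_llm import LlmClient

def run_get_weather(location: str) -> str:
    # Hypothetical local tool implementation.
    return f"Sunny, 21 °C in {location}"

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ]
    messages = [{"role": "user", "content": "What is the weather in Berlin?"}]

    response = await client.chat(model="openai/gpt-4o", messages=messages, tools=tools)
    call = response.choices[0].message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # Feed the tool result back and ask the model for a final answer.
    messages.append(response.choices[0].message)
    messages.append(
        {
            "role": "tool",
            "tool_call_id": call.id,
            "content": run_get_weather(args["location"]),
        }
    )
    final = await client.chat(model="openai/gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)

asyncio.run(main())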

Features

Supported Providers (143+)

Route to any provider using the provider/model prefix convention:

Provider         Example Model
OpenAI           openai/gpt-4o, openai/gpt-4o-mini
Anthropic        anthropic/claude-3-5-sonnet-20241022
Groq             groq/llama-3.1-70b-versatile
Mistral          mistral/mistral-large-latest
Cohere           cohere/command-r-plus
Together AI      together/meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
Fireworks        fireworks/accounts/fireworks/models/llama-v3p1-70b-instruct
Google Vertex    vertexai/gemini-1.5-pro
Amazon Bedrock   bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0
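
Because the prefix selects the provider, switching backends is only a model-string change. A minimal sketch, assuming one client per provider key (whether a single client can hold several keys at once is not documented here):

import asyncio
import os
from liter_llm import LlmClient

async def ask(client: LlmClient, model: str, question: str) -> str:
    response = await client.chat(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

async def main() -> None:
    # The provider/model prefix picks the backend; each client carries its own key.
    openai_client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    anthropic_client = LlmClient(api_key=os.environ["ANTHROPIC_API_KEY"])

    print(await ask(openai_client, "openai/gpt-4o-mini", "Name a prime number."))
    print(await ask(anthropic_client, "anthropic/claude-3-5-sonnet-20241022", "Name a prime number."))

asyncio.run(main())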

Complete Provider List

Key Capabilities

  • Provider Routing -- Single client for 143+ LLM providers via provider/model prefix

  • Local LLMs -- Connect to locally-hosted models via Ollama, LM Studio, vLLM, llama.cpp, and other local inference servers

  • Unified API -- Consistent chat, chat_stream, embeddings, list_models interface (see the embeddings sketch after this list)

  • Streaming -- Real-time token streaming via chat_stream

  • Tool Calling -- Function calling and tool use across all supporting providers

  • Type Safe -- Schema-driven types compiled from JSON schemas

  • Secure -- API keys never logged or serialized, managed via environment variables

  • Observability -- Built-in OpenTelemetry with GenAI semantic conventions

  • Error Handling -- Structured errors with provider context and retry hints
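
As an illustration of the unified surface, an embeddings call could look like the sketch below. The method name comes from the capability list above, but the parameter names (model and input, mirroring the chat call's OpenAI-style shape) are assumptions, not documented API:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])

    # Parameter names below are assumed to mirror the OpenAI shape.
    result = await client.embeddings(
        model="openai/text-embedding-3-small",
        input=["Hello, world!"],
    )
    print(result)

asyncio.run(main())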

Performance

Built on a compiled Rust core for speed and safety:

  • Provider resolution at client construction -- zero per-request overhead
  • Configurable timeouts and connection pooling
  • Zero-copy streaming with SSE and AWS EventStream support
  • API keys wrapped in secure memory, zeroed on drop
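
Because providers are resolved once when the client is built, a single client can be reused across many concurrent requests. A small sketch using only the chat API shown earlier:

import asyncio
import os
from liter_llm import LlmClient

async def main() -> None:
    client = LlmClient(api_key=os.environ["OPENAI_API_KEY"])
    questions = ["What is 2 + 2?", "Name a color.", "Capital of France?"]

    # Fan out concurrent requests over one client; no per-request resolution cost.
    responses = await asyncio.gather(*(
        client.chat(
            model="openai/gpt-4o-mini",
            messages=[{"role": "user", "content": q}],
        )
        for q in questions
    ))
    for response in responses:
        print(response.choices[0].message.content)

asyncio.run(main())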

Provider Routing

Route to 143+ providers using the provider/model prefix convention:

openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
groq/llama-3.1-70b-versatile
mistral/mistral-large-latest

See the provider registry for the full list.

Proxy Server

liter-llm also ships as an OpenAI-compatible proxy server with Docker support:

docker run -p 4000:4000 -e LITER_LLM_MASTER_KEY=sk-your-key ghcr.io/kreuzberg-dev/liter-llm
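
Because the proxy is OpenAI-compatible, the stock openai Python SDK can point at it. A sketch assuming the proxy serves the standard /v1 routes on port 4000 and accepts the master key as the API key:

from openai import OpenAI

# base_url and auth scheme are assumptions about the proxy's OpenAI-compatible surface.
client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-your-key")

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello via the proxy!"}],
)
print(response.choices[0].message.content)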

See the proxy server documentation for configuration, CLI usage, and MCP integration.

Documentation

Part of kreuzberg.dev.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Join our Discord community for questions and discussion.

License

MIT -- see LICENSE for details.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

liter_llm-1.4.0rc1.tar.gz (269.4 kB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

liter_llm-1.4.0rc1-cp310-abi3-win_amd64.whl (3.2 MB)

Uploaded: CPython 3.10+, Windows x86-64

liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.2 MB)

Uploaded: CPython 3.10+, manylinux: glibc 2.17+, x86-64

liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (2.9 MB)

Uploaded: CPython 3.10+, manylinux: glibc 2.17+, ARM64

liter_llm-1.4.0rc1-cp310-abi3-macosx_11_0_arm64.whl (2.9 MB)

Uploaded: CPython 3.10+, macOS 11.0+, ARM64

File details

Details for the file liter_llm-1.4.0rc1.tar.gz.

File metadata

  • Download URL: liter_llm-1.4.0rc1.tar.gz
  • Upload date:
  • Size: 269.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for liter_llm-1.4.0rc1.tar.gz
Algorithm     Hash digest
SHA256        953b8b9100159b0bbbd4e020432a255813977256562c975d109b16827a38142b
MD5           5d36228fad82eb6b3101d8bb6dea82fa
BLAKE2b-256   8d6edba1c98a2275347df1d370220a7cd9fec0fd7a4f6841f875da82fa742d64

See more details on using hashes here.
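
To verify a downloaded file against the digests above, hash it locally and compare, for example with Python's hashlib:

import hashlib

# SHA256 digest published above for the sdist.
EXPECTED = "953b8b9100159b0bbbd4e020432a255813977256562c975d109b16827a38142b"

with open("liter_llm-1.4.0rc1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "hash mismatch")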

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc1.tar.gz:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc1-cp310-abi3-win_amd64.whl.

File metadata

  • Download URL: liter_llm-1.4.0rc1-cp310-abi3-win_amd64.whl
  • Upload date:
  • Size: 3.2 MB
  • Tags: CPython 3.10+, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for liter_llm-1.4.0rc1-cp310-abi3-win_amd64.whl
Algorithm     Hash digest
SHA256        93989bcad9d50cca391146fbe026e43a8063ac419ec7b7c9b52b4c5c0cf20d5a
MD5           be2e67059df6d366be925265b8f2388f
BLAKE2b-256   236ef7f711fc4f4d2e016744818d40cdcc10c399ba2444a193f217daecc16275

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc1-cp310-abi3-win_amd64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.

File hashes

Hashes for liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl
Algorithm     Hash digest
SHA256        cda29264fc278141f76ad70ace0d8957b7095ea98f1b1cc6a680cb78334dabeb
MD5           fce0ed6ad491f3cef93785e8410ceaf9
BLAKE2b-256   2e1817ab94ecdfbeff9b74ed7f5142c5c2228ca168f5bcda2dfb9c92e6e97ce2

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl.

File hashes

Hashes for liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl
Algorithm     Hash digest
SHA256        4993577caf6e82bc5c03e9a9d28dff1a1a865dde2712d735ba433ce31e504357
MD5           870f61df5b8b12823cd6711f50e49115
BLAKE2b-256   dee7b2d8dcfd488947ec248233ab3cb9b02cba979b08c1565172450348bb0575

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc1-cp310-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file liter_llm-1.4.0rc1-cp310-abi3-macosx_11_0_arm64.whl.

File hashes

Hashes for liter_llm-1.4.0rc1-cp310-abi3-macosx_11_0_arm64.whl
Algorithm     Hash digest
SHA256        61a7a524be8f167d05db095e8550a73c375bdf1b464d23b88724766539797098
MD5           a1fa4aa952acb3a3557edd4a80dbd2ad
BLAKE2b-256   3af0188e2db8a55536d950140f7d6183050b4083f0ca69c8100116ff46e7827c

See more details on using hashes here.

Provenance

The following attestation bundles were made for liter_llm-1.4.0rc1-cp310-abi3-macosx_11_0_arm64.whl:

Publisher: publish.yaml on kreuzberg-dev/liter-llm

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
