# ClearLLM
A lightweight, provider-agnostic LLM client. Think LiteLLM, but with clean Python types — structured returns, frozen pydantic models, no raw dicts.
## Install

```bash
pip install clearllm             # core only (no LLM SDKs)
pip install "clearllm[litellm]"  # + LiteLLM (OpenAI, Anthropic, Azure, OpenRouter…)
pip install "clearllm[gemini]"   # + native google-genai SDK
pip install "clearllm[openai]"   # + direct openai SDK
pip install "clearllm[all]"      # everything
```
For local development:

```bash
git clone https://github.com/you/clearllm
cd clearllm
pip install -e ".[all,dev]"
```
## Quick start

```python
from clearllm import LLM, Chat, UserTurn

llm = LLM("gpt-4o")
llm = LLM("gemini-2.5-flash")          # native Gemini SDK auto-selected
llm = LLM("claude-3-5-sonnet-latest")  # Anthropic via LiteLLM

# Build a conversation
chat = Chat(system_prompt="You are a helpful assistant.")
chat.add_user_turn(UserTurn().add_text("What is 2 + 2?"))

# Call — always returns AssistantTurn
turn = llm(chat)
print(turn.to_text())

# Or pass a raw message list
turn = llm([{"role": "user", "content": "Hello!"}])

# Async (inside an async function)
turn = await llm.acall(chat)
```
## Provider selection

Provider is auto-detected from the model name. You can override it explicitly.

```python
LLM("gpt-4o")                            # → LiteLLMProvider (default catch-all)
LLM("gemini-2.5-flash")                  # → GeminiProvider (native SDK)
LLM("claude-3-5-sonnet-latest")          # → LiteLLMProvider
LLM("gpt-4o", provider="openai")         # → OpenAIProvider (direct SDK, no LiteLLM)
LLM("azure/gpt-4o", api_key=..., api_base=..., api_version=...)  # Azure via LiteLLM
LLM("openrouter/google/gemini-2.5-pro")  # OpenRouter via LiteLLM

# Bring your own provider
LLM("my-model", provider=my_provider_obj)
```

Default generation params can be set at construction and overridden per call:

```python
llm = LLM("gpt-4o", temperature=0.3, max_tokens=1024)
turn = llm(chat, temperature=1.0)  # overrides for this call only
```
## Tool use

```python
import json
from typing import Annotated

from clearllm.tools import tool, simple_tool_result
from clearllm.types import ToolResult

@tool
async def search(query: Annotated[str, "Search query"]) -> ToolResult:
    """Search the web for information."""
    results = await do_search(query)
    return simple_tool_result(json.dumps(results))

# Pass tool specs to the model
turn = llm(chat, tools=[search.to_spec()])
```
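A decorator like `@tool` can derive parameter descriptions from the `Annotated` metadata to build a spec. A self-contained stdlib sketch of that idea — the schema shape and the helper name `spec_from_function` are assumptions for illustration, not ClearLLM's actual `ToolSpec`:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints

def spec_from_function(fn) -> dict:
    """Build a simple spec from a function's Annotated hints (illustrative)."""
    hints = get_type_hints(fn, include_extras=True)
    params = {}
    for name, hint in hints.items():
        if name == "return":
            continue
        desc = ""
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)
            desc = next((m for m in meta if isinstance(m, str)), "")
            hint = base
        params[name] = {"type": hint.__name__, "description": desc}
    return {"name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": params}

def search(query: Annotated[str, "Search query"]) -> str:
    """Search the web for information."""
    ...

spec_from_function(search)
# {'name': 'search', 'description': 'Search the web for information.',
#  'parameters': {'query': {'type': 'str', 'description': 'Search query'}}}
```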
## Message types

All messages use the standard `Message` TypedDict shape. Content can be a plain string or a list of typed parts.

```python
from clearllm.types import Message, TextPart, ImagePart, ThinkingPart

msg: Message = {
    "role": "user",
    "content": [
        TextPart(type="text", text="What's in this image?"),
        ImagePart(type="image", image="https://example.com/photo.jpg"),
    ],
}
```
`ToolCall` and `UnparsedToolCall` are immutable pydantic models validated on construction.
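"Immutable and validated on construction" can be mirrored with the standard library. ClearLLM uses pydantic for this; the stdlib sketch below (with a hypothetical `ToolCallSketch` class, not ClearLLM's actual model) only illustrates the behavior:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class ToolCallSketch:
    """Stdlib stand-in for a frozen, construction-validated tool-call model."""
    id: str
    name: str
    arguments: str  # JSON-encoded argument payload

    def __post_init__(self):
        # Validation runs once, at construction time
        if not self.name:
            raise ValueError("tool call needs a non-empty name")

call = ToolCallSketch(id="call_1", name="search", arguments='{"query": "2 + 2"}')
try:
    call.name = "other"          # frozen: any mutation raises
except FrozenInstanceError:
    pass
```

Freezing the model means a tool call can be shared between a chat history and a provider adapter without defensive copying.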
## Multimodal (backbone)

The `UserTurn` / `AssistantTurn` / `Chat` classes in `backbone.py` give a fluent builder API on top of raw messages:

```python
from clearllm import Chat, UserTurn

chat = Chat(system_prompt="Describe images.")
turn = (UserTurn()
        .add_text("What is in these photos?")
        .add_image(url="https://…/photo1.jpg")
        .add_image_file("local.png"))
chat.add_user_turn(turn)

response = llm(chat)
print(response.to_text())
print(response.prompt_tokens, response.completion_tokens)
```
## Architecture

```text
clearllm/
  types.py      # Message, ToolCall, TextPart, ImagePart, ToolSpec, … (no heavy deps)
  tools.py      # @tool decorator, FunctionTool, simple_tool_result
  backbone.py   # Chat, UserTurn, AssistantTurn, ContentBlockList, …
  llm.py        # LLM — public entry-point
  retry.py      # exponential-backoff retry
  providers/
    base.py     # Provider protocol
    litellm.py  # LiteLLMProvider (optional: litellm)
    openai.py   # OpenAIProvider (optional: openai)
    gemini.py   # GeminiProvider (optional: google-genai)
  display/
    jupyter.py  # HTML rendering for Jupyter notebooks
    themes.py   # glassmorphism theme tokens
```
Module dependencies:

```mermaid
flowchart TD
    tests[tests/] --> backbone
    backbone[backbone.py] --> types[types.py]
    backbone --> display_jupyter[display/jupyter.py]
    display_jupyter --> backbone
    tools[tools.py] --> types
    types --> pydantic[pydantic]
    types --> PIL[Pillow]
    llm[llm.py] --> providers
    providers[providers/] --> backbone
    providers --> retry[retry.py]
    providers -.->|optional| litellm_dep[litellm]
    providers -.->|optional| openai_dep[openai]
    providers -.->|optional| gemini_dep[google-genai]
```
## Design notes

- Always returns `AssistantTurn` — no `mm_beta` flag, no raw dict responses.
- Lazy optional deps — each provider only imports its SDK when first called.
- Clean types — `ToolCall` / `UnparsedToolCall` are frozen pydantic models; `Message` is a TypedDict.
- No auto-refresh / factory pattern — connection lifecycle is the SDK's job.
- Retry is a utility — `retry_with_exponential_backoff` in `retry.py`, configurable per provider via `max_retries` / `base_delay` constructor args.
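The retry utility follows the standard exponential-backoff-with-jitter pattern. A generic stdlib sketch of that pattern — the parameter names `max_retries` and `base_delay` match the constructor args above, but the function body is an assumption, not ClearLLM's implementation:

```python
import random
import time

def retry_with_backoff(fn, max_retries: int = 3, base_delay: float = 0.5,
                       retry_on: tuple = (Exception,)):
    """Call fn(); on failure, sleep base_delay * 2**attempt (with jitter) and retry."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries:
                raise  # retry budget exhausted: surface the last error
            delay = base_delay * (2 ** attempt)
            time.sleep(delay * (0.5 + random.random() / 2))  # jitter in [0.5x, 1x)
```

Jitter spreads out retries from concurrent callers so they don't hammer a rate-limited API in lockstep.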