# casual-llm
Lightweight LLM provider abstraction with standardized message models.
Part of the "casual" ecosystem of lightweight AI tools.
Upgrading from v0.4.x? See the Migration Guide for breaking changes.
## Features
- Client/Model Separation - Configure API connections once, create multiple models
- Protocol-based - Uses `typing.Protocol`, no inheritance required
- Multi-provider - Works with OpenAI, Anthropic (Claude), Ollama, or your custom provider
- Lightweight - Minimal dependencies (pydantic, httpx)
- Async-first - Built for modern async Python
- Type-safe - Full type hints with py.typed marker
- OpenAI-compatible - Standard message format used across the industry
- Tool calling - First-class support for function/tool calling
- Per-model usage tracking - Track token usage per model for cost monitoring
- Vision support - Send images to vision-capable models
- Streaming - Stream responses in real time with `AsyncIterator`
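The "Protocol-based" point can be sketched in plain Python: with `typing.Protocol`, any object that has the right methods satisfies the interface structurally, with no base class to inherit from. This is an illustrative sketch, not casual-llm's actual protocol definition:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ChatClient(Protocol):
    """Anything with a matching `chat` method satisfies this protocol."""
    async def chat(self, messages: list[dict]) -> str: ...

class MyCustomClient:  # note: no inheritance from ChatClient
    async def chat(self, messages: list[dict]) -> str:
        return "stub response"

# Structural typing: MyCustomClient counts as a ChatClient, even at runtime
print(isinstance(MyCustomClient(), ChatClient))  # → True
```

Because the check is structural, you can plug in your own provider class without importing or subclassing anything from the library.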
## Installation
```shell
# Core only (pydantic + httpx)
uv add casual-llm

# With specific providers
uv add casual-llm[ollama]
uv add casual-llm[openai]
uv add casual-llm[anthropic]

# With all providers
uv add casual-llm[ollama,openai,anthropic]

# Or using pip
pip install casual-llm[openai,anthropic]
```
## Quick Start
```python
from casual_llm import OpenAIClient, Model, UserMessage

# Create client (works with OpenAI, OpenRouter, LM Studio, etc.)
client = OpenAIClient(
    api_key="sk-...",  # or set OPENAI_API_KEY env var
    base_url="https://openrouter.ai/api/v1",  # optional, omit for OpenAI
)

# Create model
model = Model(client, "gpt-4o-mini")

# Generate response
response = await model.chat([UserMessage(content="Hello!")])
print(response.content)
```
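Note that the snippet above uses `await` at the top level, which only works in a REPL or notebook; in a script, wrap the call in an event loop via `asyncio.run`. A self-contained sketch of the pattern, with a stub coroutine standing in for the real `model.chat` call so it runs without the library or network access:

```python
import asyncio

async def chat_stub(prompt: str) -> str:
    # Stand-in for `await model.chat([UserMessage(content=prompt)])`
    await asyncio.sleep(0)  # simulate the awaitable API call
    return f"echo: {prompt}"

async def main() -> str:
    reply = await chat_stub("Hello!")
    print(reply)
    return reply

asyncio.run(main())  # prints "echo: Hello!"
```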
More examples:
- Quick Start Guide - Ollama, Anthropic, and more
- Vision Guide - Send images to models
- Streaming Guide - Real-time responses
- Advanced Usage - Custom clients, configuration classes
- API Reference - Full API documentation
- Examples Directory - Complete working examples
## Message Models
casual-llm provides OpenAI-compatible message models that work with any provider:
```python
from casual_llm import (
    UserMessage,
    AssistantMessage,
    SystemMessage,
    ToolResultMessage,
    TextContent,
    ImageContent,
)

# System message (sets behavior)
system_msg = SystemMessage(content="You are a helpful assistant.")

# User message (simple text)
user_msg = UserMessage(content="Hello!")

# User message (multimodal - text + image)
vision_msg = UserMessage(
    content=[
        TextContent(text="What's in this image?"),
        ImageContent(source="https://example.com/image.jpg"),
    ]
)

# Assistant message (response from LLM)
assistant_msg = AssistantMessage(content="I'll help you with that.")

# Tool result message (after executing a tool)
tool_msg = ToolResultMessage(
    name="get_weather",
    tool_call_id="call_123",
    content='{"temp": 20, "condition": "sunny"}',
)
```
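For reference, these models mirror the standard OpenAI chat wire format. The plain-dict equivalents of the messages above look roughly like the following (the exact serialization casual-llm produces may differ in detail):

```python
# OpenAI-style chat messages: one dict per turn, keyed by "role".
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
    {
        # Multimodal user turn: content is a list of typed parts
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
        ],
    },
    {"role": "assistant", "content": "I'll help you with that."},
    {
        # Tool result: echoes the id from the assistant's tool call
        "role": "tool",
        "tool_call_id": "call_123",
        "content": '{"temp": 20, "condition": "sunny"}',
    },
]
```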
## Why casual-llm?
| Feature | casual-llm | LangChain | litellm |
|---|---|---|---|
| Dependencies | 2 core (pydantic, httpx) | 100+ | 50+ |
| Protocol-based | Yes | No | No |
| Type-safe | Full typing | Partial | Partial |
| Message models | Included | Separate | None |
| Multi-model sharing | Yes | No | Yes |
| Vision support | All providers | Yes | Yes |
| Streaming | All providers | Yes | Yes |
| Providers | OpenAI, Anthropic, Ollama | Many | Many |
| Learning curve | Minutes | Hours | Medium |
Use casual-llm when you want:
- Lightweight, focused library (not a framework)
- Protocol-based design (no inheritance)
- Standard message models shared across your codebase
- Efficient multi-model usage with shared connections
- Simple, predictable API
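The "shared connections" point can be sketched generically (hypothetical stub classes, not casual-llm's own): one configured client object is reused by several model handles, rather than each model opening its own connection:

```python
class Client:
    """One configured API connection (base URL, auth) shared by many models."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

class Model:
    """A named model bound to a shared client; it opens no connection itself."""
    def __init__(self, client: Client, name: str) -> None:
        self.client = client
        self.name = name

client = Client("https://api.example.com/v1")
fast = Model(client, "gpt-4o-mini")
smart = Model(client, "gpt-4o")
print(fast.client is smart.client)  # → True: one connection, two models
```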
Use LangChain when you need:
- Full-featured framework with chains, agents, RAG
- Massive ecosystem of integrations
- Higher-level abstractions
## Part of the casual-* Ecosystem
- casual-mcp - MCP server orchestration and tool calling
- casual-llm (this library) - LLM provider abstraction
- casual-memory - Memory intelligence with conflict detection
All casual-* libraries share the same philosophy: lightweight, protocol-based, easy to use.
## Contributing
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
## License
MIT License - see LICENSE for details.
## Links
- GitHub: https://github.com/casualgenius/casual-llm
- PyPI: https://pypi.org/project/casual-llm/
- Issues: https://github.com/casualgenius/casual-llm/issues
- Migration Guide: MIGRATION-0.5.0.md
- casual-mcp: https://github.com/casualgenius/casual-mcp