
casual-llm


Lightweight LLM provider abstraction with standardized message models.

Part of the "casual" ecosystem of lightweight AI tools.

Upgrading from v0.4.x? See the Migration Guide for breaking changes.

Features

  • Client/Model Separation - Configure API connections once, create multiple models
  • Protocol-based - Uses typing.Protocol, no inheritance required
  • Multi-provider - Works with OpenAI, Anthropic (Claude), Ollama, or your custom provider
  • Lightweight - Minimal dependencies (pydantic, httpx)
  • Async-first - Built for modern async Python
  • Type-safe - Full type hints with py.typed marker
  • OpenAI-compatible - Standard message format used across the industry
  • Tool calling - First-class support for function/tool calling
  • Per-model usage tracking - Track token usage per model for cost monitoring
  • Vision support - Send images to vision-capable models
  • Streaming - Stream responses in real-time with AsyncIterator
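The protocol-based design means any object with the right methods can act as a provider: structural typing, no base class to inherit. A minimal stdlib sketch of the idea (the `ChatClient` protocol and `chat` signature here are illustrative, not casual-llm's actual interface):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class ChatClient(Protocol):
    """Anything with a matching `chat` method satisfies this protocol."""

    async def chat(self, messages: list[dict]) -> str: ...


class EchoClient:
    """Never subclasses ChatClient, yet satisfies it structurally."""

    async def chat(self, messages: list[dict]) -> str:
        return messages[-1]["content"]


# The isinstance check passes because EchoClient has the right method,
# not because of any inheritance relationship.
print(isinstance(EchoClient(), ChatClient))  # True
```

Note that `@runtime_checkable` only verifies method presence at runtime; full signature checking is done statically by type checkers such as mypy.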

Installation

# Core only (pydantic + httpx)
uv add casual-llm

# With specific providers
uv add casual-llm[ollama]
uv add casual-llm[openai]
uv add casual-llm[anthropic]

# With all providers
uv add casual-llm[ollama,openai,anthropic]

# Or using pip
pip install casual-llm[openai,anthropic]

Quick Start

import asyncio

from casual_llm import OpenAIClient, Model, UserMessage

# Create client (works with OpenAI, OpenRouter, LM Studio, etc.)
client = OpenAIClient(
    api_key="sk-...",  # or set OPENAI_API_KEY env var
    base_url="https://openrouter.ai/api/v1",  # optional, omit for OpenAI
)

# Create model
model = Model(client, "gpt-4o-mini")

async def main() -> None:
    # Generate response
    response = await model.chat([UserMessage(content="Hello!")])
    print(response.content)

asyncio.run(main())


Message Models

casual-llm provides OpenAI-compatible message models that work with any provider:

from casual_llm import (
    UserMessage,
    AssistantMessage,
    SystemMessage,
    ToolResultMessage,
    TextContent,
    ImageContent,
)

# System message (sets behavior)
system_msg = SystemMessage(content="You are a helpful assistant.")

# User message (simple text)
user_msg = UserMessage(content="Hello!")

# User message (multimodal - text + image)
vision_msg = UserMessage(
    content=[
        TextContent(text="What's in this image?"),
        ImageContent(source="https://example.com/image.jpg"),
    ]
)

# Assistant message (response from LLM)
assistant_msg = AssistantMessage(content="I'll help you with that.")

# Tool result message (after executing a tool)
tool_msg = ToolResultMessage(
    name="get_weather",
    tool_call_id="call_123",
    content='{"temp": 20, "condition": "sunny"}'
)
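For reference, these models mirror the OpenAI chat wire format, where each message is a dict carrying a role. A plain-dict equivalent of the conversation shapes above, including a tool-call round trip (casual-llm's exact serialization may differ in detail):

```python
# OpenAI-style chat messages as plain dicts -- the industry-standard
# shape that casual-llm's message models correspond to.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the weather in Paris?"},
    # The assistant requests a tool call instead of answering directly
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_123",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": '{"city": "Paris"}',
                },
            }
        ],
    },
    # The tool result is sent back, linked by tool_call_id
    {
        "role": "tool",
        "tool_call_id": "call_123",
        "content": '{"temp": 20, "condition": "sunny"}',
    },
]

# Multimodal user content is a list of typed parts
vision = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}},
    ],
}

print([m["role"] for m in conversation])  # ['system', 'user', 'assistant', 'tool']
```

Because this format is shared across OpenAI, OpenRouter, Ollama, and many other providers, message objects built once can be reused regardless of which backend ultimately serves the request.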

Why casual-llm?

Feature              casual-llm                 LangChain  litellm
Dependencies         2 core (pydantic, httpx)   100+       50+
Protocol-based       Yes                        No         No
Type-safe            Full typing                Partial    Partial
Message models       Included                   Separate   None
Multi-model sharing  Yes                        No         Yes
Vision support       All providers              Yes        Yes
Streaming            All providers              Yes        Yes
Providers            OpenAI, Anthropic, Ollama  Many       Many
Learning curve       Minutes                    Hours      Medium

Use casual-llm when you want:

  • Lightweight, focused library (not a framework)
  • Protocol-based design (no inheritance)
  • Standard message models shared across your codebase
  • Efficient multi-model usage with shared connections
  • Simple, predictable API

Use LangChain when you need:

  • Full-featured framework with chains, agents, RAG
  • Massive ecosystem of integrations
  • Higher-level abstractions

Part of the casual-* Ecosystem

  • casual-mcp - MCP server orchestration and tool calling
  • casual-llm (this library) - LLM provider abstraction
  • casual-memory - Memory intelligence with conflict detection

All casual-* libraries share the same philosophy: lightweight, protocol-based, easy to use.

Contributing

Contributions welcome! Please see CONTRIBUTING.md for guidelines.

License

MIT License - see LICENSE for details.
