Fluent LLM
Expressive, opinionated, and intuitive 'fluent interface' Python library for working with LLMs.
Mission statement
Express every LLM interaction in your app prototypes in a single statement, without having to reach for documentation, looking up model capabilities, or writing boilerplate code.
Highlights
- Expressive: Write natural, readable, and chainable LLM interactions.
- Opinionated: Focuses on best practices and sensible defaults for LLM workflows.
- Fluent API: Compose prompts, context, and expectations in a single chain.
- Multimodal: Supports text, image, and audio inputs and outputs; automatically picks a model based on the required modalities.
- Automatic coroutines: Can be used in both async and sync contexts.
- Modern Python: Type hints, async/await, and dataclasses throughout.
Setting API Keys
# On Unix/macOS
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-...
# On Windows (cmd)
set OPENAI_API_KEY=sk-...
set ANTHROPIC_API_KEY=sk-...
# On Windows (PowerShell)
$env:OPENAI_API_KEY="sk-..."
$env:ANTHROPIC_API_KEY="sk-..."
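If you prefer configuring keys from Python (e.g. in a notebook), the same effect can be achieved via `os.environ` before the first call. This is a generic environment-variable sketch, not a feature of the library itself:

```python
import os

# Set the key programmatically before the first LLM call.
os.environ["OPENAI_API_KEY"] = "sk-..."

# Fail early if no key is configured.
assert os.environ.get("OPENAI_API_KEY"), "no OpenAI API key set"
```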
Prompt Builder
The llm global instance is an LLMPromptBuilder, which can be used to build prompts.
The following prompt components can be used in an arbitrary order and multiple times:
- .agent(str): Sets the agent description; defines system behavior.
- .context(str): Passes textual context to the LLM.
- .request(str): Passes the main request to the LLM.
- .image(str): Passes an image to the LLM.
- .audio(str): Passes an audio file to the LLM.
The prompt chain is terminated by the following methods:
- .prompt() -> str: Sends the prompt to the LLM and expects a text response.
- .prompt_for_image() -> PIL.Image: Sends the prompt to the LLM and expects an image response.
- .prompt_for_audio() -> soundfile.SoundFile: Sends the prompt to the LLM and expects an audio response.
- .prompt_for_structured_output(pydantic_model) -> BaseModel: Sends the prompt to the LLM and expects a structured response.
They will either return the desired response if processing was successful, or raise an exception otherwise.
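As a rough mental model of how such a fluent chain accumulates parts before the terminal method fires, consider this toy sketch. ToyBuilder and its internals are invented for illustration and are not the library's actual LLMPromptBuilder:

```python
class ToyBuilder:
    """Toy fluent builder: each component method returns a new builder."""

    def __init__(self, parts=None):
        self.parts = parts or []

    def _add(self, role, text):
        # Return a new builder so chains stay immutable and reusable.
        return ToyBuilder(self.parts + [(role, text)])

    def agent(self, text): return self._add("agent", text)
    def context(self, text): return self._add("context", text)
    def request(self, text): return self._add("request", text)

chain = ToyBuilder().agent("You are terse.").context("Background.").request("1+2=?")
print(chain.parts)
```

Because each call returns a new builder, components can appear in any order and any number of times, and a partial chain can be stored and reused.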
Usage
Callable module
You can use this library as a callable module to experiment with LLMs.
> pip install fluent-llm
> fluent-llm "llm.request('1+2=?').prompt()"
1 + 2 = 3.
Or even easier, without installing, as a tool with uvx:
uvx fluent-llm "llm.request('1+2=?').prompt()"
1 + 2 = 3.
As a library
response = llm \
.agent("You are an art evaluator.") \
.context("You received this painting and were tasked to evaluate whether it's museum-worthy.") \
.image("painting.png") \
.prompt()
print(response)
Async/await
Just works. See if you can spot the difference from the example above.
response = await llm \
.agent("You are an art evaluator.") \
.context("You received this painting and were tasked to evaluate whether it's museum-worthy.") \
.image("painting.png") \
.prompt()
print(response)
Multimodality
response = llm \
    .agent("You are a 17th century classic painter.") \
    .context("You were paid 10 francs for creating a portrait.") \
    .request('Create a portrait of Louis XIV.') \
    .prompt_for_image()
assert isinstance(response, PIL.Image.Image)
response.show()
Structured output
from pydantic import BaseModel

class PaintingEvaluation(BaseModel):
    museum_worthy: bool
    reason: str
response = llm \
.agent("You are an art evaluator.") \
.context("You received this painting and were tasked to evaluate whether it's museum-worthy.") \
.image("painting.png") \
.prompt_for_type(PaintingEvaluation)
print(response)
Usage tracking
Usage tracking and price estimations for the last call are built-in.
>>> llm.request('How are you?').prompt()
"I'm doing well, thank you! How about you?"
>>> print(llm.usage)
=== Last API Call Usage ===
Model: gpt-4o-mini-2024-07-18
input_tokens: 11 tokens
output_tokens: 12 tokens
💰 Cost Breakdown:
input_tokens: 11 tokens → $0.000002
output_tokens: 12 tokens → $0.000007
💵 Total Call Cost: $0.000009
==============================
>>> llm.usage.cost.total_call_cost_usd
0.000009
>>> llm.usage.cost.breakdown['input_tokens'].count
11
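The totals above are plain per-token arithmetic. As a sanity check, here is the same calculation using gpt-4o-mini's published per-million-token rates (assumed values as of the model's release; actual prices may have changed, and the library's internals may differ):

```python
# Assumed gpt-4o-mini rates, USD per token (i.e. $0.15 / $0.60 per million).
INPUT_RATE = 0.15 / 1_000_000
OUTPUT_RATE = 0.60 / 1_000_000

# Token counts from the usage report above.
input_tokens, output_tokens = 11, 12

input_cost = input_tokens * INPUT_RATE    # ≈ $0.000002
output_cost = output_tokens * OUTPUT_RATE  # ≈ $0.000007
total = input_cost + output_cost

print(f"${total:.6f}")  # ≈ $0.000009, matching total_call_cost_usd
```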
Automatic Model Selection (recommended)
If choosing a provider or model per-invocation is not sufficient, you can define
a custom ModelSelectionStrategy and pass it to the LLMPromptBuilder constructor to select provider and model based on your own criteria.
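A custom strategy might look like the following sketch. The class name aside, the select method, its parameters, and the return shape are assumptions for illustration; consult the library's actual ModelSelectionStrategy interface before relying on them:

```python
from dataclasses import dataclass

@dataclass
class CheapFirstStrategy:
    """Hypothetical strategy: route image work to a multimodal model,
    everything else to the cheapest text model."""
    image_model: str = "gpt-4o"
    text_model: str = "gpt-4o-mini"

    def select(self, needs_image: bool) -> tuple[str, str]:
        # Return a (provider, model) pair based on the required modalities.
        model = self.image_model if needs_image else self.text_model
        return ("openai", model)

strategy = CheapFirstStrategy()
print(strategy.select(needs_image=False))  # ('openai', 'gpt-4o-mini')
print(strategy.select(needs_image=True))   # ('openai', 'gpt-4o')
```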
Provider and Model per-prompt override
You can specify preferred providers and models using the fluent chain API:
# Use a specific provider (will select best available model)
response = await llm \
.provider("anthropic") \
.request("Hello, how are you?") \
.prompt()
# Use a specific model
response = await llm \
.model("claude-sonnet-4-20250514") \
.request("Write a poem about coding") \
.prompt()
# Combine provider and model preferences
response = await llm \
.provider("openai") \
.model("gpt-4.1-mini") \
.request("Explain quantum computing") \
.prompt()
Customization
If the defaults are not sufficient, you can customize the behavior of the builder by creating your own LLMPromptBuilder, instead of using the llm global instance provided for convenience.
However, note that at this point you may be better off using the official OpenAI Python client library directly. This library is designed to be a simple and opinionated wrapper around the OpenAI API; it is not intended to be a full-featured LLM client.
Invocation
Instead of using the convenience methods .prompt_*(), you can use the .call() method to execute the prompt and return a response.
Client
Pass in a custom client to the .call() method, to use a custom client for the API call.
Contribution
Setup
uv sync --dev
- Installs all runtime and development dependencies (including pytest).
- Requires uv for fast, modern Python dependency management.
Running Tests
All tests are run with uv:
uv run pytest
Mocked Tests
- Located in tests/test_mocked.py.
- Do not require a real OpenAI API key or network access.
- Fast and safe for CI or local development.
Live API Tests
- Located in tests/test_live_api_*.py.
- Require a valid API key and internet access.
- Will consume credits!
- Run only when you want to test real OpenAI integration.
License
Licensed under the MIT License.
Disclaimer
Almost all code written by Claude, o3 and SWE-1, concept and design by @hheimbuerger.