TrivialAI
(A set of trivial bindings for AI models)

A trivial set of API bindings for AI models, because I'd like them to be easy to use.
Install
pip install trivialai
# Optional: HTTP/2 for OpenAI/Anthropic
# pip install "trivialai[http2]"
# Optional: AWS Bedrock support (via boto3)
# pip install "trivialai[bedrock]"
# Optional: Google Gemini support
# pip install "trivialai[gemini]"
Requirements
- Python ≥ 3.10 (the codebase uses X | Y type unions)
- Uses httpx for HTTP-based providers, boto3 for Bedrock, and google-genai for Gemini
Quick start
>>> from trivialai import claude, gemini, ollama, chatgpt, bedrock
Note: The legacy gcp module (backed by vertexai.generative_models) has been removed. Use gemini.Gemini instead — it supports both the Gemini Developer API and Vertex AI, and provides text and image generation through a single client.
Credentials
Anthropic (Claude)
Use an Anthropic Console API key directly:
claude.Claude("claude-3-5-sonnet-20241022", os.environ["ANTHROPIC_API_KEY"])
OpenAI (ChatGPT)
Use an OpenAI Platform API key:
chatgpt.ChatGPT("gpt-4o-mini", os.environ["OPENAI_API_KEY"])
Google Gemini
Go to Google AI Studio, sign in with a Google account, and click
"Get API key" → "Create API key" in the left sidebar. The key starts with AIza....
No billing setup is required for the free tier.
gemini.Gemini(api_key=os.environ["GEMINI_API_KEY"])
For Vertex AI (service account or Application Default Credentials), see the Vertex AI auth section below.
AWS Bedrock
- Enable Bedrock and request model access in a supported region via the AWS Console.
- Ensure your IAM user/role has
bedrock:Converse*andbedrock:InvokeModel*permissions. - Provide credentials via
aws configure, environment variables, instance role, or explicit keys.
bedrock.Bedrock(
model_id="us.anthropic.claude-3-5-sonnet-20241022-v2:0",
region="us-east-1",
)
Synchronous usage
Ollama
>>> client = ollama.Ollama("gemma2:2b", "http://localhost:11434/")
>>> client.generate("sys msg", "Say hi with 'platypus'.").content
"Hi there—platypus!"
>>> client.generate_json("sys msg", "Return {'name': 'Platypus'} as JSON").content
{'name': 'Platypus'}
Claude (Anthropic API)
>>> client = claude.Claude("claude-3-5-sonnet-20240620", os.environ["ANTHROPIC_API_KEY"])
>>> client.generate("sys msg", "Say hi with 'platypus'.").content
"Hello, platypus!"
ChatGPT (OpenAI API)
>>> client = chatgpt.ChatGPT("gpt-4o-mini", os.environ["OPENAI_API_KEY"])
>>> client.generate("sys msg", "Say hi with 'platypus'.").content
"Hello, platypus!"
Gemini (Google) — text + image
Gemini is a unified client: one object, one set of credentials, two capabilities.
model targets text generation; image_model targets image generation.
Both default to sensible values, so you can use either or both.
# Text generation
>>> gem = gemini.Gemini(api_key=os.environ["GEMINI_API_KEY"])
>>> gem.generate(
... system="Reply concisely.",
... prompt="What is the capital of France?",
... ).content
"Paris."
# Image generation (txt2img)
>>> img = gem.generate_image("A corgi in a spacesuit floating above the Earth")
>>> img.file()
'/tmp/trivialai-img-ho9ftavj.png'
# Image editing (img2img)
>>> edited = gem.generate_image("Make it sunset colours", image=img)
>>> edited.file()
'/tmp/trivialai-img-x7q2kl1m.png'
Image and text models are independent — you can override either per-call or at construction:
gem = gemini.Gemini(
model="gemini-3-pro-preview", # text model
image_model="gemini-3-pro-image-preview", # image model (Nano Banana Pro)
api_key=os.environ["GEMINI_API_KEY"],
)
To discover what models are available on your key:
>>> gem.models()
{'text': [{'name': 'models/gemini-3-flash-preview', ...}, ...],
'image': [{'name': 'models/gemini-3.1-flash-image-preview', ...}, ...]}
>>> gem.text_model_names()
['models/gemini-3-flash-preview', 'models/gemini-3-pro-preview', ...]
>>> gem.image_model_names()
['models/gemini-3.1-flash-image-preview', 'models/gemini-3-pro-image-preview', ...]
Vertex AI auth
# Service account JSON file (project auto-read from the file)
gem = gemini.Gemini(vertex_api_creds="/path/to/sa.json", region="us-central1")
# Application Default Credentials (gcloud auth application-default login)
gem = gemini.Gemini(project="my-gcp-project", region="us-central1", use_vertexai=True)
Bedrock (AWS) — text + image
Bedrock is also a unified client. model_id targets text (via the Converse API);
image_model_id targets image generation (via InvokeModel). Both are optional and independent.
client = bedrock.Bedrock(
model_id="us.anthropic.claude-3-5-sonnet-20241022-v2:0",
image_model_id="amazon.nova-canvas-v1:0", # default
region="us-east-1",
aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
# Text
res = client.generate(
system="You are a helpful assistant.",
prompt="Explain neural networks in one sentence.",
)
print(res.content)
# Image (txt2img)
img = client.generate_image("A watercolour fox reading a book in an autumn forest")
img.file() # → '/tmp/trivialai-img-4ai11zoz.png'
# Image (img2img)
edited = client.generate_image("Add snow", image=img)
Supported image models: Nova Canvas (amazon.nova-canvas-v1:0), Titan Image
(amazon.titan-image-generator-v2:0), and Stability AI (stability.*).
To discover available models in your region:
>>> client.models()
{'text': [{'model_id': 'anthropic.claude-3-5-sonnet-20241022-v2:0', ...}, ...],
'image': [{'model_id': 'amazon.nova-canvas-v1:0', ...}, ...]}
>>> client.image_model_ids()
['amazon.nova-canvas-v1:0', 'amazon.titan-image-generator-v2:0', ...]
Choosing the right model_id
Bedrock distinguishes between foundation model IDs (anthropic.claude-3-5-sonnet-20241022-v2:0)
and inference profile IDs (us.anthropic.claude-3-5-sonnet-20241022-v2:0).
Some models/regions require the region-prefixed profile ID. If you get a validation error
about on-demand throughput, switch to the us. / eu. prefixed form.
Streaming (NDJSON-style events) via BiStream
All providers expose a common streaming shape via stream(...).
Important: stream(...) (and helpers like stream_checked(...) / stream_json(...)) return a
BiStream, which supports both sync and async iteration.
LLM event schema
{"type":"start", "provider":"<ollama|openai|anthropic|gemini|bedrock>", "model":"..."}
{"type":"delta", "text":"...", "scratchpad":"..."}
- Ollama: scratchpad may contain content extracted from <think>…</think>.
- Gemini: scratchpad carries native thought tokens (no tag parsing needed).
- Other providers: scratchpad is typically "" in deltas.
{"type":"end", "content":"...", "scratchpad": <str|None>, "tokens": <int|None>}
{"type":"error", "message":"..."}
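The <think>-tag extraction Ollama does for the scratchpad can be pictured with a simplified, library-independent sketch (the real parser works over streamed chunks; this one assumes it has the complete text):

```python
import re

def split_think(text: str) -> tuple[str, str]:
    """Separate visible output from <think>…</think> scratchpad content."""
    scratchpad = "".join(re.findall(r"<think>(.*?)</think>", text, flags=re.DOTALL))
    visible = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return visible.strip(), scratchpad.strip()

visible, scratch = split_think("<think>plan the greeting</think>Hi there, platypus!")
# visible == "Hi there, platypus!", scratch == "plan the greeting"
```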
stream_checked(...) / stream_json(...) append a final parse event:
{"type":"final", "ok": true|false, "parsed": ..., "error": ..., "raw": ...}
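As an illustration of this event shape (not library code), a consumer can accumulate delta text and pick up the final parse event like so, using hand-built event dicts matching the schema above:

```python
def consume(events):
    # Accumulate streamed delta text and capture the final parsed value, if any.
    buf, parsed = [], None
    for ev in events:
        if ev["type"] == "delta":
            buf.append(ev["text"])
        elif ev["type"] == "final" and ev.get("ok"):
            parsed = ev["parsed"]
    return "".join(buf), parsed

events = [
    {"type": "start", "provider": "ollama", "model": "gemma2:2b"},
    {"type": "delta", "text": '{"name": ', "scratchpad": ""},
    {"type": "delta", "text": '"Platypus"}', "scratchpad": ""},
    {"type": "end", "content": '{"name": "Platypus"}', "scratchpad": None, "tokens": 7},
    {"type": "final", "ok": True, "parsed": {"name": "Platypus"}, "error": None,
     "raw": '{"name": "Platypus"}'},
]
text, parsed = consume(events)
# text == '{"name": "Platypus"}', parsed == {'name': 'Platypus'}
```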
Image stream event schema
Image generation via imagestream(...) yields:
{"type":"start", "provider":"...", "model":"...", "mode":"txt2img"|"img2img"}
{"type":"progress", "progress": 0.0–1.0, "state":"...", "textinfo":"..."} (where supported)
{"type":"end", "image": ImageResult, "model":"...", "mode":"..."}
{"type":"error", "message":"..."}
Note on Gemini / Bedrock image streaming: both APIs are single-shot REST calls with no server-sent progress. The stream emits a synthetic progress: 0.0 event immediately before the blocking call (so progress-bar consumers see activity), then an end event when the image resolves. The end payload is identical to other providers.
Example: streaming text (sync)
client = ollama.Ollama("gemma2:2b", "http://localhost:11434/")
for ev in client.stream("sys", "Explain, think step-by-step."):
if ev["type"] == "delta":
print(ev["text"], end="")
elif ev["type"] == "end":
print("\n-- scratchpad --")
print(ev["scratchpad"])
Example: streaming + parse-at-end
from trivialai.util import loadch
for ev in client.stream_checked(loadch, "sys", "Return a JSON object gradually."):
if ev["type"] == "final":
print("Parsed JSON:", ev["parsed"])
# Shortcut:
for ev in client.stream_json("sys", "Return {'name':'Platypus'} as JSON."):
if ev["type"] == "final":
print("Parsed:", ev["parsed"])
Example: streaming image (Gemini)
gem = gemini.Gemini(api_key=os.environ["GEMINI_API_KEY"])
for ev in gem.imagestream("A rainy Tokyo street at night, neon reflections"):
if ev["type"] == "progress":
print(f" {ev['textinfo']}")
elif ev["type"] == "end":
ev["image"].file("tokyo.png")
Example: streaming image (Bedrock)
client = bedrock.Bedrock(image_model_id="amazon.nova-canvas-v1:0", region="us-east-1")
for ev in client.imagestream("A watercolour fox reading a book in an autumn forest"):
if ev["type"] == "end":
ev["image"].file("fox.png")
Example: streaming text (async)
async for ev in client.stream("sys", "Stream something."):
...
BiStream: one stream interface for sync + async
from trivialai.bistream import BiStream
BiStream[T] wraps a sync Iterable[T], an async AsyncIterable[T], or another BiStream[T]
and exposes both iterator interfaces.
Key behaviour:
- Single-consumer: once consumed, exhausted.
- Mode-locked: a given instance may be consumed either sync or async.
- Bridging: async → sync driven by a background event loop thread; sync → async wraps
next().
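The sync → async direction of that bridging can be sketched, independently of the library, as an async generator wrapping next() (a production bridge would offload next() to a thread so it cannot block the event loop; this sketch calls it inline):

```python
import asyncio

async def as_async(sync_iterable):
    # Wrap a sync iterator so it can be consumed with `async for`.
    it = iter(sync_iterable)
    while True:
        try:
            yield next(it)
        except StopIteration:
            return

async def main():
    out = []
    async for item in as_async([1, 2, 3]):
        out.append(item)
    return out

result = asyncio.run(main())  # [1, 2, 3]
```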
Chaining streams with then / map / mapcat / branch
All combinators are mode-preserving (sync in → sync out, async in → async out).
then(...): append a follow-up stage after upstream terminates
pipeline = client.stream("sys", "Answer, streaming.").then(lambda: [
{"type": "note", "text": "stream ended"},
])
Your follow-up can be 0-arg or 1-arg (done receives StopIteration.value if present).
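The 0-arg / 1-arg convention mirrors how plain Python generators expose a return value through StopIteration.value. A library-independent sketch of the idea (not BiStream's actual implementation):

```python
import inspect

def then(src, follow_up):
    # Yield everything from src, then everything from follow_up's stream.
    # A 1-arg follow_up receives the upstream generator's return value.
    def run():
        done = yield from src  # `yield from` captures StopIteration.value
        takes_arg = len(inspect.signature(follow_up).parameters) >= 1
        yield from (follow_up(done) if takes_arg else follow_up())
    return run()

def upstream():
    yield {"type": "delta", "text": "hi"}
    return "DONE"  # becomes StopIteration.value

events = list(then(upstream(), lambda done: [{"type": "note", "text": done}]))
# [{'type': 'delta', 'text': 'hi'}, {'type': 'note', 'text': 'DONE'}]
```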
map(...): transform each event
pipeline = client.stream("sys", "Stream.").map(
lambda ev: (ev | {"text": ">> " + ev["text"]}) if ev.get("type") == "delta" else ev
)
mapcat(...): per-item stream expansion (flatMap), with optional concurrency
events = BiStream(["a.py", "b.py", "c.py"]).mapcat(
lambda path: agent.streamed(f"Analyze {path}"),
concurrency=8,
)
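Without concurrency, mapcat is plain flatMap. A minimal sketch on ordinary iterables (the library version additionally preserves sync/async mode and can drive up to concurrency per-item streams at once):

```python
def mapcat(src, fn):
    # Expand each item into a sub-stream and flatten the results in order.
    for item in src:
        yield from fn(item)

out = list(mapcat(["a", "b"], lambda x: [x, x.upper()]))  # ['a', 'A', 'b', 'B']
```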
branch(...): fan-out, then fan-in via .sequence() / .interleave()
base = client.stream("sys", "First: describe the plan.")
fan = base.branch(["doc1", "doc2", "doc3"],
lambda doc: client.stream("sys", f"Summarize: {doc}"))
for ev in fan.interleave(concurrency=8):
handle(ev)
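.interleave() merges branch outputs as they become available; with purely synchronous iterators that reduces to a round-robin merge, which can be sketched as:

```python
def interleave(*streams):
    # Round-robin merge: take one item from each still-live stream per pass.
    iters = [iter(s) for s in streams]
    while iters:
        alive = []
        for it in iters:
            try:
                yield next(it)
                alive.append(it)
            except StopIteration:
                pass
        iters = alive

merged = list(interleave([1, 2, 3], "ab"))  # [1, 'a', 2, 'b', 3]
```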
Extra helpers
tap(...): side effects without changing events
stream = client.stream("sys", "Stream.").tap(lambda ev: log(ev))
repeat_until(...): agent loops
from trivialai.bistream import repeat_until, is_type
looped = repeat_until(
src=client.stream("sys", "First attempt..."),
step=lambda driver: client.stream("sys", f"Next attempt, based on {driver}..."),
stop=is_type("final"),
max_iters=10,
)
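One plausible reading of the loop, sketched without the library (hedged: the real repeat_until may pass a different driver value to step): consume the current stream, and if no event matched stop, build the next attempt from the last event seen, up to max_iters rounds.

```python
def repeat_until_sketch(src, step, stop, max_iters=10):
    stream, last = src, None
    for _ in range(max_iters):
        matched = False
        for ev in stream:
            yield ev
            last = ev
            if stop(ev):
                matched = True
        if matched:
            return
        stream = step(last)  # `last` plays the role of the driver here

def is_type(t):
    return lambda ev: ev.get("type") == t

events = list(repeat_until_sketch(
    src=[{"type": "delta"}, {"type": "end"}],
    step=lambda last: [{"type": "final"}],
    stop=is_type("final"),
))
# [{'type': 'delta'}, {'type': 'end'}, {'type': 'final'}]
```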
Embeddings
from trivialai.embedding import OllamaEmbedder
embed = OllamaEmbedder(model="nomic-embed-text", server="http://localhost:11434")
vec = embed("hello world")
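Embedding vectors are typically compared with cosine similarity; a small stdlib-only helper, independent of the library:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

cosine([1.0, 0.0], [1.0, 0.0])  # 1.0 (identical direction)
cosine([1.0, 0.0], [0.0, 1.0])  # 0.0 (orthogonal)
```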
Notes & compatibility
- Dependencies: httpx for HTTP providers; boto3 for Bedrock; google-genai + optionally google-auth for Gemini.
- Scratchpad: Ollama surfaces <think> content; Gemini routes native thought tokens; other providers emit scratchpad="" in deltas and None in the final end.
- gcp module removed: the old gcp.GCP class (backed by vertexai.generative_models, deprecated June 2025) has been removed. Migrate to gemini.Gemini — it supports all three auth modes the old class did, plus image generation.
- BiStream: single-use and single-consumer — don't consume the same instance from multiple tasks.