# LLMCall

A lightweight abstraction layer for LLM calls.
## Motivation

As AI becomes more prevalent in software development, there is a growing need for simple, intuitive APIs for quick text generation, decision making, and more. This is especially true now that structured outputs allow AI to be integrated seamlessly into application flow.

llmcall provides a minimal interface for common LLM operations without unnecessary complexity.
## Installation

```bash
pip install llmcall
```
## Quick Start

```bash
# 1. Install
pip install llmcall

# 2. Set your API key (copy .env.example to .env and fill in your key)
cp .env.example .env
```

```python
# 3. Use it
from llmcall import generate

response = generate("What is the capital of France?")
print(response)  # Paris
```
## Configuration

Copy .env.example to .env and set your values:

```bash
# Required
LLMCALL_API_KEY=sk-...

# Optional (defaults shown)
LLMCALL_MODEL=openai/gpt-4.1
LLMCALL_BASE_URL=      # for Ollama, Azure, LM Studio, etc.
LLMCALL_DEBUG=false
```

Or set environment variables directly:

```bash
export LLMCALL_API_KEY=sk-...
```

llmcall uses LiteLLM under the hood, so any provider LiteLLM supports works. We recommend OpenAI models for structured outputs.
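To illustrate how environment-driven configuration like the above typically resolves, here is a minimal stand-in sketch: the `get_setting` helper and the `DEFAULTS` table are hypothetical illustrations, not llmcall's actual loading logic.

```python
import os

# Hypothetical sketch: an exported environment variable overrides the
# built-in default. llmcall's real implementation may differ.
DEFAULTS = {"LLMCALL_MODEL": "openai/gpt-4.1", "LLMCALL_DEBUG": "false"}

def get_setting(name: str) -> str:
    # os.environ wins over the defaults table
    return os.environ.get(name, DEFAULTS[name])

print(get_setting("LLMCALL_MODEL"))
```

This is the usual precedence for twelve-factor-style configuration: shell exports beat `.env` values, which beat library defaults.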
### Using local models (Ollama)

```bash
LLMCALL_MODEL=ollama/llama3.2
LLMCALL_BASE_URL=http://localhost:11434
LLMCALL_API_KEY=ollama
```
## Example Usage

### Generation

```python
from llmcall import generate, generate_decision
from pydantic import BaseModel

# i. Basic generation
response = generate("Write a story about a fictional holiday to the sun.")

# ii. Structured generation
class ResponseSchema(BaseModel):
    story: str
    tags: list[str]

response: ResponseSchema = generate(
    "Create a rare story about the history of civilisation.",
    output_schema=ResponseSchema,
)

# iii. Streaming — get tokens as they arrive
for chunk in generate("Tell me a joke.", stream=True):
    print(chunk, end="", flush=True)

# iv. Decision making
decision = generate_decision(
    "Which is bigger?",
    options=["apple", "berry", "pumpkin"],
)
print(decision.selection)  # pumpkin
print(decision.reason)  # Pumpkins are significantly larger than...
```
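Under the hood, structured generation amounts to asking the model for JSON matching the schema and validating the reply. A self-contained stand-in sketch of that idea, using only the standard library (the `parse_structured` helper and the hard-coded reply are illustrative, not llmcall internals):

```python
import json
from dataclasses import dataclass

@dataclass
class ResponseSchema:
    story: str
    tags: list

def parse_structured(raw: str) -> ResponseSchema:
    # Validate the model's JSON reply against the schema fields
    # (simplified stand-in for Pydantic validation).
    data = json.loads(raw)
    return ResponseSchema(story=data["story"], tags=data["tags"])

# A canned "model reply" standing in for a real LLM response
reply = '{"story": "Once upon a time...", "tags": ["history"]}'
result = parse_structured(reply)
print(result.story)  # Once upon a time...
```

llmcall does this validation for you via Pydantic, which is why `output_schema` takes a `BaseModel` subclass.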
### Async generation

```python
import asyncio
from llmcall import agenerate, agenerate_decision, aextract

# Async generate
response = await agenerate("Write a story about a fictional holiday to the sun.")

# Async streaming
async for chunk in await agenerate("Tell me a joke.", stream=True):
    print(chunk, end="", flush=True)

# Async decision
decision = await agenerate_decision("Which is bigger?", options=["apple", "berry", "pumpkin"])

# Async extract
result = await aextract(text=my_text, output_schema=MySchema)

# Run concurrently
story, decision = await asyncio.gather(
    agenerate("Write a story."),
    agenerate_decision("Which language?", options=["Python", "Go", "Rust"]),
)
```
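The async streaming pattern above iterates an async generator. A runnable stand-in that mimics the shape without network calls (`stream_tokens` is a hypothetical stand-in, not part of llmcall):

```python
import asyncio

async def stream_tokens():
    # Stand-in async generator mimicking agenerate(..., stream=True):
    # yields one token at a time as it "arrives".
    for token in ["Why ", "did ", "the ", "chicken ", "cross ", "the ", "road?"]:
        await asyncio.sleep(0)  # yield control, as real I/O would
        yield token

async def main() -> str:
    chunks = []
    async for chunk in stream_tokens():
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(main()))  # Why did the chicken cross the road?
```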
### Extraction

```python
from llmcall import extract, extract_pdf, extract_image
from pydantic import BaseModel

class EmailSchema(BaseModel):
    email_subject: str
    email_body: str
    email_topic: str
    email_sentiment: str

# i. Extract from plain text
text = """To whom it may concern, Request for Admission at Harvard University ..."""
result: EmailSchema = extract(text=text, output_schema=EmailSchema)

# ii. Extract from a PDF — URL, local path, or raw bytes all work
class InvoiceSchema(BaseModel):
    vendor: str
    total: float
    line_items: list[str]

result: InvoiceSchema = extract_pdf(
    source="https://example.com/invoice.pdf",
    output_schema=InvoiceSchema,
)

# local file
result: InvoiceSchema = extract_pdf(source="/path/to/invoice.pdf", output_schema=InvoiceSchema)

# raw bytes
with open("invoice.pdf", "rb") as f:
    result: InvoiceSchema = extract_pdf(source=f.read(), output_schema=InvoiceSchema)

# iii. Extract from an image — URL, local path, or raw bytes all work
class ReceiptSchema(BaseModel):
    store: str
    total: float
    items: list[str]

result: ReceiptSchema = extract_image(
    source="https://example.com/receipt.jpg",
    output_schema=ReceiptSchema,
)

# local PNG (MIME type auto-detected from extension)
result: ReceiptSchema = extract_image(source="/path/to/receipt.png", output_schema=ReceiptSchema)

# raw bytes with explicit MIME type
with open("receipt.webp", "rb") as f:
    result: ReceiptSchema = extract_image(source=f.read(), output_schema=ReceiptSchema, media_type="image/webp")
```
**Model requirements:** PDF extraction requires a model with document-understanding support (e.g. anthropic/claude-sonnet-4-6, openai/gpt-4.1, google/gemini-3-flash-preview). Image extraction requires a vision-capable model. An informative `ValueError` is raised if the configured model does not support the required capability.
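In practice that means wrapping extraction calls in a `try`/`except ValueError` when the configured model may vary. A hypothetical stand-in sketch of the pattern (`require_vision` and its model set are illustrative, not llmcall internals):

```python
def require_vision(model: str) -> None:
    # Hypothetical stand-in for llmcall's capability check: raise an
    # informative ValueError when the model cannot accept image inputs.
    vision_models = {"openai/gpt-4.1", "google/gemini-3-flash-preview"}
    if model not in vision_models:
        raise ValueError(f"{model} does not support image inputs")

# Caller-side pattern: catch the capability error and handle it gracefully
try:
    require_vision("some/text-only-model")
except ValueError as e:
    print(f"Falling back: {e}")
```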
### Async multimodal extraction

```python
import asyncio
from llmcall import aextract_pdf, aextract_image

invoice, receipt = await asyncio.gather(
    aextract_pdf("https://example.com/invoice.pdf", InvoiceSchema),
    aextract_image("https://example.com/receipt.jpg", ReceiptSchema),
)
```
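The payoff of `asyncio.gather` here is that the documents are processed concurrently rather than one after the other. A self-contained sketch with stand-in coroutines (`fake_extract` is hypothetical; no real llmcall requests are made):

```python
import asyncio

async def fake_extract(name: str, delay: float) -> str:
    # Stand-in for an async extraction call that takes `delay` seconds
    await asyncio.sleep(delay)
    return name

async def main() -> list:
    # Both "extractions" run concurrently; total wall time is roughly
    # max(delays), not their sum. Results keep the argument order.
    return await asyncio.gather(
        fake_extract("invoice", 0.01),
        fake_extract("receipt", 0.02),
    )

results = asyncio.run(main())
print(results)  # ['invoice', 'receipt']
```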
## Roadmap

- Simple API for generating unstructured text
- Structured output generation using Pydantic
- Decision making
- Custom model selection (via LiteLLM; see documentation)
- Custom base URL for OpenAI-compatible endpoints (Ollama, Azure, LM Studio)
- Structured text extraction
- Structured extraction from PDFs (URL, local path, or bytes)
- Structured extraction from images (URL, local path, or bytes)
- Structured text extraction from websites
- Async support (`agenerate`, `aextract`, `aextract_pdf`, `aextract_image`, `agenerate_decision`)
- Streaming support (`generate(..., stream=True)`, `agenerate(..., stream=True)`)
## Documentation

Please refer to our comprehensive documentation to learn more about this tool.