A high-level Python SDK for Large Language Models with automatic tool execution, structured output support, multi-agent workflows, and evaluation data recording
# GlueLLM
TL;DR: A high-level Python SDK for LLMs that handles the annoying stuff (tools, retries, structured output, batching) so you can ship features instead of glue code.
GlueLLM is opinionated in the “I’ve been burned by this in production” way. If you like sensible defaults, clear APIs, and fewer bespoke wrappers, you’ll feel at home.
## Works great with Spiderweb
If you’re building RAG, you probably don’t just need LLM calls — you need crawling, extraction, chunking, validation, and storage too. That’s Spiderweb.
- GlueLLM: LLM calls + tool execution + structured output + embeddings + batching
- Spiderweb: documents/web → clean chunks → vector store → query
Tiny “together” example:

```python
import asyncio

from gluellm import GlueLLM
from spiderweb import Spiderweb

async def main():
    async with Spiderweb(llm_client=GlueLLM()) as web:
        await web.crawl("https://example.com", ingest=True, save_to="./crawled")
        results = await web.query("What is this site about?", top_k=5)
        print(results.chunks[0]["content"][:200])

asyncio.run(main())
```
## What is this?
GlueLLM is a high-level SDK that makes working with LLMs actually pleasant:
- You call `complete()` or `structured_complete()` and get results.
- Tools are plain Python functions.
- Retries and error classification are built-in.
- Batching and rate limiting are first-class.
- Providers are unified via `any-llm-sdk`.
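For intuition about the retry bullet: this is the loop you would otherwise hand-roll around every network call. A minimal stdlib sketch of the idea (not GlueLLM's actual implementation, which also classifies errors):

```python
import asyncio
import random

async def with_retries(call, max_attempts=4, base_delay=0.01):
    """Retry an async call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return await call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            # Back off exponentially; jitter avoids thundering herds
            await asyncio.sleep(base_delay * 2**attempt * (1 + random.random()))

attempts = 0

async def flaky():
    """Stand-in for a transiently failing LLM call."""
    global attempts
    attempts += 1
    if attempts < 3:
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(with_retries(flaky)))  # ok
```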
## Why you might like it
- Zero ceremony: minimal code to get real results
- Tool execution loop: automatic tool calling orchestration
- Structured output: Pydantic models, validated (including streaming: parse on final chunk)
- Streaming: `stream_complete()` with optional structured output on the last chunk
- Process status events: optional `on_status` callback for LLM/tool/stream progress
- Provider-agnostic: one API for OpenAI, Anthropic, XAI, and others
- Embeddings: same ergonomics + error handling
- Batch processing: concurrency control, retry strategies, key pools
- Observability hooks: logging + optional tracing
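The batch-processing bullet is the one that replaces the most glue code. For intuition, capping in-flight requests yourself looks roughly like this (a stdlib sketch of the pattern, not GlueLLM's batch API):

```python
import asyncio

async def run_batch(items, worker, max_concurrency=3):
    """Run worker(item) over items with a cap on concurrent tasks."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(item):
        async with sem:  # at most max_concurrency workers inside at once
            return await worker(item)

    return await asyncio.gather(*(bounded(i) for i in items))

async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for a network round trip
    return prompt.upper()

results = asyncio.run(run_batch(["a", "b", "c", "d"], fake_llm_call))
print(results)  # ['A', 'B', 'C', 'D']
```

GlueLLM layers retry strategies and key pools on top of this kind of scheduling, so you don't maintain it per project.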
## Why you might not
- If you want a thin client that exposes every raw provider knob, GlueLLM isn’t trying to be that.
- If you hate opinions, you’ll hate opinions (mine included).
## Installation

```shell
# Using uv (recommended)
uv pip install gluellm

# From source (dev)
uv pip install -e ".[dev]"
```
## Quick start

### Simple completion

```python
import asyncio

from gluellm.api import complete

async def main():
    result = await complete(
        user_message="What is the capital of France?",
        system_prompt="You are a helpful geography assistant.",
    )
    print(result.final_response)

asyncio.run(main())
```
### Tool calling (tools are just functions)

```python
import asyncio

from gluellm.api import complete

def get_weather(location: str, unit: str = "celsius") -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: 22°{unit[0].upper()}, sunny"

async def main():
    result = await complete(
        user_message="What's the weather in Tokyo and Paris?",
        system_prompt="Use get_weather for weather queries.",
        tools=[get_weather],
    )
    print(result.final_response)

asyncio.run(main())
```
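For a sense of how a plain function becomes a tool definition the model can call, the usual trick is to read the signature and docstring. A stdlib sketch of that idea (not GlueLLM's actual code):

```python
import inspect

def get_weather(location: str, unit: str = "celsius") -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: 22°{unit[0].upper()}, sunny"

def tool_schema(fn):
    """Derive a JSON-schema-style tool description from a function."""
    sig = inspect.signature(fn)
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    params = {
        name: {"type": type_map.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    # Parameters without defaults are required
    required = [
        name for name, p in sig.parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {"type": "object", "properties": params, "required": required},
    }

schema = tool_schema(get_weather)
print(schema["name"], schema["parameters"]["required"])  # get_weather ['location']
```

This is why the docstring and type hints matter: they are the only documentation the model sees.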
### Structured output

```python
import asyncio
from typing import Annotated

from pydantic import BaseModel, Field

from gluellm.api import structured_complete

class PersonInfo(BaseModel):
    name: Annotated[str, Field(description="Full name")]
    age: Annotated[int, Field(description="Age in years")]
    city: Annotated[str, Field(description="City of residence")]

async def main():
    person = await structured_complete(
        user_message="Extract info: John Smith, 35, lives in Seattle",
        response_format=PersonInfo,
    )
    print(person.model_dump())

asyncio.run(main())
```
## Streaming

Stream token-by-token with `stream_complete()`. When tools are enabled, the final response after tool runs is returned as one chunk (streaming resumes between tool rounds).

```python
import asyncio

from gluellm import stream_complete

async def main():
    async for chunk in stream_complete("Tell me a short joke."):
        print(chunk.content, end="", flush=True)
        if chunk.done:
            print(f"\nTool calls: {chunk.tool_calls_made}")

asyncio.run(main())
```
Streaming + structured output: pass `response_format` to get a parsed Pydantic instance on the final chunk (the stream is plain text; parsing happens when the stream ends).

```python
import asyncio

from pydantic import BaseModel

from gluellm import stream_complete

class Answer(BaseModel):
    word: str

async def main():
    async for chunk in stream_complete(
        'Reply with JSON: {"word": "hello"}',
        response_format=Answer,
        tools=[],
    ):
        if chunk.done and chunk.structured_output:
            print(chunk.structured_output.word)  # hello

asyncio.run(main())
```
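The parse-on-final-chunk pattern is easy to reason about: accumulate text as it streams, validate once at the end. A stdlib sketch of the idea (using `json` in place of Pydantic):

```python
import json

def parse_when_done(chunks):
    """Accumulate streamed text; parse only when the stream finishes."""
    buffer = []
    for text, done in chunks:
        buffer.append(text)
        if done:
            # Only now is the JSON guaranteed to be complete
            return json.loads("".join(buffer))
    return None  # stream ended without a final chunk

stream = [('{"wo', False), ('rd": ', False), ('"hello"}', True)]
print(parse_when_done(stream))  # {'word': 'hello'}
```

Parsing any earlier would mean validating truncated JSON, which is why the stream itself stays plain text.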
## Process status events

Use the optional `on_status` callback to observe what's happening (LLM call start/end, tool execution, stream start/chunk/end, completion). Handy for progress UIs or logging.

```python
import asyncio

from gluellm import complete, ProcessEvent

def on_status(e: ProcessEvent) -> None:
    print(f"{e.kind}: {e.tool_name or e.iteration or ''}")

async def main():
    result = await complete(
        "What is 2+2?",
        on_status=on_status,
    )
    # Emits llm_call_start, llm_call_end, complete (and tool_call_* if tools run)
    print(result.final_response)

asyncio.run(main())
```
`on_status` is supported on `complete()`, `stream_complete()`, and `structured_complete()` (and the `GlueLLM` client methods).
## Embeddings

```python
import asyncio

from gluellm import embed

async def main():
    result = await embed("Hello, world!")
    print(result.dimension, result.tokens_used)

asyncio.run(main())
```
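The usual next step with embedding vectors is similarity scoring. Cosine similarity in plain stdlib Python, independent of GlueLLM's result type:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

In practice you would feed this the vectors returned by `embed()` (or let a vector store such as Spiderweb's do the scoring for you).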
## Configuration (the boring part)

Providers are configured via environment variables:

```shell
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export XAI_API_KEY=xai-...
```

Models use `provider:model` strings:

- `openai:gpt-4o-mini`
- `anthropic:claude-3-5-sonnet-20241022`
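If you need to take such an identifier apart yourself, split on the first colon only, since some model names may themselves contain colons (e.g. versioned tags). A tiny illustration, not part of GlueLLM's API:

```python
def parse_model_string(s: str) -> tuple[str, str]:
    """Split a provider:model identifier on the first colon only."""
    provider, _, model = s.partition(":")
    return provider, model

print(parse_model_string("openai:gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```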
## Docs (when you want the details)

GlueLLM keeps deeper docs in `docs/` so the README stays readable. More runnable examples live in `examples/`.
## Contributing

PRs welcome. Please read CONTRIBUTING.md.
## License

MIT — see LICENSE.
## File details

Details for the file `gluellm-1.1.9.tar.gz`.

### File metadata

- Download URL: gluellm-1.1.9.tar.gz
- Upload date:
- Size: 182.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.13 (Ubuntu 24.04)
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `aba34a1cc53c7cb2a559af0f3952f69a3e60040f7444ce645a24d2085b183e8d` |
| MD5 | `19f91d595d15896d9a312bacd0023bd4` |
| BLAKE2b-256 | `168a583cb609a17dfca081da4618afa3aaccfec38034015cb9d2d8ddce35f04d` |
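To verify a downloaded archive against the SHA256 digest listed above, compute the file's hash locally and compare. A stdlib sketch (the demo hashes a throwaway file; point it at the real archive for an actual check):

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Hex SHA256 of a file, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# Demo with a throwaway file; for the real check, hash
# gluellm-1.1.9.tar.gz and compare against the table above.
Path("demo.bin").write_bytes(b"hello")
print(sha256_of("demo.bin"))
```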
## File details

Details for the file `gluellm-1.1.9-py3-none-any.whl`.

### File metadata

- Download URL: gluellm-1.1.9-py3-none-any.whl
- Upload date:
- Size: 166.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.13 (Ubuntu 24.04)
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `57fa80f65b751dea26add26f6f8f883f0be57e3e85602d729942988bd8416b6e` |
| MD5 | `d48e596d5795391b12ec20a24e143855` |
| BLAKE2b-256 | `c5f195659cbc44533b01cf88144d6ab45722f606694cdd860584fe3eb5c9c027` |