Project description
llmstitch
A provider-agnostic LLM toolkit with tool calling, skills, and parallel execution.
Stitch together Anthropic, OpenAI, Gemini, Groq, and OpenRouter behind one Agent loop. Define tools with a decorator, compose behaviors as skills, and execute tool calls concurrently — all with a tiny, typed core.
Install
pip install llmstitch[anthropic] # just the Anthropic SDK
pip install llmstitch[openai] # just the OpenAI SDK
pip install llmstitch[gemini] # just the Gemini SDK
pip install llmstitch[groq] # just the Groq SDK
pip install llmstitch[openrouter] # OpenRouter (reuses the openai SDK)
pip install llmstitch[all] # all five
The bare pip install llmstitch has zero runtime dependencies — provider SDKs are opt-in extras.
30-second example
import asyncio

from llmstitch import Agent, tool
from llmstitch.providers.anthropic import AnthropicAdapter


@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"{city}: 72°F and sunny"


agent = Agent(
    provider=AnthropicAdapter(),
    model="claude-opus-4-7",
    system="You are a helpful weather assistant.",
)
agent.tools.register(get_weather)

messages = asyncio.run(agent.run("What's the weather in Tokyo?"))
print(messages[-1].content)
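Because the agent only talks to a provider adapter, switching providers is meant to be a one-line change. A minimal sketch, continuing the example above and assuming the OpenAI adapter lives at llmstitch.providers.openai (mirroring the Anthropic import; the model string is just an illustration):

from llmstitch.providers.openai import OpenAIAdapter  # assumed import path

agent = Agent(
    provider=OpenAIAdapter(),
    model="gpt-4o",  # any model your OpenAI account can access
    system="You are a helpful weather assistant.",
)
agent.tools.register(get_weather)  # same tool, unchanged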
Features
- Provider-agnostic — swap AnthropicAdapter for OpenAIAdapter, GeminiAdapter, GroqAdapter, or OpenRouterAdapter without touching your agent code.
- Typed @tool decorator — JSON Schema generated from type hints (Optional, Literal, defaults, async); see the sketch after this list.
- Parallel tool execution — when a model returns multiple tool calls in one turn, they run concurrently.
- Streaming — Agent.run_stream() yields provider-neutral events (TextDelta, ToolUseStart/Delta/Stop, MessageStop, terminal StreamDone) and handles tool execution between turns.
- Skills — bundle a system prompt with a set of tools; compose with .extend().
- PEP 561 typed — ships with py.typed, fully checked under mypy --strict.
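As a rough illustration of the typed decorator, the tool below uses Literal, Optional, a default, and an async signature. search_flights is a made-up example, not part of the library; the exact JSON Schema the decorator produces is not shown in this README.

from typing import Literal, Optional

from llmstitch import tool


@tool
async def search_flights(
    origin: str,
    destination: str,
    cabin: Literal["economy", "business"] = "economy",
    max_stops: Optional[int] = None,
) -> str:
    """Pretend to search for flights and return a short summary."""
    stops = "any number of" if max_stops is None else f"at most {max_stops}"
    return f"{origin} -> {destination}, {cabin} cabin, {stops} stops"

Per the feature list, the hints above (the Literal choices, the Optional parameter, the default, and the async def) are what drive the generated schema; the tool is then registered with agent.tools.register() exactly like get_weather.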
Streaming example
import asyncio

from llmstitch import Agent, TextDelta, StreamDone
from llmstitch.providers.anthropic import AnthropicAdapter


async def main() -> None:
    agent = Agent(provider=AnthropicAdapter(), model="claude-opus-4-7")
    async for event in agent.run_stream("Tell me a haiku about streams."):
        if isinstance(event, TextDelta):
            print(event.text, end="", flush=True)
        elif isinstance(event, StreamDone):
            print(f"\n[stop_reason={event.response.stop_reason}]")


asyncio.run(main())
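When the agent has tools registered, the same loop can also surface tool-use events between turns. A minimal sketch, assuming ToolUseStart and ToolUseStop are exported from llmstitch alongside TextDelta (their exact fields are not documented here, so only the event type is printed):

from llmstitch import Agent, TextDelta, ToolUseStart, ToolUseStop  # ToolUse* exports assumed


async def stream_with_tools(agent: Agent) -> None:
    # Same loop as above, but also reporting tool-use events between turns.
    async for event in agent.run_stream("What's the weather in Tokyo?"):
        if isinstance(event, TextDelta):
            print(event.text, end="", flush=True)
        elif isinstance(event, (ToolUseStart, ToolUseStop)):
            print(f"\n[{type(event).__name__}]")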
More examples
The examples/ directory has runnable scripts for:
- basic.py — minimal agent with one tool.
- skills_demo.py — composing two Skills with .extend() (sketched below).
- streaming.py — Agent.run_stream with rich event handling.
- providers_gallery.py — the same agent against every provider.
- parallel_tools.py — parallel tool execution with order-preserving results.
- async_and_timeout.py — async tools, per-call timeout, captured-exception semantics.
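For a rough idea of the skill composition that skills_demo.py covers: the Skill constructor arguments below are assumptions for illustration, not the documented signature, and only .extend() is taken from the feature list.

from llmstitch import Skill, tool


@tool
def lookup_order(order_id: str) -> str:
    """Pretend to look up an order."""
    return f"Order {order_id}: shipped"


@tool
def issue_refund(order_id: str) -> str:
    """Pretend to refund an order."""
    return f"Order {order_id}: refunded"


# Assumed shape: a skill bundles a system prompt with a set of tools.
support = Skill(system="You are a support agent.", tools=[lookup_order])
billing = Skill(system="You can also handle refunds.", tools=[issue_refund])

# .extend() composes skills, per the feature list; how an Agent consumes the
# combined skill is shown in examples/skills_demo.py rather than here.
combined = support.extend(billing)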
Status
Alpha. Retries and MCP support are on the roadmap. See CHANGELOG.md for release history and ARCHITECTURE.md for a walkthrough of how the library is put together.
License
MIT
File details
Details for the file llmstitch-0.1.2.tar.gz.
File metadata
- Download URL: llmstitch-0.1.2.tar.gz
- Upload date:
- Size: 38.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 93ef0ef94ea784ddc231ceb39d993800b041df0193f5b9778037d6e5930f7c55 |
| MD5 | 6c2eb79fba311ef2e51865e6086cfec1 |
| BLAKE2b-256 | 848ae3863207840fcabe29a8a69ad669204ed6d5d3249aaecbbbe1fea4c79dff |
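To check a downloaded sdist against the SHA256 digest above, a small standalone snippet (the local filename is assumed to be whatever your download produced):

import hashlib

EXPECTED = "93ef0ef94ea784ddc231ceb39d993800b041df0193f5b9778037d6e5930f7c55"

# Hash the downloaded archive and compare against the published digest.
with open("llmstitch-0.1.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("match" if digest == EXPECTED else f"mismatch: {digest}")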
Provenance
The following attestation bundles were made for llmstitch-0.1.2.tar.gz:

Publisher: release.yml on bengeos/llmstitch

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llmstitch-0.1.2.tar.gz
- Subject digest: 93ef0ef94ea784ddc231ceb39d993800b041df0193f5b9778037d6e5930f7c55
- Sigstore transparency entry: 1352003210
- Sigstore integration time:
- Permalink: bengeos/llmstitch@3049a3b0e98bd0f847cf3e688068f719254c834c
- Branch / Tag: refs/heads/main
- Owner: https://github.com/bengeos
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@3049a3b0e98bd0f847cf3e688068f719254c834c
- Trigger Event: pull_request
File details
Details for the file llmstitch-0.1.2-py3-none-any.whl.
File metadata
- Download URL: llmstitch-0.1.2-py3-none-any.whl
- Upload date:
- Size: 19.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f354c58c5699026bb205a698ff8fb2e7bab6a8841763088f401055d597715421 |
| MD5 | 354ff6b9346d7e9b17cf00941ac867cf |
| BLAKE2b-256 | 509d4047eaa5ebff286826ddf85588bfdc3790553cd3ae6adac592315c5da75f |
Provenance
The following attestation bundles were made for llmstitch-0.1.2-py3-none-any.whl:

Publisher: release.yml on bengeos/llmstitch

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llmstitch-0.1.2-py3-none-any.whl
- Subject digest: f354c58c5699026bb205a698ff8fb2e7bab6a8841763088f401055d597715421
- Sigstore transparency entry: 1352003312
- Sigstore integration time:
- Permalink: bengeos/llmstitch@3049a3b0e98bd0f847cf3e688068f719254c834c
- Branch / Tag: refs/heads/main
- Owner: https://github.com/bengeos
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@3049a3b0e98bd0f847cf3e688068f719254c834c
- Trigger Event: pull_request