AgentHub is the LLM API Hub for the Agent era, built for high-precision autonomous agents.
AgentHub Python Implementation
This document demonstrates how to use AutoLLMClient for unified LLM interactions in AgentHub.
Building
```shell
make install   # Install dependencies
make build     # Build Python package
make lint      # Run ruff linter
make test      # Run tests
```
AutoLLMClient Overview
AutoLLMClient is a stateful client that automatically routes requests to the appropriate model-specific implementation. It maintains conversation history and provides a unified interface for different LLM providers.
Initialization
Create a client by specifying the model name:
```python
from agenthub import AutoLLMClient

# Initialize with a model name
client = AutoLLMClient(model="gpt-5.5")

# Optionally specify an API key (if not using environment variables)
client = AutoLLMClient(model="gpt-5.5", api_key="your-openai-api-key")
```
The client automatically selects the appropriate client based on the model name.
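The routing logic itself is internal to AgentHub, but as an illustration only, prefix-based dispatch might look like the sketch below. The `resolve_provider` function and the specific prefixes are hypothetical, not part of the AgentHub API:

```python
# Hypothetical sketch of prefix-based routing; AutoLLMClient's real
# dispatch logic is internal and may differ.
def resolve_provider(model: str) -> str:
    """Map a model name to a provider family by prefix."""
    prefixes = {"gpt-": "openai", "claude-": "anthropic", "gemini-": "google"}
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"Unrecognized model family: {model}")

print(resolve_provider("gpt-5.5"))  # openai
```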
Core Methods
streaming_response
Stateless method that requires passing the full message history on each call:
```python
import asyncio

from agenthub import AutoLLMClient

async def main():
    client = AutoLLMClient(model="gpt-5.5")
    async for event in client.streaming_response(
        messages=[
            {
                "role": "user",
                "content_items": [{"type": "text", "text": "Hello!"}]
            }
        ],
        config={}
    ):
        print(event)

asyncio.run(main())
```
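Because `streaming_response` is stateless, the caller owns the conversation history and must append each turn before the next call. A minimal sketch of that bookkeeping with plain dicts (the `append_turn` helper is illustrative, not part of AgentHub; the provider call itself is elided):

```python
# Caller-side history bookkeeping for the stateless API.
def append_turn(history: list, role: str, text: str) -> list:
    """Append a text-only UniMessage to the running history."""
    history.append({"role": role, "content_items": [{"type": "text", "text": text}]})
    return history

history = []
append_turn(history, "user", "Hello!")
# ... pass `history` to client.streaming_response(messages=history, config={}) ...
append_turn(history, "assistant", "Hi! How can I help?")
print(len(history))  # 2
```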
streaming_response_stateful
Stateful method that maintains conversation history internally:
```python
import asyncio

from agenthub import AutoLLMClient

async def main():
    client = AutoLLMClient(model="gpt-5.5")

    # First message
    async for event in client.streaming_response_stateful(
        message={
            "role": "user",
            "content_items": [{"type": "text", "text": "My name is Alice"}]
        },
        config={}
    ):
        print(event)

    # Second message - the history is maintained automatically
    async for event in client.streaming_response_stateful(
        message={
            "role": "user",
            "content_items": [{"type": "text", "text": "What's my name?"}]
        },
        config={}
    ):
        print(event)

asyncio.run(main())
```
get_history
Retrieve the conversation history:
```python
# Get all messages in the conversation
history = client.get_history()
print(f"Total messages: {len(history)}")
for msg in history:
    print(f"Role: {msg['role']}")
    print(f"Content: {msg['content_items']}")
```
clear_history
Clear the conversation history:
```python
# Clear all conversation history
client.clear_history()

# Verify the history is empty
assert len(client.get_history()) == 0
```
set_history
Replace the conversation history with a copy of the provided list:
```python
# Save the current history
saved_history = client.get_history()

# ... do other things, then restore
client.set_history(saved_history)

# Verify the history was replaced
assert len(client.get_history()) == len(saved_history)
```
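Because `set_history` stores a copy, a snapshot taken with `get_history` can serve as a checkpoint for exploratory branches: mutate the live conversation, then roll back to the snapshot. A sketch of the pattern with a plain list standing in for the client's internal state:

```python
# Checkpoint/restore sketch; a plain list stands in for the client's
# history. client.set_history(checkpoint) would restore from a copy,
# so later turns never mutate the saved snapshot.
history = [{"role": "user", "content_items": [{"type": "text", "text": "My name is Alice"}]}]

checkpoint = list(history)  # snapshot before an exploratory branch

# ... exploratory turns mutate the live history ...
history.append({"role": "assistant", "content_items": [{"type": "text", "text": "Hi Alice!"}]})

# Roll back to the checkpoint (client.set_history(checkpoint) in real code)
history = list(checkpoint)
print(len(history))  # 1
```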
Tool Calling
When using tools, you must handle tool_call_id correctly:
```python
import asyncio

from agenthub import AutoLLMClient

def get_weather(location: str) -> str:
    """Mock function to get the weather."""
    return f"Temperature in {location}: 22°C"

async def main():
    # Define the tool
    weather_function = {
        "name": "get_weather",
        "description": "Gets the current weather for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city name"
                }
            },
            "required": ["location"]
        }
    }

    client = AutoLLMClient(model="gpt-5.5")
    config = {"tools": [weather_function]}

    # The user asks about the weather
    events = []
    async for event in client.streaming_response_stateful(
        message={
            "role": "user",
            "content_items": [{"type": "text", "text": "What's the weather in London?"}]
        },
        config=config
    ):
        events.append(event)

    # Extract the function call and its tool_call_id
    tool_call = None
    for event in events:
        for item in event["content_items"]:
            if item["type"] == "tool_call":
                tool_call = item
                break
        if tool_call:
            break

    # Execute the function and send the result back with the tool_call_id
    if tool_call:
        result = get_weather(**tool_call["argument"])
        # IMPORTANT: include tool_call_id in the tool response
        async for event in client.streaming_response_stateful(
            message={
                "role": "user",
                "content_items": [
                    {
                        "type": "tool_result",
                        "text": result,
                        "tool_call_id": tool_call["tool_call_id"]  # Required for tool responses
                    }
                ]
            },
            config=config
        ):
            print(event)

asyncio.run(main())
```
Message Format
UniMessage Structure
```
{
  "role": "user" | "assistant",
  "content_items": [
    {"type": "text", "text": "Hello"},
    {"type": "image_url", "image_url": "https://..."},
    {"type": "tool_call", "name": "get_weather", "argument": {"location": "London"}, "tool_call_id": "call_abc123"}
  ]
}
```
Tool Response with tool_call_id
When responding to a tool call, include the tool_call_id in the result content item:
```python
{
    "role": "user",
    "content_items": [
        {
            "type": "tool_result",
            "text": "London is 22°C today.",
            "tool_call_id": "call_abc123"  # From the tool_call event
        }
    ]
}
```
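A small helper can make it harder to forget the `tool_call_id` when constructing this message. The `tool_result_message` function below is illustrative, not part of AgentHub; it simply builds the dict shown above:

```python
def tool_result_message(text: str, tool_call_id: str) -> dict:
    """Build a tool-result UniMessage that echoes the tool_call_id."""
    return {
        "role": "user",
        "content_items": [
            {"type": "tool_result", "text": text, "tool_call_id": tool_call_id}
        ],
    }

msg = tool_result_message("London is 22°C today.", "call_abc123")
print(msg["content_items"][0]["tool_call_id"])  # call_abc123
```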
Configuration Options
```python
from agenthub import PromptCaching, ThinkingLevel

config = {
    "max_tokens": 500,
    "temperature": 1.0,
    "tools": [tool_definition],
    "thinking_summary": True,
    "thinking_level": ThinkingLevel.HIGH,
    "tool_choice": "auto",  # "auto", "required", "none", or ["tool_name"]
    "system_prompt": "You are a helpful assistant",
    "prompt_caching": PromptCaching.ENABLE,
    "trace_id": "agent1/conversation_001"  # Optional: save a conversation trace
}
```
Conversation Tracing
AgentHub provides a built-in Tracer to save and browse conversation history. When you specify a trace_id in the config, conversations are automatically saved to both JSON and TXT formats.
Basic Usage
```python
import asyncio

from agenthub import AutoLLMClient

async def main():
    client = AutoLLMClient(model="gpt-5.5")

    # Add trace_id to the config
    config = {"trace_id": "agent1/conversation_001"}

    async for event in client.streaming_response_stateful(
        message={"role": "user", "content_items": [{"type": "text", "text": "Hello"}]},
        config=config
    ):
        pass  # The conversation is saved automatically

asyncio.run(main())
```
The default cache directory is `cache`; you can change it by setting the `AGENTHUB_CACHE_DIR` environment variable.
This creates two files in the cache directory:
- cache/agent1/conversation_001.json - structured data with the full history and config
- cache/agent1/conversation_001.txt - a human-readable conversation format
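The JSON trace can be loaded back with the standard library. The key names below (`history`, `config`) are assumptions about the schema, as is the sample layout; inspect a real trace file under your cache directory to confirm:

```python
import json
import pathlib
import tempfile

# Hypothetical trace layout -- the real schema written by the Tracer may
# differ; check an actual cache/.../*.json file to confirm the keys.
trace = {
    "config": {"trace_id": "agent1/conversation_001"},
    "history": [
        {"role": "user", "content_items": [{"type": "text", "text": "Hello"}]}
    ],
}

with tempfile.TemporaryDirectory() as cache_dir:
    path = pathlib.Path(cache_dir) / "conversation_001.json"
    path.write_text(json.dumps(trace, indent=2))
    loaded = json.loads(path.read_text())

print(loaded["history"][0]["role"])  # user
```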
Browsing Traces with Web Interface
Start a web server to browse and view saved conversations:
```python
from agenthub.integration.tracer import Tracer

# Start the web server
Tracer("path/to/cache").start_web_server(host="127.0.0.1", port=25750)
```
Or use the CLI:
```shell
python -m agenthub.integration.tracer --cache_dir ./cache --host 127.0.0.1 --port 25750
```
Then visit http://127.0.0.1:25750 in your browser to browse saved conversations.
Test with Playground
Start a web server to test with the playground:
```python
from agenthub.integration.playground import start_playground_server

start_playground_server()
```
Or use the CLI:
```shell
python -m agenthub.integration.playground --host 127.0.0.1 --port 25751
```
Then visit http://127.0.0.1:25751 in your browser to test with the playground.
Download files
File details
Details for the file agenthub_python-0.3.1.tar.gz.
File metadata
- Download URL: agenthub_python-0.3.1.tar.gz
- Size: 39.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a78ea3a18e2b03fb05c30ce8f70ae4f8ac11fb7a4c8e1e8527d7a53aeba46448 |
| MD5 | b94b455e0debbad480d41c21cd90892a |
| BLAKE2b-256 | 7540f92d706c87612adde05f47e5ba5791f09b90c983482207f2118909a670e8 |
Provenance
The following attestation bundles were made for agenthub_python-0.3.1.tar.gz:
Publisher: publish.yml on Prism-Shadow/AgentHub
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agenthub_python-0.3.1.tar.gz
- Subject digest: a78ea3a18e2b03fb05c30ce8f70ae4f8ac11fb7a4c8e1e8527d7a53aeba46448
- Sigstore transparency entry: 1397652738
- Permalink: Prism-Shadow/AgentHub@676bbf14519bed3ce81c22cddf1248e74f14be54
- Branch / Tag: refs/tags/v0.3.1
- Owner: https://github.com/Prism-Shadow
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@676bbf14519bed3ce81c22cddf1248e74f14be54
- Trigger Event: release
File details
Details for the file agenthub_python-0.3.1-py3-none-any.whl.
File metadata
- Download URL: agenthub_python-0.3.1-py3-none-any.whl
- Size: 57.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ec7150c035482f629879df196b83327e636fbffad05209a9bc1f53e07c95fff3 |
| MD5 | 30f7946f2f11a11b737714ff12666544 |
| BLAKE2b-256 | e84b2782532afdb18864a889966e1b908591cedd650da3369973bc7ac84a5868 |
Provenance
The following attestation bundles were made for agenthub_python-0.3.1-py3-none-any.whl:
Publisher: publish.yml on Prism-Shadow/AgentHub
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agenthub_python-0.3.1-py3-none-any.whl
- Subject digest: ec7150c035482f629879df196b83327e636fbffad05209a9bc1f53e07c95fff3
- Sigstore transparency entry: 1397652753
- Permalink: Prism-Shadow/AgentHub@676bbf14519bed3ce81c22cddf1248e74f14be54
- Branch / Tag: refs/tags/v0.3.1
- Owner: https://github.com/Prism-Shadow
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@676bbf14519bed3ce81c22cddf1248e74f14be54
- Trigger Event: release