Stark
A powerful Python SDK for building AI agents with support for MCP servers, function tools, and hierarchical sub-agents.
Features
- 🤖 Multi-LLM Support: Built-in support for multiple LLM providers via LiteLLM
- 🔧 MCP Server Integration: Connect to Model Context Protocol (MCP) servers for extended capabilities
- 🛠️ Function Tools: Define custom Python functions as tools for your agents
- 🌳 Hierarchical Agents: Create complex agent hierarchies with sub-agents
- 📡 Streaming Support: Real-time streaming of agent responses and tool calls
- 🔄 Async/Sync APIs: Both synchronous and asynchronous execution modes
- 📊 Iteration Control: Configurable maximum iterations to prevent infinite loops
Installation
pip install stark-agents
Quick Start
Basic Agent
from stark import Agent, Runner
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5"
)
result = Runner(agent).run(input=[{"role": "user", "content": "Hello!"}])
print(result)
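The returned object is the RunResponse described in the API Reference below; assuming that shape, you can inspect the conversation history and iteration count directly:

# Assumes the RunResponse fields documented in the API Reference below.
print(result.result[-1])   # final message in the conversation history
print(result.iterations)   # number of iterations the agent executed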
Agent with MCP Servers
import os
from stark import Agent, Runner
mcp_servers = {
    "slack": {
        "command": "uvx",
        "args": ["mcp-slack"],
        "env": {
            "SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN", "")
        }
    }
}
agent = Agent(
    name="Slack-Agent",
    instructions="You can interact with Slack",
    model="claude-sonnet-4-5",
    mcp_servers=mcp_servers
)
result = Runner(agent).run(input=[{"role": "user", "content": "Send a message to #general"}])
Agent with Function Tools
import json
from stark import Agent, Runner
def search_database(input: str):
    """
    {
        "description": "Search the database for information",
        "parameters": {
            "properties": {
                "query": {
                    "description": "Search query",
                    "type": "string"
                }
            },
            "required": ["query"],
            "type": "object"
        }
    }
    """
    # Your function implementation
    return json.dumps({"results": ["item1", "item2"]})
agent = Agent(
    name="Search-Agent",
    instructions="You can search the database",
    model="claude-sonnet-4-5",
    function_tools=[search_database]
)
result = Runner(agent).run(input=[{"role": "user", "content": "Search for users"}])
Hierarchical Sub-Agents
from stark import Agent, Runner
# Define sub-agents
delivery_agent = Agent(
    name="Delivery-Agent",
    description="Handles pizza delivery",
    instructions="Confirm delivery details and provide tracking",
    model="claude-sonnet-4-5"
)

pizza_agent = Agent(
    name="Pizza-Agent",
    description="Handles pizza preparation",
    instructions="Prepare the pizza and call delivery agent",
    model="claude-sonnet-4-5",
    sub_agents=[delivery_agent]
)

# Main agent with sub-agents
master_agent = Agent(
    name="Master-Agent",
    instructions="Coordinate pizza orders using available agents",
    model="claude-sonnet-4-5",
    sub_agents=[pizza_agent]
)

result = Runner(master_agent).run(
    input=[{"role": "user", "content": "I want to order a pepperoni pizza"}]
)
# Access sub-agent responses
print(result.sub_agents_response.get("Pizza-Agent"))
print(result.sub_agents_response.get("Delivery-Agent"))
Streaming Responses
import asyncio
from stark import Agent, Runner, RunnerStream
async def main():
    agent = Agent(
        name="Streaming-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )

    async for event in Runner(agent).run_stream(
        input=[{"role": "user", "content": "Tell me a story"}]
    ):
        if event.type == RunnerStream.CONTENT_CHUNK:
            print(RunnerStream.data_dump(event), end="", flush=True)
        elif event.type == RunnerStream.TOOL_CALLS:
            print(f"\nTool calls: {RunnerStream.data_dump(event)}")
        elif event.type == RunnerStream.TOOL_RESPONSE:
            print(f"Tool response: {RunnerStream.data_dump(event)}")
        elif event.type == RunnerStream.AGENT_RUN_END:
            print(f"\nAgent finished: {RunnerStream.data_dump(event)}")

asyncio.run(main())
API Reference
Agent
The main agent class that defines the behavior and capabilities of your AI agent.
Agent(
    name: str,                              # Agent name
    instructions: str,                      # System instructions/prompt
    model: str,                             # LLM model to use
    description: str = "",                  # Agent description (required for sub-agents)
    mcp_servers: Dict[str, Any] = {},       # MCP server configurations
    function_tools: List[Callable] = [],    # Custom function tools
    sub_agents: List[Agent] = [],           # Sub-agents
    parallel_tool_calls: bool = None,       # Enable parallel tool execution
    llm_provider: str = LITELLM,            # LLM provider
    max_iterations: int = 10,               # Maximum iterations
    custom_llm_provider: str = "openai",    # Custom LLM provider
    trace_id: str = None                    # Trace ID for debugging
)
Runner
Executes agents and manages their lifecycle.
Synchronous Execution
runner = Runner(agent)
result = runner.run(input=[{"role": "user", "content": "Hello"}])
Asynchronous Execution
runner = Runner(agent)
result = await runner.run_async(input=[{"role": "user", "content": "Hello"}])
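The await call must run inside a coroutine; here is a minimal sketch that wraps it with asyncio, assuming an agent defined as in the Quick Start:

import asyncio
from stark import Agent, Runner

async def main():
    # Illustrative agent; substitute your own configuration.
    agent = Agent(
        name="Async-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )
    result = await Runner(agent).run_async(input=[{"role": "user", "content": "Hello"}])
    print(result)

asyncio.run(main())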
Streaming Execution
runner = Runner(agent)
async for event in runner.run_stream(input=[{"role": "user", "content": "Hello"}]):
    # Handle events
    pass
RunResponse
The response object returned by agent execution.
class RunResponse:
    result: List[Dict[str, Any]]            # Complete conversation history
    iterations: int                         # Number of iterations executed
    sub_agent_result: List[Dict[str, Any]]  # Sub-agent specific results
    sub_agents_response: Dict[str, Any]     # Responses from sub-agents
    max_iterations_reached: bool            # Whether max iterations was hit
Stream Events
When using streaming, you'll receive different event types:
- RunnerStream.ITER_START: Iteration started
- RunnerStream.CONTENT_CHUNK: Content chunk received
- RunnerStream.TOOL_CALLS: Tool calls made
- RunnerStream.TOOL_RESPONSE: Tool response received
- RunnerStream.ITER_END: Iteration completed
- RunnerStream.AGENT_RUN_END: Agent execution finished
- RunnerStream.MODEL_STREAM_COMPLETED: Model streaming completed
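A minimal handler covering the lifecycle events not shown in the Streaming Responses example; it assumes every event exposes the same type attribute and RunnerStream.data_dump helper used there:

from stark import Runner, RunnerStream

# Sketch only: event attributes follow the pattern from the Streaming Responses example.
async def handle_events(agent, messages):
    async for event in Runner(agent).run_stream(input=messages):
        if event.type == RunnerStream.ITER_START:
            print("\n--- iteration started ---")
        elif event.type == RunnerStream.CONTENT_CHUNK:
            print(RunnerStream.data_dump(event), end="", flush=True)
        elif event.type == RunnerStream.ITER_END:
            print("\n--- iteration completed ---")
        elif event.type == RunnerStream.MODEL_STREAM_COMPLETED:
            print("\n--- model streaming completed ---")
        elif event.type == RunnerStream.AGENT_RUN_END:
            print("\nAgent finished:", RunnerStream.data_dump(event))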
MCP Server Configuration
MCP servers extend agent capabilities by providing additional tools and resources.
Stdio-based MCP Server
mcp_servers = {
    "server-name": {
        "command": "uvx",                # Command to run
        "args": ["mcp-server-package"],  # Arguments
        "env": {                         # Environment variables
            "API_KEY": "your-key"
        }
    }
}
Multiple MCP Servers
import os

mcp_servers = {
    "jira": {
        "command": "uvx",
        "args": ["mcp-atlassian"],
        "env": {
            "JIRA_URL": os.environ.get("JIRA_URL"),
            "JIRA_USERNAME": os.environ.get("JIRA_EMAIL"),
            "JIRA_API_TOKEN": os.environ.get("JIRA_TOKEN")
        }
    },
    "slack": {
        "command": "uvx",
        "args": ["mcp-slack"],
        "env": {
            "SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN")
        }
    }
}
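All configured servers are passed to a single agent through the mcp_servers parameter, exactly as in the Quick Start example. The agent below is illustrative (its name and instructions are placeholders):

# Illustrative: one agent using both MCP servers defined above.
agent = Agent(
    name="Ops-Agent",
    instructions="You can manage Jira issues and post messages to Slack",
    model="claude-sonnet-4-5",
    mcp_servers=mcp_servers
)
result = Runner(agent).run(input=[{"role": "user", "content": "Summarize open Jira issues in #general"}])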
Function Tools
Function tools are Python functions that agents can call. They must include a JSON schema in their docstring.
Function Tool Format
import json

def my_tool(input: str):
    """
    {
        "description": "Description of what the tool does",
        "parameters": {
            "properties": {
                "param_name": {
                    "description": "Parameter description",
                    "type": "string"
                }
            },
            "required": ["param_name"],
            "type": "object"
        }
    }
    """
    # Parse input if needed
    if isinstance(input, str):
        input = json.loads(input)

    # Your implementation
    result = {"status": "success"}

    # Return as JSON string
    return json.dumps(result)
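For tools that take more than one argument, the same pattern applies: list each property in the schema and mark the mandatory ones in required. The tool below is a hypothetical example following that format:

import json

def create_ticket(input: str):
    """
    {
        "description": "Create a support ticket with a title and priority",
        "parameters": {
            "properties": {
                "title": {
                    "description": "Short summary of the issue",
                    "type": "string"
                },
                "priority": {
                    "description": "Ticket priority (low, medium, high)",
                    "type": "string"
                }
            },
            "required": ["title"],
            "type": "object"
        }
    }
    """
    args = json.loads(input) if isinstance(input, str) else input
    # Hypothetical implementation: echo the parsed arguments back as the result.
    return json.dumps({"status": "created", "title": args["title"], "priority": args.get("priority", "medium")})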
Advanced Usage
Custom LLM Provider
from stark.llms import LITELLM
agent = Agent(
    name="Custom-Agent",
    instructions="You are a helpful assistant",
    model="gpt-4",
    llm_provider=LITELLM,
    custom_llm_provider="openai"
)
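Provider credentials are not passed to the Agent directly. When routing through LiteLLM, they are typically read from the provider's standard environment variables (for example, OPENAI_API_KEY for OpenAI models). A minimal sketch, assuming that convention:

import os

# Assumption: LiteLLM picks up provider credentials from standard environment variables.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # replace with your real key before running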
Parallel Tool Calls
agent = Agent(
    name="Parallel-Agent",
    instructions="You can call multiple tools in parallel",
    model="claude-sonnet-4-5",
    parallel_tool_calls=True,
    function_tools=[tool1, tool2, tool3]
)
Iteration Control
agent = Agent(
    name="Controlled-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    max_iterations=5  # Limit to 5 iterations
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello"}])

if result.max_iterations_reached:
    print("Warning: Agent reached maximum iterations!")
Best Practices
- Clear Instructions: Provide clear, specific instructions to guide agent behavior
- Tool Descriptions: Write detailed descriptions for function tools
- Error Handling: Always wrap agent execution in try-except blocks
- Iteration Limits: Set an appropriate max_iterations to prevent infinite loops
- Resource Cleanup: MCP server connections are automatically cleaned up
- Streaming: Use streaming for long-running tasks to provide real-time feedback
- Sub-Agent Descriptions: Always provide descriptions for sub-agents so the parent agent knows when to use them
Error Handling
from stark import Agent, Runner
try:
    agent = Agent(
        name="Error-Handling-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )
    result = Runner(agent).run(
        input=[{"role": "user", "content": "Hello"}]
    )
except Exception as e:
    print(f"Error: {e}")
    # Handle error appropriately
Requirements
Python 3.10 or higher.
Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
Support
For issues and questions, please open an issue on the GitHub repository.