# Stark Agents
A powerful Python SDK for building AI agents with support for MCP servers, function tools, hierarchical sub-agents, and advanced execution control.
## Features
- 🤖 Multi-LLM Support: Built-in support for OpenAI, Anthropic, and Gemini via LiteLLM
- 🔧 MCP Server Integration: Connect to Model Context Protocol (MCP) servers for extended capabilities
- 🛠️ Function Tools: Define custom Python functions or classes as tools with automatic schema generation
- 🌳 Hierarchical Agents: Create complex agent hierarchies with sub-agents
- 📡 Streaming Support: Real-time streaming of agent responses and tool calls
- 🔄 Async/Sync APIs: Both synchronous and asynchronous execution modes
- 📊 Iteration Control: Configurable maximum iterations to prevent infinite loops
- 🔍 Web Search: Built-in web search capabilities for OpenAI and Anthropic models
- ✅ Tool Approvals: Optional approval system for tool and sub-agent execution
- 🎯 Input Filtering: Custom input filtering before LLM calls
- 🧠 Skills System: Load reusable capabilities from markdown files
- 💭 Reasoning Models: Support for thinking/reasoning models (e.g., OpenAI's o1 series)
- 📝 Tracing: Built-in trace ID support for debugging and monitoring
## Installation

```bash
pip install stark-agents
```
## Quick Start

### Basic Agent
```python
from stark import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5"
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello!"}])
print(result.result[-1]["content"])
```
### Agent with MCP Servers
```python
import os
from stark import Agent, Runner

mcp_servers = {
    "slack": {
        "command": "uvx",
        "args": ["mcp-slack"],
        "env": {
            "SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN", "")
        }
    }
}

agent = Agent(
    name="Slack-Agent",
    instructions="You can interact with Slack",
    model="claude-sonnet-4-5",
    mcp_servers=mcp_servers
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "Send a message to #general"}]
)
```
### Agent with Function Tools

#### Using the @stark_tool Decorator (Recommended)

The `@stark_tool` decorator automatically generates JSON schemas from your function signatures:
```python
from stark import Agent, Runner, stark_tool

@stark_tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the database for information"""
    # Your function implementation
    results = ["item1", "item2"]
    return f"Found {len(results)} results for '{query}'"

@stark_tool
def get_user_info(user_id: int, include_details: bool = False) -> str:
    """Retrieve user information from the database"""
    return f"User {user_id} details"

agent = Agent(
    name="Search-Agent",
    instructions="You can search the database and get user info",
    model="claude-sonnet-4-5",
    function_tools=[search_database, get_user_info]
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "Search for users named John"}]
)
```
#### Using Class-Based Tools

You can also organize related tools into classes:
```python
from stark import Agent, Runner, stark_tool

class DatabaseTools:
    def __init__(self, db_connection):
        self.db = db_connection

    @stark_tool
    def search(self, query: str, limit: int = 10) -> str:
        """Search the database"""
        return f"Search results for: {query}"

    @stark_tool
    def insert(self, table: str, data: dict) -> str:
        """Insert data into a table"""
        return f"Inserted into {table}"

# Pass the class instance
db_tools = DatabaseTools(db_connection="my_db")

agent = Agent(
    name="DB-Agent",
    instructions="You can interact with the database",
    model="claude-sonnet-4-5",
    function_tools=[db_tools]
)
```
### Built-in Code Tools

Stark includes a comprehensive `CodeTool` class for file operations:
```python
from stark import Agent, Runner
from stark.tools import CodeTool

code_tool = CodeTool(workspace_dir="./my_project")

agent = Agent(
    name="Code-Agent",
    instructions="You can read, write, and manage files",
    model="claude-sonnet-4-5",
    function_tools=[code_tool]
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "Create a new Python file called app.py"}]
)
```
## Skills System

Stark supports a unique "Skills" system where you can define reusable agent capabilities using markdown files.

### Directory Structure

Create a skills directory with subdirectories for each skill:
```
skills/
├── python_expert/
│   └── SKILL.md
└── data_analyst/
    └── SKILL.md
```
### The SKILL.md Format

Each skill is defined in a `SKILL.md` file with YAML frontmatter:
```markdown
---
name: python_expert
description: A skill that provides Python programming expertise
---
You are an expert Python programmer. You follow PEP 8 standards.
When writing code, always include type hints and docstrings.
```
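A skill loader conceptually splits the frontmatter from the body. Here is a minimal standalone sketch of that parsing, assuming only the `---` delimiters shown above (this is a hypothetical helper, not the SDK's actual parser):

```python
def parse_skill(text: str) -> tuple[dict, str]:
    """Split a SKILL.md document into (frontmatter dict, body). Sketch only."""
    # Frontmatter sits between the first two '---' markers
    _, front, body = text.split("---", 2)
    meta = {}
    for line in front.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

skill_md = """---
name: python_expert
description: A skill that provides Python programming expertise
---
You are an expert Python programmer."""

meta, body = parse_skill(skill_md)
print(meta["name"])  # python_expert
```

A real loader would use a YAML parser for the frontmatter; the sketch only illustrates the document shape.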
### Loading Skills
```python
from stark import Agent, Runner

agent = Agent(
    name="Dev-Agent",
    instructions="You are a senior developer.",
    model="claude-sonnet-4-5",
    skills=["./skills/python_expert"]  # Path to the skill folder
)
```
## Reasoning Models

Stark supports reasoning (or "thinking") models such as OpenAI's o1 series. You can control the reasoning effort:
```python
agent = Agent(
    name="Thinking-Agent",
    instructions="Solve this complex logic puzzle",
    model="o1",
    thinking_level="high"  # Options: "none", "low", "medium", "high"
)
```
## Hierarchical Sub-Agents
```python
from stark import Agent, Runner

# Define sub-agents
delivery_agent = Agent(
    name="Delivery-Agent",
    description="Handles pizza delivery",
    instructions="Confirm delivery details and provide tracking",
    model="claude-sonnet-4-5"
)

pizza_agent = Agent(
    name="Pizza-Agent",
    description="Handles pizza preparation",
    instructions="Prepare the pizza and call delivery agent",
    model="claude-sonnet-4-5",
    sub_agents=[delivery_agent]
)

# Main agent with sub-agents
master_agent = Agent(
    name="Master-Agent",
    instructions="Coordinate pizza orders using available agents",
    model="claude-sonnet-4-5",
    sub_agents=[pizza_agent]
)

result = Runner(master_agent).run(
    input=[{"role": "user", "content": "I want to order a pepperoni pizza"}]
)

# Access sub-agent responses
print(result.sub_agents_response.get("Pizza-Agent"))
print(result.sub_agents_response.get("Delivery-Agent"))
```
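Conceptually, delegation walks the hierarchy and collects one response per agent name. The toy sketch below illustrates just that bookkeeping with plain classes (`ToyAgent` is hypothetical; no LLM or SDK code is involved):

```python
class ToyAgent:
    """Minimal stand-in for hierarchical delegation bookkeeping (not the SDK's Agent)."""
    def __init__(self, name, handler, sub_agents=()):
        self.name = name
        self.handler = handler
        self.sub_agents = list(sub_agents)

    def run(self, task, responses):
        # Delegate to every sub-agent first, then record our own answer
        for sub in self.sub_agents:
            sub.run(task, responses)
        responses[self.name] = self.handler(task)
        return responses

delivery = ToyAgent("Delivery-Agent", lambda t: f"tracking for {t}")
pizza = ToyAgent("Pizza-Agent", lambda t: f"prepared {t}", [delivery])
master = ToyAgent("Master-Agent", lambda t: "order coordinated", [pizza])

responses = master.run("pepperoni pizza", {})
print(responses["Pizza-Agent"])  # prepared pepperoni pizza
```

In the real SDK the parent model decides at runtime whether and when to call each sub-agent; the sketch only shows how per-agent responses end up keyed by name.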
## Streaming Responses
```python
import asyncio
from stark import Agent, Runner, RunnerStream, Stream

async def main():
    agent = Agent(
        name="Streaming-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )

    async for event in Runner(agent).run_stream(
        input=[{"role": "user", "content": "Tell me a story"}]
    ):
        if event.type == Stream.CONTENT_CHUNK:
            print(RunnerStream.data_dump(event), end="", flush=True)
        elif event.type == Stream.TOOL_CALLS:
            print(f"\nTool calls: {RunnerStream.data_dump(event)}")
        elif event.type == Stream.TOOL_RESPONSE:
            print(f"Tool response: {RunnerStream.data_dump(event)}")
        elif event.type == Stream.ITER_START:
            print(f"\n--- Iteration {RunnerStream.data_dump(event)} ---")
        elif event.type == Stream.ITER_END:
            print("\n--- Iteration Complete ---")
        elif event.type == Stream.AGENT_RUN_END:
            print(f"\nAgent finished: {RunnerStream.data_dump(event)}")

asyncio.run(main())
```
## Web Search

Enable web search capabilities for your agents:
```python
from stark import Agent, Runner
from stark.llm_providers import OPENAI, ANTHROPIC, GEMINI

# OpenAI web search
openai_agent = Agent(
    name="Research-Agent",
    instructions="You can search the web for information",
    model="gpt-4o",
    llm_provider=OPENAI,
    enable_web_search=True
)

# Anthropic web search
anthropic_agent = Agent(
    name="Research-Agent",
    instructions="You can search the web for information",
    model="claude-sonnet-4-5",
    llm_provider=ANTHROPIC,
    enable_web_search=True
)

# Gemini web search
gemini_agent = Agent(
    name="Research-Agent",
    instructions="You can search the web for information",
    model="gemini-1.5-pro",
    llm_provider=GEMINI,
    enable_web_search=True
)

result = Runner(openai_agent).run(
    input=[{"role": "user", "content": "What's the latest news about AI?"}]
)
```
## Tool Approvals

Implement approval workflows for sensitive operations:
```python
from stark import Agent, Runner

def approve_file_deletion(tool_name: str, arguments: dict) -> bool:
    """Approve file deletion operations"""
    file_path = arguments.get("path", "")
    print(f"Approve deletion of {file_path}? (y/n)")
    return input().lower() == 'y'

async def approve_api_call(tool_name: str, arguments: dict) -> bool:
    """Async approval for API calls"""
    print(f"Approve API call to {tool_name}? (y/n)")
    return input().lower() == 'y'

agent = Agent(
    name="Controlled-Agent",
    instructions="You can perform file operations",
    model="claude-sonnet-4-5",
    function_tools=[file_tool],  # file_tool defined elsewhere (e.g. a CodeTool instance)
    approvals={
        "delete": approve_file_deletion,  # Matches tool names containing "delete"
        "api_.*": approve_api_call,       # Regex pattern for API tools
    }
)
```
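The approval keys are matched against tool names as regular expressions. A standalone sketch of how such a dispatch could work (hypothetical, not the SDK's internals):

```python
import re
from typing import Callable, Optional

def find_approver(tool_name: str, approvals: dict[str, Callable]) -> Optional[Callable]:
    """Return the first approval callback whose regex pattern matches the tool name."""
    for pattern, callback in approvals.items():
        if re.search(pattern, tool_name):
            return callback
    return None  # No pattern matched: no approval required

approvals = {
    "delete": lambda name, args: False,  # deny anything containing "delete"
    "api_.*": lambda name, args: True,   # allow tools matching api_*
}

approver = find_approver("api_fetch_user", approvals)
print(approver("api_fetch_user", {}))  # True
```

Because `re.search` is used in this sketch, a plain substring like `"delete"` matches anywhere in the name, while `"api_.*"` behaves like a prefix pattern.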
## Input Filtering

Filter or modify input before sending to the LLM:
```python
from stark import Agent, Runner

def filter_sensitive_data(messages: list) -> list:
    """Remove sensitive information from messages"""
    filtered = []
    for msg in messages:
        if msg.get("role") == "user":
            content = msg.get("content", "")
            # Remove credit card numbers, etc.
            content = content.replace("1234-5678-9012-3456", "[REDACTED]")
            filtered.append({"role": msg["role"], "content": content})
        else:
            filtered.append(msg)
    return filtered

agent = Agent(
    name="Secure-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5"
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "My card is 1234-5678-9012-3456"}],
    input_filter=filter_sensitive_data
)
```
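For broader redaction than a hard-coded string, the same filter shape can use a regex. A sketch (the card pattern is illustrative, not production-grade PII detection):

```python
import re

CARD_RE = re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b")

def redact_cards(messages: list) -> list:
    """Input filter that masks card-like numbers in user messages."""
    filtered = []
    for msg in messages:
        if msg.get("role") == "user":
            content = CARD_RE.sub("[REDACTED]", msg.get("content", ""))
            filtered.append({"role": msg["role"], "content": content})
        else:
            filtered.append(msg)
    return filtered

out = redact_cards([{"role": "user", "content": "My card is 1234-5678-9012-3456"}])
print(out[0]["content"])  # My card is [REDACTED]
```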
## API Reference

### Agent

The main agent class that defines the behavior and capabilities of your AI agent.
```python
Agent(
    name: str,                              # Agent name (required)
    instructions: str,                      # System instructions/prompt (required)
    model: str,                             # LLM model to use (required)
    description: str = "",                  # Agent description (required for sub-agents)
    mcp_servers: Dict[str, Any] = {},       # MCP server configurations
    function_tools: List[Callable] = [],    # Custom function tools or class instances
    enable_web_search: bool = False,        # Enable web search capabilities
    sub_agents: List[Agent] = [],           # Sub-agents for delegation
    approvals: Dict[str, Callable] = None,  # Tool approval functions (regex patterns)
    skills: List[str] = None,               # List of paths to skill directories
    skill_model: str = None,                # Model for skill execution (defaults to main model)
    parallel_tool_calls: bool = None,       # Enable parallel tool execution
    thinking_level: str = None,             # Reasoning effort: "none", "low", "medium", "high"
    llm_provider: str = OPENAI,             # LLM provider (OPENAI, ANTHROPIC, GEMINI)
    max_iterations: int = 10,               # Maximum iterations before stopping
    max_output_tokens: int = None,          # Maximum tokens in response
    trace_id: str = None                    # Trace ID for debugging
)
```
### Runner

Executes agents and manages their lifecycle.

#### Synchronous Execution
```python
runner = Runner(agent)
result = runner.run(
    input=[{"role": "user", "content": "Hello"}],
    input_filter=None  # Optional input filter function
)
```
#### Asynchronous Execution
```python
runner = Runner(agent)
result = await runner.run_async(
    input=[{"role": "user", "content": "Hello"}],
    input_filter=None  # Optional input filter function
)
```
#### Streaming Execution
```python
runner = Runner(agent)
async for event in runner.run_stream(
    input=[{"role": "user", "content": "Hello"}],
    input_filter=None  # Optional input filter function
):
    # Handle events
    pass
```
### RunContext

The response object returned by agent execution.
```python
class RunContext:
    messages: List[Dict[str, Any]]            # Complete conversation history
    output: str                               # Final output of the agent
    iterations: int                           # Number of iterations executed
    subagents_messages: List[Dict[str, Any]]  # Sub-agent messages (typically empty for a single or master agent)
    subagents_response: Dict[str, Any]        # Responses from all sub-agents (typically empty for a single agent)
    max_iterations_reached: bool              # Whether max iterations was hit
```
### Stream Events

When using streaming, you'll receive different event types:

Runner Events:

- `Stream.ITER_START`: Iteration started (data: iteration number)
- `Stream.TOOL_RESPONSE`: Tool response received (data: `ToolCallResponse`)
- `Stream.ITER_END`: Iteration completed (data: `IterationData`)
- `Stream.AGENT_RUN_END`: Agent execution finished (data: `RunContext`)
Provider Events:

- `Stream.CONTENT_CHUNK`: Content chunk received (data: string)
- `Stream.TOOL_CALLS`: Tool calls made (data: list of tool calls)
- `Stream.PROVIDER_STREAM_COMPLETED`: Provider streaming completed (data: `ProviderResponse`)
## Utility Classes

### Util

Helper utilities for common operations:
```python
from stark import Util

# Parse JSON from LLM responses (handles markdown code blocks)
data = Util.load_json('```json\n{"key": "value"}\n```')

# Create partial functions with pre-filled arguments
approval_func = Util.pass_function_with_args(my_approval, user_id=123)  # my_approval defined elsewhere
```
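The fence-stripping behavior of `Util.load_json` can be approximated with the standard library; this is a sketch of the idea, not the SDK's implementation:

```python
import json
import re

def load_json_loose(text: str):
    """Parse JSON, tolerating a surrounding ```json fence (sketch)."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text
    return json.loads(payload)

print(load_json_loose('```json\n{"key": "value"}\n```'))  # {'key': 'value'}
print(load_json_loose('{"key": "value"}'))                # {'key': 'value'}
```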
### RunnerStream

Helper methods for working with stream events:
```python
from stark import RunnerStream

# Create stream events
event = RunnerStream.iteration_start(1)
event = RunnerStream.tool_response(tool_response)
event = RunnerStream.iteration_end(iteration_data)
event = RunnerStream.agent_run_end(run_response)

# Dump event data to string
data_str = RunnerStream.data_dump(event)
```
## MCP Server Configuration

MCP servers extend agent capabilities by providing additional tools and resources.

### Stdio-based MCP Server
```python
mcp_servers = {
    "server-name": {
        "command": "uvx",                # Command to run
        "args": ["mcp-server-package"],  # Arguments
        "env": {                         # Environment variables
            "API_KEY": "your-key"
        }
    }
}
```
### Multiple MCP Servers
```python
import os

mcp_servers = {
    "jira": {
        "command": "uvx",
        "args": ["mcp-atlassian"],
        "env": {
            "JIRA_URL": os.environ.get("JIRA_URL"),
            "JIRA_USERNAME": os.environ.get("JIRA_EMAIL"),
            "JIRA_API_TOKEN": os.environ.get("JIRA_TOKEN")
        }
    },
    "slack": {
        "command": "uvx",
        "args": ["mcp-slack"],
        "env": {
            "SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN")
        }
    }
}
```
## Function Tools

### Using the @stark_tool Decorator

The `@stark_tool` decorator automatically generates JSON schemas from Python type hints:
```python
from stark import stark_tool
from typing import List

@stark_tool
def my_tool(
    query: str,                     # Required parameter
    limit: int = 10,                # Optional with default
    tags: List[str] = None,         # Optional list
    include_metadata: bool = False  # Optional boolean
) -> str:
    """
    Description of what the tool does.
    This docstring becomes the tool description.
    """
    # Your implementation
    return "result"
```
Supported Types:

- `str` → string
- `int` → integer
- `float` → number
- `bool` → boolean
- `dict` → object
- `List[T]` → array with items of type T
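This mapping can be reproduced from type hints with `typing` introspection. The sketch below shows how a schema generator might translate annotations (hypothetical, not the decorator's actual code):

```python
from typing import List, get_args, get_origin

# Python annotation -> JSON-schema type keyword
JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def annotation_to_schema(tp) -> dict:
    """Translate a Python annotation into a JSON-schema fragment (sketch)."""
    if get_origin(tp) is list:  # covers List[T] and list[T]
        (item_type,) = get_args(tp)
        return {"type": "array", "items": annotation_to_schema(item_type)}
    return {"type": JSON_TYPES[tp]}

print(annotation_to_schema(List[str]))  # {'type': 'array', 'items': {'type': 'string'}}
print(annotation_to_schema(int))        # {'type': 'integer'}
```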
### Class-Based Tools

Organize related tools into classes:
```python
from stark import Agent, stark_tool

class MyTools:
    def __init__(self, config):
        self.config = config

    @stark_tool
    def tool_one(self, param: str) -> str:
        """First tool description"""
        return f"Result: {param}"

    @stark_tool
    def tool_two(self, value: int) -> str:
        """Second tool description"""
        return f"Value: {value}"

# Use the class instance
tools = MyTools(config="my_config")

agent = Agent(
    name="Agent",
    instructions="Instructions",
    model="claude-sonnet-4-5",
    function_tools=[tools]
)
```
### Built-in CodeTool

The `CodeTool` class provides comprehensive file and shell operations:
```python
from stark.tools import CodeTool

code_tool = CodeTool(workspace_dir="./project")

# Available methods:
# - read(path, encoding='utf-8')
# - write(path, content, create_dirs=True)
# - update(path, search, replace, count=-1)
# - delete(path, recursive=False)
# - create_directory(path, parents=True)
# - list_directory(path=".", pattern="*", recursive=False)
# - move(source, destination)
# - copy(source, destination, recursive=True)
# - shell_exec(cmd, dir_path=None, timeout=30)
```
## Advanced Usage

### LLM Providers
```python
from stark import Agent, Runner
from stark.llm_providers import OPENAI, ANTHROPIC, GEMINI

# OpenAI
openai_agent = Agent(
    name="OpenAI-Agent",
    instructions="You are a helpful assistant",
    model="gpt-4o",
    llm_provider=OPENAI
)

# Anthropic
anthropic_agent = Agent(
    name="Anthropic-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    llm_provider=ANTHROPIC
)

# Gemini
gemini_agent = Agent(
    name="Gemini-Agent",
    instructions="You are a helpful assistant",
    model="gemini-1.5-pro",
    llm_provider=GEMINI
)
```
### Parallel Tool Calls

Enable parallel execution of multiple tools:
```python
agent = Agent(
    name="Parallel-Agent",
    instructions="You can call multiple tools in parallel",
    model="claude-sonnet-4-5",
    parallel_tool_calls=True,
    function_tools=[tool1, tool2, tool3]  # tool1, tool2, tool3: @stark_tool functions defined elsewhere
)
```
### Iteration Control
```python
agent = Agent(
    name="Controlled-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    max_iterations=5  # Limit to 5 iterations
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello"}])

if result.max_iterations_reached:
    print("Warning: Agent reached maximum iterations!")
```
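The guard behaves like a bounded agent loop: each model turn counts as one iteration, and the run stops either when there is no more work or when the cap is hit. A stripped-down sketch of that control flow (no real LLM, just a stub step function):

```python
def run_agent(step_fn, max_iterations: int = 10):
    """Bounded agent-loop sketch: step_fn() returns True while work remains."""
    iterations = 0
    max_reached = False
    while True:
        iterations += 1
        wants_more = step_fn()  # stand-in for one LLM turn + tool calls
        if not wants_more:
            break
        if iterations >= max_iterations:
            max_reached = True
            break
    return iterations, max_reached

# A stub model that keeps requesting tools forever: the cap stops it at 5
iters, hit_cap = run_agent(lambda: True, max_iterations=5)
print(iters, hit_cap)  # 5 True
```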
### Token Limits

Control the maximum output tokens:
```python
agent = Agent(
    name="Limited-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    max_output_tokens=1000  # Limit response to 1000 tokens
)
```
### Tracing and Debugging

Use trace IDs to track agent execution:
```python
import uuid

agent = Agent(
    name="Traced-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    trace_id=str(uuid.uuid4())
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello"}])
print(f"Trace ID: {agent.get_trace_id()}")
```
## Best Practices

- Clear Instructions: Provide clear, specific instructions to guide agent behavior
- Tool Descriptions: Write detailed descriptions for function tools and sub-agents
- Error Handling: Always wrap agent execution in try-except blocks
- Iteration Limits: Set an appropriate `max_iterations` to prevent infinite loops
- Resource Cleanup: MCP server connections are automatically cleaned up
- Streaming: Use streaming for long-running tasks to provide real-time feedback
- Sub-Agent Descriptions: Always provide descriptions for sub-agents so the parent agent knows when to use them
- Type Hints: Use type hints with `@stark_tool` for automatic schema generation
- Approvals: Implement approval workflows for sensitive operations
- Input Filtering: Use input filters to sanitize or modify data before LLM processing
## Error Handling
```python
from stark import Agent, Runner

try:
    agent = Agent(
        name="Error-Handling-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )
    result = Runner(agent).run(
        input=[{"role": "user", "content": "Hello"}]
    )
    if result.max_iterations_reached:
        print("Warning: Maximum iterations reached")
except Exception as e:
    print(f"Error: {e}")
    # Handle error appropriately
```
## Examples

Check out the `examples/` directory for more comprehensive examples:
- Basic agent usage
- MCP server integration
- Function tools and class-based tools
- Hierarchical sub-agents
- Streaming responses
- Web search integration
- Tool approvals and input filtering
## Requirements
- Python 3.10 or higher
- Dependencies are automatically installed with the package
## Contributing
Contributions are welcome! Please feel free to submit issues and pull requests.
## License
See LICENSE file for details.
## Support
For issues and questions, please open an issue on the GitHub repository.