An SDK to build AI agents

Project description

Stark Agents

A powerful Python SDK for building AI agents with support for MCP servers, function tools, hierarchical sub-agents, and advanced execution control.

Features

  • 🤖 Multi-LLM Support: Built-in support for OpenAI and Anthropic via LiteLLM
  • 🔧 MCP Server Integration: Connect to Model Context Protocol (MCP) servers for extended capabilities
  • 🛠️ Function Tools: Define custom Python functions or classes as tools with automatic schema generation
  • 🌳 Hierarchical Agents: Create complex agent hierarchies with sub-agents
  • 📡 Streaming Support: Real-time streaming of agent responses and tool calls
  • 🔄 Async/Sync APIs: Both synchronous and asynchronous execution modes
  • 📊 Iteration Control: Configurable maximum iterations to prevent infinite loops
  • 🔍 Web Search: Built-in web search capabilities for OpenAI and Anthropic models
  • ✅ Tool Approvals: Optional approval system for tool and sub-agent execution
  • 🎯 Input Filtering: Custom input filtering before LLM calls
  • 📝 Tracing: Built-in trace ID support for debugging and monitoring

Installation

pip install stark-agents

Quick Start

Basic Agent

from stark import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5"
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello!"}])
print(result.result[-1]["content"])

Agent with MCP Servers

import os
from stark import Agent, Runner

mcp_servers = {
    "slack": {
        "command": "uvx",
        "args": ["mcp-slack"],
        "env": {
            "SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN", "")
        }
    }
}

agent = Agent(
    name="Slack-Agent",
    instructions="You can interact with Slack",
    model="claude-sonnet-4-5",
    mcp_servers=mcp_servers
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "Send a message to #general"}]
)

Agent with Function Tools

Using the @stark_tool Decorator (Recommended)

The @stark_tool decorator automatically generates JSON schemas from your function signatures:

from stark import Agent, Runner, stark_tool

@stark_tool
def search_database(query: str, limit: int = 10) -> str:
    """Search the database for information"""
    # Your function implementation
    results = ["item1", "item2"]
    return f"Found {len(results)} results for '{query}'"

@stark_tool
def get_user_info(user_id: int, include_details: bool = False) -> str:
    """Retrieve user information from the database"""
    return f"User {user_id} details"

agent = Agent(
    name="Search-Agent",
    instructions="You can search the database and get user info",
    model="claude-sonnet-4-5",
    function_tools=[search_database, get_user_info]
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "Search for users named John"}]
)

Using Class-Based Tools

You can also organize related tools into classes:

from stark import Agent, Runner, stark_tool

class DatabaseTools:
    def __init__(self, db_connection):
        self.db = db_connection
    
    @stark_tool
    def search(self, query: str, limit: int = 10) -> str:
        """Search the database"""
        return f"Search results for: {query}"
    
    @stark_tool
    def insert(self, table: str, data: dict) -> str:
        """Insert data into a table"""
        return f"Inserted into {table}"

# Pass the class instance
db_tools = DatabaseTools(db_connection="my_db")

agent = Agent(
    name="DB-Agent",
    instructions="You can interact with the database",
    model="claude-sonnet-4-5",
    function_tools=[db_tools]
)

Built-in Code Tools

Stark includes a comprehensive CodeTool class for file and shell operations:

from stark import Agent, Runner
from stark.tools import CodeTool

code_tool = CodeTool(workspace_dir="./my_project")

agent = Agent(
    name="Code-Agent",
    instructions="You can read, write, and manage files",
    model="claude-sonnet-4-5",
    function_tools=[code_tool]
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "Create a new Python file called app.py"}]
)

Hierarchical Sub-Agents

from stark import Agent, Runner

# Define sub-agents
delivery_agent = Agent(
    name="Delivery-Agent",
    description="Handles pizza delivery",
    instructions="Confirm delivery details and provide tracking",
    model="claude-sonnet-4-5"
)

pizza_agent = Agent(
    name="Pizza-Agent",
    description="Handles pizza preparation",
    instructions="Prepare the pizza and call delivery agent",
    model="claude-sonnet-4-5",
    sub_agents=[delivery_agent]
)

# Main agent with sub-agents
master_agent = Agent(
    name="Master-Agent",
    instructions="Coordinate pizza orders using available agents",
    model="claude-sonnet-4-5",
    sub_agents=[pizza_agent]
)

result = Runner(master_agent).run(
    input=[{"role": "user", "content": "I want to order a pepperoni pizza"}]
)

# Access sub-agent responses
print(result.sub_agents_response.get("Pizza-Agent"))
print(result.sub_agents_response.get("Delivery-Agent"))

Streaming Responses

import asyncio
from stark import Agent, Runner, RunnerStream, Stream

async def main():
    agent = Agent(
        name="Streaming-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )

    async for event in Runner(agent).run_stream(
        input=[{"role": "user", "content": "Tell me a story"}]
    ):
        if event.type == Stream.CONTENT_CHUNK:
            print(RunnerStream.data_dump(event), end="", flush=True)
        
        elif event.type == Stream.TOOL_CALLS:
            print(f"\nTool calls: {RunnerStream.data_dump(event)}")
        
        elif event.type == Stream.TOOL_RESPONSE:
            print(f"Tool response: {RunnerStream.data_dump(event)}")
        
        elif event.type == Stream.ITER_START:
            print(f"\n--- Iteration {RunnerStream.data_dump(event)} ---")
        
        elif event.type == Stream.ITER_END:
            print(f"\n--- Iteration Complete ---")
        
        elif event.type == Stream.AGENT_RUN_END:
            print(f"\nAgent finished: {RunnerStream.data_dump(event)}")

asyncio.run(main())

Web Search

Enable web search capabilities for your agents:

from stark import Agent, Runner
from stark.llm_providers import OPENAI, ANTHROPIC

# OpenAI web search
openai_agent = Agent(
    name="Research-Agent",
    instructions="You can search the web for information",
    model="gpt-4o",
    llm_provider=OPENAI,
    enable_web_search=True
)

# Anthropic web search
anthropic_agent = Agent(
    name="Research-Agent",
    instructions="You can search the web for information",
    model="claude-sonnet-4-5",
    llm_provider=ANTHROPIC,
    enable_web_search=True
)

result = Runner(openai_agent).run(
    input=[{"role": "user", "content": "What's the latest news about AI?"}]
)

Tool Approvals

Implement approval workflows for sensitive operations:

from stark import Agent, Runner
from stark.tools import CodeTool

def approve_file_deletion(tool_name: str, arguments: dict) -> bool:
    """Approve file deletion operations"""
    file_path = arguments.get("path", "")
    print(f"Approve deletion of {file_path}? (y/n)")
    return input().lower() == 'y'

async def approve_api_call(tool_name: str, arguments: dict) -> bool:
    """Async approval for API calls"""
    print(f"Approve API call to {tool_name}? (y/n)")
    return input().lower() == 'y'

# Use the built-in CodeTool (see Built-in Code Tools above) as the file tool
file_tool = CodeTool(workspace_dir="./workspace")

agent = Agent(
    name="Controlled-Agent",
    instructions="You can perform file operations",
    model="claude-sonnet-4-5",
    function_tools=[file_tool],
    approvals={
        "delete": approve_file_deletion,  # Matches tool names containing "delete"
        "api_.*": approve_api_call,       # Regex pattern for API tools
    }
)

Input Filtering

Filter or modify input before sending to the LLM:

from stark import Agent, Runner

def filter_sensitive_data(messages: list) -> list:
    """Remove sensitive information from messages"""
    filtered = []
    for msg in messages:
        if msg.get("role") == "user":
            content = msg.get("content", "")
            # Remove credit card numbers, etc.
            content = content.replace("1234-5678-9012-3456", "[REDACTED]")
            filtered.append({"role": msg["role"], "content": content})
        else:
            filtered.append(msg)
    return filtered

agent = Agent(
    name="Secure-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5"
)

result = Runner(agent).run(
    input=[{"role": "user", "content": "My card is 1234-5678-9012-3456"}],
    input_filter=filter_sensitive_data
)

API Reference

Agent

The main agent class that defines the behavior and capabilities of your AI agent.

Agent(
    name: str,                                    # Agent name (required)
    instructions: str,                            # System instructions/prompt (required)
    model: str,                                   # LLM model to use (required)
    description: str = "",                        # Agent description (required for sub-agents)
    mcp_servers: Dict[str, Any] = {},            # MCP server configurations
    function_tools: List[Callable] = [],         # Custom function tools or class instances
    enable_web_search: bool = False,             # Enable web search capabilities
    sub_agents: List[Agent] = [],                # Sub-agents for delegation
    approvals: Dict[str, Callable] = None,       # Tool approval functions (regex patterns)
    parallel_tool_calls: bool = None,            # Enable parallel tool execution
    llm_provider: str = OPENAI,                  # LLM provider (OPENAI or ANTHROPIC)
    max_iterations: int = 10,                    # Maximum iterations before stopping
    max_output_tokens: int = None,               # Maximum tokens in response
    trace_id: str = None                         # Trace ID for debugging
)

Runner

Executes agents and manages their lifecycle.

Synchronous Execution

runner = Runner(agent)
result = runner.run(
    input=[{"role": "user", "content": "Hello"}],
    input_filter=None  # Optional input filter function
)

Asynchronous Execution

runner = Runner(agent)
result = await runner.run_async(
    input=[{"role": "user", "content": "Hello"}],
    input_filter=None  # Optional input filter function
)

Streaming Execution

runner = Runner(agent)
async for event in runner.run_stream(
    input=[{"role": "user", "content": "Hello"}],
    input_filter=None  # Optional input filter function
):
    # Handle events
    pass

RunResponse

The response object returned by agent execution.

class RunResponse:
    result: List[Dict[str, Any]]           # Complete conversation history
    iterations: int                         # Number of iterations executed
    sub_agent_result: List[Dict[str, Any]] # Sub-agent specific results
    sub_agents_response: Dict[str, Any]    # Responses from all sub-agents
    max_iterations_reached: bool           # Whether max iterations was hit
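
For example, a typical way to inspect the response after a run (field names as listed above):

result = Runner(agent).run(input=[{"role": "user", "content": "Hello"}])

print(result.iterations)                 # number of iterations executed
print(result.result[-1]["content"])      # last message in the conversation history
if result.max_iterations_reached:
    print("Warning: the agent stopped because it hit max_iterations")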

Stream Events

When using streaming, you'll receive different event types:

Runner Events:

  • Stream.ITER_START: Iteration started (data: iteration number)
  • Stream.TOOL_RESPONSE: Tool response received (data: ToolCallResponse)
  • Stream.ITER_END: Iteration completed (data: IterationData)
  • Stream.AGENT_RUN_END: Agent execution finished (data: RunResponse)

Provider Events:

  • Stream.CONTENT_CHUNK: Content chunk received (data: string)
  • Stream.TOOL_CALLS: Tool calls made (data: list of tool calls)
  • Stream.PROVIDER_STREAM_COMPLETED: Provider streaming completed (data: ProviderResponse)

Utility Classes

Util

Helper utilities for common operations:

from stark import Util

# Parse JSON from LLM responses (handles markdown code blocks)
data = Util.load_json('```json\n{"key": "value"}\n```')

# Create partial functions with pre-filled arguments
approval_func = Util.pass_function_with_args(my_approval, user_id=123)

RunnerStream

Helper methods for working with stream events:

from stark import RunnerStream

# Create stream events
event = RunnerStream.iteration_start(1)
event = RunnerStream.tool_response(tool_response)
event = RunnerStream.iteration_end(iteration_data)
event = RunnerStream.agent_run_end(run_response)

# Dump event data to string
data_str = RunnerStream.data_dump(event)

MCP Server Configuration

MCP servers extend agent capabilities by providing additional tools and resources.

Stdio-based MCP Server

mcp_servers = {
    "server-name": {
        "command": "uvx",              # Command to run
        "args": ["mcp-server-package"], # Arguments
        "env": {                        # Environment variables
            "API_KEY": "your-key"
        }
    }
}

Multiple MCP Servers

mcp_servers = {
    "jira": {
        "command": "uvx",
        "args": ["mcp-atlassian"],
        "env": {
            "JIRA_URL": os.environ.get("JIRA_URL"),
            "JIRA_USERNAME": os.environ.get("JIRA_EMAIL"),
            "JIRA_API_TOKEN": os.environ.get("JIRA_TOKEN")
        }
    },
    "slack": {
        "command": "uvx",
        "args": ["mcp-slack"],
        "env": {
            "SLACK_BOT_TOKEN": os.environ.get("SLACK_BOT_TOKEN")
        }
    }
}

Function Tools

Using the @stark_tool Decorator

The @stark_tool decorator automatically generates JSON schemas from Python type hints:

from stark import stark_tool
from typing import List

@stark_tool
def my_tool(
    query: str,                    # Required parameter
    limit: int = 10,               # Optional with default
    tags: List[str] = None,        # Optional list
    include_metadata: bool = False # Optional boolean
) -> str:
    """
    Description of what the tool does.
    This docstring becomes the tool description.
    """
    # Your implementation
    return "result"

Supported Types:

  • str → string
  • int → integer
  • float → number
  • bool → boolean
  • dict → object
  • List[T] → array with items of type T
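
With this mapping, the parameters of my_tool above would produce a schema roughly like the following (an illustration only; the exact structure @stark_tool generates may differ):

# Rough illustration -- a standard JSON Schema for my_tool's parameters,
# built from the type mapping listed above.
my_tool_schema = {
    "name": "my_tool",
    "description": "Description of what the tool does. This docstring becomes the tool description.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer"},
            "tags": {"type": "array", "items": {"type": "string"}},
            "include_metadata": {"type": "boolean"},
        },
        "required": ["query"],
    },
}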

Class-Based Tools

Organize related tools into classes:

from stark import stark_tool

class MyTools:
    def __init__(self, config):
        self.config = config
    
    @stark_tool
    def tool_one(self, param: str) -> str:
        """First tool description"""
        return f"Result: {param}"
    
    @stark_tool
    def tool_two(self, value: int) -> str:
        """Second tool description"""
        return f"Value: {value}"

# Use the class instance
tools = MyTools(config="my_config")
agent = Agent(
    name="Agent",
    instructions="Instructions",
    model="claude-sonnet-4-5",
    function_tools=[tools]
)

Built-in CodeTool

The CodeTool class provides comprehensive file and shell operations:

from stark.tools import CodeTool

code_tool = CodeTool(workspace_dir="./project")

# Available methods:
# - read(path, encoding='utf-8')
# - write(path, content, create_dirs=True)
# - update(path, search, replace, count=-1)
# - delete(path, recursive=False)
# - create_directory(path, parents=True)
# - list_directory(path=".", pattern="*", recursive=False)
# - move(source, destination)
# - copy(source, destination, recursive=True)
# - shell_exec(cmd, dir_path=None, timeout=30)
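
These are regular instance methods, so they can presumably also be called directly without going through an agent; a minimal sketch, assuming the signatures listed above:

from stark.tools import CodeTool

code_tool = CodeTool(workspace_dir="./project")

# Assumes the signatures above and that paths resolve relative to workspace_dir.
code_tool.write("notes/todo.txt", "ship v0.1.0", create_dirs=True)
print(code_tool.read("notes/todo.txt"))
print(code_tool.list_directory("notes", pattern="*.txt"))
print(code_tool.shell_exec("echo hello", timeout=10))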

Advanced Usage

LLM Providers

from stark import Agent, Runner
from stark.llm_providers import OPENAI, ANTHROPIC

# OpenAI
openai_agent = Agent(
    name="OpenAI-Agent",
    instructions="You are a helpful assistant",
    model="gpt-4o",
    llm_provider=OPENAI
)

# Anthropic
anthropic_agent = Agent(
    name="Anthropic-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    llm_provider=ANTHROPIC
)

Parallel Tool Calls

Enable parallel execution of multiple tools:

agent = Agent(
    name="Parallel-Agent",
    instructions="You can call multiple tools in parallel",
    model="claude-sonnet-4-5",
    parallel_tool_calls=True,
    function_tools=[tool1, tool2, tool3]
)

Iteration Control

agent = Agent(
    name="Controlled-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    max_iterations=5  # Limit to 5 iterations
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello"}])

if result.max_iterations_reached:
    print("Warning: Agent reached maximum iterations!")

Token Limits

Control the maximum output tokens:

agent = Agent(
    name="Limited-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    max_output_tokens=1000  # Limit response to 1000 tokens
)

Tracing and Debugging

Use trace IDs to track agent execution:

import uuid

agent = Agent(
    name="Traced-Agent",
    instructions="You are a helpful assistant",
    model="claude-sonnet-4-5",
    trace_id=str(uuid.uuid4())
)

result = Runner(agent).run(input=[{"role": "user", "content": "Hello"}])
print(f"Trace ID: {agent.get_trace_id()}")

Best Practices

  1. Clear Instructions: Provide clear, specific instructions to guide agent behavior
  2. Tool Descriptions: Write detailed descriptions for function tools and sub-agents
  3. Error Handling: Always wrap agent execution in try-except blocks
  4. Iteration Limits: Set appropriate max_iterations to prevent infinite loops
  5. Resource Cleanup: MCP server connections are automatically cleaned up
  6. Streaming: Use streaming for long-running tasks to provide real-time feedback
  7. Sub-Agent Descriptions: Always provide descriptions for sub-agents so the parent agent knows when to use them
  8. Type Hints: Use type hints with @stark_tool for automatic schema generation
  9. Approvals: Implement approval workflows for sensitive operations
  10. Input Filtering: Use input filters to sanitize or modify data before LLM processing

Error Handling

from stark import Agent, Runner

try:
    agent = Agent(
        name="Error-Handling-Agent",
        instructions="You are a helpful assistant",
        model="claude-sonnet-4-5"
    )
    
    result = Runner(agent).run(
        input=[{"role": "user", "content": "Hello"}]
    )
    
    if result.max_iterations_reached:
        print("Warning: Maximum iterations reached")
    
except Exception as e:
    print(f"Error: {e}")
    # Handle error appropriately

Examples

Check out the examples/ directory for more comprehensive examples:

  • Basic agent usage
  • MCP server integration
  • Function tools and class-based tools
  • Hierarchical sub-agents
  • Streaming responses
  • Web search integration
  • Tool approvals and input filtering

Requirements

  • Python 3.10 or higher
  • Dependencies are automatically installed with the package

Contributing

Contributions are welcome! Please feel free to submit issues and pull requests.

License

See LICENSE file for details.

Support

For issues and questions, please open an issue on the GitHub repository.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

stark_agents-0.1.0.tar.gz (210.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

stark_agents-0.1.0-py3-none-any.whl (30.6 kB)

Uploaded Python 3

File details

Details for the file stark_agents-0.1.0.tar.gz.

File metadata

  • Download URL: stark_agents-0.1.0.tar.gz
  • Upload date:
  • Size: 210.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for stark_agents-0.1.0.tar.gz

  • SHA256: 369389bbe2c850026194a7c020458a29e03a0a012c41fe4418a30d7e5d3788cd
  • MD5: c4cfd2c527346b5a3c6a2ebd892472b9
  • BLAKE2b-256: 02805e3abc3545eb10690487900aaa0c168fcd7be42bc673bc9bd8b5ece29c3f

See more details on using hashes here.

Provenance

The following attestation bundles were made for stark_agents-0.1.0.tar.gz:

Publisher: publish-pypi.yml on dev-aliraza/stark-agents

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file stark_agents-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: stark_agents-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 30.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for stark_agents-0.1.0-py3-none-any.whl

  • SHA256: 30065915fffca45b535a536b8ff187891b3d4c6a8bf3f6c67f76d068a08a9b6c
  • MD5: c470df5b01dd9205a3c5a291e355ea94
  • BLAKE2b-256: 3d908a941bbfc54f9f5d4774f966322b0d46601ba46242439043431d9409dd58

See more details on using hashes here.

Provenance

The following attestation bundles were made for stark_agents-0.1.0-py3-none-any.whl:

Publisher: publish-pypi.yml on dev-aliraza/stark-agents

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
