Toolcase

Type-safe, extensible tool framework for AI agents.

Features

  • Async-first with sync compatibility
  • Type-safe parameters via Pydantic generics
  • Monadic error handling with Result[T, E] types
  • Multi-framework converters (OpenAI, Anthropic, Google)
  • MCP protocol & HTTP server for Cursor/Claude Desktop
  • Middleware pipeline (logging, retry, timeout, rate limiting, circuit breaker)
  • Agentic primitives (router, fallback, race, gate, escalation)
  • Structured concurrency with TaskGroup and CancelScope
  • Distributed tracing (OTLP, Datadog, Honeycomb, Zipkin)
  • Caching with TTL (memory, Redis, Memcached)
  • Dependency injection with scoped lifecycle
  • Streaming with SSE, WebSocket, JSON Lines adapters

Installation

pip install toolcase

# Optional integrations
pip install toolcase[mcp]        # MCP protocol (Cursor, Claude Desktop)
pip install toolcase[http]       # HTTP REST server
pip install toolcase[langchain]  # LangChain tools
pip install toolcase[redis]      # Redis cache backend
pip install toolcase[otel]       # OpenTelemetry exporters
pip install toolcase[all]        # Everything

Quick Start

from toolcase import tool, init_tools

@tool(description="Search the web")
async def search(query: str, limit: int = 5) -> str:
    return f"Found {limit} results for: {query}"

registry = init_tools(search)  # Registers discovery tool + your tools
result = await registry.execute("search", {"query": "python", "limit": 3})

Class-Based Tools

from pydantic import BaseModel, Field
from toolcase import BaseTool, ToolMetadata

class SearchParams(BaseModel):
    query: str = Field(..., description="Search query")
    limit: int = Field(default=5, ge=1, le=20)

class SearchTool(BaseTool[SearchParams]):
    metadata = ToolMetadata(name="search", description="Search the web", category="search")
    params_schema = SearchParams

    async def _async_run(self, params: SearchParams) -> str:
        return f"Found {params.limit} results for: {params.query}"

Core Concepts

Middleware Pipeline

from toolcase import LoggingMiddleware, RetryMiddleware, TimeoutMiddleware

registry.use(LoggingMiddleware())
registry.use(RetryMiddleware(max_retries=3))
registry.use(TimeoutMiddleware(30.0))
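Conceptually, each middleware wraps the next handler in the chain, so the registry executes an onion of wrappers around the tool itself. The sketch below is illustrative only: the handler signature and `logging_middleware`/`timeout_middleware` helpers are hypothetical stand-ins, not toolcase's internal API.

```python
import asyncio

def logging_middleware(next_handler):
    """Wrap the next handler with before/after logging."""
    async def handler(name, params):
        print(f"-> {name}({params})")
        result = await next_handler(name, params)
        print(f"<- {name}")
        return result
    return handler

def timeout_middleware(seconds):
    """Cancel the wrapped handler if it exceeds `seconds`."""
    def wrap(next_handler):
        async def handler(name, params):
            return await asyncio.wait_for(next_handler(name, params), seconds)
        return handler
    return wrap

async def execute(name, params):
    # Innermost handler: the tool invocation itself.
    return f"ran {name}"

# Outermost middleware runs first, like registry.use() stacking.
pipeline = logging_middleware(timeout_middleware(1.0)(execute))
out = asyncio.run(pipeline("search", {"query": "python"}))
```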

Monadic Error Handling

from toolcase import Ok, Err, Result, ToolError, try_tool_operation

def _run_result(self, params) -> Result[str, ToolError]:  # inside a BaseTool subclass
    return (
        self._validate(params)
        .flat_map(lambda p: self._fetch(p))
        .map(lambda d: self._format(d))
    )

# Auto-wrap exceptions
result = try_tool_operation("my_tool", lambda: risky_call())
match result:
    case Ok(value): print(value)
    case Err(error): print(error.message)

See docs/MONADIC_ERRORS.md for the complete guide.
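The key property of `Result` chaining is that `map`/`flat_map` only run on `Ok`, while an `Err` short-circuits the rest of the chain. A minimal standalone sketch of those semantics (the real toolcase types carry more machinery; only the names mirror the API):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar, Union

T = TypeVar("T")
U = TypeVar("U")
E = TypeVar("E")

@dataclass(frozen=True)
class Ok(Generic[T]):
    value: T
    def map(self, fn: Callable[[T], U]) -> "Ok[U]":
        return Ok(fn(self.value))
    def flat_map(self, fn: Callable[[T], "Result"]) -> "Result":
        return fn(self.value)

@dataclass(frozen=True)
class Err(Generic[E]):
    error: E
    def map(self, fn) -> "Err[E]":
        return self  # errors short-circuit: fn never runs
    def flat_map(self, fn) -> "Err[E]":
        return self

Result = Union[Ok, Err]

def parse_int(s: str) -> Result:
    try:
        return Ok(int(s))
    except ValueError:
        return Err(f"not an int: {s!r}")

doubled = parse_int("21").map(lambda n: n * 2)  # Ok(42)
failed = parse_int("x").map(lambda n: n * 2)    # Err, untouched by map
```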

Agentic Composition

from toolcase import router, fallback, race, gate, Route

smart_search = router(Route(lambda p: "code" in p["query"], code_search), default=web_search)
resilient = fallback(primary_api, backup_api, cache)  # Try until success
fastest = race(api_a, api_b, timeout=5.0)             # First success wins
premium = gate(lambda ctx: ctx.get("is_premium"), expensive_tool)
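The `fallback` primitive's try-until-success behavior reduces to a simple loop over candidates. A plain-asyncio sketch of the idea, with hypothetical stand-in coroutines rather than toolcase tool objects:

```python
import asyncio

async def fallback_run(params, *tools):
    """Try each tool in order; return the first success, re-raise the last failure."""
    last_exc = None
    for t in tools:
        try:
            return await t(params)
        except Exception as exc:
            last_exc = exc  # remember and move to the next candidate
    raise last_exc

async def main():
    async def flaky(p):
        raise RuntimeError("primary down")

    async def backup(p):
        return f"backup: {p['q']}"

    # flaky fails, so the chain falls through to backup.
    return await fallback_run({"q": "python"}, flaky, backup)

result = asyncio.run(main())
```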

Multi-Framework Export

from toolcase.foundation.formats import to_openai, to_anthropic, to_google

openai_tools = to_openai(registry)      # OpenAI function calling
anthropic_tools = to_anthropic(registry) # Anthropic tool_use
gemini_tools = to_google(registry)       # Google Gemini

MCP & HTTP Server

from toolcase.ext.mcp import serve_mcp, serve_http

serve_mcp(registry, transport="sse", port=8080)  # Cursor, Claude Desktop
serve_http(registry, port=8000)                   # REST API

Dependency Injection

from toolcase import Container, Scope

container = Container()
container.provide("db", lambda: Database(), Scope.SINGLETON)
container.provide("http", lambda: httpx.AsyncClient(), Scope.SCOPED)

async with container.scope() as ctx:
    db = await container.resolve("db", ctx)
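The difference between the two lifetimes: a SINGLETON is built once per container, while a SCOPED dependency is built once per scope context. A minimal synchronous illustration (the real `Container` is async and resolved inside `async with container.scope()`; this implementation is a sketch, not the library's):

```python
class Scope:
    SINGLETON = "singleton"  # one instance for the container's lifetime
    SCOPED = "scoped"        # one instance per scope context

class Container:
    def __init__(self):
        self._factories = {}
        self._singletons = {}

    def provide(self, name, factory, scope=Scope.SINGLETON):
        self._factories[name] = (factory, scope)

    def resolve(self, name, ctx):
        factory, scope = self._factories[name]
        # Singletons cache on the container; scoped deps cache on the context.
        cache = self._singletons if scope == Scope.SINGLETON else ctx
        if name not in cache:
            cache[name] = factory()
        return cache[name]

container = Container()
container.provide("config", lambda: {"ttl": 300}, Scope.SINGLETON)
container.provide("session", lambda: object(), Scope.SCOPED)

ctx_a, ctx_b = {}, {}  # two independent scope contexts
same = container.resolve("config", ctx_a) is container.resolve("config", ctx_b)
fresh = container.resolve("session", ctx_a) is container.resolve("session", ctx_b)
```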

Batch Execution

from toolcase import BatchConfig

params_list = [{"query": q} for q in ["python", "rust", "go"]]
results = await search.batch_run(params_list, BatchConfig(concurrency=5))
print(f"Success: {results.success_rate:.0%}")
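Bounded-concurrency batching of this kind is typically a semaphore around `asyncio.gather`; the sketch below shows the pattern with a hypothetical `batch_run` helper, not toolcase's implementation:

```python
import asyncio

async def batch_run(fn, params_list, concurrency=5):
    """Run fn over params_list with at most `concurrency` calls in flight."""
    sem = asyncio.Semaphore(concurrency)

    async def one(params):
        async with sem:  # blocks while `concurrency` tasks are running
            try:
                return ("ok", await fn(params))
            except Exception as exc:
                return ("err", exc)

    # gather preserves input order even though tasks overlap.
    return await asyncio.gather(*(one(p) for p in params_list))

async def search(params):
    return f"results for {params['query']}"

params_list = [{"query": q} for q in ["python", "rust", "go"]]
results = asyncio.run(batch_run(search, params_list, concurrency=2))
success_rate = sum(status == "ok" for status, _ in results) / len(results)
```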

Streaming

from toolcase import sse_adapter

@tool(description="Stream response", streaming=True)
async def stream_search(query: str):
    for i in range(10):
        yield f"Result {i} for {query}"

async for event in sse_adapter(stream_search.stream({"query": "test"})):
    print(event)
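Whatever the adapter yields ultimately reaches the client as standard Server-Sent Events frames: optional `id:` line, one or more `data:` lines, and a blank-line terminator. This formatter is illustrative of the wire format only; toolcase's `sse_adapter` operates on async streams:

```python
def to_sse(events):
    """Format an iterable of strings as SSE frames (id + data + blank line)."""
    for i, data in enumerate(events):
        yield f"id: {i}\ndata: {data}\n\n"

frames = list(to_sse(["Result 0", "Result 1"]))
```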

Observability

from toolcase import configure_tracing, configure_logging, TracingMiddleware

configure_tracing(exporter="otlp", endpoint="http://localhost:4317")
configure_logging(level="INFO", json=True)
registry.use(TracingMiddleware())

Testing

from toolcase import ToolTestCase, mock_tool

class TestSearch(ToolTestCase):
    async def test_search(self):
        tool = mock_tool("search", return_value="mocked")
        result = await tool.run({"query": "test"})
        self.assert_success(result)

Settings & Environment

from toolcase import get_settings, load_env

load_env()  # Load .env files
settings = get_settings()
print(settings.cache.ttl, settings.retry.max_retries)

CLI Help

toolcase help              # List topics
toolcase help tool         # Tool creation
toolcase help middleware   # Middleware pipeline
toolcase help agents       # Agentic patterns
toolcase help mcp          # MCP/HTTP server
toolcase help result       # Monadic errors

API Reference

Core

BaseTool[T] · ToolMetadata · ToolCapabilities · @tool · init_tools

Errors

Result[T, E] · Ok · Err · ErrorCode · ToolError · try_tool_operation

Runtime

Middleware · compose · pipeline · parallel · streaming_pipeline

Agents

router · fallback · race · gate · retry_with_escalation

Concurrency

TaskGroup · CancelScope · Lock · Semaphore · run_sync

Observability

configure_tracing · configure_logging · TracingMiddleware · Span

IO

ToolCache · MemoryCache · StreamChunk · StreamEvent · BatchConfig

License

Griffin License - Made by GriffinCanCode

See LICENSE for full terms.

Download files

Source Distribution: toolcase-0.1.0.tar.gz (326.6 kB)
Built Distribution: toolcase-0.1.0-py3-none-any.whl (433.6 kB)

File details: toolcase-0.1.0.tar.gz

  • Size: 326.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

Hashes:
  SHA256: f39a8176c7928918c67e1bd5f5954d52a34b8b2584f2254cc62e4929b044ed02
  MD5: 323090167788eb409c7295a5cf1abf39
  BLAKE2b-256: b3a44e460d5cdc866bf2c36ef69f5ce80369747fe2b549333b5c555ab9569ed4

File details: toolcase-0.1.0-py3-none-any.whl

  • Size: 433.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

Hashes:
  SHA256: 44b67beeffa02e031599c1bae975b03753f94cd1cad428869148a4e23d6c06b0
  MD5: 41d224d8e396f6df2a551873ad880e3f
  BLAKE2b-256: d41c6c46c69c7119bb18d174954fa13ab441dd587c5380938e1c94a804293f3d
