Async-first tool calling & LLM client orchestration. OpenAI-compatible, cacheable, framework-agnostic.


🧙‍♂️ Kebogyro


An async-first, Javanese-coded LLM orchestration toolkit — made for geeks who speak tools.

kebogyro is a Python package born from the union of 🛠️ Gyro Gearloose’s mechanical genius and 🎶 Kebogiro, the Javanese gamelan suite played in moments of ceremony, transition, and wonder.

Like its namesake, kebogyro is all about orchestrating — not music, but function calls, LLM responses, and microservice bridges. It's fast, extendable, and knows its way around a toolbench.


🎡 What's in a Name?

  • "Gyro" = Gearloose-style inventiveness and tool-wrangling
  • "Kebo" = Javanese for buffalo, symbolizing calm power, persistence, and resilience — qualities you want in long-running AI services ⚙️💡

Together: Kebogyro — a humble-yet-geeky orchestration layer that never breaks under load, and plays well with others.


🧬 Why Use kebogyro?

Because building LLM apps should be fast, flexible, and async-native:

  • 🔁 Multi-Provider Orchestration — OpenAI, OpenRouter, Groq, Mistral, and more, with one clean interface.
  • 🪛 Tool Calling — Let your model invoke Python functions directly.
  • 🧠 Cache Smart — Redis, Django ORM, or memory-backed tool caching.
  • 🔌 Zero Framework Lock-In — Integrate with anything: FastAPI, Flask, raw ASGI, or Celery.
  • 🛠 Hackable by Design — Extend it, override it, plug it in where you want.

🧠 Core Concepts

Kebogyro is built around three key components:

🧵 LLMClientWrapper

Wraps any supported LLM provider (OpenAI, OpenRouter, Groq, etc.) with support for:

  • Tool-calling via additional_tools
  • System prompts
  • Result caching
  • Optional mcp_client integration

llm = LLMClientWrapper(
    provider="openrouter",
    model_name="mistralai/mistral-7b-instruct",
    model_info={"api_key": os.getenv("OPENROUTER_API_KEY")},
    system_prompt="You are a helpful AI assistant.",
    mcp_client=mcp_client,                # Optional
    additional_tools=[my_tool],           # Optional
    llm_cache=MyLLMCacheAdapter(),        # Optional
)

Works seamlessly with or without tools.
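At its core, tool calling is a simple round-trip: the model emits a structured call, the client runs the matching Python function, and the result goes back to the model. A minimal, kebogyro-free sketch of that dispatch step (the `say_hello` tool and the JSON payload are illustrative, not part of kebogyro's API):

```python
import json

# Conceptual sketch of one tool-calling round-trip -- not kebogyro's
# internals. The model emits a structured call; the client runs the
# matching Python function and returns the result to the model.
def say_hello(name: str) -> str:
    return f"Hello, {name}!"

TOOLS = {"say_hello": say_hello}

def run_tool_call(tool_call_json: str) -> str:
    call = json.loads(tool_call_json)  # e.g. parsed from the model response
    fn = TOOLS[call["name"]]           # look up the registered tool
    return fn(**call["arguments"])     # execute with the model's arguments

run_tool_call('{"name": "say_hello", "arguments": {"name": "Lantip"}}')  # -> "Hello, Lantip!"
```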


🛠 BBServerMCPClient

Adapter for remote tool orchestration using the MCP protocol. Supports multiple transports:

  • stdio
  • sse
  • streamable HTTP

mcp_client = BBServerMCPClient(
    connections={
        # Keys are namespaces (server names). You can define multiple MCP servers.
        # Each one is scoped separately in cache.
        "workroom_tools": {
            "url": "http://localhost:5000/sse",
            "transport": "sse",
        },
        "finance_tools": {
            "url": "http://localhost:5100/sse",
            "transport": "sse",
        }
    },
    cache_adapter=MyLLMCacheAdapter()
)

Namespaces isolate tools per server and reflect in cache keys.
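The exact key layout is internal to kebogyro, but the idea can be sketched as a namespace-prefixed key (the format and `namespaced_cache_key` helper below are hypothetical, for illustration only):

```python
def namespaced_cache_key(namespace: str, tool_name: str, args_digest: str) -> str:
    # Hypothetical layout: the MCP server's namespace prefixes every
    # entry, so "workroom_tools" and "finance_tools" never collide
    # even if both expose a tool with the same name.
    return f"{namespace}:{tool_name}:{args_digest}"

namespaced_cache_key("workroom_tools", "list_rooms", "ab12f3")
```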


🎭 create_agent

Creates an orchestration pipeline that connects:

  • LLM
  • Optional tools
  • Optional MCP tools

from kebogyro.agent_executor import create_agent

agent = create_agent(
    llm_client=llm,
    tools=[my_tool],                # Optional standard tools
    mcp_tools=mcp_client,          # Optional BBServerMCPClient
    system_prompt="You're a coding agent.",
    stream=True
)

response = await agent.ainvoke({"input": "What's the weather like in Yogyakarta?"})

You can mix & match standard tools and MCP tools — or use none at all.


⚡️ Full Example: Caching + Custom Tools

from kebogyro.wrapper import LLMClientWrapper
from kebogyro.mcp_adapter.client import BBServerMCPClient
from kebogyro.agent_executor import create_agent
from kebogyro.cache import AbstractLLMCache
from kebogyro.utils import SimpleTool
import os

class MyLLMCacheAdapter(AbstractLLMCache):
    async def aget_value(self, key): ...
    async def aset_value(self, key, value, expiry_seconds): ...
    async def adelete_value(self, key): ...
    async def is_expired(self, key): ...

def say_hello(name: str) -> str:
    return f"Hello, {name}!"

hello_tool = SimpleTool.from_fn(
    name="say_hello",
    description="Greets the user.",
    fn=say_hello
)

mcp_client = BBServerMCPClient(
    connections={
        "tools": {
            "url": "http://localhost:5000/sse",
            "transport": "sse"
        }
    },
    cache_adapter=MyLLMCacheAdapter()
)

llm = LLMClientWrapper(
    provider="openrouter",
    model_name="mistralai/mistral-7b-instruct",
    model_info={"api_key": os.getenv("OPENROUTER_API_KEY")},
    mcp_client=mcp_client,
    additional_tools=[hello_tool],
    llm_cache=MyLLMCacheAdapter()
)

agent = create_agent(
    llm_client=llm,
    tools=[hello_tool],
    mcp_tools=mcp_client,
    system_prompt="You're a cached, tool-capable agent.",
    stream=False
)

response = await agent.ainvoke({"input": "Say hello to Lantip"})
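The cache adapter in the example is stubbed out. A minimal in-memory implementation of those four methods might look like the sketch below; it is shown without the AbstractLLMCache base class so it runs standalone, and it assumes the method signatures from the stub above. In real code, subclass kebogyro.cache.AbstractLLMCache instead:

```python
import time
from typing import Any, Dict, Optional, Tuple

class InMemoryLLMCache:
    """Sketch of an in-memory cache with per-key expiry.

    Mirrors the four async methods the stub above declares
    (aget_value, aset_value, adelete_value, is_expired).
    """

    def __init__(self) -> None:
        # key -> (value, absolute expiry time, or None for "never expires")
        self._store: Dict[str, Tuple[Any, Optional[float]]] = {}

    async def aget_value(self, key: str) -> Optional[Any]:
        if await self.is_expired(key):
            return None
        return self._store[key][0]

    async def aset_value(self, key: str, value: Any,
                         expiry_seconds: Optional[float] = None) -> None:
        expires_at = time.monotonic() + expiry_seconds if expiry_seconds else None
        self._store[key] = (value, expires_at)

    async def adelete_value(self, key: str) -> None:
        self._store.pop(key, None)

    async def is_expired(self, key: str) -> bool:
        entry = self._store.get(key)
        if entry is None:
            return True  # treat missing keys as expired
        _, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            self._store.pop(key, None)  # evict lazily on read
            return True
        return False
```

A Redis- or Django-ORM-backed adapter would implement the same four methods against its own storage.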

🌐 LLM Provider Config

Edit config.py to register or override providers:

from typing import Dict

from pydantic import BaseModel, Field, HttpUrl

class LLMClientConfig(BaseModel):
    base_urls: Dict[str, HttpUrl] = Field(
        default_factory=lambda: {
            "openrouter": "https://openrouter.ai/api/v1",
            "anthropic": "https://api.anthropic.com/v1/",
            "cerebras": "https://api.cerebras.ai/v1",
            "groq": "https://api.groq.com/openai/v1",
            "requesty": "https://router.requesty.ai/v1"
        }
    )
    google_default_base_url: HttpUrl = "https://generativelanguage.googleapis.com/v1beta/openai/"

Add custom:

config = LLMClientConfig()
config.base_urls["my_llm"] = "https://my-custom-llm.com/api"
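Provider resolution then amounts to a lookup against that mapping. A hypothetical sketch of the idea (`resolve_base_url` is not part of kebogyro's public API; the default dict below lists only one entry for brevity):

```python
from typing import Dict, Optional

# Illustrative subset of the defaults shown in LLMClientConfig above.
DEFAULT_BASE_URLS: Dict[str, str] = {
    "openrouter": "https://openrouter.ai/api/v1",
}

def resolve_base_url(provider: str,
                     overrides: Optional[Dict[str, str]] = None) -> str:
    # Hypothetical helper: custom entries (e.g. your "my_llm" override)
    # win over the defaults; unknown providers fail early and loudly.
    urls = {**DEFAULT_BASE_URLS, **(overrides or {})}
    try:
        return urls[provider]
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {provider!r}")
```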

📦 Install

pip install kebogyro

Or, from a source checkout:

pip install ./src/kebogyro

🙌 Contributing

Pull requests, bug reports, ideas, memes — all welcome.


📄 License

MIT — Use it, remix it, extend it. Let the gamelan guide your tools.
