Protocol and Language Agnostic Tooling Yielding Universal Semantics
Project description
🦫 plat (aka "platypus")
"call it like you're there"
plat is a Protocol and Language Agnostic Tooling Yielding Proxy-like Universal Semantics. In short:
- just write your methods...
- ...then call them
It doesn't matter what you are designing:
- Standard REST API
- MCP Server
- A CLI tool
- Database CRUD
- AI Tool Calls
- Inter Process Communication
- Worker Queues
- Client-to-client Chat Room
At the end of the day, it all comes down to handlers and callers.
Yes, there is an API layer in between, but it shouldn't be your concern:
- auth
- exceptions
- headers
- serialization/deserialization/type coercion
- param validation
- route building
- response building
- rate limiting
- caching
- client-side retry logic
- client-side param validation
All of that can be handled behind the scenes with standardizable middleware plugins.
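To make that concrete, here is a minimal, framework-agnostic sketch of the middleware idea (this is illustrative only, not plat's actual plugin API): cross-cutting concerns are composed as functions wrapped around a plain handler.

```python
# Illustrative sketch, NOT plat's API: auth and validation layered
# around a plain handler as composable middleware functions.
from typing import Any, Callable

Handler = Callable[[dict[str, Any]], dict[str, Any]]
Middleware = Callable[[Handler], Handler]

def with_auth(handler: Handler) -> Handler:
    def wrapped(request: dict[str, Any]) -> dict[str, Any]:
        if request.get("token") != "secret":  # stand-in auth check
            raise PermissionError("unauthorized")
        return handler(request)
    return wrapped

def with_validation(handler: Handler) -> Handler:
    def wrapped(request: dict[str, Any]) -> dict[str, Any]:
        if "input" not in request:
            raise ValueError("missing input")
        return handler(request)
    return wrapped

def apply_middleware(handler: Handler, stack: list[Middleware]) -> Handler:
    # Outermost middleware listed first, as in most web frameworks.
    for mw in reversed(stack):
        handler = mw(handler)
    return handler

def create_order(request: dict[str, Any]) -> dict[str, Any]:
    # The handler itself never sees auth, headers, or serialization.
    return {"orderId": "ord_123", "status": "pending"}

handler = apply_middleware(create_order, [with_auth, with_validation])
```

The handler stays a plain function; everything else is stackable and standardizable.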
In fact, the transport method itself doesn't even matter:
- HTTP
- WS
- File Queues
- DB triggers
- Zapier Integrations
- External APIs
- USPS mail delivery
It's all important, but your function handler does not need to know about it, and neither does your call site.
Okay, seriously ... what are we even talking about here?
Fair... try this
Quickstart
1. Install
pip install modularizer-plat
2. Make a server
from plat import Controller, POST, RouteContext, create_server

@Controller()
class OrdersApi:
    @POST()
    def createOrder(
        self,
        input: dict[str, str | int],
        ctx: RouteContext,
    ) -> dict[str, str]:
        return {"orderId": "ord_123", "status": "pending"}

server = create_server({"port": 3000}, OrdersApi)
server.listen()
3. Serve it with the CLI
plat serve
4. See the docs
open http://localhost:3000/
5. Make a client
plat gen client http://localhost:3000/ --dst client.py
from client import create_client
client = create_client("http://localhost:3000")
order = client.createOrder(itemId="sku_123", qty=2)
print(order)
6. Make a CLI
plat gen cli http://localhost:3000/ --dst cli.py
python cli.py createOrder --itemId=sku_123 --qty=2
7. Let your AI loose
from plat import create_openapi_client

client = create_openapi_client(
    "http://localhost:3000/openapi.json",
    "http://localhost:3000",
)
tools = client.tools
# hand `tools` to your AI provider
# then call back into `client.createOrder(...)` when it selects a tool
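The dispatch step of that agent loop is just attribute lookup on the client. Here is a provider-agnostic sketch; the StubClient below is a placeholder standing in for the real client, not part of plat:

```python
# Provider-agnostic sketch of the tool-dispatch step: when the model
# picks a tool by name, route the call back through the client.
from typing import Any

class StubClient:
    # Placeholder for a real plat client.
    def createOrder(self, **kwargs: Any) -> dict[str, Any]:
        return {"orderId": "ord_123", "status": "pending"}

def dispatch_tool_call(client: Any, name: str, arguments: dict[str, Any]) -> Any:
    # Look the tool up by its stable operation name and invoke it.
    method = getattr(client, name, None)
    if method is None:
        raise AttributeError(f"unknown tool: {name}")
    return method(**arguments)

result = dispatch_tool_call(StubClient(), "createOrder", {"itemId": "sku_123", "qty": 2})
```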
Use it
- Client call: client.createOrder(itemId="...", qty=2)
- Generated CLI call: plat createOrder --itemId=... --qty=2
- AI tool call: an LLM can see createOrder as a tool with a name, input shape, and result shape
- Documentation: generated openapi.json, plus docs/tool metadata derived from the same method
🎯 Why plat exists
Most API frameworks make you think about:
- routes
- methods
- headers
- authentication
- serialization/deserialization/coercion
- sending responses
- REST hierarchies
- request shapes
- wire protocols
- client generation drift
- transport details
plat tries to make most of that disappear.
What you should be thinking about is:
- what methods exist
- what input each method accepts
- what result each method returns
That's the part plat treats as sacred.
It should feel like you are just adding a new class, and behind the scenes an API is born.
One of the biggest reasons plat exists is to make it easy to use any AI provider:
- on the client side
- on the server side
- as the initiator of your tasks
- or as the doer of your tasks
- or both
Everything below is in service of that same promise: define useful methods once, then let clients, CLIs, docs, and AI tools all see the same surface.
Diagram
┌─────────────────────────────────────────────────────────┐
│                    Tool Definitions                     │
│             (controllers + decorated methods)           │
│                                                         │
│  TypeScript (plain types)      Python (type hints)      │
│  class Orders {                @Controller()            │
│    @Post()                     class Orders:            │
│    create(input, ctx) {}         @POST()                │
│    @Get()                        def create(self): ...  │
│    list(input, ctx) {}           @GET()                 │
│  }                               def list(self): ...    │
└────────────────────────────┬────────────────────────────┘
                             │
                  ┌──────────┴──────────┐
                  │ Operation Registry  │
                  │                     │
                  │ operationId ──────► bound handler
                  │ method+path ──────► bound handler
                  └──────────┬──────────┘
                             │
         server protocol plugins (how tool calls arrive)
                             │
┌───────┬───────┬─────────┬──┴──────┬───────┬────────┬───────┐
│ HTTP  │  WS   │  File   │ WebRTC  │  DB   │ BullMQ │ MQTT  │
│ REST  │  RPC  │  Queue  │  Data   │ Poll  │ Redis  │ Pub/  │
│       │       │         │  Chan   │ Rows  │ Queue  │ Sub   │
└───────┴───────┴─────────┴─────────┴───────┴────────┴───────┘
                             ...
         literally anything that can carry a JSON envelope
                             ...
┌───────┬───────┬─────────┬─────────┬───────┬────────┬───────┐
│ HTTP  │  WS   │  File   │ WebRTC  │ POST  │ eBay   │  FB   │
│ fetch │  RPC  │  IO     │  Peer   │to ext │ list   │  Msg  │
│       │       │         │  Conn   │ API   │ poll   │ poll  │
└───────┴───────┴─────────┴──┬──────┴───────┴────────┴───────┘
                             │
         client transport plugins (how tool calls are sent)
                             │
                  ┌──────────┴──────────┐
                  │   OpenAPI Client    │
                  │   (typed proxy)     │
                  └──────────┬──────────┘
                             │
     ┌────────┬────────┬─────┴───┬────────┬───────────┐
     │  TS    │ Python │   CLI   │  curl  │ LLM Agent │
     │        │        │         │  bash  │           │
     │ node   │ sync   │ plat do │ write  │ Claude    │
     │ bun    │ async  │plat poll│  JSON  │ ChatGPT   │
     │ browser│ promise│         │to inbox│ Gemini    │
     └────────┴────────┴─────────┴────────┴───────────┘
The transport protocol, serialization, deserialization, queueing, and delivery mechanics are intentionally pushed out of your way.
That is especially powerful for AI-heavy systems, because you can keep swapping providers and execution patterns while preserving the same tool-shaped surface.
🎭 What the user experience should feel like
It should feel like this:
order = client.createOrder(itemId="sku_123", qty=2)
Not like this:
- choosing between totally different client libraries
- hand-authoring RPC envelopes
- thinking about HTTP vs WS every time you call a method
- manually syncing method names, routes, SDK methods, and OpenAPI operation IDs
- re-implementing error handling, retries, and auth every time you make a request
- refactoring if you change languages or protocols
- endless boilerplate
It's like an SDK except you don't have to write it. It just comes for free with every openapi.json.
🦫 Flat by design
plat is intentionally opinionated about the API shape.
The rules
- Method names are globally unique
- Method names are the canonical route names
- Input comes in as one object
- Return values matter as first-class API types
- Controllers organize code and docs, not URL hierarchies
- The API surface stays flat and easy to call
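A hypothetical sketch of what "globally unique method names" implies for the operation registry (illustrative names, not plat's internals): every name maps to exactly one bound handler, and collisions fail fast at registration time.

```python
# Illustrative flat-registry sketch: one globally unique name per
# handler, so routes, clients, CLIs, and tools can all share it.
from typing import Any, Callable

class OperationRegistry:
    def __init__(self) -> None:
        self._handlers: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, handler: Callable[..., Any]) -> None:
        if name in self._handlers:
            # Flatness rule: method names are globally unique.
            raise ValueError(f"duplicate operation name: {name}")
        self._handlers[name] = handler

    def invoke(self, name: str, **input: Any) -> Any:
        # The canonical call: one name, one input object, one result.
        return self._handlers[name](**input)

registry = OperationRegistry()
registry.register("getOrder", lambda id: {"id": id, "status": "pending"})
```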
Example
from plat import Controller, GET, POST, RouteContext

@Controller()
class OrdersApi:
    @GET()
    def getOrder(self, input: dict[str, str], ctx: RouteContext) -> dict[str, str]:
        return {"id": input["id"], "status": "pending"}

    @POST()
    def createOrder(self, input: dict[str, str | int], ctx: RouteContext) -> dict[str, str]:
        return {"id": "ord_123", "status": "pending"}
Canonical routes:
GET /getOrder
POST /createOrder
Canonical client calls:
client.getOrder(id="ord_123")
client.createOrder(itemId="sku_123", qty=2)
That flatness matters because it makes the generated and dynamic clients obvious:
- easy for humans to remember
- easy for CLIs to expose
- easy for AI agents to understand
- easy for generated clients to mirror exactly
- easy to hand to any AI provider as tool definitions
⏳ Long-running calls without changing the mental model
Sometimes a method is fast:
client.createOrder(itemId="sku_123", qty=2)
Sometimes a method is slow, and you want visibility:
import logging
logger = logging.getLogger("plat.example")
client.importCatalog(
    source="s3://bucket/catalog.csv",
    _rpc_events=lambda event: logger.info("%s %s", event.event, event.data),
)
Or you want deferred execution:
handle = client.importCatalog(
    source="s3://bucket/catalog.csv",
    _execution="deferred",
)
result = handle.wait()
The important part is that it is still the same method.
As a bonus, in the right mode you can get:
- progress updates
- logs
- chunks/messages
- cancellation
That is what most users actually care about. The carrier and plugin details are for transport authors.
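For intuition, here is one way a deferred handle could work under the hood (illustrative only; plat's actual handles are transport-aware): run the call in the background and let wait() block until the result arrives.

```python
# Illustrative deferred-handle sketch, NOT plat's implementation:
# the call runs on a background thread and wait() joins for the result.
import threading
from typing import Any, Callable

class DeferredHandle:
    def __init__(self, fn: Callable[[], Any]) -> None:
        self._result: Any = None
        self._thread = threading.Thread(target=self._run, args=(fn,))
        self._thread.start()

    def _run(self, fn: Callable[[], Any]) -> None:
        self._result = fn()

    def wait(self) -> Any:
        # Block until the background call finishes, then return its result.
        self._thread.join()
        return self._result

handle = DeferredHandle(lambda: {"imported": 42})
result = handle.wait()
```

The call site keeps the same shape whether execution is immediate or deferred; only the handle changes.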
🐍 Python support
plat supports Python servers and clients too.
You can:
- write Python controllers with plat decorators
- generate OpenAPI from *.api.py
- generate Python clients from OpenAPI
- use sync, async, and promise-style Python clients
Python highlights
- Sync clients
- Async clients
- Promise-style clients
- Deferred call handles
- Automatic input coercion
- Automatic output serialization
- First-class HTTP exception types
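To make "automatic input coercion" concrete, here is a generic sketch (not plat's implementation) of the idea: wire values often arrive as strings, and they can be coerced to the handler's annotated types before the call.

```python
# Generic coercion sketch: inspect the handler's annotations and
# coerce simple scalar string inputs (e.g. from a query string).
import inspect

def coerce_input(fn, raw: dict[str, str]) -> dict:
    params = inspect.signature(fn).parameters
    coerced = {}
    for name, value in raw.items():
        annotation = params[name].annotation
        # Coerce only simple numeric annotations; pass others through.
        coerced[name] = annotation(value) if annotation in (int, float) else value
    return coerced

def createOrder(itemId: str, qty: int) -> dict:
    return {"itemId": itemId, "qty": qty}

args = coerce_input(createOrder, {"itemId": "sku_123", "qty": "2"})
```

A real implementation would also handle booleans, unions, and nested models, but the principle is the same: the type hints drive the coercion.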
One client, many transports
The same method call should stay usable even when transport changes.
http_client = create_client("http://localhost:3000")
rpc_client = create_client("ws://localhost:3000")
file_client = create_client("file:///tmp/plat-queue")
http_client.createOrder(itemId="sku_123", qty=2)
rpc_client.createOrder(itemId="sku_123", qty=2)
file_client.createOrder(itemId="sku_123", qty=2)
Same tool call. Different carrier.
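Presumably create_client selects the transport from the URL scheme; this standalone sketch (with placeholder transport names, not plat's actual plugins) shows the idea:

```python
# Illustrative scheme-based transport selection; the transport names
# here are placeholders, not plat's real plugin identifiers.
from urllib.parse import urlparse

TRANSPORTS = {
    "http": "http-transport",
    "ws": "websocket-transport",
    "file": "file-queue-transport",
}

def pick_transport(url: str) -> str:
    # The scheme alone decides the carrier; the method call is unchanged.
    scheme = urlparse(url).scheme
    if scheme not in TRANSPORTS:
        raise ValueError(f"no transport plugin for scheme: {scheme}")
    return TRANSPORTS[scheme]
```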
Diagram
      createOrder({ itemId, qty })
                  │
         ┌────────┼────────┐
         │        │        │
         ▼        ▼        ▼
        HTTP      WS      File
         │        │        │
         └────────┼────────┘
                  ▼
      same type-aware method call
🤖 AI tool calling
plat is a natural fit for LLM tools because the API shape is already tool-shaped.
Every operation has:
- a stable name
- one input object
- one result
- generated schema
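For example, a createOrder operation maps naturally onto the JSON-Schema function-tool shape that most LLM providers accept (the exact schema plat emits may differ; this is the general convention):

```python
# The common function-tool convention used by major LLM providers;
# the field names vary slightly per provider, but the shape is the point:
# a stable name, one input object schema, and a description.
create_order_tool = {
    "name": "createOrder",
    "description": "Create a new order.",
    "input_schema": {
        "type": "object",
        "properties": {
            "itemId": {"type": "string"},
            "qty": {"type": "integer"},
        },
        "required": ["itemId", "qty"],
    },
}
```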
That means you can use AI providers in whichever role you want:
- as the caller deciding what tools to use
- as the worker fulfilling part of a task
- as interchangeable providers inside the same larger workflow
- on the client side or the server side
That makes the same API useful to:
- normal app code
- a CLI
- generated SDKs
- an LLM agent
🧰 Dynamic clients and generated clients
plat supports both styles.
Dynamic clients
The OpenAPI client can work directly from an OpenAPI document and a runtime proxy.
Best when you want:
- low ceremony
- transport flexibility
- no generated wrapper code
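A dynamic client of this kind can be sketched in a few lines with __getattr__; the dispatch function below is a stub standing in for a real transport, and the names are illustrative rather than plat's actual API:

```python
# Minimal dynamic-client sketch: attribute access becomes an operation
# call, with the operation set taken from (say) an OpenAPI document.
from typing import Any, Callable

class DynamicClient:
    def __init__(self, operations: set[str], dispatch: Callable[..., Any]) -> None:
        self._operations = operations
        self._dispatch = dispatch

    def __getattr__(self, name: str) -> Callable[..., Any]:
        if name not in self._operations:
            raise AttributeError(f"unknown operation: {name}")
        def call(**input: Any) -> Any:
            # Forward the flat call through whatever transport dispatch wraps.
            return self._dispatch(name, **input)
        return call

client = DynamicClient(
    {"createOrder"},
    dispatch=lambda name, **input: {"op": name, "input": input},
)
```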
Generated clients
plat can also generate clients that materialize types and methods.
Especially useful in Python, where explicit generated models and wrappers help more than in TypeScript.
🖥️ CLI
plat includes a spec-first CLI.
plat gen openapi
plat gen client
plat gen cli
plat run openapi.json
plat serve
The CLI is available from both Node and Python packaging surfaces, with capability moving toward parity.
🧩 Plugin architecture
The plugin architecture matters, but mostly as an implementation and extension story.
For normal plat users, the important thing is:
- methods stay flat
- typing stays strong
- clients feel direct
- transport details stay hidden
- provider complexity stays hidden too
For plugin developers, plat provides the escape hatch.
Client-side transport plugins
Transport plugins follow a generic lifecycle:
- connect
- send request
- receive updates
- receive result
- disconnect
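These lifecycle steps can be expressed as a structural interface; the names below are illustrative, not plat's actual plugin contract, and the in-memory transport is a toy that never crosses a real wire:

```python
# Illustrative transport-plugin lifecycle as a structural Protocol,
# plus a trivial in-memory transport that just echoes the envelope.
from typing import Any, Iterator, Protocol

class Transport(Protocol):
    def connect(self) -> None: ...
    def send_request(self, operation: str, input: dict[str, Any]) -> None: ...
    def receive_updates(self) -> Iterator[dict[str, Any]]: ...
    def receive_result(self) -> Any: ...
    def disconnect(self) -> None: ...

class InMemoryTransport:
    def __init__(self) -> None:
        self._result: Any = None

    def connect(self) -> None:
        pass  # nothing to open for an in-process transport

    def send_request(self, operation: str, input: dict[str, Any]) -> None:
        # Echo back the envelope instead of crossing a real wire.
        self._result = {"operation": operation, "input": input}

    def receive_updates(self) -> Iterator[dict[str, Any]]:
        yield from ()  # no progress events in this toy transport

    def receive_result(self) -> Any:
        return self._result

    def disconnect(self) -> None:
        pass

transport: Transport = InMemoryTransport()
```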
Server-side protocol plugins
Protocol plugins are how tool calls arrive and how updates/results leave.
The goal is for the core method/typing/invocation story to be independent from:
- HTTP
- WebSockets
- Node
- any specific host process
Why this matters
That is what enables ideas like:
- a browser-side server
- a mobile-hosted server
- a worker-hosted server
- IndexedDB-backed local APIs
- WebRTC-based peer-to-peer tools
- custom carriers like DB polling or Redis streams
Most users should not have to think about any of this unless they are building a transport.
What makes plat different
Most systems force you to choose:
- REST or RPC
- server or client
- app integration or AI tool integration
- HTTP or "something custom"
plat is trying to collapse those choices into one model:
- Define useful tools
- Expose them everywhere
- Change carriers without changing the API itself
That makes plat especially interesting for:
- internal tools
- AI agents
- automation systems
- offline-first systems
- browser-hosted local APIs
- weird protocol experiments
🛣️ Direction
plat is actively moving toward:
- deeper transport neutrality
- stronger portable server core extraction
- easier custom protocol plugins
- stronger generated clients and CLIs
- better cross-language symmetry
The north star is simple:
Define tools once. Call them from anywhere. Carry them over anything.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file modularizer_plat-0.7.0.tar.gz.
File metadata
- Download URL: modularizer_plat-0.7.0.tar.gz
- Size: 78.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ec1c2e0b0ee0bdfb71f45421ae2f78a1c2e1c1521490335ada4392fabaf5c746 |
| MD5 | 1f5e5c258821204df03ef4ba8e7a1d51 |
| BLAKE2b-256 | 8aaf4ecf5f73e77ecea73246e91b2b6815d8004991fdc7305cf696bd611c22c5 |
Provenance
The following attestation bundles were made for modularizer_plat-0.7.0.tar.gz:
Publisher: publish.yml on modularizer/plat
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: modularizer_plat-0.7.0.tar.gz
- Subject digest: ec1c2e0b0ee0bdfb71f45421ae2f78a1c2e1c1521490335ada4392fabaf5c746
- Sigstore transparency entry: 1245806300
- Permalink: modularizer/plat@aa75bc7415d14b07c1ccaeee452b2e17b4d05293
- Branch / Tag: refs/tags/v0.7.0
- Owner: https://github.com/modularizer
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@aa75bc7415d14b07c1ccaeee452b2e17b4d05293
- Trigger Event: release
File details
Details for the file modularizer_plat-0.7.0-py3-none-any.whl.
File metadata
- Download URL: modularizer_plat-0.7.0-py3-none-any.whl
- Size: 79.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c0b4e4bd4923c14fc0d98bedbbd6e9a2259d21d88e48e70c5c0f595413db4474 |
| MD5 | 0d94c8abe42441ccad7470cfd43eafcf |
| BLAKE2b-256 | c67d98001a0d9b90c694891e040c6b498be113dbebf0f0033d989e7546826abd |
Provenance
The following attestation bundles were made for modularizer_plat-0.7.0-py3-none-any.whl:
Publisher: publish.yml on modularizer/plat
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: modularizer_plat-0.7.0-py3-none-any.whl
- Subject digest: c0b4e4bd4923c14fc0d98bedbbd6e9a2259d21d88e48e70c5c0f595413db4474
- Sigstore transparency entry: 1245806322
- Permalink: modularizer/plat@aa75bc7415d14b07c1ccaeee452b2e17b4d05293
- Branch / Tag: refs/tags/v0.7.0
- Owner: https://github.com/modularizer
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@aa75bc7415d14b07c1ccaeee452b2e17b4d05293
- Trigger Event: release