
Plugboard

Plugboard is an event-driven modelling and orchestration framework in Python for simulating and driving complex processes with many interconnected stateful components.

You can use it to define models in Python and connect them together easily so that data automatically moves between them. After running your model on a laptop, you can then scale out on multiple processors, or go to a compute cluster in the cloud.

Some examples of what you can build with Plugboard include:

  • Digital twin models of complex processes:
    • It can easily handle common problems in industrial process simulation like material recirculation;
    • Models can be composed from different underlying components, e.g. physics-based simulations, machine-learning, AI models;
  • AI integrations:
    • You can feed data to/from different LLMs using Plugboard components;
    • Easily reconfigure and swap model providers for optimal performance.

🖋️ Key Features

  • Reusable classes containing the core framework, which you can extend to define your own model logic;
  • Support for different simulation paradigms: discrete-time and event-based;
  • YAML model specification format for saving model definitions, allowing you to run the same model locally or in cloud infrastructure;
  • A command line interface for executing models;
  • Built to handle the data intensive simulation requirements of industrial process applications;
  • Modern implementation with Python 3.12 and above based around asyncio with complete type annotation coverage;
  • Built-in integrations for loading/saving data from cloud storage and SQL databases;
  • Detailed logging of component inputs, outputs and state for monitoring and process mining or surrogate modelling use-cases.
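
The event-driven, asyncio-based design mentioned above can be sketched independently of Plugboard's own API. The snippet below is a minimal illustration of the underlying pattern only (it does not use Plugboard): a producer emits values, a consumer "steps" whenever input arrives, and an `asyncio.Queue` stands in for the connector between them.

```python
import asyncio

# Minimal sketch of the event-driven pattern (NOT Plugboard's API):
# a producer emits output events, a consumer steps when input arrives,
# and an asyncio.Queue plays the role of the connector between them.

async def producer(queue: asyncio.Queue, iters: int) -> None:
    for i in range(iters):
        await queue.put(i)    # emit an output event
    await queue.put(None)     # sentinel: no more data

async def consumer(queue: asyncio.Queue, results: list) -> None:
    # Each arriving input triggers one "step"; here the step doubles the value.
    while (item := await queue.get()) is not None:
        results.append(2 * item)

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    await asyncio.gather(producer(queue, 5), consumer(queue, results))
    return results

print(asyncio.run(main()))  # [0, 2, 4, 6, 8]
```

Plugboard generalises this pattern: components declare named inputs and outputs, and the framework manages the queues, scheduling and termination for you.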

🔌 Installation

Plugboard requires Python >= 3.12. Install the package with pip inside a virtual environment:

python -m pip install plugboard

Optional integrations for different cloud providers can be installed using plugboard[aws], plugboard[azure] or plugboard[gcp].

Support for parallelisation can be installed using plugboard[ray].
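
For example (the extras names below come from the list above; note that some shells, such as zsh, require quoting the brackets):

```shell
# Base install
python -m pip install plugboard

# With optional extras (quoted to protect the brackets from the shell)
python -m pip install 'plugboard[aws]'   # AWS integrations
python -m pip install 'plugboard[ray]'   # Ray-based parallelisation
```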

⚡ Quickstart with AI

The fastest way to get started is to let your AI coding assistant build your first model. Run the following command in your project directory:

plugboard ai init

This copies a context file (AGENTS.md) into your project that gives your AI tool everything it needs to know about Plugboard: how components work, how to wire them together, library components that are already available, and best practices for building models.

Once initialised, simply open your AI tool of choice (GitHub Copilot, Cursor, Claude, etc.) and describe the model you want to build. For example:

"I want to simulate a production line with three machines in series. Each machine has a processing time drawn from a normal distribution and a 5% chance of failure. Model the throughput over 1000 time steps and save the results to a CSV."

Your AI assistant will use the AGENTS.md context to generate working Plugboard code — components, connectors, and a runnable process — tailored to your description.

🚀 Usage

Plugboard is built to help you with two things: defining process models, and executing those models. There are two main ways to interact with Plugboard: via the Python API, or via the CLI using model definitions saved in YAML format.

Building models with the Python API

A model is made up of one or more components, though Plugboard really shines when you have many! First we start by defining the Components within our model. Components can have only inputs, only outputs, or both. To keep it simple, we have just two components here, showing the most basic functionality. Each component has several methods which are called at different stages during model execution:

  • init for optional initialisation actions;
  • step to take a single step forward through time;
  • run to execute all steps;
  • destroy for optional teardown actions.

import typing as _t
from plugboard.component import Component, IOController as IO
from plugboard.schemas import ComponentArgsDict

class A(Component):
    io = IO(outputs=["out_1"])

    def __init__(self, iters: int, **kwargs: _t.Unpack[ComponentArgsDict]) -> None:
        super().__init__(**kwargs)
        self._iters = iters

    async def init(self) -> None:
        self._seq = iter(range(self._iters))

    async def step(self) -> None:
        try:
            self.out_1 = next(self._seq)
        except StopIteration:
            await self.io.close()


class B(Component):
    io = IO(inputs=["in_1"])

    def __init__(self, path: str, **kwargs: _t.Unpack[ComponentArgsDict]) -> None:
        super().__init__(**kwargs)
        self._path = path

    async def init(self) -> None:
        self._f = open(self._path, "w")

    async def step(self) -> None:
        out = 2 * self.in_1
        self._f.write(f"{out}\n")

    async def destroy(self) -> None:
        self._f.close()

There is also a @component decorator which simplifies creating Components for small, stateless transform-type functions. A component instance is created by calling the .component method on the object returned by the decorator. The wrapped function can be sync or async; it is called as the step method with the named inputs passed in as arguments. The declared inputs must match the function's arguments, and the function must return a dictionary whose keys match the declared outputs.

@component(inputs=["in_1"], outputs=["out_1"])
def pow2(in_1: int) -> dict[str, int]:
    return {"out_1": in_1 ** 2}

result = pow2(2)  # Preserves the original function call -> result = {"out_1": 4}
comp_pow2 = pow2.component(name="component-pow2")

Now we take these components, connect them up as a Process, and fire off the model. Using the Process context handler takes care of calling init at the beginning and destroy at the end for all Components. Calling Process.run triggers all the components to start iterating through their inputs until a termination condition is reached. Simulations proceed in an event-driven manner: when inputs arrive, the components are triggered to step forward in time. The framework handles the details of the inter-component communication; you just need to specify the logic of your components and the connections between them.

from plugboard.connector import AsyncioConnector
from plugboard.process import LocalProcess
from plugboard.schemas import ConnectorSpec

process = LocalProcess(
    components=[A(name="component-a", iters=5), B(name="component-b", path="b.txt"), comp_pow2],
    connectors=[
        AsyncioConnector(
            spec=ConnectorSpec(source="component-a.out_1", target="component-b.in_1"),
        ),
        AsyncioConnector(
            spec=ConnectorSpec(source="component-a.out_1", target=f"{comp_pow2.name}.in_1"),
        )
    ],
)
async with process:
    await process.run()

Visually, we've created the model below, with Plugboard automatically handling the flow of data between the components.

flowchart LR
  subgraph Process
    direction LR
    comp_a(A<br>**component-a**)
    comp_b(B<br>**component-b**)
    comp_pow2(pow2<br>**component-pow2**)
  end
  comp_a -- out_1 --> comp_b
  comp_a -- out_1 --> comp_pow2

Executing pre-defined models on the CLI

In many cases, we want to define components once, with suitable parameters, and then use them repeatedly in different simulations. Plugboard enables this workflow with model specification files in YAML format. Once the components have been defined, the simple model above can be represented as follows. Components auto-generated with the @component decorator can be referenced by the name of the wrapped function.

# my-model.yaml
plugboard:
  process:
    args:
      components:
      - type: hello_world.A
        args:
          name: "component-a"
          iters: 10
      - type: hello_world.B
        args:
          name: "component-b"
          path: "./b.txt"
      - type: hello_world.pow2
        args:
          name: "component-pow2"
      connectors:
      - source: "component-a.out_1"
        target: "component-b.in_1"
      - source: "component-a.out_1"
        target: "component-pow2.in_1"

We can now run this model using the plugboard CLI with the command:

plugboard process run my-model.yaml

📖 Documentation

For more information, including a detailed API reference and step-by-step usage examples, refer to the documentation site. We recommend starting with the tutorials for a guided introduction.

🐾 Roadmap

Plugboard is under active development, with new features in the works:

  • Support for strongly typed data messages and validation based on pydantic.
  • Support for different parallelisation patterns such as: single-threaded with coroutines, single-host multi process, or distributed with Ray in Kubernetes.
  • Data exchange between components with popular messaging technologies like RabbitMQ and Google Pub/Sub.
  • Support for different message exchange patterns, such as one-to-one, one-to-many and many-to-one via a broker, or peer-to-peer with HTTP requests.

👋 Contributions

Contributions are welcomed and warmly received! For bug fixes and smaller feature requests feel free to open an issue on this repo. For any larger changes please get in touch with us to discuss first. More information for developers can be found in the contributing section of the docs.

⚖️ Licence

Plugboard is offered under the Apache 2.0 Licence so it's free for personal or commercial use within those terms.
