A generic bridge between EPICS IOCs and Python logic.

EPICS Bridge

EPICS Bridge is a Python framework for building a robust, high-availability EPICS-Python interface. It provides a structured environment for bridging external control logic with the EPICS control system, emphasizing synchronous execution, fault tolerance, and strict process monitoring.

This library addresses common reliability challenges such as preventing silent stalls ("zombie processes") and handling network I/O failures deterministically.

Documentation

Comprehensive project documentation lives in docs/README.md.

System Architecture

The core of epics-bridge relies on a Twin-Thread Architecture that decouples the control logic from the monitoring signal.

1. Synchronous Control Loop (Main Thread)

The primary thread executes the user-defined logic in a strict, synchronous cycle:

  1. Trigger: Waits for an input event or timer.
  2. Run Task: Executes the user-defined task.
  3. Acknowledge: Updates the task status and completes the handshake.
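The cycle above can be sketched roughly as follows (the helper names wait_for_trigger, run_task, and publish_status are illustrative, not part of the epics-bridge API):

```python
import time

def control_cycle(wait_for_trigger, run_task, publish_status):
    """One pass of the synchronous trigger/run/acknowledge cycle (illustrative)."""
    wait_for_trigger()                 # 1. Trigger: block until an event or timer fires
    start = time.monotonic()
    status = run_task()                # 2. Run Task: execute the user-defined logic
    duration = time.monotonic() - start
    publish_status(status, duration)   # 3. Acknowledge: report result, complete handshake
    return status, duration
```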

2. Isolated Heartbeat Monitor (Daemon Thread)

A separate, isolated thread acts as an internal watchdog. It monitors the activity timestamp of the Main Thread.

  • Operational: Pulses the Heartbeat PV as long as the Main Thread is active.
  • Stalled (Zombie Protection): If the Main Thread hangs (e.g., infinite loop, deadlocked IO) for longer than the defined tolerance, the Heartbeat thread ceases pulsing immediately. This alerts external watchdogs (e.g., the IOC or alarm handler) that the process is unresponsive.
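A hedged sketch of such a watchdog loop follows; the callables and parameters are illustrative stand-ins, and the real daemon's internals may differ:

```python
import threading
import time

def heartbeat_loop(get_last_active, pulse, tolerance_s, stop_event, period_s=0.5):
    """Illustrative watchdog: pulse only while the main thread looks alive."""
    while not stop_event.is_set():
        if time.monotonic() - get_last_active() <= tolerance_s:
            pulse()  # main thread updated its activity timestamp recently
        # else: stay silent so external watchdogs notice the missing heartbeat
        stop_event.wait(period_s)
```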

3. Automatic Recovery ("Suicide Pact")

To support containerized environments (Docker, Kubernetes) or systemd supervisors, the daemon implements a fail-fast mechanism. If network connectivity is lost or IO errors persist beyond a configurable threshold (max_stuck_cycles), the watchdog performs a hard-kill of the process (os._exit(1)). This allows the external supervisor to perform a clean restart of the service.
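A sketch of that decision, with the exit call injected so it can be exercised safely (the function and parameter names here are illustrative; only max_stuck_cycles comes from the library's configuration):

```python
import os

def check_fail_fast(stuck_cycles, max_stuck_cycles, hard_exit=os._exit):
    """Illustrative fail-fast check: hard-kill once IO stays stuck too long."""
    if stuck_cycles > max_stuck_cycles:
        # os._exit(1) skips interpreter cleanup, so even a wedged process dies;
        # the external supervisor (Docker, Kubernetes, systemd) then restarts it.
        hard_exit(1)
        return True
    return False
```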

4. Logger

Writes important messages from the daemon shell to a configured log file.
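A minimal sketch of such a setup using the standard logging module (the function name and format string are assumptions, not the library's actual configuration):

```python
import logging

def configure_logging(log_path, level=logging.INFO):
    """Illustrative setup: send daemon messages to the shell and a log file."""
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        handlers=[logging.StreamHandler(), logging.FileHandler(log_path)],
        force=True,  # replace any handlers installed earlier
    )
    return logging.getLogger("epics_bridge")
```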

Installation

# Install the package
pip install .

# Install test dependencies
pip install -r requirements-test.txt

Note: requirements-test.txt is configured to use an ESS Artifactory Python index. For ESS internal usage, install as-is. If you do not have access, remove the --index-url ... line or install equivalent dependencies from your own index.

Conda environment (recommended for integration tests)

Integration tests run a real IOC and require EPICS tooling. A working reference environment is provided in environment.yml.

conda env create -f environment.yml
conda activate epics-bridge
pip install -e .

Note: environment.yml uses the ess-conda-local channel. If you are outside ESS, you may need to adjust channels and package availability for the EPICS toolchain.

Project Structure

  • epics_bridge.daemon Main control loop, heartbeat logic, and failure handling

  • epics_bridge.io Synchronous P4P client wrapper with strict error handling

  • epics_bridge.base_pv_interface PV template definitions and prefix validation

  • epics_bridge.utils Small utilities (for example, the Timer context manager)


Quick Start

1. EPICS Interface

The IOC should load one standard EPICS db that handles the daemon's basic functionality, plus any number of specialized dbs that implement the intended application logic.

The standard db should always be loaded by the IOC that interfaces with the daemon. These are its contents:

record(bo, "$(P)Trigger") {
    field(DESC, "Start Task")
    field(ZNAM, "Idle")
    field(ONAM, "Run")
}

record(bi, "$(P)Busy") {
    field(DESC, "Task Running Status")
    field(ZNAM, "Idle")
    field(ONAM, "Busy")
}

record(bi, "$(P)Heartbeat") {
    field(DESC, "Daemon Heartbeat")
}

record(mbbi, "$(P)TaskStatus") {
    field(DESC, "Last Cycle Result")
    # State 0: Success (Green)
    field(ZRVL, "0")
    field(ZRST, "Success")
    field(ZRSV, "NO_ALARM")

    # State 1: Logic Failure (Yellow - e.g. Interlock)
    field(ONVL, "1")
    field(ONST, "Task Fail")
    field(ONSV, "MINOR")

    # State 2: EPICS IO Failure (Yellow - e.g. PV Read/Write Error)
    field(TWVL, "2")
    field(TWST, "IO Failure")
    field(TWSV, "MINOR")

    # State 3: Exception (Red - Software/Hardware Crash)
    field(THVL, "3")
    field(THST, "Exception")
    field(THSV, "MAJOR")

    # State 4: Skipped (e.g. trigger=False)
    field(FRVL, "4")
    field(FRST, "Skipped")
    field(FRSV, "NO_ALARM")
}

record(ai, "$(P)TaskDuration") {
    field(DESC, "Task duration")
    field(PREC, "2")
    field(EGU,  "s")
    field(HIGH, "5")
    field(HSV,  "MINOR")
    field(HIHI, "10")
    field(HHSV, "MAJOR")
}

record(waveform, "$(P)DebugLog") {
    field(DESC, "Recent debug log entries")
    field(FTVL, "STRING")
    field(NELM, "100")
}

record(waveform, "$(P)InfoLog") {
    field(DESC, "Recent info log entries")
    field(FTVL, "STRING")
    field(NELM, "100")
}

record(waveform, "$(P)WarningLog") {
    field(DESC, "Recent warning log entries")
    field(FTVL, "STRING")
    field(NELM, "100")
}

record(waveform, "$(P)ErrorLog") {
    field(DESC, "Recent error log entries")
    field(FTVL, "STRING")
    field(NELM, "100")
}

record(mbbo, "$(P)LogLevel") {
    field(DESC, "Runtime daemon log level")
    field(PINI, "YES")
    field(VAL,  "1")
    field(ZRST, "DEBUG")
    field(ONST, "INFO")
    field(TWST, "WARNING")
    field(THST, "ERROR")
}

record(waveform, "$(P)SilencedLoggers") {
    field(DESC, "Third-party logger names to clamp")
    field(FTVL, "STRING")
    field(NELM, "20")
}

2. Define a Python PV Interface

Subclass BasePVInterface and create PV instances in your constructor. Call super().__init__(prefixes=...) and then add your PVs; placeholder resolution runs automatically. Placeholders use Python format syntax {key} (e.g. {main}, {p1}) and are replaced from prefixes. The standard PVs (trigger, busy, heartbeat, task_status, task_duration) are created by the base class.

from epics_bridge import BasePVInterface, PV

class MotorInterface(BasePVInterface):
    def __init__(self, prefixes: dict | None = None) -> None:
        super().__init__(prefixes=prefixes)
        self.position_rbv = PV("{main}Pos:RBV")
        self.velocity_sp = PV("{main}Vel:SP")
        self.temperature = PV("{sys}Temp:Mon")

You can also keep PV instances in lists or small helper objects (for example, self.other_pvs = [PV("{main}Ch:01"), ...]). Placeholders in those names are still expanded from prefixes after __init__, but such PVs are not treated as top-level interface attributes: filter_pvs(), pv_to_attr, and iteration or membership tests over the interface cover only direct self.<name> PV fields, so access nested PVs through your own structure.
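To illustrate the placeholder mechanics only, here is a self-contained sketch with a stand-in PV class (the real epics_bridge.PV and its resolution logic may differ):

```python
class FakePV:
    """Stand-in for epics_bridge.PV, used only to demonstrate {key} expansion."""
    def __init__(self, name: str) -> None:
        self.name = name

def resolve_placeholders(pvs, prefixes):
    """Expand {key} placeholders in each PV name from the prefixes mapping."""
    for pv in pvs:
        pv.name = pv.name.format(**prefixes)
    return pvs

# Nested PVs kept in a plain list, as described above:
other_pvs = [FakePV("{main}Ch:01"), FakePV("{sys}Temp:Mon")]
resolve_placeholders(other_pvs, {"main": "IOC:MOTOR:01:", "sys": "IOC:SYS:"})
```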

3. Implement Control Logic

Subclass BridgeDaemon and implement the synchronous run_task() method. Use pvget(PV or list of PVs) to read (mutates each PV’s .val and .raw), then read pv.val. Use pvput(list of PVs) to write (each PV’s .val is written to its channel). Let exceptions from run_task() bubble up; the base class guarantees cleanup and logs failures.

from epics_bridge import BridgeDaemon, TaskStatus

class MotorControlDaemon(BridgeDaemon):
    def run_task(self) -> TaskStatus:
        self.io.pvget(self.iface.velocity_sp)
        velocity = self.iface.velocity_sp.val

        if velocity is None:
            return TaskStatus.IO_FAILURE

        self.iface.position_rbv.val = velocity * 0.5
        self.io.pvput([self.iface.position_rbv])

        return TaskStatus.SUCCESS

4. Run the Daemon

def main():
    prefixes = {
        "main": "IOC:MOTOR:01:",
        "sys": "IOC:SYS:"
    }

    interface = MotorInterface(prefixes=prefixes)

    daemon = MotorControlDaemon(
        iface=interface,
    )

    daemon.start()

if __name__ == "__main__":
    main()

Example: Echo daemon (IOC + daemon)

This repository includes a complete example under examples/echo_daemon/:

  • st.cmd: IOC startup script (loads base_interface.db + echo-specific DBs from this directory)
  • echo_interface.py: PV interface (PVs created in constructor)
  • echo_daemon.py: example BridgeDaemon subclass
  • entrypoint.py: runnable entrypoint which sets up logging and starts the daemon
  • opi/main.bob: optional operator interface asset for the example

Typical workflow (requires EPICS + pvxs tooling; easiest via environment.yml):

# Terminal A: start IOC
E3_CMD_TOP="$(pwd)/examples/echo_daemon" run-iocsh examples/echo_daemon/st.cmd

# Terminal B: start daemon (logs under --log-dir)
python examples/echo_daemon/entrypoint.py --log-dir /tmp

Testing

# Unit tests (pure Python)
pytest -m "not slow" -v
# or
conda run -n epics-bridge pytest -m "not slow" -v

# Integration tests (IOC + daemon)
pytest -m slow -v

