# ophyd-epicsrs

Rust EPICS Channel Access backend for ophyd.

Replaces pyepics (Python → ctypes → libca.so) with epics-rs (Python → PyO3 → Rust CA client), releasing the GIL during all network I/O.
## Installation

```bash
pip install ophyd-epicsrs
```

Building from source requires a Rust toolchain (1.85+):

```bash
pip install maturin
maturin develop
```
## Usage

Call `use_epicsrs()` once at startup, before constructing any ophyd Signals or Devices:

```python
from ophyd_epicsrs import use_epicsrs
use_epicsrs()

# All ophyd devices now use the Rust CA backend
import ophyd

motor = ophyd.EpicsMotor("IOC:m1", name="motor1")
motor.wait_for_connection(timeout=5)
print(motor.read())
```

`use_epicsrs()` assigns `ophyd.cl` directly. It must be called before any Signal or Device is constructed, since they capture `ophyd.cl.get_pv` at construction time.
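Why the ordering matters can be illustrated with a minimal pure-Python mock (the class names and string returns here are hypothetical, not the real ophyd API):

```python
# Two stand-in control layers, like pyepics vs. epicsrs backends.
class PyepicsCL:
    def get_pv(self, name):
        return f"pyepics PV {name}"

class EpicsRsCL:
    def get_pv(self, name):
        return f"epicsrs PV {name}"

cl = PyepicsCL()  # module-level control layer, standing in for ophyd.cl

class Signal:
    def __init__(self, name):
        self._get_pv = cl.get_pv  # bound method captured at construction
        self.name = name

early = Signal("IOC:m1")  # constructed before the swap: keeps pyepics
cl = EpicsRsCL()          # what use_epicsrs() effectively does
late = Signal("IOC:m1")   # constructed after the swap: gets epicsrs

print(early._get_pv(early.name))  # → pyepics PV IOC:m1
print(late._get_pv(late.name))    # → epicsrs PV IOC:m1
```

The `early` object keeps routing through the old backend forever, which is exactly why `use_epicsrs()` must run first.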
## Parallel PV Read (`bulk_caget`)

Read multiple PVs concurrently in a single call. All CA requests are sent simultaneously via tokio async, so the whole batch completes in roughly one network round-trip instead of N sequential reads.

```python
from ophyd_epicsrs import EpicsRsContext

ctx = EpicsRsContext()
data = ctx.bulk_caget([
    "IOC:enc_wf",
    "IOC:I0_wf",
    "IOC:ROI1:total_wf",
    "IOC:ROI2:total_wf",
    # ... tens to hundreds of PVs
], timeout=5.0)
# Returns dict: {"IOC:enc_wf": array, "IOC:I0_wf": array, ...}
```
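The concurrency pattern behind this can be sketched in pure Python, with `asyncio.gather` standing in for tokio's `join_all` and a simulated network delay (all names and values below are illustrative):

```python
import asyncio

async def fake_caget(pvname, delay=0.01):
    await asyncio.sleep(delay)       # simulated network round-trip
    return pvname, [1.0, 2.0, 3.0]   # simulated waveform data

async def bulk_caget_sketch(pvnames):
    # All requests are in flight simultaneously, so total wall time is
    # ~one round-trip rather than len(pvnames) round-trips.
    pairs = await asyncio.gather(*(fake_caget(pv) for pv in pvnames))
    return dict(pairs)

pvnames = [f"IOC:ROI{i}:total_wf" for i in range(1, 51)]
data = asyncio.run(bulk_caget_sketch(pvnames))
print(len(data))  # → 50
```

The real implementation additionally releases the GIL for the duration of the gather, so other Python threads keep running.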
## Fly Scan Acceleration

Combine `bulk_caget` with bluesky-dataforge's `AsyncMongoWriter` for maximum fly scan throughput:

```python
from ophyd_epicsrs import EpicsRsContext
from bluesky_dataforge import AsyncMongoWriter
import numpy as np
import time

ctx = EpicsRsContext()
writer = AsyncMongoWriter("mongodb://localhost:27017", "metadatastore")
RE.subscribe(writer)  # replaces RE.subscribe(db.insert)

# In your flyer's collect_pages():
def collect_pages(self):
    # 1. Parallel PV read — all waveforms in ~1 ms
    pvnames = [self.enc_wf_pv, self.i0_wf_pv]
    pvnames += [f"ROI{r}:total_wf" for r in range(1, self.numROI + 1)]
    raw = ctx.bulk_caget(pvnames)

    # 2. Deadtime correction (numpy, fast)
    enc = np.array(raw[self.enc_wf_pv])[:self.numPoints]
    i0 = np.array(raw[self.i0_wf_pv])[:self.numPoints]
    rois = {f"ROI{r}": np.array(raw[f"ROI{r}:total_wf"])[:self.numPoints]
            for r in range(1, self.numROI + 1)}

    # 3. Yield a single EventPage — one bulk insert instead of N row inserts
    now = time.time()
    ts = [now] * self.numPoints
    data = {"ENC": enc.tolist(), "I0": i0.tolist(),
            **{k: v.tolist() for k, v in rois.items()}}
    timestamps = {k: ts for k in data}
    yield {
        "data": data,
        "timestamps": timestamps,
        "time": ts,
        "seq_num": list(range(1, self.numPoints + 1)),
    }
    # → AsyncMongoWriter receives the EventPage
    # → Rust background thread: BSON conversion + insert_many
    # → Python is free to start the next scan immediately

writer.flush()  # wait for all pending inserts after the scan
```
Before (sequential):

```text
read PV1 (30ms) → read PV2 (30ms) → ... → read PV50 (30ms)  = 1500ms
yield row1 → db.insert (5ms) → yield row2 → db.insert (5ms) → ... = 500ms
Total: ~2000ms
```

After (parallel + EventPage):

```text
bulk_caget(50 PVs)                           = ~1ms
numpy deadtime correction                    = ~1ms
yield 1 EventPage → AsyncMongoWriter.enqueue = ~0.1ms
Total: ~2ms (Python free); MongoDB insert continues in background
```
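The sequential total follows directly from the per-operation costs in the example. A quick check (the row count of 100 is an assumption implied by the 500 ms insert figure, not stated in the source):

```python
# Hypothetical per-operation costs from the example above:
n_pvs, caget_ms = 50, 30         # 50 sequential cagets at 30 ms each
n_rows, insert_ms = 100, 5       # 100 row inserts at 5 ms each (implied)

reads_ms = n_pvs * caget_ms      # 1500 ms
inserts_ms = n_rows * insert_ms  # 500 ms
total_ms = reads_ms + inserts_ms
print(total_ms)  # → 2000
```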
## Performance

Measured against pyepics on the same IOC (EPICS motor record, LAN):
| Operation | pyepics | epicsrs | Speedup |
|---|---|---|---|
| CA get (no monitor) | 0.33 ms | 0.09 ms | 3.7x |
| CA get (with monitor) | 0.01 ms | 0.00 ms | — |
| CA put → immediate get | 0.85 ms | 0.44 ms | 1.9x |
| bulk_caget (50 PVs) | ~1500 ms | ~1 ms | 1500x |
| Device connect (200 PVs) | ~2 s | ~0.16 s | 12x |
The put→get improvement comes from the single-owner writer task architecture in epics-rs, which pipelines write and read requests on the same TCP connection without mutex contention. Combined with TCP_NODELAY, this eliminates the ~45ms head-of-line blocking that occurred when reads waited for writes to flush.
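For reference, `TCP_NODELAY` is a standard socket option that disables Nagle's algorithm. Setting it in Python looks like this (a sketch of the option itself, not of epics-rs internals):

```python
import socket

# TCP_NODELAY disables Nagle's algorithm, so small requests are sent
# immediately instead of waiting to coalesce with pending writes —
# the source of the delayed-ACK interaction mentioned above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
sock.close()
print(nodelay != 0)  # → True
```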
## Advantages over the pyepics backend

### Zero-latency monitor callbacks

In the pyepics backend, all monitor callbacks are queued through ophyd's dispatcher thread:

```text
EPICS event → C libca → pyepics callback → dispatcher queue → ophyd callback
```
This queuing introduces latency. When a motor moves fast, the DMOV (done-moving) signal transitions 0→1 quickly, but the callback is stuck behind hundreds of RBV position updates in the queue. This causes EpicsMotor.move(wait=True) to return before the motor actually stops — the well-known "another set call is still running" problem.
The epicsrs backend eliminates this by firing monitor callbacks directly from the Rust thread, bypassing the dispatcher queue entirely:

```text
EPICS event → Rust tokio → ophyd callback (direct)
```
Rust's thread safety guarantees (Send/Sync traits, GIL-aware PyO3) make this safe without additional locking. The result: DMOV transitions are never missed, regardless of motor speed.
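The dispatcher-queue problem can be modeled with a plain FIFO (a toy model, not ophyd's actual dispatcher):

```python
import queue

dispatcher = queue.Queue()             # shared FIFO, like ophyd's dispatcher
for i in range(500):
    dispatcher.put(("RBV", float(i)))  # position updates flood the queue
dispatcher.put(("DMOV", 1))            # done-moving transition arrives last

# With a shared queue, the DMOV callback cannot fire until every queued
# RBV callback has been processed first:
drained = 0
while True:
    kind, value = dispatcher.get()
    if kind == "DMOV":
        break
    drained += 1
print(drained)  # → 500

# Direct dispatch (the epicsrs model) has no shared queue: each monitor
# event invokes its Python callback immediately from the event thread.
```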
### No PV cache — safe Device re-creation
The pyepics backend caches PV objects by name. Creating a second ophyd Device with the same PV prefix (e.g. switching xspress3 detector channels) causes subscription conflicts because two Devices share one PV object.
The epicsrs backend creates a fresh PV object per get_pv() call. The Rust runtime handles TCP connection sharing (virtual circuits) at the transport layer, so there is no performance penalty. Multiple Devices with the same PV prefix work independently.
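The difference between the two strategies can be sketched with plain dictionaries standing in for PV objects (hypothetical helpers, not the real APIs):

```python
_cache = {}

def cached_get_pv(name):
    # pyepics-style: one shared PV object per name
    return _cache.setdefault(name, {"name": name, "callbacks": []})

def fresh_get_pv(name):
    # epicsrs-style: a new PV object per call; connection sharing
    # happens below this layer, in the transport
    return {"name": name, "callbacks": []}

a = cached_get_pv("XSP3:ch1")
b = cached_get_pv("XSP3:ch1")
print(a is b)  # → True: two Devices would share (and fight over) one PV

c = fresh_get_pv("XSP3:ch1")
d = fresh_get_pv("XSP3:ch1")
print(c is d)  # → False: independent objects, no subscription conflict
```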
### Device-level bulk connect

When an ophyd Device (e.g. an areaDetector with 200+ PVs) calls `wait_for_connection()`, the epicsrs backend collects all unconnected PVs and connects them in a single bulk operation:

```text
pyepics:  PV1 connect+read → PV2 connect+read → ... → PV200 connect+read
          200 sequential GIL round-trips, each blocking on network I/O

epicsrs:  collect 200 PVs → bulk_connect_and_prefetch(200 PVs)
          1 GIL release → tokio: 200 connects + 200 reads in parallel → 1 GIL return
```
This is a structural advantage that pyepics cannot match: pyepics processes CA reads sequentially at the Python level (`PV.get()` blocks one at a time), while epicsrs crosses the Python↔Rust boundary once and runs all network I/O concurrently in the tokio runtime.
The speedup scales with PV count — a 200-PV areaDetector Device initializes in ~0.16 s instead of ~2 s (see the Performance table above).
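The scaling argument can be demonstrated with simulated connects, using asyncio as a stand-in for the tokio runtime (illustrative delays, not measured CA timings):

```python
import asyncio
import time

async def connect(pv, delay=0.02):
    await asyncio.sleep(delay)  # simulated connect + initial read
    return pv

async def sequential(pvs):
    # one at a time, like PV.get() in a loop
    return [await connect(pv) for pv in pvs]

async def concurrent(pvs):
    # all at once, like bulk_connect_and_prefetch
    return await asyncio.gather(*(connect(pv) for pv in pvs))

pvs = [f"DET:PV{i}" for i in range(20)]

t0 = time.perf_counter()
asyncio.run(sequential(pvs))
seq_s = time.perf_counter() - t0   # ~20 × 20 ms

t0 = time.perf_counter()
asyncio.run(concurrent(pvs))
conc_s = time.perf_counter() - t0  # ~one 20 ms wait, shared by all

print(seq_s > conc_s)  # → True
```

The sequential cost grows linearly with PV count; the concurrent cost stays roughly flat at one connect latency.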
### GIL-released bulk read
bulk_caget reads multiple PVs concurrently using tokio join_all, completing in a single network round-trip with the GIL released. See the Parallel PV Read section above.
## Architecture

```text
ophyd (Python)
└── _epicsrs_shim.py      ophyd control layer interface
    └── ophyd_epicsrs     this package
        └── _native.so    PyO3 bindings
            └── epics-rs  pure Rust CA/PVA client (no libca.so)
```
### GIL behavior
| Operation | GIL |
|---|---|
| CA get / put | released — py.allow_threads() → tokio async |
| CA monitor receive | released — tokio background task |
| Monitor callback → Python | held — dispatch thread |
| Connection wait | released — tokio async |
| bulk_caget | released — tokio join_all |
### Key types

- `EpicsRsContext` — shared tokio runtime + CA client. One per session.
- `EpicsRsPV` — PV channel wrapper with `wait_for_connection`, `get_with_metadata`, `put`, `add_monitor_callback`.
## Requirements
- Python >= 3.8
- ophyd >= 1.9 (vanilla PyPI — no fork required)
- epics-rs (bundled at build time)
## Related

- bluesky-dataforge — Rust-accelerated document subscriber + async MongoDB writer
- epics-rs — pure Rust EPICS implementation
## License
BSD 3-Clause