A Rust-backed subprocess wrapper with split stdout/stderr streaming

running-process

running-process is what you wished Python's subprocess was: blazing fast, highly concurrent, and packed with features such as dead-process tracking and PTY support. Built in Rust with a thin Python API.

CI runs build, lint, unit-test, and integration-test jobs on every supported platform:

  • Linux x86 and ARM
  • Windows x86 and ARM
  • macOS x86 and ARM

Why?

This project started as a fix for Python's subprocess module. It was originally written in Python, then rewritten in OS-specific Rust. Now it is blazing fast, using OS threads, atomics, and proper signaling back to the Python API. The library reads the stderr and stdout streams in parallel, something subprocess lacks. It also offers cross-platform process tracking, PTY generation, and zombie-process tracking, plus a built-in expect for keyword event triggers and idle tracking (great for agent CLIs that don't notify when they are done; they just stop sending data).

This library is designed for speed, correctness, and portability. Terminal utilities are usually built for Windows or for Linux/macOS; this one is designed to run everywhere.

PTY Support Matrix

PTY support is a guaranteed part of the package contract on:

  • Windows
  • Linux
  • macOS

On those platforms, RunningProcess.pseudo_terminal(...), wait_for_expect(...), and wait_for_idle(...) are core functionality rather than optional extras.

Pty.is_available() remains as a compatibility shim and only reports False on unsupported platforms.

Pipe-backed API

from running_process import RunningProcess

process = RunningProcess(
    ["python", "-c", "import sys; print('out'); print('err', file=sys.stderr)"]
)

process.wait()

print(process.stdout)          # stdout only
print(process.stderr)          # stderr only
print(process.combined_output) # combined compatibility view

Captured data values stay plain str | bytes. Live stream handles are exposed separately:

if process.stdout_stream.available():
    print(process.stdout_stream.drain())

Process priority is a first-class launch option:

from running_process import CpuPriority, RunningProcess

process = RunningProcess(
    ["python", "-c", "import time; time.sleep(1)"],
    nice=CpuPriority.LOW,
)

nice= behavior:

  • accepts either a raw int niceness or a platform-neutral CpuPriority
  • on Unix, it maps directly to process niceness
  • on Windows, positive values map to below-normal or idle priority classes and negative values map to above-normal or high priority classes
  • 0 leaves the default scheduler priority unchanged
  • positive values are the portable default; negative values may require elevated privileges
  • the enum intentionally stops at HIGH; there is no realtime tier
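The Unix-to-Windows mapping above can be pictured with a small sketch. The exact cutoffs running-process uses are internal; the thresholds and class names below are illustrative assumptions, not the library's implementation:

```python
# Illustrative sketch: mapping a Unix-style niceness value to a
# Windows priority-class name. Thresholds here are assumptions.
def windows_priority_class(nice: int) -> str:
    if nice == 0:
        return "NORMAL"          # leave the scheduler default unchanged
    if nice > 0:                 # lower priority: the portable default
        return "IDLE" if nice >= 15 else "BELOW_NORMAL"
    # negative: higher priority, may require elevated privileges
    return "HIGH" if nice <= -15 else "ABOVE_NORMAL"

print(windows_priority_class(5))    # BELOW_NORMAL
print(windows_priority_class(-20))  # HIGH
```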

Available helpers:

  • get_next_stdout_line(timeout)
  • get_next_stderr_line(timeout)
  • get_next_line(timeout) for combined compatibility reads
  • stream_iter(timeout) or for stdout, stderr, exit_code in process
  • drain_stdout()
  • drain_stderr()
  • drain_combined()
  • stdout_stream.available()
  • stderr_stream.available()
  • combined_stream.available()

stream_iter(...) yields tuple-like ProcessOutputEvent(stdout, stderr, exit_code) records. Only one stream payload is populated per nonterminal item. When both pipes are drained, it yields (EOS, EOS, exit_code) if the child has already exited, or (EOS, EOS, None) followed by a final (EOS, EOS, exit_code) if the child closed both pipes before it exited.
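The termination contract can be modeled with a small consumer loop. ProcessOutputEvent is simulated here with plain tuples and a stand-in EOS sentinel, since this sketch does not launch a real child:

```python
# Simulated stream_iter consumption. EOS is a stand-in sentinel; the
# real library exposes its own. Events are (stdout, stderr, exit_code).
EOS = object()

events = [
    ("out line", None, None),
    (None, "err line", None),
    (EOS, EOS, None),   # both pipes closed, child still running
    (EOS, EOS, 0),      # terminal event carries the exit code
]

stdout_lines, stderr_lines, exit_code = [], [], None
for out, err, code in events:
    if out not in (None, EOS):
        stdout_lines.append(out)
    if err not in (None, EOS):
        stderr_lines.append(err)
    if out is EOS and err is EOS and code is not None:
        exit_code = code  # terminal record: stop consuming
        break

print(stdout_lines, stderr_lines, exit_code)
```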

RunningProcess.run(...) supports common subprocess.run(...) style cases including:

  • capture_output=True
  • text=True
  • encoding=...
  • errors=...
  • shell=True
  • env=...
  • nice=...
  • stdin=subprocess.DEVNULL
  • input=... in text or bytes form

Unsupported subprocess.run(...) kwargs now fail loudly instead of being silently ignored.
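The fail-loudly behavior can be sketched as a kwarg gate. The supported set below mirrors the list above; the exception type is an assumption, not necessarily what the library raises:

```python
# Sketch: reject unsupported subprocess.run-style kwargs up front
# instead of silently dropping them. TypeError is an assumption here.
SUPPORTED = {
    "capture_output", "text", "encoding", "errors",
    "shell", "env", "nice", "stdin", "input",
}

def check_run_kwargs(**kwargs):
    unsupported = set(kwargs) - SUPPORTED
    if unsupported:
        raise TypeError(f"unsupported kwargs: {sorted(unsupported)}")

check_run_kwargs(capture_output=True, text=True)  # accepted silently
try:
    check_run_kwargs(preexec_fn=None)             # rejected loudly
except TypeError as exc:
    print(exc)
```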

Expect API

expect(...) is available on both the pipe-backed and PTY-backed process APIs.

import re
import subprocess
from running_process import RunningProcess

process = RunningProcess(
    ["python", "-c", "print('prompt>'); import sys; print('echo:' + sys.stdin.readline().strip())"],
    stdin=subprocess.PIPE,
)

process.expect("prompt>", timeout=5, action="hello\n")
match = process.expect(re.compile(r"echo:(.+)"), timeout=5)
print(match.groups)

Supported action= forms:

  • str or bytes: write to stdin
  • "interrupt": send Ctrl-C style interrupt when supported
  • "terminate"
  • "kill"

Pipe-backed expect(...) matches line-delimited output. If the child writes prompts without trailing newlines, use the PTY API instead.
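The line-delimited limitation is easy to see in a minimal matcher sketch: a pattern only matches once its line is complete, so a prompt written without a trailing newline never surfaces. The buffering logic below is illustrative, not the library's internals:

```python
# Minimal line-delimited matcher: bytes are buffered until a newline,
# so "prompt>" with no trailing "\n" is never yielded as a line.
import re

def match_lines(chunks, pattern):
    buf = ""
    for chunk in chunks:
        buf += chunk
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            m = re.search(pattern, line)
            if m:
                return m.group(0)
    return None  # pattern is stuck inside an incomplete line

print(match_lines(["echo:hi\n"], r"echo:\w+"))   # echo:hi
print(match_lines(["prompt>"], r"prompt>"))      # None: no newline arrived
```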

PTY API

Use RunningProcess.pseudo_terminal(...) for interactive terminal sessions. It is chunk-oriented by design and preserves carriage returns and terminal control flow instead of normalizing it away.

from running_process import ExpectRule, RunningProcess

pty = RunningProcess.pseudo_terminal(
    ["python", "-c", "import sys; sys.stdout.write('name?'); sys.stdout.flush(); print('hello ' + sys.stdin.readline().strip())"],
    text=True,
    expect=[ExpectRule("name?", "world\n")],
    expect_timeout=5,
)

print(pty.output)

PTY behavior:

  • accepts str and list[str] commands
  • auto-splits simple string commands into argv when shell syntax is not present
  • uses shell mode automatically when shell metacharacters are present
  • is guaranteed on supported Windows, Linux, and macOS builds
  • keeps output chunk-buffered by default
  • preserves \r for redraw-style terminal output
  • supports write(...), read(...), drain(), available(), expect(...), resize(...), and send_interrupt()
  • supports nice=... at launch
  • supports interrupt_and_wait(...) for staged interrupt escalation
  • supports wait_for_idle(...) with activity filtering
  • exposes exit_reason, interrupt_count, interrupted_by_caller, and exit_status

wait_for_idle(...) has two modes:

  • default fast path: built-in PTY activity rules and optional process metrics
  • slow path: IdleDetection(idle_reached=...), where your Python callback receives an IdleInfoDiff delta and returns IdleDecision.DEFAULT, IdleDecision.ACTIVE, IdleDecision.BEGIN_IDLE, or IdleDecision.IS_IDLE
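A slow-path callback can be sketched as a simple quiet-window heuristic. IdleInfoDiff's real fields are not documented here, so the delta below is a hypothetical stand-in carrying only a byte count, and the string decisions stand in for the IdleDecision enum:

```python
# Hypothetical idle heuristic over a stream of activity deltas.
# Each delta is "bytes of new output since the last check"; the real
# IdleInfoDiff carries richer data.
def idle_decision(new_bytes: int, quiet_checks: int, threshold: int = 3) -> str:
    if new_bytes > 0:
        return "ACTIVE"
    if quiet_checks + 1 >= threshold:
        return "IS_IDLE"
    return "BEGIN_IDLE"

quiet = 0
for delta in [120, 0, 0, 0]:
    decision = idle_decision(delta, quiet)
    quiet = 0 if decision == "ACTIVE" else quiet + 1
    print(delta, decision)
```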

There is also a compatibility alias, RunningProcess.psuedo_terminal(...), which keeps a historical misspelling working for existing callers.

You can also inspect the intended interactive launch semantics without launching a child:

from running_process import RunningProcess

spec = RunningProcess.interactive_launch_spec("console_isolated")
print(spec.ctrl_c_owner)
print(spec.creationflags)

Supported launch specs:

  • pseudo_terminal
  • console_shared
  • console_isolated

For an actual launch, use RunningProcess.interactive(...):

process = RunningProcess.interactive(
    ["python", "-c", "print('hello from interactive mode')"],
    mode="console_shared",
    nice=5,
)
process.wait()

Abnormal Exits

By default, nonzero exits stay subprocess-like: you get a return code and can inspect exit_status.

process = RunningProcess(["python", "-c", "import sys; sys.exit(3)"])
process.wait()
print(process.exit_status)

If you want abnormal exits to raise, opt in:

from running_process import ProcessAbnormalExit, RunningProcess

try:
    RunningProcess.run(
        ["python", "-c", "import sys; sys.exit(3)"],
        capture_output=True,
        raise_on_abnormal_exit=True,
    )
except ProcessAbnormalExit as exc:
    print(exc.status.summary)

Notes:

  • keyboard interrupts still raise KeyboardInterrupt
  • kill -9 / SIGKILL is classified as an abnormal signal exit
  • possible OOM conditions are exposed as a hint on exit_status.possible_oom
  • OOM cannot be identified perfectly across platforms from exit status alone, so it is best-effort rather than guaranteed

Text and bytes

Pipe mode is byte-safe internally:

  • invalid UTF-8 does not break capture
  • text mode decodes with UTF-8 and errors="replace" by default
  • binary mode returns bytes unchanged
  • \r\n is normalized as a line break in pipe mode
  • bare \r is preserved
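The decoding rule above matches standard Python codec behavior, which can be checked directly with the stdlib:

```python
# Invalid UTF-8 survives capture: binary mode passes bytes through,
# and text mode decodes with errors="replace" instead of raising.
raw = b"ok \xff\xfe end\r\nnext"

# binary mode: bytes are returned unchanged
assert isinstance(raw, bytes)

# text mode: each invalid byte becomes U+FFFD (the replacement character)
text = raw.decode("utf-8", errors="replace")
print(repr(text))
```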

PTY mode is intentionally more conservative:

  • output is handled as chunks, not lines
  • redraw-oriented \r is preserved
  • no automatic terminal-output normalization is applied

Development

./install
./lint
./test

./install bootstraps rustup into the shared user locations (~/.cargo and ~/.rustup, or CARGO_HOME / RUSTUP_HOME if you override them), then installs the exact toolchain pinned in rust-toolchain.toml. Toolchain installs are serialized with a lock so concurrent repo bootstraps do not race the same shared version.

./lint applies cargo fmt and Ruff autofixes before running the remaining lint checks, so fixable issues are rewritten in place.

./test runs the Rust tests, rebuilds the native extension with the unoptimized dev profile, runs the non-live Python tests, and then runs the @pytest.mark.live coverage that exercises real OS process and signal behavior.

On local developer machines, ./test also runs the Linux Docker preflight so Windows and macOS development catches Linux wheel, lint, and non-live pytest regressions before push. GitHub-hosted Actions skip that Docker-only preflight and run the native platform suite directly.

If you want to invoke pytest directly, set RUNNING_PROCESS_LIVE_TESTS=1 and run uv run pytest -m live.

For direct Rust commands, prefer the repo trampolines, which prepend the shared rustup proxy location:

./_cargo check --workspace
./_cargo fmt --all --check
./_cargo clippy --workspace --all-targets -- -D warnings

On Windows, native rebuilds that compile bundled C code should run from a Visual Studio developer shell. When the environment is ambiguous, point maturin at the MSVC toolchain binaries directly rather than relying on the generic cargo proxy.

For local extension rebuilds, prefer:

uv run build.py

That defaults to building a dev-profile wheel and reinstalling it into the repo's uv environment, which keeps the native extension in site-packages instead of copying it into src/. For publish-grade artifacts, use:

uv run build.py --release

Process Containment

ContainedProcessGroup ensures all child processes are killed when the group is dropped, using OS-level mechanisms (Job Objects on Windows, process groups + SIGKILL on Unix).

from running_process import ContainedProcessGroup

with ContainedProcessGroup() as group:
    proc = group.spawn(["sleep", "3600"])
# all children killed on exit, even on crash

Crash-resilient orphan discovery

When a parent crashes, its in-process registry is lost. ContainedProcessGroup can stamp every child with an environment variable that survives parent death:

from running_process import ContainedProcessGroup, find_processes_by_originator

# At launch: tag children with your tool name
with ContainedProcessGroup(originator="MYTOOL") as group:
    proc = group.spawn(["long-running-worker"])

# Later (from any process, any session): find orphans
stale = find_processes_by_originator("MYTOOL")
for info in stale:
    if not info.parent_alive:
        print(f"Orphaned PID {info.pid} from dead parent {info.parent_pid}")

The env var RUNNING_PROCESS_ORIGINATOR=TOOL:PID is inherited by all descendants. The scanner uses process start times to guard against PID reuse.
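The start-time guard amounts to an identity check: a PID alone is ambiguous after recycling, but a (pid, start_time) pair is effectively unique. The field names below are hypothetical, chosen just to illustrate the idea:

```python
# A recorded child is only "the same process" if both the PID and the
# OS-reported process start time match; a reused PID gets a new start time.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrackedProc:
    pid: int
    start_time: float  # as reported by the OS at spawn

def is_same_process(recorded: TrackedProc, current: TrackedProc) -> bool:
    return (recorded.pid == current.pid
            and recorded.start_time == current.start_time)

old = TrackedProc(pid=4242, start_time=1000.0)
reused = TrackedProc(pid=4242, start_time=2000.0)  # PID recycled later
print(is_same_process(old, old))     # True
print(is_same_process(old, reused))  # False
```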

Tracked PID Cleanup

RunningProcess, InteractiveProcess, and PTY-backed launches register their live PIDs in a SQLite database. The default location is:

  • Windows: %LOCALAPPDATA%\running-process\tracked-pids.sqlite3
  • Override: RUNNING_PROCESS_PID_DB=/custom/path/tracked-pids.sqlite3

If a bad run leaves child processes behind, terminate everything still tracked in the database:

python scripts/terminate_tracked_processes.py

Notes

  • stdout and stderr are no longer merged by default.
  • combined_output exists for compatibility when you need the merged view.
  • RunningProcess(..., use_pty=True) is no longer the preferred path; use RunningProcess.pseudo_terminal(...) for PTY sessions.
  • On supported Windows builds, PTY support is provided by the native Rust extension rather than a Python winpty fallback.
  • The test suite checks that running_process.__version__, package metadata, and manifest versions stay in sync.
