A shell for subprocess

subprocess_shell is a Python package providing an alternative interface to subprocesses. The aim is simplicity comparable to shell scripting and transparency for more complex use cases.

(Video showcase: videos/aperitif.mp4. Note: the showcase presents an earlier version which didn't provide run() and wait(logs=...).)

Features

  • Simple
    • e.g. 5 functions (start, write, wait, read, run) and 3 operators (>>, +, -)
  • Transparent
  • Separates streams
    • no interleaving of stdout and stderr, nor of output from different processes in a chain
  • Avoids deadlocks due to OS pipe buffer limits by using queues
  • Uses Rich if available
  • Supports Windows[^r4]

[^r4]: Insofar as tests succeed most of the time. On my system, tests freeze up sometimes for no apparent reason. If you experience the same and can reproduce it consistently, please open an issue!
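The queue-based approach can be pictured with the standard library alone: a background thread drains each pipe into a `queue.Queue`, so the OS pipe buffer can never fill up and block the child while the parent is busy elsewhere. This is a minimal illustration of the technique, not the package's actual implementation.

```python
import queue
import subprocess
import sys
import threading

def drain(pipe, q):
    # Read off the pipe in a background thread so the OS pipe buffer
    # cannot fill up and block the child process.
    for line in pipe:
        q.put(line)
    q.put(None)  # sentinel: stream closed

# The child produces more output than a typical 64 KiB pipe buffer holds.
process = subprocess.Popen(
    [sys.executable, "-c", "print('x' * 200_000)"],
    stdout=subprocess.PIPE,
)
q = queue.Queue()
threading.Thread(target=drain, args=(process.stdout, q), daemon=True).start()

chunks = []
while (item := q.get()) is not None:
    chunks.append(item)

assert process.wait() == 0
output = b"".join(chunks)
```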

(Screenshot of Rich-formatted output: images/rich_output.png)

Examples

Each example shows the same task with `bash -e`, subprocess_shell, plain subprocess, and Plumbum[^r1].

initialization

```python
from subprocess_shell import *  # subprocess_shell
import subprocess               # subprocess
from plumbum import local       # Plumbum
```

run command (bash: `echo this`)

```python
# subprocess_shell
["echo", "this"] >> run()

# subprocess
assert subprocess.Popen(["echo", "this"]).wait() == 0

# Plumbum
local["echo"]["this"].run_fg()
```

redirect stream (bash: `echo this > /path/to/file`)

```python
# subprocess_shell
["echo", "this"] >> run(start(stdout="/path/to/file"))

# subprocess
with open("/path/to/file", "wb") as stdout:
    assert subprocess.Popen(["echo", "this"], stdout=stdout).wait() == 0

# Plumbum
(local["echo"]["this"] > "/path/to/file").run_fg()
```

read stream (bash: `a=$(echo this)`)

```python
# subprocess_shell
a = ["echo", "this"] >> run()

# subprocess
process = subprocess.Popen(["echo", "this"], stdout=subprocess.PIPE)
a, _ = process.communicate()
assert process.wait() == 0

# Plumbum
a = local["echo"]("this")
```

write stream

```bash
cat - <<EOF
this
EOF
```

```python
# subprocess_shell
["cat", "-"] >> run(write("this"))

# subprocess
process = subprocess.Popen(["cat", "-"], stdin=subprocess.PIPE)
process.communicate(b"this")
assert process.wait() == 0

# Plumbum
(local["cat"]["-"] << "this").run_fg()
```

chain commands (bash: `echo this | cat -`)

```python
# subprocess_shell
["echo", "this"] >> start() + ["cat", "-"] >> run()

# subprocess
process = subprocess.Popen(["echo", "this"], stdout=subprocess.PIPE)
assert subprocess.Popen(["cat", "-"], stdin=process.stdout).wait() == 0
assert process.wait() == 0

# Plumbum
(local["echo"]["this"] | local["cat"]["-"]).run_fg()
```

branch out (no simple bash equivalent)

```python
# subprocess_shell
import sys

_v_ = "import sys; print('stdout'); print('stderr', file=sys.stderr)"
arguments = [sys.executable, "-c", _v_]

process = arguments >> start(pass_stdout=True, pass_stderr=True)
process + ["cat", "-"] >> run()
process - ["cat", "-"] >> run()

# subprocess
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
assert subprocess.Popen(["cat", "-"], stdin=process.stdout).wait() == 0
assert subprocess.Popen(["cat", "-"], stdin=process.stderr).wait() == 0
assert process.wait() == 0
```

Plumbum: not supported[^r2]

errors in chains (no simple bash equivalent)

```python
# subprocess_shell
_v_ = ["echo", "this"] >> start(return_codes=(0, 1)) - ["cat", "-"]
_v_ >> run(wait(return_codes=(0, 2)))

# subprocess
first_process = subprocess.Popen(["echo", "this"], stderr=subprocess.PIPE)
second_process = subprocess.Popen(["cat", "-"], stdin=first_process.stderr)
assert first_process.wait() in (0, 1) and second_process.wait() in (0, 2)
```

Plumbum: not supported[^r2]

callbacks

```python
# subprocess_shell
["echo", "this"] >> run(start(stdout=print))
```

subprocess: only a limited approximation[^r3]

```python
process = subprocess.Popen(["echo", "this"], stdout=subprocess.PIPE)

for bytes in process.stdout:
    print(bytes)

assert process.wait() == 0
```

[^r1]: Mostly adapted from https://www.reddit.com/r/Python/comments/16byt8j/comment/jzhh21f/?utm_source=share&utm_medium=web2x&context=3
[^r2]: Has been requested years ago.
[^r3]: This is very limited and has several issues with potential for deadlocks. An exact equivalent would be too long for this table.

Notes

  • bash -e because errors can have serious consequences, e.g.

    ```bash
    a=$(failing command)
    sudo chown -R root:root "$a/"
    ```
  • assert process.wait() == 0 is the shortest (readable) code waiting for a process to stop and asserting the return code
  • complexity of code for Plumbum can be misleading because it has a much wider scope (e.g. remote execution and files)
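In plain Python, the same fail-fast behaviour as `bash -e` can be had with `subprocess.run(..., check=True)`, which raises instead of silently continuing; a minimal sketch:

```python
import subprocess
import sys

# subprocess.run(..., check=True) raises CalledProcessError on a nonzero
# return code, giving the same fail-fast behaviour as `bash -e`.
failed = False
try:
    subprocess.run([sys.executable, "-c", "raise SystemExit(1)"], check=True)
except subprocess.CalledProcessError as error:
    failed = True
    print("stopped early, return code", error.returncode)
```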

Quickstart

  • Prepare virtual environment (optional but recommended)

    • e.g. Pipenv: python -m pip install -U pipenv
  • Install subprocess_shell

    • e.g. python -m pipenv run pip install subprocess_shell
  • Import and use it

    • e.g. from subprocess_shell import * and python -m pipenv run python ...
  • Prepare tests

    • e.g. python -m pipenv run pip install subprocess_shell[test]
  • Run tests

    • e.g. python -m pipenv run pytest ./tests

Documentation

from subprocess_shell import *

Start process

```python
process = arguments >> start(
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    pass_stdout=False,
    stderr=subprocess.PIPE,
    pass_stderr=False,
    queue_size=0,
    logs=None,
    return_codes=(0,),
    force_color=True,
    async_=False,
    **{},
)
```

| argument | value | effect |
| --- | --- | --- |
| `arguments` | iterable | arguments are converted to strings using `str(...)` and passed to `subprocess.Popen(...)` |
| `stdin` | `subprocess.PIPE` | provide stdin |
| | any object | same as `subprocess.Popen(..., stdin=object)` |
| `stdout` | `subprocess.PIPE` | provide stdout |
| | string or `pathlib.Path` | redirect stdout to file |
| | `function(chunk: bytes \| str) -> typing.Any` | call function for each chunk from stdout |
| | any object | same as `subprocess.Popen(..., stdout=object)` |
| `pass_stdout` | `False` | if `stdout=subprocess.PIPE`: queue chunks from stdout |
| | `True` | don't use stdout |
| `stderr` | `subprocess.PIPE` | provide stderr |
| | string or `pathlib.Path` | redirect stderr to file |
| | `function(chunk: bytes \| str) -> typing.Any` | call function for each chunk from stderr |
| | any object | same as `subprocess.Popen(..., stderr=object)` |
| `pass_stderr` | `False` | if `stderr=subprocess.PIPE`: queue chunks from stderr |
| | `True` | don't use stderr |
| `queue_size` | `0` | no limit on size of queues |
| | `int > 0` | wait for other threads to process queues if full; !! can lead to deadlocks !! |
| `logs` | `None` | if in a chain: analog of `wait(logs=None)` |
| | boolean | if in a chain: analog of `wait(logs=False)` or `wait(logs=True)` |
| `return_codes` | `(0,)` | if in a chain: analog of `wait(return_codes=(0,))` |
| | collection object or `None` | if in a chain: analog of `wait(return_codes=object)` or `wait(return_codes=None)` |
| `force_color` | `False` | don't touch the environment variable `FORCE_COLOR` |
| | `True` | if the environment variable `FORCE_COLOR` is not set: set it to `1` |
| `async_` | `False` | on Windows: use asyncio; otherwise: use selectors |
| | `True` | use asyncio |
| `**` | `{}` | passed to `subprocess.Popen(...)` |
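For orientation, the file-redirect and per-chunk-callback behaviours of start() correspond roughly to the following plain-subprocess sketch (the paths and commands here are invented for the example):

```python
import pathlib
import subprocess
import sys
import tempfile

# Analog of start(stdout="/path/to/file"): redirect stdout to a file.
with tempfile.TemporaryDirectory() as directory:
    path = pathlib.Path(directory) / "out.txt"
    with path.open("wb") as stdout:
        assert subprocess.Popen(
            [sys.executable, "-c", "print('this')"], stdout=stdout
        ).wait() == 0
    data = path.read_bytes()

# Analog of start(stdout=callback): call a function for each chunk.
chunks = []
process = subprocess.Popen(
    [sys.executable, "-c", "print('that')"], stdout=subprocess.PIPE
)
for chunk in process.stdout:
    chunks.append(chunk)
assert process.wait() == 0
```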

Write to stdin

```python
process = process >> write(object, close=False, encoding=None)
```

| argument | value | effect |
| --- | --- | --- |
| `object` | string or bytes | en/decoded if necessary, written to stdin and flushed |
| `close` | `False` | keep stdin open |
| | `True` | close stdin after flush |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding the string |
| | `str` | use a different encoding |

requires `start(stdin=subprocess.PIPE)`

Wait for process

```python
return_code = process >> wait(
    stdout=True,
    stderr=True,
    logs=None,
    return_codes=(0,),
    rich=True,
    stdout_style="green",
    log_style="dark_orange3",
    error_style="red",
    ascii=False,
    encoding=None,
)
```

| argument | value | effect |
| --- | --- | --- |
| `stdout` | `True` | if stdout is queued: collect stdout, format and print to stdout |
| | `False` | don't use stdout |
| | any object | if stdout is queued: collect stdout, format and print with `print(..., file=object)` |
| `stderr` | `True` | if stderr is queued: collect stderr, format and print to stderr |
| | `False` | don't use stderr |
| | any object | if stderr is queued: collect stderr, format and print with `print(..., file=object)` |
| `logs` | `None` | write stdout first and use `log_style` for stderr if the return code assert succeeds, `error_style` otherwise |
| | `False` | write stdout first and use `error_style` for stderr |
| | `True` | write stderr first and use `log_style` |
| `return_codes` | `(0,)` | assert that the return code is 0 |
| | collection object | assert that the return code is in object |
| | `None` | don't assert the return code |
| `rich` | `True` | use Rich if available |
| | `False` | don't use Rich |
| `stdout_style` | `"green"` | use color "green" for the stdout frame |
| | style object or string | use style for the stdout frame, see Styles |
| `log_style` | `"dark_orange3"` | use color "dark_orange3" for the stderr frame, see argument `logs` |
| | style object or string | use style for the stderr frame, see argument `logs` and Styles |
| `error_style` | `"red"` | use color "red" for the stderr frame, see argument `logs` |
| | style object or string | use style for the stderr frame, see argument `logs` and Styles |
| `ascii` | `False` | use Unicode |
| | `True` | use ASCII |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding strings or decoding bytes objects |
| | `str` | use a different encoding |
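The ordering controlled by logs can be pictured with plain subprocess (a sketch of the ordering only, not the package's framing or styling): capture both streams, then print stdout before stderr.

```python
import subprocess
import sys

code = "import sys; print('out'); print('log', file=sys.stderr)"
result = subprocess.run(
    [sys.executable, "-c", code],
    capture_output=True,  # capture stdout and stderr separately
    encoding="utf-8",
)
# logs=None/False ordering: write stdout first, then stderr.
sys.stdout.write(result.stdout)
sys.stdout.write(result.stderr)
assert result.returncode == 0
```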

Read from stdout/stderr

```python
string = process >> read(
    stdout=True,
    stderr=False,
    bytes=False,
    encoding=None,
    logs=_DEFAULT,
    return_codes=_DEFAULT,
    wait=None,
)
# optionally one of
.cast_str     # shortcut for `typing.cast(str, ...)`
.cast_bytes   #          for `typing.cast(bytes, ...)`
.cast_strs    #          for `typing.cast(tuple[str, str], ...)`
.cast_bytess  #          for `typing.cast(tuple[bytes, bytes], ...)`
```

| argument | value | effect |
| --- | --- | --- |
| `stdout` | `True` | execute `process >> wait(..., stdout=False)`, collect stdout, join and return; requires queued stdout |
| | `False` | execute `process >> wait(..., stdout=True)` |
| | any object | execute `process >> wait(..., stdout=object)` |
| `stderr` | `False` | execute `process >> wait(..., stderr=True)` |
| | `True` | execute `process >> wait(..., stderr=False)`, collect stderr, join and return; requires queued stderr |
| | any object | execute `process >> wait(..., stderr=object)` |
| `bytes` | `False` | return a string or tuple of strings |
| | `True` | return bytes or a tuple of bytes |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding strings or decoding bytes objects |
| | `str` | use a different encoding |
| `logs` | `_DEFAULT` | use `logs` from argument `wait` |
| | bool or `None` | replace `logs` from argument `wait` |
| `return_codes` | `_DEFAULT` | use `return_codes` from argument `wait` |
| | any object | replace `return_codes` from argument `wait` |
| `wait` | `None` | use `wait()` for waiting |
| | `wait(...)` | use a different wait object |
```python
process.get_stdout_lines(bytes=False, encoding=None)  # generator[str | bytes]
process.get_stderr_lines(bytes=False, encoding=None)  # generator[str | bytes]
process.join_stdout_strings(encoding=None)  # str
process.join_stderr_strings(encoding=None)  # str
process.get_stdout_strings(encoding=None)  # generator[str]
process.get_stderr_strings(encoding=None)  # generator[str]
process.join_stdout_bytes(encoding=None)  # bytes
process.join_stderr_bytes(encoding=None)  # bytes
process.get_stdout_bytes(encoding=None)  # generator[bytes]
process.get_stderr_bytes(encoding=None)  # generator[bytes]
process.get_stdout_objects()  # generator[str | bytes]
process.get_stderr_objects()  # generator[str | bytes]
```

| argument | value | effect |
| --- | --- | --- |
| `bytes` | `False` | return an iterable of strings |
| | `True` | return an iterable of bytes |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding strings or decoding bytes objects |
| | `str` | use a different encoding |

requires queued stdout/stderr
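A plain-subprocess counterpart of the common read() case, capturing stdout as a decoded string while asserting the return code, can be sketched as:

```python
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('this')"],
    stdout=subprocess.PIPE,
    check=True,        # roughly: assert return code, like return_codes=(0,)
    encoding="utf-8",  # decode to str, like bytes=False
)
string = result.stdout.rstrip("\n")
```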

Chain processes / pass streams

```python
process = source_arguments >> start(...) + arguments >> start(...)
# or
source_process = source_arguments >> start(..., pass_stdout=True)
process = source_process + arguments >> start(...)

process = source_arguments >> start(...) - arguments >> start(...)
# or
source_process = source_arguments >> start(..., pass_stderr=True)
process = source_process - arguments >> start(...)

source_process = process.get_source_process()
```

  • process >> wait(...) waits for the processes from left/source to right/target

Shortcut

```python
object = object >> run(*args)
# optionally one of
.cast_int     # shortcut for `typing.cast(int, ...)`
.cast_str     #          for `typing.cast(str, ...)`
.cast_bytes   #          for `typing.cast(bytes, ...)`
.cast_strs    #          for `typing.cast(tuple[str, str], ...)`
.cast_bytess  #          for `typing.cast(tuple[bytes, bytes], ...)`
```

| argument | value | effect |
| --- | --- | --- |
| `*args` | none | short for `>> start() >> wait()` |
| | sequence of objects returned by `start(...)`, `write(...)`, `wait(...)` and `read(...)` | short for `>> start(...) {* >> write(...) *} >> {wait(...) \| read(...)}` |

Other

LineStream

If you want to use wait and process the streams line by line at the same time, you can use LineStream.

Example:

```python
import subprocess_shell
import sys

def function(line_string):
    pass

process >> wait(stdout=subprocess_shell.LineStream(function, sys.stdout))
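Without the package, processing a stream line by line while still forwarding it could be sketched like this with plain subprocess (`function` is a stand-in for your per-line handler, as in the example above):

```python
import subprocess
import sys

lines = []

def function(line_string):
    # hypothetical per-line handler
    lines.append(line_string)

process = subprocess.Popen(
    [sys.executable, "-c", "print('first'); print('second')"],
    stdout=subprocess.PIPE,
    encoding="utf-8",
)
for line in process.stdout:
    function(line)          # process the line
    sys.stdout.write(line)  # and still forward it to our own stdout

assert process.wait() == 0
```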

Motivation

Shell scripting is great for simple tasks. When tasks become more complex, e.g. hard to chain or requiring non-trivial processing, I always switch to Python. The interface provided by subprocess is rather verbose, and parts that would look trivial in a shell script end up a repetitive mess. After refactoring the mess once too often, it was time for a change.

See also

Why the name subprocess_shell

Simply because I like the picture of subprocess with a sturdy layer that is easy and safe to handle. Also, while writing `import subprocess` it is easy to remember to add `_shell`.

Before subprocess_shell I chose to call it shell. This was a bad name for several reasons, but most notably because the term shell is commonly used for applications providing an interface to the operating system.

