A shell for subprocess

subprocess_shell is a Python package providing an alternative interface to subprocesses. The aim is simplicity comparable to shell scripting and transparency for more complex use cases.

Update: the showcase presents an earlier version which didn't provide run() and wait(logs=...).
Features

- Simple
  - e.g. 5 functions (`start`, `write`, `wait`, `read`, `run`) and 3 operators (`>>`, `+`, `-`)
- Transparent
  - usability layer for subprocess except streams
- Separates streams
  - no interleaving of stdout and stderr, or of streams from different processes of a chain
- Avoids deadlocks due to OS pipe buffer limits by using queues
- Uses Rich if available
- Supports Windows[^r4]

[^r4]: Insofar as tests succeed most of the time. On my system, tests freeze sometimes for no apparent reason. If you experience the same and can reproduce it consistently, please open an issue!
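The deadlock-avoidance feature can be illustrated with a stdlib-only sketch (this is an illustration of the underlying idea, not the package's implementation; the helper name `read_into_queue` is hypothetical): reading a pipe from a background thread into a queue means the OS pipe buffer (often around 64 KiB) never fills up and blocks the child.

```python
import queue
import subprocess
import sys
import threading

def read_into_queue(pipe, chunk_queue):
    # Drain the pipe in a background thread so the OS pipe buffer never
    # fills up; signal the end of the stream with a None sentinel.
    for line in pipe:
        chunk_queue.put(line)
    chunk_queue.put(None)

# The child writes ~1 MiB, far more than a typical OS pipe buffer holds.
process = subprocess.Popen(
    [sys.executable, "-c", "for _ in range(1000): print('x' * 1024)"],
    stdout=subprocess.PIPE,
)
chunk_queue = queue.Queue()
thread = threading.Thread(target=read_into_queue, args=(process.stdout, chunk_queue))
thread.start()

# Consume the queue concurrently; neither side can deadlock.
lines = []
while (line := chunk_queue.get()) is not None:
    lines.append(line)
thread.join()
assert process.wait() == 0
```

Reading the child's stdout directly in the main thread while also waiting on the process is exactly the pattern that can deadlock once the buffer is full; the queue decouples the two.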
![showcase](images/rich_output.png)
Examples

|  | subprocess_shell | subprocess | Plumbum[^r1] |
|---|---|---|---|
| initialization | `from subprocess_shell import *` | `import subprocess` | `from plumbum import local` |
| run command<br>`echo this` | `["echo", "this"] >> run()` | `assert subprocess.Popen(["echo", "this"]).wait() == 0` | `local["echo"]["this"].run_fg()` |
| redirect stream<br>`echo this > /path/to/file` | `["echo", "this"] >> run(start(stdout="/path/to/file"))` | `with open("/path/to/file", "wb") as stdout:`<br>`    assert subprocess.Popen(["echo", "this"], stdout=stdout).wait() == 0` | `(local["echo"]["this"] > "/path/to/file").run_fg()` |
| read stream<br>`a=$(echo this)` | `a = ["echo", "this"] >> run()` | `process = subprocess.Popen(["echo", "this"], stdout=subprocess.PIPE)`<br>`a, _ = process.communicate()`<br>`assert process.wait() == 0` | `a = local["echo"]("this")` |
| write stream<br>`cat - <<EOF`<br>`this`<br>`EOF` | `["cat", "-"] >> run(write("this"))` | `process = subprocess.Popen(["cat", "-"], stdin=subprocess.PIPE)`<br>`process.communicate(b"this")`<br>`assert process.wait() == 0` | `(local["cat"]["-"] << "this").run_fg()` |
| chain commands<br>`echo this \| cat -` | `["echo", "this"] >> start() + ["cat", "-"] >> run()` | `process = subprocess.Popen(["echo", "this"], stdout=subprocess.PIPE)`<br>`assert subprocess.Popen(["cat", "-"], stdin=process.stdout).wait() == 0`<br>`assert process.wait() == 0` | `(local["echo"]["this"] \| local["cat"]["-"]).run_fg()` |
| branch out<br>? | `import sys`<br>`_v_ = "import sys; print('stdout'); print('stderr', file=sys.stderr)"`<br>`arguments = [sys.executable, "-c", _v_]`<br>`process = arguments >> start(pass_stdout=True, pass_stderr=True)`<br>`process + ["cat", "-"] >> run()`<br>`process - ["cat", "-"] >> run()` | `import sys`<br>`_v_ = "import sys; print('stdout'); print('stderr', file=sys.stderr)"`<br>`arguments = [sys.executable, "-c", _v_]`<br>`process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)`<br>`assert subprocess.Popen(["cat", "-"], stdin=process.stdout).wait() == 0`<br>`assert subprocess.Popen(["cat", "-"], stdin=process.stderr).wait() == 0`<br>`assert process.wait() == 0` | not supported[^r2] |
| errors in chains<br>? | `_v_ = ["echo", "this"] >> start(return_codes=(0, 1)) - ["cat", "-"]`<br>`_v_ >> run(wait(return_codes=(0, 2)))` | `first_process = subprocess.Popen(["echo", "this"], stderr=subprocess.PIPE)`<br>`second_process = subprocess.Popen(["cat", "-"], stdin=first_process.stderr)`<br>`assert first_process.wait() in (0, 1) and second_process.wait() in (0, 2)` | not supported[^r2] |
| callbacks | `["echo", "this"] >> run(start(stdout=print))` | `process = subprocess.Popen(["echo", "this"], stdout=subprocess.PIPE)`<br>`for bytes in process.stdout:`<br>`    print(bytes)`<br>`assert process.wait() == 0`<br>!![^r3] |  |

[^r1]: Mostly adapted versions from https://www.reddit.com/r/Python/comments/16byt8j/comment/jzhh21f/?utm_source=share&utm_medium=web2x&context=3
[^r2]: Has been requested years ago
[^r3]: This is very limited and has several issues with potential for deadlocks. An exact equivalent would be too long for this table.
Notes

- The shell code assumes `bash -e`, because errors can have serious consequences, e.g.

      a=$(failing command)
      sudo chown -R root:root "$a/"

- `assert process.wait() == 0` is the shortest (readable) code waiting for a process to stop and asserting the return code.
- The complexity of the code for Plumbum can be misleading because it has a much wider scope (e.g. remote execution and files).
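The point about checking return codes can be demonstrated with plain subprocess: a failing child is easy to miss unless its return code is checked explicitly. The exit code 3 below is arbitrary, chosen only for the illustration.

```python
import subprocess
import sys

# A child process that fails with exit code 3.
process = subprocess.Popen([sys.executable, "-c", "raise SystemExit(3)"])
return_code = process.wait()

# Without an explicit check, the failure passes silently; the assert
# pattern from the table above turns it into a hard error instead.
assert return_code == 3
```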
Quickstart

- Prepare a virtual environment (optional but recommended)
  - e.g. Pipenv: `python -m pip install -U pipenv`
- Install subprocess_shell
  - e.g. `python -m pipenv run pip install subprocess_shell`
- Import and use it
  - e.g. `from subprocess_shell import *` and `python -m pipenv run python ...`
- Prepare tests
  - e.g. `python -m pipenv run pip install subprocess_shell[test]`
- Run tests
  - e.g. `python -m pipenv run pytest ./tests`
Documentation

```python
from subprocess_shell import *
```

Start process

```python
process = arguments >> start(
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    pass_stdout=False,
    stderr=subprocess.PIPE,
    pass_stderr=False,
    queue_size=0,
    logs=None,
    return_codes=(0,),
    force_color=True,
    async_=False,
    **{},
)
```
| argument | value | description |
|---|---|---|
| `arguments` | iterable | arguments are converted to string and passed to the process |
| `stdin` | any | same as `subprocess.Popen(stdin=...)` |
| `stdout` | string or path | redirect stdout to file |
| | function | call function for each chunk from stdout |
| | any | same as `subprocess.Popen(stdout=...)` |
| `pass_stdout` | `False` | queue stdout |
| | `True` | don't use stdout (so it can be passed on, e.g. to a chained process) |
| `stderr` | string or path | redirect stderr to file |
| | function | call function for each chunk from stderr |
| | any | same as `subprocess.Popen(stderr=...)` |
| `pass_stderr` | `False` | queue stderr |
| | `True` | don't use stderr (so it can be passed on, e.g. to a chained process) |
| `queue_size` | `0` | no limit on size of queues |
| | int > 0 | wait for other threads to process queues if full; !! can lead to deadlocks !! |
| `logs` | `None` | if in a chain: analog of `wait(logs=None)` |
| | boolean | if in a chain: analog of `wait(logs=...)` |
| `return_codes` | `(0,)` | if in a chain: analog of `wait(return_codes=(0,))` |
| | collection | if in a chain: analog of `wait(return_codes=...)` |
| `force_color` | `False` | don't touch environment variable `FORCE_COLOR` |
| | `True` | set environment variable `FORCE_COLOR` if it is not set |
| `async_` | boolean | whether to use `asyncio` instead of threads to process the streams |
| `**` | any | passed to `subprocess.Popen(...)` |
Write to stdin

```python
process = process >> write(object, close=False, encoding=None)
```

| argument | value | description |
|---|---|---|
| `object` | string or bytes | en/decoded if necessary, written to stdin and flushed |
| `close` | `False` | keep stdin open |
| | `True` | close stdin after flush |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding the string |
| | string | use a different encoding |

requires `start(stdin=subprocess.PIPE)`
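A stdlib sketch of what a write-then-close roughly corresponds to (an illustration under the `subprocess.PIPE` assumption, not the package's implementation): the string is encoded if necessary, written to stdin and flushed, and with `close=True` stdin is closed so the child sees end-of-file.

```python
import subprocess
import sys

# Child that echoes its stdin in upper case.
process = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)

process.stdin.write("this".encode("utf-8"))  # encode if necessary
process.stdin.flush()
process.stdin.close()  # close=True: let the child see EOF

output = process.stdout.read()
assert process.wait() == 0
```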
Wait for process

```python
return_code = process >> wait(
    stdout=True,
    stderr=True,
    logs=None,
    return_codes=(0,),
    rich=True,
    stdout_style="green",
    log_style="dark_orange3",
    error_style="red",
    ascii=False,
    encoding=None,
)
```

| argument | value | description |
|---|---|---|
| `stdout` | `True` | if stdout is queued: collect stdout, format and print to stdout |
| | `False` | don't use stdout |
| | any | if stdout is queued: collect stdout, format and print with the given object |
| `stderr` | `True` | if stderr is queued: collect stderr, format and print to stderr |
| | `False` | don't use stderr |
| | any | if stderr is queued: collect stderr, format and print with the given object |
| `logs` | `None` | write stdout first and use `error_style` or `log_style` for stderr depending on the return code |
| | `False` | write stdout first and use `error_style` for stderr |
| | `True` | write stderr first and use `log_style` for stderr |
| `return_codes` | `(0,)` | assert that the return code is 0 |
| | collection | assert that the return code is in the collection |
| | `None` | don't assert the return code |
| `rich` | `True` | use Rich if available |
| | `False` | don't use Rich |
| `stdout_style` | `"green"` | use color "green" for the stdout frame |
| | style object or string | use style for the stdout frame, see Styles |
| `log_style` | `"dark_orange3"` | use color "dark_orange3" for the stderr frame, see argument `logs` |
| | style object or string | use style for the stderr frame, see argument `logs` |
| `error_style` | `"red"` | use color "red" for the stderr frame, see argument `logs` |
| | style object or string | use style for the stderr frame, see argument `logs` |
| `ascii` | `False` | use Unicode |
| | `True` | use ASCII |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding strings or decoding bytes |
| | string | use a different encoding |
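The collect-and-decode step can be sketched with the stdlib (the function name `collect_chunks` is hypothetical and not part of the package): queued byte chunks are joined and decoded with the chosen encoding before being formatted and printed.

```python
import queue

def collect_chunks(chunk_queue, encoding="utf-8"):
    # Join queued byte chunks (None marks the end of the stream)
    # and decode them with the requested encoding.
    chunks = []
    while (chunk := chunk_queue.get()) is not None:
        chunks.append(chunk)
    return b"".join(chunks).decode(encoding)

chunk_queue = queue.Queue()
for chunk in (b"hel", b"lo"):
    chunk_queue.put(chunk)
chunk_queue.put(None)

collected = collect_chunks(chunk_queue)
```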
Read from stdout/stderr

```python
string = process >> read(
    stdout=True,
    stderr=False,
    bytes=False,
    encoding=None,
    logs=_DEFAULT,
    return_codes=_DEFAULT,
    wait=None,
)

# optionally one of
.cast_str     # shortcut for `typing.cast(str, ...)`
.cast_bytes   # for `typing.cast(bytes, ...)`
.cast_strs    # for `typing.cast(tuple[str, str], ...)`
.cast_bytess  # for `typing.cast(tuple[bytes, bytes], ...)`
```

| argument | value | description |
|---|---|---|
| `stdout` | `True` | execute `wait(...)` and return the collected stdout |
| | `False` | execute `wait(...)` without returning stdout |
| | any | execute `wait(stdout=...)` |
| `stderr` | `True` | execute `wait(...)` and return the collected stderr |
| | `False` | execute `wait(...)` without returning stderr |
| | any | execute `wait(stderr=...)` |
| `bytes` | `False` | return a string or tuple of strings |
| | `True` | return `bytes` or a tuple of `bytes` |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding strings or decoding bytes |
| | string | use a different encoding |
| `logs` | `_DEFAULT` | use the `logs` argument of the wait object |
| | any | replace the `logs` argument of the wait object |
| `return_codes` | `_DEFAULT` | use the `return_codes` argument of the wait object |
| | any | replace the `return_codes` argument of the wait object |
| `wait` | `None` | use a default wait object |
| | wait object | use a different wait object |
```python
process.get_stdout_lines(bytes=False, encoding=None)  # generator[str | bytes]
process.get_stderr_lines(bytes=False, encoding=None)  # generator[str | bytes]
process.join_stdout_strings(encoding=None)  # str
process.join_stderr_strings(encoding=None)  # str
process.get_stdout_strings(encoding=None)  # generator[str]
process.get_stderr_strings(encoding=None)  # generator[str]
process.join_stdout_bytes(encoding=None)  # bytes
process.join_stderr_bytes(encoding=None)  # bytes
process.get_stdout_bytes(encoding=None)  # generator[bytes]
process.get_stderr_bytes(encoding=None)  # generator[bytes]
process.get_stdout_objects()  # generator[str | bytes]
process.get_stderr_objects()  # generator[str | bytes]
```

| argument | value | description |
|---|---|---|
| `bytes` | `False` | return iterable of strings |
| | `True` | return iterable of `bytes` |
| `encoding` | `None` | use the default encoding (UTF-8) for encoding strings or decoding bytes |
| | string | use a different encoding |

requires queued stdout/stderr
Chain processes / pass streams

```python
process = source_arguments >> start(...) + arguments >> start(...)
# or
source_process = source_arguments >> start(..., pass_stdout=True)
process = source_process + arguments >> start(...)

process = source_arguments >> start(...) - arguments >> start(...)
# or
source_process = source_arguments >> start(..., pass_stderr=True)
process = source_process - arguments >> start(...)

source_process = process.get_source_process()
```

`process >> wait(...)` waits for the processes from left/source to right/target.
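For comparison, the `+` chain corresponds roughly to this stdlib wiring (a sketch only, not the package's implementation): the source's stdout becomes the target's stdin, and the processes are waited on from source to target.

```python
import subprocess
import sys

# source | target: connect the source's stdout to the target's stdin.
source = subprocess.Popen(
    [sys.executable, "-c", "print('this')"],
    stdout=subprocess.PIPE,
)
target = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    stdin=source.stdout,
    stdout=subprocess.PIPE,
)
source.stdout.close()  # close the parent's copy so EOF propagates

output = target.stdout.read()
assert source.wait() == 0  # wait from left/source ...
assert target.wait() == 0  # ... to right/target
```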
Shortcut

```python
object = object >> run(*args)

# optionally one of
.cast_int     # shortcut for `typing.cast(int, ...)`
.cast_str     # for `typing.cast(str, ...)`
.cast_bytes   # for `typing.cast(bytes, ...)`
.cast_strs    # for `typing.cast(tuple[str, str], ...)`
.cast_bytess  # for `typing.cast(tuple[bytes, bytes], ...)`
```

| argument | value | description |
|---|---|---|
| `*args` | none | short for `... >> start() >> wait()` |
| | sequence of objects returned by `start(...)`, `write(...)`, `wait(...)` or `read(...)` | short for the corresponding sequence of operations |
Other

LineStream

If you want to use `wait` and process the streams line by line at the same time, you can use `LineStream`.

Example:

```python
import subprocess_shell
import sys

def function(line_string):
    pass

process >> wait(stdout=subprocess_shell.LineStream(function, sys.stdout))
```
Motivation
Shell scripting is great for simple tasks. When tasks become more complex, e.g. hard to chain or requiring non-trivial processing, I always switch to Python. The interface provided by subprocess is rather verbose, and parts that would look trivial in a shell script end up a repetitive mess. After cleaning up the mess once too often, it was time for a change.
Why the name subprocess_shell

Simply because I like the picture of subprocess with a sturdy layer that is easy and safe to handle. Also, while writing `import subprocess` it is easy to remember to add `_shell`.

Before subprocess_shell I chose to call it shell. This was a bad name for several reasons, most notably because the term shell is commonly used for applications providing an interface to the operating system.