# Layered Vision

Compose final output(s) out of layers of images, videos or similar.

The goal of this project is:
- A tool to allow the composition of images or videos via a configuration file (e.g. as a virtual webcam).
This project is still very much experimental and may change significantly.
## Install

Install with all dependencies:

```bash
pip install layered-vision[all]
```

Install with minimal dependencies:

```bash
pip install layered-vision
```
Extras are provided to make it easier to include or exclude dependencies when using this project as a library:
| extra name | description |
|---|---|
| `bodypix` | For the `bodypix` filter |
| `webcam` | Virtual webcam support via `pyfakewebcam` |
| `youtube` | YouTube support via `pafy` and `youtube_dl` |
| `all` | All of the above libraries |
## Configuration

The configuration file format is YAML.

There are a number of example configuration files.
### Layers

Every configuration file will contain layers. Layers are generally described from top to bottom, with the last layer usually being the output layer. The source of the output layer is the layer above it.
A very simple configuration file that downloads the numpy logo and saves it to a file might look like this (`example-config/save-image.yml`):

```yaml
layers:
- id: in
  input_path: "https://github.com/numpy/numpy/raw/master/branding/logo/logomark/numpylogoicon.png"
- id: out
  output_path: "numpy-logo.png"
```
You could also have two outputs (`example-config/two-outputs.yml`):

```yaml
layers:
- id: in
  input_path: "https://github.com/numpy/numpy/raw/master/branding/logo/logomark/numpylogoicon.png"
- id: out1
  output_path: "data/numpy-logo1.png"
- id: out2
  output_path: "data/numpy-logo2.png"
```
In that case, the source layer for both `out1` and `out2` is `in`.
By using `window` as the `output_path`, the image is displayed in a window (`example-config/display-image.yml`):

```yaml
layers:
- id: in
  input_path: "https://github.com/numpy/numpy/raw/master/branding/logo/logomark/numpylogoicon.png"
  width: 480
  height: 300
  repeat: true
- id: out
  output_path: window
```
### Input Layer

A layer that has an `input_path` property.

The following inputs are currently supported:

- Image
- Video
- Linux Webcam (`/dev/videoN`)

The `input_path` may point to a remote location (as is the case with all examples). In that case it will be downloaded and cached locally.
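For example, a minimal sketch reading from a Linux webcam and displaying it in a window might look like the following (the device path `/dev/video0` is an assumption; adjust it for your system):

```yaml
layers:
- id: in
  input_path: "/dev/video0"
- id: out
  output_path: window
```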
### Filter Layer

A layer that has a `filter` property.
The following filters are currently supported:
| name | description |
|---|---|
| `box_blur` | Blurs the image or channel. |
| `bodypix` | Uses the bodypix model to mask a person. |
| `chroma_key` | Uses a chroma key (colour) to add a mask. |
| `copy` | Copies the input. Mainly useful as a placeholder layer with `branches`. |
| `dilate` | Dilates the image or channel, for example to increase the alpha mask after using `erode`. |
| `erode` | Erodes the image or channel. That could be useful to remove outliers from an alpha mask. |
| `motion_blur` | Adds a motion blur to the image or channel. That could be used to make an alpha mask move more slowly. |
| `pixelate` | Pixelates the input. |
Every filter may have additional properties. Please refer to the examples (or come back in the future) for more detailed information.
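As a sketch of how a filter layer fits into a configuration (the layer structure follows the examples above; any filter-specific properties are omitted here):

```yaml
layers:
- id: in
  input_path: "https://github.com/numpy/numpy/raw/master/branding/logo/logomark/numpylogoicon.png"
- id: blurred
  filter: box_blur
- id: out
  output_path: window
```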
### Branches Layer

A layer that has a `branches` property. Each branch is required to have a `layers` property.

The input to each set of branch layers is the input to the branches layer. The branches are then combined (added on top of each other).

To make branches useful, at least the last branch image should have an alpha mask.
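A minimal sketch of a branches layer, assuming the layer structure shown above (the branch layer ids and choice of filters are illustrative only): the first branch blurs the input, and the second branch is layered on top with a chroma-key alpha mask.

```yaml
layers:
- id: in
  input_path: "https://github.com/numpy/numpy/raw/master/branding/logo/logomark/numpylogoicon.png"
- branches:
  - layers:
    - id: background
      filter: box_blur
  - layers:
    - id: foreground
      filter: chroma_key
- id: out
  output_path: window
```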
## CLI

### CLI Help

```bash
python -m layered_vision --help
```

or

```bash
python -m layered_vision <sub command> --help
```
### Example Command

```bash
python -m layered_vision start --config-file=example-config/display-image.yml
```
You could also load the config from a remote location:

```bash
python -m layered_vision start --config-file \
  "https://raw.githubusercontent.com/de-code/layered-vision/develop/example-config/display-video-chroma-key-replace-background.yml"
```
It is also possible to override config values via command line arguments, e.g.:

```bash
python -m layered_vision start --config-file=example-config/display-image.yml \
  --set out.output_path=/path/to/output.png
```
You could also try replacing the background with a YouTube stream:

```bash
python -m layered_vision start \
  --config-file \
  "https://raw.githubusercontent.com/de-code/layered-vision/develop/example-config/webcam-bodypix-replace-background-to-v4l2loopback.yml" \
  --set bg.input_path="https://youtu.be/yswkqEBio2k" \
  --set bg.fps=30 \
  --set in.input_path="/dev/video0" \
  --set out.output_path="/dev/video2"
```

Note: you may need to specify the `fps` for a YouTube stream.
## Docker Usage

You could also use the Docker image if you prefer.

The entrypoint will by default delegate to the CLI, except for `python` or `bash` commands.

```bash
docker pull de4code/layered-vision

docker run --rm \
  --device /dev/video0 \
  --device /dev/video2 \
  de4code/layered-vision start \
  --config-file \
  "https://raw.githubusercontent.com/de-code/layered-vision/develop/example-config/webcam-bodypix-replace-background-to-v4l2loopback.yml" \
  --set bg.input_path="https://www.dropbox.com/s/4debg4lrgn5g36l/toy-train-3288425.mp4?dl=1" \
  --set in.input_path="/dev/video0" \
  --set out.output_path="/dev/video2"
```
(Background: Toy Train)
## Acknowledgements

- virtual_webcam_background, a somewhat similar project (more focused on bodypix)
- OBS Studio, conceptually a source of inspiration (with UI etc.)