Library to create flexible interactive image processing pipelines and automatically add a graphical user interface without knowing anything about GUI coding!


Interactive pipe
Quick setup: pip install interactive-pipe
Project website

Interactive-pipe code

Concept

  • Develop an algorithm while debugging visually with plots and checking robustness & continuity to parameter changes.
  • Magically create a graphical interface to easily demonstrate a concept or simply tune your algorithm.

:heart: You do not need to learn anything about making a graphical user interface (GUI) :heart:

Examples

Science notebook · Toddler DIY Jukebox on a Raspberry Pi

Sliders are added automatically in your Jupyter notebook. This works on Google Colab and takes about 40 lines of code: no widgets, event handlers or matplotlib knowledge required. The jukebox plays some music when you touch an icon. Images can be generated using the OpenAI DALL-E API helpers, captions are added through the title mechanism, and music samples were generated by prompting MusicGen.

Demo notebook on Colab · jukebox.py demo code

Local setup

git clone git@github.com:balthazarneveu/interactive_pipe.git
cd interactive_pipe
pip install -e ".[full]"

Who is this for?

:mortar_board: Scientific education

  • Demonstrate concepts by interacting with curves / images.
  • Easy integration in Jupyter notebooks (including Google Colab)

:gift: DIY hobbyist

  • You can also use the declarative nature of interactive pipe to make a graphical interface in a few lines of code.
  • For instance, it is possible to code a jukebox for a toddler on a Raspberry Pi.

:camera: Engineering (computer vision, image/signal processing)

  • While prototyping an algorithm or testing a neural network, you may want to run small experiments with visual checks. Instead of writing quick & dirty draft code that you'll throw away, you can use interactive pipe to show your team how your library works. A visual demo is always valuable: if anyone can play with the algorithm, it shows it is not buggy.
  • Tune your algorithms with a graphical interface and save your parameters for later batch processing.
  • Ready to batch: under the hood, the processing engine can be run without a GUI, so the same code can be used for tuning & batch processing if needed.
  • Do not pollute your production code with a huge amount of graphical interface code: keep your algorithms library untouched and simply decorate it.

:scroll: Terminology

(diagram: the interactive pipe concept)

:scroll: Features

Version 0.8.6

  • Modular multi-image processing filters
  • Declarative: easily make a graphical user interface without having to learn anything about PyQt or matplotlib
  • Support in jupyter notebooks
  • Tuning sliders & check buttons with a GUI
  • Cache intermediate results in RAM for much faster processing
  • KeyboardControl: no slider on the UI but exactly the same internal mechanism, updated on key press.
  • Support Curve plots (2D signals)
  • :new: gradio backend (also allows sharing your app with others).
  • :new: Audio support in Gradio (live audio, or display several players by returning 1D numpy arrays)
  • :new: Circular sliders for the Qt backend
  • :new: Text prompt, e.g. free_text=("Hello world!", None) (see the sketch below)
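
A minimal sketch of the text prompt feature, assuming (as in the tuple above) that a (default, None) pair maps to a free text input; the caption filter below is made up for illustration:

from interactive_pipe import interactive

@interactive()
def caption(img, free_text=("Hello world!", None)):
    # Whatever is typed in the prompt widget arrives here as a plain string.
    print(free_text)
    return img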

:keyboard: Keyboard shortcuts

Shortcuts while using the GUI (Qt & matplotlib backends)

  • F1 to show the help shortcuts in the terminal
  • F11 toggle fullscreen mode
  • W to write full resolution image to disk
  • R to reset parameters
  • I to print parameters dictionary in the command line
  • E to export parameters dictionary to a yaml file
  • O to import parameters dictionary from a yaml file (sliders will update)
  • G to export a pipeline diagram for your interactive pipe (requires graphviz)

Status

  • supported backends
    • :ok: gui='qt' PyQt/PySide
    • :ok: gui='mpl' matplotlib
    • :ok: gui='nb' ipywidget for jupyter notebooks
    • :test_tube: gui='gradio' gradio wrapping (+use share_gradio_app=True to share your app with others)
  • tested platforms
    • :ok: Linux (Ubuntu / KDE Neon)
    • :ok: Raspberry Pi
    • :ok: Google Colab (use gui='nb')

Feature                         | qt (PyQt / PySide)  | mpl (Matplotlib)    | nb (Jupyter / Colab) | gradio (Gradio)
Plot curves                     | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:   | :heavy_check_mark:
Auto refreshed layout           | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_check_mark:   | :heavy_minus_sign:
Keyboard shortcuts / fullscreen | :heavy_check_mark:  | :heavy_check_mark:  | :heavy_minus_sign:   | :heavy_minus_sign:
Audio support                   | :heavy_check_mark:  | :heavy_minus_sign:  | :heavy_minus_sign:   | :heavy_check_mark:
Image buttons                   | :heavy_check_mark:  | :heavy_minus_sign:  | :heavy_minus_sign:   | :heavy_minus_sign:
Circular slider                 | :heavy_check_mark:  | :heavy_minus_sign:  | :heavy_minus_sign:   | :heavy_minus_sign:
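
Switching backend is a one-word change in the pipeline decorator; a minimal sketch (the pipeline body is a placeholder):

from interactive_pipe import interactive_pipeline

# gui can be "qt", "mpl", "nb" or "gradio" (see the table above);
# share_gradio_app=True only applies to the gradio backend.
@interactive_pipeline(gui="gradio", share_gradio_app=True)
def my_pipeline(input_image):
    return input_image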

Tutorials

Main tutorial


Tutorial on Hugging Face space

Tutorial in a Colab notebook

Learn by examples

Basic image processing (Python code sample for a PyQt GUI)

GUI Pipeline

Speech exploration notebook (colab, signal processing)

Speech processing exploration in a notebook

:rocket: Ultra short code

Since ipywidgets in notebooks are supported, the tutorial is also available in a Google Colab notebook

Let's define 3 very basic image processing filters: exposure, black_and_white & blend.

By design:

  • image buffer inputs are positional arguments
  • keyword arguments are the parameters which can later be turned into interactive widgets
  • output buffers are simply returned, like you'd do in a regular function

We use the @interactive() wrapper, which turns each keyword parameter initialized to a tuple/list into a graphical interactive widget (slider, tick box, dropdown menu).

The syntax to turn keyword arguments into sliders is pretty simple: (default, [min, max], name) will turn into a float slider, for instance.

Finally, we need the glue to combine these filters. This is where the sample_pipeline function comes in.

By decorating it with @interactive_pipeline(gui="qt"), calling this function magically launches a GUI-powered image processing pipeline.

from interactive_pipe import interactive, interactive_pipeline
import numpy as np

@interactive()
def exposure(img, coeff=(1., [0.5, 2.], "exposure"), bias=(0., [-0.2, 0.2])):
    '''Applies a multiplication by coeff & adds a constant bias to the image'''
    # In the GUI, the coeff will be labelled as "exposure". 
    # As the default tuple provided to bias does not end with a string,
    # the widget label will be "bias", simply named after the keyword arg.
    return img*coeff + bias


@interactive()
def black_and_white(img, bnw=(True, "black and white")):
    '''Averages the 3 color channels (Black & White) if bnw=True
    '''
    # Special mention for booleans: using a tuple like (True,) allows creating the tick box.
    return np.repeat(np.expand_dims(np.average(img, axis=-1), -1), img.shape[-1], axis=-1) if bnw else img

@interactive()
def blend(img0, img1, blend_coeff=(0.5, [0., 1.])):
    '''Blends between two image. 
    - when blend_coeff=0 -> image 0  [slider to the left ] 
    - when blend_coeff=1 -> image 1   [slider to the right] 
    '''
    return (1 - blend_coeff) * img0 + blend_coeff * img1

# you can change the backend to mpl instead of Qt here.
@interactive_pipeline(gui="qt", size="fullscreen")
def sample_pipeline(input_image):
    exposed = exposure(input_image)
    bnw_image = black_and_white(input_image)
    blended = blend(exposed, bnw_image)
    return exposed, blended, bnw_image

if __name__ == '__main__':
    input_image = np.array([0., 0.5, 0.8])*np.ones((256, 512, 3))
    sample_pipeline(input_image)

:heart: This code will display a GUI with three images; the middle one is the result of the blend.

Notes:

  • If you write def blend(img0, img1, blend_coeff=0.5):, blend_coeff will simply no longer appear as a slider on the GUI.
  • If you write blend_coeff=[0., 1.], blend_coeff will be a slider initialized to 0.5.
  • If you write bnw=(True, "black and white", "k"), the checkbox will disappear and be replaced by a keypress event (press k to enable/disable black & white). These variants are restated as code below.
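
A minimal sketch of the three variants (hypothetical function names, trivial bodies for illustration):

from interactive_pipe import interactive
import numpy as np

@interactive()
def blend_fixed(img0, img1, blend_coeff=0.5):
    # Plain float default: stays a regular parameter, no slider.
    return (1 - blend_coeff) * img0 + blend_coeff * img1

@interactive()
def blend_slider(img0, img1, blend_coeff=[0., 1.]):
    # Range only: a slider initialized to the midpoint 0.5.
    return (1 - blend_coeff) * img0 + blend_coeff * img1

@interactive()
def black_and_white_key(img, bnw=(True, "black and white", "k")):
    # Extra key string: the checkbox is replaced by the "k" keypress.
    return np.repeat(np.expand_dims(np.average(img, axis=-1), -1), img.shape[-1], axis=-1) if bnw else img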

:bulb: Some more tips

from interactive_pipe import interactive, interactive_pipeline
import numpy as np

COLOR_DICT = {"red": [1., 0., 0.], "green": [0., 1., 0.], "blue": [0., 0., 1.], "gray": [0.5, 0.5, 0.5]}

@interactive()
def generate_flat_colored_image(color_choice=["red", "green", "blue", "gray"], context={}):
    '''Generate a constant colorful image
    '''
    flat_array = np.array(COLOR_DICT.get(color_choice)) * np.ones((64, 64, 3))
    context["avg"] = np.average(flat_array)
    return flat_array

  • Note that you can also create filters which take no inputs and simply "generate" images.
  • The color_choice list will be turned into a nice dropdown menu. The default value here will be red, as it is the first element of the list!

:bulb: Can filters communicate together? Yes, using the special keyword argument context={}.

  • Check carefully how we stored the image average of the flat image in context.
  • This value will be available to other filters: special_image_slice is going to use it to darken the bottom half of the image in case the average is high.

@interactive()
def special_image_slice(img, context={}):
    out_img = img.copy()
    if context["avg"] > 0.4:
        out_img[out_img.shape[0]//2:, ...] = 0.
    return out_img

@interactive()
def switch_image(img1, img2, img3, image_index=(0, [0, 2], None, ["pagedown", "pageup", True])):
    '''Switch between 3 images
    '''
    return [img1, img2, img3][image_index]

Note that you can create a filter to switch between several images. In ["pagedown", "pageup", True], True means that image_index will wrap around: it returns to 0 as soon as it goes above the maximum value of 2.

@interactive()
def black_top_image_slice(img, top_slice_black=(True, "special", "k"), context={}):
    out_img = img.copy()
    if top_slice_black:
        out_img[:out_img.shape[0]//2, ...] = 0.
    return out_img


@interactive_pipeline(gui="qt", size="fullscreen")
def sample_pipeline_generated_image():
    flat_img = generate_flat_colored_image()
    top_slice_modified = black_top_image_slice(flat_img)
    bottom_slice_modified_image = special_image_slice(flat_img)
    chosen = switch_image(flat_img, top_slice_modified, bottom_slice_modified_image)
    return chosen

if __name__ == '__main__':
    sample_pipeline_generated_image()

History

  • Interactive pipe was initially developed by Balthazar Neveu as part of the irdrone project, based on matplotlib.
  • Later, more contributions were made by Giuseppe Moschetti and Sylvain Leroy.
  • August 2023: rewriting the whole core and supporting several graphical backends!
  • September 2024: Gradio backend

FAQ

  • :question: Is there a difference between global_params and context ?

No. global_params, global_parameters, global_state, global_context, context and state all mean the same thing and are all supported for legacy reasons; context is the preferred wording.

  • :question: Do I have to remove KeyboardSlider when using gradio or notebook backends?

No, don't worry, these will be mapped back to regular sliders!

  • :question: How do I play audio live?

:sound: Inside a processing block, write the audio file to disk and use context["__set_audio"](audio_file)
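
A minimal sketch of that pattern, assuming scipy is available to write the wav file (the tone filter and its parameters are made up for illustration):

import numpy as np
from scipy.io import wavfile
from interactive_pipe import interactive

@interactive()
def tone(img, freq=(440., [220., 880.]), context={}):
    # Synthesize 1 second of sine wave, write it to disk,
    # then hand the file over to the live audio player.
    sample_rate = 44100
    t = np.linspace(0., 1., sample_rate, endpoint=False)
    audio = (0.5 * np.sin(2. * np.pi * freq * t)).astype(np.float32)
    wavfile.write("tone.wav", sample_rate, audio)
    context["__set_audio"]("tone.wav")
    return img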

  • :question: Do I have to decorate my processing block using @interactive?

If you use the @ decoration style, your function won't be usable in a regular manner (which may be problematic in a serious development environment).

@interactive(angle=(0., [-360., 360.]))
def processing_block(angle=0.):
    ...

An alternative is to decorate the processing block from the outside, in a file dedicated to interactivity for instance:

# core_filter.py
def processing_block(angle=0.):
    ...

# graphical.py
from interactive_pipe import interactive
from core_filter import processing_block

def add_interactivity():
    interactive(angle=(0., [-360., 360.]))(processing_block)

  • :question: Can I call the pipeline in a command line/batch fashion?

Yes, headless mode is supported. :soon: documentation needed.

  • :question: Can I use inplace operations?

Better avoid these in general. To avoid making extra copies, computing hashes everywhere and losing precious computation time, there are no checks that inputs are not modified in place.

# Don't do that!
def bad_processing_block(inp):
    inp += 1
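
A safe counterpart, sketching the copy-first pattern:

# Copy first, then modify: the cached input stays untouched.
def good_processing_block(inp):
    out = inp.copy()
    out += 1
    return out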

Roadmap and todos

:bug: Want to contribute or interested in adding new features? Open a new GitHub issue

:gift: Want to dig into the code? Take a look at code_architecture.md

Short term roadmap

  • Backport previous features
    • Image class support in interactive pipe (Heatmaps/Float images)

Long term roadmap

  • Advanced feature
    • Webcam based "slider" for dropdown menus (e.g. "elephant" will trigger if an elephant is magically detected on the webcam)
    • Animations/While loops/Video source (Time slider)
  • Exploratory backends
    • Create a textual backend for simplified GUI (probably no images displayed)
    • Create a Kivy backend

Further examples

Minimalistic pytorch based ISP

ISP stands for image signal processor

:warning: Work in progress (no proper demosaicking, no denoiser, no tone mapping)

Ultra simplistic ISP

:test_tube: Experimental features

  • Custom events on specific key press
  • Display the execution graph of the pipeline G key
  • thirdparty/music Play audio (Qt backend only). Plays songs on Spotify (Linux only) when the Spotify app is running.
  • thirdparty/images_openai_api Generate images from a prompt using the OpenAI API image generation DALL-E model (:dollar: paid service, ~2 cents/image)
