
A powerful parallel pipelining tool for image processing

Project description

Olympict


Based on olympipe, this project makes image processing pipelines easy to build on top of Python's basic multiprocessing module. It uses type checking to ensure the validity of your processing chain from the start.
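For context, the core idea — applying a chain of processing stages to many items across worker processes — can be sketched with the standard library alone. This is a conceptual illustration only, not the olympict API:

```python
from multiprocessing import Pool


def load(n: int) -> str:
    # stand-in for reading an image from disk
    return f"image_{n}"


def resize(name: str) -> str:
    # stand-in for an image transformation
    return name + "_150x250"


def stage(n: int) -> str:
    # each worker runs the whole chain for one item
    return resize(load(n))


if __name__ == "__main__":
    with Pool(4) as pool:
        print(pool.map(stage, range(3)))
```

olympict builds on this idea while keeping the stage boundaries typed, so mismatched stages are caught before the pipeline runs.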

Basic image processing pipeline

Loading images from a folder and resizing them into a new folder

from olympict import ImagePipeline

p0 = ImagePipeline.load_folder("./examples") # path containing the images
p1 = p0.resize((150, 250)) # new width, new height
p2 = p1.save_to_folder("./resized_examples") # path to save the images
p2.wait_for_completion() # the code blocks here until all images are processed

print("Finished resizing")

Loading images from a folder and overwriting them with a new size

from olympict import ImagePipeline

p0 = ImagePipeline.load_folder("./examples") # path containing the images
p1 = p0.resize((150, 250))
p2 = p1.save() # overwrite the images
p2.wait_for_completion()

Loading images from a folder and resizing them while keeping the aspect ratio, using a padding color

from olympict import ImagePipeline
blue = (255, 0, 0) # colors are BGR to match the OpenCV format

p0 = ImagePipeline.load_folder("./examples")
p1 = p0.resize((150, 250), pad_color=blue)
p2 = p1.save() # overwrite the images
p2.wait_for_completion()
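A padded resize works like a letterbox: the image is scaled to fit entirely inside the target box, and the leftover area is filled with pad_color. How olympict places the padding is not specified here, but the size computation typically looks like this:

```python
def letterbox_dims(src_w: int, src_h: int, dst_w: int, dst_h: int):
    """Typical letterbox computation (illustrative, not the olympict internals)."""
    # scale so the whole image fits inside the target box
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    # leftover pixels are filled with the padding color
    return new_w, new_h, dst_w - new_w, dst_h - new_h


# a 640x480 image resized into a 150x250 box keeps its ratio
print(letterbox_dims(640, 480, 150, 250))  # (150, 112, 0, 138)
```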

Loading images to apply a custom operation

from olympict import ImagePipeline, Img

def operation(image: Img) -> Img:
    # set the green channel to the mean of the blue and red channels
    # (halve each channel before adding to avoid uint8 overflow)
    image[:, :, 1] = image[:, :, 0] // 2 + image[:, :, 2] // 2
    return image

p0 = ImagePipeline.load_folder("./examples")
p1 = p0.task_img(operation)
p2 = p1.save() # overwrite the images
p2.wait_for_completion()
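One caveat when writing operations like this: OpenCV-style images are uint8 arrays, so adding two channels directly wraps around before any division. A quick numpy check shows why halving each channel first is the safer way to take a mean:

```python
import numpy as np

# uint8 arithmetic wraps around: 200 + 100 == 44 (mod 256)
img = np.zeros((2, 2, 3), np.uint8)
img[:, :, 0] = 200  # blue
img[:, :, 2] = 100  # red

naive = (img[:, :, 0] + img[:, :, 2]) / 2      # wraps first: gives 22.0
safe = img[:, :, 0] // 2 + img[:, :, 2] // 2   # gives 150

print(naive[0, 0], safe[0, 0])
```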

Checking an ongoing operation with debug windows

from olympict import ImagePipeline, Img

def operation(image: Img) -> Img:
    # set the green channel to the mean of the blue and red channels
    # (halve each channel before adding to avoid uint8 overflow)
    image[:, :, 1] = image[:, :, 0] // 2 + image[:, :, 2] // 2
    return image

p0 = ImagePipeline.load_folder("./examples").debug_window("Raw image")
p1 = p0.task_img(operation).debug_window("Processed image")
p2 = p1.save() # overwrite the images
p2.wait_for_completion()

Load a video and process each individual frame

from glob import glob

from olympict import VideoPipeline

p0 = VideoPipeline.load_folder("./examples") # will load .mp4 and .mkv files

p1 = p0.to_sequence() # split each video frame into a basic image

p2 = p1.resize((100, 3), (255, 255, 255)) # resize each image with white padding

p3 = p2.save_to_folder("./sequence") # save images individually

p3.wait_for_completion()

img_paths = glob("./sequence/*.png") # count images

print("Number of images:", len(img_paths))

Complex example with preview windows

import os
from random import randint
import re
import time

import numpy as np

from olympict import ImagePipeline
from olympict.files.o_image import OlympImage


def img_simple_order(path: str) -> int:
    number_pattern = r"\d+"
    res = re.findall(number_pattern, os.path.basename(path))

    return int(res[0])


if __name__ == "__main__":

    def wait(x: OlympImage):
        time.sleep(0.1)
        print(x.path)
        return x

    def generator():
        for i in range(96):
            img = np.zeros((256, 256, 3), np.uint8)
            img[i, :, :] = (255, 255, 255)

            o = OlympImage()
            o.path = f'/tmp/{i}.png'
            o.img = img
            yield o
        return

    p = (
        ImagePipeline(generator())
        .task(wait)
        .debug_window("start it")
        .task_img(lambda x: x[::-1, :, :])
        .debug_window("flip it")
        .keep_each_frame_in(1, 3)
        .debug_window("stuttered")
        .draw_bboxes(
            lambda x: [
                (
                    (
                        randint(0, 50),
                        randint(0, 50),
                        randint(100, 200),
                        randint(100, 200),
                        "change",
                        0.5,
                    ),
                    (randint(0, 255), 25, 245),
                )
            ]
        )
        .debug_window("bboxed")
    )

    p.wait_for_completion()

/!\ To use Hugging Face models, you must install this package with the [hf] extra:

poetry add olympict[hf]
or
pip install olympict[hf]

Using Hugging Face image classification models

from olympict import ImagePipeline
from olympict.files.o_image import OlympImage

def print_metas(x: OlympImage):
    print(x.metadata)
    return x

if __name__ == "__main__":
    # very important, without this processes will get stuck
    from torch.multiprocessing import set_start_method
    set_start_method("spawn")

    (
        ImagePipeline.load_folder("./classif")
        .classify("google/mobilenet_v2_1.0_224")
        .task(print_metas)
    ).wait_for_completion()

This project is still in an early stage; feedback is very welcome.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

olympict-1.4.0.tar.gz (17.3 kB)

Uploaded Source

Built Distribution

olympict-1.4.0-py3-none-any.whl (26.5 kB)

Uploaded Python 3

File details

Details for the file olympict-1.4.0.tar.gz.

File metadata

  • Download URL: olympict-1.4.0.tar.gz
  • Upload date:
  • Size: 17.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.20 Linux/5.15.154+

File hashes

Hashes for olympict-1.4.0.tar.gz:

  • SHA256: cff45078eb9736858b7b0d0d8961c92cf30fba048db673816ad91345cc43e041
  • MD5: a3978c00b4c54578837d2c74ab7b667b
  • BLAKE2b-256: 72f9d98a6d6864127c82cb26c2b50bf4dac03ed3506ef502ef3f24a9142c18fc

See more details on using hashes here.

File details

Details for the file olympict-1.4.0-py3-none-any.whl.

File metadata

  • Download URL: olympict-1.4.0-py3-none-any.whl
  • Upload date:
  • Size: 26.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.9.20 Linux/5.15.154+

File hashes

Hashes for olympict-1.4.0-py3-none-any.whl:

  • SHA256: 2a03fa3755eb174c8328c7ac881ce354f4f473cb6a17267e0619225d127b7892
  • MD5: 7dbe0b2b6ce1504d486301d939332282
  • BLAKE2b-256: 87cdd79a1b3fe31283739191a69475de76261348014b7c13638f7b73601a4fa1

