
ultra-simple command-line tool for Docker-scaled batch processing

Project description


quick_batch

quick_batch is an ultra-simple command-line tool for large-scale, Python-driven batch processing and transformation. It was designed to be fast to deploy, transparent, and portable. It lets you scale any processor function that needs to run over a large set of input data, enabling batch/parallel processing of the input with minimal setup and teardown.

Getting started

All you need to scale batch transformations with quick_batch is

  • a transformation function in a processor.py file
  • a Dockerfile containing a container build appropriate to your processor
  • an optional requirements.txt file listing required Python modules

Record the paths to these files, along with other parameters, in a config.yaml file of the form below:

data:
  input_path: /path/to/your/input/data
  output_path: /path/to/your/output/data
  log_path: /path/to/your/log/file

queue:
  feed_rate: <int - number of examples processed per processor instance>
  order_files: <boolean - whether or not to order input files by size>

processor:
  dockerfile_path: /path/to/your/Dockerfile
  requirements_path: /path/to/your/requirements.txt
  processor_path: /path/to/your/processor/processor.py
  num_processors: <int - instances of processor to run in parallel>
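
For illustration, a hypothetical filled-in config.yaml might look like the following. All paths and values here are made up, not taken from the project's own examples:

```yaml
data:
  input_path: /data/images/raw        # directory of input files to process
  output_path: /data/images/processed # where processed output is written
  log_path: /data/logs/quick_batch.log

queue:
  feed_rate: 10       # each processor instance receives 10 files at a time
  order_files: true   # order input files by size before queueing

processor:
  dockerfile_path: ./Dockerfile
  requirements_path: ./requirements.txt
  processor_path: ./processor.py
  num_processors: 4   # run 4 processor containers in parallel
```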

quick_batch will point your processor.py at the input_path defined in this config.yaml and process the files found there in parallel, at a scale set by your choice of num_processors.

Output will be written to the output_path specified in the configuration file.

See tests/config_files for examples of valid configs.

Usage

To start processing with your config.yaml, run quick_batch's config command at the terminal:

quick_batch config /path/to/your/config.yaml

This will start the build and deploy process for processing your data as defined in your config.yaml.

Scaling

Use the scale command to manually scale the number of processors / containers running your process:

quick_batch scale <num_processors> 

Here <num_processors> is an integer >= 1. For example, to scale to 3 parallel processors / containers:

quick_batch scale 3

Installation

To install quick_batch, simply use pip:

pip install quick-batch

The processor.py file

Create a processor.py file with the following basic pattern:

import ...

def processor(todos):
    for file_path in todos.file_paths_to_process:
        # processing code
        ...

On each invocation, the todos object carries feed_rate file paths to process in its .file_paths_to_process attribute.

Note: the function name processor is mandatory.
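
As a concrete sketch, here is a hypothetical processor.py that reads each input text file and writes an uppercased copy. The todos.file_paths_to_process attribute is described above; the output mechanism shown here (an OUTPUT_PATH environment variable) is an assumption for illustration only — in practice quick_batch routes output according to the output_path in config.yaml:

```python
import os

def processor(todos):
    # Assumption for this sketch: the output directory is taken from an
    # environment variable. The real quick_batch wiring may differ.
    output_dir = os.environ.get("OUTPUT_PATH", "/tmp/quick_batch_output")
    os.makedirs(output_dir, exist_ok=True)

    for file_path in todos.file_paths_to_process:
        # Read each input file handed to this processor instance...
        with open(file_path) as f:
            text = f.read()
        # ...and write a transformed copy under the same file name.
        out_path = os.path.join(output_dir, os.path.basename(file_path))
        with open(out_path, "w") as f:
            f.write(text.upper())
```

Any per-file transformation (resizing images, running inference, parsing logs) can be substituted for the uppercasing step; only the mandatory processor(todos) signature matters.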

Why use quick_batch

quick_batch aims to be

  • dead simple to use: versus standard cloud batch-transformation services, which require significant configuration and service-specific knowledge

  • ultra fast setup: versus heavier orchestration tools like airflow or mlflow, whose setup may be a hindrance due to time / familiarity / organisational constraints

  • 100% portable: use quick_batch on any machine, anywhere

  • processor-invariant: quick_batch works with arbitrary processes, not just machine learning or deep learning tasks.

  • transparent and open source: quick_batch uses Docker under the hood and abstracts away only the not-so-fun stuff, including instantiation, scaling, and teardown. You can still monitor your processing using familiar Docker commands (like docker service ls, docker service logs, etc.).
