an easy parallel queueing system

Project description

Read this on GitHub or my site.

Overview

ezpq implements a parallel queueing system consisting of:

  1. a priority “waiting” queue (in).

  2. a lookup table of “working” jobs.

  3. a priority “completed” queue (out).

The queueing system uses multiprocessing.Process by default and can also run jobs with threading.Thread.

Features

  • Simple interface; pure Python.

  • No required dependencies outside of the standard library.

  • Optional integration with tqdm (https://github.com/tqdm/tqdm) progress bars.

  • Compatible with Python 2 & 3.

  • Cross-platform: macOS, Linux, and Windows.

  • Data remains in-memory.

  • Priority queueing, both in and out, and within lanes.

  • Synchronous lanes allow dependent jobs to execute in the desired order.

  • Easily switch from processes to threads.

  • Automatic handling of output.

  • Rich job details, easily viewed as a pandas DataFrame.

  • Built-in logging to CSV.

  • Customizable visualizations of queue operations.

How to get it

Install from PyPI with:

pip install ezpq

Optional packages:

pip install pandas    # required for plots
pip install plotnine  # required for plots
pip install tqdm      # required for progress bars

Quickstart

Suppose you wanted to speed up the following code, which runs 60 operations that each take anywhere from 0.5s to 1.5s. With an average job time of ~1s, this operation should take ~60s.

import time
import random
def random_sleep(x):
  random.seed(x)
  n = random.uniform(0.5, 1.5)
  time.sleep(n)
  return n
start = time.time()
output = [random_sleep(x) for x in range(60)]
end = time.time()
print('> Runtime: ' + str(end - start))
## '> Runtime: 58.932034969329834'

Here is the same function run in parallel with an ezpq Queue of 6 workers, which reduces the runtime of the above operation from ~60s to ~10s.

import ezpq
start = time.time()
with ezpq.Queue(6) as Q:
  output = Q.map(random_sleep, range(60))
end = time.time()
print('> Runtime: ' + str(end - start))

Here is the same scenario, using the @ezpq.Queue decorator.

@ezpq.Queue(6)
def random_sleep(x):
    random.seed(x)
    n = random.uniform(0.5, 1.5)
    time.sleep(n)
    return n
output = random_sleep(iterable=range(60))

While map() and the decorator are useful for quick-n-simple parallelization, the essential functions of an ezpq Queue include put(), wait(), and get() (or collect()).

with ezpq.Queue(6) as Q:
    for x in range(60):
        Q.put(random_sleep, args=x)
    Q.wait()
    output = Q.collect()

The output is a list of dicts containing verbose information about each job, along with its output and exit code.

print( output[0] )
## {'args': [0],
##  'callback': None,
##  'cancelled': False,
##  'ended': datetime.datetime(2019, 1, 28, 17, 45, 29, 943860),
##  'exception': None,
##  'exitcode': 0,
##  'function': 'random_sleep',
##  'id': 1,
##  'kwargs': None,
##  'lane': None,
##  'name': 1,
##  'output': 1.3444218515250481,
##  'priority': 100,
##  'processed': datetime.datetime(2019, 1, 28, 17, 45, 29, 998175),
##  'qid': 'd6eaaf93',
##  'runtime': 1.3502492904663086,
##  'started': datetime.datetime(2019, 1, 28, 17, 45, 28, 593611),
##  'submitted': datetime.datetime(2019, 1, 28, 17, 45, 28, 489300),
##  'timeout': 0}

Easily convert the output to a pandas DataFrame:

import pandas as pd
df = pd.DataFrame(output)
print( df.head()[['id', 'output', 'runtime', 'exitcode']] )
##    id    output   runtime  exitcode
## 0   1  1.344422  1.350249         0
## 1   2  0.634364  0.638975         0
## 2   3  1.456034  1.460431         0
## 3   4  0.737965  0.742028         0
## 4   5  0.736048  0.740672         0

Use ezpq.Plot to generate a Gantt chart of the job timings.

plt = ezpq.Plot(output).build(show_legend=False)
plt.save('docs/imgs/quickstart.png')

ezpq.Queue

The Queue class implements the queueing system, which is itself a 3-part system composed of the:

  1. waiting queue

  2. working table

  3. completed queue

## Help on function __init__ in module ezpq.Queue:
##
## __init__(self, n_workers=8, max_size=0, job_runner=<class 'multiprocessing.context.Process'>, auto_remove=False, auto_start=True, auto_stop=False, callback=None, log_file=None, poll=0.1, show_progress=False, qid=None)
##     Implements a parallel queueing system.
##
##     Args:
##         n_workers: the max number of concurrent jobs.
##             - Accepts: int
##             - Default: cpu_count()
##         max_size: when > 0, will throw an exception when the number of enqueued jobs exceeds this value. Otherwise, no limit.
##             - Accepts: int
##             - Default: 0 (unlimited)
##         job_runner: the class to use to invoke new jobs.
##             - Accepts: multiprocessing.Process, threading.Thread
##             - Default: multiprocessing.Process
##         auto_remove: controls whether jobs are discarded after completion.
##             - Accepts: bool
##             - Default: False
##         auto_start: controls whether the queue system "pulse" is started upon instantiation (default), or manually.
##             - Accepts: bool
##             - Default: True
##         auto_stop: controls whether the queue system "pulse" stops itself after all jobs are complete.
##             - Accepts: bool
##             - Default: False
##         callback: optional function to execute synchronously immediately after a job completes.
##             - Accepts: function object
##             - Default: None
##         log_file: if file path is specified, job data is written to this path in CSV format.
##             - Accepts: str
##             - Default: None
##         poll: controls the pulse frequency; the amount of time slept between operations.
##             - Accepts: float
##             - Default: 0.1
##
##     Returns:
##         ezpq.Queue object.
##
## None
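
For illustration, here is a minimal sketch (reusing random_sleep from the Quickstart) that swaps the job runner for threading.Thread and writes job data to CSV via log_file, both documented above; the file name 'ezpq_log.csv' is just an arbitrary choice.

import threading
with ezpq.Queue(n_workers=4,
                job_runner=threading.Thread,   # run jobs in threads instead of processes.
                log_file='ezpq_log.csv') as Q: # append job data to this CSV as jobs complete.
    for x in range(10):
        Q.put(random_sleep, args=x)
    Q.wait()
    output = Q.collect()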

ezpq.Job

An ezpq job defines the function to run. It is passed to an ezpq queue with a call to submit().

## Help on function __init__ in module ezpq.Job:
##
## __init__(self, function, args=None, kwargs=None, name=None, priority=100, lane=None, timeout=0, suppress_errors=False)
##     Defines what to run within a `ezpq.Queue`, and how to run it.
##
##     Args:
##         function: the function to run.
##             - Accepts: function object
##         args: optional positional arguments to pass to the function.
##             - Accepts: list, tuple
##             - Default: None
##         kwargs: optional keyword arguments to pass to the function.
##             - Accepts: dict
##             - Default: None
##         name: optional name to give to the job. Does not have to be unique.
##             - Accepts: str
##             - Default: None; assumes same name as job id.
##         priority: priority value to assign. Lower values get processed sooner.
##             - Accepts: int
##             - Default: 100
##         lane: a sequential lane to place the job in. If it does not already exist, it will be created.
##             - Accepts: int, str; any hashable object
##             - Default: None; no lanes.
##         timeout: When > 0, if this value (in seconds) is exceeded, the job is terminated. Otherwise, no limit enforced.
##             - Accepts: float
##             - Default: 0 (unlimited)
##
##     Returns:
##         ezpq.Job object
##
## None
with ezpq.Queue(6) as Q:
  for x in range(60):
    priority = x % 2 # give even numbers higher priority.
    job = ezpq.Job(random_sleep, args=x, priority=priority)
    Q.submit(job)
  Q.wait()
  output = Q.collect()

put

The put method creates a job and submits it to an ezpq queue. All of its arguments are passed to ezpq.Job().

with ezpq.Queue(6) as Q:
    for x in range(60):
        Q.put(random_sleep, args=x)
    Q.wait()
    output = Q.collect()

size

size() returns a count of all items across all three queue components. It accepts three boolean parameters, waiting, working, and completed. If all of these are False (default), all jobs are counted. If any combination of these is True, only the corresponding queue(s) will be counted. For example:

def print_sizes(Q):
    msg = 'Total: {0}; Waiting: {1}; Working: {2}; Completed: {3}'.format(
        Q.size(),
        Q.size(waiting=True),
        Q.size(working=True),
        Q.size(completed=True)
    )
    print(msg)
with ezpq.Queue(6) as Q:
    # enqueue jobs
    for x in range(60):
        Q.put(random_sleep, x)
    # repeatedly print sizes until complete.
    while Q.has_work():
        print_sizes(Q)
        time.sleep(1)
    print_sizes(Q)
## 'Total: 60; Waiting: 60; Working: 0; Completed: 0'
## 'Total: 60; Waiting: 51; Working: 6; Completed: 3'
## 'Total: 60; Waiting: 46; Working: 6; Completed: 8'
## 'Total: 60; Waiting: 39; Working: 6; Completed: 15'
## 'Total: 60; Waiting: 34; Working: 6; Completed: 20'
## 'Total: 60; Waiting: 31; Working: 6; Completed: 23'
## 'Total: 60; Waiting: 24; Working: 6; Completed: 30'
## 'Total: 60; Waiting: 17; Working: 6; Completed: 37'
## 'Total: 60; Waiting: 11; Working: 6; Completed: 43'
## 'Total: 60; Waiting: 6; Working: 6; Completed: 48'
## 'Total: 60; Waiting: 0; Working: 5; Completed: 55'
## 'Total: 60; Waiting: 0; Working: 1; Completed: 59'
## 'Total: 60; Waiting: 0; Working: 0; Completed: 60'

wait

The wait() method will block execution until all jobs complete. It also accepts a timeout parameter, given in seconds. The return value is the count of jobs that did not complete. Thus, a return value greater than 0 indicates the timeout was exceeded. The parameter poll can be used to adjust how frequently (in seconds) the operation checks for completed jobs (default=0.1).

New in v0.2.0, include show_progress=True to show a progress bar while waiting. This is equivalent to a call to waitpb().
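
For example, a minimal sketch (reusing random_sleep from the Quickstart) that waits for at most 5 seconds before checking in; the timeout value here is arbitrary.

with ezpq.Queue(6) as Q:
    for x in range(60):
        Q.put(random_sleep, args=x)
    # block for up to 5 seconds; the return value is the number of unfinished jobs.
    n_remaining = Q.wait(timeout=5, poll=0.1)
    if n_remaining > 0:
        Q.wait()  # timeout exceeded; keep waiting (without limit) for the rest.
    output = Q.collect()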

get

get() retrieves and removes (“pops”) the highest priority job from the completed queue, if one is available. If the completed queue is empty, get() returns None immediately. However, when a poll frequency greater than 0 is given, get() will wait for a completed job; if the timeout is exceeded before one arrives, None is returned.

with ezpq.Queue(6) as Q:
    n_inputs = 60
    output = [None] * n_inputs
    # enqueue jobs
    for x in range(n_inputs):
        Q.put(random_sleep, args=x)

    # repeatedly `get()` until queue is empty.
    for i in range(n_inputs):
        output[i] = Q.get(poll=0.1)

collect

collect() is similar to get(), but it will return a list of all completed jobs and clear the completed queue. It does not support the poll or timeout parameters, but you can call wait() before collect() if desired.

with ezpq.Queue(6) as Q:
    # enqueue jobs
    for x in range(60):
        Q.put(random_sleep, x)
    # wait and collect all jobs
    print('Queue size before: {0}'.format(Q.size()))
    Q.wait()
    output = Q.collect()
    print('Queue size after: {0}'.format(Q.size()))
    print('Output size: {0}'.format(len(output)))
## 'Queue size before: 60'
## 'Queue size after: 0'
## 'Output size: 60'

map

map() encapsulates the logic of put(), wait(), and collect() in one call, as shown below. Include show_progress=True to display a tqdm progress bar.
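
A minimal sketch (assuming tqdm is installed and reusing random_sleep from the Quickstart):

with ezpq.Queue(6) as Q:
    # put + wait + collect in one call, with a tqdm progress bar.
    output = Q.map(random_sleep, range(60), show_progress=True)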

dispose

The queueing operations of an ezpq.Queue are performed on a periodic basis. By default, the poll parameter for a Queue is 0.1 seconds. This “pulse” thread will continue firing until the Queue is disposed of.

In the previous examples, use of the context manager (with ezpq.Queue() as Q:) results in automatic disposal. If not using the context manager (or decorator), clean up after yourself with dispose().
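
For example, a sketch of the same workflow without the context manager, where you call dispose() yourself when finished:

Q = ezpq.Queue(6)
try:
    for x in range(60):
        Q.put(random_sleep, args=x)
    Q.wait()
    output = Q.collect()
finally:
    Q.dispose()  # stop the pulse and clean up the queue.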

Synchronous Lanes

When you have jobs that are dependent upon one another, you can use “lanes” to execute them in sequence, as in the sketch below. All that is required is an arbitrary lane name/id passed to the lane parameter of put. Empty lanes are automatically removed.
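
A minimal sketch (reusing random_sleep from the Quickstart) that spreads 60 jobs across 5 lanes; jobs sharing a lane run one after another, while different lanes still run in parallel:

with ezpq.Queue(6) as Q:
    for x in range(60):
        # jobs with the same lane id execute sequentially, in submission order.
        Q.put(random_sleep, args=x, lane=x % 5)
    Q.wait()
    output = Q.collect()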

In a Gantt chart of such a run (see the ezpq.Plot example below), notice how bars for jobs in the same lane never overlap: those jobs executed synchronously.

ezpq.Plot

The Plot class is used to visualize the wait, start, and end times for each job that entered the queueing system. The class is initialized with a list of dicts; exactly what is returned from a call to collect() or map().

Arguments given to build() control various aspects of the plot, from coloring to faceting to theming.

## Help on function build in module ezpq.Plot:
##
## build(self, color_by='qid', facet_by='qid', facet_scale='fixed', show_legend=True, bar_width=1, title=None, color_pal=None, theme='bw')
##     Produces a plot based on the data and options provided to a `ezpq.Plot()` object.
##
##     Args:
##         color_by: controls the column to use for coloring the bars.
##             - Accepts: one of 'qid', 'priority', 'lane', 'cancelled', 'exitcode', 'name', 'output'
##             - Default: 'qid'
##         facet_by: controls the column to use for facetting the plot.
##             - Accepts: one of 'qid', 'priority', 'lane', 'cancelled', 'exitcode', 'name', 'output'
##             - Default: 'qid'
##         facet_scale: controls the scale of the x/y axis across facets.
##             - Accepts: one of 'fixed', 'free', 'free_x', 'free_y'
##             - Default: 'fixed'
##         show_legend: controls whether the legend is drawn.
##             - Accepts: bool
##             - Default: True
##         bar_width: controls the bar width
##             - Accepts: float
##             - Default: 1
##         title: optional title to be drawn above the plot.
##             - Accepts: str, None
##             - Default: None
##         theme:
##             - Accepts: 'bw', 'classic', 'gray', 'grey', 'seaborn', '538', 'dark', 'matplotlib', 'minimal', 'xkcd', 'light'
##             - Default: 'bw'
##     Returns:
##         The plot produced from plotnine.ggplot().
##
## None
with ezpq.Queue(6) as Q:
  for x in range(60):
    lane = x % 5
    Q.put(random_sleep, x, timeout=1, lane=lane)
  Q.wait()
  output = Q.collect()
plt = ezpq.Plot(output).build(facet_by='lane', show_legend=False)
plt.save('docs/imgs/lanes2.png')

Each horizontal bar represents an independent job id. The start of the gray bar indicates when the job entered the queueing system. The colored bar spans from when the job started running to when it ended. The gray bar that follows (if any) reflects how long it took for the queue operations to recognize the finished job, join the job data with its output, remove it from the working table, and place it in the completed queue.

More Examples

Many more examples can be found in docs/examples.ipynb.
