
Akasaka

Dynamic multiprocess preprocessing task loader and dispatcher.

In brief, akasaka lets you write a task class and run it in parallel with a single command. It features:

  • Argument parsing through argparse
  • Cache management: override is_executed to check whether a task has already been executed
  • Automatic parallelization
  • Some helper functions to make your life easier (to be added later)

Install

pip install akasaka

Usage

$ akasaka -H
usage: akasaka [--num_process NUM_PROCESS] [--chunksize CHUNKSIZE] [--devices DEVICES [DEVICES ...]] [-h] [-H] module_path

Dynamically load a Python class from a module.

positional arguments:
  module_path           The module path in the format "module.submodule.ClassName"

optional arguments:
  --num_process NUM_PROCESS
                        Number of processes to use, for normal task (default: number of CPU cores)
  --chunksize CHUNKSIZE
                        Number of tasks to be sent to a worker process at a time, for normal task (default: 1)
  --devices DEVICES [DEVICES ...]
                        The devices to run the task on, for torch task (default: [])
  -h, --help            Show help message for task and exit
  -H, --hel-akasaka     Show Akasaka help message and exit

Write a task class as a subclass of AsakasaTask and implement the required methods, as sketched below. Take a look at the examples in the examples directory.
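As a rough illustration, a task module might look like the following. Only is_executed is named in this README; the import path, the add_arguments hook, the get_items method, and the process method are assumptions made for illustration, so check examples/print.py for the actual interface.

# my_task.py -- hypothetical sketch, not the verified akasaka API
from akasaka import AsakasaTask  # import path is an assumption

class MyTask(AsakasaTask):
    # Hypothetical hook for registering task-specific argparse arguments (name and signature assumed).
    @staticmethod
    def add_arguments(parser):
        parser.add_argument("--test", type=str, required=True)

    # Hypothetical method producing the work items that get distributed across processes.
    def get_items(self):
        return ["a", "b", "c"]

    # Documented hook: return True if this piece of work is already done, so it can be skipped.
    def is_executed(self, item):  # signature assumed
        return False

    # Hypothetical per-item worker body (name assumed), run in parallel by the dispatcher.
    def process(self, item):
        print(item)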

For PyTorch CUDA tasks, use AsakasaTorchTask as the base class instead. The number of processes is determined by the number of CUDA devices passed via --devices.
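For example, an invocation might look like the following; the value format for --devices is an assumption (it may expect device indices or names such as cuda:0), so check the examples for the exact form.

# hypothetical torch task: one worker process per listed CUDA device
akasaka my_module.MyTorchTask --devices 0 1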

With akasaka installed, try the following commands to run the examples:

cd examples
# print.py
akasaka print.PrintTaskTest --test test_string
# directory_example/print.py
akasaka directory_example.print.PrintTaskTest --test test_string

Development

To install from source, clone the repository and run

pip install -e .

Known Issues

For torch (CUDA) tasks, you might have to wait an unexpectedly long time after the model is loaded before the task starts. The cause has not yet been identified.
