Project description

Simply start an OAR job array on the NEF cluster.

Install

pip install --upgrade OarLauncher

Usage

import treefiles as tf
from OarLauncher import ArrayJob


# Choose a directory where the scripts and logs will be dumped
out_dir = tf.Tree.new(__file__, "generated").dump(clean=True)

# Create the parameters array: one entry per job
nb_jobs, data = 10, ArrayJob.Data
for i in range(nb_jobs):
    data["simu_dir"].append(f"d_{i}")
    data["infos"].append(f"this is job {i}")

# Path of the script that will be called by each job of the array
# Each line of the data array is passed to this script as a JSON command-line argument
job_script = tf.curDirs(__file__, "job.py")

# Create the job array
jobs = ArrayJob(out_dir, data, job_script)
# Set up the job configuration (walltime in minutes, OAR queue)
jobs.build_oar_command(minutes=100, queue=tf.oar.Queue.BESTEFFORT)
# Write scripts
jobs.dump()
# Start the job array (blocking operation)
shell_out = jobs.run()
print(shell_out)
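
Each job receives its line of the parameters array as a JSON string on the command line. Below is a minimal sketch of what job.py could look like; the exact argument format and field names are assumptions taken from the example above, so adapt the parsing to your own setup.

import json
import sys


def main():
    # Assumption: the launcher passes this job's parameters as a single JSON string argument
    params = json.loads(sys.argv[1])
    simu_dir = params["simu_dir"]
    infos = params["infos"]

    # Replace this with the actual work of the job
    print(f"Running job in {simu_dir}: {infos}")


if __name__ == "__main__":
    main()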
