
A lightweight machine learning experiments scheduler in a few lines of simple Python

Project description

ml_scheduler

Join the chat at https://gitter.im/huyiwen/ml_scheduler



Quick Start

  1. Install ml_scheduler:
pip install ml_scheduler
  2. Create a Python script:
import ml_scheduler

# resource pools: GPUs 0 and 2 (the 90 is presumably a utilization/memory threshold) and a target filesystem
cuda = ml_scheduler.pools.CUDAPool([0, 2], 90)
disk = ml_scheduler.pools.DiskPool('/one-fs')


@ml_scheduler.exp_func
async def mmlu(exp: ml_scheduler.Exp, model, checkpoint):

    source_dir = f"/another-fs/model/{model}/checkpoint-{checkpoint}"
    target_dir = f"/one-fs/model/{model}-{checkpoint}"

    # resources will be cleaned up after exiting the function
    disk_resource = await exp.get(
        disk.copy_folder,
        source_dir,
        target_dir,
        cleanup_target=True,
    )
    cuda_resource = await exp.get(cuda.allocate, 1)

    # run inference
    args = [
        "python", "inference.py",
        "--model", target_dir,
        "--dataset", "mmlu",
        "--cuda", str(cuda_resource[0]),
    ]
    stdout = await exp.run(args=args)

    # report the metric under the same name passed to run_csv below
    await exp.report({'Accuracy': stdout})


mmlu.run_csv("experiments.csv", ['Accuracy'])

Mark the function with @ml_scheduler.exp_func and declare it async to make it an experiment function. The function should take an exp argument as its first parameter.

Then use await exp.get to acquire resources (non-blocking) and await exp.run to run the experiment (also non-blocking). Non-blocking means that while one experiment waits for resources or a subprocess, other experiments can run concurrently.
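As a minimal sketch of that pattern (quick_check, message, the echo command, and the 'Result' key are placeholder names; the cuda pool is the one defined in the script above):

@ml_scheduler.exp_func
async def quick_check(exp: ml_scheduler.Exp, message):
    # wait for one GPU from the pool; it is released automatically when the function exits
    cuda_resource = await exp.get(cuda.allocate, 1)

    # run a placeholder command as a subprocess and capture its stdout
    stdout = await exp.run(args=["echo", message])

    # report one value; the key should match a column name passed to run_csv
    await exp.report({'Result': stdout})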

  3. Create a CSV file experiments.csv with your arguments (model and checkpoint in this case):
model,checkpoint
alpacaflan-packing,200
alpacaflan-packing,400
alpacaflan-qlora,200-merged
alpacaflan-qlora,400-merged
  4. Run the script:
python run.py

The results (Accuracy in this case) and some other information will be saved in results.csv.
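For intuition about the concurrency model: each CSV row becomes one scheduled run, and awaiting a slow step (copying a folder, waiting for a GPU, running a subprocess) lets the other runs make progress. The self-contained sketch below illustrates that general asyncio pattern with a stand-in experiment; it is not ml_scheduler's actual implementation:

import asyncio
import csv


async def fake_experiment(model: str, checkpoint: str) -> None:
    # stand-in for resource acquisition and inference; awaiting yields control to other runs
    await asyncio.sleep(0.1)
    print(f"finished {model} checkpoint {checkpoint}")


async def main() -> None:
    with open("experiments.csv") as f:
        rows = list(csv.DictReader(f))
    # one concurrent task per CSV row, mirroring how run_csv schedules one experiment per row
    await asyncio.gather(*(fake_experiment(**row) for row in rows))


asyncio.run(main())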

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ml_scheduler-1.0.0.tar.gz (14.8 kB)

Uploaded Source

Built Distribution

ml_scheduler-1.0.0-py3-none-any.whl (11.9 kB)

Uploaded Python 3

File details

Details for the file ml_scheduler-1.0.0.tar.gz.

File metadata

  • Download URL: ml_scheduler-1.0.0.tar.gz
  • Upload date:
  • Size: 14.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.8.18

File hashes

Hashes for ml_scheduler-1.0.0.tar.gz

Algorithm    Hash digest
SHA256       a4fb95473ef85bb6557fc700f0d2dadf6850dd26124bdee50543adfbf82712ab
MD5          0fefe36739b3e564a92578120c9453ca
BLAKE2b-256  03c80d5b370fbf0e5c0e8e741b2a07cdb42275cc92728646a8ac8b70c3d1cb96

See more details on using hashes here.
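If you want to check a downloaded file against the digests above, one option (assuming the archive is in the current directory) is Python's hashlib:

import hashlib

# compute the SHA256 digest of the downloaded sdist and compare it with the value listed above
with open("ml_scheduler-1.0.0.tar.gz", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())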

File details

Details for the file ml_scheduler-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: ml_scheduler-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 11.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.8.18

File hashes

Hashes for ml_scheduler-1.0.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       52bf27cb7a76a0aed61b21f8208ed69a0617f4734f8cb11fbf6ee5fb2d5b42f9
MD5          abcaac6a5d6ede7559942cf26ab51003
BLAKE2b-256  19edea0f77e67a93833055a93b7cccd76724a2e58ea1130db01ff5cb081fd6ce

See more details on using hashes here.
