
Project description

ml_scheduler


ML Scheduler is a lightweight machine learning experiment scheduler that automates resource management (e.g., GPUs and models) and batch runs experiments with just a few lines of Python code.

Quick Start

  1. Install ml-scheduler
pip install ml-scheduler

or install from the GitHub repository:

git clone https://github.com/huyiwen/ml_scheduler
cd ml_scheduler
pip install -e .
  2. Create a Python script:
import ml_scheduler

# resource pools: GPUs 0 and 2, and a directory on the /one-fs filesystem
cuda = ml_scheduler.pools.CUDAPool([0, 2], 90)
disk = ml_scheduler.pools.DiskPool('/one-fs')


@ml_scheduler.exp_func
async def mmlu(exp: ml_scheduler.Exp, model, checkpoint):

    source_dir = f"/another-fs/model/{model}/checkpoint-{checkpoint}"
    target_dir = f"/one-fs/model/{model}-{checkpoint}"

    # resources will be cleaned up after exiting the function
    disk_resource = await exp.get(
        disk.copy_folder,
        source_dir,
        target_dir,
        cleanup_target=True,
    )
    cuda_resource = await exp.get(cuda.allocate, 1)

    # run inference on the allocated GPU
    args = [
        "python", "inference.py",
        "--model", target_dir,
        "--dataset", "mmlu",
        "--cuda", str(cuda_resource[0]),
    ]
    stdout = await exp.run(args=args)
    await exp.report({'Accuracy': stdout})


mmlu.run_csv("experiments.csv", ['Accuracy'])

Mark the function with @ml_scheduler.exp_func and declare it async to make it an experiment function. The function must take the exp handle as its first argument.

Then use await exp.get to acquire resources (non-blocking) and await exp.run to run the experiment (also non-blocking). Because these calls do not block, multiple experiments can run concurrently.
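The quick start invokes an inference.py that you supply yourself; it is not part of ml-scheduler. A minimal, hypothetical stand-in is sketched below, assuming only that the script prints the metric value to stdout, since exp.run captures stdout and exp.report records it:

import argparse

# Hypothetical stand-in for the user-supplied inference.py (not part of
# ml-scheduler). The scheduler passes the copied model directory and the
# allocated GPU index on the command line.
def evaluate(model_dir: str, dataset: str, device: str) -> float:
    """Placeholder: load the model from model_dir and score it on dataset."""
    raise NotImplementedError

parser = argparse.ArgumentParser()
parser.add_argument("--model", required=True)
parser.add_argument("--dataset", required=True)
parser.add_argument("--cuda", required=True, help="GPU index allocated by the scheduler")
args = parser.parse_args()

# exp.run captures stdout, so print only the metric value.
print(evaluate(args.model, args.dataset, device=f"cuda:{args.cuda}"))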

  3. Create a CSV file experiments.csv with your arguments (model and checkpoint in this case); each row defines one experiment run:
model,checkpoint
alpacaflan-packing,200
alpacaflan-packing,400
alpacaflan-qlora,200-merged
alpacaflan-qlora,400-merged
  4. Run the script:
python run.py

The results (Accuracy in this case) and some other information will be saved in results.csv.
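Under the hood, run_csv amounts to the usual asyncio fan-out pattern. A rough sketch of that pattern, reusing the mmlu function defined above, is shown here; this illustrates the concurrency model rather than the library's actual implementation, and it assumes the @exp_func decorator lets the experiment be awaited with just the CSV columns as keyword arguments:

import asyncio
import csv

async def run_all(csv_path: str) -> None:
    # Read one experiment configuration per CSV row.
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Schedule every experiment at once; each coroutine suspends while it
    # waits for a free GPU or disk slot, so runs overlap automatically.
    await asyncio.gather(*(mmlu(**row) for row in rows))

asyncio.run(run_all("experiments.csv"))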

More Examples

See the GitHub repository (https://github.com/huyiwen/ml_scheduler) for more examples.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ml_scheduler-1.1.0.tar.gz (131.7 kB)


Built Distribution

ml_scheduler-1.1.0-py3-none-any.whl (12.4 kB)


File details

Details for the file ml_scheduler-1.1.0.tar.gz.

File metadata

  • Download URL: ml_scheduler-1.1.0.tar.gz
  • Upload date:
  • Size: 131.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.8.18

File hashes

Hashes for ml_scheduler-1.1.0.tar.gz
Algorithm Hash digest
SHA256 7368fb2393c100317752b1928102bd536829a40c9bf4684e7970f1e6c154fca5
MD5 da16eef9cfd69fe6d3b087048fe90823
BLAKE2b-256 3e7ef5c1e9b4bf75563688b4a3cff3a8a4f05b915f86ed777afd8f3cfa7a616b

Hashes let you verify that a downloaded file matches the published release.
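For example, a downloaded archive can be checked against the SHA256 digest above with Python's standard hashlib module (a generic verification snippet, not part of ml-scheduler):

import hashlib

# Compare the downloaded sdist against the SHA256 digest published above.
expected = "7368fb2393c100317752b1928102bd536829a40c9bf4684e7970f1e6c154fca5"
with open("ml_scheduler-1.1.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
assert actual == expected, "hash mismatch: file may be corrupted or tampered with"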

File details

Details for the file ml_scheduler-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: ml_scheduler-1.1.0-py3-none-any.whl
  • Upload date:
  • Size: 12.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.8.18

File hashes

Hashes for ml_scheduler-1.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 6880fe82a3fa45db71ab6fa0f6f79d339a99e8939570a77ab5b0828c5607d6b1
MD5 bca2ba3d1cc3630f490a4dfdeee05716
BLAKE2b-256 98b2c6702e75659998723757b09ab4a9678e00ea55504d9296b049607d2a5566

