

LightTuner


A simple hyper-parameter optimization toolkit:

  • hpo: automatic hyper-parameter tuning
  • scheduler: automatic task resource scheduler

Installation

You can simply install it with pip from the official PyPI site:

pip install lighttuner

Or install from the latest source code as follows:

git clone https://github.com/opendilab/LightTuner.git
cd LightTuner
pip install . --user
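
Either way, a quick import check confirms the package is available (a minimal sanity check, not part of the official instructions; the import path is the one used in the quick start below):

python -c "from lighttuner.hpo import hpo"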

Quick Start for HPO

Here is a simple example:

import random
import time

from ditk import logging

from lighttuner.hpo import hpo, R, M, uniform, randint


@hpo
def opt_func(v):  # this function is still usable after decorating
    x, y = v['x'], v['y']
    time.sleep(5.0)
    logging.info(f"This time's config: {v!r}")  # log will be captured
    if random.random() < 0.5:  # randomly raise exception
        raise ValueError('Fxxk this shxt')  # retry is supported

    return {
        'result': x * y,
        'sum': x + y,
    }


if __name__ == '__main__':
    logging.try_init_root(logging.DEBUG)
    print(opt_func.bayes()  # bayesian optimization algorithm
          .max_steps(50)  # max steps
          .minimize(R['result'])  # the maximize/minimize target you need to optimize,
          .concern(M['time'], 'time_cost')  # extra concerned values (from metrics)
          .concern(R['sum'], 'sum')  # extra concerned values (from return value of function)
          .stop_when(R['result'] <= -800)  # conditional stop is supported
          .spaces(  # search spaces
        {
            'x': uniform(-10, 110),  # continuous space
            'y': randint(-10, 20),  # integer based space
            'z': {
                # 't': choice(['a', 'b', 'c', 'd', 'e']),  # enumerate space
                't': uniform(0, 10),  # enumerate space is not supported in bayesian optimization
            },
        }
    ).run())

This optimization process runs in parallel, using n workers by default, where n is the number of CPUs. If you need to customize the number of workers, just use the max_workers(n) method, as shown in the sketch below.
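
For example, a run capped at four parallel workers might look like this (a minimal sketch reusing opt_func and the search space from the example above; the worker count of 4 and the placement of max_workers in the call chain are illustrative assumptions):

print(opt_func.bayes()  # bayesian optimization algorithm
      .max_steps(50)  # max steps
      .max_workers(4)  # assumed usage: cap the pool at 4 parallel workers instead of one per CPU
      .minimize(R['result'])  # the target to optimize
      .spaces({
          'x': uniform(-10, 110),  # continuous space
          'y': randint(-10, 20),  # integer based space
      })
      .run())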

Quick Start for Scheduler

You can refer to lighttuner/scheduler/README.md for more details.

Contributing

We appreciate all contributions to improve LightTuner, in both its logic and its system design. Please refer to CONTRIBUTING.md for more guidance.

License

LightTuner is released under the Apache 2.0 license.

