
A benchmarking framework for time series

Project description

TSBenchmark



What is TSBenchmark

TSBenchmark is a distributed benchmarking framework designed for time series forecasting tasks that use automated machine learning (AutoML) algorithms.

Overview

TSBenchmark supports both time series and AutoML characteristics.

For time series forecasting, it supports univariate forecasting, multivariate forecasting, and forecasting with covariates. During a run, it collects the optimal hyperparameter combinations, performance metrics, and other key parameters, supporting the analysis and evaluation of AutoML frameworks.

The framework supports distributed operation, which makes large benchmarks run efficiently. It integrates the lightweight distributed scheduling framework from Hypernets and can run in both Python and Conda virtual environments. For environment isolation, Conda is recommended as the environment manager when benchmarking different algorithms.
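As an illustration of these task types (a hedged sketch, not TSBenchmark's own API; the column names are hypothetical), univariate and multivariate series and their covariates can be represented as pandas DataFrames indexed by time:

```python
import pandas as pd

# Univariate forecasting: a single target column indexed by time.
univariate = pd.DataFrame(
    {"sales": [10.0, 12.5, 11.8]},
    index=pd.date_range("2024-01-01", periods=3, freq="D"),
)

# Multivariate forecasting: several target columns forecast jointly.
multivariate = pd.DataFrame(
    {"sales": [10.0, 12.5, 11.8], "visits": [120, 140, 133]},
    index=pd.date_range("2024-01-01", periods=3, freq="D"),
)

# Covariates: exogenous inputs (e.g. a holiday flag) known alongside the targets.
covariates = pd.DataFrame(
    {"is_holiday": [0, 1, 0]},
    index=univariate.index,
)

print(univariate.shape, multivariate.shape, covariates.shape)
```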

Installation

Pip

Use the `pip` command to install tsbenchmark:

pip install tsbenchmark

Examples

Define your player.

  • tsbenchmark.yaml: the global benchmark configuration.
  • players
    • am_navie_player: the directory of a specific algorithm.
      • exec.py: (required) the algorithm to be tested.
      • player.yaml: (required) metadata settings of the algorithm.

See tsbenchmark.yaml and Examples for complete sample configurations.

exec.py

Integrate the forecasting task to be evaluated through the API, covering task reading, model training, prediction, and result reporting.

import tsbenchmark as tsb

# Fetch the task assigned to this player by the benchmark server.
task = tsb.api.get_task()

# Navie is the naive baseline model; see players/plain_navie_player/exec.py
snavie = Navie().fit(task.get_train(), task.series_name)
df_forecast = snavie.predict(task.horizon)

# Report the forecast back to the benchmark for evaluation.
tsb.api.send_report_data(task, df_forecast)
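The `Navie` class above is defined in the example player directory. A minimal forecaster of that kind, which simply repeats the last observed value over the horizon, might be sketched as follows; the class and column names here are hypothetical illustrations, not TSBenchmark API:

```python
import pandas as pd

class NaiveForecaster:
    """Repeats the last observed value of each series over the forecast horizon."""

    def fit(self, df_train: pd.DataFrame, series_names):
        # Remember the final observation of every target series.
        self.last_values_ = df_train[list(series_names)].iloc[-1]
        return self

    def predict(self, horizon: int) -> pd.DataFrame:
        # Emit one identical row per future step.
        return pd.DataFrame([self.last_values_] * horizon).reset_index(drop=True)

train = pd.DataFrame({"y": [1.0, 2.0, 3.0]})
forecast = NaiveForecaster().fit(train, ["y"]).predict(horizon=2)
print(forecast)  # two rows, both repeating the last value 3.0
```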

player.yaml

Use customized settings to specify the operating environment in which the algorithm runs.

env:
  venv:
    kind: custom_python
    config:
      py_executable: /usr/anaconda3/envs/tsb-hyperts/bin/python
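For illustration, the `env` section above is plain YAML and can be read with PyYAML (assumed installed; this is a sketch, not how TSBenchmark itself loads the file):

```python
import yaml  # PyYAML

player_yaml = """
env:
  venv:
    kind: custom_python
    config:
      py_executable: /usr/anaconda3/envs/tsb-hyperts/bin/python
"""

cfg = yaml.safe_load(player_yaml)
venv = cfg["env"]["venv"]
print(venv["kind"])                     # custom_python
print(venv["config"]["py_executable"])  # the interpreter the player runs under
```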

For more examples, please refer to Quick Start and Examples.

Run TSBenchmark with Command Line Tools

tsb run --config benchmark_example_remote.yaml
tsb -h

usage: tsb [-h] [--log-level LOG_LEVEL] [-error] [-warn] [-info] [-debug]
           {run,compare} ...

tsb command is used to manage benchmarks

positional arguments:
  {run,compare}
    run                 run benchmark
    compare             compare benchmark reports

optional arguments:
  -h, --help            show this help message and exit

Console outputs:
  --log-level LOG_LEVEL
                        logging level, default is INFO
  -error                alias of "--log-level=ERROR"
  -warn                 alias of "--log-level=WARN"
  -info                 alias of "--log-level=INFO"
  -debug                alias of "--log-level=DEBUG"          

Datasets reference

data_desc

TSBenchmark related projects

  • Hypernets: A general automated machine learning (AutoML) framework.
  • HyperGBM: A full-pipeline AutoML tool integrating various GBM models.
  • HyperDT/DeepTables: An AutoDL tool for tabular data.
  • HyperTS: A full-pipeline AutoML & AutoDL tool for time series datasets.
  • HyperKeras: An AutoDL tool for Neural Architecture Search and Hyperparameter Optimization on TensorFlow and Keras.
  • HyperBoard: A visualization tool for Hypernets.
  • Cooka: A lightweight interactive AutoML system.

Documents

DataCanvas

TSBenchmark is an open source project created by DataCanvas.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tsbenchmark-0.1.0.tar.gz (65.4 kB view details)

Uploaded Source

Built Distribution

tsbenchmark-0.1.0-py3-none-any.whl (62.2 kB view details)

Uploaded Python 3

File details

Details for the file tsbenchmark-0.1.0.tar.gz.

File metadata

  • Download URL: tsbenchmark-0.1.0.tar.gz
  • Upload date:
  • Size: 65.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.8.13

File hashes

Hashes for tsbenchmark-0.1.0.tar.gz
Algorithm Hash digest
SHA256 7b748707f75a32ad459838bf5088ed2ec47d79df42e4f20443c4633b3e3fd561
MD5 24b593e1a2f93865daa629fa3259155d
BLAKE2b-256 47a02292e7779a7bc2298968a3d4705aefb3982d38ffdc9fc55145eaf9e59249

See more details on using hashes here.

File details

Details for the file tsbenchmark-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: tsbenchmark-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 62.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.8.13

File hashes

Hashes for tsbenchmark-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 64e93b6e36c51348d300bafe7ac5952f3d2256f5e5052140eb7bc4a43a8f2fd0
MD5 9768417ff32529ab6fff90cb05d2b15d
BLAKE2b-256 630d544a82f980a1ead6684317c9589099d3fc2b22fb93e3df8e36c953840df9

See more details on using hashes here.
