
Machine Learning Performance Testing Framework




tempeh is a framework to TEst Machine learning PErformance exHaustively, which includes tracking memory usage and run time. This is particularly useful as a pluggable tool for your repository's performance tests. Typically, you want to run these periodically over various datasets and/or with a number of models to catch regressions in run time or memory consumption. This should be as easy as

import pytest
from time import time
from tempeh.configurations import datasets, models

@pytest.mark.parametrize('Dataset', datasets.values())
@pytest.mark.parametrize('Model', models.values())
def test_fit_predict_regression(Dataset, Model):
    dataset = Dataset()
    X_train, X_test = dataset.get_X()
    y_train, y_test = dataset.get_y()
    model = Model()
    # get_max_execution_time is a user-supplied helper (not part of tempeh)
    # returning the time budget for this dataset/model combination.
    max_execution_time = get_max_execution_time(dataset, model)
    if model.compatible_with_dataset(dataset):
        start_time = time()
        model.fit(X_train, y_train)
        duration = time() - start_time

        assert duration < max_execution_time
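The helper get_max_execution_time above stands for whatever time-budget policy your repository uses. A minimal sketch, assuming the dataset's size class variable is a category string such as "small" (the budget numbers here are made up for illustration, not tempeh defaults):

```python
# Hypothetical time budgets per dataset size category; tune for your CI.
_BUDGETS = {"small": 1.0, "medium": 10.0, "large": 60.0}

def get_max_execution_time(dataset, model, budgets=_BUDGETS):
    # Fall back to the smallest budget when the size attribute is missing
    # or unrecognized.
    return budgets.get(getattr(dataset, "size", "small"), 1.0)
```

In practice you would likely key the budget on the (dataset, model) pair rather than on dataset size alone.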


tempeh depends on various packages that provide models, including tensorflow, torch, xgboost, and lightgbm. To install a release version of tempeh, just run

pip install tempeh
Common issues
  • If you're using a 32-bit Python version, you might need to switch to a 64-bit Python version to install tensorflow successfully.
  • If the installation of torch fails, try the recommendation from the PyTorch website for stable versions without CUDA for your Python version and operating system.
  • If the installation of lightgbm or xgboost fails, try using a pip version below 20.0 until the underlying bug is resolved.
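Because these model backends are heavyweight optional dependencies, it can help to check which of them actually installed cleanly before debugging test failures. A small stdlib-only sketch (the backend list simply mirrors the packages named above):

```python
import importlib.util

# Optional model backends mentioned above; extend as needed.
OPTIONAL_BACKENDS = ["tensorflow", "torch", "xgboost", "lightgbm"]

def available_backends(names=OPTIONAL_BACKENDS):
    # find_spec returns None when a package is not installed,
    # without importing it.
    return [name for name in names if importlib.util.find_spec(name) is not None]
```

Using find_spec avoids the cost (and possible import-time errors) of actually importing each backend just to check availability.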



Datasets (located in the datasets/ directory) encapsulate different datasets used for testing.

To add a new one:

  • Create a python file in the datasets/ directory with naming convention [name].py
  • Subclass BasePerformanceDatasetWrapper; the naming convention for the class is [dataset_name]PerformanceDatasetWrapper
  • In __init__ load the dataset and call super().__init__(data, targets, size)
  • Add the class to datasets/__init__.py
  • Make sure the class contains the class variables task, data_type, and size
  • Add an entry to the datasets dictionary in configurations.py
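The steps above can be sketched as follows. Note that BasePerformanceDatasetWrapper is reproduced here only as a stand-in with the described super().__init__(data, targets, size) contract, and the Toy dataset and its class-variable values are hypothetical:

```python
# Stand-in for tempeh's BasePerformanceDatasetWrapper, mimicking the
# super().__init__(data, targets, size) contract described above.
class BasePerformanceDatasetWrapper:
    def __init__(self, data, targets, size):
        self._data = data
        self._targets = targets
        self._size = size

# Hypothetical wrapper following the
# [dataset_name]PerformanceDatasetWrapper naming convention.
class ToyPerformanceDatasetWrapper(BasePerformanceDatasetWrapper):
    task = "regression"      # required class variable
    data_type = "numerical"  # required class variable
    size = "small"           # required class variable

    def __init__(self):
        # Load (here: hard-code) the dataset, then hand it to the base class.
        data = [[0.0], [1.0], [2.0]]
        targets = [0.0, 1.0, 2.0]
        super().__init__(data, targets, self.size)
```

The real class would live in the datasets/ directory and be registered in the datasets dictionary so that the pytest parametrization above picks it up.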


Models (located in the models/ directory) wrap different machine learning models.

To add a new one:

  • Create a python file in the models/ directory with naming convention [name].py
  • Subclass BaseModelWrapper and name the class [name]ModelWrapper
  • In __init__ train the model; the expected signature is __init__(self, ...)
  • Make sure the class contains the class variables tasks and algorithm
  • Add an entry to the models dictionary in configurations.py
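Analogously, a model wrapper can be sketched like this. BaseModelWrapper is again only a stand-in, and the mean-predicting model is purely illustrative:

```python
# Stand-in for tempeh's BaseModelWrapper; the real base class lives in tempeh.
class BaseModelWrapper:
    tasks = None      # tasks the model supports (required class variable)
    algorithm = None  # algorithm family (required class variable)

# Hypothetical wrapper following the [name]ModelWrapper naming convention.
class MeanModelWrapper(BaseModelWrapper):
    tasks = ["regression"]
    algorithm = "baseline_mean"

    def fit(self, X, y):
        # "Training" here just records the mean target value.
        self._mean = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self._mean] * len(X)
```

A fit/predict pair like this is exactly what the timing test at the top exercises for each registered model.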




To contribute please check our Contributing Guide.


Regular (non-Security) Issues

Please submit a report through GitHub issues. A maintainer will respond within a reasonable period of time and handle the issue as follows:

  • bug: triage as bug and provide estimated timeline based on severity
  • feature request: triage as feature request and provide estimated timeline
  • question or discussion: triage as question and respond or notify/identify a suitable expert to respond

Maintainers will link duplicate issues when possible.

Reporting Security Issues

Please take a look at our guidelines for reporting security issues.

