Machine Learning Operations Toolkit

⏳ Tempo: The MLOps Software Development Kit


Tempo enables data scientists to see a productionised machine learning model within moments, not months. It is easy to work with locally and on Kubernetes, whatever your preferred data science tools.


Tempo provides a unified interface to multiple MLOps projects that enable data scientists to deploy and productionise machine learning systems.

  • Package your trained model artifacts into optimized server runtimes (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.)
  • Package custom business logic to production servers.
  • Build an inference pipeline of models and orchestration steps.
  • Include any custom Python components as needed. Examples:
    • Outlier detectors with Alibi-Detect.
    • Explainers with Alibi-Explain.
  • Deploy locally to Docker to test with Docker runtimes.
  • Deploy to production on Kubernetes with configurable runtimes.
  • Seldon customers can deploy with the Seldon Deploy runtime.
  • Run with local unit tests.
  • Create stateful services. Examples:
    • Multi-Armed Bandits.
  • Extract declarative Kubernetes YAML to follow GitOps workflows.


  1. Develop locally.
  2. Test locally on Docker with production artifacts.
  3. Push artifacts to remote bucket store and launch remotely (on Kubernetes).


Motivating Example

Tempo allows you to interact with scalable orchestration engines such as Seldon Core and KFServing, and to leverage a broad range of machine learning serving runtimes such as TensorFlow Serving, Triton, and MLflow.

import numpy as np
from tempo.serve.metadata import ModelFramework
from tempo.serve.model import Model
from tempo.serve.utils import pipeline, predictmethod
from tempo.seldon.k8s import SeldonKubernetesRuntime

k8s_runtime = SeldonKubernetesRuntime()

# Example model artifact locations
sklearn_model = Model(name="test-iris-sklearn", runtime=k8s_runtime,
                      platform=ModelFramework.SKLearn, uri="gs://seldon-models/sklearn/iris")

xgboost_model = Model(name="test-iris-xgboost", runtime=k8s_runtime,
                      platform=ModelFramework.XGBoost, uri="gs://seldon-models/xgboost/iris")

@pipeline(name="classifier", runtime=k8s_runtime,
          models=[sklearn_model, xgboost_model])
class MyPipeline(object):

    @predictmethod
    def predict(self, payload: np.ndarray) -> np.ndarray:
        res1 = sklearn_model(payload)
        if res1[0][0] > 0.7:
            return res1
        return xgboost_model(payload)

my_pipeline = MyPipeline()

# Deploy only the models into kubernetes
my_pipeline.deploy_models()

# Run the request using the local pipeline function but reaching to remote models
my_pipeline.predict(np.array([[4.9, 3.1, 1.5, 0.2]]))
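The routing rule inside `predict` can be sanity-checked without deploying anything. A minimal stdlib-only sketch, using nested lists in place of NumPy arrays and hypothetical stub functions in place of the two deployed models:

```python
# Hypothetical stand-ins for the two deployed models; each returns a
# [batch][class-probability] structure like the real model outputs.
def sklearn_stub(payload):
    return [[0.9, 0.1]]

def xgboost_stub(payload):
    return [[0.2, 0.8]]

def route(payload):
    # Same control flow as MyPipeline.predict: keep the sklearn result
    # when its first probability exceeds 0.7, otherwise fall back to xgboost.
    res1 = sklearn_stub(payload)
    if res1[0][0] > 0.7:
        return res1
    return xgboost_stub(payload)

print(route([[4.9, 3.1, 1.5, 0.2]]))  # sklearn branch wins: [[0.9, 0.1]]
```

Because the pipeline body is plain Python, the same conditional logic runs unchanged whether the models are stubs, local Docker deployments, or remote Kubernetes services.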

Productionisation Workflows

Declarative Interface

Even though Tempo provides a dynamic imperative interface, it is possible to convert components into a declarative representation.

yaml = my_pipeline.to_k8s_yaml()
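The returned string can then be committed to a Git repository as part of a GitOps flow. A minimal sketch, where the manifest content is a placeholder rather than real Tempo output:

```python
import pathlib
import tempfile

# Placeholder for the string returned by my_pipeline.to_k8s_yaml()
manifest = "apiVersion: machinelearning.seldon.io/v1\nkind: SeldonDeployment\n"

repo_dir = pathlib.Path(tempfile.mkdtemp())  # in practice: your GitOps repo checkout
out_file = repo_dir / "classifier-pipeline.yaml"
out_file.write_text(manifest)  # commit this file; your CD tooling applies it
print(out_file.name)
```

From there, a continuous-delivery controller (e.g. a tool watching the repository) applies the manifest to the cluster, so deployments are driven by Git history rather than imperative calls.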


Environment Packaging

You can also manage the environments of your pipelines to ensure reproducibility across local and production environments.

@pipeline(name="classifier", runtime=k8s_runtime,
          models=[sklearn_model, xgboost_model])
class MyPipeline(object):
    # ...

my_pipeline = MyPipeline()

# Save the full conda environment of the pipeline
my_pipeline.save()

# Upload the full conda environment
my_pipeline.upload()

# Deploy the full pipeline remotely
my_pipeline.deploy()

# Run the request to the remote deployed pipeline
my_pipeline.remote(np.array([[4.9, 3.1, 1.5, 0.2]]))
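The packaged environment is a standard conda specification; it looks along these lines (package names and versions here are illustrative, not actual Tempo output):

```yaml
name: tempo-pipeline
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - mlops-tempo==0.1.0.dev7
      - numpy
```

Shipping the full environment alongside the pipeline code means the production server resolves exactly the same dependencies as the data scientist's local setup.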



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Files for mlops-tempo, version 0.1.0.dev7:

  • mlops_tempo-0.1.0.dev7-py3-none-any.whl (40.6 kB) — Wheel, Python py3
  • mlops-tempo-0.1.0.dev7.tar.gz (25.2 kB) — Source distribution
