A seamless bridge from model development to model delivery

Truss

The simplest way to serve AI/ML models in production

Why Truss?

  • Write once, run anywhere: Package and test model code, weights, and dependencies with a model server that behaves the same in development and production.
  • Fast developer loop: Implement your model with fast feedback from a live reload server, and skip Docker and Kubernetes configuration with a batteries-included model serving environment.
  • Support for all Python frameworks: From transformers and diffusers to PyTorch and TensorFlow to TensorRT and Triton, Truss supports models created and served with any framework.

See Trusses for popular models, including Llama 2, Stable Diffusion XL, and Whisper, and dozens more examples.

Installation

Install Truss with:

pip install --upgrade truss

Quickstart

As a quick example, we'll package a text classification pipeline from the open-source transformers package.

Create a Truss

To get started, create a Truss with the following terminal command:

truss init text-classification

When prompted, give your Truss a name like Text classification.

Then, navigate to the newly created directory:

cd text-classification
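
Inside, truss init scaffolds the two files the rest of this guide edits. The exact layout can vary by Truss version, but you should see something like:

text-classification/
├── config.yaml        # model serving environment configuration
└── model/
    └── model.py       # the Model class with load() and predict()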

Implement the model

One of the two essential files in a Truss is model/model.py. In this file, you write a Model class: an interface between the ML model that you're packaging and the model server that you're running it on.

There are two member functions that you must implement in the Model class:

  • load() loads the model onto the model server. It runs exactly once when the model server is spun up or patched.
  • predict() handles model inference. It runs every time the model server is called.

Here's the complete model/model.py for the text classification model:

from transformers import pipeline


class Model:
    def __init__(self, **kwargs):
        self._model = None

    def load(self):
        # Runs once when the model server is spun up or patched:
        # download and initialize the text classification pipeline.
        self._model = pipeline("text-classification")

    def predict(self, model_input):
        # Runs on every call to the model server: delegate inference
        # to the loaded pipeline.
        return self._model(model_input)
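
Because Model is plain Python, you can sanity-check it before deploying. The snippet below is an illustrative local test, not part of the Truss workflow; it assumes you run it from the Truss directory with transformers and torch already installed:

from model.model import Model

model = Model()
model.load()  # mirrors server startup: downloads and initializes the pipeline
print(model.predict("Truss is awesome!"))
# Expected output along the lines of:
# [{'label': 'POSITIVE', 'score': 0.9998...}]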

Add model dependencies

The other essential file in a Truss is config.yaml, which configures the model serving environment. For a complete list of the config options, see the config reference.

The pipeline model relies on Transformers and PyTorch. These dependencies must be specified in the Truss config.

In config.yaml, find the requirements line and replace the empty list with:

requirements:
  - torch==2.0.1
  - transformers==4.30.0

No other configuration is needed.
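
For reference, the full config.yaml at this point looks roughly like the sketch below. Everything except requirements is an illustrative default here; see the config reference for the authoritative schema:

model_name: Text classification
python_version: py39
requirements:
  - torch==2.0.1
  - transformers==4.30.0
resources:
  cpu: "1"
  memory: 2Gi
  use_gpu: false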

Deployment

Truss is maintained by Baseten, which provides infrastructure for running ML models in production. We'll use Baseten as the remote host for your model.

Other remotes are coming soon, starting with AWS SageMaker.

Get an API key

To set up the Baseten remote, you'll need a Baseten API key. If you don't have a Baseten account, no worries: sign up and you'll be issued plenty of free credits to get started.

Run truss push

With your Baseten API key ready to paste when prompted, you can deploy your model:

truss push

You can monitor your model deployment from your model dashboard on Baseten.

Invoke the model

After the model has finished deploying, you can invoke it from the terminal.

Invocation

truss predict -d '"Truss is awesome!"'

Response

[
  {
    "label": "POSITIVE",
    "score": 0.999873161315918
  }
]
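
truss predict is a convenience wrapper; the deployed model is also reachable over plain HTTPS. The sketch below assumes Baseten's model endpoint shape and uses placeholder values for the model ID and API key, so treat it as illustrative:

import requests

resp = requests.post(
    "https://model-MODEL_ID.api.baseten.co/production/predict",  # assumed endpoint shape
    headers={"Authorization": "Api-Key YOUR_API_KEY"},  # placeholder credentials
    json="Truss is awesome!",  # same payload as the truss predict call above
)
print(resp.json())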

Truss contributors

Truss is backed by Baseten and built in collaboration with ML engineers worldwide. Special thanks to Stephan Auerhahn @ stability.ai and Daniel Sarfati @ Salad Technologies for their contributions.

We enthusiastically welcome contributions in accordance with our contributors' guide and code of conduct.
