
Truss

The simplest way to serve AI/ML models in production


Truss is the CLI for deploying and serving ML models on Baseten. Package your model's serving logic in Python, launch training jobs, and deploy to production; Truss handles containerization, dependency management, and GPU configuration.

Deploy models from any framework: transformers, diffusers, PyTorch, TensorFlow, vLLM, SGLang, TensorRT-LLM, and more:

Get started | 100+ examples | Documentation

Why Truss?

  • Write once, run anywhere: Package model code, weights, and dependencies with a model server that behaves the same in development and production.
  • Fast developer loop: Iterate with live reload, skip Docker and Kubernetes configuration, and use a batteries-included serving environment.
  • Support for all Python frameworks: From transformers and diffusers to PyTorch and TensorFlow to vLLM, SGLang, and TensorRT-LLM, Truss supports models created and served with any framework.
  • Production-ready: Built-in support for GPUs, secrets, caching, and autoscaling when deployed to Baseten or your own infrastructure.

Installation

Install Truss with:

pip install --upgrade truss

Quickstart

As a quick example, we'll package a text classification pipeline from the open-source transformers package.

Create a Truss

To get started, create a Truss with the following terminal command:

truss init text-classification

When prompted, give your Truss a name like "Text classification".

Then, navigate to the newly created directory:

cd text-classification
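
This creates a scaffold containing the two essential files covered below (abridged; the exact contents may vary by Truss version):

text-classification/
├── config.yaml        # model serving environment configuration
└── model/
    └── model.py       # the Model class implementing load() and predict()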

Implement the model

One of the two essential files in a Truss is model/model.py. In this file, you write a Model class: an interface between the ML model that you're packaging and the model server that you're running it on.

There are two member functions that you must implement in the Model class:

  • load() loads the model onto the model server. It runs exactly once when the model server is spun up or patched.
  • predict() handles model inference. It runs every time the model server is called.

Here's the complete model/model.py for the text classification model:

from transformers import pipeline


class Model:
    def __init__(self, **kwargs):
        # The model is loaded lazily in load(), not in the constructor.
        self._model = None

    def load(self):
        # Runs once when the server spins up (or is patched): download
        # and initialize the default text-classification pipeline.
        self._model = pipeline("text-classification")

    def predict(self, model_input):
        # Runs on every request: classify the input text and return a
        # list of {"label": ..., "score": ...} results.
        return self._model(model_input)
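
As a quick sanity check, you can exercise this class directly in a Python shell before deploying. This is plain Python rather than part of the Truss workflow, and it assumes transformers and torch are installed in your local environment:

# Run from the text-classification/ directory.
from model.model import Model

model = Model()
model.load()  # downloads the pipeline weights on first run
print(model.predict("Truss is awesome!"))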

Add model dependencies

The other essential file in a Truss is config.yaml, which configures the model serving environment. For a complete list of the config options, see the config reference.

The pipeline model relies on Transformers and PyTorch. These dependencies must be specified in the Truss config.

In config.yaml, find the requirements key and replace its empty list with:

requirements:
  - torch==2.0.1
  - transformers==4.30.0

No other configuration is needed.
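
For reference, here is a sketch of what a fuller config.yaml might look like for a GPU-backed model. The values below are illustrative assumptions, not required for this quickstart; consult the config reference for the exact fields your Truss version supports:

model_name: text-classification
python_version: py311
requirements:
  - torch==2.0.1
  - transformers==4.30.0
resources:
  accelerator: T4
  use_gpu: true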

Deployment

Truss is maintained by Baseten and deploys to the Baseten Inference Stack, which combines optimized inference runtimes with production infrastructure for autoscaling, multi-cloud reliability, and fast cold starts.

Get an API key

To set up the Baseten remote, you'll need a Baseten API key. If you don't have a Baseten account, sign up for one; new accounts come with free credits to get you started.

Run truss push

With your Baseten API key ready to paste when prompted, you can deploy your model:

truss push

You can monitor your model deployment from your model dashboard on Baseten.

Invoke the model

After the model has finished deploying, you can invoke it from the terminal.

Invocation

truss predict -d '"Truss is awesome!"'

Response

[
  {
    "label": "POSITIVE",
    "score": 0.999873161315918
  }
]
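
You can also call the deployed model over HTTP. Here is a sketch using Baseten's model inference endpoint; substitute your own model ID (from the model dashboard) and API key, and note that the exact URL format is described in Baseten's API documentation:

import requests

# Hypothetical placeholders: replace YOUR_MODEL_ID and YOUR_API_KEY.
resp = requests.post(
    "https://model-YOUR_MODEL_ID.api.baseten.co/production/predict",
    headers={"Authorization": "Api-Key YOUR_API_KEY"},
    json="Truss is awesome!",  # same payload as the truss predict example
)
print(resp.json())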

Truss contributors

We enthusiastically welcome contributions in accordance with our contributors' guide and code of conduct.
