
Tungstenkit: ML container made simple


Introduction | Installation | Getting Started | Documentation

Tungstenkit is an ML containerization tool with a focus on developer productivity and versatility.

The key features are:

  • Requires only a few lines of Python code
  • Build once, use everywhere: REST API server, GUI, CLI, and Python function
  • Framework-agnostic and lightweight
  • Batched prediction, both server-side and client-side
  • Pydantic input/output definitions with convenient file handling

Take the tour

Requires only a few lines of Python code

Building a Tungsten model is easy. All you have to do is write a simple tungsten_model.py like:

from typing import List
import torch
from tungstenkit import BaseIO, Image, define_model


class Input(BaseIO):
    prompt: str


class Output(BaseIO):
    image: Image


@define_model(
    input=Input,
    output=Output,
    gpu=True,
    python_packages=["torch", "torchvision"],
    batch_size=4,
    gpu_mem_gb=24,
)
class TextToImageModel:
    def setup(self):
        # Load the weights once when the container starts.
        # load_torch_model is a user-defined helper.
        weights = torch.load("./weights.pth")
        self.model = load_torch_model(weights)

    def predict(self, inputs: List[Input]) -> List[Output]:
        # preprocess and postprocess are user-defined helpers that convert
        # between Input/Output objects and tensors.
        input_tensor = preprocess(inputs)
        output_tensor = self.model(input_tensor)
        outputs = postprocess(output_tensor)
        return outputs

Start a build process:

$ tungsten build . -n text-to-image

✅ Successfully built tungsten model: 'text-to-image:e3a5de56' (also tagged as 'text-to-image:latest')

Check the built image:

$ tungsten models

Repository        Tag       Create Time          Docker Image ID
----------------  --------  -------------------  ---------------
text-to-image     latest    2023-04-26 05:23:58  830eb82f0fcd
text-to-image     e3a5de56  2023-04-26 05:23:58  830eb82f0fcd

Build once, use everywhere

REST API server

Start a server:

$ tungsten serve text-to-image -p 3000

INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)

Send a prediction request with a JSON payload:

$ curl -X 'POST' 'http://localhost:3000/predict' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '[{"prompt": "a professional photograph of an astronaut riding a horse"}]'

{
    "outputs": [{"image": "data:image/png;base64,..."}]
}
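
The endpoint can also be called from Python. Below is a minimal sketch using the third-party requests library, assuming the server started above is listening on localhost:3000:

import requests

# POST a single-element batch to the /predict endpoint.
resp = requests.post(
    "http://localhost:3000/predict",
    json=[{"prompt": "a professional photograph of an astronaut riding a horse"}],
)
resp.raise_for_status()
print(resp.json()["outputs"][0]["image"][:50])  # the generated image as a data URI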

GUI application

If you need a more user-friendly way to make predictions, start a GUI app with the following command:

$ tungsten demo text-to-image -p 8080

INFO:     Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)

[Screenshot: the Tungsten demo dashboard]

CLI application

Run a prediction in a terminal:

$ tungsten predict text-to-image \
   -i prompt="a professional photograph of an astronaut riding a horse"

{
  "image": "./output.png"
}

Python function

If you want to run a model in your Python application, use the Python API:

>>> from tungstenkit import models
>>> model = models.get("text-to-image")
>>> model.predict(
...     {"prompt": "a professional photograph of an astronaut riding a horse"}
... )
{"image": PosixPath("./output.png")}

Framework-agnostic and lightweight

Tungstenkit doesn't restrict you to specific ML libraries. Use any library you want, and declare the dependencies:

# The latest CPU-only build of TensorFlow will be included
@define_model(input=Input, output=Output, gpu=False, python_packages=["tensorflow"])
class Model:
    def predict(self, inputs):
        """Run a batch prediction"""
        # ...ops using tensorflow...
        return outputs

Batched prediction

Tungstenkit supports both server-side and client-side batching.

  • Server-side batching

    A server groups inputs across multiple requests and processes them together. You can configure the max batch size:

    @define_model(input=Input, output=Output, gpu=True, batch_size=32)
    

    The max batch size can be changed when running a server:

    $ docker run -p 3000:3000 --gpus all model:latest --batch-size 64 
    
  • Client-side batching

    You can also reduce traffic by putting multiple inputs into a single prediction request (a Python equivalent is sketched after this list):

    $ curl -X 'POST' 'http://localhost:3000/predict' \
      -H 'accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '[{"field": "input1"}, {"field": "input2"}, {"field": "input3"}]'
    
    {
      "outputs": [
        {"field": "output1"},
        {"field": "output2"},
        {"field": "output3"}
      ]
    }
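
The same client-side batching can be done from Python. A rough sketch with the requests library, assuming the server above and the single-field input used in this example:

import requests

# Three inputs in one request; outputs come back in the same order.
inputs = [{"field": "input1"}, {"field": "input2"}, {"field": "input3"}]
resp = requests.post("http://localhost:3000/predict", json=inputs)
outputs = resp.json()["outputs"]
print(len(outputs))  # 3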
    

Pydantic input/output definitions with convenient file handling

Let's look at the example below:

from tungstenkit import BaseIO, Image, define_model


class Input(BaseIO):
    image: Image


class Output(BaseIO):
    image: Image


@define_model(input=Input, output=Output)
class StyleTransferModel:
    ...

As you can see, input/output types are defined as subclasses of the BaseIO class. BaseIO is a thin wrapper around Pydantic's BaseModel, and Tungstenkit relies on Pydantic's validation to check incoming JSON requests.

You can also see that the Image class is used. Tungstenkit provides four file classes to ease file handling: Image, Audio, Video, and Binary. They come with convenient methods for writing a model's predict method:

class StyleTransferModel:
    def predict(self, inputs: List[Input]) -> List[Output]:
        # Preprocessing: convert Tungstenkit Image objects to PIL images
        input_pil_images = [inp.image.to_pil_image() for inp in inputs]
        # Inference: do_inference is a user-defined helper that runs the model
        output_pil_images = do_inference(input_pil_images)
        # Postprocessing: wrap the PIL images back into Tungstenkit Image objects
        output_images = [Image.from_pil_image(pil_image) for pil_image in output_pil_images]
        outputs = [Output(image=image) for image in output_images]
        return outputs
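
Over the HTTP API, file fields travel as base64 data URIs in the JSON payload, as in the text-to-image response shown earlier. Below is a rough request sketch for this style-transfer model, assuming it is served on port 3000, that it accepts data-URI inputs in the same format it emits, and using a hypothetical local file content.png:

import base64
import requests

# Encode a local image (hypothetical file "content.png") as a data URI.
with open("content.png", "rb") as f:
    data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

resp = requests.post("http://localhost:3000/predict", json=[{"image": data_uri}])
print(resp.json()["outputs"][0]["image"][:50])  # the stylized image as a data URI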

Prerequisites

  • Python
  • Docker (Tungsten models are built into and run as Docker images)

Installation

pip install tungstenkit


