
Tungstenkit: ML container made simple


Introduction | Installation | Documentation | Community

Tungstenkit is an ML containerization tool with a focus on developer productivity and versatility.

Have you ever struggled to use models from GitHub? You may have repeated the same tedious steps: fixing CUDA and dependency problems, handling input/output files, and writing scripts just to test the model.

Standing on the shoulders of Docker, this project aims to make using ML models less painful by adding functionality for the typical use cases: a REST API server, a GUI, a CLI, and a Python function.

With Tungstenkit, sharing and consuming ML models can be quick and enjoyable.

Features

  • Requires only a few lines of Python code
  • Build once, use everywhere: REST API server, GUI, CLI, and Python function
  • Framework-agnostic and lightweight
  • Pydantic input/output definitions with convenient file handling
  • Supports batched prediction

Take the tour

Requires only a few lines of Python code

Building a Tungsten model is easy. All you have to do is write a simple tungsten_model.py like:

from typing import List
import torch
from tungstenkit import BaseIO, Image, define_model


class Input(BaseIO):
    prompt: str


class Output(BaseIO):
    image: Image


@define_model(
    input=Input,
    output=Output,
    gpu=True,
    python_packages=["torch", "torchvision"],
    batch_size=4,
    gpu_mem_gb=16,
)
class TextToImageModel:
    def setup(self):
        # Runs once when the container starts; load weights here.
        weights = torch.load("./weights.pth")
        self.model = load_torch_model(weights)  # user-defined helper

    def predict(self, inputs: List[Input]) -> List[Output]:
        # Runs per batch of up to batch_size (here 4) inputs.
        # preprocess and postprocess are user-defined helpers.
        input_tensor = preprocess(inputs)
        output_tensor = self.model(input_tensor)
        outputs = postprocess(output_tensor)
        return outputs

Start a build process:

$ tungsten build . -n text-to-image

✅ Successfully built tungsten model: 'text-to-image:e3a5de56'

Check the built image:

$ tungsten models

Repository        Tag       Create Time          Docker Image ID
----------------  --------  -------------------  ---------------
text-to-image     latest    2023-04-26 05:23:58  830eb82f0fcd
text-to-image     e3a5de56  2023-04-26 05:23:58  830eb82f0fcd

Build once, use everywhere

REST API server

Start a server:

$ tungsten serve text-to-image -p 3000

INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)

Send a prediction request with a JSON payload:

$ curl -X 'POST' 'http://localhost:3000/predictions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '[{"prompt": "a professional photograph of an astronaut riding a horse"}]'

{
    "prediction_id": "39c9eb6b"
}

Get the result:

$ curl -X 'GET' 'http://localhost:3000/predictions/39c9eb6b' \
  -H 'Accept: application/json'

{
    "outputs": [{"image": "data:image/png;base64,..."}],
    "status": "success"
}
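
The same flow works from Python. A minimal sketch using the requests library against the two endpoints shown above (the polling loop and its one-second interval are illustrative assumptions, not documented API behavior):

import time

import requests

# Queue a prediction; the server replies with a prediction ID.
resp = requests.post(
    "http://localhost:3000/predictions",
    json=[{"prompt": "a professional photograph of an astronaut riding a horse"}],
)
prediction_id = resp.json()["prediction_id"]

# Poll the result endpoint until the prediction reports success.
while True:
    result = requests.get(f"http://localhost:3000/predictions/{prediction_id}").json()
    if result.get("status") == "success":
        break
    time.sleep(1)

print(result["outputs"][0]["image"][:40])  # base64 data URI, truncated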

GUI application

If you need a more user-friendly way to make predictions, start a GUI app with the following command:

$ tungsten demo text-to-image -p 8080

INFO:     Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)

[Screenshot: the Tungsten demo GUI (tungsten-dashboard)]

CLI application

Run a prediction in a terminal:

$ tungsten predict text-to-image \
   -i prompt="a professional photograph of an astronaut riding a horse"

{
  "image": "./output.png"
}

Python function

If you want to run a model in your Python application, use the Python API:

>>> from tungstenkit import models
>>> model = models.get("text-to-image")
>>> model.predict(
...     {"prompt": "a professional photograph of an astronaut riding a horse"}
... )
{"image": PosixPath("./output.png")}

Framework-agnostic and lightweight

Tungstenkit doesn't restrict you to specific ML libraries. Use any library you want and declare the dependencies:

from tungstenkit import define_model


# The latest CPU-only build of TensorFlow will be included
@define_model(gpu=False, python_packages=["tensorflow"])
class TensorflowModel:
    def predict(self, inputs):
        """Run a batch prediction"""
        # ...ops using TensorFlow...
        return outputs

Pydantic input/output definitions with convenient file handling

Let's look at the example below:

from tungstenkit import BaseIO, Image, define_model


class Input(BaseIO):
    image: Image


class Output(BaseIO):
    image: Image


@define_model(input=Input, output=Output)
class StyleTransferModel:
    ...

As you can see, input/output types are defined as subclasses of the BaseIO class. BaseIO is a thin wrapper around Pydantic's BaseModel, and Tungstenkit validates JSON requests using the validation functionality Pydantic provides.
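
Since the field declarations double as validation rules, an ill-formed request is rejected before predict ever runs. A minimal sketch of the idea (that BaseIO subclasses can be constructed directly with keyword arguments, like plain Pydantic models, is an assumption):

from pydantic import ValidationError

from tungstenkit import BaseIO


class Input(BaseIO):
    prompt: str


Input(prompt="a cat")  # OK: matches the declared field and type

try:
    Input(wrong_field="a cat")  # missing the required 'prompt' field
except ValidationError as err:
    print(err)  # Pydantic reports a "field required" error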

The example also uses the Image class. Tungstenkit provides four file classes for easing file handling: Image, Audio, Video, and Binary. They have useful methods for writing a model's predict method:

from typing import List


class StyleTransferModel:
    def predict(self, inputs: List[Input]) -> List[Output]:
        # Preprocessing: Tungstenkit images -> PIL images
        input_pil_images = [inp.image.to_pil_image() for inp in inputs]
        # Inference (do_inference is a user-defined helper)
        output_pil_images = do_inference(input_pil_images)
        # Postprocessing: PIL images -> Tungstenkit images
        output_images = [Image.from_pil_image(pil_image) for pil_image in output_pil_images]
        outputs = [Output(image=image) for image in output_images]
        return outputs
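
On the wire, file fields are serialized as strings; the REST API example above returns an image as a base64 data URI. Assuming inputs accept the same encoding (an assumption, not confirmed on this page), a client could send an image like this:

import base64

import requests

# Encode a local PNG as a data URI. That inputs accept data URIs is an
# assumption based on the output format in the REST API example.
with open("content.png", "rb") as f:
    data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:3000/predictions",
    json=[{"image": data_uri}],
)
print(resp.json())  # e.g. {"prediction_id": "..."}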

Supports batched prediction

Tungstenkit supports both server-side and client-side batching.

  • Server-side batching

    A server groups inputs across multiple requests and processes them together. You can configure the max batch size:

    @define_model(input=Input, output=Output, gpu=True, batch_size=32)
    

    The max batch size can be changed when running a server:

    $ tungsten serve mymodel -p 3000 --batch-size 16
    
  • Client-side batching
    You can also reduce traffic volume by putting multiple inputs in a single prediction request (a Python sketch follows this list):

    $ curl -X 'POST' 'http://localhost:3000/predictions' \
      -H 'Accept: application/json' \
      -H 'Content-Type: application/json' \
      -d '[{"field": "input1"}, {"field": "input2"}, {"field": "input3"}]'

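The same client-side batch from Python; a minimal sketch reusing the hypothetical "field" input from the curl example above (that a batched request returns a single prediction ID, as the single-input example did, is an assumption):

import requests

# One request carrying three inputs; the server may still split or group
# them internally according to the model's max batch size.
inputs = [{"field": "input1"}, {"field": "input2"}, {"field": "input3"}]
resp = requests.post("http://localhost:3000/predictions", json=inputs)
print(resp.json())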

Prerequisites

  • Docker: Tungstenkit builds models into Docker images and runs them as containers, so Docker must be installed and running.
  • Python with pip, for installing Tungstenkit itself.

Installation

pip install tungstenkit

Documentation

Join our community

If you have questions about anything related to Tungstenkit, you're always welcome to ask our community on Discord.


