Tungstenkit: ML container made simple
Introduction | Installation | Documentation | Community
Tungstenkit is an ML containerization tool with a focus on developer productivity and versatility.
Have you ever struggled to use models from GitHub? You have probably repeated the same tedious steps: fixing CUDA and dependency problems, wrangling input/output files, and writing scripts for testing.
Standing on the shoulders of Docker, this project aims to make using ML models less painful by adding the functionality typical use cases need: a REST API server, a GUI, a CLI, and a Python API.
With Tungstenkit, sharing and consuming ML models can be quick and enjoyable.
Features
- Requires only a few lines of Python code
- Build once, use everywhere: REST API server, GUI app, CLI app, and Python function
- Framework-agnostic and lightweight
- Pydantic input/output definitions with convenient file handling
- Supports batched prediction
- Supports clustering with distributed machines (coming soon)
Take the tour
Requires only a few lines of Python code
Building a Tungsten model is easy. All you have to do is write a simple `tungsten_model.py` like:
```python
from typing import List

import torch

from tungstenkit import BaseIO, Image, define_model


class Input(BaseIO):
    prompt: str


class Output(BaseIO):
    image: Image


@define_model(
    input=Input,
    output=Output,
    gpu=True,
    python_packages=["torch", "torchvision"],
    batch_size=4,
    gpu_mem_gb=16,
)
class TextToImageModel:
    def setup(self):
        # Load weights and initialize the model before serving predictions.
        weights = torch.load("./weights.pth")
        self.model = load_torch_model(weights)

    def predict(self, inputs: List[Input]) -> List[Output]:
        # Run a batch prediction.
        input_tensor = preprocess(inputs)
        output_tensor = self.model(input_tensor)
        outputs = postprocess(output_tensor)
        return outputs
```
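Here, `load_torch_model`, `preprocess`, and `postprocess` stand in for your own code. A minimal, runnable sketch of what they might look like (toy stand-ins, not part of Tungstenkit):

```python
import torch
from torchvision.transforms.functional import to_pil_image


def load_torch_model(weights):
    # A real model would build its network and call load_state_dict(weights);
    # this toy stand-in just passes tensors through.
    model = torch.nn.Identity()
    model.eval()
    return model


def preprocess(inputs):
    # A real model would tokenize inp.prompt for each input; this stand-in
    # produces one random 64x64 "image" tensor per input.
    return torch.rand(len(inputs), 3, 64, 64)


def postprocess(output_tensor):
    # Convert each image tensor into an Output via the documented
    # Image.from_pil_image constructor.
    return [Output(image=Image.from_pil_image(to_pil_image(t))) for t in output_tensor]
```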
Start a build process:
```console
$ tungsten build . -n text-to-image

✅ Successfully built tungsten model: 'text-to-image:e3a5de56'
```
Check the built image:
```console
$ tungsten models

Repository     Tag       Create Time          Docker Image ID
-------------  --------  -------------------  ---------------
text-to-image  latest    2023-04-26 05:23:58  830eb82f0fcd
text-to-image  e3a5de56  2023-04-26 05:23:58  830eb82f0fcd
```
Build once, use everywhere
REST API server
Start a server:
```console
$ tungsten serve text-to-image -p 3000

INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
```
Send a prediction request with a JSON payload:
```console
$ curl -X 'POST' 'http://localhost:3000/predictions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '[{"prompt": "a professional photograph of an astronaut riding a horse"}]'

{
  "prediction_id": "39c9eb6b"
}
```
Get the result:
```console
$ curl -X 'GET' 'http://localhost:3000/predictions/39c9eb6b' \
  -H 'Accept: application/json'

{
  "outputs": [{"image": "data:image/png;base64,..."}],
  "status": "success"
}
```
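The same two endpoints are easy to drive programmatically. Below is a minimal polling sketch in Python using `requests`; it relies only on the request and response shapes shown above, and omits error handling because failure statuses aren't documented here:

```python
import time

import requests

BASE_URL = "http://localhost:3000"

# Submit a prediction request; the payload is a list of inputs.
resp = requests.post(
    f"{BASE_URL}/predictions",
    json=[{"prompt": "a professional photograph of an astronaut riding a horse"}],
)
prediction_id = resp.json()["prediction_id"]

# Poll until the prediction succeeds.
while True:
    result = requests.get(f"{BASE_URL}/predictions/{prediction_id}").json()
    if result["status"] == "success":
        break
    time.sleep(1)

# Each output's "image" field is a base64-encoded data URI.
print(result["outputs"][0]["image"][:50])
```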
GUI application
If you need a more user-friendly way to make predictions, start a GUI app with the following command:
```console
$ tungsten demo text-to-image -p 8080

INFO: Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)
```
CLI application
Run a prediction in a terminal:
```console
$ tungsten predict text-to-image \
  -i prompt="a professional photograph of an astronaut riding a horse"

{
  "image": "./output.png"
}
```
Python function
If you want to run a model in your Python application, use the Python API:
```python
>>> from tungstenkit import models
>>> model = models.get("text-to-image")
>>> model.predict(
...     {"prompt": "a professional photograph of an astronaut riding a horse"}
... )
{"image": PosixPath("./output.png")}
```
Framework-agnostic and lightweight
Tungstenkit doesn't restrict you to specific ML frameworks. Use any library you want, and simply declare your dependencies:
```python
# The latest CPU-only build of TensorFlow will be included
@define_model(gpu=False, python_packages=["tensorflow"])
class TensorflowModel:
    def predict(self, inputs):
        """Run a batch prediction"""
        # ...ops using tensorflow...
        return outputs
```
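`python_packages` appears to take ordinary pip requirement strings, so pinning exact versions should work the same way; a sketch (the pins below are illustrative, and pip-style version specifiers are an assumption):

```python
# Bake pinned dependency versions into the image
# (the specific versions here are illustrative).
@define_model(gpu=True, python_packages=["torch==2.0.1", "torchvision==0.15.2"])
class PinnedTorchModel:
    def predict(self, inputs):
        """Run a batch prediction"""
        # ...ops using torch...
        return outputs
```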
Pydantic input/output definitions with convenient file handling
Let's look at the example below:
```python
from tungstenkit import BaseIO, Image, define_model


class Input(BaseIO):
    image: Image


class Output(BaseIO):
    image: Image


@define_model(input=Input, output=Output)
class StyleTransferModel:
    ...
```
As you can see, input and output types are defined as subclasses of the `BaseIO` class. `BaseIO` is a thin wrapper around Pydantic's `BaseModel`, and Tungstenkit validates JSON requests using the functionality Pydantic provides.
You can also see that the `Image` class is used. Tungstenkit provides four file classes to ease file handling: `Image`, `Audio`, `Video`, and `Binary`. They have useful methods for writing a model's `predict` method:
```python
class StyleTransferModel:
    def predict(self, inputs: List[Input]) -> List[Output]:
        # Preprocessing
        input_pil_images = [inp.image.to_pil_image() for inp in inputs]
        # Inference
        output_pil_images = do_inference(input_pil_images)
        # Postprocessing
        output_images = [
            Image.from_pil_image(pil_image) for pil_image in output_pil_images
        ]
        outputs = [Output(image=image) for image in output_images]
        return outputs
```
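These helpers also make local testing convenient; for instance, you could construct a test `Input` from a local file via PIL and the `from_pil_image` constructor shown above (the file name is illustrative):

```python
import PIL.Image

# Wrap a local file as a Tungstenkit Image to build a test input.
test_input = Input(image=Image.from_pil_image(PIL.Image.open("content.png")))
```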
Supports batched prediction
Tungstenkit supports both server-side and client-side batching.
- Server-side batching

  A server groups inputs across multiple requests and processes them together. You can configure the max batch size:

  ```python
  @define_model(input=Input, output=Output, gpu=True, batch_size=32)
  ```

  The max batch size can be changed when running a server:

  ```console
  $ tungsten serve mymodel -p 3000 --batch-size 16
  ```

- Client-side batching

  Also, you can reduce traffic volume by putting multiple inputs in a single prediction request:

  ```console
  $ curl -X 'POST' 'http://localhost:3000/predictions' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '[{"field": "input1"}, {"field": "input2"}, {"field": "input3"}]'
  ```
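The same batched request can be sent from Python; a minimal sketch with `requests`, assuming only the endpoint and payload shape shown above:

```python
import requests

# Three inputs in one request; the server runs them as a batch.
inputs = [{"field": "input1"}, {"field": "input2"}, {"field": "input3"}]
resp = requests.post("http://localhost:3000/predictions", json=inputs)
print(resp.json())  # e.g. {"prediction_id": "..."}
```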
Prerequisites
- Python 3.7+
- Docker
Installation
```console
pip install tungstenkit
```
Documentation
- Getting Started
- Building Your Model
- Running Models
- Pushing and Pulling Models
- CLI Reference
- REST API Reference
- Examples
Join our community
If you have questions about anything related to Tungstenkit, you're always welcome to ask our community on Discord.