Tungstenkit
Tungstenkit is an open-source tool for building and using versatile and standardized ML model containers, Tungsten models. Key features of Tungsten models are:
- Easy: Requires only a few lines of Python code.
- Versatile: Supports multiple usage modes:
  - RESTful API server
  - GUI application
  - Serverless function
  - CLI application (coming soon)
  - Python function (coming soon)
- Abstracted: User-defined JSON input/output.
- Standardized: Supports advanced workflows.
- Scalable: Supports adaptive batching and clustering (coming soon).
Learn More
Take the tour
Build a Tungsten model
Building a Tungsten model is easy. All you have to do is write a simple `tungsten_model.py` like the one below:
```python
from typing import List

import torch

from tungstenkit import io, model


class Input(io.BaseIO):
    prompt: str


class Output(io.BaseIO):
    image: io.Image


@model.config(
    gpu=True,
    python_packages=["torch", "torchvision"],
    batch_size=4,
    description="Text to image",
)
class Model(model.TungstenModel[Input, Output]):
    def setup(self):
        weights = torch.load("./weights.pth")
        self.model = load_torch_model(weights)  # user-defined helper

    def predict(self, inputs: List[Input]) -> List[Output]:
        input_tensor = preprocess(inputs)  # user-defined preprocessing
        output_tensor = self.model(input_tensor)
        outputs = postprocess(output_tensor)  # user-defined postprocessing
        return outputs
```
Now, you can start a build process with the following command:
```
$ tungsten build
✅ Successfully built tungsten model: 'text-to-image:latest'
```
Run it as a RESTful API server
You can start a prediction with a REST API call.
Start a server:
```
$ docker run -p 3000:3000 --gpus all text-to-image:latest
INFO: Setting up the model
INFO: Getting inputs from the input queue
INFO: Starting the prediction service
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
```
Send a prediction request with a JSON payload:
```
$ curl -X 'POST' 'http://localhost:3000/predict' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '[{"prompt": "a professional photograph of an astronaut riding a horse"}]'
{
  "status": "success",
  "outputs": [{"image": "data:image/png;base64,..."}],
  "error_message": null
}
```
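The same request can be made from Python. Below is a minimal client sketch using only the standard library; the endpoint path, payload shape, and response fields are taken from the curl example above, while the helper names (`build_payload`, `parse_outputs`, `predict`) are illustrative, not part of Tungstenkit:

```python
import json
from urllib import request

SERVER_URL = "http://localhost:3000/predict"  # server started in the previous step


def build_payload(prompts):
    # /predict accepts a JSON array of objects matching the Input schema
    return [{"prompt": p} for p in prompts]


def parse_outputs(body):
    # On success, each output holds the fields of the Output schema
    # (here, an image as a base64 data URI)
    data = json.loads(body)
    if data["status"] != "success":
        raise RuntimeError(data["error_message"])
    return [out["image"] for out in data["outputs"]]


def predict(prompts):
    # Requires the API server from the `docker run` step to be listening on port 3000
    req = request.Request(
        SERVER_URL,
        data=json.dumps(build_payload(prompts)).encode(),
        headers={"Content-Type": "application/json", "accept": "application/json"},
    )
    with request.urlopen(req) as resp:
        return parse_outputs(resp.read().decode())
```

With the server running, `predict(["a professional photograph of an astronaut riding a horse"])` would return a list of base64-encoded image data URIs.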
Run it as a GUI application
If you need a more user-friendly way to make predictions, start a GUI app with the following command:
```
$ tungsten demo text-to-image:latest -p 8080
INFO: Uvicorn running on http://localhost:8080 (Press CTRL+C to quit)
```
Run it as a serverless function
We support remote, serverless execution via a Tungsten server.
Push a model:
```
$ tungsten push exampleuser/exampleproject -n text-to-image:latest
✅ Successfully pushed to 'https://server.tungsten-ai.com'
```
Now you can run remote predictions on the Tungsten server.
Prerequisites
- Python 3.7+
- Docker
- (Optional) nvidia-docker for running GPU models locally.
Installation
```
pip install tungstenkit
```