
Aims to be the Simplest Machine Learning Model Inference Server


Pinferencia


Simple, but Powerful.


English Doc | Seriously, Doc | Chinese Doc | Seriously, Chinese Doc

Help wanted: translations, rap lyrics, anything. Feel free to create an issue.


Pinferencia tries to be the simplest AI model inference server ever!

Serving a model with REST API has never been so easy.

If you want to

  • find a simple but robust way to serve your model
  • write minimal code while maintaining control over your service
  • avoid any heavyweight solutions
  • easily integrate with your CI/CD
  • make your model and service portable and runnable across machines

You're in the right place.

Features

Pinferencia features include:

  • Fast to code, fast to go live. Minimal code and minimal transformation needed: just build on what you already have.
  • 100% Test Coverage: both statement and branch coverage, no kidding.
  • Easy to use, easy to understand.
  • Automatic API documentation page. Every API is explained in detail, with an online try-out feature.
  • Serve any model; even a single function can be served (see the sketch right after this list).
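
To illustrate that last point, here is a minimal sketch of serving a bare function. It assumes register accepts a plain callable when no entrypoint is given; the function name add is made up for illustration:

from pinferencia import Server


def add(data):
    # the function itself acts as the "model": it receives the request data
    return sum(data)


service = Server()
service.register(model_name="add", model=add)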

Install

pip install "pinferencia[uvicorn]"

Quick Start

Serve Any Model

from pinferencia import Server


class MyModel:
    def predict(self, data):
        return sum(data)

model = MyModel()

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",
)

Save the code above as app.py, then run:

uvicorn app:service --reload

Hooray, your service is alive. Go to http://127.0.0.1:8000/ and have fun.
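
You can also call the service from code. Here is a minimal sketch using requests; it assumes uvicorn's default port 8000 and Pinferencia's v1 prediction route:

import requests

response = requests.post(
    "http://127.0.0.1:8000/v1/models/mymodel/predict",
    json={"data": [1, 2, 3]},
)
# the response body wraps the model output, e.g. the sum 6
print(response.json())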

Any deep learning models? Just as easy. Simply train or load your model and register it with the service. It goes live immediately.

PyTorch

import torch

from pinferencia import Server


# train your model
model = "..."

# or load a saved model; pick ONE of the following:

# option 1: rebuild the model class and load a state_dict
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))

# option 2: load an entire pickled model
model = torch.load(PATH)

# option 3: load a TorchScript model
model = torch.jit.load('model_scripted.pt')

# switch to inference mode (disables dropout and batch-norm updates)
model.eval()

service = Server()
service.register(
    model_name="mymodel",
    model=model,
)

TensorFlow

import tensorflow as tf

from pinferencia import Server


# train your model
model = "..."

# or load a saved model; pick ONE of the following:

# option 1: the SavedModel format
model = tf.keras.models.load_model('saved_model/model')

# option 2: an HDF5 file
model = tf.keras.models.load_model('model.h5')

# option 3: weights only (rebuild the architecture first)
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
# optional sanity check; requires your own test_images and test_labels
loss, acc = model.evaluate(test_images, test_labels, verbose=2)

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",
)

Any model from any framework works the same way: register it, use entrypoint to name the method to call (for example, predict on a Keras model), then run uvicorn app:service --reload and enjoy!
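
If your model exposes a differently named method, entrypoint selects it. A hypothetical sketch (the Classifier class and its infer method are made up for illustration):

from pinferencia import Server


class Classifier:
    def infer(self, data):
        # toy logic: flag positive numbers
        return [x > 0 for x in data]


service = Server()
service.register(
    model_name="classifier",
    model=Classifier(),
    entrypoint="infer",
)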

Contributing

If you'd like to contribute, details are here.
