
Aims to be the Simplest Machine Learning Model Inference Server

Reason this release was yanked: not tested

Project description

Pinferencia


Simple, but Powerful.


English Doc | Seriously, Doc | Chinese Doc | Chinese Doc (Serious Edition)

Help wanted. Translation, rap lyrics, all wanted. Feel free to create an issue.


Pinferencia tries to be the simplest AI model inference server ever!

Serving a model with REST API has never been so easy.

If you want to

  • find a simple but robust way to serve your model
  • write minimal code while keeping control over your service
  • avoid any heavyweight solutions
  • easily integrate with your CI/CD
  • make your model and service portable and runnable across machines

You're in the right place.

Features

Pinferencia features include:

  • Fast to code, fast to go live. Minimal code needed, minimal transformation needed. Just build on what you have.
  • 100% test coverage: both statement and branch coverage, no kidding.
  • Easy to use, easy to understand.
  • Automatic API documentation page. All APIs are explained in detail, with an online try-out feature.
  • Serve any model; even a single function can be served (see the sketch after this list).
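
That last point is worth a quick illustration. Below is a minimal sketch that registers a bare function as a model; the function name add and the omitted entrypoint are assumptions modeled on the Quick Start below, not an official snippet.

from pinferencia import Server


# a plain function can serve as the "model": it receives the
# request data and returns the prediction
def add(data):
    return sum(data)

service = Server()
service.register(
    model_name="add",
    model=add,  # register the callable itself; no separate entrypoint
)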

Install

pip install "pinferencia[uvicorn]"

Quick Start

Serve Any Model

# save this file as app.py, to match "uvicorn app:service" below
from pinferencia import Server


class MyModel:
    def predict(self, data):
        return sum(data)

model = MyModel()

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",
)

Just run:

uvicorn app:service --reload

Hooray, your service is alive. Go to http://127.0.0.1:8000 and have fun.
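
To call the service from code, a request might look like the sketch below. The /v1/models/mymodel/predict path and the {"data": ...} payload are assumptions based on Pinferencia's default REST API; check the automatic API documentation page for the exact schema.

import requests

# query the "mymodel" service registered above
response = requests.post(
    "http://127.0.0.1:8000/v1/models/mymodel/predict",
    json={"data": [1, 2, 3]},
)
print(response.json())  # the returned data should be the sum: 6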

Any deep learning models? Just as easy. Simply train or load your model, and register it with the service. It goes live immediately.

PyTorch

import torch

from pinferencia import Server


# train your model
model = "..."

# or load a trained model in one of the following ways:

# from a state_dict
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))

# from an entire saved model
model = torch.load(PATH)

# from TorchScript
model = torch.jit.load('model_scripted.pt')

model.eval()

service = Server()
service.register(
    model_name="mymodel",
    model=model,  # no entrypoint given: the model object itself is called
)

TensorFlow

import tensorflow as tf

from pinferencia import Server


# train your model
model = "..."

# or load a trained model in one of the following ways:

# from a SavedModel directory
model = tf.keras.models.load_model('saved_model/model')

# from an HDF5 file
model = tf.keras.models.load_model('model.h5')

# from weights: rebuild the architecture, then load the checkpoint
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss, acc = model.evaluate(test_images, test_labels, verbose=2)

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",
)

Any model of any framework will just work the same way. Now run uvicorn app:service --reload and enjoy!
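
For instance, a scikit-learn model should plug in just like the examples above. The snippet below is a sketch under the assumption that any object with a predict method can be registered the same way:

from sklearn.linear_model import LinearRegression

from pinferencia import Server


# fit a trivial model: y = 2x
model = LinearRegression()
model.fit([[1], [2], [3]], [2, 4, 6])

service = Server()
service.register(
    model_name="mymodel",
    model=model,
    entrypoint="predict",  # the service calls model.predict(data)
)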

Contributing

If you'd like to contribute, see the contributing guide in the project repository.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pinference-0.1.0b1.tar.gz (16.1 kB)

Built Distribution

pinference-0.1.0b1-py3-none-any.whl (23.3 kB)

File details

Details for the file pinference-0.1.0b1.tar.gz.

File metadata

  • Download URL: pinference-0.1.0b1.tar.gz
  • Upload date:
  • Size: 16.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.8.9 Darwin/21.1.0

File hashes

Hashes for pinference-0.1.0b1.tar.gz:

  • SHA256: 75e383f943f5b0dcbf36b7bae4c9bdd266a91a0e1a277a455c84c06cef3ab395
  • MD5: 4604b5a3fc61c196839f9cf8a9109d01
  • BLAKE2b-256: 275d2aa820e18f74a1abd64516703d3e8e21c172bea7573f044ddf285f5638ac


File details

Details for the file pinference-0.1.0b1-py3-none-any.whl.

File metadata

  • Download URL: pinference-0.1.0b1-py3-none-any.whl
  • Upload date:
  • Size: 23.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.8.9 Darwin/21.1.0

File hashes

Hashes for pinference-0.1.0b1-py3-none-any.whl:

  • SHA256: 09e7fa81b416b9c05dc2dfec8f50faafcea7a0bb0a7476d178686d0643def06e
  • MD5: 46581b53db83ff14f5be6a318352681d
  • BLAKE2b-256: 3d4f90f9bf76c59ba6238b6653e128e7381861f00713ec6f173fc5a32bbb08dc

