pytorch-deploy
Serve PyTorch models as an API in one line.
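Installation

The package can be installed from PyPI (assuming the distribution name matches the project name shown on this page):

```shell
pip install pytorch-deploy
```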
Usage
```python
from torch_deploy import deploy

deploy(your_model)
```
deploy Function
```python
deploy(
    model: nn.Module,
    pre: Union[List[Callable], Callable] = None,
    post: Union[List[Callable], Callable] = None,
    host: str = "0.0.0.0",
    port: int = 8000,
    logfile: str = None
)
```

Converts a PyTorch model into an API for production usage.
- `model`: A PyTorch model which subclasses `nn.Module` and is callable. This is the model served by the API.
- `pre`: A function or list of functions applied to the input.
- `post`: A function or list of functions applied to the model output before it is sent as a response.
- `host`: The address for serving the model.
- `port`: The port for serving the model.
- `logfile`: Filename of a file that stores the date, IP address, and size of input for each access of the API. If `None`, no file will be created.
Sample Response Format
Sample Code
Testing
Run `python test_server.py` first, then run `python test_client.py` in another window to test.
Dependencies
`torch`, `torchvision`, `fastapi[all]`, `requests`, `numpy`, `pydantic`