TorchServe is a tool for serving neural network models for inference.
Project description
TorchServe is a flexible and easy-to-use tool for serving PyTorch models in production.
Use the TorchServe CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
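As a sketch of that workflow (the model store path, model name, and input file below are illustrative; the `torchserve` flags and the `/predictions/{model_name}` REST endpoint follow the project's documented CLI and inference API):

```shell
# Illustrative names: adjust MODEL_STORE and MODEL_NAME to your setup.
MODEL_STORE=./model_store
MODEL_NAME=densenet161
INFERENCE_URL="http://127.0.0.1:8080/predictions/${MODEL_NAME}"

# Start the server only if the CLI is installed, pointing it at a local
# model store that contains ${MODEL_NAME}.mar (a packaged model archive).
if command -v torchserve >/dev/null; then
  torchserve --start --model-store "$MODEL_STORE" --models "${MODEL_NAME}.mar"
  # Send a sample input to the default inference port (8080):
  curl -s "$INFERENCE_URL" -T kitten.jpg
  # Stop the server when done:
  torchserve --stop
fi
```

The default ports (8080 for inference, 8081 for management) can be changed in the TorchServe configuration.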
Installation
Full installation instructions are in the project repo: https://github.com/pytorch/serve/blob/master/README.md
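For a quick start, the nightly build published on this page can be installed with pip (assuming a working Python environment; see the linked README for supported Python and PyTorch versions):

```shell
# Install the nightly TorchServe wheel from PyPI, ideally inside a
# virtual environment; --upgrade replaces any previously installed nightly.
pip install --upgrade torchserve-nightly
```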
Source code
You can check out the latest source code as follows:
git clone https://github.com/pytorch/serve.git
Citation
If you use TorchServe in a publication or project, please cite it: https://github.com/pytorch/serve
Project details
Release history
Download files
Download the file for your platform.
Source Distributions
No source distribution files are available for this release.
Built Distribution
File details
Details for the file torchserve_nightly-2024.12.5-py3-none-any.whl.
File metadata
- Download URL: torchserve_nightly-2024.12.5-py3-none-any.whl
- Upload date:
- Size: 42.2 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.9.20
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7e69e92df1aa292ee5d342a30e1c70dded42f4bb94ddf0409661c62a152f6a04
MD5 | 05a12e8d9a8ee065f7d0e927466aac9d
BLAKE2b-256 | 85c8570a1c3192ce81ce047614da6491eba202b57fd818180cffb94f6d3ae78b