
TorchServe is a tool for serving neural net models for inference

Project description

TorchServe is a flexible and easy-to-use tool for serving PyTorch models in production.

Use the TorchServe CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests.
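As a rough sketch of that workflow (the model name densenet161, the archive densenet161.mar, and the model_store directory below are placeholders; a .mar archive must first be created with torch-model-archiver):

# Start TorchServe, pointing it at a local model store and registering one model
torchserve --start --model-store model_store --models densenet161=densenet161.mar

# Send an inference request to the default prediction endpoint on port 8080
curl http://localhost:8080/predictions/densenet161 -T kitten.jpg

# Stop the server when finished
torchserve --stop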

Installation

Full installation instructions are in the project repo: https://github.com/pytorch/serve/blob/master/README.md
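If you only need the packages from PyPI, a minimal sketch (CPU-only; see the README above for GPU and other dependency details):

# Install the nightly build published on this page
pip install torchserve-nightly

# Or install the stable release together with the model archiver
pip install torchserve torch-model-archiver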

Source code

You can check out the latest source code as follows:

git clone https://github.com/pytorch/serve.git

Citation

If you use TorchServe in a publication or project, please cite TorchServe: https://github.com/pytorch/serve


Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide for help installing packages.

Source Distributions

No source distribution files are available for this release.

Built Distribution

If you're not sure about the file name format, see the wheel file name specification in the Python packaging documentation.

torchserve_nightly-2024.4.13-py3-none-any.whl (24.3 MB, Python 3)
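A wheel downloaded from this page can also be installed directly with pip; the filename below matches this release:

pip install ./torchserve_nightly-2024.4.13-py3-none-any.whl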

File details

Details for the file torchserve_nightly-2024.4.13-py3-none-any.whl.

File hashes

Hashes for torchserve_nightly-2024.4.13-py3-none-any.whl:

SHA256: ffbdb6aa9123bd5900992025a1e83d1824c11c90de2fc090a51a4ebb0ae91e9f
MD5: 650ce43b2364c99d36b9231fe1219487
BLAKE2b-256: bdf1f78e51338cd6c080ebd0fce6d98a70f1ab29c5b3085e1a46e4631eb8cca2

See the pip documentation for more details on using hashes.
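To check a downloaded file against the SHA256 digest above before installing, a minimal sketch using standard tools:

# Compute the SHA256 of the downloaded wheel and compare it with the digest listed above
sha256sum torchserve_nightly-2024.4.13-py3-none-any.whl

# pip can compute the same digest
pip hash torchserve_nightly-2024.4.13-py3-none-any.whl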
