
Project description

inference-server

Deploy your AI/ML model to Amazon SageMaker for Real-Time Inference and Batch Transform using your own Docker container image.

📘 Documentation: https://inference-server.readthedocs.io
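
A container built with inference-server answers the two endpoints SageMaker requires of any serving container: GET /ping for health checks and POST /invocations for predictions, with your model-specific code plugged in via hooks. As a rough sketch, assuming hooks that follow the familiar SageMaker model_fn/input_fn/predict_fn/output_fn convention (the exact hook names and plugin registration should be checked against the documentation above):

    # Sketch of the model-specific hooks a plugin might provide. The
    # model_fn/input_fn/predict_fn/output_fn names follow the SageMaker
    # convention; whether inference-server expects these exact names, and
    # how a plugin is registered, is an assumption here -- see the docs.
    import json

    def model_fn(model_dir):
        # Load the model artifact once at container start-up. A real
        # plugin would deserialize from model_dir; this stub returns a
        # trivial "model" that doubles its inputs.
        return lambda values: [2 * v for v in values]

    def input_fn(input_data, content_type):
        # Deserialize the POST /invocations request body.
        assert content_type == "application/json"
        return json.loads(input_data)

    def predict_fn(data, model):
        # Run inference on the deserialized payload.
        return model(data["values"])

    def output_fn(prediction, accept):
        # Serialize the prediction for the HTTP response.
        return json.dumps({"predictions": prediction})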

Installing

python -m pip install inference-server
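
Once a container built around inference-server is running locally, you can smoke-test it against the contract SageMaker itself uses: GET /ping must return HTTP 200, and POST /invocations accepts the inference payload. A minimal check using only the standard library, assuming the server listens on port 8080 (the port SageMaker containers must use):

    # Smoke-test a locally running inference container.
    # Assumes it is serving on localhost:8080, as SageMaker requires.
    import urllib.request

    # Health check: SageMaker calls GET /ping and expects HTTP 200.
    with urllib.request.urlopen("http://localhost:8080/ping") as resp:
        assert resp.status == 200

    # Inference: SageMaker forwards client payloads to POST /invocations.
    req = urllib.request.Request(
        "http://localhost:8080/invocations",
        data=b'{"values": [1, 2, 3]}',  # example payload; the format is model-specific
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read())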

Developing

To set up a scratch/development virtual environment (under .venv/), first install Tox. Then run:

tox -e dev

The inference-server package is installed in editable mode inside the .venv/ environment.

Run tests by simply calling tox.

Install the code-quality Git hooks using pre-commit install --install-hooks.

Terms & Conditions

Copyright 2023 J.P. Morgan Chase & Co.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Contributing

See CONTRIBUTING.md


Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide for more on installing packages.

Source Distribution

inference_server-1.2.3.tar.gz (30.9 kB)

Uploaded Source

Built Distribution

inference_server-1.2.3-py3-none-any.whl (19.0 kB)

Uploaded Python 3

File details

Details for the file inference_server-1.2.3.tar.gz.

File metadata

  • Download URL: inference_server-1.2.3.tar.gz
  • Upload date:
  • Size: 30.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for inference_server-1.2.3.tar.gz

  • SHA256: f30a7b75fef854f958c22aba724b0aee0a0fea595857b4639d0acef0dfd48a84
  • MD5: e3742f40edf898f918d43abe4e39a9f3
  • BLAKE2b-256: b3f3b86a37013009bfbe3da7fd15b08aa14dec5c1d07d8a1ab6153b87ecef028

See the PyPI help pages for more details on using hashes.
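
To verify a download yourself, hash the file locally and compare it with the digests above. A quick check using only the standard library:

    # Verify the sdist against the published SHA256 digest.
    import hashlib

    expected = "f30a7b75fef854f958c22aba724b0aee0a0fea595857b4639d0acef0dfd48a84"

    with open("inference_server-1.2.3.tar.gz", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    assert digest == expected, "hash mismatch: file may be corrupt or tampered with"

The same digests can also be pinned in a requirements file and enforced with pip's hash-checking mode (--require-hashes).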

File details

Details for the file inference_server-1.2.3-py3-none-any.whl.

File metadata

  • Download URL: inference_server-1.2.3-py3-none-any.whl
  • Size: 19.0 kB
  • Tags: Python 3

File hashes

Hashes for inference_server-1.2.3-py3-none-any.whl

  • SHA256: 426a9ea6259767b7b12f06e59d642579991522b5130f3c826077d2f6e6ebd7cd
  • MD5: f475bf69585e3614c09ddd648b69110a
  • BLAKE2b-256: f26033c975ddead104bbc4e7098b8c920a9562a01a00cfc7ceb08209c0834a88

See the PyPI help pages for more details on using hashes.
