The official Python client library for Launch, the Data Platform for AI
Launch Python Client
██╗ █████╗ ██╗ ██╗███╗ ██╗ ██████╗██╗ ██╗
██║ ██╔══██╗██║ ██║████╗ ██║██╔════╝██║ ██║
██║ ███████║██║ ██║██╔██╗ ██║██║ ███████║
██║ ██╔══██║██║ ██║██║╚██╗██║██║ ██╔══██║
███████╗██║ ██║╚██████╔╝██║ ╚████║╚██████╗██║ ██║
╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝ ╚═════╝╚═╝ ╚═╝
Moving an ML model from experiment to production requires significant engineering lift. Scale Launch provides ML engineers a simple Python interface for turning a local code snippet into a production service. An ML engineer only needs to call a few functions from Scale's SDK to spin up a production-ready service. The service efficiently utilizes compute resources and automatically scales with traffic.
The latest API/SDK reference can be found here.
Deploying your model via Scale Launch

Central to Scale Launch are the notions of a `ModelBundle` and a `ModelEndpoint`. A `ModelBundle` consists of a trained model as well as the surrounding preprocessing and postprocessing code. A `ModelEndpoint` is the compute layer that takes in a `ModelBundle` and serves inference requests by using the `ModelBundle` to make predictions. The `ModelEndpoint` also knows infrastructure-level details, such as how many GPUs are needed, what type they are, and how much memory is required, and it automatically handles infrastructure-level concerns such as autoscaling and task queueing.
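To make the `ModelBundle` concept concrete, here is a minimal sketch of the kind of code a bundle packages together: preprocessing, a trained model, and postprocessing. All names below (`preprocess`, `postprocess`, `ToyModel`, `predict_fn`) are illustrative, not Launch's actual interface.

```python
def preprocess(raw_text: str) -> list:
    # Turn a raw request payload into model-ready features.
    return raw_text.lower().split()


def postprocess(score: float) -> dict:
    # Turn the raw model output into a JSON-serializable response.
    return {"label": "positive" if score > 0.5 else "negative", "score": score}


class ToyModel:
    # Stand-in for a real trained model (e.g. a pickled sklearn estimator).
    def predict(self, tokens: list) -> float:
        return min(1.0, len(tokens) / 10)


def predict_fn(raw_text: str) -> dict:
    # The callable you would bundle: preprocessing -> model -> postprocessing.
    model = ToyModel()
    return postprocess(model.predict(preprocess(raw_text)))
```

Bundling the pre/postprocessing alongside the model means the endpoint can accept raw payloads and return client-ready responses, rather than pushing that logic to every caller.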
Steps to deploy your model via Scale Launch:
1. First, create and upload a `ModelBundle`.
2. Then, create a `ModelEndpoint`.
3. Lastly, make requests to the `ModelEndpoint`.
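The three steps above can be sketched in code. The client class, method names, and parameters below are assumptions based on the deployment flow described here, not verified signatures; consult the API reference for the exact interface.

```python
def deploy_and_predict(api_key: str, predict_fn):
    # Hypothetical walkthrough of the three deployment steps.
    import launch  # requires: pip install scale-launch

    client = launch.LaunchClient(api_key=api_key)

    # 1. Create and upload a ModelBundle wrapping your predict function.
    bundle = client.create_model_bundle(
        "my-bundle",
        predict_fn_or_cls=predict_fn,   # assumed parameter name
        requirements=["scikit-learn"],  # example dependency list
    )

    # 2. Create a ModelEndpoint that serves the bundle, with
    #    infrastructure-level settings such as worker counts.
    endpoint = client.create_model_endpoint(
        endpoint_name="my-endpoint",
        model_bundle=bundle,
        cpus=1,
        min_workers=0,
        max_workers=2,
    )

    # 3. Make requests to the ModelEndpoint.
    return endpoint.predict(request={"args": "hello"})
```

Note the separation of concerns: the bundle is created once per model version, while the endpoint's infrastructure settings can be tuned independently of the model code.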
TODO: link some example colab notebook
For Developers
Clone from GitHub and install as editable:

```shell
git clone git@github.com:scaleapi/launch-python-client.git
cd launch-python-client
pip3 install poetry
poetry install
```
Please install the pre-commit hooks by running the following command:

```shell
poetry run pre-commit install
```

The tests can be run with:

```shell
poetry run pytest
```
Documentation
Updating documentation: We use Sphinx to autogenerate our API Reference from docstrings.
To test your local docstring changes, run the following commands from the repository's root directory:
```shell
poetry shell
cd src_docs
sphinx-autobuild . ../docs --watch ../launch
```

`sphinx-autobuild` will spin up a server on localhost (port 8000 by default) that watches for local docstring changes and automatically rebuilds the API reference.
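Because the API reference is generated from docstrings, changes only render once docstrings follow a style Sphinx understands. As an illustration (the exact convention this repository uses is an assumption), a reStructuredText-style docstring that Sphinx can render looks like:

```python
def create_endpoint(name: str, gpus: int = 0) -> str:
    """Create a model endpoint.

    This function is purely illustrative of a Sphinx-renderable
    docstring; it is not part of Launch's API.

    :param name: Name of the endpoint to create.
    :param gpus: Number of GPUs to allocate, defaults to 0.
    :return: The name of the created endpoint.
    """
    return name
```

With sphinx-autobuild running, editing a docstring like this triggers a rebuild, and the rendered page refreshes in the browser.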
File details
Details for the file `scale-launch-0.3.3.tar.gz`.
File metadata
- Download URL: scale-launch-0.3.3.tar.gz
- Upload date:
- Size: 51.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.7 CPython/3.9.13 Darwin/20.6.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dc74b8b8c578102a6011c059e985653bd09e116f63d2c6faa1567cacebb499db |
| MD5 | ab3ecade25320c86661dc8664b355ee5 |
| BLAKE2b-256 | 42ad02df5c8875fbbecedff6a779b724b481a223b02f6933edd5bc90039a8651 |
File details
Details for the file `scale_launch-0.3.3-py3-none-any.whl`.
File metadata
- Download URL: scale_launch-0.3.3-py3-none-any.whl
- Upload date:
- Size: 64.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.7 CPython/3.9.13 Darwin/20.6.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 830c2bd22129d33f897167ef90f9777c061dea0b66dac64a86fcb451bfabbb5e |
| MD5 | 616a74c5a2984725663eb44c77ccc7db |
| BLAKE2b-256 | f7b21de375aa1e7f1d5d55e57abecdd6350cf63341660cee529966e57eb243f6 |
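The hashes above let you verify a downloaded distribution before installing it. A minimal check using only the standard library (the file path below assumes the sdist sits in the current directory):

```python
import hashlib


def sha256_of(path: str) -> str:
    # Stream the file in chunks so large distributions aren't
    # loaded into memory all at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Example: compare against the published digest for the sdist.
# expected = "dc74b8b8c578102a6011c059e985653bd09e116f63d2c6faa1567cacebb499db"
# assert sha256_of("scale-launch-0.3.3.tar.gz") == expected
```

If the computed digest does not match the table above, the download is corrupt or has been tampered with and should not be installed.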