Minimalistic & easy deployment of PyTorch models on AWS Lambda with C++
Using statically compiled dependencies, the whole package is shrunk to only ~30MB.
:heavy_check_mark: Why should I use `torchlambda`?
- Lightweight & latest dependencies - compiled source code weighs only ~30MB. The previous approach to PyTorch network deployment on AWS Lambda (fastai) uses an outdated PyTorch (1.1.0) as a dependency layer and requires AWS S3 to host your model. Now you can use AWS Lambda alone and host your model as a Lambda layer, and both PyTorch `master` and the latest stable release are supported on a daily basis.
- Cheaper and less resource hungry - available solutions run a server hosting incoming requests all the time; AWS Lambda (and hence `torchlambda`) runs only when a request comes in.
- Easy automated scaling - autoscaling is usually done with Kubernetes or similar tools (see KubeFlow). That approach requires knowledge of yet another tool and setting up appropriate services (e.g. Amazon EKS). With AWS Lambda you just push your neural network inference code and you are done.
- Easy to use - no need to learn a new tool. `torchlambda` has at most 4 commands, and deployment is configured via YAML settings. No need to modify your PyTorch code (a minimal model-export sketch follows this list).
- Do one thing and do it well - most deployment tools are complex solutions spanning multiple frameworks and multiple services. `torchlambda` focuses solely on inference of PyTorch models on AWS Lambda.
- Write programs to work together - this tool does not duplicate PyTorch & AWS functionality (like `aws-cli`). You can also use your favorite third-party tools (say saws or Terraform with AWS, and MLFlow or PyTorch-Lightning to train your model).
- Test locally, run in the cloud - `torchlambda` uses Amazon Linux 2 Docker images under the hood & allows you to use lambci/docker-lambda to test your deployment on `localhost` before pushing it to the cloud (see the Test Lambda deployment locally tutorial).
- Extensible when you need it - all you usually need are a few lines of YAML settings, but if you wish to fine-tune your deployment you can use `--flags` (changing various properties of the PyTorch and AWS dependencies themselves). You can also write your own C++ deployment code (generate a template via the `torchlambda template` command).
- Small is beautiful - ~3000 LOC (most of it the convenience wrapper that makes up this tool) make it easy to jump into the source code and check what's going on under the hood.
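
As noted in the list above, your PyTorch training code stays untouched; you only export the trained network to TorchScript so the C++ runtime on AWS Lambda can load it. A minimal sketch (the ResNet18 model and the `model.ptc` filename are illustrative assumptions, not requirements of `torchlambda`):

```python
import torch
import torchvision

# Any scriptable torch.nn.Module works; ResNet18 is used for illustration.
model = torchvision.models.resnet18(pretrained=True).eval()

# TorchScript the network so it can be loaded from C++ on AWS Lambda
# without a Python interpreter.
scripted = torch.jit.script(model)

# The output filename is illustrative; point your YAML deployment
# settings at whatever path you choose.
scripted.save("model.ptc")
```

The saved artifact is what you would ship as an AWS Lambda layer alongside the compiled deployment code.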
:house: Table Of Contents
- YAML settings file reference
- C++ code
Benchmarks can be found in the BENCHMARKS.md file and comprise around 30,000 test cases.
Results are divided based on settings used, model type, payload, AWS Lambda timing, etc. Below is an example of how inference performance changes with higher resolution images and the type of encoding:
Clearly, the bigger the image, the more important it is to use `base64` encoding. For all results and descriptions, click here.
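
For intuition, here is a client-side sketch of the `base64` approach. The `"data"` payload field and the function name are assumptions for illustration; match them to your own YAML settings and deployed function:

```python
import base64
import json

import boto3

# base64-encoding the image keeps the JSON payload compact; sending raw
# pixel values as an integer array grows quickly with image resolution.
with open("image.png", "rb") as handle:
    encoded = base64.b64encode(handle.read()).decode("utf-8")

# The "data" field and FunctionName are illustrative, not torchlambda
# requirements; use the field defined in your settings.
response = boto3.client("lambda").invoke(
    FunctionName="example-torchlambda-function",
    Payload=json.dumps({"data": encoded}),
)
print(json.loads(response["Payload"].read()))
```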