Azure Machine Learning Parallel Run Step - Deprecated
Project description
Note
This package has been deprecated and its functionality moved to azureml-pipeline-steps. Please refer to the azureml-pipeline-steps documentation for more information.
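For reference, the equivalent classes now ship in the azureml-pipeline-steps package under the `azureml.pipeline.steps` namespace. A minimal migration sketch follows; the old contrib import path shown for comparison is assumed from this package's module layout rather than confirmed here:

```python
# Install the replacement package instead of this one:
#   pip install azureml-pipeline-steps

# Old import path from the deprecated azureml-contrib-pipeline-steps package
# (assumed layout, shown for illustration only):
# from azureml.contrib.pipeline.steps import ParallelRunConfig, ParallelRunStep

# New import path provided by azureml-pipeline-steps:
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep
```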
Azure Machine Learning Batch Inference
Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It is optimized for high-throughput, fire-and-forget inference over large collections of data.
Getting Started with Batch Inference Public Preview
The Batch Inference public preview offers a platform for running large-scale inference jobs or generic parallel, map-style operations, as sketched below. Please visit the Azure Machine Learning Notebooks repository for tutorials on how to use this service.
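As an illustration of this parallel map-style pattern, the sketch below assembles a batch inference pipeline with ParallelRunStep from azureml-pipeline-steps. The workspace, compute cluster, dataset, environment, and script names are placeholder assumptions; the entry script referenced here must define init() and run(mini_batch) functions.

```python
from azureml.core import Workspace, Dataset, Environment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

ws = Workspace.from_config()                           # assumes a local config.json
compute_target = ws.compute_targets["cpu-cluster"]     # hypothetical AmlCompute cluster
input_ds = Dataset.get_by_name(ws, "images-to-score")  # hypothetical registered FileDataset
env = Environment.get(ws, "AzureML-Minimal")           # any environment with the scoring dependencies

# Collected results are written to this intermediate pipeline output.
output = PipelineData(name="inference_output", datastore=ws.get_default_datastore())

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",      # folder containing the entry script
    entry_script="score.py",         # must define init() and run(mini_batch)
    mini_batch_size="10",            # files handed to each run() call (FileDataset input)
    error_threshold=10,              # failed items tolerated before the job aborts
    output_action="append_row",      # append each run() result to a single output file
    environment=env,
    compute_target=compute_target,
    node_count=2,
    process_count_per_node=2,
)

batch_step = ParallelRunStep(
    name="batch-inference",
    parallel_run_config=parallel_run_config,
    inputs=[input_ds.as_named_input("input_data")],
    output=output,
    allow_reuse=False,
)

pipeline = Pipeline(workspace=ws, steps=[batch_step])
run = pipeline.submit(experiment_name="batch-inference-demo")
run.wait_for_completion(show_output=True)
```

Throughput is controlled mainly by node_count, process_count_per_node, and mini_batch_size, which together determine how many mini-batches are scored concurrently.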
Download files
Built Distribution
Hashes for azureml_contrib_pipeline_steps-1.54.0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | bfcd04ce2e44c77bddd5742e9e4fda4654d8e79451f588f3f16001e0a83eac39
MD5 | 768158a199cfc86310dd1303efa1196b
BLAKE2b-256 | 7394175a12f3daa9cc55bbd500fcfa5a2fa34bdd0b1702d26fc5380788330ae6