Azure Machine Learning Parallel Run Step
# Azure Machine Learning Batch Inference
Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. It provides cost-effective, scalable inference compute with high throughput for asynchronous applications, and is optimized for fire-and-forget inference over large collections of data.
# Getting Started with Batch Inference Public Preview
The batch inference public preview offers a platform for running large-scale inference jobs or generic parallel, map-style operations, as sketched below. Visit [Azure Machine Learning Notebooks](https://github.com/Azure/MachineLearningNotebooks) for tutorials on how to use this service.
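The parallel map-style operations in this package are built around the `ParallelRunConfig` and `ParallelRunStep` classes in `azureml.contrib.pipeline.steps` (installable via `pip install azureml-contrib-pipeline-steps`). Below is a minimal sketch of a batch inference pipeline in the shape used by the preview tutorials; the workspace configuration, compute target name, dataset path, model name, and script name are placeholder assumptions, and parameter details may change between preview releases.

```python
# Sketch of a batch inference pipeline using the preview ParallelRunStep API.
# Workspace, compute target ("gpu-cluster"), dataset path, and model name
# are placeholders; adjust to your own resources.
from azureml.core import Workspace, Dataset, Environment
from azureml.core.model import Model
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.contrib.pipeline.steps import ParallelRunConfig, ParallelRunStep

ws = Workspace.from_config()                    # assumes a local config.json
datastore = ws.get_default_datastore()

# Files to score, and a location for the aggregated results.
input_ds = Dataset.File.from_files((datastore, "images/"))
output_dir = PipelineData(name="inferences", datastore=datastore)

parallel_run_config = ParallelRunConfig(
    source_directory="scripts",                 # folder containing score.py
    entry_script="score.py",                    # must define init() and run(mini_batch)
    mini_batch_size="5",                        # files handed to each run() call
    error_threshold=10,                         # failed items tolerated before aborting
    output_action="append_row",                 # concatenate per-batch results
    environment=Environment.get(ws, "AzureML-Minimal"),
    compute_target=ws.compute_targets["gpu-cluster"],
    node_count=2,
)

step = ParallelRunStep(
    name="batch-inference",
    models=[Model(ws, name="my-model")],        # a previously registered model (placeholder)
    parallel_run_config=parallel_run_config,
    inputs=[input_ds.as_named_input("input_files")],
    output=output_dir,
    allow_reuse=True,
)

run = Pipeline(workspace=ws, steps=[step]).submit(experiment_name="batch-inference-preview")
```

The entry script referenced above follows a simple contract: the service calls `init()` once per worker process and `run()` once per mini-batch. A sketch of `score.py` under that contract:

```python
def init():
    # One-time setup per worker process, e.g. loading the model into memory.
    pass

def run(mini_batch):
    # For a FileDataset input, mini_batch is a list of file paths.
    # With output_action="append_row", each returned item becomes one row
    # in the single aggregated output file.
    return [f"{path}: processed" for path in mini_batch]
```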
# Hashes for azureml_contrib_pipeline_steps-1.0.83-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 688f0e6cbef2a800ff0926a5d94735339ba09ad1d98e5f9287ab2562439875fb
MD5 | 60f3aa5da6e2d74fa1b533f682007d58
BLAKE2b-256 | 863db8a0a7ca1750a24bcef71f9cdbc1cd8f457e153cb9bf246f88d31594c11b
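As a reference for the table above, a downloaded wheel can be checked against the published SHA256 digest; this sketch assumes the wheel file sits in the current working directory.

```python
# Verify a downloaded wheel against the SHA256 digest published above.
import hashlib

EXPECTED_SHA256 = "688f0e6cbef2a800ff0926a5d94735339ba09ad1d98e5f9287ab2562439875fb"

with open("azureml_contrib_pipeline_steps-1.0.83-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, "hash mismatch: download may be corrupted"
print("SHA256 verified")
```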