# Azure Machine Learning Parallel Run Step - Deprecated
# Note
This package has been deprecated and moved to [azureml-pipeline-steps](https://pypi.org/project/azureml-pipeline-steps/). Please refer to the [documentation](https://docs.microsoft.com/python/api/azureml-pipeline-steps) for more information.
# Azure Machine Learning Batch Inference
Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. It provides cost-effective, scalable inference compute with high throughput for asynchronous applications, and is optimized for fire-and-forget inference over large collections of data.
# Getting Started with Batch Inference Public Preview
The Batch Inference public preview provides a platform for running large-scale inference or generic parallel map-style operations. Please visit [Azure Machine Learning Notebooks](https://github.com/Azure/MachineLearningNotebooks) to find tutorials on how to leverage this service.
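As a sketch of the programming model: Batch Inference drives a user-supplied entry script whose `init()` is called once per worker and whose `run()` is called repeatedly with mini-batches of the input data. The script below is a minimal, hypothetical example of that contract (the trivial "model" and the doubling logic are placeholders, not part of the service):

```python
# score.py - minimal batch inference entry-script sketch.
# The service loads this module on each worker, calls init() once,
# then calls run() repeatedly with mini-batches of the input data.

model = None

def init():
    # Load the model once per worker process. A trivial stand-in
    # (a scaling factor) replaces a real model load here.
    global model
    model = 2.0

def run(mini_batch):
    # mini_batch is a list of inputs (e.g. file paths or dataset rows).
    # Return one result per input; the service aggregates the results
    # from all workers into the job's output.
    results = []
    for item in mini_batch:
        results.append(item * model)
    return results
```

At run time the service shards the input dataset into mini-batches and fans them out across the nodes of the compute cluster, so `run()` only ever sees a small slice of the data.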
Hashes for azureml_contrib_pipeline_steps-1.35.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | a6bfa80e0a15dd9f5ab875bcd42f64aa65e21c3a77c20e73bd5c01f1c022d0e5
MD5 | 85a1775a19a313a4afff053aca3962f1
BLAKE2b-256 | 71c14a34293fd039aaa1f7328ce3ed4af9d47b9095b15a3b80393e85d9366fac