Azure Batch Client Library for Python
Microsoft Azure SDK for Python
Batch allows users to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. To learn more about Azure Batch, please see the Azure Batch Overview documentation.
Source code | Batch package (PyPI) | API reference documentation | Product documentation
Note: v15.x and above is a newer package that introduces significant changes and improvements over v14.x and below. Please see our migration guide for guidance.
Getting Started
Install the package
Install the latest version of the azure-batch package (v15.x or above) together with azure-identity using pip:
pip install azure-batch azure-identity
azure-identity is used for authentication and is mentioned in the authentication section below.
Prerequisites
- An Azure subscription. If you don't have one, create an account for free
- A Batch account with a linked Storage account
- Python 3.9 or later.
Authenticate the client
Note: For an asynchronous client, import BatchClient from azure.batch.aio instead.
Authenticate with Entra ID
We strongly recommend using Microsoft Entra ID for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. The service API authentication mechanism for a Batch account can be restricted to only Microsoft Entra ID using the allowedAuthenticationModes property. When this property is set, API calls using Shared Key authentication will be rejected.
Azure Batch provides integration with Microsoft Entra ID for identity-based authentication of requests. With Microsoft Entra ID, you can use role-based access control (RBAC) to grant access to your Azure Batch resources to users, groups, or applications. The Azure Identity library provides easy Microsoft Entra ID support for authentication.
from azure.identity import DefaultAzureCredential
from azure.batch import BatchClient
credentials = DefaultAzureCredential()
client = BatchClient(
    endpoint='https://<your account>.eastus.batch.azure.com',
    credential=credentials
)
Authenticate with Shared Key Credentials
You can also use Shared Key authentication to sign into your Batch account. This method uses your Batch account access keys to authenticate Azure commands for the Batch service.
from azure.core.credentials import AzureNamedKeyCredential
from azure.batch import BatchClient
credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(
    endpoint='https://<your account>.eastus.batch.azure.com',
    credential=credentials
)
Examples
This section contains code snippets covering common Azure Batch operations:
- Pool Operations
- Job Operations
- Job Schedule Operations
- Task Operations
- Retrieve output file from task
- Node Operations
Pool Operations
A pool is the collection of nodes that your application runs on.
Azure Batch pools build on top of the core Azure compute platform. They provide large-scale allocation, application installation, data distribution, health monitoring, and flexible adjustment (scaling) of the number of compute nodes within a pool. For more information, see Pools in Azure Batch.
Create a pool
Azure Batch has two SDKs: azure-batch, which interacts directly with the Azure Batch service, and azure.mgmt.batch, which interacts with Azure Resource Manager (otherwise known as ARM). Both of these SDKs support Batch pool operations such as create/get/update/list, but only the azure.mgmt.batch SDK can create a pool with managed identities, and for that reason it is the recommended way to create a pool.
This first snippet is an example of using azure.mgmt.batch to create a pool with managed identity. A more detailed usage of this method of creating a pool can be found in this create pool sample.
pool = batch_client.pool.create(
    GROUP_NAME,
    ACCOUNT,
    POOL,
    {
        "properties": {
            "vmSize": "STANDARD_D4",
            "deploymentConfiguration": {
                "virtualMachineConfiguration": {
                    "imageReference": {
                        "publisher": "Canonical",
                        "offer": "UbuntuServer",
                        "sku": "18.04-LTS",
                        "version": "latest"
                    },
                    "nodeAgentSkuId": "batch.node.ubuntu 18.04"
                }
            },
            "scaleSettings": {
                "autoScale": {
                    "formula": "$TargetDedicatedNodes=1",
                    "evaluationInterval": "PT5M"
                }
            }
        },
        "identity": {
            "type": "UserAssigned",
            "userAssignedIdentities": {
                "/subscriptions/" + SUBSCRIPTION_ID + "/resourceGroups/" + GROUP_NAME + "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/" + "Your Identity Name": {}
            }
        }
    }
)
This second snippet uses azure-batch for pool creation, without any support for managed identities. It demonstrates creating a client using credentials and then creating the pool with BatchPoolCreateOptions.
from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential
credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)
vm_config = models.VirtualMachineConfiguration(
    image_reference=models.BatchVmImageReference(
        publisher="MicrosoftWindowsServer",
        offer="WindowsServer",
        sku="2016-Datacenter-smalldisk"
    ),
    node_agent_sku_id="batch.node.windows amd64"
)

pool_spec = models.BatchPoolCreateOptions(
    id="my-pool",
    vm_size="standard_d2_v2",
    target_dedicated_nodes=1,
    virtual_machine_configuration=vm_config
)

client.create_pool(pool=pool_spec)
Get pool
The get_pool method can be used to retrieve an already created pool.
my_pool = client.get_pool(pool_id="my-pool")
List pools
The list_pools method can be used to list all the pools under a Batch account.
pools = client.list_pools()
This method can also be used with filters to specify specific pools that you are looking for:
pools = client.list_pools(
    filter="startswith(id,'batch_abc_')",
    select=["id", "state"],
    expand=["stats"],
)
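The filter string follows the OData syntax accepted by the Batch list APIs. As an illustration only, such clauses can be assembled programmatically; the helper below is hypothetical and not part of the SDK:

```python
def build_pool_filter(id_prefix=None, states=None):
    """Assemble an OData filter string for list_pools (illustrative helper only).

    id_prefix: match pools whose id starts with this prefix.
    states: iterable of pool states to match (OR-ed together).
    """
    clauses = []
    if id_prefix:
        # OData startswith() clause, as in the example above
        clauses.append(f"startswith(id,'{id_prefix}')")
    if states:
        # OR the individual state comparisons, then AND with the other clauses
        state_clause = " or ".join(f"state eq '{s}'" for s in states)
        clauses.append(f"({state_clause})")
    return " and ".join(clauses)

print(build_pool_filter("batch_abc_", ["active", "deleting"]))
```

The resulting string can then be passed as the filter argument, e.g. client.list_pools(filter=build_pool_filter("batch_abc_", ["active"])).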
Delete pool
The begin_delete_pool method can be used to delete a pool. The begin prefix marks this as one of our Long Running Operations (LROs): an operation that would otherwise execute synchronously is instead executed asynchronously to avoid connection and load-balancer timeouts. The client receives a poller and polls the service repeatedly to track progress and completion.
Synchronous approach - Wait for the operation to complete:
poller = client.begin_delete_pool(pool_id="my-pool")
result = poller.result()
print("Pool deleted successfully")
Asynchronous approach - Start the operation and check status later:
poller = client.begin_delete_pool(pool_id="my-pool", polling_interval=5)
if poller.done():
    print("Pool deletion completed")
else:
    print("Pool deletion still in progress")
    poller.wait()
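The poller contract behind the begin_* methods can be illustrated with a small stand-in class. This is a simplified sketch of the done/wait/result pattern, not the azure-core LROPoller implementation:

```python
import time

class SimplePoller:
    """Minimal sketch of the poller contract behind the begin_* methods.

    check_status is any callable that returns True once the service reports
    the operation finished (e.g. the pool no longer exists).
    """

    def __init__(self, check_status, polling_interval=1):
        self._check_status = check_status
        self._polling_interval = polling_interval
        self._done = False

    def done(self):
        # Single status probe against the service; caches a terminal state
        if not self._done:
            self._done = self._check_status()
        return self._done

    def wait(self):
        # Block, re-polling the service until the operation completes
        while not self.done():
            time.sleep(self._polling_interval)

    def result(self):
        # Block until completion, then return the terminal outcome
        self.wait()
        return "succeeded"

# Simulated service: the operation completes on the third status probe
polls = iter([False, False, True])
poller = SimplePoller(lambda: next(polls), polling_interval=0)
print(poller.result())  # succeeded
```

Both usage styles shown above fall out of this contract: call result() to block until completion, or call done() periodically and wait() only when you need to block.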
Job Operations
A job is a collection of tasks. It manages how computation is performed by its tasks on the compute nodes in a pool.
A job specifies the pool in which the work is to be run. You can create a new pool for each job, or use one pool for many jobs. You can create a pool for each job that is associated with a job schedule, or one pool for all jobs that are associated with a job schedule. For more information see Job and Tasks in Azure Batch.
Create a Job
Create a job that will contain and manage your tasks. Jobs are associated with a specific pool.
from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential
credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)
job_spec = models.BatchJobCreateOptions(
    id="my-job",
    pool_info=models.BatchPoolInfo(pool_id="my-pool")
)

client.create_job(job=job_spec)
Get job
The get_job method retrieves details about a specific job that has already been created.
my_job = client.get_job(job_id="my-job")
List jobs
The list_jobs method lists all jobs under a Batch account.
jobs = client.list_jobs()
Delete job
The begin_delete_job method is a Long Running Operation (LRO) that deletes a job asynchronously to avoid connection and load-balancer timeouts.
Synchronous approach - Wait for the operation to complete:
poller = client.begin_delete_job(job_id="my-job")
result = poller.result()
print("Job deleted successfully")
Asynchronous approach - Start the operation and check status later:
poller = client.begin_delete_job(job_id="my-job", polling_interval=5)
if poller.done():
    print("Job deletion completed")
else:
    print("Job deletion still in progress")
    poller.wait()
Job Schedule Operations
Job schedules enable you to create recurring jobs within the Batch service. A job schedule specifies when to run jobs and includes the specifications for the jobs to be run. You can specify the duration of the schedule (how long and when the schedule is in effect) and how frequently jobs are created during the scheduled period. For more information, see Scheduled Jobs in Azure Batch.
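A schedule's timing fields (such as start_window and recurrence_interval) are plain datetime.timedelta values. A quick stdlib sketch of how a daily recurrence interval yields successive job creation times:

```python
import datetime

# An illustrative schedule start and a daily recurrence interval
schedule_start = datetime.datetime(2025, 1, 1, 8, 0)
recurrence_interval = datetime.timedelta(days=1)

# The first few times such a schedule would create a job
run_times = [schedule_start + i * recurrence_interval for i in range(3)]
for t in run_times:
    print(t.isoformat())
```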
Create job schedule
The create_job_schedule method creates a new job schedule that automatically creates jobs based on the specified schedule.
import datetime

from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential

credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)

schedule_spec = models.BatchJobScheduleCreateOptions(
    id="my-job-schedule",
    schedule=models.BatchJobScheduleConfiguration(
        start_window=datetime.timedelta(hours=1),
        recurrence_interval=datetime.timedelta(days=1),
    ),
    job_specification=models.BatchJobSpecification(
        pool_info=models.BatchPoolInfo(pool_id="my-pool")
    )
)

client.create_job_schedule(job_schedule=schedule_spec)
Get job schedule
The get_job_schedule method retrieves details about a specific job schedule.
my_schedule = client.get_job_schedule(job_schedule_id="my-job-schedule")
List job schedules
The list_job_schedules method lists all job schedules under a Batch account.
schedules = client.list_job_schedules()
Replace job schedule
The replace_job_schedule method replaces the entire job schedule configuration with new values.
schedule_replace = models.BatchJobSchedule(
    schedule=models.BatchJobScheduleConfiguration(
        recurrence_interval=datetime.timedelta(hours=10)
    ),
    job_specification=models.BatchJobSpecification(
        pool_info=models.BatchPoolInfo(pool_id="my-pool")
    )
)

client.replace_job_schedule(
    job_schedule_id="my-job-schedule",
    job_schedule=schedule_replace
)
Update job schedule
The update_job_schedule method updates specific properties of an existing job schedule without replacing the entire configuration.
schedule_update = models.BatchJobScheduleUpdateOptions(
    schedule=models.BatchJobScheduleConfiguration(
        recurrence_interval=datetime.timedelta(hours=5)
    )
)

client.update_job_schedule(
    job_schedule_id="my-job-schedule",
    job_schedule=schedule_update
)
Get job task count
The get_job_task_counts method provides summary counts of tasks in different states for a specific job, including active, running, and completed tasks.
task_counts = client.get_job_task_counts(job_id="my-job")
print(f"Completed tasks: {task_counts.completed}")
print(f"Succeeded tasks: {task_counts.succeeded}")
print(f"Failed tasks: {task_counts.failed}")
Task Operations
A task is a unit of computation that is associated with a job. It runs on a node. Tasks are assigned to a node for execution, or are queued until a node becomes free. Put simply, a task runs one or more programs or scripts on a compute node to perform the work you need done. For more information, see Jobs and Tasks in Azure Batch.
Create a task
There are three ways that a task can be created using this package. This first example shows how to create a single task on a job using create_task with the parameter type BatchTaskCreateOptions.
from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential
credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)
task_spec = models.BatchTaskCreateOptions(
    id="my-task",
    command_line='cmd /c "echo Hello World"'
)

client.create_task(job_id="my-job", task=task_spec)
This second example demonstrates creating multiple tasks in a group using BatchTaskGroup with the create_task_collection method. A BatchTaskGroup can contain up to 100 tasks.
task1 = models.BatchTaskCreateOptions(id="task1", command_line='cmd /c "echo hello world"')
task2 = models.BatchTaskCreateOptions(id="task2", command_line='cmd /c "echo hello world"')
task3 = models.BatchTaskCreateOptions(id="task3", command_line='cmd /c "echo hello world"')
task_group = models.BatchTaskGroup(task_values=[task1, task2, task3])
result = client.create_task_collection(job_id="my-job", task_collection=task_group)
Finally, you can use create_tasks to create any number of tasks. This method packages the list of BatchTaskCreateOptions tasks passed in and repeatedly calls the create_task_collection method with groups of tasks bundled into BatchTaskGroup objects. This utility method also lets you select the number of parallel calls made to create_task_collection.
tasks_to_add = []
for i in range(1000):
    task = models.BatchTaskCreateOptions(
        id="my-task-" + str(i),
        command_line='cmd /c "echo hello world"',
    )
    tasks_to_add.append(task)

result = client.create_tasks(job_id="my-job", task_collection=tasks_to_add)
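The batching step described above, splitting the input list into groups no larger than the 100-task BatchTaskGroup limit, can be sketched with plain Python. This is a hypothetical helper illustrating the idea, not the SDK's internal code:

```python
def chunk_tasks(tasks, group_size=100):
    """Split a task list into groups of at most group_size items,
    mirroring the 100-task BatchTaskGroup limit."""
    return [tasks[i:i + group_size] for i in range(0, len(tasks), group_size)]

# 250 placeholder task ids split into groups of at most 100
tasks = [f"task{i}" for i in range(250)]
groups = chunk_tasks(tasks)
print([len(g) for g in groups])  # [100, 100, 50]
```

Each resulting group would then map to one create_task_collection call, which is why create_tasks has no upper limit on the number of tasks.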
Get task
The get_task method retrieves details about a specific task.
my_task = client.get_task(job_id="my-job", task_id="my-task")
List tasks
The list_tasks method lists all tasks associated with a specific job.
tasks = client.list_tasks(job_id="my-job")
Delete task
The delete_task method deletes a task from a job.
client.delete_task(job_id="my-job", task_id="my-task")
Retrieve output file from task
In Azure Batch, each task has a working directory under which it can create files and directories. This working directory can be used for storing the program that is run by the task, the data that it processes, and the output of the processing it performs. All files and directories of a task are owned by the task user.
The Batch service exposes a portion of the file system on a node as the root directory. This root directory is located on the temporary storage drive of the VM, not directly on the OS drive. For more information, see Files and Directories in Azure Batch.
List task files
List all files available in a task's directory using the list_task_files method:
all_files = client.list_task_files(job_id="my-job", task_id="my-task")
only_files = [f for f in all_files if not f.is_directory]
for file in only_files:
    print(f"File: {file.name}")
Node Operations
A node is an Azure virtual machine (VM) or cloud service VM that is dedicated to processing a portion of your application's workload. The size of a node determines the number of CPU cores, memory capacity, and local file system size that is allocated to the node. For more information, please see Nodes and Pools in Azure Batch.
Get node
The get_node method retrieves details about a specific compute node in a pool.
node = client.get_node(pool_id="my-pool", node_id="node1")
print(f"Node state: {node.state}")
print(f"Scheduling state: {node.scheduling_state}")
print(f"Node agent version: {node.node_agent_info.version}")
List nodes
The list_nodes method lists all compute nodes in a specific pool.
nodes = client.list_nodes(pool_id="my-pool")
for node in nodes:
    print(f"Node ID: {node.id}, State: {node.state}")
Reboot node
The begin_reboot_node method is a Long Running Operation (LRO) that reboots a compute node in a pool. You can specify the reboot kind to control how running tasks are handled during the reboot.
Synchronous approach - Wait for the reboot to complete:
poller = client.begin_reboot_node(
    pool_id="my-pool",
    node_id="node1",
    options=models.BatchNodeRebootOptions(
        node_reboot_kind=models.BatchNodeRebootKind.TERMINATE
    )
)
result = poller.result()
print("Node rebooted successfully")
Asynchronous approach - Start the reboot and check status later:
poller = client.begin_reboot_node(
    pool_id="my-pool",
    node_id="node1",
    options=models.BatchNodeRebootOptions(
        node_reboot_kind=models.BatchNodeRebootKind.REQUEUE
    ),
    polling_interval=5
)

if poller.done():
    print("Node reboot completed")
else:
    print("Node reboot still in progress")
    poller.wait()
Error Handling
We adopted the Azure Core exception framework, which provides a variety of exception types that map directly to HTTP status codes and common error scenarios. The base HttpResponseError is the foundation, with specialized exceptions like ClientAuthenticationError, ResourceNotFoundError, ResourceExistsError, and more providing specific error categorization. This system also provides direct access to HTTP status codes, response headers, and request information.
from azure.batch import BatchClient
from azure.core.exceptions import (
    HttpResponseError,
    ResourceNotFoundError,
)

try:
    client = BatchClient(endpoint, credentials)
    pools = client.list_pools()
except ResourceNotFoundError as not_found_error:
    print(f"Service could not find resource {not_found_error.status_code}: {not_found_error.error.message}")
    create_missing_resource()  # application-specific recovery
except HttpResponseError as error:
    print(f"HTTP Status: {error.status_code}")
    print(f"Error Code: {error.error.code}")
    print(f"Message: {error.error.message}")
Usage
Note: Comprehensive code samples for the v15.x package are currently in development and will be available soon. In the meantime, the examples provided throughout this README demonstrate common operations for the latest version.
For code examples using the v14.x package, see the Batch samples repo on GitHub or see Batch on docs.microsoft.com.
Provide Feedback
If you encounter any bugs or have suggestions, please file an issue in the Issues section of the project.
Release History
15.1.0 (2026-03-06)
Other Changes
- This is the GA release of the features introduced in the 15.0.0 and 15.1.0 beta versions, including LRO support, job-level FIFO scheduling, CMK support on pools, IPv6 support, metadata security protocol support, IP tag support, and confidential VM enhancements.
Breaking Changes
- Renamed BatchNodeUserUpdateOptions to BatchNodeUserReplaceOptions.
- Renamed OutputFileUploadConfig to OutputFileUploadConfiguration.
- Removed models:
  - Removed AuthenticationTokenSettings.
  - Namespace changed to azure.batch.models._models: BatchJobTerminateOptions, BatchNodeDeallocateOptions, BatchNodeRebootOptions, BatchNodeReimageOptions.
- Removed enums:
  - Removed BatchAccessScope.
  - Namespace changed to azure.batch.models._enums: BatchNodeDeallocateOption, BatchNodeRebootKind, BatchNodeReimageOption.
- Renamed public methods:
  - list_sub_tasks -> list_subtasks
  - get_task_file -> download_task_file
  - get_node_file -> download_node_file
- Renamed parameters across all operation methods:
  - timeout -> service_timeout
  - ocpdate -> ocp_date
  - starttime -> start_time
  - endtime -> end_time
  - concurrencies -> max_concurrency
- Renamed properties in models:
  - e_tag -> etag in BatchJob, BatchJobSchedule, BatchPool, BatchTask, and BatchTaskCreateResult
  - values_property -> error_values in AutoScaleRunError, BatchError, and ResizeError
  - values_property -> result_values in CollectionResult
  - values_property -> task_values in BatchTaskGroup
  - avg_memory_gi_b -> avg_memory_gib, peak_memory_gi_b -> peak_memory_gib, avg_disk_gi_b -> avg_disk_gib, peak_disk_gi_b -> peak_disk_gib, disk_read_gi_b -> disk_read_gib, disk_write_gi_b -> disk_write_gib, network_read_gi_b -> network_read_gib, network_write_gi_b -> network_write_gib in BatchPoolResourceStatistics
- Removed properties:
  - Removed authentication_token_settings from BatchJobManagerTask, BatchStartTask, and BatchTask
  - Removed access from AuthenticationTokenSettings
15.1.0b3 (2026-02-05)
Other Changes
- Minor parameter renaming: read_io_gi_b to read_io_gib, write_io_gi_b to write_io_gib, and v_tpm_enabled to vtpm_enabled.
15.1.0b2 (2025-11-20)
Features Added
- Job-level FIFO
  - Added BatchJobDefaultOrder types.
  - Extended BatchTaskSchedulingPolicy with a new jobDefaultOrder property to support job-level FIFO scheduling.
- CMK support on pools
  - Added DiskCustomerManagedKey and DiskEncryptionSetParameters for customer-managed key (CMK) support on pools.
  - Extended DiskEncryptionConfiguration with a new customerManagedKey property.
  - Extended ManagedDisk with a new diskEncryptionSet property.
  - Added BatchPoolIdentityReference for referencing managed identities in disk encryption scenarios.
- IPv6 support on pools
  - Added ipv6Address to BatchNode.
  - Added ipv6RemoteLoginIPAddress and ipv6RemoteLoginPort to BatchNodeRemoteLoginSettings.
- Metadata security protocol support on pools
  - Added HostEndpointSettings and HostEndpointSettingsModeTypes.
  - Added ProxyAgentSettings.
  - Extended SecurityProfile with a new proxyAgentSettings property for metadata security protocol support.
- IP tag support
  - Added IPFamily and IPTag types.
  - Extended BatchPublicIpAddressConfiguration with new ipFamilies and ipTags properties for IP tag support.
Breaking Changes
- Removed all certificate APIs:
  - Removed create_certificate
  - Removed get_certificate
  - Removed list_certificates
  - Removed cancel_certificate_deletion
  - Removed begin_delete_certificate
- Removed models:
  - Removed BatchCertificate
  - Removed BatchCertificateDeleteError
  - Removed BatchCertificateFormat
  - Removed BatchCertificateReference
  - Removed BatchCertificateState
  - Removed BatchCertificateStoreLocation
  - Removed BatchCertificateVisibility
  - Removed BatchNodeCommunicationMode
- Removed properties:
  - Removed CertificateReferences from BatchNode
  - Removed ResourceTags and CertificateReferences from BatchPool
  - Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from BatchPoolCreateOptions
  - Removed CertificateReferences and TargetNodeCommunicationMode from BatchPoolReplaceOptions
  - Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from BatchPoolSpecifications
  - Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from BatchPoolUpdateOptions
  - Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from ComputeBatchModelFactory
15.1.0b1 (2025-10-01)
Features Added
- Added Long Running Operation (LRO) support for the following operation methods:
  - delete_job -> begin_delete_job
  - disable_job -> begin_disable_job
  - enable_job -> begin_enable_job
  - delete_job_schedule -> begin_delete_job_schedule
  - delete_pool -> begin_delete_pool
  - delete_certificate -> begin_delete_certificate
  - deallocate_node -> begin_deallocate_node
  - reboot_node -> begin_reboot_node
  - reimage_node -> begin_reimage_node
  - remove_nodes -> begin_remove_nodes
  - resize_pool -> begin_resize_pool
  - start_node -> begin_start_node
  - stop_pool_resize -> begin_stop_pool_resize
  - terminate_job -> begin_terminate_job
  - terminate_job_schedule -> begin_terminate_job_schedule
Breaking Changes
- Renamed the following models. These name changes include several models with the suffix Content being renamed to have the suffix Options.
  - AccessScope -> BatchAccessScope
  - AffinityInfo -> BatchAffinityInfo
  - BatchJobAction -> BatchJobActionKind
  - BatchJobCreateContent -> BatchJobCreateOptions
  - BatchJobDisableContent -> BatchJobDisableOptions
  - BatchJobScheduleCreateContent -> BatchJobScheduleCreateOptions
  - BatchJobScheduleUpdateContent -> BatchJobScheduleUpdateOptions
  - BatchJobTerminateContent -> BatchJobTerminateOptions
  - BatchJobUpdateContent -> BatchJobUpdateOptions
  - BatchNodeDeallocateContent -> BatchNodeDeallocateOptions
  - BatchNodeDisableSchedulingContent -> BatchNodeDisableSchedulingOptions
  - BatchNodeRebootContent -> BatchNodeRebootOptions
  - BatchNodeRebootOption -> BatchNodeRebootKind
  - BatchNodeReimageContent -> BatchNodeReimageOptions
  - BatchNodeRemoveContent -> BatchNodeRemoveOptions
  - BatchNodeUserCreateContent -> BatchNodeUserCreateOptions
  - BatchNodeUserUpdateContent -> BatchNodeUserUpdateOptions
  - BatchPoolCreateContent -> BatchPoolCreateOptions
  - BatchPoolEnableAutoScaleContent -> BatchPoolEnableAutoScaleOptions
  - BatchPoolEvaluateAutoScaleContent -> BatchPoolEvaluateAutoScaleOptions
  - BatchPoolReplaceContent -> BatchPoolReplaceOptions
  - BatchPoolResizeContent -> BatchPoolResizeOptions
  - BatchPoolUpdateContent -> BatchPoolUpdateOptions
  - BatchTaskCreateContent -> BatchTaskCreateOptions
  - ContainerConfiguration -> BatchContainerConfiguration
  - ContainerConfigurationUpdate -> BatchContainerConfigurationUpdate
  - DeleteBatchCertificateError -> BatchCertificateDeleteError
  - DiffDiskSettings -> BatchDiffDiskSettings
  - ErrorCategory -> BatchErrorSourceCategory
  - HttpHeader -> OutputFileUploadHeader
  - ImageReference -> BatchVmImageReference
  - OSDisk -> BatchOsDisk
  - OnAllBatchTasksComplete -> BatchAllTasksCompleteMode
  - OnBatchTaskFailure -> BatchAllTasksCompleteMode
  - PublicIpAddressConfiguration -> BatchPublicIpAddressConfiguration
  - UefiSettings -> BatchUefiSettings
  - UploadBatchServiceLogsContent -> UploadBatchServiceLogsOptions
  - VMDiskSecurityProfile -> BatchVMDiskSecurityProfile
- Renamed parameters in the following operation methods:
  - begin_disable_job changed the content parameter to disable_options
  - begin_deallocate_node changed the parameters parameter to options
  - begin_remove_nodes changed the content parameter to remove_options
  - begin_resize_pool changed the content parameter to resize_options
  - begin_terminate_job changed the parameters parameter to options
  - begin_reboot_node changed the parameters parameter to options
  - begin_reimage_node changed the parameters parameter to options
  - disable_node_scheduling changed the parameters parameter to options
  - enable_pool_auto_scale changed the content parameter to enable_auto_scale_options
  - evaluate_pool_auto_scale changed the content parameter to evaluate_auto_scale_options
  - upload_node_logs changed the content parameter to upload_options
  - replace_node_user changed the content parameter to update_options
15.0.0b2 (2025-03-01)
Features Added
- Force delete/terminate job or job schedule:
  - Added a force parameter of type Boolean to delete_job, terminate_job, delete_job_schedule, and terminate_job_schedule.
- Support for compute node start/deallocate operations:
  - Added start_node and deallocate_node methods to BatchClient and AsyncBatchClient.
- Container task data mount isolation:
  - Added containerHostBatchBindMounts of type List<ContainerHostBatchBindMountEntry> to BatchTaskContainerSettings.
- Patch improvements for pool and job:
  - Added displayName, vmSize, taskSlotsPerNode, taskSchedulingPolicy, enableInterNodeCommunication, virtualMachineConfiguration, networkConfiguration, userAccounts, mountConfiguration, upgradePolicy, and resourceTags to BatchPoolUpdateContent.
  - Added networkConfiguration to BatchJobUpdateContent.
- Confidential VM support:
  - Added confidentialVM to SecurityTypes.
  - Added securityProfile of type VMDiskSecurityProfile to ManagedDisk.
- Support for shared and community gallery images:
  - Added sharedGalleryImageId and communityGalleryImageId to ImageReference.
- Re-added support for BatchCertificate (temporary, since this feature is deprecated):
  - Added create_certificate, list_certificates, cancel_certificate_deletion, delete_certificate, and get_certificate methods to BatchClient and AsyncBatchClient.
Breaking Changes
- Removed the get_remote_desktop method from BatchClient. Use get_node_remote_login_settings instead to remotely log in to a compute node.
- Removed CloudServiceConfiguration from pool models and operations. Use VirtualMachineConfiguration when creating pools.
- Removed ApplicationLicenses from pool models and operations.
15.0.0b1 (2024-09-01)
- Version (15.0.0b1) is the first preview of our efforts to create a user-friendly and Pythonic client library for Azure Batch. For more information about this, and preview releases of other Azure SDK libraries, please visit https://azure.github.io/azure-sdk/releases/latest/python.html.
Breaking Changes
- Removed certificates.
- Removed render licenses.
- Removed CloudServiceConfiguration from pool models and operations. VirtualMachineConfiguration is supported for pool configurations moving forward.
14.2.0 (2024-02-01)
Features Added
- Added UpgradePolicy to the CloudPool definition for pool creation.
  - Added AutomaticOSUpgradePolicy to include configuration parameters for automatic OS upgrades.
  - Added RollingUpgradePolicy to include configuration parameters for rolling upgrades.
14.1.0 (2023-11-01)
Features Added
- Added ResourceTags support to pool creation, so users can specify resource tags for a pool. This feature is currently only supported at pool creation but will be updatable in the future.
  - Added the resourceTags property to the PoolSpecification definition.
  - Added the resourceTags property to the CloudPool definition.
- Added SecurityProfile support to pool creation. Trusted Launch provides advanced security for the guest OS, preventing boot-kits/rootkits (such as unsigned drivers or kernel modifications) from being introduced into the boot chain.
  - Added the serviceArtifactReference and securityProfile properties to the VirtualMachineConfiguration definition.
- Added ServiceArtifactReference and OSDisk support to pool creation.
  - Added the standardssd_lrs value to the StorageAccountType enum.
  - Added the caching, managedDisk, diskSizeGB, and writeAcceleratorEnabled properties to the NodePlacementPolicyType definition.
  - Added the scaleSetVmResourceID property to the VirtualMachineInfo definition.
14.0.0 (2023-05-01)
Features Added
- Added the boolean property enableAcceleratedNetworking to NetworkConfiguration.
  - This property determines whether this pool should enable accelerated networking, with a default value of False.
  - Whether this feature can be enabled also depends on whether the operating system/VM instance is supported, which should align with the AcceleratedNetworking policy (AcceleratedNetworking limitations and constraints).
- Added the boolean property enableAutomaticUpgrade to VMExtension.
  - This property determines whether the extension should be automatically upgraded by the platform if a newer version of the extension is available.
- Added a new property type to ContainerConfiguration. Possible values include dockerCompatible and criCompatible.
Breaking Changes
- Removed lifetime statistics API. This API is no longer supported.
- Removed the job.get_all_lifetime_statistics API.
- Removed the pool.get_all_lifetime_statistics API.
Other Changes
- Deprecated CertificateOperations related methods.
  - These operations are deprecated and will be removed after February 2024. Please use the Azure KeyVault extension instead.
13.0.0 (2022-11-08)
Features Added
- Added the new custom enum type NodeCommunicationMode.
  - This property determines how a pool communicates with the Batch service.
  - Possible values: Default, Classic, Simplified.
- Added the properties current_node_communication_mode and target_node_communication_mode of type NodeCommunicationMode to CloudPool.
- Added the property target_node_communication_mode of type NodeCommunicationMode to PoolSpecification, PoolAddParameter, PoolPatchParameter, and PoolUpdatePropertiesParameter.
12.0.0 (2022-02-01)
Features
- Added the uploadHeaders property to OutputFileBlobContainerDestination.
  - Allows users to set custom HTTP headers on resource file uploads.
  - Array of type HttpHeader (also being added).
- Added the boolean property allow_task_preemption to JobSpecification, CloudJob, JobAddParameter, JobPatchParameter, and JobUpdateParameter.
  - Marks tasks as preemptible by higher-priority tasks (requires a Comms-Enabled or Single Tenant pool).
- Replaced comment (title, description, etc.) references of "low-priority" with "Spot/Low-Priority" to reflect new service behavior.
  - No API change required.
  - Low-Priority compute nodes (VMs) will continue to be used for User Subscription pools (and only User Subscription pools), as before.
  - Spot compute nodes (VMs) will now be used for Batch Managed pools (and only Batch Managed pools).
11.0.0 (2021-07-30)
Features
- Added the ability to assign user-assigned managed identities to `CloudPool`. These identities are made available on each node in the pool and can be used to access various resources.
- Added `identity_reference` property to the following models to support accessing resources via managed identity: `AzureBlobFileSystemConfiguration`, `OutputFileBlobContainerDestination`, `ContainerRegistry`, `ResourceFile`, `UploadBatchServiceLogsConfiguration`.
- Added new `compute_node_extension` operations to `BatchServiceClient` for getting/listing VM extensions on a node.
- Added new `extensions` property to `VirtualMachineConfiguration` on `CloudPool` to specify virtual machine extensions for nodes.
- Added the ability to specify availability zones using a new property `node_placement_configuration` on `VirtualMachineConfiguration`.
- Added new `os_disk` property to `VirtualMachineConfiguration`, which contains settings for the operating system disk of the virtual machine.
  - The `placement` property on `DiffDiskSettings` specifies the ephemeral disk placement for operating system disks for all VMs in the pool. Setting it to "CacheDisk" will store the ephemeral OS disk on the VM cache.
- Added `max_parallel_tasks` property on `CloudJob` to control the maximum number of tasks that can run in parallel for the job (defaults to -1, meaning unlimited).
- Added `virtual_machine_info` property on `ComputeNode`, which contains information about the current state of the virtual machine, including the exact version of the marketplace image the VM is using.
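The `max_parallel_tasks` semantics above (-1 meaning unlimited) can be illustrated with a small plain-Python sketch. The helper below is hypothetical, written only to show the documented rule, and is not part of the SDK:

```python
def startable_task_count(pending: int, running: int, max_parallel_tasks: int) -> int:
    """How many more tasks a job may start right now, given its
    max_parallel_tasks limit. A value of -1 means no limit."""
    if max_parallel_tasks == -1:
        return pending
    return max(0, min(pending, max_parallel_tasks - running))
```

For example, with 10 pending tasks, 3 already running, and a limit of 5, only 2 more tasks may start; with the default of -1, all 10 may start.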
10.0.0 (2020-09-01)
Features
- [Breaking] Replaced property `maxTasksPerNode` with `taskSlotsPerNode` on the pool. With this property, tasks in a job can consume a dynamic number of slots, allowing finer-grained control over resource consumption.
- [Breaking] Changed the response type of `GetTaskCounts` to return `TaskCountsResult`, a complex object containing the previous `TaskCounts` object and a new `TaskSlotCounts` object providing similar information in the context of slots being used.
- Added property `requiredSlots` to the task, allowing users to specify how many slots on a node it should take up.
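The slot-based scheduling introduced here can be sketched in plain Python. The function below is a hypothetical illustration of the idea, not SDK code: each task declares `requiredSlots`, and a node offers `taskSlotsPerNode` slots in total:

```python
def tasks_fitting_on_node(required_slots: list[int], task_slots_per_node: int) -> int:
    """Greedily count how many queued tasks (each declaring its
    requiredSlots) fit on one node offering task_slots_per_node slots."""
    free = task_slots_per_node
    fitted = 0
    for slots in required_slots:
        if slots <= free:
            free -= slots
            fitted += 1
    return fitted
```

So a node with 4 slots can run two 2-slot tasks concurrently, or four 1-slot tasks, rather than a fixed task count per node.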
9.0.0 (2020-03-24)
Features
- Added ability to encrypt `ComputeNode` disk drives using the new `disk_encryption_configuration` property of `VirtualMachineConfiguration`.
- [Breaking] The `virtual_machine_image_id` property of `ImageReference` can now only refer to a Shared Image Gallery image.
- [Breaking] Pools can now be provisioned without a public IP using the new `public_ip_address_configuration` property of `NetworkConfiguration`.
  - The `public_ips` property of `NetworkConfiguration` has moved into `public_ip_address_configuration` as well. This property can only be specified if `ip_provisioning_type` is `UserManaged`.
REST API version
This version of the Batch Python client library targets version 2020-03-01.11.0 of the Azure Batch REST API.
8.0.0 (2019-8-5)
- Using REST API version 2019-08-01.10.0.
- Added ability to specify a collection of public IPs on `NetworkConfiguration` via the new `public_ips` property. This guarantees nodes in the pool will have an IP from the list of user-provided IPs.
- Added ability to mount remote file systems on each node of a pool via the `mount_configuration` property on `CloudPool`.
- Shared Image Gallery images can now be specified on the `virtual_machine_image_id` property of `ImageReference` by referencing the image via its ARM ID.
- [Breaking] When not specified, the default value for `wait_for_success` on `StartTask` is now `True` (was `False`).
- [Breaking] When not specified, the default value for `scope` on `AutoUserSpecification` is now always `Pool` (was `Task` on Windows nodes, `Pool` on Linux nodes).
7.0.0 (2019-6-11)
- Using REST API version 2019-06-01.9.0.
- [Breaking] Replaced `AccountOperations.list_node_agent_skus` with `AccountOperations.list_supported_images`. `list_supported_images` contains all of the same information originally available in `list_node_agent_skus` but in a clearer format. New non-verified images are also now returned. Additional information about `capabilities` and `batch_support_end_of_life` is accessible on the `ImageInformation` object returned by `list_supported_images`.
- Now supports network security rules blocking network access to a `CloudPool` based on the source port of the traffic. This is done via the `source_port_ranges` property on `network_security_group_rules`.
- When running a container, Batch now supports executing the task in the container working directory or in the Batch task working directory. This is controlled by the `working_directory` property on `TaskContainerSettings`.
6.0.1 (2019-2-26)
- Fixed a bug in the exception handling of the `TaskOperations.add_collection` methods.
6.0.0 (2018-12-14)
- Using REST API version 2018-12-01.8.0.
- [Breaking] Removed support for the `upgrade_os` API on `CloudServiceConfiguration` pools.
  - Removed the `PoolOperations.upgrade_os` API.
  - Renamed `target_os_version` to `os_version` and removed `current_os_version` on `CloudServiceConfiguration`.
  - Removed the `upgrading` state from the `PoolState` enum.
- [Breaking] Removed `data_egress_gi_b` and `data_ingress_gi_b` from `PoolUsageMetrics`. These properties are no longer supported.
- [Breaking] ResourceFile improvements:
  - Added the ability to specify an entire Azure Storage container in `ResourceFile`. There are now three supported modes for `ResourceFile`:
    - `http_url` creates a `ResourceFile` pointing to a single HTTP URL.
    - `storage_container_url` creates a `ResourceFile` pointing to the blobs under an Azure Blob Storage container.
    - `auto_storage_container_name` creates a `ResourceFile` pointing to the blobs under an Azure Blob Storage container in the Batch registered auto-storage account.
  - URLs provided to `ResourceFile` via the `http_url` property can now be any HTTP URL. Previously, these had to be an Azure Blob Storage URL.
  - The blobs under the Azure Blob Storage container can be filtered by the `blob_prefix` property.
- [Breaking] Removed the `os_disk` property from `VirtualMachineConfiguration`. This property is no longer supported.
- Pools which set the `dynamic_vnet_assignment_scope` on `NetworkConfiguration` to `DynamicVNetAssignmentScope.job` can now dynamically assign a Virtual Network to each node the job's tasks run on. The specific Virtual Network to join the nodes to is specified in the new `network_configuration` property on `CloudJob` and `JobSpecification`.
  - Note: This feature is in public preview. It is disabled for all Batch accounts except for those which have contacted us and requested to be in the pilot.
- The maximum lifetime of a task is now 180 days (previously it was 7).
- Added support on Windows pools for creating users with a specific login mode (either `batch` or `interactive`) via `WindowsUserConfiguration.login_mode`.
- The default task retention time for all tasks is now 7 days; previously it was infinite.
- [Breaking] Renamed the `base_url` parameter to `batch_url` on the `BatchServiceClient` class, and it is now required.
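The three mutually exclusive `ResourceFile` source modes described above can be expressed as a small validation sketch in plain Python. The helper below is hypothetical and only illustrates the rule that exactly one source must be set; it is not part of the SDK:

```python
def resource_file_mode(http_url=None, storage_container_url=None,
                       auto_storage_container_name=None):
    """Return which of the three mutually exclusive ResourceFile source
    modes is in use; raise if zero or more than one is set."""
    sources = {
        "http_url": http_url,
        "storage_container_url": storage_container_url,
        "auto_storage_container_name": auto_storage_container_name,
    }
    chosen = [name for name, value in sources.items() if value]
    if len(chosen) != 1:
        raise ValueError(f"exactly one source must be set, got: {chosen}")
    return chosen[0]
```

For example, passing only `http_url` selects the single-URL mode, while passing only `storage_container_url` selects the whole-container mode.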
5.1.1 (2018-10-16)
Bugfixes
- Fix authentication class to allow HTTP session to be re-used
Note
- azure-nspkg is not installed anymore on Python 3 (PEP420-based namespace package)
5.1.0 (2018-08-28)
- Updated operation `TaskOperations.add_collection` with the following added functionality:
  - Retries server-side errors.
  - Automatically chunks lists of more than 100 tasks into multiple requests.
  - If tasks are too large to be submitted in chunks of 100, reduces the number of tasks per request.
  - Added a parameter to specify the number of threads to use when submitting tasks.
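The chunking behavior above can be sketched in a few lines of plain Python. This is a hypothetical illustration of splitting a task list into request-sized batches, not the SDK's actual implementation (which also handles retries and oversized payloads):

```python
def chunk_tasks(tasks: list, max_per_request: int = 100) -> list[list]:
    """Split a task list into batches of at most max_per_request,
    mirroring how add_collection submits >100 tasks in multiple calls."""
    return [tasks[i:i + max_per_request]
            for i in range(0, len(tasks), max_per_request)]
```

A list of 250 tasks would thus be submitted as three requests of 100, 100, and 50 tasks.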
5.0.0 (2018-08-24)
- Using REST API version 2018-08-01.7.0.
- Added `node_agent_info` in `ComputeNode` to return the node agent information.
- [Breaking] Removed the `validation_status` property from `TaskCounts`.
- [Breaking] The default caching type for `DataDisk` and `OSDisk` is now `read_write` instead of `none`.
- `BatchServiceClient` can be used as a context manager to keep the underlying HTTP session open for performance.
- [Breaking] Model signatures now use keyword-only argument syntax. Each positional argument must be rewritten as a keyword argument.
- [Breaking] The following operation signatures have changed:
- Operation PoolOperations.enable_auto_scale
- Operation TaskOperations.update
- Operation ComputeNodeOperations.reimage
- Operation ComputeNodeOperations.disable_scheduling
- Operation ComputeNodeOperations.reboot
- Operation JobOperations.terminate
- Enum types now use the `str` mixin (`class AzureEnum(str, Enum)`) to improve the behavior when unrecognized enum values are encountered.
4.1.3 (2018-04-24)
- Update some APIs' comments
- New property `leaving_pool` in the `node_counts` type.
4.1.2 (2018-04-23)
Bugfixes
- Compatibility of the sdist with wheel 0.31.0
- Compatibility with msrestazure 0.4.28
4.1.1 (2018-03-26)
- Fixed a regression in the `enable_auto_scale` method.
4.1.0 (2018-03-07)
- Using REST API version 2018-03-01.6.1.
- Added the ability to query pool node counts by state, via the new `list_pool_node_counts` method.
- Added the ability to upload Azure Batch node agent logs from a particular node, via the `upload_batch_service_logs` method.
  - This is intended for use in debugging by Microsoft support when there are problems on a node.
4.0.0 (2017-09-25)
- Using REST API version 2017-09-01.6.0.
- Added the ability to get a discount on Windows VM pricing if you have on-premises licenses for the OS SKUs you are deploying, via `license_type` on `VirtualMachineConfiguration`.
- Added support for attaching empty data drives to `VirtualMachineConfiguration`-based pools, via the new `data_disks` attribute on `VirtualMachineConfiguration`.
- [Breaking] Custom images must now be deployed using a reference to an ARM Image, instead of pointing to .vhd files in blobs directly.
  - The new `virtual_machine_image_id` property on `ImageReference` contains the reference to the ARM Image, and `OSDisk.image_uris` no longer exists.
  - Because of this, `image_reference` is now a required attribute of `VirtualMachineConfiguration`.
- [Breaking] Multi-instance tasks (created using `MultiInstanceSettings`) must now specify a `coordination_command_line`, and `number_of_instances` is now optional and defaults to 1.
- Added support for tasks run using Docker containers. To run a task using a Docker container you must specify a `container_configuration` on the `VirtualMachineConfiguration` for a pool, and then add `container_settings` on the task.
3.1.0 (2017-07-24)
- Added a new operation `job.get_task_counts` to retrieve the number of tasks in each state.
- Added support for inbound endpoint configuration on a pool via a new `pool_endpoint_configuration` attribute on `NetworkConfiguration`. This property is only supported on pools that use `virtual_machine_configuration`.
- A `ComputeNode` now also has an `endpoint_configuration` attribute with the details of the applied endpoint configuration for that node.
3.0.0 (2017-05-10)
- Added support for the new low-priority node type; `AddPoolParameter` and `PoolSpecification` now have an additional property `target_low_priority_nodes`.
- `target_dedicated` and `current_dedicated` on `CloudPool`, `AddPoolParameter`, and `PoolSpecification` have been renamed to `target_dedicated_nodes` and `current_dedicated_nodes`.
- `resize_error` on `CloudPool` is now a collection called `resize_errors`.
- Added a new `is_dedicated` property on `ComputeNode`, which is `false` for low-priority nodes.
- Added a new `allow_low_priority_node` property to `JobManagerTask`, which if `true` allows the `JobManagerTask` to run on a low-priority compute node.
- `PoolResizeParameter` now takes two optional parameters, `target_dedicated_nodes` and `target_low_priority_nodes`, instead of one required parameter `target_dedicated`. At least one of these two parameters must be specified.
- Added support for uploading task output files to persistent storage, via the `OutputFiles` property on `CloudTask` and `JobManagerTask`.
- Added support for specifying actions to take based on a task's output file upload status, via the `file_upload_error` property on `ExitConditions`.
- Added support for determining if a task was a success or a failure via the new `result` property on all task execution information objects.
- Renamed `scheduling_error` on all task execution information objects to `failure_information`. `TaskFailureInformation` replaces `TaskSchedulingError` and is returned any time there is a task failure. This includes all previous scheduling error cases, as well as nonzero task exit codes, and file upload failures from the new output files feature.
- Renamed the `SchedulingErrorCategory` enum to `ErrorCategory`.
- Renamed `scheduling_error` on `ExitConditions` to `pre_processing_error` to clarify when the error took place in the task life-cycle.
- Added support for provisioning application licenses to your pool, via a new `application_licenses` property on `PoolAddParameter`, `CloudPool`, and `PoolSpecification`. Please note that this feature is in gated public preview, and you must request access to it via a support ticket.
- The `ssh_private_key` attribute of a `UserAccount` object has been replaced with an expanded `LinuxUserConfiguration` object with additional settings for a user ID and group ID of the user account.
- Removed the `unmapped` enum state from `AddTaskStatus`, `CertificateFormat`, `CertificateVisibility`, `CertStoreLocation`, `ComputeNodeFillType`, `OSType`, and `PoolLifetimeOption`, as it was never used.
- Improved and clarified documentation.
2.0.1 (2017-04-19)
- This wheel package is now built with the azure wheel extension
2.0.0 (2017-02-23)
- AAD token authentication now supported.
- Some operation names have changed (along with their associated parameter model classes):
- pool.list_pool_usage_metrics -> pool.list_usage_metrics
- pool.get_all_pools_lifetime_statistics -> pool.get_all_lifetime_statistics
- job.get_all_jobs_lifetime_statistics -> job.get_all_lifetime_statistics
- file.get_node_file_properties_from_task -> file.get_properties_from_task
- file.get_node_file_properties_from_compute_node -> file.get_properties_from_compute_node
- The attribute 'file_name' in relation to file operations has been renamed to 'file_path'.
- Change in naming convention for enum values to use underscores: e.g. StartTaskState.waitingforstarttask -> StartTaskState.waiting_for_start_task.
- Support for running tasks under a predefined or automatic user account. This includes tasks, job manager tasks, job preparation and release tasks and pool start tasks. This feature replaces the previous 'run_elevated' option on a task.
- Tasks now have an optional scoped authentication token (only applies to tasks and job manager tasks).
- Support for creating pools with a list of user accounts.
- Support for creating pools using a custom VM image (only supported on accounts created with a "User Subscription" pool allocation mode).
1.1.0 (2016-09-15)
- Added support for task reactivation
1.0.0 (2016-08-09)
- Added support for joining a `CloudPool` to a virtual network using the `network_configuration` property.
- Added support for application package references on CloudTask and JobManagerTask.
- Added support for automatically terminating jobs when all tasks complete or when a task fails, via the on_all_tasks_complete property and the CloudTask exit_conditions property.
0.30.0rc5
- Initial Release
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file azure_batch-15.1.0.tar.gz.
File metadata
- Download URL: azure_batch-15.1.0.tar.gz
- Upload date:
- Size: 298.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: RestSharp/106.13.0.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2f684d176b457b7213535e145180a914ed6844b2d1066fcc22ac88c3ac9e3fa5` |
| MD5 | `331ad001e0a3987a9cda781c8c6b0c83` |
| BLAKE2b-256 | `e9ec9c38379dea2a4817a5c5b44282eff56476326cefa09da9e804809b397656` |
File details
Details for the file azure_batch-15.1.0-py3-none-any.whl.
File metadata
- Download URL: azure_batch-15.1.0-py3-none-any.whl
- Upload date:
- Size: 260.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: RestSharp/106.13.0.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `6001232ba67a4051f4f1b35d434232757b78baa14e4964ec3bfb91afa1d10ed4` |
| MD5 | `c798bbfa8c52ecbde9edc81b05ee4379` |
| BLAKE2b-256 | `9b35358a421ebbb0d6fa929333dd71b94922d34445aa02d9d2fa7c0a277dcf53` |