
Microsoft Corporation Azure Batch Client Library for Python

Project description

Microsoft Azure SDK for Python

Batch allows users to run large-scale parallel and high-performance computing (HPC) batch jobs efficiently in Azure. To learn more about Azure Batch, please see the Azure Batch Overview documentation.

Source code | Batch package (PyPI) | API reference documentation | Product documentation

Note: v15.x and above is a newer package with significant changes and improvements over v14.x and below. Please see our migration guide for guidance.

Getting Started

Install the package

Install the latest version of the azure-batch package (v15.x or above) together with azure-identity using pip:

pip install azure-batch azure-identity

azure-identity is used for authentication and is mentioned in the authentication section below.

Prerequisites

Authenticate the client

Note: For an asynchronous client, import BatchClient from azure.batch.aio.

Authenticate with Entra ID

We strongly recommend using Microsoft Entra ID for Batch account authentication. Some Batch capabilities require this method of authentication, including many of the security-related features discussed here. The service API authentication mechanism for a Batch account can be restricted to only Microsoft Entra ID using the allowedAuthenticationModes property. When this property is set, API calls using Shared Key authentication will be rejected.
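For reference, restricting a Batch account to Entra ID-only authentication is configured on the account resource itself. A sketch of the relevant ARM property (exact placement follows the Batch account resource schema; see the allowedAuthenticationModes documentation):

```json
{
  "properties": {
    "allowedAuthenticationModes": ["AAD"]
  }
}
```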

Azure Batch provides integration with Microsoft Entra ID for identity-based authentication of requests. With Microsoft Entra ID, you can use role-based access control (RBAC) to grant access to your Azure Batch resources to users, groups, or applications. The Azure Identity library provides easy Microsoft Entra ID support for authentication.

from azure.identity import DefaultAzureCredential
from azure.batch import BatchClient

credentials = DefaultAzureCredential()
client = BatchClient(
    endpoint='https://<your account>.eastus.batch.azure.com',
    credential=credentials
)

Authenticate with Shared Key Credentials

You can also use Shared Key authentication to sign into your Batch account. This method uses your Batch account access keys to authenticate Azure commands for the Batch service.

from azure.core.credentials import AzureNamedKeyCredential
from azure.batch import BatchClient

credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(
    endpoint='https://<your account>.eastus.batch.azure.com',
    credential=credentials
)

Examples

This section contains code snippets covering common Azure Batch operations:

Pool Operations

A pool is the collection of nodes that your application runs on.

Azure Batch pools build on top of the core Azure compute platform. They provide large-scale allocation, application installation, data distribution, health monitoring, and flexible adjustment (scaling) of the number of compute nodes within a pool. For more information, see Pools in Azure Batch.

Create a pool

Azure Batch has two SDKs: azure-batch, which interacts directly with the Azure Batch service, and azure.mgmt.batch, which interacts with Azure Resource Manager (ARM). Both SDKs support pool operations such as create/get/update/list, but only the azure.mgmt.batch SDK can create a pool with managed identities, and for that reason it is the recommended way to create a pool.

This first snippet is an example of using azure.mgmt.batch to create a pool with managed identity. A more detailed usage of this method of creating a pool can be found in this create pool sample.

pool = batch_client.pool.create(
    GROUP_NAME,
    ACCOUNT,
    POOL,
    {
        "properties": {
            "vmSize": "STANDARD_D4",
            "deploymentConfiguration": {
                "virtualMachineConfiguration": {
                    "imageReference": {
                        "publisher": "Canonical",
                        "offer": "UbuntuServer",
                        "sku": "18.04-LTS",
                        "version": "latest"
                    },
                    "nodeAgentSkuId": "batch.node.ubuntu 18.04"
                }
            },
            "scaleSettings": {
                "autoScale": {
                    "formula": "$TargetDedicatedNodes=1",
                    "evaluationInterval": "PT5M"
                }
            }
        },
        "identity": {
            "type": "UserAssigned",
            "userAssignedIdentities": {
                "/subscriptions/" + SUBSCRIPTION_ID + "/resourceGroups/" + GROUP_NAME + "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/" + "Your Identity Name": {}
            }
        }
    }
)
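The key in the userAssignedIdentities mapping is the full ARM resource ID of the identity. A hypothetical helper (not part of either SDK) that assembles it:

```python
def identity_resource_id(subscription_id: str, group_name: str, identity_name: str) -> str:
    """Hypothetical helper: assemble the ARM resource ID used as the key
    in the userAssignedIdentities mapping above."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{group_name}"
        "/providers/Microsoft.ManagedIdentity"
        f"/userAssignedIdentities/{identity_name}"
    )

# Example with placeholder values:
resource_id = identity_resource_id(
    "11111111-2222-3333-4444-555555555555", "my-group", "my-identity"
)
```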

This second snippet uses azure-batch for pool creation, without support for managed identities. It demonstrates creating a client from credentials and then creating the pool with BatchPoolCreateOptions.

from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential

credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)

vm_config = models.VirtualMachineConfiguration(
    image_reference=models.BatchVmImageReference(
        publisher="MicrosoftWindowsServer",
        offer="WindowsServer",
        sku="2016-Datacenter-smalldisk"
    ),
    node_agent_sku_id="batch.node.windows amd64"
)

pool_spec = models.BatchPoolCreateOptions(
    id="my-pool",
    vm_size="standard_d2_v2",
    target_dedicated_nodes=1,
    virtual_machine_configuration=vm_config
)

client.create_pool(pool=pool_spec)

Get pool

The get_pool method can be used to retrieve an already created pool.

my_pool = client.get_pool(pool_id="my-pool")

List pool

The list_pools method can be used to list all the pools under a Batch account.

pools = client.list_pools()

This method can also be used with filters to narrow the results to specific pools:

pools = client.list_pools(
    filter="startswith(id,'batch_abc_')",
    select=["id", "state"],
    expand=["stats"],
)
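The filter string uses OData syntax. As a small illustration, a hypothetical helper (not part of the SDK) that builds the prefix filter used above:

```python
def pool_prefix_filter(prefix: str) -> str:
    """Hypothetical helper: build an OData filter matching pool ids
    that start with the given prefix."""
    return f"startswith(id,'{prefix}')"

# Produces the same filter string used in the snippet above.
flt = pool_prefix_filter("batch_abc_")
```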

Delete pool

The begin_delete_pool method can be used to delete a pool. The begin prefix marks this as a Long Running Operation (LRO): an operation that would otherwise execute synchronously runs asynchronously to avoid connection and load-balancer timeouts. The client polls the service repeatedly to track progress and completion.

Synchronous approach - Wait for the operation to complete:

poller = client.begin_delete_pool(pool_id="my-pool")
result = poller.result()
print("Pool deleted successfully")

Asynchronous approach - Start the operation and check status later:

poller = client.begin_delete_pool(pool_id="my-pool", polling_interval=5)

if poller.done():
    print("Pool deletion completed")
else:
    print("Pool deletion still in progress")
    poller.wait()
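Conceptually, the poller repeats a status check until the operation completes. A simplified, stdlib-only sketch of that loop (not the SDK's actual implementation; the real poller also honors retry-after headers and terminal error states):

```python
import time

def poll_until_done(check_status, polling_interval=5, timeout=300):
    """Call check_status() until it reports completion or the timeout
    elapses. Returns True on completion, False on timeout."""
    waited = 0
    while waited < timeout:
        if check_status():
            return True
        time.sleep(polling_interval)
        waited += polling_interval
    return False
```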

Job Operations

A job is a collection of tasks. It manages how computation is performed by its tasks on the compute nodes in a pool.

A job specifies the pool in which the work is to be run. You can create a new pool for each job, or use one pool for many jobs. You can create a pool for each job that is associated with a job schedule, or one pool for all jobs that are associated with a job schedule. For more information see Job and Tasks in Azure Batch.

Create a Job

Create a job that will contain and manage your tasks. Jobs are associated with a specific pool.

from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential

credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)

job_spec = models.BatchJobCreateOptions(
    id="my-job",
    pool_info=models.BatchPoolInfo(pool_id="my-pool")
)

client.create_job(job=job_spec)

Get job

The get_job method retrieves details about a specific job that has already been created.

my_job = client.get_job(job_id="my-job")

List job

The list_jobs method lists all jobs under a Batch account.

jobs = client.list_jobs()

Delete job

The begin_delete_job method is a Long Running Operation (LRO) that deletes a job asynchronously to avoid connection and load-balancer timeouts.

Synchronous approach - Wait for the operation to complete:

poller = client.begin_delete_job(job_id="my-job")
result = poller.result()
print("Job deleted successfully")

Asynchronous approach - Start the operation and check status later:

poller = client.begin_delete_job(job_id="my-job", polling_interval=5)

if poller.done():
    print("Job deletion completed")
else:
    print("Job deletion still in progress")
    poller.wait()

Job Schedule Operations

Job schedules enable you to create recurring jobs within the Batch service. A job schedule specifies when to run jobs and includes the specifications for the jobs to be run. You can specify the duration of the schedule (how long and when the schedule is in effect) and how frequently jobs are created during the scheduled period. For more information, see Scheduled Jobs in Azure Batch.

Create job schedule

The create_job_schedule method creates a new job schedule that automatically creates jobs based on the specified schedule.

import datetime

from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential

credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)

schedule_spec = models.BatchJobScheduleCreateOptions(
    id="my-job-schedule",
    schedule=models.BatchJobScheduleConfiguration(
        start_window=datetime.timedelta(hours=1),
        recurrence_interval=datetime.timedelta(days=1),
    ),
    job_specification=models.BatchJobSpecification(
        pool_info=models.BatchPoolInfo(pool_id="my-pool")
    )
)

client.create_job_schedule(job_schedule=schedule_spec)

Get job schedule

The get_job_schedule method retrieves details about a specific job schedule.

my_schedule = client.get_job_schedule(job_schedule_id="my-job-schedule")

List job schedule

The list_job_schedules method lists all job schedules under a Batch account.

schedules = client.list_job_schedules()

Replace job schedule

The replace_job_schedule method replaces the entire job schedule configuration with new values.

schedule_replace = models.BatchJobSchedule(
    schedule=models.BatchJobScheduleConfiguration(
        recurrence_interval=datetime.timedelta(hours=10)
    ),
    job_specification=models.BatchJobSpecification(
        pool_info=models.BatchPoolInfo(pool_id="my-pool")
    )
)

client.replace_job_schedule(
    job_schedule_id="my-job-schedule",
    job_schedule=schedule_replace
)

Update job schedule

The update_job_schedule method updates specific properties of an existing job schedule without replacing the entire configuration.

schedule_update = models.BatchJobScheduleUpdateOptions(
    schedule=models.BatchJobScheduleConfiguration(
        recurrence_interval=datetime.timedelta(hours=5)
    )
)

client.update_job_schedule(
    job_schedule_id="my-job-schedule",
    job_schedule=schedule_update
)

Get job task count

The get_job_task_counts method provides summary counts of tasks in different states for a specific job, including active, running, and completed tasks.

task_counts = client.get_job_task_counts(job_id="my-job")

print(f"Completed tasks: {task_counts.completed}")
print(f"Succeeded tasks: {task_counts.succeeded}")
print(f"Failed tasks: {task_counts.failed}")

Task Operations

A task is a unit of computation that is associated with a job. It runs on a node. Tasks are assigned to a node for execution, or are queued until a node becomes free. Put simply, a task runs one or more programs or scripts on a compute node to perform the work you need done. For more information, see Jobs and Tasks in Azure Batch.

Create a task

There are three ways that a task can be created using this package. This first example shows how to create a single task on a job using create_task with the parameter type BatchTaskCreateOptions.

from azure.batch import BatchClient, models
from azure.core.credentials import AzureNamedKeyCredential

credentials = AzureNamedKeyCredential(account_name, account_key)
client = BatchClient(endpoint=batch_account_endpoint, credential=credentials)

task_spec = models.BatchTaskCreateOptions(
    id="my-task",
    command_line='cmd /c "echo Hello World"'
)

client.create_task(job_id="my-job", task=task_spec)

This second example demonstrates creating multiple tasks in a group using BatchTaskGroup with the create_task_collection method. A BatchTaskGroup can contain up to 100 tasks.

task1 = models.BatchTaskCreateOptions(id="task1", command_line='cmd /c "echo hello world"')
task2 = models.BatchTaskCreateOptions(id="task2", command_line='cmd /c "echo hello world"')
task3 = models.BatchTaskCreateOptions(id="task3", command_line='cmd /c "echo hello world"')

task_group = models.BatchTaskGroup(task_values=[task1, task2, task3])
result = client.create_task_collection(job_id="my-job", task_collection=task_group)

Finally, you can use create_tasks to create any number of tasks. This method packages the list of BatchTaskCreateOptions tasks passed in and repeatedly calls create_task_collection with groups of tasks bundled into BatchTaskGroup objects. This utility method also allows you to select the number of parallel calls made to create_task_collection.

tasks_to_add = []
for i in range(1000):
    task = models.BatchTaskCreateOptions(
        id=f"task{i}",
        command_line='cmd /c "echo hello world"',
    )
    tasks_to_add.append(task)
result = client.create_tasks(job_id="my-job", task_collection=tasks_to_add)
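The bundling that create_tasks performs can be sketched with plain Python (a simplified illustration of the grouping rule, not the SDK's actual implementation):

```python
def chunk(items, size=100):
    """Split a list into consecutive groups of at most `size` items,
    mirroring the 100-task limit of a BatchTaskGroup."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 250 tasks would be submitted as three create_task_collection calls:
# two groups of 100 and one group of 50.
groups = chunk(list(range(250)))
```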

Get task

The get_task method retrieves details about a specific task.

my_task = client.get_task(job_id="my-job", task_id="my-task")

List tasks

The list_tasks method lists all tasks associated with a specific job.

tasks = client.list_tasks(job_id="my-job")

Delete task

The delete_task method deletes a task from a job.

client.delete_task(job_id="my-job", task_id="my-task")

Retrieve output file from task

In Azure Batch, each task has a working directory under which it can create files and directories. This working directory can be used for storing the program that is run by the task, the data that it processes, and the output of the processing it performs. All files and directories of a task are owned by the task user.

The Batch service exposes a portion of the file system on a node as the root directory. This root directory is located on the temporary storage drive of the VM, not directly on the OS drive. For more information, see Files and Directories in Azure Batch.
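Inside a running task, the Batch agent exposes these locations through environment variables such as AZ_BATCH_TASK_WORKING_DIR. A small sketch of writing an output file into the task working directory (the fallback to the current directory is only so the snippet runs outside a compute node):

```python
import os

# AZ_BATCH_TASK_WORKING_DIR is set by the Batch agent on a compute node;
# fall back to the current directory when running elsewhere.
working_dir = os.environ.get("AZ_BATCH_TASK_WORKING_DIR", ".")
output_path = os.path.join(working_dir, "output.txt")

with open(output_path, "w") as f:
    f.write("task output\n")
```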

List task files

List all files available in a task's directory using the list_task_files method:

all_files = client.list_task_files(job_id="my-job", task_id="my-task")
only_files = [f for f in all_files if not f.is_directory]

for file in only_files:
    print(f"File: {file.name}")

Node Operations

A node is an Azure virtual machine (VM) or cloud service VM that is dedicated to processing a portion of your application's workload. The size of a node determines the number of CPU cores, memory capacity, and local file system size that is allocated to the node. For more information, please see Nodes and Pools in Azure Batch.

Get node

The get_node method retrieves details about a specific compute node in a pool.

node = client.get_node(pool_id="my-pool", node_id="node1")

print(f"Node state: {node.state}")
print(f"Scheduling state: {node.scheduling_state}")
print(f"Node agent version: {node.node_agent_info.version}")

List nodes

The list_nodes method lists all compute nodes in a specific pool.

nodes = client.list_nodes(pool_id="my-pool")

for node in nodes:
    print(f"Node ID: {node.id}, State: {node.state}")

Reboot node

The begin_reboot_node method is a Long Running Operation (LRO) that reboots a compute node in a pool. You can specify the reboot kind to control how running tasks are handled during the reboot.

Synchronous approach - Wait for the reboot to complete:

poller = client.begin_reboot_node(
    pool_id="my-pool",
    node_id="node1",
    options=models.BatchNodeRebootOptions(
        node_reboot_kind=models.BatchNodeRebootKind.TERMINATE
    )
)
result = poller.result()
print("Node rebooted successfully")

Asynchronous approach - Start the reboot and check status later:

poller = client.begin_reboot_node(
    pool_id="my-pool",
    node_id="node1",
    options=models.BatchNodeRebootOptions(
        node_reboot_kind=models.BatchNodeRebootKind.REQUEUE
    ),
    polling_interval=5
)

if poller.done():
    print("Node reboot completed")
else:
    print("Node reboot still in progress")
    poller.wait()

Error Handling

We adopted the Azure Core exception framework, which provides a variety of exception types that map directly to HTTP status codes and common error scenarios. The base HttpResponseError is the foundation, with specialized exceptions like ClientAuthenticationError, ResourceNotFoundError, ResourceExistsError, and more providing specific error categorization. This system also provides direct access to HTTP status codes, response headers, and request information.

from azure.batch import BatchClient
from azure.core.exceptions import (
    HttpResponseError,
    ResourceNotFoundError,
)

try:
    client = BatchClient(endpoint=endpoint, credential=credentials)
    pools = client.list_pools()
except ResourceNotFoundError as not_found_error:
    print(f"Service could not find resource {not_found_error.status_code}: {not_found_error.error.message}")
    create_missing_resource()
except HttpResponseError as error:
    print(f"HTTP Status: {error.status_code}")
    print(f"Error Code: {error.error.code}")
    print(f"Message: {error.error.message}")

Usage

Note: Comprehensive code samples for the v15.x package are currently in development and will be available soon. In the meantime, the examples provided throughout this README demonstrate common operations for the latest version.

For code examples using the v14.x package, see the Batch samples repo on GitHub or see Batch on docs.microsoft.com.

Provide Feedback

If you encounter any bugs or have suggestions, please file an issue in the Issues section of the project.

Release History

15.1.0 (2026-03-06)

Other Changes

  • This is the GA release of the features introduced in the 15.0.0 and 15.1.0 beta versions, including LRO support, job-level FIFO scheduling, CMK support on pools, IPv6 support, metadata security protocol support, IP tag support, and confidential VM enhancements.

Breaking Changes

  • Renamed BatchNodeUserUpdateOptions to BatchNodeUserReplaceOptions.

  • Renamed OutputFileUploadConfig to OutputFileUploadConfiguration.

  • Removed Models:

    • Removed AuthenticationTokenSettings
  • Namespace changed to azure.batch.models._models:

    • BatchJobTerminateOptions
    • BatchNodeDeallocateOptions
    • BatchNodeRebootOptions
    • BatchNodeReimageOptions
  • Removed Enums:

    • Removed BatchAccessScope
  • Namespace changed to azure.batch.models._enums:

    • BatchNodeDeallocateOption
    • BatchNodeRebootKind
    • BatchNodeReimageOption
  • Renamed public methods:

    • list_sub_tasks -> list_subtasks
    • get_task_file -> download_task_file
    • get_node_file -> download_node_file
  • Renamed parameters across all operation methods:

    • timeout -> service_timeout
    • ocpdate -> ocp_date
    • starttime -> start_time
    • endtime -> end_time
    • concurrencies -> max_concurrency
  • Renamed properties in models:

    • e_tag -> etag in BatchJob, BatchJobSchedule, BatchPool, BatchTask, and BatchTaskCreateResult
    • values_property -> error_values in AutoScaleRunError, BatchError, and ResizeError
    • values_property -> result_values in CollectionResult
    • values_property -> task_values in BatchTaskGroup
    • avg_memory_gi_b -> avg_memory_gib, peak_memory_gi_b -> peak_memory_gib, avg_disk_gi_b -> avg_disk_gib, peak_disk_gi_b -> peak_disk_gib, disk_read_gi_b -> disk_read_gib, disk_write_gi_b -> disk_write_gib, network_read_gi_b -> network_read_gib, network_write_gi_b -> network_write_gib in BatchPoolResourceStatistics
  • Removed Properties:

    • Removed authentication_token_settings from BatchJobManagerTask, BatchStartTask, and BatchTask
    • Removed access from AuthenticationTokenSettings

15.1.0b3 (2026-02-05)

Other Changes

  • Minor parameter renaming: read_io_gi_b to read_io_gib, write_io_gi_b to write_io_gib, and v_tpm_enabled to vtpm_enabled.

15.1.0b2 (2025-11-20)

Features Added

  • Job level FIFO

    • Added BatchJobDefaultOrder types.
    • Extended BatchTaskSchedulingPolicy with a new jobDefaultOrder property to support job-level FIFO scheduling.
  • CMK support on Pools

    • Added DiskCustomerManagedKey and DiskEncryptionSetParameters for customer-managed key (CMK) support on pools.
    • Extended DiskEncryptionConfiguration with a new customerManagedKey property.
    • Extended ManagedDisk with a new diskEncryptionSet property.
    • Added BatchPoolIdentityReference for referencing managed identities in disk encryption scenarios.
  • IPv6 support on Pools

    • Added ipv6Address to BatchNode.
    • Added ipv6RemoteLoginIPAddress and ipv6RemoteLoginPort to BatchNodeRemoteLoginSettings.
  • Metadata Security Protocol Support on Pools

    • Added HostEndpointSettings and HostEndpointSettingsModeTypes.
    • Added ProxyAgentSettings.
    • Extended SecurityProfile with a new proxyAgentSettings property for metadata security protocol support.
  • IP Tag Support

    • Added IPFamily and IPTag types.
    • Extended BatchPublicIpAddressConfiguration with new ipFamilies and ipTags properties for IP tag support.

Breaking Changes

  • Removed all Certificate APIs

    • Removed create_certificate
    • Removed get_certificate
    • Removed list_certificates
    • Removed cancel_certificate_deletion
    • Removed begin_delete_certificate
  • Removed Models:

    • Removed BatchCertificate
    • Removed BatchCertificateDeleteError
    • Removed BatchCertificateFormat
    • Removed BatchCertificateReference
    • Removed BatchCertificateState
    • Removed BatchCertificateStoreLocation
    • Removed BatchCertificateVisibility
    • Removed BatchNodeCommunicationMode
  • Removed Properties:

    • Removed CertificateReferences from BatchNode
    • Removed ResourceTags and CertificateReferences from BatchPool
    • Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from BatchPoolCreateOptions
    • Removed CertificateReferences and TargetNodeCommunicationMode from BatchPoolReplaceOptions
    • Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from BatchPoolSpecifications
    • Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from BatchPoolUpdateOptions
    • Removed CertificateReferences, ResourceTags, and TargetNodeCommunicationMode from ComputeBatchModelFactory

15.1.0b1 (2025-10-01)

Features Added

  • Added Long Running Operation (LRO) support for the following operation methods:
    • delete_job -> begin_delete_job
    • disable_job -> begin_disable_job
    • enable_job -> begin_enable_job
    • delete_job_schedule -> begin_delete_job_schedule
    • delete_pool -> begin_delete_pool
    • delete_certificate -> begin_delete_certificate
    • deallocate_node -> begin_deallocate_node
    • reboot_node -> begin_reboot_node
    • reimage_node -> begin_reimage_node
    • remove_nodes -> begin_remove_nodes
    • resize_pool -> begin_resize_pool
    • start_node -> begin_start_node
    • stop_pool_resize -> begin_stop_pool_resize
    • terminate_job -> begin_terminate_job
    • terminate_job_schedule -> begin_terminate_job_schedule

Breaking Changes

  • Renamed the following models. These name changes include several models with the suffix Content being renamed to have the suffix Options.

    • AccessScope -> BatchAccessScope
    • AffinityInfo -> BatchAffinityInfo
    • BatchJobAction -> BatchJobActionKind
    • BatchJobCreateContent -> BatchJobCreateOptions
    • BatchJobDisableContent -> BatchJobDisableOptions
    • BatchJobScheduleCreateContent -> BatchJobScheduleCreateOptions
    • BatchJobScheduleUpdateContent -> BatchJobScheduleUpdateOptions
    • BatchJobTerminateContent -> BatchJobTerminateOptions
    • BatchJobUpdateContent -> BatchJobUpdateOptions
    • BatchNodeDeallocateContent -> BatchNodeDeallocateOptions
    • BatchNodeDisableSchedulingContent -> BatchNodeDisableSchedulingOptions
    • BatchNodeRebootContent -> BatchNodeRebootOptions
    • BatchNodeRebootOption -> BatchNodeRebootKind
    • BatchNodeReimageContent -> BatchNodeReimageOptions
    • BatchNodeRemoveContent -> BatchNodeRemoveOptions
    • BatchNodeUserCreateContent -> BatchNodeUserCreateOptions
    • BatchNodeUserUpdateContent -> BatchNodeUserUpdateOptions
    • BatchPoolCreateContent -> BatchPoolCreateOptions
    • BatchPoolEnableAutoScaleContent -> BatchPoolEnableAutoScaleOptions
    • BatchPoolEvaluateAutoScaleContent -> BatchPoolEvaluateAutoScaleOptions
    • BatchPoolReplaceContent -> BatchPoolReplaceOptions
    • BatchPoolResizeContent -> BatchPoolResizeOptions
    • BatchPoolUpdateContent -> BatchPoolUpdateOptions
    • BatchTaskCreateContent -> BatchTaskCreateOptions
    • ContainerConfiguration -> BatchContainerConfiguration
    • ContainerConfigurationUpdate -> BatchContainerConfigurationUpdate
    • DeleteBatchCertificateError -> BatchCertificateDeleteError
    • DiffDiskSettings -> BatchDiffDiskSettings
    • ErrorCategory -> BatchErrorSourceCategory
    • HttpHeader -> OutputFileUploadHeader
    • ImageReference -> BatchVmImageReference
    • OSDisk -> BatchOsDisk
    • OnAllBatchTasksComplete -> BatchAllTasksCompleteMode
    • OnBatchTaskFailure -> BatchAllTasksCompleteMode
    • PublicIpAddressConfiguration -> BatchPublicIpAddressConfiguration
    • UefiSettings -> BatchUefiSettings
    • UploadBatchServiceLogsContent -> UploadBatchServiceLogsOptions
    • VMDiskSecurityProfile -> BatchVMDiskSecurityProfile
  • Renamed parameters in the following operation methods:

    • begin_disable_job changed content parameter to disable_options
    • begin_deallocate_node changed parameters parameter to options
    • begin_remove_nodes changed content parameter to remove_options.
    • begin_resize_pool changed content parameter to resize_options.
    • begin_terminate_job changed parameters parameter to options.
    • begin_reboot_node changed parameters parameter to options.
    • begin_reimage_node changed parameters parameter to options.
    • disable_node_scheduling changed parameters parameter to options.
    • enable_pool_auto_scale changed content parameter to enable_auto_scale_options.
    • evaluate_pool_auto_scale changed content parameter to evaluate_auto_scale_options.
    • upload_node_logs changed content parameter to upload_options.
    • replace_node_user changed content parameter to update_options.

15.0.0b2 (2025-03-01)

Features Added

  • Force delete/terminate job or job schedule:

    • Added force parameter of type Boolean to delete_job, terminate_job, delete_job_schedule, and terminate_job_schedule
  • Support for compute node start/deallocate operations:

    • Added start_node, deallocate_node methods to BatchClient and AsyncBatchClient
  • Container task data mount isolation:

    • Added containerHostBatchBindMounts of type List<ContainerHostBatchBindMountEntry> to BatchTaskContainerSettings
  • Patch improvements for pool and job:

    • Added displayName, vmSize, taskSlotsPerNode, taskSchedulingPolicy, enableInterNodeCommunication, virtualMachineConfiguration, networkConfiguration, userAccounts, mountConfiguration, upgradePolicy, and resourceTags to BatchPoolUpdateContent
    • Added networkConfiguration to BatchJobUpdateContent
  • Confidential VM support:

    • Added confidentialVM to SecurityTypes.
    • Added securityProfile of type VMDiskSecurityProfile to ManagedDisk
  • Support for shared and community gallery images:

    • Added sharedGalleryImageId and communityGalleryImageId to ImageReference
  • Re-add support for BatchCertificate (temporary since this feature is deprecated):

    • Added create_certificate, list_certificates, cancel_certificate_deletion, delete_certificate, and get_certificate methods to BatchClient and AsyncBatchClient

Breaking Changes

  • Removed get_remote_desktop method from BatchClient. Use get_node_remote_login_settings instead to remotely login to a compute node
  • Removed CloudServiceConfiguration from pool models and operations. Use VirtualMachineConfiguration when creating pools
  • Removed ApplicationLicenses from pool models and operations

15.0.0b1 (2024-09-01)

Breaking Changes

  • Remove certificates
  • Remove render licenses
  • Remove CloudServiceConfiguration from pool models and operations. VirtualMachineConfiguration is supported for pool configurations moving forward.

14.2.0 (2024-02-01)

Features Added

  • Added UpgradePolicy to CloudPool definition for pool creation
    • Added AutomaticOSUpgradePolicy to include configuration parameters for automatic OS upgrades
    • Added RollingUpgradePolicy to include configuration parameters for rolling upgrades

14.1.0 (2023-11-01)

Features Added

  • Added ResourceTags support to Pool Creation so users are able to specify resource tags for a pool. This feature is currently only supported for pool creation but will be updatable in the future.

    • Added resourceTags property to PoolSpecification definition
    • Added resourceTags property to CloudPool definition
  • Added SecurityProfile support to Pool Creation. Trusted Launch provides advanced security for the guest OS by preventing boot kits and rootkits (such as unsigned drivers or kernel modifications) from being introduced into the boot chain.

    • Added serviceArtifactReference and securityProfile property to VirtualMachineConfiguration definition
  • Added ServiceArtifactReference and OSDisk support to Pool Creation

    • Added standardssd_lrs value to StorageAccountType enum
    • Added caching, managedDisk, diskSizeGB, and writeAcceleratorEnabled property to NodePlacementPolicyType definition
    • Added scaleSetVmResourceID property to VirtualMachineInfo definition

14.0.0 (2023-05-01)

Features Added

  • Added boolean property enableAcceleratedNetworking to NetworkConfiguration.
    • This property determines whether this pool should enable accelerated networking, with a default value of False.
    • Whether this feature can be enabled is also related to whether an operating system/VM instance is supported, which should align with AcceleratedNetworking Policy (AcceleratedNetworking Limitations and constraints).
  • Added boolean property enableAutomaticUpgrade to VMExtension.
    • This property determines whether the extension should be automatically upgraded by the platform if there is a newer version of the extension available.
  • Added a new property type to ContainerConfiguration. Possible values include: dockerCompatible and criCompatible.

Breaking Changes

  • Removed lifetime statistics API. This API is no longer supported.
    • Removed job.get_all_lifetime_statistics API.
    • Removed pool.get_all_lifetime_statistics API.

Other Changes

  • Deprecating CertificateOperations related methods.

13.0.0 (2022-11-08)

Features Added

  • Added new custom enum type NodeCommunicationMode.
    • This property determines how a pool communicates with the Batch service.
    • Possible values: Default, Classic, Simplified.
  • Added properties current_node_communication_mode and target_node_communication_mode of type NodeCommunicationMode to CloudPool.
  • Added property target_node_communication_mode of type NodeCommunicationMode to PoolSpecification, PoolAddParameter, PoolPatchParameter, and PoolUpdatePropertiesParameter.

12.0.0 (2022-02-01)

Features

  • Added property uploadHeaders to OutputFileBlobContainerDestination.
    • Allows users to set custom HTTP headers on resource file uploads.
    • Array of type HttpHeader (also being added).
  • Added boolean property allow_task_preemption to JobSpecification, CloudJob, JobAddParameter, JobPatchParameter, JobUpdateParameter
    • Marks tasks as preemptible by higher-priority tasks (requires a Comms-Enabled or Single Tenant pool).
  • Replaced comment (title, description, etc.) references of "low-priority" with "Spot/Low-Priority", to reflect new service behavior.
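A sketch of the new uploadHeaders shape on OutputFileBlobContainerDestination, written as a plain dict. The container URL and header values are hypothetical; each array entry follows the HttpHeader name/value shape described above:

```python
# Sketch: custom HTTP headers applied when task output files are
# uploaded to the destination container (12.0.0 feature).
blob_container_destination = {
    "containerUrl": "https://example.blob.core.windows.net/outputs",  # placeholder
    "uploadHeaders": [
        {"name": "x-ms-blob-content-type", "value": "text/plain"},
        {"name": "x-ms-meta-source", "value": "batch-task"},
    ],
}
```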

11.0.0 (2021-07-30)

Features

  • Added the ability to assign user-assigned managed identities to CloudPool. These identities will be made available on each node in the pool, and can be used to access various resources.
  • Added identity_reference property to the following models to support accessing resources via managed identity:
    • AzureBlobFileSystemConfiguration
    • OutputFileBlobContainerDestination
    • ContainerRegistry
    • ResourceFile
    • UploadBatchServiceLogsConfiguration
  • Added new compute_node_extension operations to BatchServiceClient for getting/listing VM extensions on a node
  • Added new extensions property to VirtualMachineConfiguration on CloudPool to specify virtual machine extensions for nodes
  • Added the ability to specify availability zones using a new property node_placement_configuration on VirtualMachineConfiguration
  • Added new os_disk property to VirtualMachineConfiguration, which contains settings for the operating system disk of the Virtual Machine.
    • The placement property on DiffDiskSettings specifies the ephemeral disk placement for operating system disks for all VMs in the pool. Setting it to "CacheDisk" will store the ephemeral OS disk on the VM cache.
  • Added max_parallel_tasks property on CloudJob to control the maximum allowed tasks per job (defaults to -1, meaning unlimited).
  • Added virtual_machine_info property on ComputeNode which contains information about the current state of the virtual machine, including the exact version of the marketplace image the VM is using.
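As an illustration of the identity_reference property above, here is a ResourceFile that authenticates to Blob Storage with a user-assigned managed identity instead of a SAS token, sketched as a plain dict. The URL and resource ID are made-up placeholders:

```python
# Sketch: ResourceFile using a managed identity (11.0.0 feature).
resource_file = {
    "http_url": "https://example.blob.core.windows.net/inputs/data.txt",
    "identity_reference": {
        "resource_id": (
            "/subscriptions/00000000-0000-0000-0000-000000000000"
            "/resourceGroups/example-rg/providers"
            "/Microsoft.ManagedIdentity/userAssignedIdentities/example-identity"
        ),
    },
}
```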

10.0.0 (2020-09-01)

Features

  • [Breaking] Replaced property maxTasksPerNode with taskSlotsPerNode on the pool. Using this property, tasks in a job can consume a dynamic number of slots, allowing for more fine-grained control over resource consumption.
  • [Breaking] Changed the response type of GetTaskCounts to return TaskCountsResult, which is a complex object containing the previous TaskCounts object and a new TaskSlotCounts object providing similar information in the context of slots being used.
  • Added property requiredSlots to the task, allowing the user to specify how many slots on a node the task should take up.
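The slot model can be illustrated with simple arithmetic: a node exposes taskSlotsPerNode slots and each task consumes requiredSlots of them. The helper below is a sketch of that rule, not SDK code:

```python
def max_concurrent_tasks(task_slots_per_node: int, required_slots: int) -> int:
    """How many copies of a task can run on one node at the same time."""
    if required_slots > task_slots_per_node:
        raise ValueError("task requires more slots than the node exposes")
    return task_slots_per_node // required_slots

# A node with 4 slots can run two 2-slot tasks at once, or one 3-slot
# task (leaving a single free slot for a 1-slot task).
```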

9.0.0 (2020-03-24)

Features

  • Added ability to encrypt ComputeNode disk drives using the new disk_encryption_configuration property of VirtualMachineConfiguration.
  • [Breaking] The virtual_machine_id property of ImageReference can now only refer to a Shared Image Gallery image.
  • [Breaking] Pools can now be provisioned without a public IP using the new public_ip_address_configuration property of NetworkConfiguration.
    • The public_ips property of NetworkConfiguration has moved in to public_ip_address_configuration as well. This property can only be specified if ip_provisioning_type is UserManaged.
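A sketch of the reshaped NetworkConfiguration described above, as a plain dict using the property names from this entry (public_ip_address_configuration, ip_provisioning_type, public_ips). The public IP resource ID is a made-up placeholder:

```python
# Sketch: 9.0.0 public IP provisioning. public_ips now lives inside
# public_ip_address_configuration and is only valid when
# ip_provisioning_type is UserManaged.
public_ip_address_configuration = {
    "ip_provisioning_type": "UserManaged",
    "public_ips": [
        "/subscriptions/00000000-0000-0000-0000-000000000000"
        "/resourceGroups/example-rg/providers"
        "/Microsoft.Network/publicIPAddresses/example-ip-1",
    ],
}

network_configuration = {
    "public_ip_address_configuration": public_ip_address_configuration,
}
```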

REST API version

This version of the Batch Python client library targets version 2020-03-01.11.0 of the Azure Batch REST API.

8.0.0 (2019-08-05)

  • Using REST API version 2019-08-01.10.0.
    • Added ability to specify a collection of public IPs on NetworkConfiguration via the new public_ips property. This guarantees that nodes in the pool will have an IP from the list of user-provided IPs.
    • Added ability to mount remote file-systems on each node of a pool via the mount_configuration property on CloudPool.
    • Shared Image Gallery images can now be specified on the virtual_machine_image_id property of ImageReference by referencing the image via its ARM ID.
    • Breaking When not specified, the default value for wait_for_success on StartTask is now True (was False).
    • Breaking When not specified, the default value for scope on AutoUserSpecification is now always Pool (was Task on Windows nodes, Pool on Linux nodes).

7.0.0 (2019-06-11)

  • Using REST API version 2019-06-01.9.0.
    • Breaking Replaced AccountOperations.list_node_agent_skus with AccountOperations.list_supported_images. list_supported_images contains all of the same information originally available in list_node_agent_skus but in a clearer format. New non-verified images are also now returned. Additional information about capabilities and batch_support_end_of_life is accessible on the ImageInformation object returned by list_supported_images.
    • Now support network security rules blocking network access to a CloudPool based on the source port of the traffic. This is done via the source_port_ranges property on network_security_group_rules.
    • When running a container, Batch now supports executing the task in the container working directory or in the Batch task working directory. This is controlled by the working_directory property on TaskContainerSettings.
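A sketch of a network security group rule using the new source_port_ranges filter, as a plain dict. Only source_port_ranges comes from the entry above; the other fields (priority, access, source_address_prefix) are assumptions about the surrounding rule shape:

```python
# Sketch: block traffic to pool nodes based on source port (7.0.0 feature).
nsg_rule = {
    "priority": 150,                      # assumed rule field
    "access": "deny",                     # assumed rule field
    "source_address_prefix": "*",         # assumed rule field
    "source_port_ranges": ["100-200", "4444"],  # new in 7.0.0
}
```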

6.0.1 (2019-02-26)

  • Fixed a bug in exception handling in the TaskOperations.add_collection method

6.0.0 (2018-12-14)

  • Using REST API version 2018-12-01.8.0.
    • Breaking Removed support for the upgrade_os API on CloudServiceConfiguration pools.
      • Removed PoolOperations.upgrade_os API.
      • Renamed target_os_version to os_version and removed current_os_version on CloudServiceConfiguration.
      • Removed upgrading state from PoolState enum.
    • Breaking Removed data_egress_gi_b and data_ingress_gi_b from PoolUsageMetrics. These properties are no longer supported.
    • Breaking ResourceFile improvements
      • Added the ability to specify an entire Azure Storage container in ResourceFile. There are now three supported modes for ResourceFile:
        • http_url creates a ResourceFile pointing to a single HTTP URL.
        • storage_container_url creates a ResourceFile pointing to the blobs under an Azure Blob Storage container.
        • auto_storage_container_name creates a ResourceFile pointing to the blobs under an Azure Blob Storage container in the Batch registered auto-storage account.
      • URLs provided to ResourceFile via the http_url property can now be any HTTP URL. Previously, these had to be an Azure Blob Storage URL.
      • The blobs under the Azure Blob Storage container can be filtered by blob_prefix property.
    • Breaking Removed os_disk property from VirtualMachineConfiguration. This property is no longer supported.
    • Pools which set the dynamic_vnet_assignment_scope on NetworkConfiguration to be DynamicVNetAssignmentScope.job can now dynamically assign a Virtual Network to each node the job's tasks run on. The specific Virtual Network to join the nodes to is specified in the new network_configuration property on CloudJob and JobSpecification.
      • Note: This feature is in public preview. It is disabled for all Batch accounts except for those which have contacted us and requested to be in the pilot.
    • The maximum lifetime of a task is now 180 days (previously it was 7).
    • Added support on Windows pools for creating users with a specific login mode (either batch or interactive) via WindowsUserConfiguration.login_mode.
    • The default task retention time for all tasks is now 7 days (previously it was infinite).
  • Breaking Renamed the base_url parameter to batch_url on BatchServiceClient class, and it is required now.
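The three ResourceFile modes above are mutually exclusive. A small validator sketches that rule; it is a hypothetical helper for illustration, not part of the SDK:

```python
def resource_file_mode(http_url=None, storage_container_url=None,
                       auto_storage_container_name=None):
    """Return which of the three ResourceFile modes is in use,
    enforcing that exactly one of them is specified."""
    modes = {
        "http_url": http_url,
        "storage_container_url": storage_container_url,
        "auto_storage_container_name": auto_storage_container_name,
    }
    chosen = [name for name, value in modes.items() if value is not None]
    if len(chosen) != 1:
        raise ValueError("exactly one ResourceFile mode must be set")
    return chosen[0]
```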

5.1.1 (2018-10-16)

Bugfixes

  • Fix authentication class to allow HTTP session to be re-used

Note

  • azure-nspkg is not installed anymore on Python 3 (PEP420-based namespace package)

5.1.0 (2018-08-28)

  • Updated operation TaskOperations.add_collection with the following added functionality:
    • Retries server-side errors.
    • Automatically chunks lists of more than 100 tasks into multiple requests.
    • If tasks are too large to be submitted in chunks of 100, reduces the number of tasks per request.
    • Added a parameter to specify the number of threads to use when submitting tasks.
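The automatic chunking described above can be sketched in a few lines; this mimics the 100-task batching behavior but is not the SDK's implementation:

```python
def chunk_tasks(tasks, chunk_size=100):
    """Split a flat task list into request-sized chunks, mirroring the
    add_collection chunking described above (sketch only)."""
    return [tasks[i:i + chunk_size] for i in range(0, len(tasks), chunk_size)]

# 250 tasks would be submitted as three requests: 100 + 100 + 50.
```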

5.0.0 (2018-08-24)

  • Using REST API version 2018-08-01.7.0.
    • Added node_agent_info in ComputeNode to return the node agent information
    • Breaking Removed the validation_status property from TaskCounts.
    • Breaking The default caching type for DataDisk and OSDisk is now read_write instead of none.
  • BatchServiceClient can be used as a context manager to keep the underlying HTTP session open for performance.
  • Breaking Model signatures now use keyword-argument-only syntax. Each positional argument must be rewritten as a keyword argument.
  • Breaking The following operations signatures are changed:
    • Operation PoolOperations.enable_auto_scale
    • Operation TaskOperations.update
    • Operation ComputeNodeOperations.reimage
    • Operation ComputeNodeOperations.disable_scheduling
    • Operation ComputeNodeOperations.reboot
    • Operation JobOperations.terminate
  • Enum types now use the "str" mixin (class AzureEnum(str, Enum)) to improve the behavior when unrecognized enum values are encountered.
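The str mixin pattern named above can be demonstrated with a sample enum (the member names and values here are hypothetical, only the class AzureEnum(str, Enum) pattern comes from the entry):

```python
from enum import Enum

class TaskState(str, Enum):
    """Sample enum using the AzureEnum (str, Enum) mixin pattern:
    members behave as plain strings, so comparisons against raw string
    values returned by the service keep working."""
    ACTIVE = "active"
    COMPLETED = "completed"

state = TaskState.ACTIVE
```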

4.1.3 (2018-04-24)

  • Update some APIs' comments
  • New property leaving_pool in node_counts type.

4.1.2 (2018-04-23)

Bugfixes

  • Compatibility of the sdist with wheel 0.31.0
  • Compatibility with msrestazure 0.4.28

4.1.1 (2018-03-26)

  • Fix regression on method enable_auto_scale.

4.1.0 (2018-03-07)

  • Using REST API version 2018-03-01.6.1.
  • Added the ability to query pool node counts by state, via the new list_pool_node_counts method.
  • Added the ability to upload Azure Batch node agent logs from a particular node, via the upload_batch_service_logs method.
    • This is intended for use in debugging by Microsoft support when there are problems on a node.

4.0.0 (2017-09-25)

  • Using REST API version 2017-09-01.6.0.
  • Added the ability to get a discount on Windows VM pricing if you have on-premises licenses for the OS SKUs you are deploying, via license_type on VirtualMachineConfiguration.
  • Added support for attaching empty data drives to VirtualMachineConfiguration based pools, via the new data_disks attribute on VirtualMachineConfiguration.
  • Breaking Custom images must now be deployed using a reference to an ARM Image, instead of pointing to .vhd files in blobs directly.
    • The new virtual_machine_image_id property on ImageReference contains the reference to the ARM Image, and OSDisk.image_uris no longer exists.
    • Because of this, image_reference is now a required attribute of VirtualMachineConfiguration.
  • Breaking Multi-instance tasks (created using MultiInstanceSettings) must now specify a coordination_command_line, and number_of_instances is now optional and defaults to 1.
  • Added support for tasks run using Docker containers. To run a task using a Docker container you must specify a container_configuration on the VirtualMachineConfiguration for a pool, and then add container_settings on the Task.
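The two pieces needed for a containerized task, per the entry above, are a pool-level container_configuration and a task-level container_settings. The dict sketch below uses those two names from the changelog; the nested field names and image name are assumptions for illustration:

```python
# Sketch: running a task in a Docker container (4.0.0 feature).
container_configuration = {  # set on the pool's VirtualMachineConfiguration
    "container_image_names": ["example.azurecr.io/worker:latest"],  # placeholder image
}

container_settings = {  # set on the task
    "image_name": "example.azurecr.io/worker:latest",  # placeholder image
    "container_run_options": "--rm",                   # assumed field
}
```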

3.1.0 (2017-07-24)

  • Added a new operation job.get_task_counts to retrieve the number of tasks in each state.
  • Added support for inbound endpoint configuration on a pool - there is a new pool_endpoint_configuration attribute on NetworkConfiguration. This property is only supported on pools that use virtual_machine_configuration.
  • A ComputeNode now also has an endpoint_configuration attribute with the details of the applied endpoint configuration for that node.

3.0.0 (2017-05-10)

  • Added support for the new low-priority node type; AddPoolParameter and PoolSpecification now have an additional property target_low_priority_nodes.
  • target_dedicated and current_dedicated on CloudPool, AddPoolParameter and PoolSpecification have been renamed to target_dedicated_nodes and current_dedicated_nodes.
  • resize_error on CloudPool is now a collection called resize_errors.
  • Added a new is_dedicated property on ComputeNode, which is false for low-priority nodes.
  • Added a new allow_low_priority_node property to JobManagerTask, which if true allows the JobManagerTask to run on a low-priority compute node.
  • PoolResizeParameter now takes two optional parameters, target_dedicated_nodes and target_low_priority_nodes, instead of one required parameter target_dedicated. At least one of these two parameters must be specified.
  • Added support for uploading task output files to persistent storage, via the OutputFiles property on CloudTask and JobManagerTask.
  • Added support for specifying actions to take based on a task's output file upload status, via the file_upload_error property on ExitConditions.
  • Added support for determining if a task was a success or a failure via the new result property on all task execution information objects.
  • Renamed scheduling_error on all task execution information objects to failure_information. TaskFailureInformation replaces TaskSchedulingError and is returned any time there is a task failure. This includes all previous scheduling error cases, as well as nonzero task exit codes, and file upload failures from the new output files feature.
  • Renamed SchedulingErrorCategory enum to ErrorCategory.
  • Renamed scheduling_error on ExitConditions to pre_processing_error to clarify when in the task life-cycle the error occurred.
  • Added support for provisioning application licenses to your pool, via a new application_licenses property on PoolAddParameter, CloudPool and PoolSpecification. Please note that this feature is in gated public preview, and you must request access to it via a support ticket.
  • The ssh_private_key attribute of a UserAccount object has been replaced with an expanded LinuxUserConfiguration object with additional settings for a user ID and group ID of the user account.
  • Removed the unmapped enum value from AddTaskStatus, CertificateFormat, CertificateVisibility, CertStoreLocation, ComputeNodeFillType, OSType, and PoolLifetimeOption, as it was never used.
  • Improved and clarified documentation.
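The new PoolResizeParameter rule above (both targets optional, at least one required) can be sketched as a small validating constructor; this is a hypothetical helper, not SDK code:

```python
def make_resize_parameter(target_dedicated_nodes=None,
                          target_low_priority_nodes=None):
    """Sketch of the 3.0.0 resize rule: both node targets are optional,
    but at least one of them must be specified."""
    if target_dedicated_nodes is None and target_low_priority_nodes is None:
        raise ValueError("specify target_dedicated_nodes and/or "
                         "target_low_priority_nodes")
    return {
        "target_dedicated_nodes": target_dedicated_nodes,
        "target_low_priority_nodes": target_low_priority_nodes,
    }
```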

2.0.1 (2017-04-19)

  • This wheel package is now built with the azure wheel extension

2.0.0 (2017-02-23)

  • AAD token authentication now supported.
  • Some operation names have changed (along with their associated parameter model classes):
    • pool.list_pool_usage_metrics -> pool.list_usage_metrics
    • pool.get_all_pools_lifetime_statistics -> pool.get_all_lifetime_statistics
    • job.get_all_jobs_lifetime_statistics -> job.get_all_lifetime_statistics
    • file.get_node_file_properties_from_task -> file.get_properties_from_task
    • file.get_node_file_properties_from_compute_node -> file.get_properties_from_compute_node
  • The attribute 'file_name' in relation to file operations has been renamed to 'file_path'.
  • Change in naming convention for enum values to use underscores: e.g. StartTaskState.waitingforstarttask -> StartTaskState.waiting_for_start_task.
  • Support for running tasks under a predefined or automatic user account. This includes tasks, job manager tasks, job preparation and release tasks and pool start tasks. This feature replaces the previous 'run_elevated' option on a task.
  • Tasks now have an optional scoped authentication token (only applies to tasks and job manager tasks).
  • Support for creating pools with a list of user accounts.
  • Support for creating pools using a custom VM image (only supported on accounts created with a "User Subscription" pool allocation mode).

1.1.0 (2016-09-15)

  • Added support for task reactivation

1.0.0 (2016-08-09)

  • Added support for joining a CloudPool to a virtual network using the network_configuration property.
  • Added support for application package references on CloudTask and JobManagerTask.
  • Added support for automatically terminating jobs when all tasks complete or when a task fails, via the on_all_tasks_complete property and the CloudTask exit_conditions property.

0.30.0rc5

  • Initial Release

Download files


Source Distribution

azure_batch-15.1.0.tar.gz (298.1 kB view details)

Uploaded Source

Built Distribution


azure_batch-15.1.0-py3-none-any.whl (260.9 kB view details)

Uploaded Python 3

File details

Details for the file azure_batch-15.1.0.tar.gz.

File metadata

  • Download URL: azure_batch-15.1.0.tar.gz
  • Size: 298.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: RestSharp/106.13.0.0

File hashes

Hashes for azure_batch-15.1.0.tar.gz
Algorithm Hash digest
SHA256 2f684d176b457b7213535e145180a914ed6844b2d1066fcc22ac88c3ac9e3fa5
MD5 331ad001e0a3987a9cda781c8c6b0c83
BLAKE2b-256 e9ec9c38379dea2a4817a5c5b44282eff56476326cefa09da9e804809b397656


File details

Details for the file azure_batch-15.1.0-py3-none-any.whl.

File metadata

  • Download URL: azure_batch-15.1.0-py3-none-any.whl
  • Size: 260.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: RestSharp/106.13.0.0

File hashes

Hashes for azure_batch-15.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 6001232ba67a4051f4f1b35d434232757b78baa14e4964ec3bfb91afa1d10ed4
MD5 c798bbfa8c52ecbde9edc81b05ee4379
BLAKE2b-256 9b35358a421ebbb0d6fa929333dd71b94922d34445aa02d9d2fa7c0a277dcf53

