
Azure Storage Connector for PyTorch (azstoragetorch) (Preview)

The Azure Storage Connector for PyTorch (azstoragetorch) is a library that provides seamless, performance-optimized integrations between Azure Storage and PyTorch. Use this library to easily access and store data in Azure Storage while using PyTorch. The library currently offers:

  • A file-like object, BlobIO, for saving and loading PyTorch models (checkpoints) directly to and from Azure Blob Storage with torch.save() and torch.load().
  • Map-style (BlobDataset) and iterable-style (IterableBlobDataset) PyTorch dataset implementations for loading data samples from Azure Blob Storage.

Documentation

For detailed documentation on azstoragetorch, we recommend visiting its official documentation. It includes both a user guide and API references for the project. Content in this README is scoped to a high-level overview of the project and its GitHub repository policies.

Backwards compatibility

While the project is at major version 0 (i.e., its version is 0.x.y), public interfaces are not stable.

Backwards incompatible changes may be introduced between minor version bumps (e.g., upgrading from 0.1.0 to 0.2.0). If backwards compatibility is needed while using the library, we recommend pinning to a minor version of the library (e.g., azstoragetorch~=0.1.0).
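For example, pinning in a requirements.txt file keeps pip within the 0.1.x patch series:

```
# requirements.txt
# "~=0.1.0" permits patch upgrades (0.1.1, 0.1.2, ...) but not 0.2.0,
# which may contain backwards incompatible changes.
azstoragetorch~=0.1.0
```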

Getting started

Prerequisites

Installation

Install the library with pip:

pip install azstoragetorch

Configuration

azstoragetorch should work without any explicit credential configuration.

azstoragetorch interfaces default to DefaultAzureCredential, which automatically retrieves Microsoft Entra ID tokens based on your current environment. For more information on using credentials with azstoragetorch, see the user guide.
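For example, one of the mechanisms DefaultAzureCredential tries automatically is service principal authentication via environment variables. A sketch using the standard azure-identity variable names (replace the placeholders with your own values):

```shell
# Service principal credentials picked up by DefaultAzureCredential's
# environment credential. These are the standard azure-identity
# variable names; the placeholder values are illustrative only.
export AZURE_TENANT_ID="<tenant-id>"
export AZURE_CLIENT_ID="<client-id>"
export AZURE_CLIENT_SECRET="<client-secret>"
```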

Features

This section highlights core features of azstoragetorch. For more details, see the user guide.

Saving and loading PyTorch models (Checkpointing)

PyTorch supports saving and loading trained models (i.e., checkpointing). The core PyTorch interfaces for saving and loading models are torch.save() and torch.load() respectively. Both of these functions accept a file-like object to be written to or read from.
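"File-like object" here means any object implementing the binary file protocol (read(), write(), seek()). A quick stdlib-only illustration of the same round-trip pattern, with pickle and an in-memory buffer standing in for torch and a real storage target:

```python
import io
import pickle

# A model state dict stands in for real trained weights.
state = {"layer.weight": [0.1, 0.2, 0.3]}

# Write to a binary file-like object, analogous to torch.save(state, f).
buffer = io.BytesIO()
pickle.dump(state, buffer)

# Rewind and read back, analogous to torch.load(f).
buffer.seek(0)
restored = pickle.load(buffer)
assert restored == state
```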

azstoragetorch offers the azstoragetorch.io.BlobIO file-like object class to save and load models directly to and from Azure Blob Storage when using torch.save() and torch.load():

import torch
import torchvision.models  # Install separately: ``pip install torchvision``
from azstoragetorch.io import BlobIO

# Update URL with your own Azure Storage account and container name
CONTAINER_URL = "https://<my-storage-account-name>.blob.core.windows.net/<my-container-name>"

# Model to save. Replace with your own model.
model = torchvision.models.resnet18(weights="DEFAULT")

# Save trained model to Azure Blob Storage. This saves the model weights
# to a blob named "model_weights.pth" in the container specified by CONTAINER_URL.
with BlobIO(f"{CONTAINER_URL}/model_weights.pth", "wb") as f:
    torch.save(model.state_dict(), f)

# Load trained model from Azure Blob Storage. This loads the model weights
# from the blob named "model_weights.pth" in the container specified by CONTAINER_URL.
with BlobIO(f"{CONTAINER_URL}/model_weights.pth", "rb") as f:
    model.load_state_dict(torch.load(f))

PyTorch Datasets

PyTorch offers the Dataset and DataLoader primitives for loading data samples. azstoragetorch provides implementations of both PyTorch dataset types, map-style and iterable-style, for loading data samples from Azure Blob Storage.

Data samples returned from both datasets map one-to-one to blobs in Azure Blob Storage. When instantiating these dataset classes, use one of their class methods:

  • from_container_url() - Instantiate a dataset by listing blobs in an Azure Storage container.
  • from_blob_urls() - Instantiate a dataset from provided blob URLs.

from azstoragetorch.datasets import BlobDataset, IterableBlobDataset

# Update URL with your own Azure Storage account and container name
CONTAINER_URL = "https://<my-storage-account-name>.blob.core.windows.net/<my-container-name>"

# Create an iterable-style dataset by listing blobs in the container specified by CONTAINER_URL.
dataset = IterableBlobDataset.from_container_url(CONTAINER_URL)

# Print the first blob in the dataset. Default output is a dictionary with
# the blob URL and the blob data. Use `transform` keyword argument when
# creating dataset to customize output format.
print(next(iter(dataset)))

# List of blob URLs to create dataset from. Update with your own blob names.
blob_urls = [
    f"{CONTAINER_URL}/<blob-name-1>",
    f"{CONTAINER_URL}/<blob-name-2>",
    f"{CONTAINER_URL}/<blob-name-3>",
]

# Create a map-style dataset from the list of blob URLs
blob_list_dataset = BlobDataset.from_blob_urls(blob_urls)

print(blob_list_dataset[0])  # Print the first blob in the dataset
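The `transform` keyword argument mentioned in the comments above lets you customize the sample each dataset yields. A minimal sketch of such a callable; this assumes only that the object passed to the transform exposes a `url` attribute (the exact blob interface is defined by azstoragetorch, so treat the signature here as illustrative):

```python
# Hypothetical transform: return only the blob URL instead of the
# default dictionary of blob URL and blob data. The argument is the
# blob object azstoragetorch passes in; only its `url` attribute is
# relied on here.
def url_only(blob):
    return {"url": blob.url}

# Illustrative usage (assuming the dataset factory methods accept a
# `transform` keyword, per the comments in the example above):
# dataset = BlobDataset.from_blob_urls(blob_urls, transform=url_only)
```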

Once instantiated, azstoragetorch datasets can be provided directly to a PyTorch DataLoader for loading samples:

from torch.utils.data import DataLoader

# Create a DataLoader to load data samples from the dataset in batches of 32
dataloader = DataLoader(dataset, batch_size=32)

for batch in dataloader:
    print(batch["url"])  # Prints blob URLs for each 32 sample batch

Additional resources

For more information on using the Azure Storage Connector for PyTorch, see the official documentation and the project's GitHub repository.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
