Azure Storage File Share client library for Python
Azure File Share storage offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol. Azure file shares can be mounted concurrently by cloud or on-premises deployments of Windows, Linux, and macOS. Additionally, Azure file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.
Azure file shares can be used to:
- Replace or supplement on-premises file servers
- "Lift and shift" applications
- Simplify cloud development with shared application settings, diagnostic share, and Dev/Test/Debug tools
Source code | Package (PyPI) | API reference documentation | Product documentation | Samples
Getting started
Prerequisites
- Python 2.7 or 3.5+ is required to use this package.
- You must have an Azure subscription and an Azure storage account to use this package.
Install the package
Install the Azure Storage File Share client library for Python with pip:
pip install azure-storage-file-share
Create a storage account
If you wish to create a new storage account, you can use the Azure Portal, Azure PowerShell, or Azure CLI:
# Create a new resource group to hold the storage account -
# if using an existing resource group, skip this step
az group create --name my-resource-group --location westus2
# Create the storage account
az storage account create -n my-storage-account-name -g my-resource-group
Create the client
The Azure Storage File Share client library for Python allows you to interact with four types of resources: the storage account itself, file shares, directories, and files. Interaction with these resources starts with an instance of a client. To create a client object, you will need the storage account's file service URL and a credential that allows you to access the storage account:
from azure.storage.fileshare import ShareServiceClient
service = ShareServiceClient(account_url="https://<my-storage-account-name>.file.core.windows.net/", credential=credential)
Looking up the account URL
You can find the storage account's file service URL using the Azure Portal, Azure PowerShell, or Azure CLI:
# Get the file service URL for the storage account
az storage account show -n my-storage-account-name -g my-resource-group --query "primaryEndpoints.file"
Types of credentials
The credential parameter may be provided in a number of different forms, depending on the type of authorization you wish to use:

- To use a shared access signature (SAS) token, provide the token as a string. If your account URL includes the SAS token, omit the credential parameter. You can generate a SAS token from the Azure Portal under "Shared access signature" or use one of the generate_sas() functions to create a SAS token for the storage account, share, or file:

      from datetime import datetime, timedelta
      from azure.storage.fileshare import ShareServiceClient, generate_account_sas, ResourceTypes, AccountSasPermissions

      sas_token = generate_account_sas(
          account_name="<storage-account-name>",
          account_key="<account-access-key>",
          resource_types=ResourceTypes(service=True),
          permission=AccountSasPermissions(read=True),
          expiry=datetime.utcnow() + timedelta(hours=1)
      )

      share_service_client = ShareServiceClient(account_url="https://<my_account_name>.file.core.windows.net", credential=sas_token)
- To use a storage account shared key (aka account key or access key), provide the key as a string. This can be found in the Azure Portal under the "Access Keys" section or by running the following Azure CLI command:

      az storage account keys list -g MyResourceGroup -n MyStorageAccount

  Use the key as the credential parameter to authenticate the client:

      from azure.storage.fileshare import ShareServiceClient
      service = ShareServiceClient(account_url="https://<my_account_name>.file.core.windows.net", credential="<account_access_key>")
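A SAS token is an opaque URL query string carrying the grant's scope, permissions, and expiry. The sketch below is stdlib-only and uses made-up field values (not a real, usable token) to show how such a token can be inspected:

```python
from urllib.parse import parse_qs

# Illustrative SAS token; the values are made up and not a usable credential.
sas_token = "sv=2019-07-07&ss=f&srt=s&sp=r&se=2020-01-01T00:00:00Z&sig=REDACTED"

# parse_qs maps each field to a list of values; flatten to single values.
fields = {key: values[0] for key, values in parse_qs(sas_token).items()}

permissions = fields["sp"]  # "r": this token grants read-only access
expiry = fields["se"]       # the service rejects the token after this timestamp
```

Because everything the grant allows is encoded in these fields, a SAS token can be handed to a client (or appended to the account URL) without sharing the account key itself.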
Creating the client from a connection string
Depending on your use case and authorization method, you may prefer to initialize a client instance with a storage
connection string instead of providing the account URL and credential separately. To do this, pass the storage
connection string to the client's from_connection_string
class method:
from azure.storage.fileshare import ShareServiceClient
connection_string = "DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=xxxx;EndpointSuffix=core.windows.net"
service = ShareServiceClient.from_connection_string(conn_str=connection_string)
The connection string to your storage account can be found in the Azure Portal under the "Access Keys" section or by running the following CLI command:
az storage account show-connection-string -g MyResourceGroup -n MyStorageAccount
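A connection string is simply a semicolon-delimited list of key=value settings. This stdlib-only sketch shows the kind of information from_connection_string extracts; the parsing logic here is illustrative, not the SDK's actual implementation:

```python
# Same placeholder connection string as in the example above.
conn_str = "DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=xxxx;EndpointSuffix=core.windows.net"

# Split into individual key=value settings.
settings = dict(part.split("=", 1) for part in conn_str.split(";") if part)

# The file service endpoint is derived from the account name and suffix:
file_endpoint = "{}://{}.file.{}".format(
    settings["DefaultEndpointsProtocol"],
    settings["AccountName"],
    settings["EndpointSuffix"],
)
```

This is why the connection string replaces both the account_url and credential parameters: it carries the endpoint components and the account key together.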
Key concepts
The following components make up the Azure File Share Service:
- The storage account itself
- A file share within the storage account
- An optional hierarchy of directories within the file share
- A file within the file share, which may be up to 1 TiB in size
The Azure Storage File Share client library for Python allows you to interact with each of these components through the use of a dedicated client object.
Clients
Four different clients are provided to interact with the various components of the File Share Service:
- ShareServiceClient - this client represents interaction with the Azure storage account itself, and allows you to acquire preconfigured client instances to access the file shares within. It provides operations to retrieve and configure the service properties as well as list, create, and delete shares within the account. To perform operations on a specific share, retrieve a client using the get_share_client method.
- ShareClient - this client represents interaction with a specific file share (which need not exist yet), and allows you to acquire preconfigured client instances to access the directories and files within. It provides operations to create, delete, configure, or create snapshots of a share and includes operations to create and enumerate the contents of directories within it. To perform operations on a specific directory or file, retrieve a client using the get_directory_client or get_file_client methods.
- ShareDirectoryClient - this client represents interaction with a specific directory (which need not exist yet). It provides operations to create, delete, or enumerate the contents of an immediate or nested subdirectory, and includes operations to create and delete files within it. For operations relating to a specific subdirectory or file, a client for that entity can also be retrieved using the get_subdirectory_client and get_file_client functions.
- ShareFileClient - this client represents interaction with a specific file (which need not exist yet). It provides operations to upload, download, create, delete, and copy a file.
For details on path naming restrictions, see Naming and Referencing Shares, Directories, Files, and Metadata.
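The hierarchy above can be sketched as a miniature model: each level hands out a narrower, preconfigured client for the resource below it. The class names mirror the SDK, but the logic here is purely illustrative and the URLs are placeholders:

```python
# Hypothetical miniature of the client hierarchy (illustrative only).
class FileClient:
    def __init__(self, url):
        self.url = url

class DirectoryClient:
    def __init__(self, url):
        self.url = url

    def get_file_client(self, file_name):
        # Narrow scope from a directory to a single file.
        return FileClient(self.url + "/" + file_name)

class ShareClient:
    def __init__(self, url):
        self.url = url

    def get_directory_client(self, directory_path):
        # Narrow scope from a share to a directory within it.
        return DirectoryClient(self.url + "/" + directory_path)

class ShareServiceClient:
    def __init__(self, account_url):
        self.url = account_url.rstrip("/")

    def get_share_client(self, share_name):
        # Narrow scope from the account to one of its shares.
        return ShareClient(self.url + "/" + share_name)

# Navigate from the account down to a single file:
service = ShareServiceClient("https://myaccount.file.core.windows.net/")
file_client = (service.get_share_client("my_share")
                      .get_directory_client("docs")
                      .get_file_client("report.txt"))
```

In the real SDK each level also carries the credential and pipeline configuration down to the clients it creates, so you configure authentication once at whichever level you start from.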
Examples
The following sections provide several code snippets covering some of the most common Storage File Share tasks, including:
Creating a file share
Create a file share to store your files
from azure.storage.fileshare import ShareClient
share = ShareClient.from_connection_string(conn_str="<connection_string>", share_name="my_share")
share.create_share()
Use the async client to create a file share
from azure.storage.fileshare.aio import ShareClient
share = ShareClient.from_connection_string(conn_str="<connection_string>", share_name="my_share")
await share.create_share()
Uploading a file
Upload a file to the share
from azure.storage.fileshare import ShareFileClient
file_client = ShareFileClient.from_connection_string(conn_str="<connection_string>", share_name="my_share", file_path="my_file")
with open("./SampleSource.txt", "rb") as source_file:
file_client.upload_file(source_file)
Upload a file asynchronously
from azure.storage.fileshare.aio import ShareFileClient
file_client = ShareFileClient.from_connection_string(conn_str="<connection_string>", share_name="my_share", file_path="my_file")
with open("./SampleSource.txt", "rb") as source_file:
await file_client.upload_file(source_file)
Downloading a file
Download a file from the share
from azure.storage.fileshare import ShareFileClient
file_client = ShareFileClient.from_connection_string(conn_str="<connection_string>", share_name="my_share", file_path="my_file")
with open("DEST_FILE", "wb") as file_handle:
data = file_client.download_file()
data.readinto(file_handle)
Download a file asynchronously
from azure.storage.fileshare.aio import ShareFileClient
file_client = ShareFileClient.from_connection_string(conn_str="<connection_string>", share_name="my_share", file_path="my_file")
with open("DEST_FILE", "wb") as file_handle:
data = await file_client.download_file()
await data.readinto(file_handle)
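Note that download_file returns a streaming downloader rather than raw bytes. The stand-in below (FakeDownloader is hypothetical, stdlib-only) illustrates the readinto contract used above, plus the readall convenience:

```python
import io

class FakeDownloader:
    """Hypothetical stand-in for the SDK's streaming downloader object."""

    def __init__(self, chunks):
        self._chunks = list(chunks)

    def readall(self):
        # Read the complete download stream, returning bytes.
        return b"".join(self._chunks)

    def readinto(self, stream):
        # Write the complete stream into a writable stream;
        # return the number of bytes written.
        return sum(stream.write(chunk) for chunk in self._chunks)

downloader = FakeDownloader([b"hello ", b"world"])
buf = io.BytesIO()
written = downloader.readinto(buf)
```

Streaming into an open file handle with readinto, as in the examples above, avoids holding the whole file in memory, which matters for large downloads.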
Listing contents of a directory
List all directories and files under a parent directory
from azure.storage.fileshare import ShareDirectoryClient
parent_dir = ShareDirectoryClient.from_connection_string(conn_str="<connection_string>", share_name="my_share", directory_path="parent_dir")
my_list = list(parent_dir.list_directories_and_files())
print(my_list)
List contents of a directory asynchronously
from azure.storage.fileshare.aio import ShareDirectoryClient
parent_dir = ShareDirectoryClient.from_connection_string(conn_str="<connection_string>", share_name="my_share", directory_path="parent_dir")
my_files = []
async for item in parent_dir.list_directories_and_files():
my_files.append(item)
print(my_files)
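Each item yielded by list_directories_and_files is dict-like. Assuming the name and is_directory keys, a sketch of splitting a listing into directories and files (the sample items below are made up):

```python
# Made-up listing results in the dict-like shape described above.
items = [
    {"name": "reports", "is_directory": True},
    {"name": "notes.txt", "is_directory": False, "size": 42},
    {"name": "data.csv", "is_directory": False, "size": 1024},
]

# Split the flat listing into subdirectories and files.
dirs = [item["name"] for item in items if item["is_directory"]]
files = [item["name"] for item in items if not item["is_directory"]]
```

Listing is not recursive: to walk a tree, get a ShareDirectoryClient for each entry in dirs and list it in turn.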
Optional Configuration
Optional keyword arguments that can be passed in at the client and per-operation level.
Retry Policy configuration
Use the following keyword arguments when instantiating a client to configure the retry policy:
- retry_total (int): Total number of retries to allow. Takes precedence over other counts. Pass in retry_total=0 if you do not want to retry on requests. Defaults to 10.
- retry_connect (int): How many connection-related errors to retry on. Defaults to 3.
- retry_read (int): How many times to retry on read errors. Defaults to 3.
- retry_status (int): How many times to retry on bad status codes. Defaults to 3.
- retry_to_secondary (bool): Whether the request should be retried to secondary, if able. This should only be enabled if RA-GRS accounts are used and potentially stale data can be handled. Defaults to False.
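The retry_total semantics can be sketched with a generic retry loop. with_retries is a hypothetical helper, not the SDK's policy; the real policy also applies exponential backoff and tracks connect/read/status budgets separately:

```python
def with_retries(operation, retry_total=10):
    """Hypothetical helper: run operation, allowing the first call plus
    up to retry_total retries before surfacing the last error."""
    for attempt in range(retry_total + 1):
        try:
            return operation()
        except Exception:
            if attempt == retry_total:
                raise  # retry budget exhausted

calls = []
def flaky():
    """Simulated transient failure: fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky, retry_total=5)  # succeeds on the third call
```

With retry_total=0 the loop makes exactly one attempt and re-raises on the first failure, which matches the "no retries" behavior described above.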
Other client / per-operation configuration
Other optional configuration keyword arguments that can be specified on the client or per-operation.
Client keyword arguments:
- connection_timeout (int): Optionally sets the connect and read timeout value, in seconds.
- transport (Any): User-provided transport to send the HTTP request.
Per-operation keyword arguments:
- raw_response_hook (callable): The given callback uses the response returned from the service.
- raw_request_hook (callable): The given callback uses the request before being sent to service.
- client_request_id (str): Optional user specified identification of the request.
- user_agent (str): Appends the custom value to the user-agent header to be sent with the request.
- logging_enable (bool): Enables logging at the DEBUG level. Defaults to False. Can also be passed in at the client level to enable it for all requests.
- headers (dict): Pass in custom headers as key, value pairs. E.g. headers={'CustomValue': value}
Troubleshooting
General
Storage File clients raise exceptions defined in Azure Core. All File service operations will throw a StorageErrorException on failure with helpful error codes.
Logging
This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.
Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable argument:
import sys
import logging
from azure.storage.fileshare import ShareServiceClient
# Create a logger for the 'azure.storage.fileshare' SDK
logger = logging.getLogger('azure.storage.fileshare')
logger.setLevel(logging.DEBUG)
# Configure a console output
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)
# This client will log detailed information about its HTTP sessions, at DEBUG level
service_client = ShareServiceClient.from_connection_string("your_connection_string", logging_enable=True)
Similarly, logging_enable can enable detailed logging for a single operation, even when it isn't enabled for the client:
service_client.get_service_properties(logging_enable=True)
Next steps
More sample code
Get started with our File Share samples.
Several Storage File Share Python SDK samples are available to you in the SDK's GitHub repository. These samples provide example code for additional scenarios commonly encountered while working with Storage File Share:
- file_samples_hello_world.py (async version) - Examples found in this article:
  - Client creation
  - Create a file share
  - Upload a file
- file_samples_authentication.py (async version) - Examples for authenticating and creating the client:
  - From a connection string
  - From a shared access key
  - From a shared access signature token
- file_samples_service.py (async version) - Examples for interacting with the file service:
  - Get and set service properties
  - Create, list, and delete shares
  - Get a share client
- file_samples_share.py (async version) - Examples for interacting with file shares:
  - Create a share snapshot
  - Set share quota and metadata
  - List directories and files
  - Get the directory or file client to interact with a specific entity
- file_samples_directory.py (async version) - Examples for interacting with directories:
  - Create a directory and add files
  - Create and delete subdirectories
  - Get the subdirectory client
- file_samples_client.py (async version) - Examples for interacting with files:
  - Create, upload, download, and delete files
  - Copy a file from a URL
Additional documentation
For more extensive documentation on Azure File Share storage, see the Azure File Share storage documentation on docs.microsoft.com.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Release History
12.1.0
New features
- Added support for the 2019-07-07 service version, and added api_version parameter to clients.
- ShareLeaseClient was introduced to both sync and async versions of the SDK, which allows users to perform operations on file leases.
- failed_handles_count info was included in close_handle and close_all_handles result.
- Added support for obtaining premium file properties in list_shares and get_share_properties.
- Added support for additional start_copy_from_url parameters - file_permission, permission_key, file_attributes, file_creation_time, file_last_write_time, ignore_read_only, and set_archive_attribute.
Fixes and improvements
- Fixed a bug: clear_range API was not working.
Fixes
- Responses are always decoded as UTF8
12.0.0
New features
- Added delete_directory method to the share_client.
- All the clients now have a close() method to close the sockets opened by the client when used outside of a context manager.
Fixes and improvements
- Fixes a bug where determining length breaks while uploading a file when provided with an invalid fileno.
Breaking changes
- close_handle(handle) and close_all_handles() no longer return int. These functions now return a dictionary containing the number of handles closed and the number of handles that failed to close.
12.0.0b5
Important: This package was previously named azure-storage-file. Going forward, to use this SDK, please install azure-storage-file-share.
Additionally:
- The namespace within the package has also been renamed to azure.storage.fileshare.
- FileServiceClient has been renamed to ShareServiceClient.
- DirectoryClient has been renamed to ShareDirectoryClient.
- FileClient has been renamed to ShareFileClient.
Additional Breaking changes
- ShareClient now accepts only account_url with a mandatory string param share_name. To use a share_url, the method from_share_url must be used.
- ShareDirectoryClient now accepts only account_url with mandatory string params share_name and directory_path. To use a directory_url, the method from_directory_url must be used.
- ShareFileClient now accepts only account_url with mandatory string params share_name and file_path. To use a file_url, the method from_file_url must be used.
- file_permission_key parameter has been renamed to permission_key.
- set_share_access_policy has required parameter signed_identifiers.
- NoRetry policy has been removed. Use keyword argument retry_total=0 for no retries.
- Removed types that were accidentally exposed from two modules. Only ShareServiceClient, ShareClient, ShareDirectoryClient and ShareFileClient should be imported from azure.storage.fileshare.aio.
- Some parameters have become keyword only, rather than positional. Some examples include: loop, max_concurrency, validate_content, timeout, etc.
- Client and model files have been made internal. Users should import from the top level modules azure.storage.fileshare and azure.storage.fileshare.aio only.
- The generate_shared_access_signature methods on each of ShareServiceClient, ShareClient and ShareFileClient have been replaced by module level functions generate_account_sas, generate_share_sas and generate_file_sas.
- start_range and end_range params are now renamed to and behave like offset and length in the following APIs: download_file, upload_range, upload_range_from_url, clear_range, get_ranges.
- StorageStreamDownloader is no longer iterable. To iterate over the file data stream, use StorageStreamDownloader.chunks.
- The public attributes of StorageStreamDownloader have been limited to:
  - name (str): The name of the file.
  - path (str): The full path of the file.
  - share (str): The share the file will be downloaded from.
  - properties (FileProperties): The properties of the file.
  - size (int): The size of the download. Either the total file size, or the length of a subsection if specified. Previously called download_size.
- StorageStreamDownloader now has new functions:
  - readall(): Reads the complete download stream, returning bytes. This replaces the functions content_as_bytes and content_as_text which have been deprecated.
  - readinto(stream): Download the complete stream into the supplied writable stream, returning the number of bytes written. This replaces the function download_to_stream which has been deprecated.
- ShareFileClient.close_handles and ShareDirectoryClient.close_handles have both been replaced by two functions each: close_handle(handle) and close_all_handles(). These functions are blocking and return integers (the number of closed handles) rather than polling objects.
- get_service_properties now returns a dict with keys consistent with set_service_properties.
New features
- ResourceTypes, NTFSAttributes, and Services now have method from_string which takes parameters as a string.
12.0.0b4
Breaking changes
- Permission models:
  - AccountPermissions, SharePermissions and FilePermissions have been renamed to AccountSasPermissions, ShareSasPermissions and FileSasPermissions respectively.
  - enum-like list parameters have been removed from all three of them.
  - __add__ and __or__ methods are removed.
- max_connections is now renamed to max_concurrency.
New features
- AccountSasPermissions, FileSasPermissions, ShareSasPermissions now have method from_string which takes parameters as a string.
12.0.0b3
New features
- Added upload_range_from_url API to write the bytes from one Azure File endpoint into the specified range of another Azure File endpoint.
- Added set_http_headers for directory_client, create_permission_for_share and get_permission_for_share APIs.
- Added optional SMB properties related parameters to the create_file*, create_directory* APIs and the set_http_headers API.
- Updated get_properties for directory and file so that the response has SMB properties.
Dependency updates
- Adopted azure-core 1.0.0b3
  - If you later want to revert to previous versions of azure-storage-file, or another Azure SDK library requiring azure-core 1.0.0b1 or azure-core 1.0.0b2, you must explicitly install the specific version of azure-core as well. For example:

        pip install azure-core==1.0.0b2 azure-storage-file==12.0.0b2
Fixes and improvements
- Fixed an issue where content-type was added to the request even when not explicitly specified.
12.0.0b2
Breaking changes
- Renamed copy_file_from_url to start_copy_from_url and changed behaviour to return a dictionary of copy properties rather than a polling object. Status of the copy operation can be retrieved with the get_file_properties operation.
- Added abort_copy operation to the FileClient class. This replaces the previous abort operation on the copy status polling operation.
- The behavior of listing operations has been modified:
  - The previous marker parameter has been removed.
  - The iterable response object now supports a by_page function that will return a secondary iterator of batches of results. This function supports a continuation_token parameter to replace the previous marker parameter.
- The new listing behaviour is also adopted by the receive_messages operation:
  - The receive operation returns a message iterator as before.
  - The returned iterator supports a by_page operation to receive messages in batches.
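The by_page/continuation_token contract can be sketched with plain Python. Here by_page is a hypothetical stand-in paging over an in-memory list, using an index as the continuation token; the real iterator pages through service responses:

```python
def by_page(items, results_per_page=2, continuation_token=None):
    """Yield batches of items, resuming from a continuation token
    (modeled here as a start index)."""
    start = int(continuation_token or 0)
    for i in range(start, len(items), results_per_page):
        yield items[i:i + results_per_page]

# Iterate all pages from the start:
pages = list(by_page(["a", "b", "c", "d", "e"]))

# Resume a listing from a saved continuation token:
resumed = list(by_page(["a", "b", "c", "d", "e"], continuation_token=2))
```

Saving the continuation token between calls is what lets a caller resume a long listing later, which is the role the removed marker parameter used to play.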
New features
- Added async APIs to subnamespace azure.storage.file.aio.
- Distributed tracing framework OpenCensus is now supported.
Dependency updates
- Adopted azure-core 1.0.0b2
  - If you later want to revert to azure-storage-file 12.0.0b1, or another Azure SDK library requiring azure-core 1.0.0b1, you must explicitly install azure-core 1.0.0b1 as well. For example:

        pip install azure-core==1.0.0b1 azure-storage-file==12.0.0b1
Fixes and improvements
- Fix for closing file handles - continuation token was not being passed to subsequent calls.
- General refactor of duplicate and shared code.
12.0.0b1
Version 12.0.0b1 is the first preview of our efforts to create a user-friendly and Pythonic client library for Azure Storage Files. For more information about this, and preview releases of other Azure SDK libraries, please visit https://aka.ms/azure-sdk-preview1-python.
Breaking changes: New API design
- Operations are now scoped to a particular client:
  - FileServiceClient: This client handles account-level operations. This includes managing service properties and listing the shares within an account.
  - ShareClient: The client handles operations for a particular share. This includes creating or deleting that share, as well as listing the directories within that share, and managing properties and metadata.
  - DirectoryClient: The client handles operations for a particular directory. This includes creating or deleting that directory, as well as listing the files and subdirectories, and managing properties and metadata.
  - FileClient: The client handles operations for a particular file. This includes creating or deleting that file, as well as uploading and downloading data and managing properties.
These clients can be accessed by navigating down the client hierarchy, or instantiated directly using URLs to the resource (account, share, directory or file). For full details on the new API, please see the reference documentation.
- The copy file operation now returns a polling object that can be used to check the status of the operation, as well as abort the operation.
- The close_handles operation now returns a polling object that can be used to check the status of the operation.
- Download operations now return a streaming object that can download data in multiple ways:
  - Iteration: The streamer is an iterable object that will download and yield the content in chunks. Only supports single threaded download.
  - content_as_bytes: Return the entire file content as bytes. Blocking operation that supports multi-threaded download.
  - content_as_text: Return the entire file content as decoded text. Blocking operation that supports multi-threaded download.
  - download_to_stream: Download the entire content to an open stream handle (e.g. an open file). Supports multi-threaded download.
- New underlying REST pipeline implementation, based on the new azure.core library.
- Client and pipeline configuration is now available via keyword arguments at both the client level, and per-operation. See reference documentation for a full list of optional configuration arguments.
- New error hierarchy:
  - All service errors will now use the base type: azure.core.exceptions.HttpResponseError
  - There are a couple of specific exception types derived from this base type for common error scenarios:
    - ResourceNotFoundError: The resource (e.g. share, file) could not be found. Commonly a 404 status code.
    - ResourceExistsError: A resource conflict - commonly caused when attempting to create a resource that already exists.
    - ResourceModifiedError: The resource has been modified (e.g. overwritten) and therefore the current operation is in conflict. Alternatively this may be raised if a condition on the operation is not met.
    - ClientAuthenticationError: Authentication failed.
- Operation set_file_properties has been renamed to set_http_headers.
- Operations get_file_to_<output> have been replaced with download_file. See above for download output options.
- Operations create_file_from_<input> have been replaced with upload_file.
- Operations get_share_acl and set_share_acl have been renamed to get_share_access_policy and set_share_access_policy.
- Operation set_share_properties has been renamed to set_share_quota.
- Operation snapshot_share has been renamed to create_snapshot.
- Operation copy_file has been renamed to copy_file_from_url.
- No longer have specific operations for get_metadata - use get_properties instead.
- No longer have specific operations for exists - use get_properties instead.
- Operation update_range has been renamed to upload_range.
2.0.1
- Updated dependency on azure-storage-common.
2.0.0
- Support for 2018-11-09 REST version. Please see our REST API documentation and blogs for information about the related added features.
- Added an option to get share stats in bytes.
- Added support for listing and closing file handles.
1.4.0
- azure-storage-nspkg is not installed anymore on Python 3 (PEP420-based namespace package)
1.3.1
- Fixed design flaw where get_file_to_* methods buffer entire file when max_connections is set to 1.
1.3.0
- Support for 2018-03-28 REST version. Please see our REST API documentation and blog for information about the related added features.
1.2.0rc1
- Support for 2017-11-09 REST version. Please see our REST API documentation and blog for information about the related added features.
1.1.0
- Support for 2017-07-29 REST version. Please see our REST API documentation and blogs for information about the related added features.
- Error message now contains the ErrorCode from the x-ms-error-code header value.
1.0.0
- The package has switched from Apache 2.0 to the MIT license.
- Fixed bug where get_file_to_* cannot get a single byte when start_range and end_range are both equal to 0.
- Metadata keys are now case-preserving when fetched from the service. Previously they were made lower-case by the library.