
azpaddypy

Azure cloud services SDK with Storage (blob, append blob, file share, queue), Key Vault, Cosmos DB, AI Foundry Projects, Document Intelligence, Speech, OpenTelemetry tracing, AI Foundry GenAI tracing, and builder patterns.

Designed for Python 3.11+ running in Dockerized Azure Function Apps and Web Apps.

Installation

uv add azpaddypy

Quick Start

from azpaddypy import AzureStorage, AzureIdentity, create_azure_storage

# Factory function (cached instances, auto-creates identity)
storage = create_azure_storage(
    account_url="https://myaccount.blob.core.windows.net/",
    service_name="my_service",
)

# Or explicit identity
identity = AzureIdentity(service_name="my_service")
storage = AzureStorage(
    account_url="https://myaccount.blob.core.windows.net/",
    azure_identity=identity,
    enable_file_storage=True,
)

Storage Operations

Blob Storage

# Upload
storage.upload_blob(
    container_name="documents",
    blob_name="report.pdf",
    data=pdf_bytes,
    content_type="application/pdf",
    metadata={"author": "team"},
)

# Download (returns None if not found)
data = storage.download_blob(container_name="documents", blob_name="report.pdf")

# Upload and get SAS URL
from datetime import timedelta

sas_url = storage.upload_blob_with_sas(
    container_name="documents",
    blob_name="report.pdf",
    data=pdf_bytes,
    sas_permission="r",
    sas_expiry_delta=timedelta(hours=3),
)

# List, exists, delete
blobs = storage.list_blobs(container_name="documents", name_starts_with="reports/")
exists = storage.blob_exists(container_name="documents", blob_name="report.pdf")
storage.delete_blob(container_name="documents", blob_name="report.pdf")

# Metadata upsert (merges with existing)
storage.upsert_blob_metadata(
    container_name="documents",
    blob_name="report.pdf",
    metadata={"status": "processed"},
)

# SAS token generation
blob_sas = storage.get_blob_sas(container_name="docs", blob_name="file.pdf")
container_sas = storage.get_container_sas(container_name="docs", permission="r")

Append Blob Storage

Append blobs are optimized for append operations such as logging, auditing, or streaming data. Each append block can be up to 4 MiB. Unlike block blobs, append blobs do not support overwriting existing content.

# Create an empty append blob
storage.create_append_blob(
    container_name="logs",
    blob_name="app-2026-04-05.log",
    content_type="text/plain; charset=utf-8",
    metadata={"source": "web-app"},
)

# Append data blocks
storage.append_block(
    container_name="logs",
    blob_name="app-2026-04-05.log",
    data="2026-04-05T10:00:00Z INFO Application started\n",
)

storage.append_block(
    container_name="logs",
    blob_name="app-2026-04-05.log",
    data=b"2026-04-05T10:00:01Z DEBUG Connection pool initialized\n",
)

# Convenience: create-if-missing + append in one call
storage.append_blob_from_text(
    container_name="logs",
    blob_name="app-2026-04-05.log",
    text="2026-04-05T10:05:00Z WARN High memory usage\n",
    create_if_not_exists=True,  # default, skips creation if blob already exists
)

File Share Storage

Requires enable_file_storage=True. Uses Azure File Shares (SMB/NFS), not blob storage.

storage = AzureStorage(
    account_url="https://myaccount.blob.core.windows.net/",
    azure_identity=identity,
    enable_file_storage=True,
)

# Upload (auto-creates parent directories)
storage.upload_share_file(
    share_name="myshare",
    file_path="reports/2026/q1.pdf",
    data=pdf_bytes,
    content_type="application/pdf",
)

# Download (returns None if not found)
data = storage.download_share_file(share_name="myshare", file_path="reports/2026/q1.pdf")

# List files and directories
items = storage.list_share_files(share_name="myshare", directory_path="reports/2026")
# Returns: [{"name": "q1.pdf", "is_directory": False, "size": 1024}, ...]

# Exists, properties, delete
exists = storage.share_file_exists(share_name="myshare", file_path="reports/2026/q1.pdf")
props = storage.get_share_file_properties(share_name="myshare", file_path="reports/2026/q1.pdf")
storage.delete_share_file(share_name="myshare", file_path="reports/2026/q1.pdf")

# Directory management
storage.create_share_directory(share_name="myshare", directory_path="reports/2026/q2")
storage.delete_share_directory(share_name="myshare", directory_path="reports/2026/q2")

# Metadata upsert (merges with existing)
storage.upsert_share_file_metadata(
    share_name="myshare",
    file_path="reports/2026/q1.pdf",
    metadata={"reviewed": "true"},
)

Queue Storage

# Send
storage.send_message(
    queue_name="tasks",
    content='{"task": "process"}',
    visibility_timeout=30,
    time_to_live=3600,
)

# Receive
messages = storage.receive_messages(queue_name="tasks", messages_per_page=5)
for msg in messages:
    print(msg["id"], msg["content"])
    storage.delete_message(
        queue_name="tasks",
        message_id=msg["id"],
        pop_receipt=msg["pop_receipt"],
    )
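The content field is a plain string, so structured payloads are typically JSON-encoded on send and decoded on receive. A sketch of that round trip (the helper functions are illustrative, not part of azpaddypy):

```python
import json

def encode_task(task: dict) -> str:
    """Serialize a task payload for send_message(content=...)."""
    return json.dumps(task, separators=(",", ":"))

def decode_task(content: str) -> dict:
    """Parse msg["content"] back into a dict after receive_messages()."""
    return json.loads(content)

content = encode_task({"task": "process", "blob": "documents/report.pdf"})
task = decode_task(content)
print(task["task"])  # process
```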

Builder Pattern

For complex multi-resource setups:

from azpaddypy.builder import AzureManagementBuilder, AzureResourceBuilder
from azpaddypy.builder.directors import ConfigurationSetupDirector

# One-liner setup with director
config = ConfigurationSetupDirector.default_setup(
    service_name="my_app",
    service_version="1.0.0",
)

# Or step-by-step with builders
mgmt = (
    AzureManagementBuilder()
    .with_logger(service_name="my_app")
    .with_identity()
    .with_keyvault(vault_url="https://myvault.vault.azure.net/")
    .build()
)

resources = (
    # env_config: your environment configuration (e.g. from ConfigurationSetupDirector)
    AzureResourceBuilder(mgmt, env_config)
    .with_storage("default", enable_blob=True, enable_queue=True)
    .with_storage("archive", account_url="https://archive.blob.core.windows.net/", enable_file=True)
    .with_ai_project(endpoint="https://my-ai.services.ai.azure.com/api/projects/my-project")
    .with_document_intelligence(endpoint="https://my-ai.cognitiveservices.azure.com/")
    .with_speech(
        region="westeurope",
        resource_id="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<ai-services>",
    )
    .build()
)

storage = resources.get_storage("default")
archive = resources.get_storage("archive")
ai_project = resources.get_ai_project("default")
doc_intel = resources.get_document_intelligence("default")
speech = resources.get_speech("default")

Note: Document Intelligence and Speech are configured exclusively through mgmt_config (typically from Key Vault secrets). They have no environment-variable fallbacks — pass endpoint (and for Speech, region + resource_id) explicitly.

Key Vault

from azpaddypy import AzureKeyVault, create_azure_keyvault

kv = create_azure_keyvault(
    vault_url="https://myvault.vault.azure.net/",
    service_name="my_service",
)

secret = kv.get_secret("database-connection-string")
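A secret often holds a connection string of semicolon-delimited Key=Value pairs; parsing one into a dict is a one-liner. A sketch (the helper is illustrative, not an azpaddypy API):

```python
def parse_connection_string(raw: str) -> dict:
    """Split 'Key=Value;Key2=Value2' into a dict; values may contain '='."""
    return dict(part.split("=", 1) for part in raw.split(";") if part)

conn = parse_connection_string(
    "AccountName=myaccount;AccountKey=abc==;EndpointSuffix=core.windows.net"
)
print(conn["AccountName"])  # myaccount
```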

AI Foundry Projects

Manage Azure AI Foundry agents, deployments, and connections with integrated OpenAI client support.

from azpaddypy import AzureAIProject, create_azure_ai_project

# Factory function (cached instances, auto-creates identity)
ai = create_azure_ai_project(
    endpoint="https://my-ai.services.ai.azure.com/api/projects/my-project",
    service_name="my_service",
)

# List deployments
deployments = ai.list_deployments()

# Get an authenticated OpenAI client
openai_client = ai.get_openai_client()

# Agent operations
from azure.ai.projects.models import PromptAgentDefinition

agent = ai.create_agent(
    agent_name="my-agent",
    definition=PromptAgentDefinition(model="gpt-4o", instructions="You are helpful"),
)

agents = ai.list_agents()
details = ai.get_agent(agent_name="my-agent")

# Invoke an agent via OpenAI responses API
result = ai.invoke_agent(agent_name="my-agent", user_message="Hello")
print(result["response"])

# Connections
connections = ai.list_connections()
connection = ai.get_connection(name="my-openai-connection", include_credentials=True)

Feature Flags

ai = AzureAIProject(
    endpoint="https://my-ai.services.ai.azure.com/api/projects/my-project",
    azure_identity=identity,
    enable_agents=True,       # Agent CRUD + invocation
    enable_deployments=True,  # List/get model deployments
    enable_connections=False,  # Disable connection enumeration
)

Document Intelligence

Analyze documents using Azure AI Document Intelligence (formerly Form Recognizer). Shares the same Cognitive Services / AI Services account as AI Foundry.

from azpaddypy import AzureDocumentIntelligence, create_azure_document_intelligence

di = create_azure_document_intelligence(
    endpoint="https://my-ai.cognitiveservices.azure.com/",
    service_name="my_service",
    enable_administration=True,  # opt in to model management
)

# Analyze from URL with a prebuilt model
result = di.analyze_document_from_url(
    model_id="prebuilt-layout",
    url_source="https://example.com/invoice.pdf",
)
print(f"Pages: {len(result.pages)}")

# Analyze from bytes
with open("contract.pdf", "rb") as f:
    result = di.analyze_document_from_bytes(model_id="prebuilt-read", document=f.read())

# Manage custom models
models = di.list_models()
model = di.get_model(model_id="my-custom-model")
di.delete_model(model_id="my-custom-model")

Speech

Azure Cognitive Services Speech with Entra ID authentication. Unlike most Azure SDKs, the Speech SDK does not accept TokenCredential directly — it requires the special aad#<resource-id>#<token> auth string. azpaddypy handles token acquisition, format, and refresh.

You must provide both the Azure region and the full ARM resource ID of the Speech / AI Services account.
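The formatting step behind that auth string is simple to illustrate. A sketch of building the aad#<resource-id>#<token> value (the token itself would come from azure-identity in practice; the value here is a placeholder):

```python
def build_speech_auth_token(resource_id: str, aad_token: str) -> str:
    """Format an Entra ID access token the way the Speech SDK expects."""
    return f"aad#{resource_id}#{aad_token}"

auth = build_speech_auth_token(
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.CognitiveServices/accounts/<ai-services>",
    "<access-token>",  # placeholder; acquire via azure-identity in practice
)
print(auth.startswith("aad#"))  # True
```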

from azpaddypy import AzureSpeech, create_azure_speech

speech = create_azure_speech(
    region="westeurope",
    resource_id=(
        "/subscriptions/<sub>/resourceGroups/<rg>"
        "/providers/Microsoft.CognitiveServices/accounts/<ai-services>"
    ),
    service_name="my_service",
    default_speech_synthesis_voice_name="en-US-JennyNeural",
)

# Synthesize text to in-memory bytes (server / container scenarios)
audio: bytes = speech.synthesize_text_to_bytes("Hello from azpaddypy")

# Synthesize and write directly to a file
speech.synthesize_text_to_file("Hello from azpaddypy", file_path="out.wav")

# Synthesize and play on the default speaker (interactive / local dev)
speech.synthesize_text_to_speaker("Hello from azpaddypy")

Custom synthesizers and recognizers

For full control (streaming, recognition, custom audio configs, event callbacks), get a fresh SpeechConfig and build your own:

import azure.cognitiveservices.speech as speechsdk

speech_config = speech.get_speech_config()
synthesizer = speechsdk.SpeechSynthesizer(
    speech_config=speech_config,
    audio_config=speechsdk.audio.AudioOutputConfig(filename="out.wav"),
)
synthesizer.speak_text_async("Hello from azpaddypy").get()

# Refresh AAD token on long-lived synthesizers/recognizers
# (Speech tokens expire after ~10 minutes)
speech.refresh_authorization_token(synthesizer)

Observability

All operations include OpenTelemetry spans and structured logging via Application Insights.

storage = AzureStorage(
    account_url="https://myaccount.blob.core.windows.net/",
    azure_identity=identity,
    connection_string="InstrumentationKey=...",  # App Insights
)

# Correlation tracking across distributed calls
storage.set_correlation_id("request-abc-123")
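Correlation IDs are arbitrary strings; a common choice is a per-request UUID. A sketch (any unique string works; the helper is illustrative):

```python
import uuid

def new_correlation_id(prefix: str = "request") -> str:
    """Build a unique correlation ID like 'request-<uuid4>'."""
    return f"{prefix}-{uuid.uuid4()}"

cid = new_correlation_id()
print(cid.startswith("request-"))  # True
```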

AI Foundry Tracing

AzureLogger automatically instruments the OpenAI SDK (via opentelemetry-instrumentation-openai-v2) on initialization. Every openai.chat.completions.create() call produces GenAI-semantic spans (model, token usage, latency) that flow through Azure Monitor to the AI Foundry Tracing UI.

When log_result=True is set on the @logger.trace_function() decorator, GenAI content recording is enabled so prompt messages and completion responses are captured in the trace spans.

from mgmt_config import logger, ai_projects, log_execution_config

@logger.trace_function(log_result=True)
async def generate_summary(document_text: str) -> str:
    ai_project = ai_projects.get("aiservices")
    openai_client = ai_project.get_openai_client()

    response = openai_client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": "Summarize the document."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

The trace in AI Foundry shows a parent span for generate_summary with a child chat gpt-5 span containing model, token counts, and (with log_result=True) the full prompt/completion content.

Feature Flags

Enable only the storage services you need:

Flag                   Default   Service
enable_blob_storage    True      BlobServiceClient
enable_file_storage    False     ShareServiceClient (requires token_intent="backup" RBAC)
enable_queue_storage   True      QueueServiceClient

Dependencies

  • azure-storage-blob - Blob operations
  • azure-storage-file-share - File share operations
  • azure-storage-queue - Queue operations
  • azure-identity - Credential management
  • azure-keyvault-secrets / keys / certificates - Key Vault
  • azure-cosmos - Cosmos DB
  • azure-ai-projects - AI Foundry Projects (agents, deployments, connections)
  • azure-ai-documentintelligence - Document Intelligence (analyze, model management)
  • azure-cognitiveservices-speech - Speech (synthesis, recognition with Entra ID)
  • azure-monitor-opentelemetry - Telemetry
  • opentelemetry-instrumentation-openai-v2 - AI Foundry tracing for OpenAI SDK calls
