Python SDK for the AMPA API
AMPA SDK & Service Consumption
This guide is for developers, automation scripts, and services that need to call AMPA without digging through the full platform repo. It covers installing the Python SDK, configuring authentication, and exercising the underlying REST API.
Overview
- SDK package: the37lab_ampa_sdk wraps the REST endpoints with an ergonomic Python interface.
- API base URL: All client traffic targets the AMPA API (default prefix /api/v1).
- Auth options: Use basic auth (username/password) or a long-lived API token.
- Output formats: SDK methods return JSON dictionaries, plain text, or raw bytes depending on the endpoint and content type.
Installation
pip install the37lab_ampa_sdk
Requirements:
- Python 3.9+
- Network access to your AMPA API deployment
If you are developing against the monorepo, install the editable package instead:
pip install -e ampa_sdk
Configuration & Authentication
You can configure the client via constructor arguments or environment variables. The SDK resolves values in this order: explicit parameters → environment variables → defaults.
| Variable | Purpose |
|---|---|
| AMPA_API_URL | Base URL of the AMPA API (include protocol and optional /api/v1). Default: https://ampa.the37lab.com:13002/api/v1. |
| AMPA_API_USERNAME | Username for basic authentication. |
| AMPA_API_PASSWORD | Password for basic authentication. |
| AMPA_API_TOKEN | Personal access token alternative to username/password. |
Passing configuration explicitly
from the37lab_ampa_sdk import PromptAPI
client = PromptAPI(
ampa_url="http://localhost:8080/api/v1",
username="admin",
password="supersecret",
)
Token-based authentication
from the37lab_ampa_sdk import PromptAPI
client = PromptAPI(
ampa_url="https://ampa.example.com/api/v1",
api_token="api_1234abcd...", # value created in the admin panel
)
Tokens are sent as X-API-Token headers by the SDK. Keep them secret; they grant the same rights as the issuing user.
Python SDK Quickstart
from the37lab_ampa_sdk import PromptAPI
client = PromptAPI(
ampa_url="http://localhost:8080/api/v1",
username="admin",
password="supersecret",
)
# Create a prompt (first version is created automatically)
prompt = client.create_prompt({
"prompt_name": "My Prompt",
"description": "A helpful assistant",
"purpose": "You are a helpful assistant",
"instruction": "Tell a story about Sweden",
})
# Execute the prompt with runtime variables
response = client.call_prompt(
prompt_name="My Prompt",
variables={"name": "John"},
prompt="Tell me a story",
)
# Inspect versions
versions = client.list_prompt_versions(prompt["id"])
Common helpers:
- client.list_prompts() – enumerate prompts.
- client.update_prompt(prompt_id, **fields) – modify metadata.
- client.delete_prompt(prompt_id) – retire a prompt.
- client.list_prompt_tests() – manage test suites.
Refer to ampa_sdk/README.md for the complete method surface.
Error Handling Contract
When the API returns a non-2xx response, the SDK raises AMPAAPIError instead of a plain requests exception.
from the37lab_ampa_sdk import AMPAAPIError, PromptAPI
client = PromptAPI(
ampa_url="http://localhost:8080/api/v1",
username="admin",
password="supersecret",
)
try:
client.call_prompt(prompt="bad-prompt")
except AMPAAPIError as exc:
print(exc.title) # e.g. "Invalid prompt configuration"
print(exc.hint) # e.g. "The prompt's structured output schema is invalid..."
print(exc.code) # e.g. "INVALID_PAYLOAD"
print(exc.status) # e.g. 400
print(exc.request_id) # correlation ID for support/debugging
print(exc.details) # structured metadata for UI or automation
The API error body exposed through the SDK includes:
- code: Stable machine-readable error code for branching logic.
- status: HTTP status mirrored into the JSON body.
- title: Short UI-safe summary.
- hint: High-level remediation guidance.
- error: Raw operator-facing detail.
- request_id: Correlation ID, also mirrored in the X-Request-Id response header.
- details: Structured metadata. details.error_type, details.error_class, and details.error_reason are intended for UI or automation.
Typical caller strategy:
- Use code or details.error_reason for branching.
- Show title and hint in the UI.
- Log error, details, and request_id.
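Assuming the error body shape described above, a caller's dispatch logic might look like this sketch. The payload, the `rate_limited` reason, and the handler are illustrative, not part of the SDK:

```python
def handle_error(body: dict) -> str:
    """Branch on the stable error code, falling back to details.error_reason."""
    code = body.get("code")
    reason = body.get("details", {}).get("error_reason")

    if code == "INVALID_PAYLOAD":
        return f"Fix request: {body.get('hint', '')}"
    if reason == "rate_limited":  # hypothetical reason for illustration
        return "Back off and retry"
    # Unknown errors: surface the UI-safe title and log the correlation ID
    return f"{body.get('title', 'Unknown error')} (request_id={body.get('request_id')})"

example = {
    "code": "INVALID_PAYLOAD",
    "status": 400,
    "title": "Invalid prompt configuration",
    "hint": "The prompt's structured output schema is invalid",
    "request_id": "req-123",
    "details": {"error_reason": "schema_invalid"},
}
print(handle_error(example))
# → Fix request: The prompt's structured output schema is invalid
```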
Text-to-Speech (TTS) with ElevenLabs
AMPA supports text-to-speech synthesis using ElevenLabs as the provider. When calling a TTS prompt, the response is raw audio bytes (typically MP3 format) instead of text.
Prerequisites
- Configure ElevenLabs Provider: Ensure an ElevenLabs account and API key are configured in AMPA (via the Models page in the admin UI).
- Configure ElevenLabs TTS Model: Use the Model Settings page to configure an ElevenLabs TTS model:
- Navigate to Model Settings in the admin UI
- Select ElevenLabs as the AI Provider
- Select a TTS model (e.g., eleven_multilingual_v2)
- The page will automatically load available voices from the ElevenLabs API
- Select desired voices and output formats using the checkboxes
- The model_properties JSON will automatically update with selected voices and formats
- Create a TTS Prompt: Create a prompt with:
- Model type: TTS
- AI Provider: ElevenLabs
- Model: An ElevenLabs voice ID (e.g., "21m00Tcm4TlvDq8ikWAM" for the Rachel voice)
- Optional: Set output_format in the model's model_properties (defaults to mp3_44100_128)
Model Settings Page Features
The Model Settings page (/model-settings) provides a comprehensive interface for configuring ElevenLabs TTS models:
Voice Selection:
- Automatic Voice Loading: When an ElevenLabs TTS model is selected, available voices are automatically fetched from the ElevenLabs API
- Multi-select Checkboxes: Select multiple voices by checking the boxes next to each voice
- Voice Information: Each voice displays:
- Voice name
- Voice ID (in brackets)
- Supported languages
- Search Bar: Filter voices by name, voice ID, or description
- Language Filter: Multi-select filter to show voices by language (Swedish, Norwegian, German, English variants, French)
- Show Selected/All Toggle: Toggle between showing only selected voices (default) or all available voices
- Reset Filter Button: Clears search and language filters, shows all voices
Output Formats:
- Multi-select Checkboxes: Select multiple output formats
- Supported Formats: All ElevenLabs-supported formats are available (MP3 variants, PCM, ULAW, Opus)
Model Properties JSON:
- Automatic Updates: The model_properties JSON field automatically updates when voices or output formats are selected/deselected
- Structure: The JSON contains:

{
  "voices": [
    {
      "voice_id": "21m00Tcm4TlvDq8ikWAM",
      "name": "Rachel",
      "description": "Calm and empathetic",
      "languages": ["en-US", "en-GB"]
    }
  ],
  "output_formats": ["mp3_44100_128", "mp3_44100_192"]
}
- Validation: Before saving, the structure is validated to ensure:
  - voices is an array (if present)
  - Each voice has the required fields voice_id (string) and name (string)
  - Optional fields: description (string), languages (array of strings)
  - output_formats is an array of strings (if present)
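The validation rules above can be sketched as a standalone check. This is an illustrative re-implementation of the documented rules, not the server's actual validator:

```python
def validate_model_properties(props: dict) -> list[str]:
    """Return a list of validation errors for a model_properties payload."""
    errors = []
    voices = props.get("voices")
    if voices is not None:
        if not isinstance(voices, list):
            errors.append("voices must be an array")
        else:
            for i, voice in enumerate(voices):
                # voice_id and name are required strings
                if not isinstance(voice.get("voice_id"), str):
                    errors.append(f"voices[{i}].voice_id must be a string")
                if not isinstance(voice.get("name"), str):
                    errors.append(f"voices[{i}].name must be a string")
                # languages is optional but must be an array of strings
                langs = voice.get("languages")
                if langs is not None and not all(isinstance(l, str) for l in langs):
                    errors.append(f"voices[{i}].languages must be an array of strings")
    formats = props.get("output_formats")
    if formats is not None and not all(isinstance(f, str) for f in formats):
        errors.append("output_formats must be an array of strings")
    return errors

good = {
    "voices": [{"voice_id": "21m00Tcm4TlvDq8ikWAM", "name": "Rachel"}],
    "output_formats": ["mp3_44100_128"],
}
print(validate_model_properties(good))
# → []
```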
Subscription Information:
- Displays ElevenLabs subscription tier, character limits, and voice limits when available
Recommendation: Explore voices on the ElevenLabs homepage first, then locate the same voice by ID in the Model Settings page.
Calling a TTS Prompt
from the37lab_ampa_sdk import PromptAPI
client = PromptAPI(
ampa_url="http://localhost:8080/api/v1",
username="admin",
password="supersecret",
)
# Call a TTS prompt - response is raw MP3 audio bytes
response = client.call_prompt(
prompt="my_tts_prompt", # Prompt name or ID
prompt_text="Hello, this is a test message", # Text to synthesize
variables={
"stability": 0.7, # Voice stability (0.0-1.0)
"similarity_boost": 0.6, # Similarity boost (0.0-1.0)
"output_format": "mp3_44100_128" # Optional: override output format
}
)
# Save audio to file
if response.headers.get('Content-Type') == 'audio/mpeg':
with open("output.mp3", "wb") as f:
f.write(response.content)
Parameters
call_prompt() for TTS
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | str or int | Yes | Prompt ID or name (must be configured with an ElevenLabs TTS model) |
| prompt_text | str | Yes | The text to synthesize into speech |
| variables | dict | No | Voice settings (see below) |
Voice Settings (variables)
The variables parameter should only contain voice settings. Do not use it for template variable substitution in TTS prompts.
| Key | Type | Default | Description |
|---|---|---|---|
| output_format | str | mp3_44100_128 | Audio output format. Priority: variables > model_properties > default |
| stability | float | 0.5 | Voice stability (0.0-1.0). Higher = more consistent, lower = more variation |
| similarity_boost | float | 0.5 | Similarity boost (0.0-1.0). Higher = closer to original voice |
Output Format Priority:
- If output_format is in variables, it takes precedence
- Otherwise, if output_format is set in the prompt's model model_properties, that is used
- Otherwise, it defaults to mp3_44100_128
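The priority chain can be sketched as a small helper; `resolve_output_format` is a hypothetical illustration of the documented behaviour, not SDK code:

```python
DEFAULT_FORMAT = "mp3_44100_128"

def resolve_output_format(variables: dict, model_properties: dict) -> str:
    """Apply the documented priority: variables > model_properties > default."""
    return (
        variables.get("output_format")
        or model_properties.get("output_format")
        or DEFAULT_FORMAT
    )

print(resolve_output_format({"output_format": "mp3_44100_192"}, {}))  # → mp3_44100_192
print(resolve_output_format({}, {"output_format": "pcm_44100"}))      # → pcm_44100
print(resolve_output_format({}, {}))                                  # → mp3_44100_128
```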
Supported Output Formats:
- mp3_22050_32, mp3_44100_32, mp3_44100_64, mp3_44100_96, mp3_44100_128, mp3_44100_192
- pcm_16000, pcm_22050, pcm_24000, pcm_44100
- ulaw_8000
- opus_48000_128
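When saving the returned audio, a file extension can be derived from the format name. The mapping below is an assumption based on the format-name prefixes listed above, not an official SDK helper:

```python
def extension_for(output_format: str) -> str:
    """Map an ElevenLabs output format name to a plausible file extension."""
    prefix = output_format.split("_", 1)[0]
    return {"mp3": "mp3", "pcm": "pcm", "ulaw": "ulaw", "opus": "opus"}.get(prefix, "bin")

print(extension_for("mp3_44100_128"))   # → mp3
print(extension_for("opus_48000_128"))  # → opus
```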
Response Format
When calling a TTS prompt, the response is different from text-based prompts:
- Content-Type: audio/mpeg (or another audio type based on output_format)
- Body: Raw audio bytes (MP3, PCM, etc.) - not base64 encoded
- Headers: Metadata in X-AMPA-* headers (e.g., X-AMPA-tokens, X-AMPA-cost)
Important: For TTS prompts, the SDK's call_prompt() method returns a response object rather than decoded text; read the raw audio bytes from response.content and the metadata from response.headers, as the examples here do.
Example: Complete TTS Workflow
from the37lab_ampa_sdk import PromptAPI
import os
client = PromptAPI(
ampa_url=os.getenv("AMPA_API_URL"),
api_token=os.getenv("AMPA_API_TOKEN")
)
# Synthesize speech with custom voice settings
response = client.call_prompt(
prompt="narrator_voice",
prompt_text="Welcome to our application. How can I help you today?",
variables={
"stability": 0.8, # More consistent voice
"similarity_boost": 0.7, # Closer to original voice
"output_format": "mp3_44100_192" # Higher quality
}
)
# Verify it's audio
if response.headers.get('Content-Type', '').startswith('audio/'):
# Save to file
output_file = "welcome_message.mp3"
with open(output_file, "wb") as f:
f.write(response.content)
print(f"Audio saved to {output_file}")
# Get metadata from headers
tokens = response.headers.get('X-AMPA-tokens', 'N/A')
cost = response.headers.get('X-AMPA-cost', 'N/A')
print(f"Tokens: {tokens}, Cost: {cost}")
else:
# Handle error (response might be JSON with error details)
print(f"Error: {response.text}")
REST API for TTS
You can also call TTS prompts directly via HTTP:
curl -X POST "http://localhost:8080/api/v1/prompts/my_tts_prompt/call?prompt=Hello%2C%20world%21" \
  -u admin:supersecret \
  -H "Content-Type: application/json" \
  -d '{
    "var": {
      "stability": 0.7,
      "similarity_boost": 0.6,
      "output_format": "mp3_44100_128"
    }
  }' \
  --output output.mp3
Note: The prompt parameter should be passed as a query parameter (?prompt=...) or in the URL-encoded form data. The variables are passed in the JSON body under the var key.
Troubleshooting TTS
- 401/403 errors: Verify the ElevenLabs API key is configured on the AMPA Models page
- Wrong content type: Ensure the prompt is configured with model type TTS and provider ElevenLabs
- Audio format issues: Check that output_format is a valid ElevenLabs format (see supported formats above)
- Empty audio: Verify prompt_text is provided and not empty
- Voice not found: Ensure the model field contains a valid ElevenLabs voice ID
- Voices not loading: Use the refresh button in Model Settings to reload voices from the ElevenLabs API
- Model properties validation errors: Ensure selected voices have valid voice_id and name fields, and output formats are strings
Managing prompt tests
PromptAPI also exposes CRUD helpers for the prompt_tests table so you can curate evaluation suites alongside prompts:
from the37lab_ampa_sdk import PromptAPI
client = PromptAPI(ampa_url="http://localhost:8080/api/v1", api_token="api_123...")
# Create a test case
payload = {
"name": "Greeting",
"description": "Basic salutation",
"data": {"tone": "casual"},
"prompt": "Say hi",
"prompt_ids": [prompt["id"]],
}
test = client.create_prompt_test(**payload)
# Update and fetch
client.update_prompt_test(test["id"], description="Friendly tone")
latest = client.get_prompt_test(test["id"])
# Enumerate tests
all_tests = client.list_prompt_tests()
scoped = client.list_prompt_tests_by_prompt_id(prompt["id"])
# Clean up
client.delete_prompt_test(test["id"])
These helpers return Python dictionaries that mirror the REST payloads, making them easy to log or serialise.
REST API Quick Reference
Use HTTP directly when you need another language runtime or fine-grained control.
Create a prompt
curl -X POST "http://localhost:8080/api/v1/prompts" \
-u admin:supersecret \
-H "Content-Type: application/json" \
-d '{
"prompt_name": "My Prompt",
"description": "A helpful assistant",
"purpose": "You are a helpful assistant",
"instruction": "Tell a story about Sweden"
}'
Run a prompt
curl -X POST "http://localhost:8080/api/v1/prompts/{prompt_id}/run" \
-u admin:supersecret \
-H "Content-Type: application/json" \
-d '{
"instruction_variables": {"name": "John"},
"prompt": "Tell me a story"
}'
Interactive documentation is available once the API is running:
- Swagger UI: <API_BASE>/docs
- ReDoc: <API_BASE>/redoc
Integration Checklist
- Obtain API credentials or mint a token in the admin panel.
- Set AMPA_API_URL (and optional auth variables) or pass configuration directly to PromptAPI.
- Create prompts and tests in a non-production environment.
- Exercise the relevant endpoints/SDK methods and capture expected responses.
- Promote configuration to production and enable monitoring for /api/v1/health and /api/v1/status.
Troubleshooting
- 401/403 errors – credentials are missing, incorrect, or the user lacks the required role (administrator for admin endpoints).
- Connection errors – verify the API host/port, HTTPS certificates, and any corporate proxies.
- Validation errors – the API returns structured error messages with detail fields; inspect them for the failing attribute.
- SDK exceptions – the client raises requests exceptions for transport issues and propagates API errors as AMPAAPIError with the server payload.
Related Resources
- AMPA.md – platform architecture, deployment, and operations.
- ampa_sdk/README.md – full SDK reference inside the repo.
- ampa_api/README.md – endpoint catalog and server configuration.
- ampa-ui/ADMIN_INTEGRATION.md – managing users, roles, and tokens for authentication.