TestMu A2A CLI - Agent-to-Agent Testing from the Terminal
Command-line interface for TestMu AI Agent Testing. Test your chat and phone agents from the terminal.
Installation
pip install testmu-a2a-cli
Or install an editable copy from a source checkout:
pip install -e .
Or run directly from the project without installing:
python -m cli.main --help
Quick Reference
| Command | Purpose |
|---|---|
| testmu-a2a auth | Authenticate with TestMu |
| testmu-a2a test | Quick chat agent test (one command) |
| testmu-a2a init | Initialize testmu-a2a.yaml config |
| testmu-a2a run | Run tests from testmu-a2a.yaml |
| testmu-a2a call | Test phone agents with real calls |
| testmu-a2a redteam | Adversarial security testing |
| testmu-a2a prompts | Set agent prompt and upload requirements |
| testmu-a2a projects | Manage projects |
| testmu-a2a results | View chat evaluation results |
| testmu-a2a scenarios | Manage chat test scenarios |
| testmu-a2a phone-scenarios | Manage phone test scenarios |
| testmu-a2a suites | Manage test suites |
| testmu-a2a schedules | Manage scheduled runs |
| testmu-a2a call-results | View phone call results |
| testmu-a2a profiles | Manage test/agent/endpoint profiles |
| testmu-a2a recordings | Upload and analyze call recordings |
| testmu-a2a voices | Browse available voices |
| testmu-a2a personas | Manage test personas |
| testmu-a2a phone-numbers | Manage phone numbers |
| testmu-a2a thresholds | Manage pass/fail thresholds |
| testmu-a2a assessments | Go-live readiness assessments |
| testmu-a2a health | System health check |
| testmu-a2a credits | View credit balance |
Authentication
All CLI commands require authentication with your LambdaTest/TestMu credentials.
Interactive Login
testmu-a2a auth -u <username> -k <access_key>
Point to a Specific Environment
# Local development
testmu-a2a auth -u <username> -k <access_key> --base-url http://localhost:8000
# Staging
testmu-a2a auth -u <username> -k <access_key> --base-url https://stage-agent-testing.lambdatestinternal.com
# Production (default)
testmu-a2a auth -u <username> -k <access_key>
CI/CD (Non-Interactive)
Set environment variables instead of running testmu-a2a auth:
export TESTMU_USERNAME=<username>
export TESTMU_ACCESS_KEY=<access_key>
export TESTMU_BASE_URL=https://agent-testing.lambdatest.com
LambdaTest aliases are also supported:
export LT_USERNAME=<username>
export LT_ACCESS_KEY=<access_key>
Check Auth Status
testmu-a2a auth status
Logout
testmu-a2a auth logout
Credentials are stored in ~/.testmu-a2a/credentials.json with owner-only permissions (600).
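The stored file is roughly equivalent to the following sketch. The JSON keys are an assumption for illustration; only the location and the owner-only (600) permission bits are documented:

```python
import json
import os
import stat
import tempfile
from pathlib import Path

# Hypothetical file layout; the keys the CLI actually writes may differ.
creds_path = Path(tempfile.mkdtemp()) / "credentials.json"
creds_path.write_text(json.dumps({"username": "me", "access_key": "****"}))
creds_path.chmod(0o600)  # owner-only read/write, matching what the CLI sets

# Verify the permission bits the same way a security check would.
mode = stat.S_IMODE(os.stat(creds_path).st_mode)
print(oct(mode))  # 0o600
```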
Phone Caller Agent Testing
Complete end-to-end flow for testing inbound and outbound phone agents.
Inbound Agent Flow (someone calls your agent)
Step 1: Create a Phone Project
testmu-a2a projects create \
--name "Airline Support Agent" \
--description "Testing our IVR booking agent" \
--type phone_caller_inbound
Note the project ID from the output.
Step 2: Set the Agent Prompt
Tell TestMu what your agent does. This is the most important step — the prompt drives scenario generation, evaluation criteria, and go-live assessments.
Inline:
testmu-a2a prompts set --project <project_id> \
--prompt "You are an airline booking assistant. You help customers find
flights, make reservations, handle cancellations, and process refunds.
Always verify the customer's identity before making changes.
Never share other customers' booking information.
If the customer is upset, empathize before offering solutions."
From a file:
testmu-a2a prompts set --project <project_id> \
--prompt-file ./agent_system_prompt.md
With additional requirement documents (compliance rules, product specs, etc.):
testmu-a2a prompts set --project <project_id> \
--prompt-file ./agent_prompt.md \
--files ./compliance_rules.pdf,./fare_structure.docx \
--context "Agent must comply with DOT airline passenger rights regulations"
Verify what was saved:
testmu-a2a prompts get --project <project_id>
Step 3: Generate Test Scenarios
testmu-a2a phone-scenarios generate \
--project <project_id> \
--count 5 \
--personas "frustrated,confused,elderly,rushed" \
--instructions "Test the agent's ability to handle flight booking, cancellation, and rebooking"
Step 4: Review Generated Scenarios
testmu-a2a phone-scenarios list --project <project_id>
Optionally create a manual scenario:
testmu-a2a phone-scenarios create \
--project <project_id> \
--title "Customer cancels mid-booking" \
--description "Customer starts booking a flight, then changes mind halfway" \
--persona "indecisive"
Step 5: Create a Test Suite
Simple (same call config for all scenarios — supply number/voice at run time):
testmu-a2a suites create \
--project <project_id> \
--name "Booking Flow Regression" \
--scenarios "<scenario_id_1>,<scenario_id_2>,<scenario_id_3>"
Per-scenario config from a YAML file (each scenario has its own number, voice, and background sound):
testmu-a2a suites create \
--project <project_id> \
--name "Booking Flow Regression" \
--from-file suite.yaml
suite.yaml:
scenarios:
- id: <scenario_id_1>
phone_number: "+15551234567"
voice: Neha
voice_provider: vapi
background_sound_url: https://example.com/office-noise.mp3
background_sound_enabled: true
- id: <scenario_id_2>
phone_number: "+15559876543"
voice: andrew
voice_provider: azure
- id: <scenario_id_3>
phone_number: "+15551234567"
Available YAML fields per scenario:
| Field | Description |
|---|---|
| id | Scenario ID (required) |
| phone_number | Phone number to call (E.164 format) |
| voice | Voice ID (e.g., Neha, andrew) |
| voice_provider | Voice synthesis: vapi, azure, 11labs, google |
| background_sound_enabled | Enable background noise (true/false) |
| background_sound_url | URL of background audio file |
| voice_name | Voice display name |
| first_speaker | Who speaks first: simulator (default) or agent |
| wait_seconds | Response delay in seconds (0.5–5.0) |
| max_duration_seconds | Max call duration in seconds (60–1800) |
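Before handing a suite.yaml to the CLI, it can help to lint the per-scenario fields against the documented ranges. A stdlib sketch (the validation logic is mine, not part of the CLI):

```python
import re

# E.164: leading +, country code, up to 15 digits total.
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def validate_scenario(s: dict) -> list[str]:
    """Check one suite.yaml scenario entry against the documented ranges."""
    errors = []
    if "id" not in s:
        errors.append("id is required")
    num = s.get("phone_number")
    if num is not None and not E164.match(num):
        errors.append(f"phone_number not E.164: {num}")
    wait = s.get("wait_seconds")
    if wait is not None and not (0.5 <= wait <= 5.0):
        errors.append("wait_seconds must be 0.5-5.0")
    dur = s.get("max_duration_seconds")
    if dur is not None and not (60 <= dur <= 1800):
        errors.append("max_duration_seconds must be 60-1800")
    return errors

ok = {"id": "abc", "phone_number": "+15551234567", "wait_seconds": 1.0}
bad = {"phone_number": "5551234567", "max_duration_seconds": 30}
print(validate_scenario(ok))   # []
print(validate_scenario(bad))  # missing id, bad number, duration out of range
```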
Step 6: Run the Suite
If per-scenario config was stored at create time (number, voice, etc. already set):
testmu-a2a suites run \
--project <project_id> \
--name "Booking Flow Regression"
Or override all scenarios at run time with the same number/voice:
testmu-a2a suites run \
--project <project_id> \
--name "Booking Flow Regression" \
--number +15551234567 \
--voice Neha \
--voice-provider vapi \
--background-sound https://example.com/office-noise.mp3
Step 7: Check Results
# List all call results for the project
testmu-a2a call-results list --project <project_id>
# Get detailed result for a specific call
testmu-a2a call-results get <call_id>
# Get suite-level summary
testmu-a2a call-results summary <suite_id>
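If you consume results programmatically, a pass rate can be aggregated from JSON output. The result shape below is an assumption for illustration only; inspect the real payload before relying on field names:

```python
import json

# Assumed shape for JSON-formatted call results; the real payload may differ.
raw = """[
  {"call_id": "c1", "status": "passed"},
  {"call_id": "c2", "status": "failed"},
  {"call_id": "c3", "status": "passed"}
]"""

results = json.loads(raw)
passed = sum(1 for r in results if r["status"] == "passed")
rate = passed / len(results)
print(f"{passed}/{len(results)} passed ({rate:.0%})")
```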
Step 8: Schedule Recurring Runs
testmu-a2a schedules create \
--project <project_id> \
--suite <suite_id> \
--frequency daily \
--time 09:00
Step 9: Go-Live Readiness Check
testmu-a2a assessments create --project <project_id> --type phone
Outbound Agent Flow (your agent calls someone)
# Create outbound project
testmu-a2a projects create \
--name "Sales Outreach Agent" \
--description "Testing outbound sales calls" \
--type phone_caller_outbound
# Set agent prompt (what the outbound agent says and does)
testmu-a2a prompts set --project <project_id> \
--prompt "You are a sales agent for Acme Corp. You call existing customers
to offer premium plan upgrades. Be polite, handle objections gracefully,
and never pressure the customer. If they say no, thank them and end the call."
# Generate outbound-specific scenarios
testmu-a2a phone-scenarios generate \
--project <project_id> \
--count 5 \
--type outbound \
--personas "busy executive,interested buyer,skeptical prospect" \
--instructions "Agent offers premium plan upgrade, handles objections"
# Create suite and run (same as inbound)
testmu-a2a suites create \
--project <project_id> \
--name "Outbound Sales Test" \
--scenarios "<scenario_id_1>,<scenario_id_2>,<scenario_id_3>"
testmu-a2a suites run --project <project_id> --name "Outbound Sales Test"
Quick One-Shot Call
Skip the suite and run a single test call:
testmu-a2a call \
--number <phone_number> \
--persona frustrated \
--scenario "Customer wants to cancel their premium subscription" \
--voice Neha \
--voice-provider vapi
Options:
| Flag | Description | Default |
|---|---|---|
| --number, -n | Phone number (E.164 format) | Required |
| --persona, -p | Test persona | neutral |
| --scenario, -s | Scenario description | General inquiry |
| --provider | Voice provider (vapi, pipecat, bolna) | vapi |
| --voice | Voice ID (e.g., Neha, andrew) | Provider default |
| --voice-provider | Voice synthesis (vapi, azure, 11labs, google) | vapi |
| --type, -t | Call type (inbound, outbound) | inbound |
| --max-duration | Max call duration in seconds | 180 |
| --verbose, -v | Show call transcript | false |
| --format, -f | Output format (table, json) | table |
Chat Agent Testing
Quick Test (One Command)
Test any chat agent endpoint with a single command:
testmu-a2a test \
--agent https://my-bot.com/api/chat \
--spec "A travel booking assistant that helps users find flights" \
--count 10
For agents with custom request formats:
testmu-a2a test \
--agent https://my-bot.com/chat \
--body-template '{"input": "{{message}}"}' \
--response-path "output.text" \
--spec "Customer support bot for an e-commerce store" \
-H "Authorization: Bearer <token>"
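As I read the docs, --body-template substitutes {{message}} into the request body, and --response-path walks a dotted path into the response to find the agent's reply. A minimal sketch of that mechanic (the template, message, and response shape are illustrative):

```python
import json
from functools import reduce

# Substitute the {{message}} placeholder into the JSON body template.
template = '{"input": "{{message}}"}'
body = json.loads(template.replace("{{message}}", "Where is my order?"))
print(body)  # {'input': 'Where is my order?'}

# Hypothetical agent response; extract the reply via the dotted path
# "output.text", one key at a time.
response = {"output": {"text": "Your order ships tomorrow."}}
reply = reduce(lambda obj, key: obj[key], "output.text".split("."), response)
print(reply)  # Your order ships tomorrow.
```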
Options:
| Flag | Description | Default |
|---|---|---|
| --agent, -a | Target agent endpoint URL | Required |
| --spec, -s | Agent description or path to spec file | None |
| --count, -n | Number of test scenarios | 10 |
| --categories, -c | Comma-separated categories | All |
| --threshold, -t | Pass/fail threshold (0.0-1.0) | 0.80 |
| --max-turns | Max conversation turns per scenario | 10 |
| --format, -f | Output format (table, json, junit) | table |
| --output, -o | Write results to file | None |
| --verbose, -v | Show conversation transcripts | false |
| --parallel, -p | Number of parallel evaluations | 5 |
| --body-template | JSON body with {{message}} placeholder | None |
| --response-path | JSONPath to extract agent reply | None |
| --method, -m | HTTP method | POST |
| --header, -H | Custom header (repeatable) | None |
Config-Driven Testing
Initialize Config
testmu-a2a init --endpoint https://my-bot.com/api/chat
Creates testmu-a2a.yaml and directories:
testmu-a2a.yaml Project configuration
specs/ Spec documents (PDF, DOCX, MD)
scenarios/ Custom scenario YAML files
reports/ Test reports
testmu-a2a.yaml Structure
agent:
endpoint: "https://my-bot.com/api/chat"
type: chat
method: POST
headers:
Content-Type: "application/json"
body_template:
message: "{{message}}"
response_path: "data.reply"
scenarios:
generate:
from: ./specs/
categories:
- conversational-flow
- intent-recognition
- context-memory
- error-handling
- security
count: 30
evaluation:
thresholds:
accuracy: 0.80
relevance: 0.80
coherence: 0.80
context_retention: 0.75
max_turns: 10
output_format: table
security:
enabled: true
intensity: intermediate
categories:
- prompt-injection
- jailbreak
- pii-leakage
- data-exfiltration
Run from Config
# Run all tests
testmu-a2a run
# Run specific category
testmu-a2a run --category security
# Output as JUnit XML (for CI/CD)
testmu-a2a run --format junit --output results.xml
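The JUnit file can also be inspected directly in a pipeline step. A sketch using a hand-written sample of the XML shape JUnit reporters consume (the CLI's exact attributes may differ):

```python
import xml.etree.ElementTree as ET

# Illustrative JUnit XML, shaped like what test reporters expect.
sample = """<testsuite name="agent-tests" tests="3" failures="1">
  <testcase name="booking-flow"/>
  <testcase name="refund-flow"><failure message="low relevance score"/></testcase>
  <testcase name="security-probe"/>
</testsuite>"""

suite = ET.fromstring(sample)
# A test case failed if it carries a <failure> child element.
failures = [tc.get("name") for tc in suite.iter("testcase")
            if tc.find("failure") is not None]
print(failures)  # ['refund-flow']
```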
Red Team / Security Testing
Adversarial testing across 9 attack categories with 3 difficulty levels.
testmu-a2a redteam \
--agent https://my-bot.com/api/chat \
--intensity advanced \
--spec "Banking customer support agent"
Test specific categories:
testmu-a2a redteam \
--agent https://my-bot.com/api/chat \
--categories prompt-injection,jailbreak,pii-leakage
Attack categories: prompt-injection, jailbreak, data-exfiltration, pii-leakage, harmful-content, overreliance, hijacking, policy-violation, technical-injection
Intensity levels: basic, intermediate, advanced
Output includes a letter grade (A+ through F) and per-category breakdown.
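The cutoffs behind the letter grade are not documented; purely as an illustration of how a pass rate might map onto a grade scale:

```python
# Illustrative grade bands only; the CLI's actual cutoffs are not published.
def grade(pass_rate: float) -> str:
    bands = [(0.97, "A+"), (0.90, "A"), (0.80, "B"), (0.70, "C"), (0.60, "D")]
    for cutoff, letter in bands:
        if pass_rate >= cutoff:
            return letter
    return "F"

print(grade(0.95), grade(0.75), grade(0.40))  # A C F
```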
Agent Prompt & Requirements
The prompt is the single most important input — it tells TestMu what your agent does so it can generate relevant scenarios and evaluate correctly.
Set Prompt (inline)
testmu-a2a prompts set --project <project_id> \
--prompt "You are a customer support agent for a SaaS product.
You help users with billing, account issues, and technical troubleshooting.
Always verify the user's email before making account changes.
Escalate to a human if the customer asks for a refund over $500."
Set Prompt (from file)
testmu-a2a prompts set --project <project_id> \
--prompt-file ./agent_system_prompt.md
Set Prompt with Additional Requirements
Upload compliance docs, product specs, or knowledge base files alongside the prompt:
testmu-a2a prompts set --project <project_id> \
--prompt-file ./agent_prompt.md \
--files ./compliance_rules.pdf,./product_catalog.docx,./faq.md \
--context "Agent must comply with GDPR and never store PII in logs"
Supported file types: PDF, DOCX, TXT, MD, XLSX, MP3, WAV, M4A
View Current Prompt
testmu-a2a prompts get --project <project_id>
testmu-a2a prompts get --project <project_id> --format json
Update Prompt
testmu-a2a prompts update --project <project_id> --id <prompt_id> \
--prompt "Updated prompt text..."
testmu-a2a prompts update --project <project_id> --id <prompt_id> \
--prompt-file ./updated_prompt.md
Delete Prompt
testmu-a2a prompts delete --project <project_id> --id <prompt_id>
Project Management
# List all projects
testmu-a2a projects list
testmu-a2a projects list --format json
# Create a project
testmu-a2a projects create \
--name "My Agent" \
--description "Agent description" \
--type chat
# Update a project
testmu-a2a projects update <project_id> --name "New Name"
testmu-a2a projects update <project_id> --description "Updated description"
testmu-a2a projects update <project_id> \
--name "New Name" \
--description "Updated description"
# Delete a project
testmu-a2a projects delete <project_id>
testmu-a2a projects delete <project_id> --yes # skip confirmation
Project types: chat, phone_caller_inbound, phone_caller_outbound, image_analyzer
Chat Scenario Management
# List scenarios in a workflow
testmu-a2a scenarios list --workflow <workflow_id> --project <project_id>
# Create a custom scenario
testmu-a2a scenarios create \
--workflow <workflow_id> \
--project <project_id> \
--title "Edge case: empty input" \
--description "Test how agent handles empty messages" \
--persona "confused user"
# Delete scenarios
testmu-a2a scenarios delete \
--workflow <workflow_id> \
--project <project_id> \
--ids "<scenario_id_1>,<scenario_id_2>"
# Export to CSV
testmu-a2a scenarios export \
--workflow <workflow_id> \
--project <project_id> \
--output scenarios.csv
# Import from CSV
testmu-a2a scenarios import \
--workflow <workflow_id> \
--project <project_id> \
--file scenarios.csv
# Download CSV template
testmu-a2a scenarios template --workflow <workflow_id>
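A scenarios CSV can also be generated programmatically before import. The column names below are hypothetical; fetch the authoritative header row with the template command above before importing for real:

```python
import csv
import io

# Hypothetical columns for a scenarios CSV; verify against the CLI template.
rows = [
    {"title": "Empty input", "description": "User sends a blank message",
     "persona": "confused user"},
    {"title": "Refund request", "description": "User asks for a refund",
     "persona": "frustrated"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "description", "persona"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # title,description,persona
```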
Phone Scenario Management
# List phone scenarios
testmu-a2a phone-scenarios list --project <project_id>
# Generate inbound scenarios
testmu-a2a phone-scenarios generate \
--project <project_id> \
--count 10 \
--personas "frustrated,confused,elderly" \
--instructions "Focus on billing and refund scenarios"
# Generate outbound scenarios
testmu-a2a phone-scenarios generate \
--project <project_id> \
--count 5 \
--type outbound \
--personas "busy,skeptical"
# Create manually
testmu-a2a phone-scenarios create \
--project <project_id> \
--title "Angry customer wants refund" \
--description "Customer received wrong item, demands immediate refund" \
--persona "angry"
# Edit a scenario
testmu-a2a phone-scenarios edit \
--project <project_id> \
--id <scenario_id> \
--title "Updated title" \
--persona "frustrated"
# Delete scenarios
testmu-a2a phone-scenarios delete \
--project <project_id> \
--ids "<scenario_id_1>,<scenario_id_2>"
# Bulk import and CSV template
testmu-a2a phone-scenarios import --project <project_id> --file scenarios.csv
testmu-a2a phone-scenarios template --project <project_id>
Suite Management
Suites group scenarios for repeatable test runs.
# List suites
testmu-a2a suites list --project <project_id>
# Create a suite (simple)
testmu-a2a suites create \
--project <project_id> \
--name "Regression Suite" \
--scenarios "<scenario_id_1>,<scenario_id_2>,<scenario_id_3>"
# Create a suite with per-scenario call config from YAML
testmu-a2a suites create \
--project <project_id> \
--name "Regression Suite" \
--from-file suite.yaml
# Run a suite (triggers all calls)
testmu-a2a suites run --project <project_id> --name "Regression Suite"
# Get suite overview with pass rates
testmu-a2a suites overview --project <project_id>
# Update a suite name or scenarios (simple)
testmu-a2a suites update \
--id <suite_id> \
--name "Updated Suite Name" \
--scenarios "<scenario_id_1>,<scenario_id_4>"
# Update with per-scenario call config from YAML
testmu-a2a suites update \
--id <suite_id> \
--from-file suite.yaml
Schedule Management
Automate recurring test runs.
# List schedules
testmu-a2a schedules list --project <project_id>
# Create daily schedule
testmu-a2a schedules create \
--project <project_id> \
--suite <suite_id> \
--frequency daily \
--time 09:00
# Create weekly schedule
testmu-a2a schedules create \
--project <project_id> \
--suite <suite_id> \
--frequency weekly \
--days mon,wed,fri \
--time 14:00
# Trigger a schedule immediately
testmu-a2a schedules trigger <schedule_id>
# Update a schedule
testmu-a2a schedules update <schedule_id> --frequency daily --time 10:00
# Delete a schedule
testmu-a2a schedules delete <schedule_id>
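To sanity-check when a weekly schedule should next fire, here is a stdlib sketch; the CLI's own scheduling semantics (timezone handling in particular) may differ:

```python
from datetime import datetime, timedelta

DAYS = {"mon": 0, "tue": 1, "wed": 2, "thu": 3, "fri": 4, "sat": 5, "sun": 6}

def next_run(now: datetime, days: str, time: str) -> datetime:
    """Next datetime matching --days/--time flags, strictly after `now`."""
    hour, minute = map(int, time.split(":"))
    wanted = {DAYS[d] for d in days.split(",")}
    for offset in range(8):  # at most one full week ahead
        candidate = (now + timedelta(days=offset)).replace(
            hour=hour, minute=minute, second=0, microsecond=0)
        if candidate.weekday() in wanted and candidate > now:
            return candidate
    raise ValueError("no matching day")

now = datetime(2024, 5, 6, 15, 0)  # a Monday at 15:00
print(next_run(now, "mon,wed,fri", "14:00"))  # 2024-05-08 14:00:00 (Wednesday)
```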
Call Results
# List results by project
testmu-a2a call-results list --project <project_id>
# List results by suite
testmu-a2a call-results list --suite <suite_id>
# Get detailed call result with transcript
testmu-a2a call-results get <call_id>
# Get result with audio info
testmu-a2a call-results get <call_id> --audio
# Suite summary with aggregated scores
testmu-a2a call-results summary <suite_id>
# Bookmark a call result
testmu-a2a call-results bookmark <result_id>
# Remove bookmark
testmu-a2a call-results bookmark <result_id> --remove
# List bookmarked results
testmu-a2a call-results bookmarked --suite <suite_id>
Chat Evaluation Results
# View results from a previous test run
testmu-a2a results <workflow_id> --project <project_id>
# Output as JSON
testmu-a2a results <workflow_id> --project <project_id> --format json
# Output as JUnit XML
testmu-a2a results <workflow_id> --project <project_id> --format junit --output results.xml
Recording Analysis
Upload and analyze existing call recordings.
# Upload recordings
testmu-a2a recordings upload --project <project_id> --files call1.mp3,call2.wav
# Analyze a recording
testmu-a2a recordings analyze <recording_id>
# View analysis result
testmu-a2a recordings result <recording_id>
# View transcript
testmu-a2a recordings transcript <recording_id>
# List all recordings
testmu-a2a recordings list --project <project_id>
# View available metrics
testmu-a2a recordings metrics
# Bookmark/unbookmark
testmu-a2a recordings bookmark <recording_id>
testmu-a2a recordings bookmark <recording_id> --remove
# Delete a recording
testmu-a2a recordings delete <recording_id>
Profile Management
Test Profiles (test data)
testmu-a2a profiles test list --project <project_id>
testmu-a2a profiles test get --project <project_id> --id <profile_id>
testmu-a2a profiles test create \
--project <project_id> \
--name "Premium User" \
--data '{"name": "John Doe", "plan": "premium", "account_id": "ACC123"}'
testmu-a2a profiles test delete --project <project_id> --ids "<id_1>,<id_2>"
Agent Profiles
testmu-a2a profiles agent list
testmu-a2a profiles agent create \
--name "Support Agent v2" \
--data '{"agent_type": "support", "version": "2.0"}'
Endpoint Profiles
testmu-a2a profiles endpoint list --project <project_id>
testmu-a2a profiles endpoint create \
--project <project_id> \
--name "Production Endpoint" \
--data '{"url": "https://api.example.com/chat", "method": "POST"}'
Threshold Configuration
Set pass/fail criteria for evaluations.
# Get current thresholds
testmu-a2a thresholds get --project <project_id> --type chat
testmu-a2a thresholds get --project <project_id> --type phone
# Set thresholds
testmu-a2a thresholds set \
--project <project_id> \
--type chat \
--config '{"accuracy": 0.85, "relevance": 0.80, "coherence": 0.80}'
testmu-a2a thresholds set \
--project <project_id> \
--type phone \
--config '{"resolution_rate": 0.90, "avg_response_time": 2.0}'
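A thresholds config plausibly gates a run by requiring every metric to meet its minimum. The comparison logic here is a sketch, not the CLI's actual evaluator:

```python
import json

# The chat thresholds config from the command above, plus made-up scores.
config = json.loads('{"accuracy": 0.85, "relevance": 0.80, "coherence": 0.80}')
scores = {"accuracy": 0.91, "relevance": 0.78, "coherence": 0.88}

# Fail if any metric scores below its configured minimum.
failing = {m for m, minimum in config.items() if scores.get(m, 0) < minimum}
verdict = "PASS" if not failing else "FAIL"
print(verdict, sorted(failing))  # FAIL ['relevance']
```

Note that a lower-is-better metric such as avg_response_time in the phone config would need the comparison inverted.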
Go-Live Assessments
Get a production-readiness verdict for your agent.
# Generate assessment
testmu-a2a assessments create --project <project_id> --type phone
testmu-a2a assessments create --project <project_id> --type chat
# Get latest assessment
testmu-a2a assessments get --project <project_id> --type phone
# View assessment history
testmu-a2a assessments history --project <project_id> --type phone
Voice Library
# List voices (default provider: azure)
testmu-a2a voices list
# Filter by provider
testmu-a2a voices list --provider azure
testmu-a2a voices list --provider 11labs
testmu-a2a voices list --provider google
# Filter by language (azure only)
testmu-a2a voices list --provider azure --language es # Spanish
testmu-a2a voices list --provider azure --language hi # Hindi
testmu-a2a voices list --provider azure --language multi # Multilingual
testmu-a2a voices list --provider azure --language all # All languages
# Filter by target platform
testmu-a2a voices list --target bolna
testmu-a2a voices list --provider 11labs --target pipecat
# JSON output
testmu-a2a voices list --format json
testmu-a2a voices list --provider 11labs --format json
Use the Name column value as providerId when configuring per-scenario voice in a suite.
Personas
# List custom personas
testmu-a2a personas list --org <org_id>
# Create a custom persona
testmu-a2a personas create \
--org <org_id> \
--name "Impatient Executive" \
--description "A busy executive who expects quick, direct answers with no filler"
Built-in personas are always available: neutral, frustrated, confused, elderly, tech-savvy, rushed, and 25+ more.
Phone Numbers
# List configured numbers
testmu-a2a phone-numbers list --org <org_id>
# Add a phone number
testmu-a2a phone-numbers create \
--org <org_id> \
--data '{"phoneNumber": "+15551234567", "name": "Support Line"}'
# Delete a phone number
testmu-a2a phone-numbers delete --org <org_id>
System
Health Check
testmu-a2a health
testmu-a2a health info # detailed system info
testmu-a2a health agents # list available agent types
Credits
testmu-a2a credits # balance summary
testmu-a2a credits totals # detailed breakdown
CI/CD Integration
GitHub Actions
name: Agent Tests
on: [push]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install TestMu A2A CLI
run: pip install testmu-a2a-cli
- name: Run agent tests
env:
TESTMU_USERNAME: ${{ secrets.TESTMU_USERNAME }}
TESTMU_ACCESS_KEY: ${{ secrets.TESTMU_ACCESS_KEY }}
run: |
testmu-a2a test \
--agent ${{ vars.AGENT_ENDPOINT }} \
--spec "Customer support chatbot" \
--count 10 \
--format junit \
--output results.xml
- name: Publish results
uses: dorny/test-reporter@v1
if: always()
with:
name: Agent Test Results
path: results.xml
reporter: java-junit
Exit Codes
| Code | Meaning |
|---|---|
| 0 | All tests passed |
| 1 | One or more tests failed, or a command error occurred |
Output Formats
| Format | Flag | Use Case |
|---|---|---|
| table | --format table | Human-readable terminal output |
| json | --format json | Programmatic consumption, piping |
| junit | --format junit | CI/CD test reporters |
Write output to a file with --output <path>:
testmu-a2a test --agent <url> --format junit --output results.xml
testmu-a2a test --agent <url> --format json --output results.json
Global Options
| Flag | Description |
|---|---|
| --version, -V | Show CLI version |
| --help, -h | Show help for any command |
| --install-completion | Install shell completion |
Shell completion works with bash, zsh, and fish:
testmu-a2a --install-completion