
A utility to report pytest results to Jira and Slack

Project description

jira-test-reporting

Description

This repository contains utility scripts for

  • reporting automated test results to Jira
  • sending a Slack notification with run statistics and a Jira URL that shows the failed tests for the current test run id (trid)

It is specifically designed for use in CI/CD pipelines such as Bitbucket Pipelines.

Prerequisites

  • Python: Version 3.12 or higher.
  • JSON Report: The calling project must generate a pytest JSON report. How to create a pytest JSON report
  • Jira Access: A Jira instance with API token authentication. How to create a Jira API token
  • Slack Webhook: A Slack webhook URL for sending notifications. How to create a Slack incoming webhook
  • Configuration File: A _env_configs/third_party.conf file with Jira and Slack settings.
  • AI LLM Access (for detailed slack notifications): For generating AI summaries, you need access to either:
    • Ollama with the qwen3-coder:30b model
    • OpenAI API access with GPT-4 model
    • Google Gemini API access
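
For orientation, the JSON report mentioned above has (among other keys) a "tests" list with one entry per test. The payload below is a trimmed, hypothetical example (real pytest-json-report files carry additional keys such as "created", "duration", and "summary"), with a small helper that tallies outcomes from it:

```python
# Trimmed, hypothetical pytest-json-report payload.
report = {
    "tests": [
        {"nodeid": "api_tests/Test_Pilot/test_login.py::TestLogin::test_ok",
         "outcome": "passed"},
        {"nodeid": "api_tests/Test_Pilot/test_login.py::TestLogin::test_bad",
         "outcome": "failed"},
    ]
}

def count_outcomes(report: dict) -> dict:
    """Tally outcomes from the 'tests' list of a pytest JSON report."""
    counts: dict = {}
    for test in report["tests"]:
        counts[test["outcome"]] = counts.get(test["outcome"], 0) + 1
    return counts

print(count_outcomes(report))  # {'passed': 1, 'failed': 1}
```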

Installation

pip install jira-test-reporting

Jira project preparation

Create a new Jira project and configure the issue type "Task" with the following fields.

The script uses the following custom fields in Jira tasks:

  • Test Environment: Field Type - Dropdown. Important: pre-populate the values.
  • Test Area: Field Type - Dropdown. Important: pre-populate the values.
  • Test Type: Field Type - Labels
  • Test Run: Field Type - Short Text
  • Test Tags: Field Type - Labels
  • Test Status: Field Type - Dropdown. Important: pre-populate the values.
  • TRID: Field Type - Short Text

Important Instructions

  • In the pytest JSON report, check the block "nodeid": "api_tests/Test_Pilot/test_jira_reporting_scenarios.py::Test_JIRA_Reporting_Scenarios::test_jira_reporting_test_passed" -- api_tests must be pre-populated under the Test Type field options, and Test_Pilot must be pre-populated under the Test Area field options.
  • Similarly, in the pytest JSON report, check the block "outcome": "passed" -- Passed must be pre-populated under the Test Status field options. For this field, the values must be pre-populated in title case.
  • Also make sure that in your Jira project, the issue type "Task" has the default fields Description and Status.
  • In the caller project, create _env_configs/third_party.conf file with the following structure:
    [DEFAULT]
    jira_field_id_test_env = customfield_10208
    jira_field_id_test_area = customfield_10236
    jira_field_id_test_type = customfield_10301
    jira_field_id_test_run_name = customfield_10205
    jira_field_id_test_tags = customfield_10202
    jira_field_id_test_status = customfield_10235
    jira_field_id_test_run_ids = customfield_11500
    scm_url_variable = BITBUCKET_GIT_HTTP_ORIGIN
    scm_build_number_variable = BITBUCKET_BUILD_NUMBER
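
The nodeid-to-field mapping described above can be sketched as follows. jira_fields_from_nodeid is a hypothetical helper, and the path layout is assumed to match the example nodeid; adjust the indices if your layout differs:

```python
def jira_fields_from_nodeid(nodeid: str, outcome: str) -> dict:
    """Derive the dropdown/label values from a pytest nodeid and outcome."""
    path = nodeid.split("::")[0]          # 'api_tests/Test_Pilot/test_....py'
    parts = path.split("/")
    return {
        "test_type": parts[0],            # must exist as a Test Type option
        "test_area": parts[1],            # must exist as a Test Area option
        "test_status": outcome.title(),   # 'passed' -> 'Passed' (title case)
    }

nodeid = ("api_tests/Test_Pilot/test_jira_reporting_scenarios.py"
          "::Test_JIRA_Reporting_Scenarios::test_jira_reporting_test_passed")
fields = jira_fields_from_nodeid(nodeid, "passed")
```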
    

The script also uses the following default fields in Jira tasks:

  • project - reflects jira_project_key
  • summary - test_name
  • description - failure or passing description
  • status - reflects test_status as in jira_field_id_test_status
  The values for the fields above are fetched directly from the json_report.
  New Jira tasks are created for tests that do not yet exist; existing tests are updated.
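
The create-or-update behavior can be illustrated with a small decision helper. This is only a sketch: decide_action and the summary-to-key mapping are hypothetical, and a real implementation would look up existing tasks in Jira (e.g. via a JQL search on the project):

```python
def decide_action(test_name: str, existing: dict) -> tuple:
    """Return ('update', issue_key) if a task already exists for this test,
    otherwise ('create', None). 'existing' maps test summaries to issue keys."""
    key = existing.get(test_name)
    return ("update", key) if key else ("create", None)

# Hypothetical mapping of already-reported tests to their Jira issue keys.
existing = {"test_login_ok": "TQER-101"}
print(decide_action("test_login_ok", existing))   # ('update', 'TQER-101')
print(decide_action("test_signup", existing))     # ('create', None)
```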

Examples

Jira Reported Tests Example

image

Slack Notification Example - Regular

API Test Results
──────────────
🚀 *Test Run:* Release-X
🌎 *Environment:* Staging
❌ *Failed:* 4
──────────────
🧪 *Total Tests:* 148
✅ *Passed:* 143
🔄 *Executed:* 147
⏸️ *Skipped:* 1
📈 Click to open Test Report in Jira
📡 FYA: @User1 @User2
Execution Date: May-23-2025

Slack Notification Example - Detailed with AI Summary

AI SOC - Reliability Check - Dev
──────────────
1. *`Failed`*: Test User Authentication
    *RCA*: Authentication failed due to invalid credentials. User not found in database. Root cause: Missing user records in test environment.
    Case ID: AUTH-123 - User not found
    Case ID: AUTH-124 - Invalid password format

2. *`Failed`*: Test Data Validation
    *RCA*: Data validation failed for invalid input types. Root cause: Incorrect type validation implementation.

3. *`Passed`*: Test API Response Time
    _More Info_: Response time 200ms meets SLA requirements

──────────────
- *Total:* 148 | *Failed:* 4 | *Passed:* 143
- *Reliability:* 97.3 (%)
- Click to view <https://my-jira-team.atlassian.net/issues/?jql=project%20%3D%20TQER%20AND%20%22trid%5BShort%20text%5D%22%20~%20%2212345%22%20AND%20%22test%20status%5BDropdown%5D%22%20IN%20(Failed%2C%20Skipped)%20ORDER%20BY%20status%20ASC|Failed/Skipped> tests in Jira
Execution Date: _May-23-2025 at 14:30_
_*🌳 Generated with Quality AIngineering - ΣβΔΨ*_
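
The counts and "Reliability" figures in the examples above are consistent with using executed tests (total minus skipped) as the denominator. A sketch of that arithmetic, plus building a Jira filter link the way the detailed example shows (the project key, TRID value, and field names are assumptions taken from that example):

```python
from urllib.parse import quote

total, passed, failed, skipped = 148, 143, 4, 1
executed = total - skipped                       # 147 tests actually ran
reliability = round(passed / executed * 100, 1)  # 97.3, as in the example

# Hypothetical JQL matching the link in the detailed example.
jql = ('project = TQER AND "trid[Short text]" ~ "12345" '
       'AND "test status[Dropdown]" IN (Failed, Skipped) ORDER BY status ASC')
url = "https://my-jira-team.atlassian.net/issues/?jql=" + quote(jql)
```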

Usage

Standalone

  1. Ensure the parameters in _env_configs/third_party.conf have valid values
  2. Export environment variables as follows:
export jira_host_url=https://my-jira-team.atlassian.net
export jira_username=whoami@my-jira-team.com
export jira_password=XXXXXXXXXXXXXXXXXXXXX
export jira_project_key=TQER
export slack_dev_channel_webhook=https://hooks.slack.com/services/AAAAAA/BBBBBBB/CCCCCCCCC
export slack_prod_channel_webhook=https://hooks.slack.com/services/AAAAAA/BBBBBBB/CCCCCCCCC
export slack_test_webhook=https://hooks.slack.com/services/AAAAAA/BBBBBBB/CCCCCCCCC
# For AI Summary support:
export QE_AGENT_BASE_URL=http://localhost:11434
export QE_AGENT_API_KEY=your-ollama-api-key
export LLM_PROVIDER=ollama  # or 'openai' or 'gemini'
  3. Run the script with command-line arguments to process a pytest report:
python -m jira_test_reporting.test_results_processor --test-env=Dev --test-run=Release-X --report=test-reports/pytest_report.json --notify-slack=yes --slack-message-type=detailed

CI-CD hooked example (this copies required files into your test_automation directory)

Assuming you have

  • set Repository Variables (as in step #2 of the standalone setup above) in your SCM tool. (How to: Bitbucket, GitHub)
  • configured a pipeline in your SCM tool, or a shell script in the caller project, to execute the tests.
#!/bin/bash
# -----------------------------------------------------------------------------------------
# Test Execution
# -----------------------------------------------------------------------------------------
pip install -r requirements.txt > /dev/null 2>&1
# ... your test execution code here ...
pytest -s --tb=no --no-header api_tests --testenv="$TEST_ENV" --json-report -v --json-report-indent=4 --json-report-omit collectors setup teardown --json-report-file=./test-reports/pytest_report.json
# JUST ADD THE FOLLOWING CODE BLOCK to report the issues
# -----------------------------------------------------------------------------------------
# Report test results to Jira
# -----------------------------------------------------------------------------------------
echo "Reporting test results into Jira and notifying Slack"
if [ -n "$TEST_RUN_NAME" ]; then
    python -m jira_test_reporting.test_results_processor --test-env="$TEST_ENV" --test-run="$TEST_RUN_NAME" --slack-message-type=detailed
else
    python -m jira_test_reporting.test_results_processor --test-env="$TEST_ENV" --slack-message-type=detailed
fi

Arguments

  • --test-env: Test environment (default: Dev). Examples: --test-env=dev, --test-env=stage.
  • --test-run: Test run identifier (default: Daily Run). Examples: --test-run=Release-X, --test-run="Regression Tests".
  • --report: Test report file path (default: test-reports/pytest_report.json). Examples: --report=my-test-reports/my-pytest_report.json
  • --notify-slack: Whether to send a notification to Slack (default: yes). Examples: --notify-slack=yes, --notify-slack=no
  • --comments-cleanup: Whether to clean up the comments on the test issues once they have accumulated. Examples: --comments-cleanup=yes, --comments-cleanup=no
  • --slack-message-type: Type of slack message to send. Options are:
    • regular (default): Basic summary with counts and links
    • detailed: Rich summary with AI-generated insights and RCA (requires LLM access)
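
A parser matching these documented arguments might look like the sketch below. The actual CLI inside jira_test_reporting may differ, and the default for --comments-cleanup is not documented, so "no" is assumed here:

```python
import argparse

parser = argparse.ArgumentParser(description="Report pytest results to Jira/Slack")
parser.add_argument("--test-env", default="Dev")
parser.add_argument("--test-run", default="Daily Run")
parser.add_argument("--report", default="test-reports/pytest_report.json")
parser.add_argument("--notify-slack", default="yes", choices=["yes", "no"])
parser.add_argument("--comments-cleanup", default="no",  # default assumed
                    choices=["yes", "no"])
parser.add_argument("--slack-message-type", default="regular",
                    choices=["regular", "detailed"])

args = parser.parse_args(["--test-env=stage", "--slack-message-type=detailed"])
```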

AI Summary Generation

When using --slack-message-type=detailed, the system will generate an AI summary with the following characteristics:

  1. AI Summary Format:

    • Lists each failed test with its status and name
    • Provides Root Cause Analysis (RCA) for failed tests (max 40 words)
    • Extracts case IDs from error details (max 3 IDs with reasons)
    • Includes "More Info" section for passed tests with key information (max 50 words)
  2. AI Providers Supported:

    • Ollama: Requires QE_AGENT_BASE_URL and QE_AGENT_API_KEY environment variables, and model qwen3-coder:30b
    • OpenAI: Requires QE_AGENT_API_KEY environment variable and GPT-4 model
    • Gemini: Requires QE_AGENT_API_KEY environment variable and Gemini model
  3. Configuration:

    • Set LLM_PROVIDER environment variable to specify which provider to use
    • If not set, the default is ollama
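
The provider selection rule above can be sketched like this; resolve_llm_provider is a hypothetical helper mirroring the default-to-ollama behavior:

```python
SUPPORTED_PROVIDERS = {"ollama", "openai", "gemini"}

def resolve_llm_provider(env: dict) -> str:
    """Pick the LLM provider from LLM_PROVIDER, defaulting to 'ollama'."""
    provider = env.get("LLM_PROVIDER", "ollama").lower()
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
    return provider

print(resolve_llm_provider({}))                          # ollama
print(resolve_llm_provider({"LLM_PROVIDER": "openai"}))  # openai
```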

Troubleshooting

  • Jira Connection Errors:
    • Verify the jira_host_url, jira_username, and jira_password environment variables.
    • Ensure the API token is valid and has "Create Issues" and "Edit Issues" permissions.
  • Slack Notification Failure:
    • Check the webhook URL environment variables.
    • Ensure the Slack app is configured to allow incoming webhooks.
  • Pytest Report Issues:
    • Confirm test-reports/pytest_report.json exists and contains valid JSON.
  • Custom Field Errors:
    • Validate field IDs and allowed values in Jira Admin > Issues > Custom Fields.
  • AI Summary Errors:
    • Verify LLM access credentials
    • Ensure the selected LLM provider is accessible and has proper model availability

Contributing

Please read CONTRIBUTE.md

License

This project is licensed under the MIT License. See LICENSE for details.


Download files


Source Distribution

jira_test_reporting-1.6.5.tar.gz (17.7 kB)

Uploaded Source

Built Distribution


jira_test_reporting-1.6.5-py3-none-any.whl (15.5 kB)

Uploaded Python 3

File details

Details for the file jira_test_reporting-1.6.5.tar.gz.

File metadata

  • Download URL: jira_test_reporting-1.6.5.tar.gz
  • Size: 17.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for jira_test_reporting-1.6.5.tar.gz
Algorithm Hash digest
SHA256 d9884e3b274677285e9e8683fa249fa3c8166f9a9beff69785a6dc758b50565b
MD5 fdd709c803d290c62de5bec258c5b782
BLAKE2b-256 8a09c48d307cefe69ac0642adb81aa539a72ccc48533be52999cf8221dd10b4e


File details

Details for the file jira_test_reporting-1.6.5-py3-none-any.whl.

File metadata

File hashes

Hashes for jira_test_reporting-1.6.5-py3-none-any.whl
Algorithm Hash digest
SHA256 cd52d7ade148f8bdd843d093973d88584b569c60f160d024d1c2d58afcc39c6b
MD5 8242dcdfc035eef4010b647ca351e956
BLAKE2b-256 79704ae9e58d20ee2ad1d8e062aa0e852ff998e5cf1d472534840643b9884839

