Tools for serialising test results to SQL database

TestArchiver

TestArchiver is a tool for archiving your test results to a SQL database.

Epimetheus is the tool for browsing the results you archived.

Testing framework support

| Framework       | Status       | Fixture test status | Parser option |
|-----------------|--------------|---------------------|---------------|
| Robot Framework | Supported    | Done                | robot         |
| Mocha           | Supported    | Done                | mocha-junit   |
| pytest          | Supported    | Done                | pytest-junit  |
| PHPUnit         | Supported    | Done                | php-junit     |
| JUnit           | Experimental | Missing             | junit         |
| xUnit           | Experimental | Missing             | xunit         |
| MSTest          | Experimental | Missing             | mstest        |

Experimental status means that a parser exists that can take in e.g. generic JUnit-formatted output, but there is no specific fixture test set, extensive testing, or active development for that parser.

Contributions for output parsers or listeners for different testing frameworks are appreciated. Contributing simply a fixture test set (that can be used to generate output files for developing a specific parser) is extremely helpful for any new framework.

Installation

sudo -H python3 -m pip install testarchiver

Supported databases

SQLite

SQLite is the default database for the archiver and is mainly useful for testing and demo purposes. The sqlite3 driver is part of the Python standard library, so there are no additional dependencies for trying out the archiver.

PostgreSQL

PostgreSQL is the currently supported database for real projects. For example, the Epimetheus service uses a PostgreSQL database. For accessing PostgreSQL databases the script uses the psycopg2 module: pip install psycopg2-binary

Basic usage

The output files from different testing frameworks can be parsed into a database using the testarchiver script (test_archiver/output_parser.py in the repository).

testarchiver --database test_archive.db output.xml

Assuming that output.xml is an output file generated by Robot Framework (the default parser option), this will create a SQLite database file named test_archive.db that contains the results.
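The resulting SQLite file can be inspected with Python's standard library. A minimal sketch, assuming test_archive.db was created by the command above (the table names listed depend on the current schema):

```python
import sqlite3

# Open the archive created by testarchiver and list the tables
# that the current schema defines.
conn = sqlite3.connect("test_archive.db")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
conn.close()
```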

For a list of other options, run testarchiver --help:

positional arguments:
  output_files          list of test output files to parse in to the test
                        archive

optional arguments:
  -h, --help            show this help message and exit
  --version, -v         show program's version number and exit
  --config CONFIG_FILE  Path to JSON config file containing database
                        credentials and other configurations. Options given on
                        command line will override options set in a config
                        file.
  --format {robot,robotframework,xunit,junit,mocha-junit,pytest-junit,mstest,php-junit}
                        output format (default: robotframework)
  --repository REPOSITORY
                        The repository of the test cases. Used to
                        differentiate between test with same name in different
                        projects.
  --team TEAM           Team name for the test series
  --series SERIES       Name of the test series (and optionally build number
                        'SERIES_NAME#BUILD_NUM' or build id
                        'SERIES_NAME#BUILD_ID')
  --metadata NAME:VALUE
                        Adds given metadata to the test run. Expected format:
                        'NAME:VALUE'

Database connection:
  --dbengine DB_ENGINE  Database engine, postgresql or sqlite (default)
  --database DATABASE   database name
  --host HOST           database host name
  --user USER           database user
  --pw PASSWORD, --password PASSWORD
                        database password
  --port PORT           database port (default: 5432)
  --dont-require-ssl    Disable the default behavior to require ssl from the
                        target database.

Schema updates:
  --allow-minor-schema-updates
                        Allow TestArchiver to perform MINOR (backwards
                        compatible) schema updates to the test archive
  --allow-major-schema-updates
                        Allow TestArchiver to perform MAJOR (backwards
                        incompatible) schema updates to the test archive

Limit archived data:
  --no-keywords         Do not archive keyword data
  --no-keyword-stats    Do not archive keyword statistics
  --ignore-logs-below {TRACE,DEBUG,INFO,WARN}
                        Sets a cut off level for archived log messages. By
                        default archives all available log messages.
  --ignore-logs         Do not archive any log messages

Adjust timestamps:
  --time-adjust-secs TIME_ADJUST_SECS
                        Adjust time in timestamps by given seconds. This can
                        be used to change time to utc before writing the
                        results to database, especially if the test system
                        uses local time, such as robot framework. For example
                        if test were run in Finland (GMT+3) in summer (+1hr),
                        calculate total hours by minutes and seconds and
                        invert to adjust in correct direction, i.e.
                        -(3+1)*60*60, so --time-adjust-secs -14400. This
                        option is useful if you are archiving in a different
                        location to where tests are run. If you are running
                        tests and archiving in same timezone, time-adjust-
                        with-system-timezone may be a better option. This
                        option may be used in conjunction with --time-adjust-
                        with-system-timezone if desired.
  --time-adjust-with-system-timezone
                        Adjust the time in timestamps by the system timezone
                        (including daylight savings adjust). If you are
                        archiving tests in the same timezone as you are
                        running tests, setting this option will ensure time
                        written to the database is in UTC/GMT time. This
                        assumes that if multiple computers are used that their
                        timezone and daylight savings settings are identical.
                        Take care also that you do not run tests just before a
                        daylight savings time adjust and archive just after,
                        as times will be out by one hour. This could easily
                        happen if long running tests cross a timezone adjust
                        boundary. This option may be used in conjunction with
                        --time-adjust-secs.

ChangeEngine:
  --change-engine-url CHANGE_ENGINE_URL
                        Starts a listener that feeds results to ChangeEngine
  --execution-context EXECUTION_CONTEXT
                        To separate data from different build pipelines for
                        ChangeEngine prioritization. Example if same changes
                        or tests may be used to verify app in Android and iOS
                        platforms, then it would be good to separate the
                        result from different builds pipelines/platforms. The
                        ChangeEngine prioritization might not give correct
                        result if different results from different platforms
                        are mixed together.
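The --config option accepts a JSON file holding the same options as the command line; values given on the command line override it. A hypothetical example (all values are illustrative, and the exact key names should be checked against the project documentation; here they are assumed to mirror the long option names):

```json
{
  "dbengine": "postgresql",
  "database": "test_archive",
  "host": "db.example.com",
  "port": 5432,
  "user": "archiver",
  "password": "secret",
  "series": "nightly#42",
  "team": "Team-A"
}
```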

Data model

Schema and data model (NOTICE: this points to latest version)

Useful metadata

Some metadata is useful to add with the results. Some testing frameworks allow adding metadata to test results, and for those frameworks (e.g. Robot Framework) it is recommended to add the metadata to the tests themselves so the same information is also available in the results. Additional metadata can be added when parsing the results using the --metadata option. Metadata given during parsing is linked to the top-level test suite.

--metadata NAME:VALUE

Test series and teams

In the data model, each test result file is represented as a single test run. These test runs are linked and organized into builds in different result series. Depending on the situation, a series can be e.g. a CI build job or a branch. By default, if no series is specified, the results are linked to a default series with auto-incrementing build numbers. Different test runs (from different testing frameworks or parallel executions) that belong together can be organized into the same build. Test series are additionally organized by team. Series name and build number/id are separated by #.

Some examples using the --series and --team options of testarchiver

  • --series ${JENKINS_JOB_NAME}#${BUILD_NUMBER}
  • --series "UI tests"#<commit hash>
  • --series ${CURRENT_BRANCH}#${BUILD_ID} --team Team-A
  • --series manually_run

Each build has a build number in the series. If a build number is specified, that number is used. If the build number/id is omitted, the build number is taken from the previous build in that series and incremented. If the build number/id is not a number, it is treated as a build identifier string. If that id is new to the series, the build number is incremented just as if no build number was specified. If the same build id is found in the same test series, the results are added under that same previously archived build.

If the tests are executed in a CI environment the build number/id is an excellent way to link the archived results to the actual builds.
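The build number resolution described above can be sketched in Python. This is an illustrative model, not TestArchiver's internals; it assumes the prior builds of a series are known as (build_number, build_id) pairs:

```python
def resolve_build_number(series_builds, build_ref=None):
    """Return (build_number, build_id) for a newly archived run.

    series_builds: list of (build_number, build_id) tuples already in
    the series; build_ref: the part after '#' in --series, if any.
    """
    if build_ref is not None and build_ref.isdigit():
        # A numeric reference is used directly as the build number.
        return int(build_ref), None
    if build_ref is not None:
        # A non-numeric reference is a build id: reuse the existing
        # build if that id was already archived in this series.
        for number, build_id in series_builds:
            if build_id == build_ref:
                return number, build_id
    # Otherwise increment the previous build number in the series.
    last = max((number for number, _ in series_builds), default=0)
    return last + 1, build_ref

builds = [(1, None), (2, "abc123")]
print(resolve_build_number(builds, "5"))       # (5, None)
print(resolve_build_number(builds, "abc123"))  # (2, 'abc123')
print(resolve_build_number(builds))            # (3, None)
```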

The series can also be indicated using metadata. Any metadata whose name is prefixed with series is interpreted as series information. This is especially useful when using listeners. For example, with Robot Framework: --metadata team:A-Team --metadata series:JENKINS_JOB_NAME#BUILD_NUMBER

Timestamp adjustment

Some test frameworks use local time in their timestamps. For archiving into databases this can be problematic if tests are viewed or run in different timezones. To address this, two ways to adjust the time back to GMT/UTC are provided.

The first allows the user to apply a fixed adjustment, in seconds, of their choosing. This is useful when tests were already run and the place/timezone where they were run is known, for example when you are archiving in a different location from where the tests are run. The time value provided as an option is added to the timestamp. Care must be taken with places where summer time applies (usually +1 hr).

For example, if tests were run in Finland (GMT+2, plus 1 hour in summer), calculate the total offset in hours, convert to seconds, and invert the sign to adjust in the correct direction, i.e. -(2+1)*60*60, so --time-adjust-secs -10800 in summer time, and -7200 otherwise.

The second provides automated adjustment based on the system timezone and daylight savings if it applies. This is useful if the tests and archiving are performed in the same place and time. It assumes that if multiple computers are used, their timezone and daylight savings settings are identical. Care must also be taken that tests are not run just before a daylight savings adjustment and archived just after, as times would be off by one hour. This could easily happen if long-running tests cross a timezone adjust boundary. This can be enabled with --time-adjust-with-system-timezone.
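A sketch of how such a system-timezone adjustment can be derived in Python; this is illustrative only, and TestArchiver's actual implementation may differ:

```python
import time

def system_adjust_secs():
    """Offset to add to a local timestamp to convert it to UTC.

    time.timezone/time.altzone give the local offset in seconds *west*
    of UTC, which is exactly the value to add to a local timestamp to
    reach UTC (e.g. a GMT+2 system yields -7200).
    """
    if time.daylight and time.localtime().tm_isdst:
        return time.altzone  # DST currently in effect
    return time.timezone

print(system_adjust_secs())
```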

The ArchiverRobotListener allows for the second option if its adjust_with_system_timezone argument is set to True.

To ensure any of the optional adjustments are traceable, two metadata values are added to the suite's test run. If --time-adjust-secs is set, time_adjust_secs with that value is written to the suite_metadata table. If the --time-adjust-with-system-timezone option is included, the sum of the time-adjust-secs value and the system timezone offset is written to the suite_metadata table as time_adjust_secs_total.

e.g. with the command line

output_parser.py --time-adjust-secs -3600 --time-adjust-with-system-timezone ...

the following values would be added to the suite_metadata table on a GMT+2 system:

  • time_adjust_secs with value -3600
  • time_adjust_secs_total with -10800.

This example mimics adding the daylight savings hour (1 hr = 3600 secs) on top of a system offset of 7200 secs (GMT+2), i.e. the computer being used has its daylight savings setting turned off and you want to add the hour manually during archiving.
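The arithmetic of this example, spelled out (the values come from the example above and are not universal):

```python
# Manual DST-hour adjustment combined with a GMT+2 system timezone.
manual_adjust_secs = -3600           # --time-adjust-secs -3600
system_offset_secs = -(2 * 60 * 60)  # GMT+2 system timezone -> -7200

time_adjust_secs = manual_adjust_secs                      # stored as-is
time_adjust_secs_total = manual_adjust_secs + system_offset_secs

print(time_adjust_secs)        # -3600
print(time_adjust_secs_total)  # -10800
```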

Release notes

  • 2.4.0 (2021-04-28)

    • Updates to support archiving Robot Framework 4.0
    • Fixes bug in execution path calculation for log messages
  • 2.3.0 (2021-03-16)

    • Better support for feeding changes and execution context for ChangeEngine
      • --changes for feeding changes information
      • --execution-id Identifier or version of the tested application for given execution-context. Stored in ChangeEngine and returned by last_update query.
  • 2.2.0 (2020-12-21)

    • Ability to adjust times as reported by timestamps in test results.
      • --time-adjust-secs allows for manual adjustment of the timestamps with given value
      • --time-adjust-with-system-timezone allows for automatic adjustment of timestamps by timezone and/or daylight savings.
    • Support for parsing PHPUnit output
    • ChangeEngine:
      • Listener ignores skipped tests
      • Adds test type to all parsers
      • --execution-context option for setting the execution context for the results
    • diff2change_context_list.py now uses the term change context instead of simply context for clarity
    • Adds Dockerfiles for an empty database and database with sample data generated from project's tests
  • 2.1.0 (2020-09-16)

    • New options for controlling archiving of keywords and log messages
      • --no-keywords for ignoring all keyword data
      • --no-keyword-stats for not collecting keyword statistics data
      • --ignore-logs for not collecting any log message data
      • --ignore-logs-below for ignoring all log messages below the given log level
  • 2.0.0 (2020-09-04)

    • Distributed as a pip package: pip3 install testarchiver installs:
      • testarchiver script (aka: output_parser.py)
      • testarchive_schematool script (aka: database.py)
      • diff2change_context_list.py script
      • test_archiver module
    • Functionality for managing schema updates
      • TestArchiver version has to match with the schema version
      • TestArchiver can apply schema updates when explicitly allowed
      • Updates are divided to major and minor updates
      • Minor updates are backwards compatible for applications reading the database
      • Major updates are backwards incompatible for applications reading the database
    • Major schema update #1:
      • Adds schema_updates table for recording schema updates
      • Adds schema_version column to test_run table to make the schema incompatible with old versions of TestArchiver
      • Adds index for log messages for query performance
    • Renamed output_parser.py cli option --change_engine_url to --change-engine-url
    • Record an execution path for test cases, suites and log messages.
      • The path explains the position of the item in its test run.
      • E.g. s1-s2-t3 means the third test in the second subsuite of the top suite.
    • Minor schema update #2:
      • Adds execution_path column to test_result, suite_result and log_message tables
    • Renamed Robot Framework listener ArchiverListener.py as ArchiverRobotListener.py for clarity
  • 1.2.0 (2020-08-18)

    • Important database integrity fix when using sqlite database
    • Record test criticality (Robot Framework specific)
    • Redesign of configurations management
      • Allows using both config file and command line arguments uniformly
      • CLI arguments override options set in config file
  • 1.1.3 (2020-06-09)

    • Performance fix for the schema existence check
    • Improved error messages:
      • Error when trying to archive results that have already been archived
      • Error when psycopg2 module is not found
