Continuous Delivery Performance Promotion Tool
Evaluate and promote builds on a CI/CD platform based on performance.
The Continuous Delivery Performance Promotion Tool is a Python program that evaluates whether an application is performing well enough to move on to the next stage of a continuous delivery pipeline. Users define how their application should perform in a simple JSON configuration file, and the tool evaluates the application’s actual performance against those criteria. The program currently supports AppDynamics, BlazeMeter, and WebPageTest, and aims to support other load testing tools, such as Silk Performer and the Visual Studio Test Suite, in the future.
Installing via Pip
- Run `pip install cd_perf_promotion`
- You’re done!
Installing from Source
- Make sure that you have the latest versions of Python and Pip installed and that you can run both from your command-line interface (CLI).
- Download the source code and navigate to it using your CLI.
- Inside the downloaded cd_perf_promotion directory, run `python setup.py install`.
- The installer will download the necessary dependencies via Pip and install the program. You will then be able to run it from your CLI with the `cdperfpromotion` command.
- You’re done!
Defining Your Promotion Criteria
Your `config.json` file contains all of the configuration information that the tool needs to retrieve data from your performance tools and evaluate whether your application meets your performance standards. We’ve provided some sample configuration files (located in `documentation/sample_configs/input`) to help you get started. A complete list of the data items that can be used to evaluate your application’s performance, along with explanations of what each one means, is available in the `dictionary.md` file inside the `documentation` directory.
We’ve put a lot of work into making the program modular and customizable, so you don’t have to include every available data item in the configuration file. Instead, include only the tools you are using and the data items you care about; anything not included will not be evaluated. Note that you must include the configuration information for each tool that gathers the data items you define. For example, if you want to evaluate your application’s average response time, you must also include a BlazeMeter section with a BlazeMeter API key and test ID.
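As an illustrative sketch of the structure described above, a minimal configuration that evaluates only a BlazeMeter average response time might look like the following. The key names here are assumptions for illustration; consult `dictionary.md` and the samples in `documentation/sample_configs/input` for the authoritative field names.

```json
{
  "blazemeter": {
    "api_key": "your-blazemeter-api-key",
    "test_id": "12345",
    "response_time_avg": 500
  }
}
```

Because no other tools or data items appear in this file, only the BlazeMeter average response time would be evaluated.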
Evaluating the Results
The `promotion_gates` JSON object in the output file contains the high-level pass/fail result for each data item against its target in the configuration file. If any transaction or run fails to meet the predefined performance target for a data item, the entire data item is marked as failed in the `promotion_gates` section. When a data item fails, you can look at that data item’s parent tool section in the output file to see where it failed and what the actual result was.
For example, the sample configuration file `config_all.json.sample` specifies that there must not be any AppDynamics health rule violations with a severity of warning (`"warning": true`) or critical (`"critical": true`). The sample output file, `cdperfpromodata_timestamp_all.json.sample`, states in the `promotion_gates` section that this data item failed (`"appdynamics_health": false`). Knowing that, we can look at the `appdynamics` section, which reveals that the application being evaluated has a health rule violation with a severity of warning, notifying us that the application is using too much memory.
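To make that walkthrough concrete, a trimmed output file in the shape described above might look like the sketch below. The field names and violation details are illustrative assumptions; the actual format is shown in the sample output files shipped with the tool.

```json
{
  "promotion_gates": {
    "appdynamics_health": false
  },
  "appdynamics": {
    "healthrule_violations": [
      {
        "severity": "WARNING",
        "description": "Memory utilization is too high"
      }
    ]
  }
}
```

Reading top-down, the `promotion_gates` section tells you *that* the AppDynamics health gate failed, and the `appdynamics` section tells you *why*.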
It’s important to note that if any of the data items specified in the configuration file fails, the build fails as a whole and will not be promoted to the next stage.