pytest-elk-reporter
A plugin to send pytest test results to ELK stack, with extra context data
Features
- Report each test result into Elasticsearch as the test finishes
- Automatically append context data to each test:
  - git information, such as branch and last commit
  - all of the CI environment variables:
    - Jenkins
    - Travis
    - Circle CI
    - Github Actions
  - username, if available
- Report a test summary to Elasticsearch for each session, with all the context data
- Append any user data to the context sent to Elasticsearch
Requirements
- tests written with pytest
Installation
You can install "pytest-elk-reporter" via pip from PyPI
pip install pytest-elk-reporter
ElasticSearch configuration
The auto_create_index setting needs to be enabled for the indexes that are going to be used, since the plugin has no code to create the indexes itself (enabled is the Elasticsearch default):
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"action.auto_create_index": "true"
}
}
'
For more info on this Elasticsearch feature, check their index documentation
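If you'd rather apply the same setting from Python (for example in a setup script), here is a minimal sketch using only the standard library. The URL is an assumption for a local dev cluster; adjust it, and add authentication, for your own setup:

```python
import json
import urllib.request

ES_URL = "http://localhost:9200"  # assumption: local, unauthenticated dev cluster


def build_settings_request(es_url=ES_URL):
    """Build the PUT request that enables automatic index creation,
    equivalent to the curl command above."""
    body = json.dumps({"persistent": {"action.auto_create_index": "true"}})
    return urllib.request.Request(
        f"{es_url}/_cluster/settings",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )


if __name__ == "__main__":
    req = build_settings_request()
    # Actually sending it requires a running cluster:
    # with urllib.request.urlopen(req) as resp:
    #     print(resp.status)
    print(req.get_method(), req.full_url)
```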
Usage
Run and configure from command line
pytest --es-address 127.0.0.1:9200
# or if you need user/password to authenticate
pytest --es-address my-elk-server.io:9200 --es-username fruch --es-password 'passwordsarenicetohave'
Configure from code (ideally in conftest.py)
from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # TODO: get credentials in a more secure fashion programmatically, maybe AWS secrets or the like
        # or put them in plain text in the code... what could ever go wrong...
        plugin.es_address = "my-elk-server.io:9200"
        plugin.es_user = 'fruch'
        plugin.es_password = 'passwordsarenicetohave'
        plugin.es_index_name = 'test_data'
Configure from pytest ini file
# put this in pytest.ini / tox.ini / setup.cfg
[pytest]
es_address = my-elk-server.io:9200
es_user = fruch
es_password = passwordsarenicetohave
es_index_name = test_data
see pytest docs for more about how to configure using .ini files
Collect context data for the whole session
For example, with this I'll be able to build a dashboard per version
import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    """
    Append my own session data, for example which version of the code under test is used
    """
    # TODO: read it programmatically from the code under test...
    my_data = {"formal_version": "1.0.0-rc2"}
    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(**my_data)
Collect data for specific tests
import requests

def test_my_service_and_collect_timings(request, elk_reporter):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200
    elk_reporter.append_test_data(request, {"do_something_response_time": response.elapsed.total_seconds()})
    # now a response-time-per-version dashboard is quite easy to build
    # (it's not exactly a usable metric on its own; it's just an example)
Or via the record_property built-in fixture (which is normally used to collect data into the junitxml):
import requests

def test_my_service_and_collect_timings(record_property):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200
    record_property("do_something_response_time", response.elapsed.total_seconds())
    # now a response-time-per-version dashboard is quite easy to build
    # (it's not exactly a usable metric on its own; it's just an example)
Split tests based on history
A cool thing you can do now that you have test history is to split the tests based on their actual runtime when passing. For integration tests, which might be long (minutes to hours), this is priceless.
In this example we're going to split the run into slices of at most 4 minutes, while any test that doesn't have history information is assumed to take 60 seconds
# pytest --collect-only --es-splice --es-max-splice-time=4 --es-default-test-time=60
...
0: 0:04:00 - 3 - ['test_history_slices.py::test_should_pass_1', 'test_history_slices.py::test_should_pass_2', 'test_history_slices.py::test_should_pass_3']
1: 0:04:00 - 2 - ['test_history_slices.py::test_with_history_data', 'test_history_slices.py::test_that_failed']
...
# cat include000.txt
test_history_slices.py::test_should_pass_1
test_history_slices.py::test_should_pass_2
test_history_slices.py::test_should_pass_3
# cat include001.txt
test_history_slices.py::test_with_history_data
test_history_slices.py::test_that_failed
### now we can run each slice on its own machine
### on machine1
# pytest $(cat include000.txt)
### on machine2
# pytest $(cat include001.txt)
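To make the slicing idea concrete, here is a rough sketch of this kind of greedy packing. This is an illustration of the technique, not the plugin's actual implementation, and the test names and durations are made up:

```python
# Greedily pack tests into slices whose total runtime stays under a limit,
# falling back to a default duration for tests with no history.

def slice_tests(durations, max_slice_seconds=240, default_seconds=60):
    """durations: dict mapping test id -> historical runtime in seconds, or None."""
    slices, current, current_total = [], [], 0.0
    for test_id, seconds in durations.items():
        seconds = default_seconds if seconds is None else seconds
        # start a new slice when adding this test would exceed the limit
        if current and current_total + seconds > max_slice_seconds:
            slices.append(current)
            current, current_total = [], 0.0
        current.append(test_id)
        current_total += seconds
    if current:
        slices.append(current)
    return slices


tests = {
    "test_a": 100,
    "test_b": 100,
    "test_c": 30,
    "test_with_history": 200,
    "test_no_history": None,  # no history -> assumed 60s
}
print(slice_tests(tests))
# → [['test_a', 'test_b', 'test_c'], ['test_with_history'], ['test_no_history']]
```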
Contributing
Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.
License
Distributed under the terms of the MIT license, "pytest-elk-reporter" is free and open source software
Issues
If you encounter any problems, please file an issue along with a detailed description.
Thanks
This pytest plugin was generated with Cookiecutter along with @hackebrot's cookiecutter-pytest-plugin template.