# Pytest JSON Report
This pytest plugin creates test reports as JSON. This makes it easy to process test results in other applications.
It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to add metadata and customize the report as you like.
## Installation

```bash
pip install pytest-json-report --upgrade
```
## Options

| Option | Description |
|---|---|
| `--json-report` | Create JSON report |
| `--json-report-file=PATH` | Target path to save JSON report (use "none" to not save the report) |
| `--json-report-summary` | Just create a summary without per-test details |
| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |
| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |
| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |
## Usage

Just run pytest with `--json-report`. The report is saved in `.report.json` by default.

```bash
$ pytest --json-report -v tests/
$ cat .report.json
{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
```
If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:
```bash
$ pytest --json-report --json-report-summary
```
Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:
```bash
$ pytest --json-report --json-report-omit keywords streams
```
If you don't want the report to be saved, you can specify `none` as the target file name:

```bash
$ pytest --json-report --json-report-file none
```
## Advanced usage
### Metadata
The easiest way to add your own metadata to a test item is by using the `json_metadata` test fixture:

```python
def test_something(json_metadata):
    json_metadata['foo'] = {'some': 'thing'}
    json_metadata['bar'] = 123
```
Or use the `pytest_json_runtest_metadata` hook (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's `call` stage:

```python
def pytest_json_runtest_metadata(item, call):
    if call.when != 'call':
        return {}
    return {'start': call.start, 'stop': call.stop}
```
Also, you could add metadata using pytest-metadata's `--metadata` switch, which will add metadata to the report's `environment` section, but not to a specific test item. You need to make sure all your metadata is JSON-serializable.
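For example, this adds a custom key/value pair to the environment section (the `--metadata KEY VALUE` syntax is provided by pytest-metadata, not by this plugin):

```bash
$ pytest --json-report --metadata foo bar
```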
### A note on hooks
If you're using a `pytest_json_*` hook while the plugin is not installed or not active (i.e., `--json-report` is not given), pytest doesn't recognize it and may fail with an internal error like this:

```
INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
```
You can avoid this by declaring the hook implementation optional:
```python
import pytest

@pytest.hookimpl(optionalhook=True)
def pytest_json_runtest_metadata(item, call):
    ...
```
### Modifying the report
You can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook. Just implement the hook in your `conftest.py`, e.g.:

```python
def pytest_json_modifyreport(json_report):
    # Add a key to the report
    json_report['foo'] = 'bar'
    # Delete the summary from the report
    del json_report['summary']
```
After `pytest_sessionfinish`, the report object is also directly available via `config._json_report.report`, so you can access it from a built-in hook:

```python
def pytest_sessionfinish(session):
    report = session.config._json_report.report
    print('exited with', report['exitcode'])
```
If you really want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a `TestReport` and returns a JSON-serializable dict:

```python
def pytest_json_runtest_stage(report):
    return {'outcome': report.outcome}
```
### Direct invocation
You can use the plugin when invoking `pytest.main()` directly from code:

```python
import pytest
from pytest_jsonreport.plugin import JSONReport

plugin = JSONReport()
pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])
```
You can then access the `report` object:

```python
print(plugin.report)
```
And save the report manually:
```python
plugin.save_report('/tmp/my_report.json')
```
## Format

The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in `sample_report.json`.
| Key | Description |
|---|---|
| `created` | Report creation date. (Unix time) |
| `duration` | Session duration in seconds. |
| `exitcode` | Process exit code as listed in the pytest docs. The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |
| `root` | Absolute root path from which the session was started. |
| `environment` | Environment entry. |
| `summary` | Summary entry. |
| `collectors` | Collectors entry. (absent if `--json-report-summary` or if no collectors) |
| `tests` | Tests entry. (absent if `--json-report-summary`) |
| `warnings` | Warnings entry. (absent if `--json-report-summary` or if no warnings) |
Example:

```
{
    "created": 1518371686.7981803,
    "duration": 0.1235666275024414,
    "exitcode": 1,
    "root": "/path/to/tests",
    "environment": ENVIRONMENT,
    "summary": SUMMARY,
    "collectors": COLLECTORS,
    "tests": TESTS,
    "warnings": WARNINGS,
}
```
### Summary
Number of outcomes per category and the total number of test items.
| Key | Description |
|---|---|
| `collected` | Total number of tests collected. |
| `total` | Total number of tests run. |
| `deselected` | Total number of tests deselected. (absent if number is 0) |
| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |
Example:

```
{
    "collected": 10,
    "passed": 2,
    "failed": 3,
    "xfailed": 1,
    "xpassed": 1,
    "error": 2,
    "skipped": 1,
    "total": 10
}
```
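Because the summary is always present, it's a convenient basis for quick checks in scripts. A minimal sketch, assuming the default report path, that exits non-zero if any test failed or errored:

```python
import json
import sys

with open('.report.json') as f:
    summary = json.load(f)['summary']

# Outcome counters are absent when their count is 0.
broken = summary.get('failed', 0) + summary.get('error', 0)
sys.exit(1 if broken else 0)
```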
### Environment
The environment section is provided by pytest-metadata. All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.
Example:

```
{
    "Python": "3.6.4",
    "Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
    "Packages": {
        "pytest": "3.4.0",
        "py": "1.5.2",
        "pluggy": "0.6.0"
    },
    "Plugins": {
        "json-report": "0.4.1",
        "xdist": "1.22.0",
        "metadata": "1.5.1",
        "forked": "0.2",
        "cov": "2.5.1"
    },
    "foo": "bar", # Custom metadata entry passed via pytest-metadata
}
```
### Collectors
A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.
| Key | Description |
|---|---|
| `nodeid` | ID of the collector node. (See the pytest docs.) The root node has an empty node ID. |
| `outcome` | Outcome of the collection. (Not the test outcome!) |
| `result` | Nodes collected by the collector. |
| `longrepr` | Representation of the collection error. (absent if no error occurred) |
The `result` is a list of the collected nodes:

| Key | Description |
|---|---|
| `nodeid` | ID of the node. |
| `type` | Type of the collected node. |
| `lineno` | Line number. (absent if not applicable) |
| `deselected` | `true` if the test is deselected. (absent if not deselected) |
Example:

```
[
    {
        "nodeid": "",
        "outcome": "passed",
        "result": [
            {
                "nodeid": "test_foo.py",
                "type": "Module"
            }
        ]
    },
    {
        "nodeid": "test_foo.py",
        "outcome": "passed",
        "result": [
            {
                "nodeid": "test_foo.py::test_pass",
                "type": "Function",
                "lineno": 24,
                "deselected": true
            },
            ...
        ]
    },
    {
        "nodeid": "test_bar.py",
        "outcome": "failed",
        "result": [],
        "longrepr": "/usr/lib/python3.6 ... invalid syntax"
    },
    ...
]
```
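For instance, a minimal sketch, assuming the default report path, that lists the node IDs of all collected test functions:

```python
import json

with open('.report.json') as f:
    report = json.load(f)

# Walk all collector nodes and print the collected test functions.
for collector in report.get('collectors', []):
    for node in collector['result']:
        if node['type'] == 'Function':
            print(node['nodeid'])
```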
### Tests

A list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.
| Key | Description |
|---|---|
| `nodeid` | ID of the test node. |
| `lineno` | Line number where the test starts. |
| `keywords` | List of keywords and markers associated with the test. |
| `outcome` | Outcome of the test run. |
| `{setup, call, teardown}` | Test stage entry. To find the error in a failed test you need to check all stages. (absent if the stage didn't run) |
| `metadata` | Metadata item. (absent if no metadata) |
Example:

```
[
    {
        "nodeid": "test_foo.py::test_fail",
        "lineno": 50,
        "keywords": [
            "test_fail",
            "test_foo.py",
            "test_foo0"
        ],
        "outcome": "failed",
        "setup": TEST_STAGE,
        "call": TEST_STAGE,
        "teardown": TEST_STAGE,
        "metadata": {
            "foo": "bar",
        }
    },
    ...
]
```
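As noted above, the error of a failed test may occur in any stage. A minimal sketch, assuming the default report path, that prints the crash message of every failed test:

```python
import json

with open('.report.json') as f:
    report = json.load(f)

for test in report['tests']:
    if test['outcome'] != 'failed':
        continue
    # The crash may have happened in setup, call or teardown.
    for stage_name in ('setup', 'call', 'teardown'):
        crash = test.get(stage_name, {}).get('crash')
        if crash:
            print(f"{test['nodeid']}: {crash['message']}")
```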
### Test stage
A test stage item.
| Key | Description |
|---|---|
| `duration` | Duration of the test stage in seconds. |
| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |
| `crash` | Crash entry. (absent if no error occurred) |
| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |
| `stdout` | Standard output. (absent if none available) |
| `stderr` | Standard error. (absent if none available) |
| `log` | Log entry. (absent if none available) |
| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |
Example:

```
{
    "duration": 0.00018835067749023438,
    "outcome": "failed",
    "crash": {
        "path": "/path/to/tests/test_foo.py",
        "lineno": 54,
        "message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
    },
    "traceback": [
        {
            "path": "test_foo.py",
            "lineno": 65,
            "message": ""
        },
        {
            "path": "test_foo.py",
            "lineno": 63,
            "message": "in foo"
        },
        {
            "path": "test_foo.py",
            "lineno": 63,
            "message": "in <listcomp>"
        },
        {
            "path": "test_foo.py",
            "lineno": 54,
            "message": "TypeError"
        }
    ],
    "stdout": "foo\nbar\n",
    "stderr": "baz\n",
    "log": LOG,
    "longrepr": "def test_fail_nested():\n ..."
}
```
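The traceback entries can be rendered back into a compact, human-readable form. A small sketch using only the fields documented above:

```python
def format_traceback(stage):
    """Render a stage's traceback entries as 'path:lineno: message' lines."""
    return '\n'.join(
        f"{entry['path']}:{entry['lineno']}: {entry['message']}"
        for entry in stage.get('traceback', [])
    )
```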
### Log

A list of log records. The fields of a log record are the `logging.LogRecord` attributes, with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.
You can apply `logging.makeLogRecord()` to a log record to convert it back to a `logging.LogRecord` object.
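For example (using a record trimmed down to a few fields for brevity):

```python
import logging

# One entry from a stage's "log" list, shortened for the example.
entry = {
    'name': 'root',
    'msg': 'This is a warning.',
    'levelname': 'WARNING',
    'levelno': 30,
    'lineno': 8,
}

record = logging.makeLogRecord(entry)
print(record.levelname, record.getMessage())
```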
Example:

```
[
    {
        "name": "root",
        "msg": "This is a warning.",
        "args": null,
        "levelname": "WARNING",
        "levelno": 30,
        "pathname": "/path/to/tests/test_foo.py",
        "filename": "test_foo.py",
        "module": "test_foo",
        "exc_info": null,
        "exc_text": null,
        "stack_info": null,
        "lineno": 8,
        "funcName": "foo",
        "created": 1519772464.291738,
        "msecs": 291.73803329467773,
        "relativeCreated": 332.90839195251465,
        "thread": 140671803118912,
        "threadName": "MainThread",
        "processName": "MainProcess",
        "process": 31481
    },
    ...
]
```
### Warnings
A list of warnings that occurred during the session. (See the pytest docs on warnings.)
| Key | Description |
|---|---|
| `filename` | File name. |
| `lineno` | Line number. |
| `message` | Warning message. |
| `when` | When the warning was captured. (`"config"`, `"collect"` or `"runtest"`, as listed in the pytest docs) |
Example:

```
[
    {
        "code": "C1",
        "path": "/path/to/tests/test_foo.py",
        "nodeid": "test_foo.py::TestFoo",
        "message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
    }
]
```
## Related tools

- pytest-json has some great features but appears to be unmaintained. I borrowed some ideas and test cases from there.
- tox has a switch to create a JSON report including a test result summary. However, it just provides the overall outcome without any per-test details.