
A pytest plugin for building a test suite, using YAML files to extend pytest's parametrize functionality.


Pytest Automation


For automating test creation. This plugin lets you send a list of tests (arguments defined in YAML files) to Python methods, for creating an agile test suite.

This plugin can run alongside vanilla pytest tests without interfering with them. The custom CLI arguments for this plugin (including --skip-all) won't affect those tests.

How to set up the Test Suite

1) Install the plugin

Run the following:

python3 -m pip install pytest-automation

2) Add both required files

It doesn't matter where in your project these exist, but names are case-sensitive, and exactly one of each should exist. If you need to ignore a sub-directory with its own pytest suite, use --ignore <dir> (More info on ignoring here).

  • pytest-config.yml

    This file defines where each individual yml test gets sent: which pytest-managers method runs it.

    This also allows you to have multiple types of tests in the same file. (Useful, for example, for having a test_known_bugs.yml you can exclude from pipelines).

    This file is required to have a single test_types key, which holds a list of each type of test (or test type) you'd like in your suite. Each element in the list is in the format {"test title": {all_test_info}} (shown in the example below).

    pytest-config.yml example:

    # Contents of pytest-config.yml
    test_types:
      - For running addition tests:
          required_keys: ["x_add", "y_add"]
          method: test_PythonsAddition
      - For running factorial tests:
          required_keys: ["factor_num"]
          required_in_title: test-factor
          method: test_NumpysFactor
          variables:
            throw_on_negative: False

    Possible filters in each test type:

    • required_keys: The yml test must contain ALL these keys to run with this test type.

    • required_in_title: Check if the test title contains this string (case-insensitive). NOTE: With basic values, it's easy to accidentally match new tests later on. Best practice is to use something like "test-[something]" instead of just "[something]".

    Each yml test goes through the test_types list in order, and the following happens:

    • ONLY the filters that are declared get checked. You can have multiple required_* keys in the same test type, and they ALL have to match for the test to run with that type.

      • Note: This means a test type with NO required_* keys matches ALL tests, so no tests will continue past that test type.

    • If the test matches a type, the function named under that type's method key is looked up in pytest-managers.py and called.

    • If NO test type matches, that test fails, and the next test runs.

    In the pytest-config example above, test_PythonsAddition will only be called if a yml test contains both the x_add and y_add keys. test_NumpysFactor will only be called if a yml test has the factor_num key, AND its title contains "test-factor".
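    The matching rules above can be sketched like this (a simplified illustration, not the plugin's actual source; matches_test_type and find_test_type are hypothetical names, and test_type here stands for the inner dict of one test_types entry):

```python
def matches_test_type(test_type, test_title, test_info):
    """True if a yml test passes ALL of one test type's declared filters.
    Only declared filters are checked, so a type with none matches everything."""
    if "required_keys" in test_type:
        if not all(key in test_info for key in test_type["required_keys"]):
            return False
    if "required_in_title" in test_type:
        # Title check is case-insensitive:
        if test_type["required_in_title"].lower() not in test_title.lower():
            return False
    return True

def find_test_type(test_types, test_title, test_info):
    # The FIRST matching type in the list wins:
    for test_type in test_types:
        if matches_test_type(test_type, test_title, test_info):
            return test_type
    return None  # no match => the test fails
```

For example, a test with only a factor_num key and no "test-factor" in its title would fall through both types from the config above and return None.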

    Test Type Variables:

    In the pytest-config example above, you can see the following key in the second test type:

        variables:
          throw_on_negative: False

    This variables key is optional. Its contents get passed into each pytest-managers method, under the test_type_vars param. This is useful for declaring URLs, endpoints, etc. More info on what arguments get passed to the method here.


  • pytest-managers.py

    When a yml test is matched with a test type, that test type's method is imported from this file and run. pytest-managers.py example:

    # Contents of pytest-managers.py
    from math import factorial
    from custom_add import run_add_test      # your own helper modules
    from custom_factor import run_fact_test

    # The method names here match the 'method' keys in the 'pytest-config.yml' example. (Required)
    def test_PythonsAddition(**args):
        # run_add_test(args)
        # *OR* just run the test here:
        test_info = args["test_info"]
        assert test_info["x_add"] + test_info["y_add"] == test_info["answer"]

    def test_NumpysFactor(**args):
        # run_fact_test(args)
        # *OR* just run the test here:
        test_info = args["test_info"]
        assert factorial(test_info["factor_num"]) == test_info["answer"]

    It's recommended to keep the actual testing code in other files (like the imports at the top of the example), and just call it from this one. This helps keep the suite organized for longer test methods. Even if you import other methods, ONLY the methods defined in this file can be loaded from the method key in pytest-config.yml.
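    For example, the custom_add helper imported in the example might be as small as this (custom_add.py is your own hypothetical module, not part of the plugin):

```python
# Contents of custom_add.py (a hypothetical helper module)
def run_add_test(args):
    # Keep the real assertion logic here, out of pytest-managers.py:
    test_info = args["test_info"]
    assert test_info["x_add"] + test_info["y_add"] == test_info["answer"]
```

    This keeps pytest-managers.py down to one-line dispatch methods as the suite grows.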

    Args passed into each Test

    Each test method should only accept **args as its one param. That allows the plugin to add extra keys in the future, without breaking older tests. The following keys are currently guaranteed:

    • config: A pytest config (ext link) object, for interacting with pytest itself (i.e. getting which CLI options were used when running the suite).

    • test_info: The parameters from the yml file. This is passed into the python manager, and is what makes each test unique. More info here.

    • test_type_vars: How to declare variables for a test type, without hard-coding/duplicating them in each test (i.e. what api endpoint to target). More info here.

3) Write the yaml tests

This is where you define each individual test. Each test is matched to a test type, then run.

yaml requirements:

  • All yaml file names must start with "test_", and end with either ".yml" or ".yaml".

  • The list of tests must be under a SINGLE tests key inside the file. If more than one tests key exists, the later ones override the earlier ones, and you won't run all your tests.

  • Each test in the list is a single dict of the format {"test title": {ALL_TEST_INFO}}.

writing yaml tests example:

# Contents of test_MyCustomMath.yml
# These examples match the "pytest-config.yml" example, with required_keys above. 

- Basic addition test:
    x_add: 5
    y_add: 7
    answer: 12

- Factorial Basic test:
    factor_num: 4
    answer: 24

- test-factor Factorial special case zero:
    factor_num: 0
    answer: 1

- Negative addition test:
    x_add: 5
    y_add: -7
    answer: -2

The first test gets matched to the addition test type in the pytest-config example, since it contains the two required keys.

The second test would get matched to the factorial test type, except it doesn't have "test-factor" in its title like required_in_title says it should, so it doesn't get matched to anything and fails.

The third test does have "test-factor" in its title, so it runs as normal.

The fourth test gets matched to the addition test type, so it runs with that method in the pytest-config.yml.

IMPORTANT NOTE: Before passing each yml test to its method, the plugin moves the title into the test info, under a title key. So the title key is reserved:

# Before passing into the test, this test info:
- Negative addition test:
    x_add: 5
    y_add: -7
    answer: -2
# Will automatically become:
- title: Negative addition test
  x_add: 5
  y_add: -7
  answer: -2
# To make each item easier to access.
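That transformation is just flattening the single-key dict into its contents (a sketch; merge_title is a hypothetical name, not a plugin function):

```python
def merge_title(yml_test):
    # {"Negative addition test": {"x_add": 5, ...}}
    #   -> {"title": "Negative addition test", "x_add": 5, ...}
    (title, info), = yml_test.items()  # each yml test is a single-key dict
    return {"title": title, **info}
```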

(Example on how to access the test_info values here).

yaml test philosophy:

One key idea behind organizing tests into yamls is that you can move each individual yml test between files, and the test will still behave as expected.

  • This means you can have "test_known_bugs.yml" to exclude from build pipelines, or "test_prod_only.yml" that only gets executed against your prod environment. etc.

  • This also means the plugin can't decide which test type to run a test with based on what file it's in. Otherwise, as soon as you moved a test into "test_known_bugs.yml", its behavior might change.

4) Using conftest.py for extra Flexibility

You're able to use pytest hooks (ext link) to run commands before the suite, add CLI options, etc., by declaring them in a conftest.py file inside your project.

(NOTE: Fixtures not currently supported, but hopefully coming soon!)

Adding CLI Options

Add the pytest_addoption (ext link) hook for declaring CLI arguments.

# Contents of conftest.py

def pytest_addoption(parser):
    # addoption is built on top of argparse, same syntax:
    parser.addoption("--api", action="store", default="local",
        help = "Which API to hit when running tests (LOCAL/DEV/TEST/PROD, or url).")

    # Add other 'parser.addoption' calls here for each argument

Then each test can look at what the user passed in, through the config object:

# Contents of some test file, called by pytest-managers.py

def test_CheckQueries(**args):
    api = args["config"].getoption("--api")

    # ... Continue to run whatever checks after this ...
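One common way to act on that option is to map the short names onto base URLs, and treat anything else as a literal URL (the names and URLs below are made up for illustration):

```python
# Hypothetical mapping from the --api choices to base URLs:
API_URLS = {
    "local": "http://localhost:8080",
    "dev":   "https://api-dev.example.com",
    "test":  "https://api-test.example.com",
    "prod":  "https://api.example.com",
}

def resolve_api(option_value):
    # "LOCAL"/"DEV"/... are case-insensitive; anything else is
    # assumed to already be a full URL and passed through as-is.
    return API_URLS.get(option_value.lower(), option_value)
```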

Running scripts before/after the Suite

Add the pytest_sessionstart (ext link) or pytest_sessionfinish (ext link) hooks for adding startup/tear down logic.

# Contents of conftest.py
import os

def pytest_sessionstart(session):
    # Maybe you need a directory for dumping temp files:
    temp_dir = "some/tmp/dir"
    if not os.path.isdir(temp_dir):
        os.makedirs(temp_dir)

def pytest_sessionfinish(session, exitstatus):
    # Maybe send an email if the suite fails:
    if exitstatus != 0:
        pass  # (your notification logic here)

Full list of Hooks

You can find the full list here (ext link).

How to run tests

Running the Tests:

pytest <pytest and plugin args here> <PATH> <custom conftest args here>
# Example:
pytest -n auto -s --tb=short --df known_bugs . --api devel
  • Common pytest CLI args:

    • -n int => The number of threads to use. Make sure tests are thread-safe. (Default = 1, install pytest-xdist (ext link) to use).

    • -s => If python prints anything, show it to your console.

    • -x => Quit as soon as the first test fails

    • (-v|-vv|-vvv) => How much info to print for each test

    • --tb ("short" | "long" | ...) => How much error to print, when a test fails. (Other options available, more info here (ext link))

    • --ignore DIR => Ignore this directory from your suite. Works both with vanilla pytest tests, and pytest-automation files. Useful if you pull another repo into yours, and it has its own test suite. (More info here (ext link)).

  • Custom pytest-automation args:

    Filter what tests to run:

    • --only-run-name str, --dont-run-name str (--on str/--dn str) => (Can use multiple times) If the name of the test contains this value, only/don't run accordingly.

    • --only-run-file str, --dont-run-file str (--of str/--df str) => (Can use multiple times) If the file the test is in contains this value, only/don't run accordingly.

    • --only-run-type str, --dont-run-type str (--ot str/--dt str) => (Can use multiple times) Looks at the title in pytest-config.yml, and if it contains this value, only/don't run accordingly.

    • --skip-all => Skips all pytest-automation yaml tests. (Doesn't skip vanilla pytest methods).

  • PATH:

    • The path to start collecting tests / files from.

    • Normally just "." for the current directory. (i.e. 'pytest .')

  • Custom conftest CLI args:

    Any arguments you define in your project's conftest.py file. More info here.

How to Build from Source

1) Create your environment:

# Upgrade pip to latest and greatest
python3 -m pip install --upgrade pip
# Install tool for creating environments
python3 -m pip install virtualenv
# Create the environment
virtualenv --python=python3 ~/PytestAuto-env
# Jump inside it. (You'll need to run this for each new shell)
source ~/PytestAuto-env/bin/activate
  • You should see your terminal start with "(PytestAuto-env)" now.

2) Install the required packages:

# Because of the --python=python3 above, you can now just run 'python'
# Install the packages needed to run
python -m pip install <Path-to-this-repo-root>
# OR if you want to run the test suite:
python -m pip install -r <Path-to-this-repo-root>/requirements.txt

3) Install it:

  • Also re-run this after each change to the source.
# NOTE: The "--upgrade" is needed in case it's already installed.
#  (i.e. Don't use a cached version).
python -m pip install --upgrade <Path-to-this-repo-root>

