
A console application tester.

Project description

contest


A CONsole application TESTer. THIS WHOLE REPO NEEDS AN OVERHAUL!

Introduction

contest is a testing application that exercises a program with configured input and then checks the output against expected content. Simply put: contest validates executable output given specific inputs.

Motivation

I wrote this for a class that I teach, to help with grading. Because it is configuration driven (how else should a testing framework/tool work?), contest lets me define multiple test cases for particular programs (assignments). Not only does that make grading easier for me, it also lets me integrate assignments into a grading pipeline, so I do as little work as possible and my students get immediate feedback on submissions. Assignments here are usually single-file programs that are not necessarily unit-testable without adding boilerplate and nonstandard language features. Letting students build normal programs and then verifying their behavior is the goal.

Installation

pip install contest

You may also install from this repo; clone it and simply run:

python setup.py install

How It Works

Overview

YAML is the preferred input format for contest for a few reasons, most notably its easy-to-follow syntax and its support for multiline strings (sorry, JSON). contest consumes an input file that specifies at least one executable, plus whatever additional information is provided. Check out the test skeleton below to see what can be specified; the main items are the input and output streams stdin, stdout, and stderr, as well as the command-line arguments argv. You can specify newly generated files that you expect to be created, and even go further and specify custom tests (as Python files) that cover the things simple I/O comparisons do not.

Lastly, you may specify environment variables to be set during execution, and you may scrub the environment before adding custom keys. Note that scrubbing the environment may prevent your executable from running altogether! For now you are better off allowing your current environment to persist and simply overwriting what needs to be set. Environment variables specified within a test case take precedence over what is in your current environment.
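As a minimal sketch of how that can look in a recipe (the executable name, variable names, and values here are made up for illustration, not taken from the examples):

executable: report_gen.exe
test-cases:
  - name: custom-env
    scrub-env: false        # keep the current environment...
    env:
      OUTPUT_MODE: verbose  # ...and override/add just these keys
      GREETING: Hello
    argv:
      - --summary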

Test Skeleton

executable:                 # !!str, name of the executable to use for all tests
test-cases:                 # !!seq, list of all test cases
    - name:                 # !!str, name of the test
      scrub-env:            # !!bool, flag to remove the current environment
      env:                  # !!map, environment variables to set
      resources:            # !!seq, list of resources to copy to the test directory; each entry needs a src and dst
      executable:           # !!str, name of the executable to use for this test case only
      argv:                 # !!seq, list of arguments to pass to the executable
      stdin:                # !!str || !!seq, inputs to the standard input stream
        # * see below
      returncode:           # !!int, expected return code
      stdout:               # !!str || !!seq || !!map, expected output in standard output stream
        # ** see below
      stderr:               # !!str || !!seq || !!map, expected output in standard error stream
        # ** see below
      ofstreams:            # !!seq, list of files to compare
            # *** see below
      extra-tests:          # !!seq, list of additional Python modules to load for extra tests

* The stdin field can either be a block of text (one entry to stdin per line) or can be provided as a list.
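For example, the following two forms are equivalent ways of sending two entries to stdin (the values are made up for illustration):

stdin: |
  first entry
  second entry

stdin:
  - first entry
  - second entry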

** These fields can be text blocks, lists, or dictionaries. As a text block or list, the value should be the expected output as-is. As a dictionary, the following fields are allowed (see the sketch after this list for an example):

text:         # !!str, expected output as a string
file:         # !!str, file containing the text to compare against; useful for keeping the recipe file small, if desired
empty:        # !!bool, flag to explicitly check whether the stream is empty or nonempty; all other checks are ignored
start:        # !!int, 0-indexed line offset specifying where to start comparisons
count:        # !!int, number of lines to compare

All fields are optional, so long as whatever is (or is not) specified makes sense.
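As a sketch (the file name and line counts are made up for illustration), a stdout check that compares only part of the output against a stored reference file might look like:

stdout:
  file: expected_stdout.txt   # compare against the contents of this file
  start: 2                    # skip the first 2 lines of output
  count: 10                   # compare only the next 10 lines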

*** Each entry here is a dictionary; in addition to the fields specified above, the following fields are allowed (see the sketch after this list for an example):

test-file:    # !!str, path to the generated file, absolute or relative to the executable
exists:       # !!bool, flag to check if the file exists. all other checks are ignored
binary:       # !!bool, flag to indicate the file is binary

Other than test-file, each field is optional, so long as whatever is (or is not) specified makes sense.
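For instance, checking that an executable writes a results file matching a stored reference, and that a log file simply exists, could look like the following sketch (the file names are made up for illustration):

ofstreams:
  - test-file: results.csv       # file the executable is expected to create
    file: expected_results.csv   # reference file to compare against
  - test-file: debug.log
    exists: true                 # only check that the file exists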

Basic Usage

Given some configuration you can run contest using the following:

contest <path to configuration file>

This will parse the configuration and run the specified test case(s). Each test case is defined under the test-cases node in the recipe; simply add a new entry as desired. Just make sure each test is named uniquely. Here is an example of a test recipe (taken from examples/native_console_app):

executable: hello_world.exe
test-cases:
  - name: standard
    stdin: Lnk2past
    stdout: |
      Hello! What is your name?
      Welcome to the world, Lnk2past!

Let us break down what this recipe specifies:

  1. The name of the executable to run is hello_world.exe

  2. We have a single test case named standard

  3. We define the input to stdin, which is a single entry: Lnk2past

  4. We define the expected output on stdout:

    Hello! What is your name?
    Welcome to the world, Lnk2past!

This is really the equivalent of the following in some shell environment:

~/project> ./hello_world.exe
Hello! What is your name?
Lnk2past
Welcome to the world, Lnk2past!

This means that when running the executable hello_world.exe we can expect the input in step 3 to yield the output in step 4. contest does this comparison for you! This lets you write tests that reflect actual use cases of your executable. Add as many tests as you like to cover the various pathways through your program and the various errors it may encounter (see the sketch below).
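As a sketch of what covering an error path might look like (the second test case, its return code, and its error message are made up for illustration, not taken from the examples):

executable: hello_world.exe
test-cases:
  - name: standard
    stdin: Lnk2past
    stdout: |
      Hello! What is your name?
      Welcome to the world, Lnk2past!
  - name: empty-name
    stdin: ""
    returncode: 1           # assume the program exits nonzero on bad input
    stderr: |
      error: name must not be empty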

Check out the other examples under the examples directory.

Test Directories

contest will run each test case in a separate directory, and creates those directories under a test_output directory alongside the test recipe. This ensures minimal conflict between test cases. For example, if your test recipe contains test cases "foo" and "bar" and is located in "C:\Users\Lnk2past\MyProject", then you can expect the following directory structure:

C:\Users\Lnk2past\MyProject\
|---src\...
|---include\...
|---contest_recipe.yaml
|---test_output\
    |---foo\...
    |---bar\...

Even if your test case produces no output on disk, the test_output directory will still be created.

Filtering Tests

You can filter your test recipes to only run a select few tests. This may be useful during debugging, when you want to run only your new test without running the entire test recipe. You can do this via the --filter option, which expects a regular expression to filter on. For example, we can run only those tests that are marked with a specific keyword in their names, say "tracking", by doing the following:

contest test_recipe.yaml --filter "tracking"

Likewise, you can exclude specific tests if they are problematic or if you are focusing on other tests; simply use the --exclude-filters option, or --exclude for short. So long as you know regex, you can do whatever you like when filtering your tests!
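For example, assuming --exclude takes a regular expression just as --filter does, something like the following should skip any tests whose names match "slow" (the keyword and recipe name are purely illustrative):

contest test_recipe.yaml --exclude "slow"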

Download files


Source Distribution

contest-2024.8.0.tar.gz (13.9 kB, Source)

Built Distribution

contest-2024.8.0-py3-none-any.whl (12.6 kB, Python 3)

File details

Details for the file contest-2024.8.0.tar.gz.

File metadata

  • Download URL: contest-2024.8.0.tar.gz
  • Upload date:
  • Size: 13.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.5

File hashes

Hashes for contest-2024.8.0.tar.gz:

  SHA256:      a7ff7573f3cecb3238bdfd16b7e079f7382e8422e5ceb534be78b8077a65d407
  MD5:         2779e83df7190b7efed0c139684d4666
  BLAKE2b-256: 6b6d7c5d90f6334e91de54aa5f5e7382c952a68fba9ca36eb0b3297bb5d77d2a


File details

Details for the file contest-2024.8.0-py3-none-any.whl.

File metadata

  • Download URL: contest-2024.8.0-py3-none-any.whl
  • Upload date:
  • Size: 12.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.5

File hashes

Hashes for contest-2024.8.0-py3-none-any.whl:

  SHA256:      7371591e4c550bc2ba9c4030c528fbdae8e478eb2a6a064ddd4f3fdc5575a7e1
  MD5:         d4f21fdc7a8ca733460a34babcaa0515
  BLAKE2b-256: ff49993da0e6d1a3a92794d82881215bd53c68e760cd351c2b3ab5a6e6a7442a

