
Unit Testing & Grading of Jupyter Notebooks


autograde


autograde is a toolbox for testing Jupyter notebooks. Its features include execution of notebooks (optionally isolated via docker/podman) with subsequent unit testing of the final notebook state. An audit mode allows for refining results (e.g. grading plots by hand). Finally, autograde can summarize these results in human- and machine-readable formats.

setup

Install autograde from PyPI using pip like this

pip install jupyter-autograde

Alternatively, autograde can be set up from source code by cloning this repository and installing it using poetry

git clone https://github.com/cssh-rwth/autograde.git && cd autograde
poetry install

If you intend to use autograde in a sandboxed environment, ensure rootless docker or podman is available on your system. So far, only rootless mode is supported!

Usage

Once installed, autograde can be invoked via the autograde command. If you are using a virtual environment (which poetry creates implicitly), you may have to activate it first. Alternative methods:

  • path/to/python -m autograde runs autograde with a specific python binary, e.g. the one from your virtual environment.
  • poetry run autograde works if you've installed autograde from source

To get an overview of all available options, run

autograde [sub command] --help

Testing

autograde comes with some example files located in the demo/ subdirectory, which we will use to illustrate the workflow. Run

autograde test demo/test.py demo/notebook.ipynb --target /tmp --context demo/context

What happened? Let's first have a look at the arguments of autograde:

  • demo/test.py is a script with the test cases we want to apply (a minimal sketch of such a script follows this list)
  • demo/notebook.ipynb is the notebook to be tested (here you may also specify a directory to be searched recursively for notebooks)
  • The optional flag --target tells autograde where to store results, /tmp in our case; it defaults to the current working directory.
  • The optional flag --context specifies a directory that is mounted into the sandbox and may contain arbitrary files or subdirectories. This is useful when the notebook expects external files to be present, such as data sets.
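
For reference, a test script might look like the following sketch. It uses the NotebookTest API shown in the Snippets section below; the square function is hypothetical and assumed to be defined by the tested notebook:

from autograde import NotebookTest

nbt = NotebookTest('demo notebook test')


# `target` names an object expected in the final notebook state; autograde
# passes it to the test function. Here we assume the notebook defines a
# function `square` (hypothetical example).
@nbt.register(target='square', label='test square')
def test_square(square):
    assert square(2) == 4
    assert square(-3) == 9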

The output is a compressed archive named something like results-Member1Member2Member3-XXXXXXXXXX.zip with the following contents:

  • artifacts/: directory with all files that were created or modified by the tested notebook, as well as rendered matplotlib plots
  • code.py: code extracted from the notebook, including stdout/stderr as comments
  • notebook.ipynb: an identical copy of the tested notebook
  • results.json: test results
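
Since the result archive is a regular zip file, it can also be inspected programmatically. A minimal sketch using only the standard library (the archive name is illustrative):

import json
import zipfile

# inspect a result archive without unpacking it; the file name is illustrative
with zipfile.ZipFile('results-Member1Member2Member3-XXXXXXXXXX.zip') as archive:
    print(archive.namelist())                           # list the contained files
    results = json.loads(archive.read('results.json'))  # parse the test results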

Audit Mode

The interactive audit mode allows for manually refining the result files. This is useful for grading parts that cannot be tested automatically, such as plots or text comments.

autograde audit path/to/results

[Screenshots: Overview, Auditing, Report Preview]

Generate Reports

The report sub command creates human-readable HTML reports from test results:

autograde report path/to/result(s)

The report is added to the results archive in place.

Patch Result Archives

Results from multiple test runs can be merged via the patch sub command:

autograde patch path/to/result(s) /path/to/patch/result(s)

Summarize Multiple Results

In a typical scenario, test cases are applied not just to one notebook but to many at a time. Therefore, autograde comes with a summary feature that aggregates results, shows the score distribution, and performs some very basic fraud detection. To create a summary, simply run:

autograde summary path/to/results

Two new files will appear in the result directory:

  • summary.csv: aggregated results
  • summary.html: human readable summary report
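
The aggregated results in summary.csv can be processed further, e.g. with pandas. A minimal sketch (the exact columns depend on the autograde version):

import pandas as pd

# load the aggregated results; the available columns depend on the autograde version
summary = pd.read_csv('path/to/results/summary.csv')
print(summary.describe())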

Snippets

Work with result archives programmatically

Fix score for a test case in all result archives:

from pathlib import Path

from autograde.backend.local.util import find_archives, traverse_archives


def fix_test(path: Path, faulty_test_id: str, new_score: float):
    # open every result archive found below `path` in append mode
    for archive in traverse_archives(find_archives(path), mode='a'):
        results = archive.results.copy()
        # adjust the maximum score of all test cases matching the given id
        for faulty_test in filter(lambda t: t.id == faulty_test_id, results.unit_test_results):
            faulty_test.score_max = new_score
            # write the patched results back into the archive
            archive.inject_patch(results)


fix_test(Path('...'), '...', 13.37)

Special Test Cases

Ensure a student id occurs at most once:

from collections import Counter

from autograde import NotebookTest

nbt = NotebookTest('demo notebook test')


# the special target __TEAM_MEMBERS__ resolves to the team members declared in the tested notebook
@nbt.register(target='__TEAM_MEMBERS__', label='check for duplicate student id')
def test_special_variables(team_members):
    id_counts = Counter(member.student_id for member in team_members)
    duplicates = {student_id for student_id, count in id_counts.items() if count > 1}
    assert not duplicates, f'multiple members share same id ({duplicates})'
