automatic grading of Jupyter notebooks
Project description
autograde is a tool that lets you run tests on Jupyter notebooks in an isolated environment and creates both human- and machine-readable reports.
setup
Before installing autograde, ensure Docker or Podman is installed on your system.
To install autograde, run pip install jupyter-autograde. Alternatively, you can install autograde from source by cloning this repository and running pip install -e . within it (if you're developing autograde, run pip install -e .[develop] instead).
Finally, build the respective container image: python -m autograde build
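For example, a first-time setup using the PyPI package boils down to the two commands above, run in sequence:

```sh
# install autograde from PyPI
pip install jupyter-autograde

# build the container image that notebooks will be executed in
python -m autograde build
```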
usage
apply tests
autograde comes with example files located in the demo/ subdirectory, which we will use here to illustrate the workflow. Run:
python -m autograde test demo/test.py demo/notebook.ipynb --target /tmp --context demo/context
What happened? Let’s first have a look at the arguments of autograde:
- demo/test.py contains the script with the test cases we want to apply (a hedged sketch of such a script follows this list).
- demo/notebook.ipynb is the notebook to be tested (here you may also specify a directory that is recursively searched for notebooks).
- The optional flag --target tells autograde where to store results, /tmp in our case; by default, the current working directory is used.
- The optional flag --context specifies a directory that is mounted into the sandbox and may contain arbitrary files or subdirectories. This is useful when the notebook expects some external files to be present.
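The structure of a test script is defined by autograde itself, and demo/test.py is the authoritative example. As a rough sketch only (the NotebookTest class, the register decorator and its parameters shown here are assumptions based on typical usage, not a verified API; consult demo/test.py for the actual interface), a test script registers test cases against names defined in the notebook:

```python
# hypothetical sketch of a test script -- NotebookTest and register are
# assumptions; see demo/test.py for the real interface
from autograde import NotebookTest

nbt = NotebookTest('demo test suite')

# register a test case against a function named `square` defined in the notebook
# and award the given score when the assertions pass
@nbt.register(target='square', score=1.0)
def test_square(square):
    assert square(2) == 4
    assert square(-3) == 9

# the script is not run directly; it is passed to `python -m autograde test`
# as shown above
```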
The output is a compressed archive named something like results_[Lastname1,Lastname2,...]_XXXXXXXX.tar.xz with the following contents:
- artifacts.tar.xz: all files that were created by or visible to the tested notebook
- code.py: the code extracted from the notebook, including stdout/stderr as comments
- notebook.ipynb: an identical copy of the tested notebook
- test_results.csv: test results
- test_restults.json: test results, enriched with participant credentials and a summary
- report.rst: a human-readable report
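Since the result is a plain .tar.xz archive, standard tooling is enough to inspect it; for example (the archive name below is only a placeholder matching the pattern above):

```sh
# list the archive contents without unpacking
tar -tJf results_Doe_12345678.tar.xz

# or unpack it into a separate directory
mkdir -p results_doe
tar -xJf results_Doe_12345678.tar.xz -C results_doe
```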
summarize results
In a typical scenario, test cases are applied not just to a single notebook but to many at a time. Therefore, autograde comes with a summary feature that aggregates results, shows a score distribution and performs some very basic fraud detection. To create a summary, simply run:
python -m autograde summary path/to/results
Three new files will appear in the result directory:
- summary.csv: aggregated results
- score_distribution.pdf: a score distribution (without duplicates)
- similarities.pdf: a similarity heatmap of all notebooks
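For instance, reusing the placeholder path from the command above, a quick look at the aggregated scores could look like this:

```sh
# aggregate all results found under path/to/results ...
python -m autograde summary path/to/results

# ... and skim the aggregated scores
head path/to/results/summary.csv
```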