automatic grading of Jupyter notebooks
Project description
autograde is a tool that lets you run tests on Jupyter notebooks in an isolated environment and creates both human- and machine-readable reports.
setup
Before installing autograde, ensure Docker or Podman is installed on your system.
Now, to install autograde, run pip install jupyter-autograde. Alternatively, you can install autograde from source by cloning this repository and running pip install -e . within it (if you're developing autograde, run pip install -e .[develop] instead).
Finally, build the respective container image: python -m autograde build
usage
apply tests
autograde comes with some example files located in the demo/
subdirectory that we will use for now to illustrate the workflow. Run:
python -m autograde test demo/test.py demo/notebook.ipynb --target /tmp --context demo/context
What happened? Let’s first have a look at the arguments of autograde:
- demo/test.py contains a script with the test cases we want to apply
- demo/notebook.ipynb is the notebook to be tested (here you may also specify a directory to be recursively searched for notebooks; a batch sketch follows this list)
- The optional flag --target tells autograde where to store results, /tmp in our case; by default, the current working directory is used
- The optional flag --context specifies a directory that is mounted into the sandbox and may contain arbitrary files or subdirectories. This is useful when the notebook expects some external files to be present.
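If you grade many submissions at once, the same command can also be scripted. The sketch below is a minimal example built around the documented CLI; the paths (tests/test.py, submissions/, results/, data/) are hypothetical placeholders for your own layout:

import subprocess
import sys

# Hypothetical paths; adjust them to your own course layout.
test_script = "tests/test.py"
notebook_dir = "submissions"   # autograde searches directories recursively for notebooks
target_dir = "results"
context_dir = "data"

# Equivalent to running by hand:
#   python -m autograde test tests/test.py submissions --target results --context data
subprocess.run(
    [
        sys.executable, "-m", "autograde", "test",
        test_script, notebook_dir,
        "--target", target_dir,
        "--context", context_dir,
    ],
    check=True,
)

Since autograde already searches directories recursively, a single invocation usually suffices; wrapping it in Python is mainly useful when grading is part of a larger pipeline.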
The output is a compressed archive named something like results_[Lastname1,Lastname2,...]_XXXXXXXX.tar.xz, which has the following contents:
- artifacts.tar.xz: all files that were created by or visible to the tested notebook
- code.py: code extracted from the notebook, including stdout/stderr as comments
- notebook.ipynb: an identical copy of the tested notebook
- test_results.csv: test results (see the sketch after this list)
- test_results.json: test results, enriched with participant credentials and a summary
- report.rst: a human-readable report
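For post-processing, the archive can be inspected with the Python standard library alone. This is a minimal sketch, assuming a hypothetical archive name; it does not rely on the exact column layout of test_results.csv, which may differ between autograde versions:

import csv
import io
import tarfile

# Hypothetical archive name following the naming pattern described above.
archive_path = "results_Doe_12345678.tar.xz"

with tarfile.open(archive_path, mode="r:xz") as archive:
    # Show everything the archive contains.
    for member in archive.getmembers():
        print(member.name)

    # Locate test_results.csv regardless of how it is nested inside the archive.
    csv_member = next(
        (m for m in archive.getmembers() if m.name.endswith("test_results.csv")),
        None,
    )
    if csv_member is not None:
        with io.TextIOWrapper(archive.extractfile(csv_member), encoding="utf-8") as fh:
            rows = list(csv.reader(fh))
        print("columns:", rows[0])
        print("result rows:", len(rows) - 1)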
summarize results
In a typical scenario, test cases are not applied to just one notebook but to many at a time. Therefore, autograde comes with a summary feature that aggregates results, shows you the score distribution and includes some very basic fraud detection. To create a summary, simply run:
python -m autograde summary path/to/results
Three new files will appear in the result directory:
- summary.csv: aggregated results (see the sketch after this list)
- score_distribution.pdf: a score distribution (without duplicates)
- similarities.pdf: a similarity heatmap of all notebooks
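Like the per-notebook results, summary.csv lends itself to scripting. The sketch below deliberately makes no assumptions about the exact columns autograde writes (they may vary between versions) and only reports the header and the number of aggregated entries:

import csv
from pathlib import Path

# Result directory from the summary command above.
summary_path = Path("path/to/results") / "summary.csv"

with summary_path.open(newline="", encoding="utf-8") as fh:
    rows = list(csv.reader(fh))

print("columns:", rows[0])
print("aggregated entries:", len(rows) - 1)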