Tools for grading multiple notebook component exercises.

# (Multiple) component marking

Some tools I use when marking homework with single or multiple Jupyter notebook components. The notebooks may have questions for manual marking, and plots for marking.

They assume some [Canvas](https://www.instructure.com/canvas) conventions for naming files, and for the grade output CSV format.

The tools consist primarily of command-line utilities, with some supporting code in a utility library.
## Quickstart

For a single component submission:

```
COMPONENTS_DIR=components
mcp-check-unpack
mcp-prepare-components
mcp-find-duplicates $COMPONENTS_DIR/*/*.Rmd
mcp-cp-models
mcp-extract-manual
rnbg-allow-raise $COMPONENTS_DIR/*/*.Rmd --show-error
mcp-extract-plots
mcp-grade-nbs
# Review `<component>/marking/autograde.md`.
# Rerun after any edits.
mcp-grade-nbs
mcp-grade-component
mcp-scale-combine
```
For a multiple component submission:

```
COMPONENTS_DIR=components
mcp-check-unpack
mcp-prepare-components
mcp-find-duplicates $COMPONENTS_DIR/*/*.Rmd
mcp-cp-models

# For each component:
COMPONENT=my_component
rnbg-allow-raise $COMPONENTS_DIR/$COMPONENT/*.Rmd --show-error
mcp-grade-nbs $COMPONENT
# Review `$COMPONENT/marking/autograde.md`.
# Rerun after any edits.
mcp-grade-nbs $COMPONENT
mcp-extract-manual $COMPONENT
mcp-extract-plots $COMPONENT
mcp-grade-component $COMPONENT

# Finally:
mcp-scale-combine
```
## Getting set up

Make a virtualenv / Conda environment for running the marking code, and activate that environment:

```
python -m virtualenv ~/envs/marking-env
source ~/envs/marking-env/bin/activate
```

or

```
conda create --name marking-env
conda activate marking-env
conda install pip
```

To install locally from the repository, you will need `flit`:

```
pip install flit
```

Then install MCPMark with its dependencies:

```
cd mcpmark  # Directory containing this README
flit install -s
```

Test that all is working as expected with:

```
pip install -r test-requirements.txt
pytest mcpmark
```
## A typical marking run

- Make sure you have activated the environment above, with e.g. `source ~/envs/marking-env/bin/activate` or `conda activate marking-env`.
- Make a directory for marking; call this `homework1` or similar. `cd homework1`.
- Download submissions (`.zip` files for multiple notebook submissions, `.ipynb` files for single notebook submissions) to some directory, e.g. `submissions`, in the current directory. There should be one `.zip` file per student in the case of multiple notebook submissions, or one `.ipynb` file per student in the case of single notebook submissions.
- Download the Canvas marks CSV file to this (`homework1`) directory.
- Edit `assign_config.yaml` --- see `doc/` for an example. Use the `components` field to name and define components. Each component corresponds to one notebook, so there will be one component for single notebook submissions, and multiple components for multiple notebook submissions. (A hypothetical sketch of this file follows the list below.)
- In what follows, a "component name" is the name you have given a single notebook assignment in the `assign_config.yaml` file.
- Develop a script to identify notebooks by their content - see `doc/` for an example, and `mcpmark/cli/prepare_components.py` for code using this script. This allows Mcpmark to check that a student does have a matching notebook for each required component.
- Run `mcp-check-unpack`. If any errors arise, check, and maybe change, the submission filenames.
- Run `mcp-prepare-components`. This will check that all the students in the relevant student files have matching notebook submissions for all required components. The error message should tell you what is missing. If you discover that the warning is a false positive, and you were not expecting this student to submit (yet), then fill in their ID in the `known_missing` list of the `assign_config.yaml` file, to tell Mcpmark not to check their submissions. Then re-run `mcp-prepare-components`, repeating until you get no errors.
- In what follows, you can generally omit the `<component_name>` argument when you only have one component.
- For the items below, assume a script `rerun` is on the path, with contents `while true; do $@; done` (see the sketch after this list).
- Per notebook / component:
  - Develop tests in the `model/<component_name>/tests` directory.
  - Test the tests with `grade_oknb.py`.
  - Copy tests etc. into the components directory with `mcp-cp-models`.
  - Run e.g. `mcp-find-duplicates components/my_component/*.Rmd` to analyze duplicates; write a summary into some file, say `report.md`.
  - Check notebook execution with `mcp-run-notebooks <path_to_notebooks>`. Consider running this with e.g. `rerun mcp-run-notebooks components/pandering` to continuously test notebooks.
  - Move any irreparable notebooks to the `broken` directory, and record them in the `marking/broken.csv` file.
  - Run `mcp-extract-manual <component_name>` (component name optional for single component submissions). Edit notebooks where the manual component was not found. Maybe e.g. `rerun mcp-extract-manual pandering`.
  - Mark the generated manual file in `<component>/marking/*_report.md`.
  - Check manual scoring with something like `mcp-manual-scores components/lymphoma/dunleavy_plausible_report.md`. Or you can leave that until grading the whole component with `mcp-grade-component`.
  - Run `mcp-extract-plots <component_name>` (component name optional for single component submissions). Edit `marked/plot_nbs.ipynb` to add marks.
  - Run auto-grading with `mcp-grade-nbs <component_name>` (`<component_name>` is optional if there is a single component).
  - Review `<component>/marking/autograde.md`.
  - Apply any manual fixes with the `#M:` notation to add / subtract marks. These are lines in code cells / chunks, of form `#M: <score_increment>` -- e.g. `#M: -2.5`. They reach the final score via `mcp-grade-component`. (See the example after this list.)
  - Do a final run of `mcp-grade-nbs`.
  - Run `mcp-grade-component <component_name>` (`<component_name>` is optional if there is a single component).
- When done:
  - Run `mcp-scale-combine` to rescale the component marks to their out-of figure given in `assign_config.yaml`, and generate the `summary.csv` file. Do this even when there is only one component (in order to do the rescaling).
  - Run `mcp-export-marks` to convert the output of `mcp-scale-combine` to a format for import into Canvas.
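As noted above, `assign_config.yaml` names the components and lists students known to be missing. The sketch below is purely illustrative: only the `components` and `known_missing` field names come from this README; the real schema and a worked example are in `doc/`, and the nesting and placeholder values here are guesses.

```
# Hypothetical sketch only -- see doc/ for the real assign_config.yaml format.
# `components` and `known_missing` are the fields mentioned above; the nesting
# and placeholder values are guesses, not the actual schema.
cat > assign_config.yaml <<'EOF'
components:
  my_component:        # one entry per notebook component
    ...                # per-component settings -- see doc/ for the real keys
known_missing:
  - some_student_id    # students you do not (yet) expect to submit
EOF
```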
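The `rerun` helper mentioned in the list above is just a one-line loop. A minimal sketch, saved as `rerun` somewhere on your `PATH` and made executable with `chmod +x rerun`:

```
#!/bin/bash
# Re-run the given command forever, e.g.:
#   rerun mcp-run-notebooks components/pandering
# Stop with Ctrl-C. Quoting "$@" keeps arguments with spaces intact.
while true; do "$@"; done
```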
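Before the final `mcp-grade-nbs` run, it can help to list the manual `#M:` adjustments you have added. One quick way, using plain `grep` (the component path is an example):

```
# List all manual #M: mark adjustments in one component's notebooks.
grep -n '#M:' components/my_component/*.Rmd
```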
## Utilities

- `mcputils` - various utilities for supporting the scripts.