Tools for grading multiple notebook component exercises.

(Multiple) component marking

Some tools I use when marking homework with single or multiple Jupyter notebook components.

The notebooks may contain questions that need manual marking, and plots to be marked.

They assume some [Canvas](https://www.instructure.com/canvas) conventions for naming files and for the grade output CSV format.

The tools consist primarily of command-line utilities, with some supporting code in a utility library.

Quickstart

For single component submission:

COMPONENTS_DIR=components
mcp-check-unpack
mcp-prepare-components
mcp-find-duplicates $COMPONENTS_DIR/*/*.Rmd
mcp-cp-models
mcp-extract-manual
rnbg-allow-raise $COMPONENTS_DIR/*/*.Rmd --show-error
mcp-extract-plots
mcp-grade-nbs
# Review `<component>/marking/autograde.md`.
# Rerun after any edits.
mcp-grade-nbs
mcp-grade-component
mcp-scale-combine

For multiple component submission:

COMPONENTS_DIR=components
mcp-check-unpack
mcp-prepare-components
mcp-find-duplicates $COMPONENTS_DIR/*/*.Rmd
mcp-cp-models
# For each component
    COMPONENT=my_component
    rnbg-allow-raise $COMPONENTS_DIR/$COMPONENT/*.Rmd --show-error
    mcp-grade-nbs $COMPONENT
    # Review `$COMPONENT/marking/autograde.md`.
    # Rerun after any edits.
    mcp-grade-nbs $COMPONENT
    mcp-extract-manual $COMPONENT
    mcp-extract-plots $COMPONENT
    mcp-grade-component $COMPONENT
# Finally
mcp-scale-combine
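
The multiple-component sequence above can be collected into a small driver script, as in the sketch below. The component names are hypothetical, and in practice you would pause to review autograde.md between the two mcp-grade-nbs runs rather than run straight through; writing the commands to a file makes it easy to edit and re-run individual steps.

```shell
# Sketch of a driver for the multiple-component flow above.
# Component names below are hypothetical; substitute your own.
cat > grade-all.sh <<'EOF'
#!/bin/bash
set -e
COMPONENTS_DIR=components
for COMPONENT in first_component second_component; do
    rnbg-allow-raise "$COMPONENTS_DIR/$COMPONENT"/*.Rmd --show-error
    mcp-grade-nbs "$COMPONENT"
    # Review $COMPONENT/marking/autograde.md, edit, then the rerun below.
    mcp-grade-nbs "$COMPONENT"
    mcp-extract-manual "$COMPONENT"
    mcp-extract-plots "$COMPONENT"
    mcp-grade-component "$COMPONENT"
done
# Finally
mcp-scale-combine
EOF
chmod +x grade-all.sh
```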

Getting set up

Make a virtual environment / Conda environment for running the marking code, and set yourself up in that environment:

python -m virtualenv ~/envs/marking-env
source ~/envs/marking-env/bin/activate

or

conda create --name marking-env
conda activate marking-env
conda install pip

To install locally from the repository, you will need flit:

pip install flit

Then install MCPMark with its dependencies:

cd mcpmark  # Directory containing this README
flit install -s

Test all is working as expected with:

pip install -r test-requirements.txt
pytest mcpmark

A typical marking run

  • Make sure you have activated the environment above, with e.g. source ~/envs/marking-env/bin/activate or conda activate marking-env
  • Make a directory for marking, call this homework1 or similar.
  • cd homework1
  • Download submissions (.zip files for multiple-notebook submissions, .ipynb files for single-notebook submissions) to some directory, e.g. submissions, in the current directory. There should be one .zip file per student for multiple-notebook submissions, or one .ipynb file per student for single-notebook submissions.
  • Download Canvas marks CSV file to this (homework1) directory.
  • Edit assign_config.yaml --- see doc/ for an example. Use the components field to name and define components. Each component corresponds to one notebook, so there will be one component for single-notebook submissions, and multiple components for multiple-notebook submissions.
  • In what follows below, a "component name" is the name you have given for a single notebook assignment in the assign_config.yaml file.
  • Develop a script to identify notebooks by their content --- see doc/ for an example, and mcpmark/cli/prepare_components.py for code using such a script. This allows Mcpmark to check that each student has a matching notebook for each required component.
  • Run mcp-check-unpack. If any errors arise, check and maybe change the submission filenames.
  • Run mcp-prepare-components. This checks that all the students in the relevant student files have matching notebook submissions for all required components; the error message should tell you what is missing. If you discover that an error is a false positive, and you were not expecting this student to submit (yet), add their ID to the known_missing list in the assign_config.yaml file, to tell Mcpmark not to check their submissions. Then re-run mcp-prepare-components, repeating until you get no errors.
  • In what follows, you can generally omit the <component_name> argument when you only have one component.
  • For the items below, assume a script rerun is on your PATH, with contents while true; do "$@"; done
  • Per notebook / component:
    • Develop tests in model/<component_name>/tests directory.
    • Test tests with grade_oknb.py.
    • Copy tests etc. into the components directory with mcp-cp-models
    • Run e.g. mcp-find-duplicates components/my_component/*.Rmd to analyze duplicates, and write a summary into some file, say report.md.
    • Check notebook execution with mcp-run-notebooks <path_to_notebooks>. Consider running this with e.g. rerun mcp-run-notebooks components/pandering to continuously test notebooks.
    • Move any irreparable notebooks to the broken directory, and record them in the marking/broken.csv file.
    • mcp-extract-manual <component_name> (the component name is optional for single-component submissions). Edit notebooks where the manual component was not found. Maybe e.g. rerun mcp-extract-manual pandering.
    • Mark generated manual file in <component>/marking/*_report.md.
    • Check manual scoring with something like mcp-manual-scores components/lymphoma/dunleavy_plausible_report.md. Or you can leave that until grading the whole component with mcp-grade-component.
    • mcp-extract-plots <component_name> (component name optional for single component submissions). Edit marked/plot_nbs.ipynb to add marks.
    • Run auto-grading with mcp-grade-nbs <component_name> (<component_name> is optional for a single component).
    • Review <component>/marking/autograde.md.
    • Update any manual fixes with the #M: notation to add / subtract marks. These are lines in code cells / chunks, of form #M: <score_increment> -- e.g. #M: -2.5. They reach the final score via mcp-grade-component.
    • Final run of mcp-grade-nbs
    • mcp-grade-component <component_name> (<component_name> is optional for a single component).
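
The rerun helper assumed above can be created as sketched below. The sleep is our addition, to keep the loop from spinning when the command exits immediately; stop the loop with Ctrl-C.

```shell
# Create the 'rerun' helper described above; put it somewhere on your PATH.
# It repeats the given command until interrupted with Ctrl-C.
cat > rerun <<'EOF'
#!/bin/bash
# Quote "$@" so arguments with spaces survive; the sleep (our addition)
# avoids a busy loop when the command fails fast.
while true; do "$@"; sleep 2; done
EOF
chmod +x rerun
```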

When done:

  • mcp-scale-combine to rescale the component marks to their out-of figure given in assign_config.yaml, and to generate the summary .csv file. Do this even when there is only one component (in order to apply the rescaling).
  • mcp-export-marks to convert the output of mcp-scale-combine to a format for import into Canvas.
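
For orientation, a hypothetical shape for assign_config.yaml is sketched below. Only the components field and the known_missing list are named in this README; the out_of key name and the overall layout are guesses, so check doc/ in the repository for a real example.

```yaml
# Hypothetical assign_config.yaml sketch -- layout and the out_of key
# name are guesses; see doc/ for a real example.
components:
  my_component:        # one entry per notebook component
    out_of: 50         # assumed key for the out-of figure used in rescaling
known_missing:
  - some_student_id    # students not (yet) expected to submit
```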

Utilities

  • mcputils - various utilities for supporting the scripts.
