
Program for automatic grading of Python code.

Project description


Tool for grading a directory of python submissions using a scheme file containing exercises. The grader supports multiple test cases and analyses code style using PyLint.


Read the full documentation at


Markingpy should be installed using the pip installation tool.

pip install markingpy

Example usage

There are two steps to using markingpy. The first is to create a marking scheme file, which should be of the following form:

from markingpy import mark_scheme, exercise, PyLintChecker

# If you want to use timing tests, use 
from markingpy import TimingCase

# Initialise mark scheme configuration.
ms = mark_scheme(
    linter=PyLintChecker(),  # add a linter to the marking process
    submissions_path='submissions'  # directory to search for submissions
)

@exercise(name='Exercise 1',
          descr='Short description for feedback')
def ex_1_function(arg1, arg2):
    """Model solution for exercise 1."""

# Now add the call test components

ex_1_function.add_test_call((1, 1), marks=1)
ex_1_function.add_test_call((1, 2), marks=1)

# Add a timing test using the following
cases = [
    TimingCase((1, 1), {}, 1),
    TimingCase((10, 10), {}, 3),
]
ex_1_function.timing_test(cases, marks=2)

# Add a custom test using the exercise's test decorator
@ex_1_function.test
def custom_test():
    """
    Define a custom test. This test determines whether the
    submission function does any type checking during
    execution. The test is passed if the function raises
    a TypeError.

    Custom functions should return True or False for success
    or failure, respectively.
    """
    # Use the function ex_1_function in the test;
    # this will be replaced by the submission function
    # during testing.
    try:
        out = ex_1_function(1.0, 2.0)
    except TypeError:
        return True
    return False
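For reference, here is a sketch of a submission that would pass the tests above. The file name and the type-checking behaviour are hypothetical, and the assumption that the grader matches the submission's function by the model solution's name is based only on the comments in the scheme above:

```python
# Hypothetical submission file, e.g. submissions/student_01.py.
# The function name matches the model solution so the grader can
# substitute it during testing (assumption, see scheme comments).

def ex_1_function(arg1, arg2):
    """Add two integers, rejecting non-integer arguments.

    The explicit type check means calling with floats raises
    TypeError, so the custom test above would pass.
    """
    if not isinstance(arg1, int) or not isinstance(arg2, int):
        raise TypeError("arguments must be integers")
    return arg1 + arg2
```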

Once the marking scheme has been created, use the command line tool to begin grading submissions:

markingpy run

The results and feedback will be generated and stored in a database, and can be retrieved using the command

markingpy dump directory

which will dump a feedback file (.txt) for each submission into directory.
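Once dumped, the feedback files are plain text and can be post-processed with ordinary Python. A minimal sketch (the one-file-per-submission naming convention is assumed here, not documented above):

```python
import tempfile
from pathlib import Path

def summarize_feedback(directory):
    """Map each submission name to its feedback text.

    Assumes `markingpy dump <directory>` produces one .txt file
    per submission, named after the submission (an assumption).
    """
    return {
        path.stem: path.read_text()
        for path in sorted(Path(directory).glob("*.txt"))
    }

# Demonstration with stand-in feedback files.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "student_01.txt").write_text("Exercise 1: 3/4 marks")
    Path(tmp, "student_02.txt").write_text("Exercise 1: 4/4 marks")
    results = summarize_feedback(tmp)
```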

Development setup

Clone the repository and run make install. Markingpy uses pipenv to handle its dependencies.

Additional Disclaimer

This is my first Python package that I have "released" (i.e. put on PyPI), and I would be grateful for any feedback and constructive criticisms. Contributions are certainly welcome in all forms; see below.


  1. Fork it
  2. Create your feature branch (git checkout -b feature/name)
  3. Commit your changes (git commit -m 'Add some feature')
  4. Push to the branch (git push origin feature/name)
  5. Create a new Pull Request.

Release History

  • 1.0.0

    • Reworked grader system. Tests can now be run in separate processes to provide better isolation.
    • Reworked command line interface.
    • Reorganised and improved test suite.
    • Greatly simplified marking scheme creation and exercise association.
  • 0.2.0

    • Added support to tests on arbitrary objects and their methods.
    • Various improvements to the code base. Added descriptor support for test classes and exercise classes.
    • Expanded the documentation.
    • Implemented finder system for locating submissions, which currently supports loading submissions from directory (as before) and SQLite databases. Planned support for server loading.
  • 0.1.0

    • First release


Sam Morley

Distributed under the GPLv3 license. See LICENSE for more information.

