markingpy

Program for automatic grading of Python code.
A tool for grading a directory of Python submissions using a scheme file containing exercises. The grader supports multiple test cases and analyses code style using PyLint.

Read the full documentation at https://markingpy.readthedocs.io.

Installation

Install markingpy using pip:

pip install markingpy

Example usage

There are two steps to using markingpy. The first is to create a marking scheme file, which should take the following form:

from markingpy import mark_scheme, exercise

# If you want to use timing tests, use 
from markingpy import TimingCase

# Initialise mark scheme configuration.
ms = mark_scheme(
	style_marks=10, # Number of marks for coding style
	submissions_path='submissions' # Directory to search for submissions
)
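# (The style_marks value feeds the PyLint-based style score
# mentioned in the introduction; see the documentation at
# markingpy.readthedocs.io for the other configuration options.)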


@exercise(name='Exercise 1',
	  descr='Short description for feedback')
def ex_1_function(arg1, arg2):
	"""
	Model solution for exercise 1.
	"""
	pass

# Now add the call test components

ex_1_function.add_test_call((1, 1), marks=1)
ex_1_function.add_test_call((1, 2), marks=1)
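# (A call test presumably runs the submission function with the
# given arguments and checks the result against the model
# solution above; see the docs for the exact behaviour.)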

# Add a timing test using the following. Each TimingCase bundles
# positional arguments, keyword arguments, and a timing target
# (see the markingpy documentation for the exact semantics).
cases = [
	TimingCase((1, 1), {}, 1),
	TimingCase((10, 10), {}, 3),
]
ex_1_function.timing_test(cases, marks=2)


@ex_1_function.test
def custom_test():
	"""
	Define a custom test. This test determines whether the
	submission function does any type checking during
	execution. The test is passed if the function raises
	a TypeError.

	Custom functions should return True or False for success
	or failure, respectively.
	"""

	# Use the function ex_1_function in the test;
	# it will be replaced by the submission function
	# during testing.
	try:
		out = ex_1_function(1.0, 2.0)
	except TypeError:
		return True
	return False
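
Submissions themselves are plain Python files; markingpy presumably matches submission functions to exercises by the model solution's name. As a minimal sketch, a submission that would exercise the tests above might look like the following (the file name and function body are illustrative assumptions, not part of markingpy):

# submissions/student_01.py (hypothetical submission file)

def ex_1_function(arg1, arg2):
	# Reject non-integer input, so the custom TypeError test passes.
	if not isinstance(arg1, int) or not isinstance(arg2, int):
		raise TypeError("ex_1_function expects integers")
	return arg1 + arg2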

Once the marking scheme has been created (in scheme.py, or similar), use the command line tool to begin grading submissions:

markingpy scheme.py run

The results and feedback are stored in a database, and can be retrieved using the command

markingpy scheme.py dump directory

which will write one feedback file (.txt) per submission into directory.
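
For example, a complete session, from grading to feedback, might look like this (the directory name feedback is illustrative):

markingpy scheme.py run
markingpy scheme.py dump feedback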

Development setup

Clone the repository and run make install. Markingpy uses pipenv to handle its dependencies.
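
For example (using the repository linked under Contributing):

git clone https://github.com/inakleinbottle/markingpy.git
cd markingpy
make install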

Additional Disclaimer

This is the first Python package I have "released" (i.e. put on PyPI), and I would be grateful for any feedback and constructive criticism. Contributions are certainly welcome in all forms; see below.

Contributing

  1. Fork it (https://github.com/inakleinbottle/markingpy/fork)
  2. Create your feature branch (git checkout -b feature/name)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin feature/name)
  5. Create a new Pull Request.

Release History

  • 0.2.0

    • Added support for tests on arbitrary objects and their methods.
    • Various improvements to the code base. Added descriptor support for test classes and exercise classes.
    • Expanded the documentation.
    • Implemented a finder system for locating submissions, which currently supports loading submissions from a directory (as before) and from SQLite databases. Server loading is planned.
  • 0.1.0

    • First release

Meta

Sam Morley - sam@inakleinbottle.com - https://inakleinbottle.com

Distributed under the GPLv3 license. See LICENSE for more information.
