
mast-tool


ARM subroutine tester tool for the Microprocessors and Personal Computers course unit.

The MAST tool was originally forked from João Damas' Automatic Observation and (grade) Calculation for (subroutine) Operations tool. It automates the grading of student assignments in the Microprocessors and Personal Computers course unit.

Differences from the original tool

  • To ease communication between the backend server and the tool, the output .txt and .csv files are combined into a single, more complete .json file.
  • The structure of the .zip input file is simplified.
  • Previously unsupported data types, such as long and double, are now supported.
  • A new input parameter, weight, is introduced.



1. Installation

Using Docker:

$ docker pull luist188/mast-tool

2. Running

  1. Place the input files inside any directory.
  2. Run the image with a shared volume pointing to the input directory: docker run -v <input>:<destination> -it luist188/mast-tool (see the example below; you can learn more about docker run usage here).
  3. Run the alias command mast (make sure you are using /bin/bash), or run python main.py from the tool's source.
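For example, assuming the input files live in ./input on the host and /data is chosen as the mount point inside the container (both paths are placeholders), the container could be started like this:

$ docker run -v "$(pwd)/input:/data" -it luist188/mast-tool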

3. Usage

$ mast [-h] -sr SR -t T -sm SM [SM ...] [-gfd GFD] [-ffd FFD] [-grf GRF] [-tout TOUT] [-fpre FPRE]

$ mast [args]

Options:
  --help, -h                Show help                                         [boolean]
  -sr <subroutines.yaml>    .yaml file containing the subroutine declarations [required] [string]
  -t <tests.yaml>           .yaml file containing the test cases              [required] [string]
  -sm <submission.zip...>   .zip files containing the user submissions        [required] [string array]
  -gfd <directory>          path to the directory used to store temporary
                            files (e.g., compiled binaries)                   [default: grading] [string]
  -ffd <directory>          path to the directory used to store the grading
                            for each submission                               [default: feedback] [string]
  -tout <timeout>           timeout value                                     [default: 2.0] [float]
  -fpre <precision>         threshold used when comparing floating-point
                            values in test cases                              [default: 1e-6] [float]
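
As an illustrative invocation (the file names are placeholders), grading two submissions against the declared subroutines and test cases could look like this:

$ mast -sr subroutines.yaml -t tests.yaml -sm submission1.zip submission2.zip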

4. File syntax and structure

4.1. Available data types

4.1.1. Primitive data types

  • int
  • long
  • float
  • double
  • char

4.1.2. Array data types

  • char*/string
  • array int
  • array long
  • array float
  • array double
  • array char

4.2. subroutines.yaml

The input file for the subroutine declaration has to follow a specific structure and syntax described as follows:

foo: 
  params: 
    - int
    - array char
    - array int
    - array int
  return: 
    - int
    - array int

bar: 
  params: 
    - long
  return: 
    - long

The subroutine name has to match the name of the .s file to test and is case-insensitive. Thus, the subroutine foo or bar will be checked against any .s file whose name matches it regardless of case (for example, foo.s, Foo.s, and FOO.s all match foo). Every subroutine declaration must contain an array of parameters, params, and an array of return values, return.

4.3. tests.yaml

The input file for the test cases declaration has to follow a specific structure and syntax described as follows:

bar:
  - inputs:
    - 6
    outputs: 
    - 36
    weight: 0.5
  - inputs:
    - 5
    outputs: 
    - 25
    weight: 0.5

The root declaration of a test case must match the name declared in the subroutines.yaml file. Each test case declares an array of inputs, an array of expected outputs, and a test weight. The sum of the test weights for a subroutine must be 1.0.
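
Subroutines with array parameters follow the same pattern. As a sketch, assuming the parser accepts YAML flow-style lists for array values (the values below mirror the foo example shown in the Results section), a test case for foo could be written as:

foo:
  - inputs:
    - 6
    - ["-", "+", "+", "-", "-", "+"]
    - [1, 2, 3, 0, 1, -25]
    - [13, 2, 8, 4, 5, 25]
    outputs:
    - 0
    - [12, 4, 11, 4, 4, 0]
    weight: 1.0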

4.4. submission.zip

The submission .zip file must contain the .s files in its root. For example, for the subroutines foo and bar, the zip structure should be as follows:

submission.zip
├── foo.s
└── bar.s
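
Such an archive can be created with the standard zip utility, for instance (assuming foo.s and bar.s are in the current directory):

$ zip submission.zip foo.s bar.s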

5. Results

For each submission file, a .json file with the same name as the .zip file is created in the feedback directory. The file contains all information about the compilation status and the test cases. In addition, a simplified summary of the results of all submissions is written to result.json. The contents of the files look as follows:

File submission.json

[
    {
        "name": "foo",
        "compiled": true,
        "ok": true,
        "passed_count": 2,
        "test_count": 2,
        "score": 1,
        "tests": [
            {
                "weight": 1,
                "run": true,
                "input": [
                    6,
                    ["-", "+", "+", "-", "-", "+"],
                    [1, 2, 3, 0, 1, -25],
                    [13, 2, 8, 4, 5, 25]
                ],
                "output": [
                    "0",
                    ["12", "4", "11", "4", "4", "0"]
                ],
                "passed": true
            }
        ]
    },
    {
        "name": "bar",
        "compiled": true,
        "ok": true,
        "passed_count": 2,
        "test_count": 2,
        "score": 1,
        "tests": [
            {
                "weight": 0.5,
                "run": true,
                "input": [
                    6
                ],
                "output": [
                    "36"
                ],
                "passed": true
            },
            {
                "weight": 0.5,
                "run": true,
                "input": [
                    5
                ],
                "output": [
                    "25"
                ],
                "passed": true
            }
        ]
    }
]

File result.json

[
    {
        "submission_name": "submission",
        "subroutines": [
            {
                "name": "foo",
                "score": 0
            },
            {
                "name": "bar",
                "score": 0.5
            }
        ]
    },
    {
        "submission_name": "submission2",
        "subroutines": [
            {
                "name": "foo",
                "score": 1
            },
            {
                "name": "bar",
                "score": 1
            }
        ]
    }
]
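
Since result.json is plain JSON, it can be post-processed with any JSON tooling. As a small sketch using jq (a separate tool, not part of mast), the per-subroutine scores of every submission could be listed like this:

$ jq -r '.[] | .submission_name as $s | .subroutines[] | "\($s) \(.name): \(.score)"' result.json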

License

MIT
