
Grader

A distributed grading system that runs automated checks against services such as ClickHouse, with asynchronous task processing via a FastStream worker backed by RabbitMQ.

Local Installation with Poetry

Prerequisites

  • Python 3.10 or higher
  • Poetry package manager

Installation Steps

  1. Install Poetry if you haven't already:
curl -sSL https://install.python-poetry.org | python3 -
  2. Clone the repository and install dependencies:
git clone <repository-url>
cd grader
poetry install
  3. Activate the virtual environment:
poetry shell

Building and Running the Grader Service with Docker

Prerequisites

  • Docker
  • Docker Compose

Building the Image

./bin/build-tool build-app-image

The Dockerfile uses multi-stage builds to optimize the image size and build process:

  • Base stage: Sets up Python and essential tools
  • Exporter stage: Generates requirements.txt from poetry
  • Builder stage: Builds the Python wheel
  • Final stage: Creates the production image
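The real Dockerfile lives in the repository; purely as an illustration, the four stages described above might be wired roughly like this (the base image, paths, and entry point here are assumptions, not the project's actual build):

```dockerfile
# Illustrative sketch only; see the repository's Dockerfile for the real build.

# Base stage: Python plus essential tooling.
FROM python:3.10-slim AS base
RUN pip install --no-cache-dir poetry

# Exporter stage: generate requirements.txt from poetry metadata.
FROM base AS exporter
WORKDIR /src
COPY pyproject.toml poetry.lock ./
RUN poetry export -f requirements.txt --output requirements.txt --without-hashes

# Builder stage: build the Python wheel.
FROM base AS builder
WORKDIR /src
COPY . .
RUN poetry build --format wheel

# Final stage: slim production image with only the wheel and its dependencies.
FROM python:3.10-slim AS final
COPY --from=exporter /src/requirements.txt /tmp/
COPY --from=builder /src/dist/ /tmp/dist/
RUN pip install --no-cache-dir -r /tmp/requirements.txt /tmp/dist/*.whl
CMD ["graderctl", "serve", "start-api"]
```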

Deploying Local Compose

Minimal Setup (for development)

cd docker
docker compose up -d postgres rabbitmq

This starts the essential services:

  • PostgreSQL (available at localhost:5432)
  • RabbitMQ (available at localhost:5672, management UI at localhost:15672)

Full Setup

cd docker
docker compose --profile=grader up -d

This starts all services including:

  • PostgreSQL
  • RabbitMQ
  • pgAdmin (available at localhost:5050)
  • Grader API service (available at localhost:8080)
  • Grader FastStream worker for task processing

Environment Variables

The services are pre-configured with default development credentials:

  • PostgreSQL: user=postgres, password=postgres, db=grader
  • RabbitMQ: user=admin, password=admin
  • pgAdmin: email=admin@admin.com, password=admin

Additionally, the Docker Compose sets the following environment variables for the grader services:

  • GRADER_DB_CONN=postgresql+asyncpg://postgres:postgres@postgres:5432/grader
  • GRADER_FASTSTREAM_BROKER=amqp://admin:admin@rabbitmq:5672/
  • GRADER_FASTSTREAM_BROKER_QUEUE=grader-queue
  • GRADER_FASTSTREAM_MAX_CONCURRENCY=1
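For reference, these connection strings are just the default credentials above assembled into URLs. A small sketch (inside Compose the hostnames are the service names `postgres` and `rabbitmq`; from the host machine you would use `localhost` instead):

```shell
# Assemble the grader connection URLs from the default dev credentials.
PG_USER=postgres PG_PASS=postgres PG_HOST=postgres PG_DB=grader
MQ_USER=admin MQ_PASS=admin MQ_HOST=rabbitmq

GRADER_DB_CONN="postgresql+asyncpg://${PG_USER}:${PG_PASS}@${PG_HOST}:5432/${PG_DB}"
GRADER_FASTSTREAM_BROKER="amqp://${MQ_USER}:${MQ_PASS}@${MQ_HOST}:5672/"

echo "$GRADER_DB_CONN"
echo "$GRADER_FASTSTREAM_BROKER"
```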

Running Tests

Start Required Services for Testing

docker compose up -d postgres rabbitmq

Run Tests

poetry run pytest -s ./tests

Deploying on Kubernetes

Prerequisites

  • kubectl configured with your cluster
  • Helm 3.x

Installation Steps

  1. Update your Helm repositories:
helm repo update
  2. Install the grader chart:
cd k8s
helm upgrade --install --create-namespace --namespace=grader grader ./grader-chart -f grader-values.yaml
  3. Install dependencies (if needed):
# Install HDFS
helm upgrade --install --create-namespace --namespace=grader-hdfs hdfs ./hdfs-chart -f hdfs-values.yaml

# Install ClickHouse
helm upgrade --install --create-namespace --namespace=grader-clickhouse clickhouse ./ch-chart -f ch-values.yaml

Verify Installation

kubectl get pods -l app=grader

Creating student workspace

To create a workspace for a student, you'll need to use the Workspace Helm chart. This will set up a dedicated namespace with appropriate resources and permissions for the student.

  1. Create values file for the student (e.g., student-values.yaml):
user: "student-username"  # Replace with actual username
student_dir_path: "/mnt/ess_storage/DN_1/students/student-username"  # Replace with actual path
ns_cpu_limit: "8"
ns_mem_limit: "64Gi"
jupyter_cpu_limits: "4"
jupyter_mem_limits: "32Gi"
shared_data_path: "/mnt/ess_storage/DN_1/students/shared-data"
  2. Install the workspace for the student:
cd k8s
helm upgrade --install --create-namespace workspace-student-username ./Workspace -f student-values.yaml

This will create:

  • A dedicated namespace for the student
  • Resource quotas and limits
  • PersistentVolumes and PersistentVolumeClaims for student data
  • A Jupyter notebook deployment with Spark support
  • Required RBAC permissions
  • Access to shared data volume (read-only)

The student will have access to:

  • Jupyter notebook environment with Spark support
  • Personal storage space
  • Shared data directory (read-only)
  • Limited compute resources as specified in the values file
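Generating the per-student values file can be scripted; a sketch assuming the same storage layout as the example above:

```shell
# Generate <student>-values.yaml from a username variable.
# Paths mirror the example values file; adjust them to your storage layout.
STUDENT=student-username
cat > "${STUDENT}-values.yaml" <<EOF
user: "${STUDENT}"
student_dir_path: "/mnt/ess_storage/DN_1/students/${STUDENT}"
ns_cpu_limit: "8"
ns_mem_limit: "64Gi"
jupyter_cpu_limits: "4"
jupyter_mem_limits: "32Gi"
shared_data_path: "/mnt/ess_storage/DN_1/students/shared-data"
EOF
```

The release can then be installed with the command shown above, e.g. `helm upgrade --install --create-namespace "workspace-${STUDENT}" ./Workspace -f "${STUDENT}-values.yaml"`.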

Access Services

The service endpoints for each deployment option are listed above (for the full Compose setup: Grader API at localhost:8080, RabbitMQ management UI at localhost:15672, pgAdmin at localhost:5050).

Using the CLI

The grader provides a comprehensive command-line interface for managing tasks, running checks, and controlling the grader service. Here are the main command groups and their functionality:

Global Options

graderctl --verbose  # Enable verbose logging for all commands

Task Management

  1. Submit a new task:
graderctl task submit \
  --check-type clickhouse \
  --user-id student123 \
  --name "Lab 1 Check" \
  --tag "lab1" \
  --args '{"key": "value"}' \
  --wait 60
Here --check-type selects the type of check to perform, --user-id is the student ID, --name and --tag are optional labels for grouping, --args passes checker arguments as JSON, and --wait waits for completion with the given timeout in seconds.
You can also provide arguments from a file:

graderctl task submit -t clickhouse -u student123 --args-file args.json
  2. List tasks, filtering by user, tag, and/or status:
graderctl task list \
  --user-id student123 \
  --tag lab1 \
  --status COMPLETED
  3. Get task details, optionally saving the task info to JSON and the report to Markdown:
graderctl task get \
  --task-id <task-id> \
  --json-file task.json \
  --report-file report.md
  4. Cancel a running task:
graderctl task cancel --task-id <task-id>
  5. Delete a task:
graderctl task delete --task-id <task-id>
  6. Get a task report:
graderctl task report \
  --task-id <task-id> \
  --output-file report.md
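The args.json consumed by --args-file is just the checker arguments as a JSON object. The key below is a placeholder taken from the inline example above; real arguments depend on the check type:

```shell
# Write placeholder checker arguments for --args-file.
cat > args.json <<'EOF'
{"key": "value"}
EOF
# Sanity-check that the file parses as JSON.
python3 -m json.tool args.json
```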

Checker Commands

  1. Run the ClickHouse checker directly:
graderctl checker clickhouse \
  --host localhost \
  --user admin \
  --student student123 \
  --cluster-name main_cluster \
  --output report.md
Here --host is the ClickHouse host, --user is the admin username, --student is the student to check, --cluster-name is the cluster name, and --output is the report path.
  2. Run a custom checker:
graderctl checker run \
  --checker grader.checking.ch_checker.ClickHouseChecker \
  --arguments args.json \
  --output report.md

Serve Commands

  1. Start the API server (here binding to all interfaces on port 8080 with auto-reload enabled):
graderctl serve start-api \
  --host 0.0.0.0 \
  --port 8080 \
  --reload
  2. Start the FastStream worker for task processing (--create-tables creates the database tables before starting):
graderctl serve start-faststream \
  --create-tables

Kubernetes Operations

  1. Show installation instructions:
graderctl k8s info
  2. Generate an installation script:
graderctl k8s install-script --output install.sh

Contributing

Please refer to our contributing guidelines for information on how to propose changes and contribute to the project.

License

[Add your license information here]
