A tool for processing BYU CS code recording files.

code_recorder_processor

This package processes and verifies the *.recorder.jsonl.gz files produced by the jetbrains-recorder.

Installation

Install the package and its dependencies using Poetry:

poetry install

Usage

The processor can be run using the cr_proc command with recording file(s) and a template:

poetry run cr_proc <path-to-jsonl-file> <path-to-template-file>

Batch Processing

You can process multiple recording files at once (e.g., for different students' submissions):

# Process multiple files
poetry run cr_proc file1.jsonl.gz file2.jsonl.gz template.py

# Using glob patterns
poetry run cr_proc recordings/*.jsonl.gz template.py

When processing multiple files:

  • Each recording is processed independently (for different students/documents)
  • Time calculations and verification are done separately for each file
  • A combined time report is shown at the end summarizing total editing time across all recordings
  • Results can be output to individual files using --output-dir

Arguments

  • <path-to-jsonl-file>: Path(s) to compressed JSONL file(s) (*.recorder.jsonl.gz) produced by the jetbrains-recorder. Supports multiple files and glob patterns like recordings/*.jsonl.gz
  • <path-to-template-file>: Path to the initial template file that was recorded

Options

  • -t, --time-limit MINUTES: (Optional) Maximum allowed time in minutes between the first and last edit in the recording. Applied individually to each recording file and also to the combined total in batch mode. If the elapsed time exceeds this limit, the recording is flagged as suspicious.
  • -d, --document DOCUMENT: (Optional) Document path or filename to process from the recording. Defaults to the document whose extension matches the template file.
  • -o, --output-json OUTPUT_JSON: (Optional) Path to output JSON file with verification results (time info and suspicious events). In batch mode, creates a single JSON file containing all recordings plus the combined time report.
  • -f, --output-file OUTPUT_FILE: (Optional) Write the reconstructed code to the specified file. Single-file mode only.
  • --output-dir OUTPUT_DIR: (Optional) Directory to write reconstructed code files in batch mode. Files are named based on input recording filenames.
  • --submitted-file SUBMITTED_FILE: (Optional) Path to the submitted final file to verify against the reconstructed output. If provided, the reconstructed code will be compared to this file and differences will be reported.
  • --submitted-dir SUBMITTED_DIR: (Optional) Directory containing submitted files to verify against the reconstructed output. For each recording file, the corresponding submitted file will be found by matching the filename (e.g., homework0-ISC.recording.jsonl.gz will match homework0-ISC.py). Cannot be used with --submitted-file.
  • -s, --show-autocomplete-details: (Optional) Show individual auto-complete events in addition to aggregate statistics.
  • -p, --playback: (Optional) Play back the recording in real-time, showing code evolution.
  • --playback-speed SPEED: (Optional) Playback speed multiplier (1.0 = real-time, 2.0 = 2x speed, 0.5 = half speed).

Examples

Basic usage:

poetry run cr_proc homework0.recording.jsonl.gz homework0.py

With time limit flag:

poetry run cr_proc homework0.recording.jsonl.gz homework0.py --time-limit 30

Batch processing with output directory:

poetry run cr_proc recordings/*.jsonl.gz template.py --output-dir output/

Save JSON results:

poetry run cr_proc student1.jsonl.gz student2.jsonl.gz template.py -o results.json

Verify against a single submitted file:

poetry run cr_proc homework0.recording.jsonl.gz homework0.py --submitted-file submitted_homework0.py

Verify against submitted files in a directory (batch mode):

poetry run cr_proc recordings/*.jsonl.gz template.py --submitted-dir submissions/

Batch processing with a time limit:

poetry run cr_proc recordings/*.jsonl.gz template.py --time-limit 30

This will process each recording independently and flag any that exceed 30 minutes.

The processor will:

  1. Load the recorded events from the JSONL file
  2. Verify that the initial event matches the template (allowances for newline differences are made)
  3. Reconstruct the final file state by applying all recorded events
  4. Write the reconstructed code to the location given by --output-file or --output-dir
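Conceptually, the replay step can be sketched as a loop that splices each recorded change into the document. This is a rough illustration, not the package's implementation: the event fields offset and oldFragment are assumed here (only newFragment appears in this document's warning messages).

```python
import gzip
import json


def reconstruct(jsonl_path: str) -> str:
    """Rebuild the final document text by replaying recorded edit events.

    Assumes each JSON line carries ``offset``, ``oldFragment``, and
    ``newFragment`` fields -- a hypothetical schema for illustration.
    """
    text = ""
    with gzip.open(jsonl_path, "rt", encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            start = event["offset"]
            end = start + len(event.get("oldFragment", ""))
            # Splice the new fragment over the range the edit replaced.
            text = text[:start] + event.get("newFragment", "") + text[end:]
    return text
```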

Output

Reconstructed code files are written to disk using -f/--output-file (single file) or --output-dir (batch mode). The processor does not output reconstructed code to stdout.

Verification information, warnings, and errors are printed to stderr, including:

  • The document path being processed
  • Time information (elapsed time, time span) for each recording
  • Suspicious copy-paste and AI activity indicators for each file
  • Batch summary showing:
    • Verification status of all processed files
    • Combined time report (total editing time across all recordings)
    • Time limit violations if applicable

Suspicious Activity Detection

The processor automatically detects and reports three types of suspicious activity patterns:

1. Time Limit Exceeded

When the --time-limit flag is specified, the processor flags recordings where the elapsed time between the first and last edit exceeds the specified limit. This can indicate unusually long work sessions or potential external assistance.

Each recording file is checked independently against the time limit. In batch mode, the combined total time is also checked against the limit.
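The per-recording check amounts to comparing the first-to-last-edit span against the limit. A minimal sketch (not the tool's code), using ISO 8601 timestamps like those shown in the warning:

```python
from datetime import datetime


def exceeds_time_limit(first_iso: str, last_iso: str, limit_minutes: float) -> bool:
    """Return True if the span between first and last edit exceeds the limit."""
    first = datetime.fromisoformat(first_iso)
    last = datetime.fromisoformat(last_iso)
    # Convert the span to minutes and compare against the limit.
    span_minutes = (last - first).total_seconds() / 60.0
    return span_minutes > limit_minutes
```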

Example warning (single file):

Elapsed editing time: 45.5 minutes
Time span (first to last edit): 62.30 minutes

Time limit exceeded!
  Limit: 30 minutes
  First edit: 2025-01-15T10:00:00+00:00
  Last edit: 2025-01-15T11:02:18+00:00

Example warning (batch mode combined report):

================================================================================
BATCH SUMMARY: Processed 3 files
================================================================================
Verified: 3/3

COMBINED TIME REPORT (3 recordings):
Total elapsed editing time: 65.5 minutes
Overall time span: 120.45 minutes

Time limit exceeded!
  Limit: 60 minutes

2. External Copy-Paste (Multi-line Pastes)

The processor flags multi-line insertions that do not appear to have been copied from within the document itself; such insertions indicate content pasted from an external source.
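One plausible way to express this heuristic (a sketch, not the package's actual detector):

```python
def is_external_paste(new_fragment: str, document_text: str) -> bool:
    """Flag a multi-line insertion that does not already occur in the document.

    Single-line insertions are ignored here; they are covered by the
    rapid-paste heuristic described in the next section.
    """
    if len(new_fragment.strip("\n").split("\n")) <= 1:
        return False
    # Text already present in the document was likely an internal copy-paste.
    return new_fragment not in document_text
```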

Example warning:

Event #15 (multi-line external paste): 5 lines, 156 chars - newFragment: def helper_function():...

3. Rapid One-line Pastes (AI Indicator)

When 3 or more single-line pastes occur within a 1-second window, this is flagged as a potential AI activity indicator. Human typing does not typically produce this pattern; rapid sequential pastes suggest automated code generation.
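A sliding-window count over paste timestamps captures this rule. The sketch below assumes timestamps in seconds; it is illustrative, not the shipped implementation.

```python
def has_rapid_pastes(paste_times: list[float], window: float = 1.0, threshold: int = 3) -> bool:
    """Return True when `threshold` or more single-line pastes fall
    inside any `window`-second span."""
    times = sorted(paste_times)
    start = 0
    for end in range(len(times)):
        # Advance the window start until it spans at most `window` seconds.
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 >= threshold:
            return True
    return False
```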

Example warning:

Events #42-#44 (rapid one-line pastes (AI indicator)): 3 lines, 89 chars

JSON Output Format

The --output-json flag generates JSON files with verification results using a consistent format for both single file and batch modes, making it easier for tooling to consume.

JSON Structure

All JSON output follows this unified format:

  • batch_mode: Boolean indicating if multiple files were processed
  • total_files: Number of files processed
  • verified_count: How many files passed verification
  • all_verified: Whether all files passed
  • combined_time_info: Time information (present in both modes):
    • Single file: Contains time info for that file
    • Batch mode: Contains combined time report with:
      • minutes_elapsed: Total editing time across all recordings
      • overall_span_minutes: Time span from first to last edit
      • file_count: Number of recordings
      • exceeds_limit: Whether combined time exceeds the limit
  • files: Array of individual results for each recording
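Because the format is the same in both modes, downstream tooling can consume it uniformly. A small example reader (field names are taken from this README; the summarize function itself is hypothetical):

```python
import json


def summarize(results_path: str) -> str:
    """Produce a one-line summary from a cr_proc --output-json report."""
    with open(results_path, encoding="utf-8") as f:
        report = json.load(f)
    status = "OK" if report["all_verified"] else "FAILED"
    return (
        f"{status}: {report['verified_count']}/{report['total_files']} verified, "
        f"{report['combined_time_info']['minutes_elapsed']:.1f} min elapsed"
    )
```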

Single file example:

{
  "batch_mode": false,
  "total_files": 1,
  "verified_count": 1,
  "all_verified": true,
  "combined_time_info": {
    "minutes_elapsed": 15.74,
    "first_timestamp": "2026-01-15T01:21:35.360168Z",
    "exceeds_limit": false
  },
  "files": [
    {
      "jsonl_file": "recording.jsonl.gz",
      "document": "/path/to/homework.py",
      "verified": true,
      "time_info": { ... },
      "suspicious_events": [ ... ],
      "reconstructed_code": "..."
    }
  ]
}

Batch file example:

{
  "batch_mode": true,
  "total_files": 2,
  "verified_count": 2,
  "all_verified": true,
  "combined_time_info": {
    "minutes_elapsed": 31.24,
    "overall_span_minutes": 18739.29,
    "file_count": 2,
    "exceeds_limit": false
  },
  "files": [ /* individual results for each file */ ]
}

Error Handling

If verification fails (the recorded initial state doesn't match the template), the processor will:

  • Print an error message to stderr
  • Display a diff showing the differences
  • Exit with status code 1

If file loading or processing errors occur, the processor will:

  • Print a descriptive error message to stderr
  • Exit with status code 1

Future Ideas

  • Check for odd typing behavior

Download files

Source Distribution

cr_proc-0.1.19.tar.gz (35.0 kB)

Built Distribution

cr_proc-0.1.19-py3-none-any.whl (37.0 kB)

