# AI Essay Evaluator

Documentation: https://ai-essay-evaluator.readthedocs.io
Source Code: https://github.com/markm-io/ai-essay-evaluator

A comprehensive Python framework for automated essay evaluation using OpenAI's GPT models. This tool enables educators to grade student essays at scale with customizable scoring rubrics, fine-tune models with their own grading data, and generate detailed feedback across multiple scoring dimensions. The repository provides a modular framework for generating, validating, merging, uploading, and fine-tuning OpenAI GPT-4o-mini models using structured JSONL datasets.
## Features
- **Automated Essay Grading** - Evaluate student essays using fine-tuned OpenAI GPT-4o-mini models
- **Multiple Scoring Formats** - Choose from extended (multi-dimensional), item-specific, or short scoring formats
- **Custom Model Training** - Generate training datasets and fine-tune models with your own grading examples
- **Project Folder Mode** - Simple folder structure for organizing essays, rubrics, and prompts
- **Cost Tracking** - Built-in token usage and cost analysis for OpenAI API calls
- **Batch Processing** - Grade hundreds of essays with progress tracking and async processing
- **Multi-Pass Grading** - Run multiple grading passes for consistency checking
- **Rate Limit Handling** - Automatic retry logic and adaptive rate limiting
- **Comprehensive Logging** - Async logging for debugging and auditing
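The cost-tracking feature boils down to simple per-token arithmetic. A minimal sketch of such an estimate, assuming illustrative gpt-4o-mini rates (the `INPUT_RATE_PER_1M` / `OUTPUT_RATE_PER_1M` constants and `estimate_cost` helper are placeholders for illustration, not the tool's actual configuration):

```python
# Illustrative token-cost estimate. The rates below are assumptions;
# check OpenAI's current pricing page before relying on them.
INPUT_RATE_PER_1M = 0.15   # USD per 1M input tokens (assumed)
OUTPUT_RATE_PER_1M = 0.60  # USD per 1M output tokens (assumed)

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated USD cost for one API call."""
    return (prompt_tokens / 1_000_000 * INPUT_RATE_PER_1M
            + completion_tokens / 1_000_000 * OUTPUT_RATE_PER_1M)

# Example: a 2,000-token prompt with a 500-token graded response.
print(f"${estimate_cost(2_000, 500):.6f}")
```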
## Quick Start
### Installation
Install via pip:

```shell
pip install ai-essay-evaluator
```

Or using uv (recommended for development):

```shell
uv pip install ai-essay-evaluator
```
### Basic Usage
- Set up your project folder:

```text
my_project/
├── input.csv       # Student responses
├── question.txt    # Essay prompt
├── story/          # Story files
│   └── story1.txt
└── rubric/         # Rubric files
    └── rubric1.txt
```
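The layout above can also be scaffolded programmatically. A quick sketch using only the standard library (`scaffold_project` and the placeholder file names are illustrative, not part of the tool's API):

```python
# Sketch: create the project folder layout shown above with pathlib.
# story1.txt / rubric1.txt are placeholder names -- use your own files.
from pathlib import Path

def scaffold_project(root: str) -> Path:
    base = Path(root)
    (base / "story").mkdir(parents=True, exist_ok=True)
    (base / "rubric").mkdir(parents=True, exist_ok=True)
    (base / "input.csv").touch()              # student responses
    (base / "question.txt").touch()           # essay prompt
    (base / "story" / "story1.txt").touch()   # story text
    (base / "rubric" / "rubric1.txt").touch() # scoring rubric
    return base

scaffold_project("my_project")
```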
- Run the evaluator:

```shell
python -m ai_essay_evaluator evaluator grader \
  --project-folder ./my_project \
  --scoring-format extended \
  --api-key YOUR_OPENAI_API_KEY
```
- Check the results in `my_project/output/`.
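Once a run finishes, the CSVs in the output folder can be inspected with the standard library. A hedged sketch (`summarize_outputs` is illustrative; the actual output file names and columns depend on your run):

```python
# Sketch: count graded rows per CSV in the grader's output folder.
# Assumes each output file is a CSV with a header row.
import csv
from pathlib import Path

def summarize_outputs(output_dir: str) -> dict[str, int]:
    """Map each CSV file name in output_dir to its data-row count."""
    counts: dict[str, int] = {}
    for path in sorted(Path(output_dir).glob("*.csv")):
        with path.open(newline="", encoding="utf-8") as f:
            counts[path.name] = sum(1 for _ in csv.DictReader(f))
    return counts

print(summarize_outputs("my_project/output"))
```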
### Training Your Own Model
```shell
# Generate training data from graded examples
python -m ai_essay_evaluator trainer generate \
  --story-folder ./training/story \
  --question ./training/question.txt \
  --rubric ./training/rubric.txt \
  --csv ./training/graded_samples.csv \
  --output training.jsonl \
  --scoring-format extended

# Validate and fine-tune
python -m ai_essay_evaluator trainer validate --file training.jsonl
python -m ai_essay_evaluator trainer fine-tune \
  --file training.jsonl \
  --scoring-format extended \
  --api-key YOUR_OPENAI_API_KEY
```
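Before uploading, the JSONL file can also be sanity-checked locally. A minimal sketch (the `check_jsonl` helper is illustrative; the project's own `trainer validate` command remains the authoritative check) that verifies each line is a JSON object in OpenAI's chat fine-tuning shape, i.e. a non-empty `messages` list whose entries carry `role` and `content`:

```python
# Sketch: local structural check of a chat-format fine-tuning JSONL file.
import json

def check_jsonl(path: str) -> list[str]:
    """Return a list of human-readable errors; empty means the file passed."""
    errors = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # ignore blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as exc:
                errors.append(f"line {lineno}: invalid JSON ({exc.msg})")
                continue
            messages = record.get("messages")
            if not isinstance(messages, list) or not messages:
                errors.append(f"line {lineno}: missing 'messages' list")
            elif any("role" not in m or "content" not in m for m in messages):
                errors.append(f"line {lineno}: message missing role/content")
    return errors
```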
For detailed documentation, see the full usage guide at https://ai-essay-evaluator.readthedocs.io.
## Contributors ✨
Thanks go to these wonderful people (emoji key):

- Mark Moreno 💻 🤔 📖
This project follows the all-contributors specification. Contributions of any kind welcome!
## Credits
This package was created with Copier and the browniebroke/pypackage-template project template.
## Download files

- Source distribution: `ai_essay_evaluator-1.1.0.tar.gz`
- Built distribution: `ai_essay_evaluator-1.1.0-py3-none-any.whl`
### File details: ai_essay_evaluator-1.1.0.tar.gz

- Size: 39.5 kB
- Tags: Source
- Uploaded using Trusted Publishing: Yes
- Uploaded via: twine/6.1.0 on CPython/3.13.7
#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4565b47bbdc066a0a3b4cfbb6efb3064aa532522fb263de2aeb9b2aa52c8ec9b` |
| MD5 | `f8c96eb8c5e220ded41a1832d584e3a8` |
| BLAKE2b-256 | `5ecc8f3042b6088688d7aab18c8b5aa7a4ec4ef6a5ddf1a65b719f5c97a2d64a` |
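These digests can be checked locally after downloading. A sketch using `hashlib`, reading in chunks so large archives need not fit in memory:

```python
# Sketch: verify a downloaded file against a published SHA256 digest.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "4565b47bbdc066a0a3b4cfbb6efb3064aa532522fb263de2aeb9b2aa52c8ec9b"
# After downloading the sdist next to this script:
# assert sha256_of("ai_essay_evaluator-1.1.0.tar.gz") == expected
```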
#### Provenance

The following attestation bundles were made for ai_essay_evaluator-1.1.0.tar.gz:

Publisher: ci.yml on markm-io/ai-essay-evaluator

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai_essay_evaluator-1.1.0.tar.gz
- Subject digest: 4565b47bbdc066a0a3b4cfbb6efb3064aa532522fb263de2aeb9b2aa52c8ec9b
- Sigstore transparency entry: 574175109
- Permalink: markm-io/ai-essay-evaluator@472611785e4793221d4d4f4d267a66877ad6e423
- Branch / Tag: refs/heads/main
- Owner: https://github.com/markm-io
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: self-hosted
- Publication workflow: ci.yml@472611785e4793221d4d4f4d267a66877ad6e423
- Trigger Event: push
### File details: ai_essay_evaluator-1.1.0-py3-none-any.whl

- Size: 27.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing: Yes
- Uploaded via: twine/6.1.0 on CPython/3.13.7
#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `1097f49997491ec33b7da3907dee7127975ec3260df880f1040828710a03a870` |
| MD5 | `836d5daf81084d8ebe71f55f90e83616` |
| BLAKE2b-256 | `6186822a628005123ab2cd7b46694b55577b18b07099b8f38c728291a6b1449f` |
#### Provenance

The following attestation bundles were made for ai_essay_evaluator-1.1.0-py3-none-any.whl:

Publisher: ci.yml on markm-io/ai-essay-evaluator

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai_essay_evaluator-1.1.0-py3-none-any.whl
- Subject digest: 1097f49997491ec33b7da3907dee7127975ec3260df880f1040828710a03a870
- Sigstore transparency entry: 574175130
- Permalink: markm-io/ai-essay-evaluator@472611785e4793221d4d4f4d267a66877ad6e423
- Branch / Tag: refs/heads/main
- Owner: https://github.com/markm-io
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: self-hosted
- Publication workflow: ci.yml@472611785e4793221d4d4f4d267a66877ad6e423
- Trigger Event: push