Evaluate Azure Data Factory submissions from JSON/ZIP/folder inputs with comprehensive AI-driven grading and professional reporting.
ADFMentor
A Python package for evaluating Azure Data Factory (ADF) submissions with comprehensive AI-driven grading.
Provides accurate, fair assessment of ADF implementations including architecture, design quality, error handling, and best practices.
Features
- AI-Driven Grading: Comprehensive evaluation covering architecture, design, error handling, parameterization, and best practices
- Full Artifact Context: Sends full sanitized ADF JSON artifacts to the AI grader instead of only a compressed summary
- v1.0.0 Major Release: Complete API redesign with a new `evaluate_adf()` method and professional reporting
- ADF JSON validation for ARM templates and repo-export style files
- Safe ZIP extraction with size and file-count limits
- Recursive discovery of ADF JSON artifacts
- Clear, actionable feedback with specific recommendations
- Secure API key management via environment variables
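The "recursive discovery" feature can be approximated with `pathlib`. A minimal sketch, assuming a directory of exported artifacts; `discover_adf_artifacts` is a hypothetical helper, not the package's internal implementation:

```python
import json
from pathlib import Path

def discover_adf_artifacts(root: str) -> list:
    """Recursively collect files under `root` that end in .json and
    actually parse as JSON; invalid files are silently skipped."""
    artifacts = []
    for path in sorted(Path(root).rglob("*.json")):
        try:
            json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # not valid JSON, skip it
        artifacts.append(path)
    return artifacts
```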
Installation
```bash
pip install ADFMentor
```
Quick Start
```python
from ADFMentor import ADFMentor

mentor = ADFMentor(api_key="your-api-key", model_name="gemini-2.0-flash-exp")

# Standard AI grading
result = mentor.evaluate_adf(
    submission_path="path/to/submission.zip",
    question="Grade this ADF assignment"
)

print(f"Score: {result['score']}/100")
print(f"Feedback:\n{result['feedback']}")
print(f"Metadata:\n{result['metadata_report']}")

# Grading with a custom rubric
custom_rubric = """
Focus the evaluation on: error handling, parameterization,
security practices, and performance optimization.
Score 0-100 with detailed feedback.
"""

result_custom = mentor.evaluate_adf(
    submission_path="path/to/submission.zip",
    prompt=custom_rubric,
    question="Assess ADF quality"
)

# Faster rule-based grading
result_fast = mentor.evaluate_adf(
    submission_path="path/to/submission.zip",
    use_rule_based=True
)
```
Environment Setup
- Get a free Google API key: https://makersuite.google.com/app/apikey
- Copy `.env.example` to `.env`: `cp .env.example .env`
- Edit `.env` and add your API key: `GEMINI_API_KEY=your_actual_key_here`
- Important: Never commit `.env` to git (already protected in `.gitignore`)
Test files automatically load from .env.
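If you want the same behavior in your own scripts without adding a dependency on `python-dotenv`, a minimal `.env` loader can be sketched as follows (`load_env` is an illustrative stand-in, not part of the package):

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv(): read KEY=VALUE
    lines into os.environ, skipping blank lines and # comments.
    Existing environment variables are not overwritten."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

# Usage (assuming a .env file exists in the working directory):
# load_env()
# api_key = os.environ.get("GEMINI_API_KEY")
```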
Supported Formats
- `.json` - ARM templates or repo-export JSON files
- `.zip` - ADF export packages (may include `.txt` notes)
- folder path - ADF project directory
- `.txt` - Optional written explanations (AI grading only)
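Dispatching on these input types is straightforward; here is a hypothetical classifier mirroring the documented formats (not the package's own code):

```python
from pathlib import Path

def classify_submission(path: str) -> str:
    """Map a submission path to one of the supported input types:
    'folder', 'json', 'zip', or 'txt'."""
    p = Path(path)
    if p.is_dir():
        return "folder"
    suffix = p.suffix.lower()
    if suffix == ".json":
        return "json"
    if suffix == ".zip":
        return "zip"
    if suffix == ".txt":
        return "txt"
    raise ValueError(f"Unsupported submission format: {path}")
```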
AI Grading Evaluation
The system uses Gemini AI (default: gemini-2.0-flash-exp) for comprehensive evaluation including:
- Full ADF JSON documents (sanitized for obvious credential-like secrets)
- Metadata summary for faster navigation and cross-checking
- Optional companion `.txt` notes included alongside the JSON artifacts
- Architecture & Design: Pipeline organization, activity flow, component relationships
- Error Handling: Exception handling, retry logic, resilience patterns
- Parameterization: Runtime flexibility, hardcoded values, reusability
- Best Practices: Naming conventions, schema definitions, performance, security
- Completeness: All required components, connections, configurations
- Code Quality: JSON structure, documentation, clarity
Returns: Score (0-100) + detailed feedback with specific improvements.
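The "sanitized for obvious credential-like secrets" step can be sketched as key-based redaction over the JSON tree. This is a hypothetical illustration; the package's actual sanitization rules may differ:

```python
import re

# Keys that look credential-like (illustrative pattern, not the package's)
SECRET_KEY_PATTERN = re.compile(
    r"(password|secret|accountkey|connectionstring|sastoken|apikey)",
    re.IGNORECASE,
)

def sanitize(obj):
    """Recursively mask values whose dict keys match SECRET_KEY_PATTERN,
    leaving all other structure and values untouched."""
    if isinstance(obj, dict):
        return {
            k: "***REDACTED***" if SECRET_KEY_PATTERN.search(k) else sanitize(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [sanitize(v) for v in obj]
    return obj
```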
Security Validation
Safe ZIP extraction with:
- 200 file limit
- 50MB total size limit
- 10MB per-file limit
- Path traversal protection
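The checks above can be sketched with the standard-library `zipfile` module. A minimal illustration of the documented limits, not the package's exact implementation:

```python
import zipfile
from pathlib import Path

MAX_FILES = 200
MAX_TOTAL_BYTES = 50 * 1024 * 1024  # 50 MB total
MAX_FILE_BYTES = 10 * 1024 * 1024   # 10 MB per file

def safe_extract(zip_path: str, dest: str) -> None:
    """Extract a ZIP only if it satisfies the file-count, size, and
    path-traversal checks; otherwise raise ValueError."""
    dest_root = Path(dest).resolve()
    with zipfile.ZipFile(zip_path) as zf:
        infos = zf.infolist()
        if len(infos) > MAX_FILES:
            raise ValueError("too many files in archive")
        if sum(i.file_size for i in infos) > MAX_TOTAL_BYTES:
            raise ValueError("archive too large when extracted")
        for info in infos:
            if info.file_size > MAX_FILE_BYTES:
                raise ValueError(f"file too large: {info.filename}")
            # Reject entries that would escape the destination directory
            target = (dest_root / info.filename).resolve()
            if not target.is_relative_to(dest_root):  # Python 3.9+
                raise ValueError(f"path traversal attempt: {info.filename}")
        zf.extractall(dest_root)
```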
Custom Rubrics
Define custom prompts for specialized evaluations:
```python
from ADFMentor import ADFMentor
import os

mentor = ADFMentor(api_key=os.getenv("GEMINI_API_KEY"))

custom_prompts = {
    "pipeline": """Evaluate pipeline logic correctness (40 pts), activity selection (30 pts),
performance optimization (20 pts), and completeness (10 pts).
Provide concise feedback.""",
    "architecture": """Evaluate architecture design (40 pts), error handling (30 pts),
security practices (20 pts), and documentation (10 pts).
Provide concise feedback.""",
}

result = mentor.evaluate_adf(
    submission_path="submission.zip",
    prompt=custom_prompts["pipeline"],
    question="Evaluate the pipeline"
)
```
Custom prompts are optional; the default AI grading rubric is recommended for most use cases.
Project Structure
```
ADFMentor/
    __init__.py
    ADFMentorInternal/
        core.py
        models/
            gemini.py
        utils/
            adf_validator.py
            adf_processor.py
            adf_evaluator.py
```
API
ADFMentor
`ADFMentor(api_key, model_name="gemini-2.0-flash-exp")`

`evaluate_adf(submission_path, prompt=None, question=None)`

Returns:
```python
{
    "score": 0,
    "feedback": "...",
    "metadata_report": "..."
}
```
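Downstream code can handle the returned mapping defensively. A small hypothetical helper (not part of the package) that validates the documented keys and renders a short report:

```python
def summarize(result: dict) -> str:
    """Check that an evaluate_adf()-style result has the documented
    keys, then format it as a short report string."""
    missing = {"score", "feedback", "metadata_report"} - result.keys()
    if missing:
        raise KeyError(f"result missing keys: {sorted(missing)}")
    return f"Score {result['score']}/100\n{result['feedback']}"
```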
Notes
- For `.txt` submissions, provide `prompt` to enable AI grading
- For `.zip`/folder submissions, `.txt` files are treated as optional companion notes
License
MIT
File details
Details for the file adfmentor-1.0.2.tar.gz.
File metadata
- Download URL: adfmentor-1.0.2.tar.gz
- Upload date:
- Size: 28.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f5937115077fcdb440e095b87fb22080f12e04d461f586b10e936165341cb8e8 |
| MD5 | 8f4a9a0de49ac8e9123239d4e21f5c9a |
| BLAKE2b-256 | b0f86270fc8b0960206fd353b83f31a7c9138755494f2df3ababe07811fa2a6f |
File details
Details for the file adfmentor-1.0.2-py3-none-any.whl.
File metadata
- Download URL: adfmentor-1.0.2-py3-none-any.whl
- Upload date:
- Size: 19.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9c233ffcdfb11a60bd699e43ad2aa09ea31491942d8997a3731cd72f3704b760 |
| MD5 | 5831bb82f95ce545333cab2a61dc83ee |
| BLAKE2b-256 | cd62be18349c339b6a84e451ff595bfa1149516268d522d003c278266e3a6751 |