Prompter
Orchestrate AI-powered code maintenance at scale
A Python tool for orchestrating AI-powered code maintenance workflows using Claude Code SDK.
📚 Resources: GitHub Repository | Examples | System Prompt
Requirements
- Python 3.11 or higher
- Claude Code SDK
Installation
Install from PyPI:
pip install claude-code-prompter
Or install from source:
# Install the package
pip install -e .
# Install with development dependencies
pip install -e ".[dev]"
How It Works
Prompter supports two execution modes:
1. Sequential Execution (Default)
Tasks execute one after another in the order they're defined. Use on_success = "next" and on_failure = "retry" for traditional sequential workflows.
[[tasks]]
name = "lint"
on_success = "next" # Continue to the next task in order
[[tasks]]
name = "test"
on_success = "next" # Continue to the next task
[[tasks]]
name = "build"
on_success = "stop" # End execution
2. Conditional Workflows (Task Jumping)
Tasks can jump to specific named tasks, enabling complex branching logic. Perfect for error handling, conditional deployments, and dynamic workflows.
[[tasks]]
name = "build"
on_success = "test" # Jump to 'test' task
on_failure = "fix_build" # Jump to 'fix_build' on failure
[[tasks]]
name = "fix_build"
on_success = "build" # Retry build after fixing
⚠️ Warning: Infinite Loop Protection
When using task jumping, be careful not to create infinite loops. Prompter automatically detects and prevents infinite loops by tracking executed tasks. If a task tries to execute twice in the same run, it will be skipped with a warning. Always ensure your task flows have a clear termination condition.
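As an illustration (task and command names here are hypothetical), a bounded fix-and-retry flow with a clear termination condition might look like:

```toml
[[tasks]]
name = "run_tests"
prompt = "Run the test suite and summarize any failures"
verify_command = "pytest"
on_success = "stop"       # clear termination condition
on_failure = "fix_tests"  # jump to the fix task

[[tasks]]
name = "fix_tests"
prompt = "Fix the failing tests"
verify_command = "pytest"
on_success = "stop"       # done once tests pass
on_failure = "stop"       # give up rather than loop back to run_tests
max_attempts = 2
```

Because neither task jumps back to a task that has already run, the flow always terminates, with or without loop protection.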
AI-Powered Project Analysis (New in v0.7.0)
Prompter can now analyze your project using Claude and automatically generate a customized configuration file tailored to your specific codebase.
How It Works
The --init command:
- Scans your project to detect languages, frameworks, and tools
- Analyzes code quality to identify improvement opportunities
- Generates specific tasks based on your project's needs
- Creates a ready-to-use configuration with proper verification commands
Examples
# Analyze current directory and generate prompter.toml
prompter --init
# Generate configuration with a custom name
prompter --init my-workflow.toml
Supported Languages
The AI analyzer can detect and generate configurations for:
- Python (pytest, mypy, ruff, black)
- JavaScript/TypeScript (jest, eslint, prettier)
- Rust (cargo test, clippy, rustfmt)
- Go (go test, golint, gofmt)
- And more...
What Gets Analyzed
- Build Systems: make, npm, cargo, gradle, etc.
- Test Frameworks: pytest, jest, cargo test, go test, etc.
- Linters: ruff, eslint, clippy, golint, etc.
- Type Checkers: mypy, tsc, etc.
- Code Issues: failing tests, linting errors, type issues
- Security: outdated dependencies, known vulnerabilities
- Documentation: missing docstrings, outdated READMEs
Quick Start
1. Let AI analyze your project and generate a customized configuration:
   prompter --init
   This will:
   - Detect your project's language and tools automatically
   - Identify specific issues that need fixing
   - Generate tasks tailored to your codebase
2. Review and customize the generated configuration (prompter.toml):
   - The AI will show you what it found and ask for confirmation
   - You can modify task prompts and commands as needed
   - Adjust retry settings and flow control
3. Test your configuration with a dry run:
   prompter prompter.toml --dry-run
4. Run the tasks when ready:
   prompter prompter.toml
Usage
Basic Commands
# AI-powered configuration generation (analyzes your project)
prompter --init # Analyze project and create prompter.toml
prompter --init my-config.toml # Create with custom name
# Run all tasks from a configuration file
prompter config.toml
# Dry run to see what would be executed without making changes
prompter config.toml --dry-run
# Run a specific task by name
prompter config.toml --task fix_warnings
# Check current status and progress
prompter --status
# Clear saved state for a fresh start
prompter --clear-state
# Enable verbose output for debugging
prompter config.toml --verbose
# Enable extensive diagnostic logging (new in v0.3.0)
prompter config.toml --debug
# Save logs to a file
prompter config.toml --log-file debug.log
# Combine debug mode with log file for comprehensive diagnostics
prompter config.toml --debug --log-file debug.log
Common Use Cases
1. Code Modernization
# Create a config file for updating deprecated APIs
cat > modernize.toml << EOF
[settings]
working_directory = "/path/to/your/project"
[[tasks]]
name = "update_apis"
prompt = "Update all deprecated API calls to their modern equivalents"
verify_command = "python -m py_compile *.py"
on_success = "next"
on_failure = "retry"
max_attempts = 2
[[tasks]]
name = "add_type_hints"
prompt = "Add missing type hints to all functions and methods"
verify_command = "mypy --strict ."
on_success = "stop"
EOF
# Run the modernization
prompter modernize.toml
2. Documentation Updates
# Keep docs in sync with code changes
cat > docs.toml << EOF
[[tasks]]
name = "update_docstrings"
prompt = "Update all docstrings to match current function signatures and behavior"
verify_command = "python -m doctest -v *.py"
[[tasks]]
name = "update_readme"
prompt = "Update README.md to reflect recent API changes and new features"
verify_command = "markdownlint README.md"
EOF
prompter docs.toml --dry-run # Preview changes first
prompter docs.toml # Apply changes
3. Code Quality Improvements
# Fix linting issues and improve code quality
cat > quality.toml << EOF
[[tasks]]
name = "fix_linting"
prompt = "Fix all linting errors and warnings reported by flake8 and pylint"
verify_command = "flake8 . && pylint *.py"
on_failure = "retry"
max_attempts = 3
[[tasks]]
name = "improve_formatting"
prompt = "Improve code formatting and add missing blank lines for better readability"
verify_command = "black --check ."
EOF
prompter quality.toml
State Management
Prompter automatically tracks your progress:
# Check what's been completed
prompter --status
# Example output:
# Session ID: 1703123456
# Total tasks: 3
# Completed: 2
# Failed: 0
# Running: 0
# Pending: 1
# Resume from where you left off
prompter config.toml # Automatically skips completed tasks
# Start fresh if needed
prompter --clear-state
prompter config.toml
Advanced Configuration
Task Dependencies and Flow Control
[settings]
working_directory = "/path/to/project"
check_interval = 30
max_retries = 3
# Task that stops on failure
[[tasks]]
name = "critical_fixes"
prompt = "Fix any critical security vulnerabilities"
verify_command = "safety check"
on_failure = "stop" # Don't continue if this fails
max_attempts = 1
# Task that continues despite failures
[[tasks]]
name = "optional_cleanup"
prompt = "Remove unused imports and variables"
verify_command = "autoflake --check ."
on_failure = "next" # Continue to next task even if this fails
# Task with custom timeout
[[tasks]]
name = "slow_operation"
prompt = "Refactor large legacy module"
verify_command = "python -m unittest discover"
timeout = 600 # 10 minutes - task will be terminated if it exceeds this
# Task without timeout (runs until completion)
[[tasks]]
name = "thorough_analysis"
prompt = "Perform comprehensive security audit"
verify_command = "security-scan --full"
# No timeout specified - Claude Code runs without time limit
Task Jumping and Conditional Workflows
# Jump to specific tasks based on success/failure
[[tasks]]
name = "build"
prompt = "Build the project"
verify_command = "test -f dist/app.js"
on_success = "test" # Jump to 'test' task on success
on_failure = "fix_build" # Jump to 'fix_build' task on failure
[[tasks]]
name = "fix_build"
prompt = "Fix build errors and warnings"
verify_command = "test -f dist/app.js"
on_success = "test" # Jump back to 'test' after fixing
on_failure = "stop" # Stop if we can't fix the build
max_attempts = 2
[[tasks]]
name = "test"
prompt = "Run the test suite"
verify_command = "npm test"
on_success = "deploy" # Continue to deploy
on_failure = "fix_tests" # Jump to fix_tests on failure
[[tasks]]
name = "fix_tests"
prompt = "Fix failing tests"
verify_command = "npm test"
on_success = "deploy" # Continue to deploy after fixing
on_failure = "stop" # Stop if tests can't be fixed
max_attempts = 1
[[tasks]]
name = "deploy"
prompt = "Deploy to staging environment"
verify_command = "curl -f http://staging.example.com/health"
on_success = "stop" # All done!
on_failure = "rollback" # Jump to rollback on failure
[[tasks]]
name = "rollback"
prompt = "Rollback the deployment"
verify_command = "curl -f http://staging.example.com/health"
on_success = "stop"
on_failure = "stop"
This creates a workflow where:
- Build failures jump to a fix task, then retry testing
- Test failures jump to a fix task, then continue to deployment
- Deployment failures trigger a rollback
- Tasks are skipped if not referenced in the flow
⚠️ Avoiding Infinite Loops
When designing conditional workflows, be mindful of potential infinite loops:
Bad Example (Infinite Loop):
[[tasks]]
name = "task_a"
on_success = "task_b"
[[tasks]]
name = "task_b"
on_success = "task_a" # Creates infinite loop!
Good Example (With Exit Condition):
[[tasks]]
name = "retry_task"
prompt = "Try to fix the issue"
verify_command = "test -f success_marker"
on_success = "next" # Exit the loop on success
on_failure = "retry_task" # Retry on failure
max_attempts = 1 # Important: limits retries per execution
Loop Protection: By default, Prompter prevents infinite loops by tracking which tasks have been executed. If a task attempts to run twice in the same session, it will be skipped with a warning log.
Allowing Infinite Loops: For use cases like continuous monitoring or polling, you can enable infinite loops:
[settings]
allow_infinite_loops = true
[[tasks]]
name = "monitor"
prompt = "Check system status"
verify_command = "systemctl is-active myservice"
on_success = "wait"
on_failure = "alert"
[[tasks]]
name = "wait"
prompt = "Wait before next check"
verify_command = "sleep 60"
on_success = "monitor" # Loop back to monitoring
When allow_infinite_loops = true, tasks can execute multiple times. A safety limit of 1000 iterations prevents runaway loops.
Multiple Project Workflow
# Process multiple projects in sequence
for project in project1 project2 project3; do
cd "$project"
prompter ../shared-config.toml --verbose
cd ..
done
Configuration
Create a TOML configuration file with your tasks:
[settings]
check_interval = 30
max_retries = 3
working_directory = "/path/to/project"
[[tasks]]
name = "fix_warnings"
prompt = "Fix all compiler warnings in the codebase"
verify_command = "make test"
verify_success_code = 0
on_success = "next"
on_failure = "retry"
max_attempts = 3
timeout = 300
Configuration Reference
Settings (Optional)
- working_directory: Base directory for command execution (default: current directory)
- check_interval: Seconds to wait between task completion and verification (default: 3600)
- max_retries: Global retry limit for all tasks (default: 3)
- allow_infinite_loops: Allow tasks to execute multiple times in the same run (default: false)
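For reference, a settings block that sets all four options explicitly (the path is a placeholder) might look like:

```toml
[settings]
working_directory = "/path/to/project"  # base directory for command execution
check_interval = 30                     # seconds between task completion and verification
max_retries = 3                         # global retry limit for all tasks
allow_infinite_loops = false            # skip tasks that already ran in this session
```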
Task Fields
- name (required): Unique identifier for the task. Cannot use reserved words: next, stop, retry, repeat
- prompt (required): Instructions for Claude Code to execute
- verify_command (required): Shell command to verify task success
- verify_success_code: Expected exit code for success (default: 0)
- on_success: Action when task succeeds - "next", "stop", "repeat", or any task name (default: "next")
- on_failure: Action when task fails - "retry", "stop", "next", or any task name (default: "retry")
- max_attempts: Maximum retry attempts for this task (default: 3)
- timeout: Task timeout in seconds (optional, no timeout if not specified)
Note on Task Jumping: When using task names in on_success or on_failure, ensure your workflow has exit conditions. Prompter skips tasks that have already executed in the same run, which prevents infinite loops.
Environment Variables
Prompter supports the following environment variable for additional configuration:
- PROMPTER_INIT_TIMEOUT: Timeout in seconds for AI analysis during the --init command (default: 120)
# Increase the timeout for large projects
PROMPTER_INIT_TIMEOUT=300 prompter --init
# Set a shorter timeout for smaller projects
PROMPTER_INIT_TIMEOUT=60 prompter --init
Examples and Templates
The project includes ready-to-use workflow templates in the examples/ directory:
- examples/bdd-workflow.toml: Automated BDD scenario implementation
- examples/refactor-codebase.toml: Safe code refactoring with testing
- examples/security-audit.toml: Security scanning and remediation
Find these examples in the GitHub repository.
AI-Assisted Configuration Generation
For complex workflows, you can use AI assistance to generate TOML configurations. We provide a comprehensive system prompt that helps AI assistants understand all the intricacies of the prompter tool.
Using the System Prompt
1. Get the system prompt from the GitHub repository
2. Ask your AI assistant (Claude, ChatGPT, etc.):
   [Paste the system prompt]
   Now create a prompter TOML configuration for: [describe your workflow]
3. The AI will generate a properly structured TOML that:
   - Breaks down complex tasks to avoid JSON parsing issues
   - Uses appropriate verification commands
   - Implements proper error handling
   - Follows best practices for the tool
4. Validate the generated TOML:
   # Test configuration without executing anything
   prompter generated-config.toml --dry-run
   # This will:
   # - Validate TOML syntax
   # - Check all required fields
   # - Display what would be executed
   # - Show any configuration errors
Important: Avoiding Claude SDK Limitations
The Claude SDK currently has a JSON parsing bug with large responses. To avoid this:
- Keep prompts focused and concise - Each task should have a single, clear objective
- Break complex workflows into smaller tasks - This is better for reliability anyway
- Avoid asking Claude to echo large files - Use specific, targeted instructions
- Use the --debug flag to see detailed error messages if you encounter issues
Example of breaking down a complex task:
❌ Bad (too complex, might fail):
[[tasks]]
name = "refactor_everything"
prompt = """
Analyze the entire codebase, identify all issues, fix all problems,
update all tests, improve documentation, and commit everything.
"""
✅ Good (focused tasks):
[[tasks]]
name = "analyze_code"
prompt = "Identify the top 3 refactoring opportunities in the codebase"
verify_command = "test -f refactoring_plan.md"
[[tasks]]
name = "refactor_duplicates"
prompt = "Extract the most common duplicate code into shared utilities"
verify_command = "python -m py_compile **/*.py"
[[tasks]]
name = "run_tests"
prompt = "Run all tests and report any failures"
verify_command = "pytest"
Troubleshooting
Common Issues
- "JSONDecodeError: Unterminated string" - Your prompt is generating responses that are too large
  - Solution: Break the task into smaller, focused prompts
  - Use --debug to see the full error details
- Task keeps retrying - The verify_command might not be testing the right thing
  - Solution: Ensure verify_command actually validates what the task accomplished
- "State file corrupted" - Rare issue with interrupted execution
  - Solution: Run prompter --clear-state to start fresh
- "Unescaped '\' in a string" - TOML parsing error with backslashes in strings
  - Solution: In TOML, backslashes must be escaped. Use one of these approaches:
    - Double backslashes: path = "C:\\Users\\name\\project"
    - Single quotes (literal string): path = 'C:\Users\name\project'
    - Triple quotes (multi-line literal): path = '''C:\Users\name\project'''
  - The error message shows the exact line and column with helpful context
Debug Mode
Run with extensive logging to diagnose issues:
prompter config.toml --debug --log-file debug.log
This provides:
- Detailed execution traces
- Claude SDK interaction logs
- State transition information
- Timing data for each operation
License
MIT
╭──────────────────────────────╮
│  >  ───  • • •  ───  ✔       │
│     prompt   tasks   verify  │
╰──────────────────────────────╯