Galangal Orchestrate

AI-driven development workflow orchestrator: turn AI coding assistants into structured development workflows.
Galangal Orchestrate wraps Claude Code CLI to execute a deterministic, multi-stage development pipeline. Instead of open-ended AI coding sessions, you get a structured workflow with approval gates, validation, and automatic rollback.
Why Use This?
When you ask an AI to "add user authentication", you get whatever the AI decides to build. With Galangal:
- PM Stage - AI writes requirements, you approve before any code is written
- Design Stage - AI proposes architecture, you approve the approach
- Dev Stage - AI implements according to approved specs
- Test Stage - AI writes tests, validation ensures they pass
- QA Stage - AI verifies requirements are met
- Review Stage - AI reviews its own code for issues
- Docs Stage - AI updates documentation
If anything fails, the workflow automatically rolls back to the appropriate fix point with context about what went wrong.
Workflow Architecture
```mermaid
flowchart TD
    START([Start Task]) --> PM[PM Stage]
    PM --> PM_GATE{Plan<br/>Approved?}
    PM_GATE -->|Yes| DESIGN[Design Stage]
    PM_GATE -->|No| PM
    DESIGN --> DESIGN_GATE{Design<br/>Approved?}
    DESIGN_GATE -->|Yes| PREFLIGHT[Preflight]
    DESIGN_GATE -->|No| DESIGN
    PREFLIGHT --> DEV[Development]
    DEV --> DEV_VAL{Validation<br/>Passes?}
    DEV_VAL -->|Yes| MIGRATION
    DEV_VAL -->|No| DEV
    MIGRATION[Migration*] --> TEST[Test Stage]
    TEST --> TEST_VAL{Tests<br/>Pass?}
    TEST_VAL -->|Yes| TEST_GATE
    TEST_VAL -->|No| DEV
    TEST_GATE[Test Gate*] --> CONTRACT[Contract*]
    CONTRACT --> QA[QA Stage]
    QA --> BENCHMARK[Benchmark*]
    BENCHMARK --> SECURITY[Security]
    SECURITY --> REVIEW[Review]
    REVIEW --> DOCS[Documentation]
    DOCS --> COMPLETE([Complete])

    %% Rollback paths
    QA -.->|Fail| DEV
    SECURITY -.->|Fail| DEV
    REVIEW -.->|Fail| DEV

    style PM fill:#e1f5fe
    style DESIGN fill:#e1f5fe
    style DEV fill:#fff3e0
    style TEST fill:#fff3e0
    style QA fill:#e8f5e9
    style REVIEW fill:#e8f5e9
    style DOCS fill:#e8f5e9
```
*Conditional stages - skipped automatically if not relevant to the task.
Requirements
- Python 3.10+
- Claude Code CLI installed (`claude` command available)
- Claude Pro or Max subscription
- Git
Installation
```bash
pip install galangal-orchestrate

# With pipx (recommended for CLI tools)
pipx install galangal-orchestrate
```
The install includes mistake tracking - a feature that remembers common AI errors in your repo and warns about them in future tasks. It uses PyTorch for local embeddings (~2GB total install size).
Quick Start
```bash
# Initialize in your project
cd your-project
galangal init

# Verify your setup (optional but recommended)
galangal doctor

# Start a task
galangal start "Add user authentication with JWT tokens"

# Resume after a break
galangal resume

# Check current status
galangal status
```
Verifying Your Setup
Run `galangal doctor` to verify your environment is properly configured:
```
$ galangal doctor

Galangal Doctor v0.17.1

✓ Python 3.10+: 3.12.1
✓ Git installed: 2.43.0
✓ Git configured: Your Name <you@example.com>
✓ Claude CLI: claude-code 1.0.0
⚠ GitHub CLI: Not installed (optional)
✓ Config file: Valid (my-project)
✓ Tasks directory: Writable (galangal-tasks/)
✓ Mistake tracking: Enabled

All required checks passed (1 optional warning)
```
This checks Python version, Claude CLI, Git configuration, and optional features like GitHub CLI.
Workflow Stages
| Stage | Purpose | Output |
|---|---|---|
| PM | Requirements & planning | SPEC.md, PLAN.md, STAGE_PLAN.md |
| DESIGN | Architecture design | DESIGN.md |
| PREFLIGHT | Environment validation | PREFLIGHT_REPORT.md |
| DEV | Implementation | Code changes |
| MIGRATION* | Database migration checks | MIGRATION_REPORT.md |
| TEST | Test implementation | TEST_PLAN.md, TEST_SUMMARY.md |
| TEST_GATE* | Verify configured test suites pass | TEST_GATE_RESULTS.md |
| CONTRACT* | API contract validation | CONTRACT_REPORT.md |
| QA | Quality assurance | QA_REPORT.md |
| BENCHMARK* | Performance validation | BENCHMARK_REPORT.md |
| SECURITY | Security review | SECURITY_CHECKLIST.md |
| REVIEW | Code review | REVIEW_NOTES.md |
| DOCS | Documentation | DOCS_REPORT.md |
*Conditional stages - skipped automatically if not relevant
Validation Artifacts
When validation commands run (tests, linters, etc.), Galangal creates debugging artifacts:
- VALIDATION_REPORT.md - Full output from all validation commands, useful for debugging failures
- TEST_SUMMARY.md - Concise test results (pass/fail counts, failed test names, coverage) included in downstream stage prompts
These artifacts help you understand what failed without digging through logs, and give downstream stages (QA, REVIEW) context about test results without bloating prompts with verbose output.
Test Gate
The TEST_GATE stage is an optional quality gate that runs configured test suites mechanically (no AI). It runs after the TEST stage and before QA. All configured tests must pass for the workflow to proceed.
Why use Test Gate?
- Ensures specific test suites always pass before QA
- Separates "writing tests" (TEST stage) from "verifying tests pass" (TEST_GATE)
- QA can skip running automated tests and focus on exploratory testing
- Provides a clear, repeatable verification step
Configuration:
```yaml
# .galangal/config.yaml
test_gate:
  enabled: true
  fail_fast: true  # Stop on first failure (default: true)
  tests:
    - name: "unit tests"
      command: "npm test"
      timeout: 300  # Optional, defaults to 5 minutes
    - name: "integration tests"
      command: "pytest tests/integration -v"
    - name: "e2e tests"
      command: "cd frontend && npm run e2e"
      timeout: 600  # 10 minutes for slower tests
```
Behavior:
- Runs each test command in sequence
- Creates `TEST_GATE_RESULTS.md` with detailed output
- On success: proceeds to the CONTRACT/QA stages
- On failure: rolls back to DEV with context about which tests failed
- QA prompt is automatically updated to skip re-running these tests

Skip conditions:
- `test_gate.enabled: false` (the default)
- No tests configured in `test_gate.tests`
- DOCS task type (no code changes)
- Manual skip artifact (`TEST_GATE_SKIP.md`)
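Conceptually, the mechanical runner loops over the configured suites and stops early when `fail_fast` is set. A sketch under those assumptions (illustrative only, not Galangal's actual code):

```python
import subprocess

def run_test_gate(tests, fail_fast=True):
    """Run configured suites in sequence; return (passed, results)."""
    results = []
    for test in tests:
        proc = subprocess.run(
            test["command"], shell=True, capture_output=True, text=True,
            timeout=test.get("timeout", 300),  # default: 5 minutes
        )
        ok = proc.returncode == 0
        results.append((test["name"], ok))
        if not ok and fail_fast:
            break  # remaining suites are skipped
    passed = all(ok for _, ok in results)
    return passed, results

passed, results = run_test_gate([
    {"name": "always passes", "command": "exit 0"},
    {"name": "always fails", "command": "exit 1"},
    {"name": "never reached", "command": "exit 0"},
])
```

With `fail_fast` enabled, the third suite never runs; the gate reports failure and the workflow would roll back to DEV.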
Task Types
Choose the right workflow for your task:
| Type | Stages | When to Use |
|---|---|---|
| Feature | All stages | New functionality |
| Bug Fix | PM → PREFLIGHT → DEV → TEST → TEST_GATE → QA → REVIEW → SUMMARY | Fixing bugs |
| Refactor | PM → DESIGN → PREFLIGHT → DEV → TEST → TEST_GATE → REVIEW → SUMMARY | Code restructuring |
| Chore | PM → PREFLIGHT → DEV → TEST → TEST_GATE → REVIEW → SUMMARY | Config, dependencies |
| Docs | PM → DOCS → SUMMARY | Documentation only |
| Hotfix | PM → DEV → TEST → TEST_GATE → SUMMARY | Critical fixes |
The PM stage can further customize which stages run based on task analysis.
Interactive Controls
During workflow execution:
| Key | Action | Description |
|---|---|---|
| `^Q` | Quit | Pause and exit (resume later with `galangal resume`) |
| `^I` | Interrupt | Stop current stage, give feedback, rollback to DEV |
| `^N` | Skip | Skip current stage, advance to next |
| `^B` | Back | Go back to previous stage |
| `^E` | Edit | Pause for manual editing, press Enter to resume |
Interrupt with Feedback (^I)
When you see the AI doing something wrong mid-stage:
1. Press `^I` to interrupt immediately
2. Enter feedback describing what needs to be fixed
3. The workflow rolls back to DEV with your feedback as context
4. A `ROLLBACK.md` artifact is created for the AI to reference
Manual Edit Pause (^E)
Need to make a quick fix yourself?
1. Press `^E` to pause
2. Make edits in your editor
3. Press Enter to resume the current stage
PM-driven Stage Planning
After analyzing your task, the PM stage outputs a STAGE_PLAN.md recommending which optional stages to run or skip:
```markdown
# Stage Plan

## Recommendations

| Stage | Action | Reason |
|-------|--------|--------|
| MIGRATION | skip | No database changes detected |
| CONTRACT | skip | Internal refactor, no API changes |
| SECURITY | run | Handling user authentication input |
| BENCHMARK | skip | UI-only changes, no performance impact |
```
The progress bar updates dynamically to show only relevant stages.
Workflow Preview
After PM approval, you'll see a preview showing exactly which stages will run and why others are skipped:
```
Workflow Preview

Stages to run:
  PM → DESIGN → PREFLIGHT → DEV → TEST → QA → REVIEW → DOCS

Skipping:
  MIGRATION (no files match: **/migrations/*)
  CONTRACT (no files match: **/api/*, **/openapi.*)
  BENCHMARK (task type: bug_fix)
  SECURITY (PM: simple UI change, no security impact)

Controls during execution:
  ^N Skip stage   ^B Back   ^E Pause for edit   ^I Interrupt
```
Skip reasons include:
- Task type - Based on the workflow template (e.g., bug fixes skip DESIGN)
- Config - Stages listed in the `stages.skip` configuration
- PM recommendation - From STAGE_PLAN.md analysis
- skip_if condition - No changed files match the glob pattern
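Conceptually, a skip_if check matches changed files against the configured globs. This sketch is illustrative only (real glob semantics, e.g. for `**`, may differ from `fnmatch`):

```python
from fnmatch import fnmatch

def should_skip(changed_files, no_files_match):
    """Skip the stage when no changed file matches any glob pattern."""
    patterns = ([no_files_match] if isinstance(no_files_match, str)
                else no_files_match)
    return not any(fnmatch(f, p) for f in changed_files for p in patterns)

changed = ["src/ui/Button.tsx", "src/ui/Button.test.tsx"]
# UI-only change: no migration files touched, so MIGRATION is skipped
skip_migration = should_skip(changed, ["migrations/**", "**/migrations/**"])
# An OpenAPI file changed, so CONTRACT runs
skip_contract = should_skip(["api/openapi.yaml"], "**/openapi.*")
```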
Commands
| Command | Description |
|---|---|
| `galangal init` | Initialize in current project |
| `galangal start "desc"` | Start new task |
| `galangal list` | List all tasks |
| `galangal switch <name>` | Switch active task |
| `galangal status` | Show task status |
| `galangal resume` | Continue active task |
| `galangal pause` | Pause for break |
| `galangal approve` | Approve plan |
| `galangal approve-design` | Approve design |
| `galangal skip-design` | Skip design stage |
| `galangal skip-to <stage>` | Jump to stage |
| `galangal complete` | Finalize & create PR |
| `galangal reset` | Delete active task |
| `galangal github setup` | Set up GitHub integration |
| `galangal github issues` | List galangal-labeled issues |
| `galangal github run` | Process issues automatically |
GitHub Integration
Galangal can create tasks directly from GitHub issues, automatically downloading screenshots and inferring task types from labels.
Quick Setup
```bash
# 1. Install GitHub CLI (if not already installed)
# macOS:
brew install gh
# Windows:
winget install GitHub.cli
# Linux: See https://cli.github.com

# 2. Authenticate
gh auth login

# 3. Set up GitHub integration (creates labels)
galangal github setup
```
How It Works
1. Add the `galangal` label to any GitHub issue you want to work on
2. Run `galangal start` and select "GitHub issue" as the task source
3. Galangal will:
   - Download any screenshots from the issue body
   - Infer the task type from issue labels
   - Create a task linked to the issue
   - Mark the issue as "in-progress"
When you complete the task with galangal complete, a PR is created that automatically closes the linked issue.
Batch Processing
Process all galangal-labeled issues automatically:
```bash
# List issues that would be processed
galangal github run --dry-run

# Process all issues headlessly
galangal github run
```
Issue Screenshots
Screenshots embedded in GitHub issues (using Markdown image syntax) are automatically:
- Downloaded to `galangal-tasks/<task>/screenshots/`
- Passed to the AI during PM, Design, and Dev stages
- Available for the AI to view using Claude's Read tool
This is especially useful for bug reports with screenshots or design mockups.
Label Configuration
Galangal maps GitHub labels to task types. The defaults are:
| Task Type | Labels |
|---|---|
| bug_fix | bug, bugfix |
| feature | enhancement, feature |
| docs | documentation, docs |
| refactor | refactor |
| chore | chore, maintenance |
| hotfix | hotfix, critical |
Customize in .galangal/config.yaml:
```yaml
github:
  # Label that triggers galangal to pick up issues
  pickup_label: galangal

  # Label added when work starts
  in_progress_label: in-progress

  # Custom label colors (hex without #)
  label_colors:
    galangal: "7C3AED"
    in-progress: "FCD34D"

  # Map your labels to task types
  label_mapping:
    bug:
      - bug
      - bugfix
      - defect  # Add your custom labels
    feature:
      - enhancement
      - feature
      - new-feature
    docs:
      - documentation
      - docs
    refactor:
      - refactor
      - tech-debt
    chore:
      - chore
      - maintenance
      - dependencies
    hotfix:
      - hotfix
      - critical
      - urgent
```
GitHub Commands
| Command | Description |
|---|---|
| `galangal github setup` | Create required labels, show setup instructions |
| `galangal github setup --help-install` | Show detailed gh CLI installation instructions |
| `galangal github check` | Verify gh CLI installation and authentication |
| `galangal github issues` | List issues with galangal label |
| `galangal github issues --label <name>` | List issues with custom label |
| `galangal github run` | Process all labeled issues headlessly |
| `galangal github run --dry-run` | Preview without processing |
Configuration
After galangal init, customize .galangal/config.yaml. Here's a complete reference:
```yaml
# =============================================================================
# PROJECT CONFIGURATION
# =============================================================================
project:
  # Project name (displayed in logs and prompts)
  name: "My Project"

  # Default approver name for plan/design approvals (auto-fills signoff prompts)
  approver_name: "Jane Smith"

  # Technology stacks in your project
  # Helps AI understand your codebase structure
  stacks:
    - language: python
      framework: fastapi  # Optional: framework name
      root: backend/      # Optional: subdirectory for this stack
    - language: typescript
      framework: vite
      root: frontend/

# =============================================================================
# TASK STORAGE
# =============================================================================
# Directory where task state and artifacts are stored
tasks_dir: galangal-tasks

# Git branch naming pattern ({task_name} is replaced with sanitized task name)
branch_pattern: "task/{task_name}"

# =============================================================================
# STAGE CONFIGURATION
# =============================================================================
stages:
  # Stages to always skip (regardless of task type or PM recommendations)
  skip:
    - BENCHMARK
    - CONTRACT

  # Default timeout for each stage in seconds (4 hours default)
  timeout: 14400

  # Maximum retries per stage before rollback (default: 5)
  max_retries: 5

# =============================================================================
# TEST GATE CONFIGURATION
# Mechanical test verification stage (no AI) - runs after TEST, before QA
# =============================================================================
test_gate:
  # Enable the test gate stage (default: false)
  enabled: true

  # Stop on first test failure instead of running all tests (default: true)
  fail_fast: true

  # Test suites to run - all must pass for the stage to succeed
  tests:
    - name: "unit tests"
      command: "npm test"
      timeout: 300  # Timeout in seconds (default: 300)
    - name: "integration tests"
      command: "pytest tests/integration -v"
    - name: "e2e tests"
      command: "cd frontend && npm run e2e"
      timeout: 600  # Longer timeout for e2e tests

# =============================================================================
# VALIDATION CONFIGURATION
# Each stage can have validation commands, checks, and skip conditions
# =============================================================================
validation:
  # Preflight checks run before DEV stage
  preflight:
    timeout: 300  # Timeout for each check in seconds
    checks:
      - name: "Git status clean"
        command: "git status --porcelain"
        expect_empty: true  # Pass if output is empty
        warn_only: false    # If true, warn but don't fail
      - name: "Node modules exist"
        path_exists: "node_modules"  # Check if path exists
      - name: "Dependencies installed"
        command: "npm ls --depth=0"
        warn_only: true

  # Migration stage validation
  migration:
    # Skip if no migration files changed
    skip_if:
      no_files_match:
        - "migrations/**"
        - "**/migrations/**"
        - "alembic/**"
    timeout: 600
    commands:
      - name: "Run migrations"
        command: "python manage.py migrate --check"
        timeout: 300          # Override timeout for this command
        optional: false       # If true, don't fail if command fails
        allow_failure: false  # If true, report but don't block

  # Test stage validation
  test:
    timeout: 600
    commands:
      - name: "Unit tests"
        command: "pytest tests/unit"
      - name: "Integration tests"
        command: "pytest tests/integration"
        optional: true  # Don't fail if integration tests missing
      # Use array form for paths with spaces or special characters
      - name: "Task-specific tests"
        command: ["pytest", "{task_dir}/tests"]  # {task_dir} is substituted

  # Contract stage (API compatibility)
  contract:
    skip_if:
      no_files_match: "openapi.yaml"
    timeout: 300
    commands:
      - name: "Validate OpenAPI spec"
        command: "openapi-spec-validator openapi.yaml"

  # QA stage validation
  qa:
    timeout: 3600
    commands:
      - name: "Lint"
        command: "./scripts/lint.sh"
        timeout: 600
      - name: "Type check"
        command: "mypy src/"
        timeout: 600
    # Marker-based validation (for AI output verification)
    artifact: "QA_REPORT.md"
    pass_marker: "## PASS"
    fail_marker: "## FAIL"

  # Security stage validation
  security:
    timeout: 1800
    commands:
      - name: "Security scan"
        command: "bandit -r src/"
        allow_failure: true  # Report issues but don't block
    artifacts_required:
      - "SECURITY_CHECKLIST.md"

  # Review stage validation
  review:
    timeout: 1800
    artifact: "REVIEW_NOTES.md"
    pass_marker: "APPROVED"
    fail_marker: "REJECTED"

  # Docs stage validation
  docs:
    timeout: 900
    artifacts_required:
      - "DOCS_REPORT.md"

# =============================================================================
# AI BACKEND CONFIGURATION
# =============================================================================
ai:
  # Default backend to use
  default: claude

  # Available backends with customizable CLI flags
  backends:
    claude:
      command: claude  # CLI command to invoke
      args:            # Arguments with {placeholder} substitution
        - "--output-format"
        - "stream-json"
        - "--verbose"
        - "--max-turns"
        - "{max_turns}"  # Replaced with max_turns value
        - "--permission-mode"
        - "bypassPermissions"
      max_turns: 200    # Maximum conversation turns per stage
      read_only: false  # If true, backend cannot write files
    codex:
      command: codex
      args:
        - "exec"
        - "--full-auto"
        - "--output-schema"
        - "{schema_file}"  # Replaced with schema file path
        - "-o"
        - "{output_file}"  # Replaced with output file path
      max_turns: 50
      read_only: true  # Codex runs in read-only sandbox

  # Use different backends for specific stages
  stage_backends:
    REVIEW: codex  # Use Codex for code review
    # QA: gemini   # Use Gemini for QA (when supported)

# =============================================================================
# DOCUMENTATION CONFIGURATION
# =============================================================================
docs:
  # Directory for changelog entries
  changelog_dir: docs/changelog

  # Directory for security audit reports
  security_audit: docs/security

  # Directory for general documentation
  general: docs

  # Toggle documentation updates
  update_changelog: true       # Update changelog in DOCS stage
  update_security_audit: true  # Create security reports in SECURITY stage
  update_general_docs: true    # Update general docs in DOCS stage

# =============================================================================
# PULL REQUEST CONFIGURATION
# =============================================================================
pr:
  # Base branch for PRs (e.g., main, develop)
  base_branch: main

  # Add @codex review to PR body for automated review
  codex_review: false

# =============================================================================
# STRUCTURED LOGGING
# =============================================================================
logging:
  # Enable structured logging to file
  enabled: true

  # Log level: debug, info, warning, error
  level: info

  # Log file path (JSON Lines format for easy parsing)
  file: logs/galangal.jsonl

  # Output format: true for JSON, false for pretty console format
  json_format: true

  # Also output to console (stderr)
  console: false

# =============================================================================
# TASK TYPE SETTINGS
# Per-task-type overrides
# =============================================================================
task_type_settings:
  bugfix:
    skip_discovery: true  # Skip the PM discovery Q&A for bugfixes
  hotfix:
    skip_discovery: true

# =============================================================================
# ARTIFACT CONTEXT FILTERING
# Control which artifacts are included in prompts per stage (reduces token usage)
# =============================================================================
artifact_context:
  # Each stage can specify which artifacts to include
  # - required: Must be included (no error if missing, just skipped)
  # - include: Include if exists
  # - exclude: Never include (overrides include)
  review:
    required:
      - SPEC.md
      - DEVELOPMENT.md
    include:
      - DESIGN.md
      - QA_REPORT.md
      - SECURITY_CHECKLIST.md
    exclude:
      - PREFLIGHT.md
      - TEST_PLAN.md
      - TEST_GATE_RESULTS.md
  security:
    required:
      - SPEC.md
      - DEVELOPMENT.md
    include:
      - DESIGN.md
    exclude:
      - TEST_SUMMARY.md
      - QA_REPORT.md
  docs:
    required:
      - SPEC.md
      - DEVELOPMENT.md
    include:
      - DESIGN.md
    exclude:
      - TEST_PLAN.md
      - QA_REPORT.md
      - SECURITY_CHECKLIST.md

# =============================================================================
# PROMPT CONTEXT
# Additional context injected into AI prompts
# =============================================================================
# Global context added to ALL stage prompts
prompt_context: |
  ## Project Conventions
  - Use repository pattern for data access
  - API responses use api_success() / api_error() helpers
  - All errors should be logged with context

  ## Testing Standards
  - Unit tests go in tests/unit/
  - Integration tests go in tests/integration/
  - Use pytest fixtures for test data

# Per-stage context (merged with global context)
stage_context:
  dev: |
    ## Development Environment
    - Run `npm run dev` for hot reload
    - Database: PostgreSQL on localhost:5432
    - Redis: localhost:6379
  test: |
    ## Test Setup
    - Use vitest for frontend unit tests
    - Use pytest for backend tests
    - Mock external APIs in tests
  security: |
    ## Security Requirements
    - All user input must be validated
    - Use parameterized queries (no raw SQL)
    - Secrets must use environment variables
```
AI Backend Customization
Galangal invokes AI backends (like Claude Code CLI) using configurable commands and arguments. This allows you to customize CLI flags without modifying code.
Default Behavior
By default, Galangal invokes Claude with:
```bash
cat prompt.txt | claude --output-format stream-json --verbose --max-turns 200 --permission-mode bypassPermissions
```
Customizing CLI Flags
Override any flags in .galangal/config.yaml:
```yaml
ai:
  backends:
    claude:
      command: claude
      args:
        - "--output-format"
        - "stream-json"
        - "--verbose"
        - "--max-turns"
        - "{max_turns}"
        - "--permission-mode"
        - "bypassPermissions"
        - "--model"  # Add custom flags
        - "opus"
      max_turns: 300  # Increase max turns
```
Placeholder Reference
Arguments can include placeholders that are substituted at runtime:
| Placeholder | Backend | Description |
|---|---|---|
| `{max_turns}` | claude | Maximum conversation turns |
| `{schema_file}` | codex | Path to JSON schema file |
| `{output_file}` | codex | Path for structured output |
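As an illustration of how substitution works (a sketch, not the actual implementation), each argument is formatted against the backend's runtime values:

```python
def substitute_args(args, values):
    """Replace {placeholder} tokens in backend args with runtime values."""
    return [arg.format(**values) for arg in args]

args = ["--output-format", "stream-json", "--max-turns", "{max_turns}"]
resolved = substitute_args(args, {"max_turns": 200})
```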
Common Customizations
Use a specific model:
```yaml
ai:
  backends:
    claude:
      args:
        - "--output-format"
        - "stream-json"
        - "--model"
        - "sonnet"  # Use Sonnet instead of default
        - "--max-turns"
        - "{max_turns}"
```
Increase turn limit for complex tasks:
```yaml
ai:
  backends:
    claude:
      max_turns: 500  # Default is 200
      args:
        - "--output-format"
        - "stream-json"
        - "--max-turns"
        - "{max_turns}"  # Will use 500
```
Use different backends per stage:
```yaml
ai:
  default: claude
  stage_backends:
    REVIEW: codex  # Use Codex for code review
```
Adding a Custom Backend
Define any CLI tool as a backend:
```yaml
ai:
  backends:
    my-backend:
      command: my-ai-tool
      args:
        - "--prompt-file"
        - "-"  # Read from stdin
        - "--json-output"
      max_turns: 100
      read_only: true  # Cannot write files directly
```
Then use it:
```yaml
ai:
  default: my-backend
  # Or per-stage:
  stage_backends:
    QA: my-backend
```
Artifact Context Filtering
By default, each stage receives relevant artifacts from earlier stages in its prompt context. Later stages like REVIEW can accumulate large amounts of context, increasing token usage and costs.
Artifact context filtering lets you explicitly control which artifacts each stage receives, reducing token usage by 30-50% on later stages.
How It Works
When a stage runs, Galangal builds its prompt by including artifacts from earlier stages. Without filtering, the REVIEW stage might receive:
- SPEC.md, DESIGN.md, DEVELOPMENT.md (needed)
- PREFLIGHT.md, TEST_PLAN.md, TEST_GATE_RESULTS.md (not needed for review)
With filtering, you specify exactly what each stage needs:
```yaml
artifact_context:
  review:
    required:
      - SPEC.md         # Core requirements
      - DEVELOPMENT.md  # Implementation details
    include:
      - QA_REPORT.md    # Include if exists
      - SECURITY_CHECKLIST.md
    exclude:
      - PREFLIGHT.md    # Never include
      - TEST_PLAN.md
```
Configuration Options
| Field | Description |
|---|---|
| `required` | Artifacts to always include (skipped if missing) |
| `include` | Artifacts to include if they exist |
| `exclude` | Artifacts to never include (overrides include) |
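The resolution logic can be pictured with a short sketch (illustrative only, not Galangal's implementation):

```python
def select_artifacts(existing, required=(), include=(), exclude=()):
    """Pick which artifacts reach the prompt; exclude always wins."""
    chosen = []
    for name in existing:  # preserve the artifact ordering
        if name in exclude:
            continue  # exclude overrides include
        if name in required or name in include:
            chosen.append(name)  # required entries are skipped if missing
    return chosen

existing = ["SPEC.md", "PREFLIGHT.md", "DEVELOPMENT.md", "QA_REPORT.md"]
picked = select_artifacts(
    existing,
    required=["SPEC.md", "DEVELOPMENT.md"],
    include=["QA_REPORT.md"],
    exclude=["PREFLIGHT.md"],
)
```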
Recommended Configuration
For most projects, filtering these stages provides the best token savings:
```yaml
artifact_context:
  # REVIEW: Focus on spec, implementation, and findings
  review:
    required: [SPEC.md, DEVELOPMENT.md]
    include: [DESIGN.md, QA_REPORT.md, SECURITY_CHECKLIST.md]
    exclude: [PREFLIGHT.md, TEST_PLAN.md, TEST_GATE_RESULTS.md]

  # SECURITY: Only needs code changes
  security:
    required: [SPEC.md, DEVELOPMENT.md]
    include: [DESIGN.md]
    exclude: [TEST_SUMMARY.md, QA_REPORT.md]

  # DOCS: Requirements and implementation
  docs:
    required: [SPEC.md, DEVELOPMENT.md]
    include: [DESIGN.md]
    exclude: [TEST_PLAN.md, QA_REPORT.md, SECURITY_CHECKLIST.md]

  # SUMMARY: Final reports only
  summary:
    required: [SPEC.md]
    include: [QA_REPORT.md, SECURITY_CHECKLIST.md, REVIEW_NOTES.md]
    exclude: [DEVELOPMENT.md, TEST_PLAN.md, TEST_SUMMARY.md]
```
Backwards Compatibility
If artifact_context is not configured, Galangal uses its default stage-specific logic. You can configure filtering for some stages and leave others to use defaults.
Customizing Prompts
Galangal uses a layered prompt system:
- Base prompts - Built-in, language-agnostic prompts
- Project prompts - Your customizations in
.galangal/prompts/
Supplement Mode (Recommended)
Add project-specific content that gets prepended to the base prompt:
```markdown
<!-- .galangal/prompts/dev.md -->

## Project CLI Scripts
- `./scripts/test.sh` - Run all tests
- `./scripts/lint.sh` - Run linter

## Patterns to Follow
- Always use `api_success()` for responses
- Never use raw SQL queries

# BASE
```

The `# BASE` marker inserts the default prompt at that location.
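Conceptually, supplement mode is a marker substitution; a sketch (illustrative, not Galangal's code):

```python
def merge_prompt(project_prompt, base_prompt, marker="# BASE"):
    """Supplement mode: splice the base prompt in at the marker.
    Override mode: no marker means the project prompt stands alone."""
    if marker in project_prompt:
        return project_prompt.replace(marker, base_prompt)
    return project_prompt  # override mode

merged = merge_prompt(
    "## Project Rules\n- No raw SQL\n\n# BASE",
    "You are the DEV stage...",  # stands in for the built-in prompt
)
```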
Override Mode
To completely replace a base prompt, omit the `# BASE` marker:

```markdown
<!-- .galangal/prompts/preflight.md -->

# Custom Preflight

This completely replaces the default preflight prompt.

[Your custom instructions...]
```
Available Prompt Files
Create any of these in .galangal/prompts/:
| File | Stage |
|---|---|
| `pm.md` | Requirements & planning |
| `design.md` | Architecture design |
| `preflight.md` | Environment checks |
| `dev.md` | Implementation |
| `test.md` | Test writing |
| `qa.md` | Quality assurance |
| `security.md` | Security review |
| `review.md` | Code review |
| `docs.md` | Documentation |
Mistake Tracking
Mistake tracking helps prevent the AI from repeating common errors in your codebase. It uses vector similarity search to identify patterns and inject warnings into prompts.
How It Works
1. Automatic Logging - When a stage fails and rolls back, or when you interrupt with feedback (Ctrl+I), the mistake is logged to `.galangal/mistakes.db`
2. Semantic Deduplication - Similar mistakes are merged using vector embeddings, preventing the database from growing unbounded
3. Prompt Injection - When a stage starts, relevant mistakes are retrieved and injected as warnings:

```markdown
# Common Mistakes in This Repo - AVOID THESE

## 1. Forgot null check on user object
**Occurrences:** 4 times
**Files:** src/services/*
**Prevention:** Always check if user exists before accessing user.email
```
CLI Commands
```bash
galangal mistakes list              # View all tracked mistakes
galangal mistakes list --stage DEV  # Filter by stage
galangal mistakes stats             # Show statistics
galangal mistakes search "null"     # Semantic search
galangal mistakes delete 5          # Remove a mistake by ID
```
Example Output
```
$ galangal mistakes stats

Mistake Tracking Statistics

  Unique mistakes     12
  Total occurrences   34
  Vector search       Enabled

By Stage:
  DEV     8
  TEST    3
  REVIEW  1
```
Storage
Mistakes are stored in .galangal/mistakes.db (SQLite with vector search). The database uses local embeddings via sentence-transformers - no API calls required.
Troubleshooting
Debug Mode
When something goes wrong and you need to see what happened:
```bash
# Enable debug logging (writes to logs/galangal_debug.log)
galangal --debug start "task description"
galangal --debug resume

# Alternative: set environment variable
GALANGAL_DEBUG=1 galangal start "task description"
```
Debug mode creates two log files:
- `logs/galangal_debug.log` - Human-readable debug trace with timestamps
- `logs/galangal.jsonl` - Structured JSON logs for programmatic analysis
Example debug log:
```
[14:32:15.123] GitHub integration failed: HTTPError: 401 Unauthorized
[14:32:15.124] Traceback:
  File "/path/to/start.py", line 138, in task_creation_thread
    check = ensure_github_ready()
  ...
```
Structured Logging Configuration
Enable structured logging in .galangal/config.yaml:
```yaml
logging:
  enabled: true              # Enable logging
  level: debug               # debug, info, warning, error
  file: logs/galangal.jsonl
  json_format: true          # JSON for parsing, false for console format
  console: false             # Also output to stderr
```
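Because the log file is JSON Lines, it is easy to filter programmatically. A minimal sketch (the field names here are hypothetical, not Galangal's actual log schema):

```python
import json

# Hypothetical log lines standing in for logs/galangal.jsonl content
sample = """\
{"ts": "14:32:15", "level": "info", "event": "stage_start", "stage": "DEV"}
{"ts": "14:35:02", "level": "error", "event": "validation_failed", "stage": "TEST"}
"""

# Keep only error-level records
errors = [rec for rec in (json.loads(line) for line in sample.splitlines())
          if rec["level"] == "error"]
```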
Tests Hang at TEST Stage
Test frameworks must run non-interactively. Common issues:
Playwright - HTML reporter blocks by default:
```bash
# Use non-blocking reporter
npx playwright test --reporter=list

# Or set environment variable
PLAYWRIGHT_HTML_OPEN=never npx playwright test

# Or in playwright.config.ts:
# reporter: [['html', { open: 'never' }]]
```
Jest/Vitest - Watch mode blocks:
```bash
# Wrong (blocks):
npm test -- --watch

# Correct:
npm test
```
Cypress - Interactive mode blocks:
```bash
# Wrong (blocks):
cypress open

# Correct:
cypress run
```
General rule: Use CI-friendly commands that exit automatically. Avoid watch mode, interactive mode, or any GUI that waits for user input.
TEST Stage Loops Indefinitely
If the TEST stage keeps retrying instead of rolling back to DEV:
- Ensure your TEST_PLAN.md has a clear `**Status:** PASS` or `**Status:** FAIL` line
- If tests fail due to implementation bugs, the AI should report FAIL (not try to fix the code)
- Check that test commands exit with proper exit codes (0 for success, non-zero for failure)
Note: As of v0.12.0, when artifact markers are unclear (missing PASS/FAIL), Galangal prompts you to manually approve or reject instead of retrying indefinitely. You'll see the artifact content and can make the decision yourself.
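Marker-based artifact validation (the `pass_marker`/`fail_marker` settings in the configuration reference) amounts to a substring check; a sketch of the idea, not the actual code:

```python
def check_artifact(text, pass_marker, fail_marker):
    """Return 'pass', 'fail', or 'unclear' (which triggers a manual prompt)."""
    if fail_marker in text:
        return "fail"  # treat failure as decisive if both markers appear
    if pass_marker in text:
        return "pass"
    return "unclear"

status = check_artifact("# Test Plan\n**Status:** PASS\n",
                        "**Status:** PASS", "**Status:** FAIL")
```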
"Galangal has not been initialized" Error
Run `galangal init` in your project root before using other commands.
Task Exits Without Error Message
If a task quits unexpectedly with no visible error:
1. Enable debug mode and re-run: `galangal --debug start "your task"`
2. Check the debug log for the actual error: `tail -50 logs/galangal_debug.log`
3. Common causes:
   - GitHub authentication failed (run `gh auth status`)
   - Network timeout fetching issues
   - Missing permissions for the repository
   - Invalid issue number or no issues with the `galangal` label
GitHub Integration Fails Silently
If `galangal start` from a GitHub issue exits without creating a task:
```bash
# Check GitHub CLI is working
gh auth status
gh repo view

# Try with debug mode
galangal --debug start --issue 123
```
Check `logs/galangal_debug.log` for specific errors like:
- `401 Unauthorized` - Re-authenticate with `gh auth login`
- `404 Not Found` - Issue doesn't exist or wrong repo
- `No issues with 'galangal' label` - Add the label to an issue first
License
MIT License - see LICENSE file.