
autoad

A simple automated algorithm design (AAD) tool.

Overview

This tool optimizes code by iteratively maximizing multiple measurable objectives. The core concepts are:

  • Prompt-driven Optimization: Accepts improvement instructions and evaluation criteria as prompts to guide the optimization process
  • Coding Agent Delegation: Delegates code improvement tasks to a coding agent within the optimization loop
  • Git-based Progress Tracking: Stores evaluation scores in Git tags to inform future optimization decisions
  • Evolutionary Approach: Simulates genetic and evolutionary algorithms by growing, merging, and selecting branches based on their performance scores

The optimization process starts when you provide improvement goals and evaluation metrics. The system then creates new branches where a coding agent implements suggested improvements. Each variant is evaluated using your specified metrics, with scores stored in Git tags. Based on these scores, the system selects high-performing branches for further improvement or merging, continuously evolving your codebase towards better solutions.
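As an illustration of the Git-tag mechanism, a score can be attached to a commit as an annotated tag and read back later when selecting branches. The tag name and message format below are illustrative only, not autoad's documented scheme:

```shell
# Illustrative only: autoad's actual tag naming and message format may differ.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "candidate variant"
# Record an evaluation score for this variant as an annotated tag...
git -c user.name=demo -c user.email=demo@example.com \
    tag -a "score-accuracy-1" -m "accuracy=0.914"
# ...and read it back when deciding which branches to evolve next:
git tag -l 'score-*' -n1
```

Because the score lives in the repository itself, any later run (on any machine that fetches the tags) can see the full evaluation history.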

Usage

The tool requires:

  • --improvement-prompt: Describes what you want to improve
  • --objective NAME "PROMPT": Defines evaluation criteria (can be used multiple times)

Optional parameters:

  • --optional-prompt: Supplementary instructions for the optimization process
  • --sync-remote: Automatically sync with remote repository (fetches at start, pushes at end)
  • --log-dir PATH: Directory to save execution logs (default: ~/.autoad/logs)
  • --no-logging: Disable logging to files
  • --iterations N: Number of optimization iterations (default: 10)
  • --branch-prefix PREFIX: Prefix for optimization branches (default: 'optimize')
Example:

uvx autoad \
  --improvement-prompt "Improve accuracy of milwrap/countbase.py by increasing the higher value of the two iter 9 MIL instance unit accuracy metrics obtained from running 'uv run pytest -s .'" \
  --objective accuracy-auto-init "Run 'uv run pytest -s .' and use the first iter 9 MIL instance unit accuracy value as the score" \
  --objective accuracy-external-init "Run 'uv run pytest -s .' and use the second iter 9 MIL instance unit accuracy value as the score" \
  --iterations 300 \
  --branch-prefix optim-mil \
  --optional-prompt "Please report progress in Japanese."

The tool follows these steps to evolve your codebase:

  1. User Actions

    • Define optimization goals by providing:
      • Improvement prompt describing desired changes
      • Evaluation prompts specifying metrics
  2. System Actions - Code Generation

    • Generates improved code versions by:
      • Creating new branches
      • Delegating improvements to coding agent
      • Implementing suggested changes
  3. System Actions - Evaluation

    • Evaluates each variant by:
      • Running specified evaluation metrics
      • Calculating objective scores
      • Recording results in Git tags
  4. System Actions - Evolution

    • Evolves solution space through:
      • Selecting high-performing branches
      • Merging promising variants
      • Continuing optimization process
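The four system steps above can be sketched in plain Git commands. Everything here (the branch name, the stand-in file edit, the fixed score) is illustrative, not autoad's actual implementation:

```shell
# Conceptual sketch of one iteration; names and values are illustrative.
set -e
repo=$(mktemp -d) && cd "$repo"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q
git commit -q --allow-empty -m "baseline"
# 2. Code generation: a new branch where the coding agent edits files
git switch -qc optim-1
echo "improved heuristic" > algo.txt      # stand-in for the agent's change
git add algo.txt && git commit -qm "variant 1"
# 3. Evaluation: run your metric command; here a fixed stand-in score
score=0.42
git tag -a "optim-1-score" -m "score=$score"
# 4. Evolution: a variant that beats its parent is merged back
git switch -q -
git merge -q --no-edit optim-1
git log --oneline -n 1
```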

Example Application

As a practical example, this tool was applied to improve algorithm performance in the multiple instance learning framework inoueakimitsu/milwrap.

Optimization Progress

The optimization process ran for 2 days, focusing on enhancing the algorithm's performance on test data. The accuracy improved from 0.914 to 0.956 (with a theoretical maximum of 0.970). The graph shows the evaluation results of various algorithm variants generated during the optimization process.

Remote Synchronization

The --sync-remote option enables automatic synchronization with a remote Git repository:

  • Before optimization: Fetches all branches and tags from the remote repository to ensure you're working with the latest state
  • After optimization: Force pushes all branches and tags to the remote repository to share your optimization results

This is particularly useful for:

  • Distributed optimization: Run optimization on multiple machines and combine results
  • Collaborative workflows: Share optimization progress with team members
  • Backup and persistence: Ensure optimization results are saved to remote repository

Example:

uvx autoad \
  --improvement-prompt "Optimize performance" \
  --objective speed "Measure execution time" \
  --sync-remote

Note: The --force flag is used when pushing, which will overwrite remote branches. Ensure you have appropriate permissions and understand the implications before using this option.
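In plain Git terms, the synchronization roughly corresponds to the commands below. This is a sketch against a throwaway local bare repository standing in for the remote; autoad's exact invocations may differ:

```shell
# Illustrative equivalents of --sync-remote, using a throwaway bare "remote".
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q -b main "$tmp/work" && cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "baseline"
git remote add origin "$tmp/remote.git"
git fetch -q origin --tags            # before: latest branches and tags
# ...optimization would run here...
git push -q --force origin --all      # after: all branches, overwriting remote
git push -q --force origin --tags     # and all tags
git ls-remote --heads origin
```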

Logging and Output Management

Autoad automatically logs all execution output to help with debugging and analysis:

  • Default location: ~/.autoad/logs/
  • Directory structure: YYYY-MM-DD-HH-MM-SS-microseconds/ for each iteration (timestamp with microsecond precision)
  • Log files:
    • stdout.log: Standard output from the iteration
    • stderr.log: Error output from the iteration
    • metadata.json: Execution metadata (session_id, iteration_start_time, branch name, timestamps, etc.)

Logging Options

# Specify custom log directory
uvx autoad --log-dir /path/to/logs ...

# Set via environment variable
export AUTOAD_LOG_DIR=/path/to/logs
uvx autoad ...

# Disable logging entirely
uvx autoad --no-logging ...

Log Directory Structure Example

~/.autoad/logs/
├── 2025-07-21-13-45-00-123456/     # Iteration 1 (with microseconds)
│   ├── stdout.log
│   ├── stderr.log
│   └── metadata.json
├── 2025-07-21-13-45-01-789012/     # Iteration 2
│   ├── stdout.log
│   ├── stderr.log
│   └── metadata.json
└── 2025-07-21-13-45-02-345678/     # Iteration 3
    ├── stdout.log
    ├── stderr.log
    └── metadata.json

Note: Each iteration creates its own directory named after its start timestamp with microsecond precision. This keeps directories unique even when iterations run in parallel, without encoding session IDs or iteration numbers in the directory names.
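Because the directory names are timestamps, the newest iteration's logs can be located with a plain lexicographic sort. The sketch below uses a throwaway log root in place of ~/.autoad/logs:

```shell
# Demo with a throwaway root; substitute ~/.autoad/logs in practice.
set -e
logroot=$(mktemp -d)
mkdir -p "$logroot/2025-07-21-13-45-00-123456" \
         "$logroot/2025-07-21-13-45-01-789012"
echo "iteration 2 output" > "$logroot/2025-07-21-13-45-01-789012/stdout.log"
# Timestamped names sort lexicographically, so the last one is the newest:
latest=$(ls -d "$logroot"/*/ | sort | tail -n 1)
cat "$latest/stdout.log"
```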

The logging system:

  • Preserves real-time console output while saving to files
  • Captures subprocess output (Git, Claude CLI, etc.)
  • Prevents accidental commits of log files to Git
  • Includes error handling with fallback directories
  • Protects against path traversal attacks

Requirements

  • Python 3.10+
  • macOS, Linux or WSL
  • Claude Code installed and configured. Because the coding agent is invoked intensively, we strongly recommend subscribing to the Claude MAX plan to avoid rate limiting.
  • Git repository (for tracking optimization history)
