AntFlow

Async execution library with concurrent.futures-style API and advanced pipelines

Why AntFlow?

The name 'AntFlow' is inspired by the efficiency of an ant colony: each ant (worker) performs its specialized function, and together they advance the colony's collective goal. Similarly, AntFlow orchestrates independent workers to complete complex asynchronous workflows.

The Problem I Had to Solve

I was processing massive amounts of data using OpenAI's Batch API. The workflow was complex:

  1. Upload batches of data to OpenAI
  2. Wait for processing to complete
  3. Download the results
  4. Save to database
  5. Repeat for the next batch

Initially, I processed 10 batches at a time using basic async. But here's the problem: I had to wait for ALL 10 batches to complete before starting the next group.
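
In plain asyncio, that batch-by-batch approach looks roughly like this (a sketch; process_batch stands in for the upload/wait/download/save cycle):

import asyncio

async def process_batch(batch):
    ...  # upload, poll until done, download, save

async def process_all(batches, group_size=10):
    for i in range(0, len(batches), group_size):
        group = batches[i:i + group_size]
        # gather() returns only when EVERY task in the group is done,
        # so each group runs at the speed of its slowest batch.
        await asyncio.gather(*(process_batch(b) for b in group))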

The Bottleneck

Imagine this scenario:

  • 9 batches complete in 5 minutes
  • 1 batch gets stuck and takes 30 minutes
  • I waste 25 minutes waiting for that one slow batch while my system sits idle

With hundreds of batches to process, these delays accumulated into hours of wasted time. Even worse, one failed batch would block the entire pipeline.

The Solution: AntFlow

I built AntFlow to solve this exact problem. Instead of batch-by-batch processing, AntFlow uses worker pools where:

  • ✅ Each worker handles tasks independently
  • ✅ When a worker finishes, it immediately grabs the next task
  • ✅ Slow tasks don't block fast ones
  • ✅ Concurrency stays at the configured level (e.g., 10 tasks running simultaneously)
  • ✅ Built-in retry logic for failed tasks
  • ✅ Multi-stage pipelines for complex workflows

Result: My OpenAI batch processing dropped from hours to a fraction of the time, with automatic retry handling and zero idle time.
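
Under the hood, this is the classic queue-plus-workers pattern. Here is a minimal sketch in plain asyncio that shows the idea, not AntFlow's actual internals (handle is a stand-in for your task function):

import asyncio

async def handle(item):
    ...  # your actual work (upload, poll, download, save)

async def worker(queue, results):
    while True:
        item = await queue.get()
        try:
            results.append(await handle(item))
        except Exception:
            pass  # real code would record or retry the failure
        finally:
            # As soon as this item is done, loop back for the next one:
            # a slow item in another worker never blocks this worker.
            queue.task_done()

async def run_pool(items, n_workers=10):
    queue, results = asyncio.Queue(), []
    for item in items:
        queue.put_nowait(item)
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(n_workers)]
    await queue.join()        # all items processed
    for w in workers:
        w.cancel()            # shut down the now-idle workers
    return results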



Key Features

🚀 Worker Pool Architecture

  • Independent workers that never block each other
  • Automatic task distribution
  • Optimal resource utilization

🔄 Multi-Stage Pipelines

  • Chain operations with configurable worker pools per stage
  • Each stage runs independently
  • Data flows automatically between stages

💪 Built-in Resilience

  • Per-task retry with exponential backoff (sketched after this list)
  • Per-stage retry for transactional operations
  • Failed tasks don't stop the pipeline
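
AntFlow lists tenacity as a dependency (see Requirements below), so the exponential backoff described above can be sketched standalone with tenacity's decorator. upload_batch here is an illustrative name, not a required signature:

from tenacity import retry, stop_after_attempt, wait_exponential

# Give up after 3 attempts, with exponentially growing waits
# (roughly 1s, 2s, 4s, ...) capped at 10s between attempts.
@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, max=10))
async def upload_batch(batch):
    ...  # a transient failure raises here and triggers a retry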

📊 Real-time Monitoring & Dashboards

  • Worker State Tracking - Know what each worker is doing in real-time
  • Performance Metrics - Track items processed, failures, avg time per worker
  • Task-Level Events - Monitor individual task retries and failures
  • Dashboard API - Query snapshots for live dashboards and UIs
  • Event Streaming - Subscribe to status changes via callbacks
  • StatusTracker - Real-time item tracking with full history
  • PipelineDashboard - Helper for combining queries and events

🎯 Familiar API

  • Drop-in async replacement for concurrent.futures (compare the synchronous idiom below)
  • submit(), map(), as_completed() methods
  • Clean, intuitive interface
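
For readers coming from the standard library, this is the synchronous concurrent.futures idiom that the API mirrors; AntFlow's exact async signatures are documented in its API reference:

from concurrent.futures import ThreadPoolExecutor, as_completed

def work(x):
    return x * 2

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = [executor.submit(work, i) for i in range(100)]
    for future in as_completed(futures):  # yields futures as they finish
        print(future.result())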

Use Cases

Perfect for:

  • Batch API Processing - OpenAI, Anthropic, any batch API
  • ETL Pipelines - Extract, transform, load at scale
  • Web Scraping - Fetch, parse, store web data efficiently
  • Data Processing - Process large datasets with retry logic
  • Microservices - Chain async service calls with error handling

Real-world Impact:

  • Process large batches without bottlenecks
  • Automatic retry for transient failures
  • Zero idle time = maximum throughput
  • Clear observability with metrics and callbacks

Quick Install

pip install antflow

Quick Example

import asyncio
from antflow import Pipeline, Stage

# Your actual work
async def upload_batch(batch_data):
    # Upload to OpenAI API
    return "batch_id"

async def check_status(batch_id):
    # Check if batch is ready
    return "result_url"

async def download_results(result_url):
    # Download processed data
    return "processed_data"

async def save_to_db(processed_data):
    # Save results
    return "saved"

async def main():
    # Build the pipeline
    upload_stage = Stage(name="Upload", workers=10, tasks=[upload_batch])
    check_stage = Stage(name="Check", workers=10, tasks=[check_status])
    download_stage = Stage(name="Download", workers=10, tasks=[download_results])
    save_stage = Stage(name="Save", workers=5, tasks=[save_to_db])

    pipeline = Pipeline(stages=[upload_stage, check_stage, download_stage, save_stage])

    # Process batches efficiently
    batches = [...]  # your iterable of batch payloads
    results = await pipeline.run(batches)

asyncio.run(main())

What happens: Each stage has its own worker pool (10 for Upload, 10 for Check, 10 for Download, 5 for Save). Workers in each stage process tasks independently. As soon as a worker finishes, it picks the next task. No waiting. No idle time. Maximum throughput.


Core Concepts

AsyncExecutor: Simple Concurrent Execution

For straightforward parallel processing, AsyncExecutor provides a concurrent.futures-style API:

import asyncio
from antflow import AsyncExecutor

async def process_item(x):
    await asyncio.sleep(0.1)
    return x * 2

async def main():
    async with AsyncExecutor(max_workers=10) as executor:
        # Using map() for parallel processing
        results = []
        async for result in executor.map(process_item, range(100)):
            results.append(result)
        print(f"Processed {len(results)} items")

asyncio.run(main())

Pipeline: Multi-Stage Processing

For complex workflows with multiple steps, you can build a Pipeline:

import asyncio
from antflow import Pipeline, Stage

async def fetch(x):
    await asyncio.sleep(0.1)
    return f"data_{x}"

async def process(x):
    await asyncio.sleep(0.1)
    return x.upper()

async def save(x):
    await asyncio.sleep(0.1)
    return f"saved_{x}"

async def main():
    # Define stages with different worker counts
    fetch_stage = Stage(name="Fetch", workers=10, tasks=[fetch])
    process_stage = Stage(name="Process", workers=5, tasks=[process])
    save_stage = Stage(name="Save", workers=3, tasks=[save])

    # Build pipeline
    pipeline = Pipeline(stages=[fetch_stage, process_stage, save_stage])

    # Process 100 items through all stages
    results = await pipeline.run(range(100))

    print(f"Completed: {len(results)} items")
    print(f"Stats: {pipeline.get_stats()}")

asyncio.run(main())

Why different worker counts?

  • Fetch: I/O bound, use more workers (10)
  • Process: CPU bound, moderate workers (5; see the note after this list)
  • Save: Rate-limited API, fewer workers (3)
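
One caveat, not specific to AntFlow: coroutines only run concurrently while they await, so a genuinely CPU-bound step should hand its computation to a thread or process pool rather than block the event loop. A minimal sketch (heavy_transform is hypothetical; in a script, create the pool under an if __name__ == "__main__" guard):

import asyncio
from concurrent.futures import ProcessPoolExecutor

def heavy_transform(x):
    # Plain synchronous, CPU-bound work (trivial here for illustration).
    return x.upper()

pool = ProcessPoolExecutor(max_workers=5)

async def process(x):
    loop = asyncio.get_running_loop()
    # Offload so the event loop (and the other workers) keep running.
    return await loop.run_in_executor(pool, heavy_transform, x)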

Real-Time Monitoring with StatusTracker

Track every item as it flows through your pipeline with StatusTracker. Get real-time status updates, query current states, and access complete event history.

from antflow import Pipeline, Stage, StatusTracker

tracker = StatusTracker()

# stage1, stage2, stage3 are Stage objects, defined as in the examples above
pipeline = Pipeline(
    stages=[stage1, stage2, stage3],
    status_tracker=tracker
)

# Run inside an async function:
results = await pipeline.run(items)

# Query current status
stats = tracker.get_stats()
print(f"Completed: {stats['completed']}")
print(f"Failed: {stats['failed']}")

# Get specific item status
status = tracker.get_status(item_id=42)
print(f"Item 42: {status.status} @ {status.stage}")

# Get all failed items
failed = tracker.get_by_status("failed")
for event in failed:
    print(f"Item {event.item_id}: {event.metadata['error']}")

Documentation

AntFlow has comprehensive documentation to help you get started and master advanced features:

📚 Getting Started

🛠️ User Guides

💡 Examples

📖 API Reference

You can also build and serve the documentation locally using mkdocs:

pip install mkdocs-material
mkdocs serve

Then open your browser to http://127.0.0.1:8000.


Requirements

  • Python 3.9+
  • tenacity >= 8.0.0

Note: For Python 3.9-3.10, the taskgroup backport is automatically installed.
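
For context: asyncio.TaskGroup was only added in Python 3.11, which is why the backport is needed on 3.9 and 3.10. A compatibility import typically looks like this (illustrative, not necessarily AntFlow's internal code):

try:
    from asyncio import TaskGroup      # Python 3.11+
except ImportError:
    from taskgroup import TaskGroup    # 'taskgroup' backport on 3.9-3.10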


Running Tests

To run the test suite, first install the development dependencies from the project root:

pip install -e ".[dev]"

Then, you can run the tests using pytest:

pytest

Contributing

Contributions are welcome! Please see our Contributing Guidelines.


License

MIT License - see LICENSE file for details.


Made with ❤️ to solve real problems in production
