
Intelligent batch processing tool for Replicate models with automatic fallback mechanisms


Replicate Model Calling Tool - Complete Usage Guide

🚀 Overview

This tool provides three ways to call Replicate models, from single image generation to large-scale batch processing:

  1. Single Image Generation - Direct calling, suitable for testing and one-time use
  2. Simple Batch Processing - Convenient interface, suitable for batch generation with same parameters
  3. Advanced Batch Processing - Flexible configuration, suitable for mixed models and complex requirements

🔄 Intelligent Fallback Mechanism - Core Feature

Automatic model switching, no compatibility worries!

Our system intelligently detects model compatibility issues and automatically switches to the best alternative model:

Three Fallback Trigger Conditions:

  1. Reference Image Auto-Switching

    # User passes reference image to model that doesn't support it
    replicate_model_calling(
        prompt="Generate based on this image", 
        model_name="black-forest-labs/flux-dev",  # Doesn't support reference image
        input_image="path/to/image.jpg"           # System auto-switches to flux-kontext-max
    )
    
  2. Parameter Incompatibility Auto-Handling

    # User passes unsupported parameters, system auto-cleans and switches
    replicate_model_calling(
        prompt="Generate image",
        model_name="black-forest-labs/flux-kontext-max",
        guidance=3.5,        # Unsupported parameter
        num_outputs=2        # Auto-switches to supporting model
    )
    
  3. API Error Auto-Retry

    # If primary model fails, automatically tries backup models
    # Flux Dev -> Qwen Image -> Imagen 4 Ultra
    
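The retry chain above can be sketched as a simple try-in-order loop. This is an illustrative sketch, not the package's internal code: the chain entries and the `call_model` helper are stand-ins for whatever function actually issues the Replicate request.

```python
# Hypothetical sketch of the fallback chain -- model slugs are illustrative.
FALLBACK_CHAIN = [
    "black-forest-labs/flux-dev",
    "qwen/qwen-image",
    "google/imagen-4-ultra",
]

def generate_with_fallback(prompt, call_model):
    """Try each model in order; return the first successful result."""
    last_error = None
    for model_name in FALLBACK_CHAIN:
        try:
            return call_model(prompt, model_name)
        except Exception as exc:  # e.g. an API error from Replicate
            last_error = exc
    raise RuntimeError(f"All fallback models failed: {last_error}")
```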

๐Ÿ› ๏ธ Custom Fallback Configuration

If you have specific model preferences, you can modify fallback rules:

Modify Location: FALLBACK_MODELS and FALLBACK_PARAMETER_MAPPING in config.py file

Example Modification:

# Custom fallback in config.py
FALLBACK_MODELS = {
    'your-preferred-model': {
        'fail': {
            'fallback_model': 'your-backup-model',
            'condition': 'api_error',
            'description': 'Custom fallback description'
        }
    }
}
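FALLBACK_PARAMETER_MAPPING is mentioned above but not shown. A plausible shape is sketched below; the keys, the `drop` field, and the `adapt_params` helper are all hypothetical — check config.py for the real structure.

```python
# Hypothetical sketch only -- the actual FALLBACK_PARAMETER_MAPPING lives in
# config.py. This just illustrates cleaning out parameters that a given
# model does not support before the call is made.
FALLBACK_PARAMETER_MAPPING = {
    'black-forest-labs/flux-kontext-max': {
        'drop': ['guidance', 'num_outputs'],  # unsupported by this model
    },
}

def adapt_params(model_name, params):
    """Remove parameters the target model cannot accept."""
    rule = FALLBACK_PARAMETER_MAPPING.get(model_name, {})
    dropped = set(rule.get('drop', []))
    return {k: v for k, v in params.items() if k not in dropped}
```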

📦 File Structure

replicate_batch_process/
├── setup.py                        # Environment initialization script 🚀
├── main.py                         # Single image generation core function
├── config.py                       # Model configuration
├── intelligent_batch_processor.py  # Intelligent batch processor
├── example_usage.py                # Complete usage examples for three scenarios ⭐
├── .env                            # API key configuration (generated after first run)
├── .gitignore                      # Git ignore rules (auto-created/updated)
├── output/                         # Output directory (auto-created)
└── README.md                       # This documentation

🎯 Three Usage Methods

Method 1: Single Image Generation

Use Case: Single generation, model testing, interactive use

# See detailed example in SINGLE_IMAGE_PARAMS in example_usage.py

from main import replicate_model_calling

# Core calling code
file_paths = replicate_model_calling(
    prompt="A beautiful sunset over mountains",
    model_name="black-forest-labs/flux-dev",
    output_filepath="output/my_image.jpg",
    aspect_ratio="16:9",
    output_quality=80
)

print(f"Generated file: {file_paths[0]}")

Features:

  • ✅ Simple and direct, no extra setup needed
  • ✅ Returns results immediately
  • ✅ Supports customizing all model parameters
  • ✅ Downloads and saves as soon as generation completes

Method 2: Simple Batch Processing

Use Case: Same model, same parameters batch generation

# See detailed example in BATCH_SAME_MODEL_PARAMS in example_usage.py

import asyncio
from intelligent_batch_processor import intelligent_batch_process

# Core calling code (run inside an async function, or via asyncio.run)
files = await intelligent_batch_process(
    prompts=["sunset", "city", "robot", "forest"],  # Prompt list
    model_name="black-forest-labs/flux-dev",
    max_concurrent=8,
    output_filepath=["output/scene_01_sunset.jpg", "output/scene_02_city.jpg", 
                     "output/scene_03_robot.jpg", "output/scene_04_forest.jpg"],  # Optional: custom file paths
    aspect_ratio="16:9",
    output_quality=90
)

print(f"Generated {len(files)} files")

Features:

  • 🚀 Intelligent Strategy Selection - Automatically selects the optimal processing method
  • ⚡ Instant Download - Downloads each image as soon as its task completes
  • 📊 Progress Monitoring - Real-time display of processing progress
  • 🔄 Auto-Retry - Intelligent handling of 429 errors
  • 📁 Custom File Paths - Supports custom output file paths, keeping each file matched to its prompt
  • 🔄 Intelligent Fallback Mechanism - Auto-detects compatibility and switches to the best model

Method 3: Advanced Batch Processing

Use Case: Mixed models, different parameters, complex batch processing needs

# See detailed example in MIXED_MODEL_REQUESTS in example_usage.py

import asyncio
from intelligent_batch_processor import IntelligentBatchProcessor, BatchRequest

# Core calling code (run inside an async function, or via asyncio.run)
requests = [
    BatchRequest(
        prompt="High quality portrait photo",
        model_name="google/imagen-4-ultra",
        kwargs={"aspect_ratio": "4:3", "output_quality": 95}
    ),
    BatchRequest(
        prompt="Anime style character", 
        model_name="black-forest-labs/flux-dev",
        kwargs={"aspect_ratio": "1:1", "guidance": 4}
    ),
]

processor = IntelligentBatchProcessor(max_concurrent=15, max_retries=3)
results = await processor.process_intelligent_batch(requests)

# Process results
for result in results:
    if result.success:
        print(f"✅ Success: {result.file_paths}")
    else:
        print(f"❌ Failed: {result.error}")

Features:

  • 🧠 Intelligent Strategy Selection - Automatically selects a processing strategy based on task volume
  • 🔀 Mixed Model Support - Use multiple different models simultaneously
  • ⚙️ Fine-Grained Control - Independent parameter configuration for each request
  • 📈 Detailed Statistics - Complete success/failure statistics

🚀 Environment Initialization (Required for First Use)

Step One: Run initialization script

# Auto-check and setup API keys
python setup.py

The initialization script will:

  • ✅ Check the .env file and API keys
  • 🔑 Prompt for missing API keys
  • 💾 Auto-create and configure the .env file
  • 📁 Create the necessary directory structure
  • 🔒 Set secure file permissions
  • 🧪 Test the API connection

API Key Acquisition:

🎯 Quick Start

Recommended Method: Use example_usage.py

# 1. Interactive selection of which example to run
python example_usage.py

# 2. Run all three examples
python example_usage.py all

# 3. Import and use in your code
from example_usage import single_image_generation, batch_same_model, advanced_mixed_models

Vibe Coder Friendly: Copy-and-use format

  1. Modify Parameter Configuration - Modify PARAMS variables at the top of file
  2. Copy Core Code - Find the core calling code marked with 🚀
  3. Direct Use - Paste into your project

🧠 Intelligent Batch Processing Strategies

The batch processor automatically selects optimal strategy based on task volume:

Strategy 1: Immediate Full Processing

Condition: Task count ≤ current available quota

✅ 12 tasks, current quota 450 → process all tasks immediately with full concurrency

Strategy 2: Single Window Batch Processing

Condition: Task count ≤ window quota (600), but greater than current quota

⏳ 450 tasks, current quota 200 → wait for sufficient quota, then process as one batch

Strategy 3: Dynamic Queue Processing

Condition: Task count > window quota (600)

🔄 1200 tasks → process in rolling batches: as each task completes, the next one is added
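The three-way choice above can be sketched as follows. The function name and quota arguments are illustrative, not the processor's actual API; only the 600/minute window limit comes from Replicate's documented rate limit.

```python
WINDOW_QUOTA = 600  # Replicate's shared create-prediction limit per minute

def choose_strategy(task_count, current_quota, window_quota=WINDOW_QUOTA):
    """Illustrative sketch of the strategy selection described above."""
    if task_count <= current_quota:
        return "immediate_full"       # Strategy 1: run everything now
    if task_count <= window_quota:
        return "single_window_batch"  # Strategy 2: wait for quota, one batch
    return "dynamic_queue"            # Strategy 3: rolling batches
```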

🎯 Usage Scenario Comparison

Usage Method    Task Count   Config Complexity   Recommended Scenario
Single Image    1            Simple              Testing, demos, single generation
Simple Batch    2-50         Medium              Same-parameter batch generation
Advanced Batch  10-1000+     High                Mixed models, complex needs

📊 Rate Limiting and Concurrency Control

Replicate API Limits

  • Create Predictions: 600 requests/minute (shared across all models)
  • Exceeding the limit returns a 429 error

Safe Concurrency Recommendations

# Conservative setting (recommended for beginners)
max_concurrent = 5

# Balanced setting (recommended for most users) 
max_concurrent = 8

# Aggressive setting (requires good retry mechanism)
max_concurrent = 12
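One common way to enforce such a cap is an `asyncio.Semaphore`. This is a generic sketch rather than the processor's actual implementation; `generate_one` is a placeholder for whatever coroutine issues the API request.

```python
import asyncio

async def run_bounded(prompts, generate_one, max_concurrent=8):
    """Run generate_one over all prompts with at most max_concurrent in flight."""
    sem = asyncio.Semaphore(max_concurrent)

    async def worker(prompt):
        async with sem:  # blocks while max_concurrent tasks are running
            return await generate_one(prompt)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(worker(p) for p in prompts))
```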

🔄 JSON Data Batch Processing Example

If you have structured JSON data, you can use the test script:

# Run JSON batch processing test
python json_batch_test.py

This demonstrates how to extract image descriptions from JSON data and generate them in batches.
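The extraction step can be sketched like this. The `description` field name is an assumption, since the schema used by json_batch_test.py is not shown here.

```python
import json

def extract_prompts(json_text, field="description"):
    """Pull prompt strings out of a JSON array of records (hypothetical schema)."""
    records = json.loads(json_text)
    # keep only records where the prompt field is present and non-empty
    return [rec[field] for rec in records if rec.get(field)]
```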

💡 Best Practices

1. Choose Appropriate Method

# Single image - direct call to main
if len(prompts) == 1:
    result = replicate_model_calling(prompt, model_name)

# Batch same parameters - simple interface  
elif all_same_params:
    files = await intelligent_batch_process(prompts, model_name)

# Complex needs - advanced interface
else:
    processor = IntelligentBatchProcessor()
    results = await processor.process_intelligent_batch(requests)

2. Error Handling

# Check batch processing results
successful_files = []
failed_count = 0

for result in results:
    if result.success:
        successful_files.extend(result.file_paths)
    else:
        failed_count += 1
        print(f"Failed: {result.error}")

print(f"Success: {len(successful_files)}, Failed: {failed_count}")

3. Output Management

import time
import os

# Use timestamp to avoid file conflicts
timestamp = int(time.time())
output_dir = f"output/batch_{timestamp}"
os.makedirs(output_dir, exist_ok=True)

4. Large Batch Processing

# Process large tasks in chunks
def chunk_prompts(prompts, chunk_size=50):
    for i in range(0, len(prompts), chunk_size):
        yield prompts[i:i + chunk_size]

all_files = []
for batch in chunk_prompts(huge_prompt_list, 50):
    files = await intelligent_batch_process(batch, model_name)
    all_files.extend(files)

🚨 Important Notes

  1. API Quota Sharing: All model calls share the 600/minute limit
  2. Instant Download: Each task downloads immediately upon completion, doesn't wait for all to finish
  3. Concurrency Control: Recommend starting with lower concurrency, gradually adjust
  4. Cost Control: Batch processing rapidly consumes API quota, mind the cost
  5. Storage Space: Ensure sufficient disk space for generated files

🔧 Troubleshooting

Common Problem Solutions

  1. 429 Error (Rate Limiting)

    # Reduce concurrency
    max_concurrent = 5  # Reduce from 8 to 5
    
  2. Import Error

    # Ensure correct directory
    import sys
    sys.path.append('/path/to/replicate_batch_process')
    
  3. File Path Issues

    # Use absolute paths
    import os
    output_dir = os.path.abspath("output/my_batch")
    
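For 429s specifically, lowering concurrency can be combined with retrying after an exponentially growing delay. This is a generic sketch, not the processor's built-in retry logic; `is_rate_limited` is a hypothetical predicate for detecting a 429.

```python
import asyncio
import random

async def call_with_backoff(coro_factory, is_rate_limited,
                            max_retries=3, base_delay=1.0):
    """Retry coro_factory() on rate-limit errors with exponential backoff + jitter."""
    for attempt in range(max_retries + 1):
        try:
            return await coro_factory()
        except Exception as exc:
            if not is_rate_limited(exc) or attempt == max_retries:
                raise  # non-429 error, or retries exhausted
            # e.g. ~1s, ~2s, ~4s (plus jitter) with base_delay=1.0
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            await asyncio.sleep(delay)
```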

🎯 Complete Usage Workflow

# 1๏ธโƒฃ First use - Environment initialization
python setup.py

# 2๏ธโƒฃ Run examples
python example_usage.py

# 3๏ธโƒฃ Or use in your code
python your_script.py

🚀 Quick Start Template

# 🚀 Method 1: View complete examples
python example_usage.py

# 🚀 Method 2: Direct copy and use
from example_usage import BATCH_SAME_MODEL_PARAMS
from intelligent_batch_processor import intelligent_batch_process
import asyncio

# Modify parameter configuration
BATCH_SAME_MODEL_PARAMS["prompts"] = ["your prompt 1", "your prompt 2"]
BATCH_SAME_MODEL_PARAMS["model_name"] = "black-forest-labs/flux-dev"

# Core calling
files = asyncio.run(intelligent_batch_process(
    prompts=BATCH_SAME_MODEL_PARAMS["prompts"],
    model_name=BATCH_SAME_MODEL_PARAMS["model_name"],
    max_concurrent=BATCH_SAME_MODEL_PARAMS["max_concurrent"]
))

print(f"✅ Generation complete! Total {len(files)} files")

You now have the complete toolkit, from single-image generation to large-scale batch processing! 🚀

Recommended: Use example_usage.py directly - tested, standardized, and Vibe Coder friendly!
