Intelligent batch processing tool for Replicate models with automatic fallback mechanisms
# Replicate Model Calling Tool - Complete Usage Guide

## Overview

This tool provides three ways to call Replicate models, from single-image generation to large-scale batch processing:
- Single Image Generation - Direct calling, suitable for testing and one-time use
- Simple Batch Processing - Convenient interface, suitable for batch generation with same parameters
- Advanced Batch Processing - Flexible configuration, suitable for mixed models and complex requirements
## Intelligent Fallback Mechanism (Core Feature)

**Automatic model switching, with no compatibility worries!**

The system detects model compatibility issues and automatically switches to the best alternative model. There are three fallback trigger conditions:
1. **Reference image auto-switching**

```python
# User passes a reference image to a model that doesn't support it
replicate_model_calling(
    prompt="Generate based on this image",
    model_name="black-forest-labs/flux-dev",  # Doesn't support reference images
    input_image="path/to/image.jpg"           # System auto-switches to flux-kontext-max
)
```
2. **Parameter incompatibility auto-handling**

```python
# User passes unsupported parameters; the system auto-cleans them and switches
replicate_model_calling(
    prompt="Generate image",
    model_name="black-forest-labs/flux-kontext-max",
    guidance=3.5,   # Unsupported parameter
    num_outputs=2   # Auto-switches to a model that supports it
)
```
3. **API error auto-retry**

```python
# If the primary model fails, backup models are tried automatically:
# Flux Dev -> Qwen Image -> Imagen 4 Ultra
```
## Custom Fallback Configuration

If you have specific model preferences, you can modify the fallback rules.

**Where to modify:** `FALLBACK_MODELS` and `FALLBACK_PARAMETER_MAPPING` in `config.py`

**Example modification:**
```python
# Custom fallback in config.py
FALLBACK_MODELS = {
    'your-preferred-model': {
        'fail': {
            'fallback_model': 'your-backup-model',
            'condition': 'api_error',
            'description': 'Custom fallback description'
        }
    }
}
```
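As a rough illustration of how such a table might be consulted at failure time, here is a minimal sketch. The `resolve_fallback` helper and the backup model slug are hypothetical, for illustration only; they are not part of the shipped package.

```python
# Illustrative fallback table; the real one lives in config.py.
FALLBACK_MODELS = {
    "black-forest-labs/flux-dev": {
        "fail": {
            "fallback_model": "qwen/qwen-image",  # hypothetical backup slug
            "condition": "api_error",
            "description": "Retry on API failure",
        }
    }
}

def resolve_fallback(model_name, condition):
    """Return the configured backup model, or None if no rule matches."""
    rule = FALLBACK_MODELS.get(model_name, {}).get("fail")
    if rule and rule["condition"] == condition:
        return rule["fallback_model"]
    return None

print(resolve_fallback("black-forest-labs/flux-dev", "api_error"))  # qwen/qwen-image
```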
## File Structure

```
replicate_batch_process/
├── setup.py                        # Environment initialization script
├── main.py                         # Single image generation core function
├── config.py                       # Model configuration
├── intelligent_batch_processor.py  # Intelligent batch processor
├── example_usage.py                # Complete usage examples for the three scenarios
├── .env                            # API key configuration (generated after first run)
├── .gitignore                      # Git ignore rules (auto-created/updated)
├── output/                         # Output directory (auto-created)
└── README.md                       # This documentation
```
## Three Usage Methods

### Method 1: Single Image Generation

**Use case:** single generation, model testing, interactive use
```python
# See the detailed example in SINGLE_IMAGE_PARAMS in example_usage.py
from main import replicate_model_calling

# Core calling code
file_paths = replicate_model_calling(
    prompt="A beautiful sunset over mountains",
    model_name="black-forest-labs/flux-dev",
    output_filepath="output/my_image.jpg",
    aspect_ratio="16:9",
    output_quality=80
)
print(f"Generated file: {file_paths[0]}")
```
**Features:**
- ✅ Simple and direct, with no extra setup
- ✅ Results return immediately
- ✅ Supports all model parameter customization
- ✅ Downloads and saves as soon as generation completes
### Method 2: Simple Batch Processing

**Use case:** batch generation with the same model and parameters
```python
# See the detailed example in BATCH_SAME_MODEL_PARAMS in example_usage.py
import asyncio
from intelligent_batch_processor import intelligent_batch_process

# Core calling code (run inside an async function, or via asyncio.run)
files = await intelligent_batch_process(
    prompts=["sunset", "city", "robot", "forest"],  # Prompt list
    model_name="black-forest-labs/flux-dev",
    max_concurrent=8,
    output_filepath=["output/scene_01_sunset.jpg", "output/scene_02_city.jpg",
                     "output/scene_03_robot.jpg", "output/scene_04_forest.jpg"],  # Optional: custom file paths
    aspect_ratio="16:9",
    output_quality=90
)
print(f"Generated {len(files)} files")
```
**Features:**
- **Intelligent strategy selection** - automatically picks the optimal processing method
- **Instant download** - each image downloads as soon as its task completes
- **Progress monitoring** - real-time progress display
- **Auto-retry** - intelligent handling of 429 errors
- **Custom file paths** - optional per-prompt output paths, keeping files matched to prompts
- **Intelligent fallback** - auto-detects compatibility issues and switches to the best model
### Method 3: Advanced Batch Processing

**Use case:** mixed models, per-request parameters, complex batch-processing needs
```python
# See the detailed example in MIXED_MODEL_REQUESTS in example_usage.py
import asyncio
from intelligent_batch_processor import IntelligentBatchProcessor, BatchRequest

# Core calling code
requests = [
    BatchRequest(
        prompt="High quality portrait photo",
        model_name="google/imagen-4-ultra",
        kwargs={"aspect_ratio": "4:3", "output_quality": 95}
    ),
    BatchRequest(
        prompt="Anime style character",
        model_name="black-forest-labs/flux-dev",
        kwargs={"aspect_ratio": "1:1", "guidance": 4}
    ),
]

processor = IntelligentBatchProcessor(max_concurrent=15, max_retries=3)
results = await processor.process_intelligent_batch(requests)

# Process results
for result in results:
    if result.success:
        print(f"✅ Success: {result.file_paths}")
    else:
        print(f"❌ Failed: {result.error}")
```
**Features:**
- **Intelligent strategy selection** - automatically selects a processing strategy based on task volume
- **Mixed-model support** - use multiple different models simultaneously
- **Fine-grained control** - independent parameter configuration for each request
- **Detailed statistics** - complete success/failure reporting
## Environment Initialization (Required Before First Use)

**Step 1: run the initialization script**

```shell
# Auto-check and set up API keys
python setup.py
```
The initialization script will:
- Check the `.env` file and API keys
- Prompt for any missing API keys
- Auto-create and configure the `.env` file
- Create the necessary directory structure
- Set secure file permissions
- Test the API connection
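For reference, the generated `.env` file is a plain key-value file along these lines. `REPLICATE_API_TOKEN` is the environment variable the Replicate Python client reads; the value shown is a placeholder, and the exact contents written by `setup.py` may differ.

```shell
# .env - created and managed by setup.py; keep it out of version control
REPLICATE_API_TOKEN=your_replicate_api_token_here
```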
**API key acquisition:**
- Replicate API token: visit replicate.com/account/api-tokens
## Quick Start

**Recommended: use `example_usage.py`**

```shell
# 1. Interactively select which example to run
python example_usage.py

# 2. Run all three examples
python example_usage.py all
```

```python
# 3. Import and use in your own code
from example_usage import single_image_generation, batch_same_model, advanced_mixed_models
```
**Vibe-coder friendly: copy-and-use format**
1. **Modify the parameter configuration** - edit the `PARAMS` variables at the top of the file
2. **Copy the core code** - find the marked core calling code
3. **Use directly** - paste it into your project
## Intelligent Batch Processing Strategies

The batch processor automatically selects the optimal strategy based on task volume:

**Strategy 1: Immediate Full Processing**
- Condition: task count ≤ current available quota
- Example: 12 tasks, current quota 450 → process all tasks immediately with concurrency

**Strategy 2: Single-Window Batch Processing**
- Condition: task count ≤ window quota (600), but greater than current quota
- Example: 450 tasks, current quota 200 → wait for sufficient quota, then batch process

**Strategy 3: Dynamic Queue Processing**
- Condition: task count > window quota (600)
- Example: 1,200 tasks → process in rolling batches; as each task completes, the next is added
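The three rules above can be sketched as a small selector. The function name and the hard-coded window quota are illustrative; the actual processor tracks live quota from the API.

```python
WINDOW_QUOTA = 600  # requests per rolling minute (Replicate's shared limit)

def select_strategy(task_count: int, current_quota: int) -> str:
    """Pick a processing strategy following the three rules above."""
    if task_count <= current_quota:
        return "immediate_full"   # Strategy 1: run everything now
    if task_count <= WINDOW_QUOTA:
        return "single_window"    # Strategy 2: wait for quota, then run
    return "dynamic_queue"        # Strategy 3: rolling batches

print(select_strategy(12, 450))    # immediate_full
print(select_strategy(450, 200))   # single_window
print(select_strategy(1200, 450))  # dynamic_queue
```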
## Usage Scenario Comparison

| Usage Method | Task Count | Config Complexity | Recommended Scenario |
|---|---|---|---|
| Single image | 1 | Simple | Testing, demos, one-off generation |
| Simple batch | 2-50 | Medium | Same-parameter batch generation |
| Advanced batch | 10-1000+ | High | Mixed models, complex needs |
## Rate Limiting and Concurrency Control

**Replicate API limits**
- Create predictions: 600 requests/minute (shared across all models)
- Exceeding the limit returns a 429 error
Safe Concurrency Recommendations
# Conservative setting (recommended for beginners)
max_concurrent = 5
# Balanced setting (recommended for most users)
max_concurrent = 8
# Aggressive setting (requires good retry mechanism)
max_concurrent = 12
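If you do hit 429s at higher concurrency, exponential backoff is the standard remedy. The package's built-in retry already handles this for you; the sketch below only illustrates the idea, with `RateLimitError` standing in for whatever exception your client raises on a 429 response.

```python
import asyncio
import random

class RateLimitError(Exception):
    """Stand-in for the exception raised on a 429 response."""

async def call_with_backoff(task, max_retries=3):
    """Retry a zero-argument async callable with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return await task()
        except RateLimitError:
            if attempt == max_retries:
                raise
            # Sleep 2, 4, 8... seconds plus jitter so retries don't synchronize
            await asyncio.sleep(2 ** (attempt + 1) + random.random())
```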
## JSON Data Batch Processing Example

If you have structured JSON data, you can use the test script:

```shell
# Run the JSON batch processing test
python json_batch_test.py
```

It demonstrates how to extract image descriptions from JSON data and generate images in batches.
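The idea can be sketched as follows. The JSON shape and field names here are illustrative, not the format `json_batch_test.py` requires.

```python
import json

# Hypothetical JSON: a list of scenes, each with a "description" to render
raw = '''
[
  {"scene": "intro", "description": "sunset over mountains"},
  {"scene": "city",  "description": "neon city street at night"}
]
'''

scenes = json.loads(raw)
prompts = [s["description"] for s in scenes]
paths = [f"output/scene_{i:02d}_{s['scene']}.jpg" for i, s in enumerate(scenes, 1)]

print(prompts)
print(paths)
# prompts and paths would then be passed to intelligent_batch_process(
#     prompts=prompts, output_filepath=paths, ...) as in Method 2 above.
```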
## Best Practices

### 1. Choose the appropriate method

```python
# Single image - call main directly
if len(prompts) == 1:
    result = replicate_model_calling(prompt, model_name)

# Batch with same parameters - simple interface
elif all_same_params:
    files = await intelligent_batch_process(prompts, model_name)

# Complex needs - advanced interface
else:
    processor = IntelligentBatchProcessor()
    results = await processor.process_intelligent_batch(requests)
```
### 2. Error handling

```python
# Check batch processing results
successful_files = []
failed_count = 0

for result in results:
    if result.success:
        successful_files.extend(result.file_paths)
    else:
        failed_count += 1
        print(f"Failed: {result.error}")

print(f"Success: {len(successful_files)}, Failed: {failed_count}")
```
### 3. Output management

```python
import time
import os

# Use a timestamp to avoid file conflicts
timestamp = int(time.time())
output_dir = f"output/batch_{timestamp}"
os.makedirs(output_dir, exist_ok=True)
```
### 4. Large batch processing

```python
# Process large jobs in chunks
def chunk_prompts(prompts, chunk_size=50):
    for i in range(0, len(prompts), chunk_size):
        yield prompts[i:i + chunk_size]

all_files = []
for batch in chunk_prompts(huge_prompt_list, 50):
    files = await intelligent_batch_process(batch, model_name)
    all_files.extend(files)
```
## Important Notes

- **API quota sharing:** all model calls share the 600/minute limit
- **Instant download:** each task downloads as soon as it completes; it does not wait for the whole batch
- **Concurrency control:** start with a low concurrency and adjust gradually
- **Cost control:** batch processing consumes API quota quickly, so mind the cost
- **Storage space:** ensure sufficient disk space for generated files
## Troubleshooting

**1. 429 error (rate limiting)**

```python
# Reduce concurrency
max_concurrent = 5  # Reduced from 8 to 5
```

**2. Import error**

```python
# Make sure the package directory is importable
import sys
sys.path.append('/path/to/replicate_batch_process')
```

**3. File path issues**

```python
# Use absolute paths
import os
output_dir = os.path.abspath("output/my_batch")
```
## Complete Usage Workflow

```shell
# 1. First use - environment initialization
python setup.py

# 2. Run the examples
python example_usage.py

# 3. Or use the tool in your own code
python your_script.py
```
## Quick Start Template

```shell
# Method 1: view the complete examples
python example_usage.py
```

```python
# Method 2: copy and use directly
from example_usage import BATCH_SAME_MODEL_PARAMS
from intelligent_batch_processor import intelligent_batch_process
import asyncio

# Modify the parameter configuration
BATCH_SAME_MODEL_PARAMS["prompts"] = ["your prompt 1", "your prompt 2"]
BATCH_SAME_MODEL_PARAMS["model_name"] = "black-forest-labs/flux-dev"

# Core call
files = asyncio.run(intelligent_batch_process(
    prompts=BATCH_SAME_MODEL_PARAMS["prompts"],
    model_name=BATCH_SAME_MODEL_PARAMS["model_name"],
    max_concurrent=BATCH_SAME_MODEL_PARAMS["max_concurrent"]
))
print(f"✅ Generation complete! {len(files)} files in total")
```
You now have the complete toolkit, from single images to large-scale batch processing!

**Recommended:** use `example_usage.py` directly - tested, standardized, and vibe-coder friendly!