ai-batch is now batchata
This package has been renamed. Use pip install batchata instead.
New package: https://pypi.org/project/batchata/
New repository: https://github.com/agamm/batchata
API Reference
- batch() - Process message conversations or PDF files
- BatchJob - Job status and results
- BatchManager - Manage large-scale batch processing with parallel execution
Quick Start
```python
import time

from ai_batch import batch
from pydantic import BaseModel

class Invoice(BaseModel):
    company_name: str
    total_amount: str
    date: str

# Process PDFs with structured output + citations
job = batch(
    files=["invoice1.pdf", "invoice2.pdf", "invoice3.pdf"],
    prompt="Extract the company name, total amount, and date.",
    model="claude-3-5-sonnet-20241022",
    response_model=Invoice,
    enable_citations=True
)

# Wait for completion
while not job.is_complete():
    time.sleep(30)

results = job.results()
# Results contain both data and citations together:
# [{"result": Invoice(...), "citations": {"company_name": [Citation(...)], ...}}, ...]
```
Installation
```
pip install ai-batch
```
Usage
Create a .env file in your project root:
```
ANTHROPIC_API_KEY=your-api-key
```
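For a quick sanity check that the key is visible to your process, the variable can also be set directly in the environment; this is a minimal sketch, and the `.env` file remains the recommended approach (the exact loading mechanism is handled by the library):

```python
import os

# Setting the key in the process environment, equivalent to what a
# .env loader does. "your-api-key" is a placeholder, not a real key.
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

print("ANTHROPIC_API_KEY" in os.environ)  # → True
```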
API Functions
batch()
Process multiple message conversations with optional structured output.
```python
from ai_batch import batch
from pydantic import BaseModel

class SpamResult(BaseModel):
    is_spam: bool
    confidence: float
    reason: str

# Process messages
job = batch(
    messages=[
        [{"role": "user", "content": "Is this spam? You've won $1000!"}],
        [{"role": "user", "content": "Meeting at 3pm tomorrow"}],
        [{"role": "user", "content": "URGENT: Click here now!"}]
    ],
    model="claude-3-haiku-20240307",
    response_model=SpamResult
)

# Get results
results = job.results()
```
Response:

```python
[
    SpamResult(is_spam=True, confidence=0.95, reason="Contains monetary prize claim"),
    SpamResult(is_spam=False, confidence=0.98, reason="Normal meeting reminder"),
    SpamResult(is_spam=True, confidence=0.92, reason="Urgent call-to-action pattern")
]
```
batch() with files
Process PDF files with optional structured output and citations.
```python
from ai_batch import batch
from pydantic import BaseModel

class Invoice(BaseModel):
    company_name: str
    total_amount: str
    date: str

# Process PDFs with citations
job = batch(
    files=["invoice1.pdf", "invoice2.pdf"],
    prompt="Extract the company name, total amount, and date.",
    model="claude-3-5-sonnet-20241022",
    response_model=Invoice,
    enable_citations=True
)

results = job.results()
```
Response:

```python
# Results contain both data and citations together
[
    {
        "result": Invoice(company_name="TechCorp Solutions", total_amount="$12,500.00", date="March 15, 2024"),
        "citations": {
            "company_name": [Citation(cited_text="TechCorp Solutions", start_page=1)],
            "total_amount": [Citation(cited_text="TOTAL: $12,500.00", start_page=2)],
            "date": [Citation(cited_text="Date: March 15, 2024", start_page=1)]
        }
    },
    {
        "result": Invoice(company_name="DataFlow Systems", total_amount="$8,750.00", date="March 18, 2024"),
        "citations": {
            "company_name": [Citation(cited_text="DataFlow Systems", start_page=1)],
            "total_amount": [Citation(cited_text="Total Due: $8,750.00", start_page=3)],
            "date": [Citation(cited_text="Invoice Date: March 18, 2024", start_page=1)]
        }
    }
]
```
BatchJob
The job object returned by batch().
```python
# Check completion status
if job.is_complete():
    results = job.results()

# Get processing statistics with cost tracking
stats = job.stats(print_stats=True)
# Output:
# 📊 Batch Statistics
#    ID: msgbatch_01BPtdnmEwxtaDcdJ2eUsq4T
#    Status: ended
#    Complete: ✅
#    Elapsed: 41.8s
#    Mode: Text + Citations
#    Results: 0
#    Citations: 0
#    Input tokens: 2,117
#    Output tokens: 81
#    Total cost: $0.0038
#    (50% batch discount applied)

# Citations are included in results (if enabled)
# Access via: results[0]["citations"]

# Save raw API responses
job = batch(..., raw_results_dir="./raw_responses")
```
BatchManager
Manage large-scale batch processing with automatic job splitting, parallel execution, state persistence, and cost management.
```python
from ai_batch import BatchManager
from pydantic import BaseModel

class Invoice(BaseModel):
    company_name: str
    total_amount: float
    invoice_number: str

# Initialize BatchManager for large-scale processing
manager = BatchManager(
    files=["invoice1.pdf", "invoice2.pdf", ...],  # 100+ files
    prompt="Extract invoice data",
    model="claude-3-5-sonnet-20241022",
    response_model=Invoice,
    enable_citations=True,
    items_per_job=10,               # Process 10 files per job
    max_parallel_jobs=5,            # 5 jobs in parallel
    max_cost=50.0,                  # Stop if cost exceeds $50
    state_path="batch_state.json",  # Auto-resume capability
    save_results_dir="results/"     # Save results to disk
)

# Run processing (auto-resumes if interrupted)
summary = manager.run(print_progress=True)

# Retry failed items
if summary['failed_items'] > 0:
    retry_summary = manager.retry_failed()

# Get statistics
stats = manager.stats
print(f"Completed: {stats['completed_items']}/{stats['total_items']}")
print(f"Total cost: ${stats['total_cost']:.2f}")

# Load results from disk
results = manager.get_results_from_disk()
```
Key Features:
- Automatic job splitting: Breaks large batches into smaller chunks
- Parallel processing: Multiple jobs run concurrently with ThreadPoolExecutor
- State persistence: Resume from interruptions with JSON state files
- Cost management: Stop processing when budget limit is reached
- Progress monitoring: Real-time progress updates with statistics
- Retry mechanism: Easily retry failed items
- Result saving: Organized directory structure for results
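To illustrate the automatic job splitting described above, here is a minimal sketch of chunking a file list into `items_per_job`-sized groups. This is illustrative only, not the library's internals; `split_into_jobs` is a hypothetical helper:

```python
# Split a list of input files into fixed-size chunks, one chunk per job,
# mirroring the effect of BatchManager's items_per_job parameter.
def split_into_jobs(files, items_per_job):
    return [files[i:i + items_per_job] for i in range(0, len(files), items_per_job)]

files = [f"doc{i}.pdf" for i in range(25)]
jobs = split_into_jobs(files, items_per_job=10)
print([len(job) for job in jobs])  # → [10, 10, 5]
```

With `max_parallel_jobs=5`, up to five of these chunks would be in flight at once.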
Citations
Citations work in two modes depending on whether you use structured output:
1. Text + Citations (Flat List)
When enable_citations=True without a response model, citations are returned as a flat list:
```python
job = batch(
    files=["document.pdf"],
    prompt="Summarize the key findings",
    enable_citations=True
)

results = job.results()  # List of {"result": str, "citations": List[Citation]}

# Example result structure:
[
    {
        "result": "Summary text...",
        "citations": [
            Citation(cited_text="AI reduces errors by 30%", start_page=2),
            Citation(cited_text="Implementation cost: $50,000", start_page=5)
        ]
    }
]
```
2. Structured + Field Citations (Mapping)
When using both response_model and enable_citations=True, citations are mapped to specific fields:
```python
job = batch(
    files=["document.pdf"],
    prompt="Extract the data",
    response_model=MyModel,
    enable_citations=True
)

results = job.results()  # List of {"result": Model, "citations": Dict[str, List[Citation]]}

# Example result structure:
[
    {
        "result": MyModel(title="Annual Report 2024", revenue="$1.2M"),
        "citations": {
            "title": [Citation(cited_text="Annual Report 2024", start_page=1)],
            "revenue": [Citation(cited_text="Revenue: $1.2M", start_page=3)],
            "growth": [Citation(cited_text="YoY Growth: 25%", start_page=3)]
        }
    }
]
```
The field mapping allows you to trace exactly which part of the source document was used to populate each field in your structured output.
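As a sketch of that traceability, the result shape shown above can be walked field by field. `Citation` here is a stand-in dataclass for illustration, not the library's actual class:

```python
from dataclasses import dataclass

# Stand-in for the library's Citation object (illustrative only).
@dataclass
class Citation:
    cited_text: str
    start_page: int

# One entry in the shape returned for structured output + citations.
results = [{
    "result": {"title": "Annual Report 2024"},
    "citations": {"title": [Citation("Annual Report 2024", 1)]},
}]

# Trace each populated field back to its source passage and page.
for entry in results:
    for field, cites in entry["citations"].items():
        for c in cites:
            print(f"{field}: page {c.start_page}: {c.cited_text!r}")
# → title: page 1: 'Annual Report 2024'
```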
Robust Citation Parsing
AI Batch uses proper JSON parsing for citation field mapping, ensuring reliability with complex JSON structures.

Handles complex scenarios:
- ✅ Escaped quotes in JSON values: `"name": "John \"The Great\" Doe"`
- ✅ URLs with colons: `"website": "http://example.com:8080"`
- ✅ Nested objects and arrays: `"metadata": {"nested": {"deep": "value"}}`
- ✅ Multi-line strings and special characters
- ✅ Fields with numbers/underscores: `user_name`, `age_2`
Previous Limitations (Fixed): The old regex-based approach would fail on complex JSON patterns. The new JSON parser reliably handles any valid JSON structure that Claude produces, making citation mapping robust for production use.
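The scenarios above can be checked with nothing but the standard `json` module; this small illustration shows why real JSON parsing handles inputs that naive regex patterns trip over (the library's actual parser code is not shown here):

```python
import json

# A payload combining the tricky cases: escaped quotes, a URL with a
# colon and port, and a nested object. A regex keyed on quotes or colons
# would mis-split this; json.loads parses it exactly.
raw = ('{"name": "John \\"The Great\\" Doe", '
       '"website": "http://example.com:8080", '
       '"metadata": {"nested": {"deep": "value"}}}')

data = json.loads(raw)
print(data["name"])                          # John "The Great" Doe
print(data["website"])                       # http://example.com:8080
print(data["metadata"]["nested"]["deep"])    # value
```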
Cost Tracking
AI Batch automatically tracks token usage and costs for all batch operations:
```python
from ai_batch import batch

job = batch(
    messages=[...],
    model="claude-3-5-sonnet-20241022"
)

# Get cost information
stats = job.stats()
print(f"Total cost: ${stats['total_cost']:.4f}")
print(f"Input tokens: {stats['total_input_tokens']:,}")
print(f"Output tokens: {stats['total_output_tokens']:,}")

# Or print formatted statistics
job.stats(print_stats=True)
```
Example Scripts
- examples/spam_detection.py - Email classification
- examples/pdf_extraction.py - PDF data extraction
- examples/citation_example.py - Basic citation usage
- examples/citation_with_pydantic.py - Structured output with citations
- examples/batch_manager_example.py - Large-scale batch processing with BatchManager
Limitations
- Citation mapping only works with flat Pydantic models (no nested models)
- No support for OpenAI
- PDFs require Opus/Sonnet models for best results
- Batch jobs can take up to 24 hours to process
- Use `job.is_complete()` to check status before getting results
- Citations may not be available in all batch API responses
Comparison with Alternatives
| Feature | ai-batch | LangChain | Instructor | PydanticAI |
|---|---|---|---|---|
| Batch Requests | ✅ Native (50% cost savings) | ❌ No native batch API | ✅ Via OpenAI Batch API (#1092) | ⚠️ Planned (#1771) |
| Structured Output | ✅ Full support | ✅ Via parsers | ✅ Core feature | ✅ Native |
| PDF File Input | ✅ Native support | ✅ Via document loaders | ✅ Via multimodal models | ✅ Via file handling |
| Citation Mapping | ✅ Field-level citations | ❌ Manual implementation | ❌ Manual implementation | ❌ Manual implementation |
| Cost Tracking | ✅ Automatic with tokencost | ❌ Manual implementation | ❌ Manual implementation | ❌ Manual implementation |
| Cost Limits | ✅ max_cost parameter | ❌ Manual implementation | ❌ Manual implementation | ❌ Manual implementation |
| Batch Providers | 2/2 (Anthropic, OpenAI planned) | 0/2 | 1/2 (OpenAI only) | 0/2 |
| Focus | Streamlined batch requests | General LLM orchestration | Structured outputs CLI | Agent framework |
License
MIT
Todos
- ~~Add pricing metadata and max_spend controls~~ (Cost tracking implemented)
- ~~Auto batch manager (parallel batches, retry, spend control)~~ (BatchManager implemented)
- Test mode to run on 1% sample before full batch
- Quick batch - split into smaller chunks for faster results
- Support text/other file types (not just PDFs)
- Support for OpenAI
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file ai_batch-0.2.1.tar.gz.
File metadata
- Download URL: ai_batch-0.2.1.tar.gz
- Upload date:
- Size: 125.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8fc7b4b16652264c8a594591dcb5fe73a8f617e92dd2a32e1a9dbec81cd15416 |
| MD5 | 271902b880265ede954c853122cac096 |
| BLAKE2b-256 | 67889a0cd99369be5f361edd7da9895a078ac5515808f9ff72b41d839447c64d |
Provenance
The following attestation bundles were made for ai_batch-0.2.1.tar.gz:
Publisher: publish.yml on agamm/ai-batch

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai_batch-0.2.1.tar.gz
- Subject digest: 8fc7b4b16652264c8a594591dcb5fe73a8f617e92dd2a32e1a9dbec81cd15416
- Sigstore transparency entry: 271771031
- Sigstore integration time:
- Permalink: agamm/ai-batch@92e8e6b11d99970bee0fbd573445a3bd34b785ff
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/agamm
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@92e8e6b11d99970bee0fbd573445a3bd34b785ff
- Trigger Event: release
File details
Details for the file ai_batch-0.2.1-py3-none-any.whl.
File metadata
- Download URL: ai_batch-0.2.1-py3-none-any.whl
- Upload date:
- Size: 29.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 96bf2320f870ce651912b0a5667f3fb257c2c4f98aa774961f46de0fd697c9b4 |
| MD5 | dc96e52ab8ab55b5a49f0eac595e6a1f |
| BLAKE2b-256 | a526e3f607b8e1a55f2bf4ba8c49da0a2fef3dd45dd13fb63407ca2cb08cd5ca |
|
Provenance
The following attestation bundles were made for ai_batch-0.2.1-py3-none-any.whl:
Publisher: publish.yml on agamm/ai-batch

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: ai_batch-0.2.1-py3-none-any.whl
- Subject digest: 96bf2320f870ce651912b0a5667f3fb257c2c4f98aa774961f46de0fd697c9b4
- Sigstore transparency entry: 271771036
- Sigstore integration time:
- Permalink: agamm/ai-batch@92e8e6b11d99970bee0fbd573445a3bd34b785ff
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/agamm
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@92e8e6b11d99970bee0fbd573445a3bd34b785ff
- Trigger Event: release