Simple batch processing at scale for Gemini and Claude
AjaxAI Python SDK
The official Python SDK for AjaxAI - batch AI processing made simple.
AjaxAI removes everything that isn't directly related to your business logic: no JSONL files, no polling, no parsing. The goal is to make batch AI processing even easier than synchronous calls.
Installation
pip install ajaxai-sdk
Quick Start
from ajaxai import create_batch_job, AjaxAiRequestItem
# Set your API key
import os
os.environ["AJAXAI_API_KEY"] = "your_api_key_here"
# Create a job
job = create_batch_job(job_type="content_generation")
# Add requests
request = AjaxAiRequestItem(request_id="req_1").add_text("Write a short poem about AI")
job.add_request(request)
# Submit and get results
job.submit()
# Check results when ready
if job.get_state() == "succeeded":
    for result in job.get_results():
        print(result.response['text'])
Getting Started
- Sign up at ajaxai.co
- Complete setup including Google Cloud integration
- Get your API key from the settings page
- Install the SDK and start processing
Core Workflow
The AjaxAI workflow is straightforward: create a job, add requests, submit, and get results. No file management or complex state tracking needed.
from ajaxai import create_batch_job, AjaxAiRequestItem
# Create a job (job_type is required for result routing)
job = create_batch_job(job_type="data_analysis")
# Add multiple requests
prompts = [
    "Analyze customer sentiment: 'Great product, fast shipping!'",
    "Summarize: 'The quarterly report shows 15% growth...'",
    "Extract key points from: 'Meeting notes: discussed budget...'"
]
for i, prompt in enumerate(prompts):
    request = AjaxAiRequestItem(request_id=f"analysis_{i}").add_text(prompt)
    job.add_request(request)
# Submit for processing
job.submit()
Handling Results: Two Approaches
Manual Checking
Check results when you're ready:
# Check job status
print(f"Job status: {job.get_state()}")
# Process results when complete
if job.get_state() == "succeeded":
    for result in job.get_results():
        print(f"Request {result.summary.request_id}: {result.response['text']}")
Background Callbacks (Recommended)
Set up automatic result processing:
from ajaxai import AjaxAiClient
from ajaxai.registry import ajaxai_callback
@ajaxai_callback('data_analysis')
def handle_analysis_results(job):
    print(f"Analysis job {job.job_id} completed!")
    for result in job.get_results():
        print(f"Result: {result.response['text']}")
# Start background processing
client = AjaxAiClient()
client.start_polling()
# Submit jobs - callbacks will trigger automatically
job = create_batch_job(job_type="data_analysis")
# ... add requests and submit
Structured Outputs
One of the most tedious parts of AI processing is parsing free-form text responses. Structured outputs let you define exactly what you want back:
from pydantic import BaseModel, ValidationError
from typing import List
class ProductAnalysis(BaseModel):
    product_name: str
    rating: float
    pros: List[str]
    cons: List[str]
    recommendation: str
# Request structured output
request = AjaxAiRequestItem(
    request_id="product_review",
    output_model=ProductAnalysis
).add_text("Analyze: 'Great headphones, amazing sound but battery life could be better'")
job.add_request(request)
job.submit()
# Get typed results
for result in job.get_results():
    try:
        analysis = ProductAnalysis.model_validate_json(result.response['text'])
        print(f"Product: {analysis.product_name}")
        print(f"Rating: {analysis.rating}")
    except ValidationError:
        # Handle occasional parsing errors
        print(f"Raw response: {result.response['text']}")
Multimodal Processing
Process text and images together:
request = AjaxAiRequestItem(request_id="image_analysis")\
    .add_text("What's in this image?")\
    .add_image("https://example.com/product-photo.jpg")\
    .add_text("Suggest marketing copy based on what you see.")
job.add_request(request)
job.submit()
Image Requirements:
- Must be publicly accessible URLs
- Must include a file extension (.jpg, .png, etc.) in the URL
- Use robust hosting (cloud storage, CDNs, e-commerce platforms)
- Avoid protected or rate-limited URLs
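A cheap client-side pre-check can catch most URLs that would violate these requirements before a job is submitted. A minimal sketch; the extension set below is illustrative, so confirm which formats AjaxAI actually accepts:

```python
from urllib.parse import urlparse
import os.path

# Illustrative set -- confirm the formats the service accepts.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def looks_like_usable_image_url(url: str) -> bool:
    """Pre-check: an http(s) URL whose path ends in a known image extension."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    return os.path.splitext(parsed.path)[1].lower() in IMAGE_EXTENSIONS
```

This only validates the URL's shape; it can't tell whether the host is protected or rate-limited, so prefer hosting you control.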
Available Models
- gemini-2.0-flash (default) - Best balance of quality and speed
- gemini-2.0-flash-lite - Optimized for speed
job = create_batch_job(
    job_type="quick_tasks",
    model="gemini-2.0-flash-lite"
)
Understanding Responses
Every result follows this structure:
# result.response always contains {'text': '<AI output>'}
response_text = result.response['text']
# Additional data available
request_data = result.request # Original request
metadata = result.metadata # Your custom metadata
summary = result.summary # Status, timing, errors
usage = result.usage # Token usage and costs
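Since `result.usage` carries token usage, it's easy to total consumption across a job. A sketch under one loud assumption: the `total_tokens` attribute name is hypothetical, so inspect `result.usage` in your environment to find the real field names:

```python
def summarize_usage(results, token_field="total_tokens"):
    """Sum a token-count field across results.

    The default attribute name is hypothetical; replace it with the
    actual field exposed by result.usage in the SDK.
    """
    return sum(getattr(r.usage, token_field, 0) or 0 for r in results)
```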
Metadata for Context
Carrying context through your processing pipeline usually means database lookups. Metadata travels with your requests instead:
from pydantic import BaseModel
class JobMetadata(BaseModel):
    campaign_id: str
    priority: str

class RequestMetadata(BaseModel):
    customer_id: str
    category: str
# Job-level metadata
job = create_batch_job(
    job_type="customer_analysis",
    job_metadata=JobMetadata(campaign_id="Q2_2025", priority="high")
)
# Request-level metadata
request = AjaxAiRequestItem(request_id="analysis_1")\
    .add_text("Analyze customer feedback")\
    .add_request_metadata(RequestMetadata(customer_id="12345", category="electronics"))
Metadata is available when you process results, so no additional database queries are needed. It doesn't affect token usage because it's never sent to the AI model.
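One common pattern this enables is routing results by a metadata field instead of looking context up per request. A sketch assuming `result.metadata` exposes the `RequestMetadata` fields as attributes; adjust the access if the SDK hands back a dict or the raw Pydantic model instead:

```python
from collections import defaultdict

def group_by_category(results):
    """Group results by the `category` field carried in request metadata."""
    groups = defaultdict(list)
    for result in results:
        key = getattr(result.metadata, "category", "uncategorized")
        groups[key].append(result)
    return dict(groups)
```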
Complete Example
from ajaxai import create_batch_job, AjaxAiRequestItem, AjaxAiClient
from ajaxai.registry import ajaxai_callback
from pydantic import BaseModel, ValidationError
from typing import List
class EmailAnalysis(BaseModel):
    sentiment: str
    urgency_level: int  # 1-5 scale
    key_topics: List[str]
    requires_response: bool

@ajaxai_callback('email_processing')
def handle_email_results(job):
    print(f"Processing completed emails from job {job.job_id}")
    for result in job.get_results():
        try:
            analysis = EmailAnalysis.model_validate_json(result.response['text'])
            print(f"Email {result.summary.request_id}:")
            print(f"  Sentiment: {analysis.sentiment}")
            print(f"  Urgency: {analysis.urgency_level}/5")
            print(f"  Needs response: {analysis.requires_response}")
        except ValidationError:
            print(f"Could not parse: {result.response['text']}")
def process_emails():
    # Start background processing
    client = AjaxAiClient()
    client.start_polling()

    # Create job
    job = create_batch_job(job_type="email_processing")
    emails = [
        "Hi, the quarterly report is due tomorrow. Can you send it ASAP?",
        "Thanks for the great presentation yesterday!",
        "URGENT: Server down, customers can't access the app!"
    ]
    for i, email_text in enumerate(emails):
        request = AjaxAiRequestItem(
            request_id=f"email_{i}",
            output_model=EmailAnalysis
        ).add_text(f"Analyze this email: {email_text}")
        job.add_request(request)

    job.submit()
    # Results will be processed automatically by the callback

if __name__ == "__main__":
    process_emails()
API Reference
Core Functions
- create_batch_job(job_type, **kwargs) - Create a new batch job
- AjaxAiRequestItem(request_id) - Build individual requests
- @ajaxai_callback(job_type) - Register result handlers
Key Methods
- job.add_request(request) - Add a request to the job queue
- job.submit() - Start processing
- job.get_state() - Check job status
- job.get_results() - Stream results
- request.add_text(text) - Add text content
- request.add_image(url) - Add image content
Best Practices
✅ Do This
- Always specify job_type for proper result routing
- Use structured outputs for reliable, typed responses
- Handle parsing errors gracefully (LLMs are reliable but not perfect)
- Use robust, public image URLs with file extensions
- Leverage metadata to reduce database lookups
❌ Avoid This
- Protected or rate-limited image URLs
- Assuming structured output is always perfectly formatted
- Hardcoding API keys (use environment variables)
- Ignoring job completion status
Error Handling
from ajaxai.classes.exceptions import (
    AjaxAiApiError,
    AjaxAiAuthorizationError,
    AjaxAiRateLimitError
)

try:
    job = create_batch_job(job_type="analysis")
    job.submit()
except AjaxAiAuthorizationError:
    print("Check your API key")
except AjaxAiRateLimitError:
    print("Rate limited - try again later")
except AjaxAiApiError as e:
    print(f"API error: {e}")
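For `AjaxAiRateLimitError` specifically, "try again later" can be automated with exponential backoff. A generic sketch (the retry helper below is not part of the SDK; it simply wraps any callable and exception type):

```python
import time

def submit_with_retry(submit_fn, retry_on, attempts=3, base_delay=2.0):
    """Call submit_fn, retrying with exponential backoff on the given exception."""
    for attempt in range(attempts):
        try:
            return submit_fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage: `submit_with_retry(job.submit, AjaxAiRateLimitError)`.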
What Makes This Different
AjaxAI's goal is simple: remove everything that isn't directly related to your business logic. No JSONL files, no complex polling, no parsing workflows. Whether you're processing 10 requests or 10,000, the code stays the same.
Support
- Support: support@ajaxai.co
License
See LICENSE file for details.
Ready to get started? Sign up at ajaxai.co
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file ajaxai_sdk-0.1.1.tar.gz.
File metadata
- Download URL: ajaxai_sdk-0.1.1.tar.gz
- Upload date:
- Size: 19.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b7ec27ee74a75228c915548fda334f3df6811d24378c7b87423e3de5081750fb |
| MD5 | 3d49ea7864f1add4258ab06728c7b298 |
| BLAKE2b-256 | 25141fa8a4b8e3cf35632e2845a4c1382d7287dd3ef3474f8da59d58e955f2d6 |
File details
Details for the file ajaxai_sdk-0.1.1-py3-none-any.whl.
File metadata
- Download URL: ajaxai_sdk-0.1.1-py3-none-any.whl
- Upload date:
- Size: 5.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 420a9a5371799f7e571c40e0074d3aadfe6d1241b3c45ec9e81c2307bb67a911 |
| MD5 | 4ee650b445ce50cdcbe9d37d8396dbf2 |
| BLAKE2b-256 | d2c7d24f27173fafeca6728d5d24928c18ac513d69d984924ba817617625b6b3 |