# AIWand 🪄
The simplest way to unify the OpenAI and Gemini APIs: a drop-in replacement for your existing AI code, with automatic provider switching and structured output handling.
## 🎯 Simple Migration - One Line Change
**Before** - direct API calls with provider-specific code:
```python
# OpenAI-specific code
completion = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    temperature=0.8,
    top_p=0.9,
    response_format={"type": "json_object"},
)
result = json.loads(completion.choices[0].message.content)  # manual parsing

# OR Gemini-specific code
completion = gemini_client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=messages,
    temperature=0.8,
    top_p=0.9,
    response_format=SomeSchema,
)
result = completion.parsed  # different response handling
```
**After** - unified AIWand code that works with both:
```python
import aiwand

# The same code works with OpenAI, Gemini, and their structured outputs!
content = aiwand.call_ai(
    model="gpt-4o",                   # or "gemini-2.0-flash"
    messages=messages,
    temperature=0.8,
    top_p=0.9,
    response_format=CarouselContent,  # Pydantic model - automatic parsing!
)
# 'content' is already your parsed Pydantic object - no post-processing needed! ✨
```
## ✨ Why AIWand?
- 🔄 Drop-in Replacement - Minimal code changes, maximum benefits
- 🧠 Smart Provider Detection - Automatically uses OpenAI or Gemini based on model name
- 🏗️ Structured Output Magic - Handles Pydantic models automatically for both providers
- ⚡ No Post-Processing - Get parsed objects directly, skip manual JSON handling
- 🎯 Unified API - Same code works across different AI providers
- 🔑 Zero Configuration - Works with just environment variables
- 📱 High-Level Functions - Built-in summarization, chat, text generation, and classification
## 🚀 Quick Start
### Installation
```bash
pip install aiwand
```
### Configuration
Set your API keys as environment variables:
```bash
# Option 1: OpenAI only
export OPENAI_API_KEY="your-openai-key"

# Option 2: Gemini only
export GEMINI_API_KEY="your-gemini-key"

# Option 3: Both (set a preference)
export OPENAI_API_KEY="your-openai-key"
export GEMINI_API_KEY="your-gemini-key"
export AI_DEFAULT_PROVIDER="openai"  # or "gemini"
```
Or create a `.env` file in your project:
```
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
AI_DEFAULT_PROVIDER=openai
```
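The precedence these options imply (an explicit `AI_DEFAULT_PROVIDER` wins, otherwise whichever key is set) can be sketched in plain Python. `resolve_provider` is a hypothetical helper for illustration, not part of the aiwand API:

```python
import os

def resolve_provider(env=None):
    """Pick a provider using the precedence described above:
    an explicit default wins, otherwise fall back to whichever
    API key is present (illustrative sketch only)."""
    env = env if env is not None else os.environ
    default = env.get("AI_DEFAULT_PROVIDER")
    if default in ("openai", "gemini"):
        return default
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("GEMINI_API_KEY"):
        return "gemini"
    raise RuntimeError("No API key configured")

# With both keys set, the explicit preference decides:
print(resolve_provider({"OPENAI_API_KEY": "sk-...",
                        "GEMINI_API_KEY": "g-...",
                        "AI_DEFAULT_PROVIDER": "gemini"}))  # gemini
```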
### Core AI Functionality
The `call_ai()` function is the heart of AIWand - a unified interface for all AI providers:
```python
import aiwand
from pydantic import BaseModel

# Basic text generation
response = aiwand.call_ai(
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    model="gpt-4o"  # automatically uses OpenAI
)

# Switch providers seamlessly
response = aiwand.call_ai(
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    model="gemini-2.0-flash"  # automatically uses Gemini
)

# Structured output with Pydantic models
class BlogPost(BaseModel):
    title: str
    content: str
    tags: list[str]

blog_post = aiwand.call_ai(
    messages=[{"role": "user", "content": "Write a blog post about AI"}],
    model="gpt-4o",
    response_format=BlogPost  # returns a parsed BlogPost object!
)
print(blog_post.title)  # direct access to structured data

# Custom/preview models with an explicit provider
response = aiwand.call_ai(
    model="gemini-2.5-flash-preview-05-20",  # new model not in the registry
    provider="gemini",                       # explicit provider specification
    messages=[{"role": "user", "content": "Hello from the future!"}]
)

# Advanced parameters
response = aiwand.call_ai(
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant"},
        {"role": "user", "content": "Write a Python function to sort a list"}
    ],
    model="gpt-4o",
    temperature=0.3,  # more focused
    max_tokens=500,
    top_p=0.9
)
```
### High-Level Convenience Functions
For common tasks, use these simplified functions:
```python
import aiwand

# Text summarization
summary = aiwand.summarize("Your long text here...")

# AI chat with conversation history
response = aiwand.chat("What is machine learning?")

# Text generation from prompts
story = aiwand.generate_text("Write a poem about coding")

# Customized summarization
summary = aiwand.summarize(
    text="Your long text...",
    style="bullet-points",  # "concise", "detailed", or "bullet-points"
    max_length=50,
    model="gpt-4o"          # optional: specify a model
)

# Chat with conversation history
conversation = []
response1 = aiwand.chat("Hello!", conversation_history=conversation)
conversation.append({"role": "user", "content": "Hello!"})
conversation.append({"role": "assistant", "content": response1})
response2 = aiwand.chat("What did I just say?", conversation_history=conversation)
```
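Appending each exchange by hand gets repetitive. A small helper can keep history in the standard messages format; `record_exchange` is a hypothetical convenience, not part of aiwand:

```python
def record_exchange(history, user_message, assistant_reply):
    """Append one user/assistant turn in the standard messages format."""
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": assistant_reply})
    return history

conversation = []
record_exchange(conversation, "Hello!", "Hi there!")
record_exchange(conversation, "What did I just say?", "You said 'Hello!'")
print(len(conversation))  # 4
```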
```python
# Text generation with custom parameters
text = aiwand.generate_text(
    prompt="Write a technical explanation",
    max_tokens=300,
    temperature=0.3  # lower = more focused, higher = more creative
)

# Structured data extraction from content and links
contact_info = aiwand.extract(content="John Doe, email: john@example.com")

# Extract from URLs and files
data = aiwand.extract(links=["https://example.com/article", "/path/to/file.txt"])

# Combine content and links with structured output
from pydantic import BaseModel

class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str

result = aiwand.extract(
    content="Meeting notes about John...",
    links=["https://company.com/about"],
    response_format=ContactInfo,  # get a structured Pydantic object
    temperature=0.1               # lower for more consistent extraction
)
```
```python
# Text classification and grading
grader = aiwand.create_binary_classifier(criteria="correctness")
result = grader(question="What is 2+2?", answer="4", expected="4")
print(f"Score: {result.score}, Choice: {result.choice}")

# Custom classifier with multiple grades
math_grader = aiwand.create_classifier(
    prompt_template="Grade this math answer: {question} -> {answer}",
    choice_scores={"CORRECT": 1.0, "PARTIAL": 0.5, "WRONG": 0.0}
)
result = math_grader(question="What is 5+3?", answer="8", expected="8")
```
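A classifier like this presumably maps the model's chosen label to its configured score; the lookup itself is simple. The sketch below uses a hypothetical `score_choice` helper to show the idea, not aiwand's internals:

```python
def score_choice(choice, choice_scores, default=0.0):
    """Map a classifier's textual choice to its configured numeric score,
    tolerating case and whitespace differences in the model's output."""
    return choice_scores.get(choice.strip().upper(), default)

scores = {"CORRECT": 1.0, "PARTIAL": 0.5, "WRONG": 0.0}
print(score_choice("correct", scores))  # 1.0
```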
```python
# Helper utilities for testing and development
random_num = aiwand.generate_random_number(8)  # 8-digit number
unique_id = aiwand.generate_uuid()             # UUID4
```
## 🎯 Smart Provider Features
### Automatic Model Detection
```python
# AIWand automatically detects the right provider:
response = aiwand.call_ai(model="gpt-4o", ...)            # -> OpenAI
response = aiwand.call_ai(model="gemini-2.0-flash", ...)  # -> Gemini
response = aiwand.call_ai(model="o3-mini", ...)           # -> OpenAI

# Pattern-based detection for unknown models:
response = aiwand.call_ai(model="gemini-experimental-123", ...)  # -> Gemini
```
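The pattern-based fallback can be approximated with simple prefix rules. `detect_provider` below is an illustrative stand-in for AIWand's internal logic, not its actual implementation:

```python
def detect_provider(model: str) -> str:
    """Guess a provider from the model name (illustrative sketch only)."""
    name = model.lower()
    if name.startswith(("gpt-", "o1", "o3", "chatgpt")):
        return "openai"
    if name.startswith("gemini"):
        return "gemini"
    raise ValueError(f"Cannot infer provider for {model!r}; pass provider= explicitly")

print(detect_provider("gemini-experimental-123"))  # gemini
```

An unknown name raises instead of guessing, which is why the next section's explicit `provider=` argument exists.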
### Explicit Provider Control
```python
# Force a specific provider for custom models:
response = aiwand.call_ai(
    model="my-custom-model",
    provider="gemini",  # or AIProvider.GEMINI
    messages=[...]
)

# Works with both strings and the enum:
from aiwand import AIProvider

response = aiwand.call_ai(
    model="any-model",
    provider=AIProvider.OPENAI,
    messages=[...]
)
```
### Structured Output Support
```python
from pydantic import BaseModel

class ProductReview(BaseModel):
    rating: int
    pros: list[str]
    cons: list[str]
    recommendation: bool

# Works identically with both providers:
review = aiwand.call_ai(
    model="gpt-4o",  # or "gemini-2.0-flash"
    messages=[{"role": "user", "content": "Review this product: ..."}],
    response_format=ProductReview
)
# No manual JSON parsing needed - returns a ProductReview object directly!
```
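For a sense of what "no manual JSON parsing" saves: without a unified layer you would decode the raw string and check the fields yourself. A stdlib-only sketch of that boilerplate (the payload and `parse_review` helper are made up for illustration):

```python
import json

raw = '{"rating": 4, "pros": ["fast"], "cons": ["pricey"], "recommendation": true}'

def parse_review(payload: str) -> dict:
    """The hand-rolled decode-and-validate step that response_format replaces."""
    data = json.loads(payload)
    missing = {"rating", "pros", "cons", "recommendation"} - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

review = parse_review(raw)
print(review["rating"])  # 4
```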
### Configuration Management
```python
import aiwand

# Show the current configuration
aiwand.show_current_config()

# Interactive setup (optional)
aiwand.setup_user_preferences()
```
### Error Handling
```python
import aiwand

try:
    summary = aiwand.summarize("Some text")
except aiwand.AIError as e:
    print(f"AI service error: {e}")
except ValueError as e:
    print(f"Input error: {e}")
```
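Transient service errors are often worth retrying. A minimal backoff wrapper, using a local stand-in exception since this sketch does not import aiwand:

```python
import time

class AIError(Exception):
    """Stand-in for aiwand.AIError in this self-contained sketch."""

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on AIError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except AIError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Demo: a function that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AIError("temporary outage")
    return "ok"

print(with_retries(flaky))  # ok
```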
## 🔧 CLI Usage (Optional)
```bash
# Direct prompts (the easiest way!)
aiwand "Ten fun names for a pet pelican"
aiwand "Explain quantum computing in simple terms"

# Specific commands
aiwand summarize "Your text here" --style bullet-points
aiwand chat "What is machine learning?"
aiwand generate "Write a story about AI"

# Helper utilities
aiwand helper random --length 8  # generate an 8-digit random number
aiwand helper uuid --uppercase   # generate an uppercase UUID

# Setup preferences
aiwand setup
aiwand config
```
## 📚 Documentation
- API Reference - Complete function documentation
- CLI Reference - Command line usage
- Installation Guide - Detailed setup instructions
- Virtual Environment Guide - Best practices for Python environments
## 🛠️ Contributing
We welcome contributions from both AI assistants and human developers! Please see our comprehensive contributing guide:
- Contributing Guide - Standards, workflows, and best practices
- Development Guide - Technical details and advanced topics
Whether you're an AI assistant helping users or a human developer, these guides ensure consistency and quality across all contributions.
## 🤝 Connect
- GitHub: github.com/onlyoneaman/aiwand
- PyPI: pypi.org/project/aiwand
- X (Twitter): @onlyoneaman
## 📝 License
MIT License - see LICENSE file for details.
Made with ❤️ by Aman Kumar
**File details for `aiwand-0.4.10.tar.gz`** (source distribution)

- Size: 61.6 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.4

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a74ab60e0d1aa0a3e080cb40d7dbd3726dcba143419e971e3391fede91c5a8c3` |
| MD5 | `afea240b1a34ed2092ee586009853eb1` |
| BLAKE2b-256 | `cffa1d5e1b9a3ac25d0d7734ee0dc2bd0f85475c0ee06f39eb80f56f8f9b7a22` |
**File details for `aiwand-0.4.10-py3-none-any.whl`** (built distribution, Python 3)

- Size: 30.1 kB
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.4

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f6d5fe1c01c265d2345bd08e6b566446edb19a8f18b010b84b82bdc78db192c3` |
| MD5 | `7c8089c693c68b489ebe7ab8c30376b7` |
| BLAKE2b-256 | `be38d1ca5dae594777d0a466cec4b95ee73c9b31cad6de95d3fc8c99cb79a0c0` |