GiantKelp AI
Universal AI Agent supporting multiple LLM providers with a single, unified interface
Built by GiantKelp, an AI agency in London
Overview
GiantKelp AI is a powerful, provider-agnostic Python library that gives you a unified interface to interact with multiple leading LLM providers. Write your code once and switch between providers seamlessly - no need to learn different APIs or refactor your codebase.
Why GiantKelp AI?
- 🔄 Provider Flexibility: Switch between Anthropic, OpenAI, Gemini, Groq, and DeepSeek without changing your code
- 🎯 Smart Model Selection: Automatically use smart, fast, or reasoning models based on your needs
- 📄 Rich Media Support: Handle text, images, and documents (PDFs) with the same simple interface
- 🌐 Web Search Integration: Native web search capabilities where supported
- 🤖 Agent Teams: Build sophisticated multi-agent systems with handoffs (optional)
- ⚡ Streaming Support: Real-time response streaming across all providers
- 🛡️ Production Ready: Comprehensive error handling, logging, and type hints
Supported Providers
| Provider | Text | Vision | Documents | Web Search | Reasoning |
|---|---|---|---|---|---|
| Anthropic (Claude) | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Google Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq | ✅ | ✅ | ❌ | ✅ | ✅ |
| DeepSeek | ✅ | ❌ | ❌ | ❌ | ✅ |
Installation
Basic Installation
pip install giantkelp-ai
With Agent Support
pip install giantkelp-ai[agents]
Quick Start
from giantkelp_ai import AIAgent
# Initialize with your preferred provider
agent = AIAgent(provider="anthropic")
# Get a response
response = agent.fast_completion("What is the capital of France?")
print(response) # "Paris is the capital of France."
Configuration
Environment Variables
Set your API keys as environment variables:
export ANTHROPIC_API_KEY="your-anthropic-key"
export OPENAI_API_KEY="your-openai-key"
export GEMINI_API_KEY="your-gemini-key"
export GROQ_API_KEY="your-groq-key"
export DEEPSEEK_API_KEY="your-deepseek-key"
# Optional global settings
export MAX_TOKENS=5000
export TEMPERATURE=0.1
Using .env File
Create a .env file in your project root:
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key
DEEPSEEK_API_KEY=your-deepseek-key
MAX_TOKENS=5000
TEMPERATURE=0.1
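The library reads these keys from the environment. If your application does not load the .env file automatically, you can do it yourself with python-dotenv before creating an agent (a minimal sketch; the dotenv dependency is an assumption, not something this package installs for you):
from dotenv import load_dotenv  # pip install python-dotenv
from giantkelp_ai import AIAgent

load_dotenv()  # copies the values from .env into environment variables
agent = AIAgent(provider="anthropic")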
Core Features
1. Text Completions
Choose from three model tiers for different use cases:
Fast Completion (Optimized for Speed)
agent = AIAgent(provider="anthropic")
response = agent.fast_completion(
user_prompt="Translate 'hello' to Spanish",
system_prompt="You are a helpful translator",
max_tokens=100,
temperature=0.1
)
print(response) # "Hola"
Smart Completion (Balanced Performance)
response = agent.smart_completion(
user_prompt="Explain quantum entanglement",
system_prompt="You are a physics professor",
max_tokens=500,
temperature=0.7
)
Reasoning Completion (Advanced Problem Solving)
response = agent.reasoning_completion(
user_prompt="Solve this complex math problem: ...",
max_tokens=2000
)
2. Streaming Responses
Get real-time responses as they're generated:
stream = agent.fast_completion(
user_prompt="Write a short story about a robot",
stream=True
)
for chunk in agent.normalize_stream(stream):
    print(chunk, end="", flush=True)
3. JSON Output Mode
Request structured JSON responses:
response = agent.fast_completion(
user_prompt="List 5 fruits with their colors",
json_output=True
)
print(response)
# {
# "fruits": [
# {"name": "apple", "color": "red"},
# {"name": "banana", "color": "yellow"},
# ...
# ]
# }
4. Image Analysis
Analyze images with vision-capable models:
# From file path
response = agent.image_completion(
user_prompt="What objects are in this image?",
image="path/to/image.jpg",
file_path=True
)
# From base64 data
response = agent.image_completion(
user_prompt="Describe this image",
image=base64_image_data,
file_path=False
)
# Use smart model for complex analysis
response = agent.image_completion(
user_prompt="Analyze the composition and artistic style",
image="artwork.jpg",
smart_model=True
)
5. Document Processing
Process PDF documents with automatic text extraction:
# Single document processing
response = agent.document_completion(
user_prompt="Summarize this document",
document="report.pdf",
smart_model=True
)
# Process each page independently
results = agent.document_completion(
user_prompt="Extract key points from each page",
document="multi-page-report.pdf",
split_into_pages=True
)
# Results is a dict: {1: "Page 1 summary", 2: "Page 2 summary", ...}
for page_num, summary in results.items():
    print(f"Page {page_num}: {summary}")
6. Web Search
Perform real-time web searches (provider-dependent):
# Basic web search
response = agent.web_search(
query="Latest developments in AI 2025",
scope="smart"
)
# With system prompt
response = agent.web_search(
query="Best practices for Python async programming",
system="You are a senior Python developer",
scope="fast"
)
# With location-based search
response = agent.web_search(
query="Local restaurants",
country_code="GB",
city="London",
scope="fast"
)
# With reasoning model
response = agent.web_search(
query="Compare the economic impacts of renewable energy",
scope="reasoning",
thinking_budget=5000 # Anthropic only
)
Advanced Features
Agent Teams with Handoffs
Build sophisticated multi-agent systems that can delegate tasks to specialized agents:
agent = AIAgent(provider="anthropic")
# Create a team of specialized agents
agent.create_handoff_team([
{
"name": "triage",
"instructions": "You are a customer service triage agent. Route inquiries to the appropriate specialist.",
"type": "smart",
"handoffs_to": ["billing", "technical", "sales"]
},
{
"name": "billing",
"instructions": "You handle all billing and payment-related questions. Be clear and concise.",
"type": "fast",
"handoffs_to": ["escalation"]
},
{
"name": "technical",
"instructions": "You provide technical support and troubleshooting. Be detailed and helpful.",
"type": "fast",
"handoffs_to": ["escalation"]
},
{
"name": "sales",
"instructions": "You handle sales inquiries and product questions. Be persuasive and informative.",
"type": "fast"
},
{
"name": "escalation",
"instructions": "You handle complex issues requiring deep reasoning and nuanced judgment.",
"type": "reasoning"
}
])
# Run an agent
response = agent.run_agent(
user_prompt="I'm having trouble with my last payment",
agent_name="triage"
)
# The triage agent will automatically hand off to billing if needed
print(response)
Creating Individual Agents
# Create a single agent
support_agent = agent.create_agent_sdk_agent(
name="support",
instructions="You are a friendly customer support agent.",
agent_type="smart",
store=True
)
# Create agent with custom tools
from my_tools import calculator, database_query
analyst_agent = agent.create_agent_sdk_agent(
name="analyst",
instructions="You analyze data and provide insights.",
agent_type="reasoning",
tools=[calculator, database_query]
)
# List all agents
print(agent.list_agents()) # ['support', 'analyst']
# Get a specific agent
my_agent = agent.get_agent("support")
Async Agent Execution
import asyncio

from giantkelp_ai import AIAgent

async def main():
    agent = AIAgent(provider="anthropic")

    # Create agent
    agent.create_agent_sdk_agent(
        name="assistant",
        instructions="You are a helpful assistant."
    )

    # Run asynchronously
    response_coro = agent.run_agent(
        user_prompt="What's the weather like?",
        agent_name="assistant",
        async_mode=True
    )
    response = await response_coro
    print(response)

asyncio.run(main())
Model Selection Guide
When to Use Each Model Tier
| Model Tier | Best For | Examples |
|---|---|---|
| Fast | Quick responses, simple tasks, high-volume requests | Translations, classifications, simple Q&A |
| Smart | Complex reasoning, detailed analysis, creative tasks | Content generation, code review, strategy |
| Reasoning | Deep problem-solving, multi-step reasoning, expert-level analysis | Research, mathematical proofs, complex debugging |
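If you want to choose a tier at runtime rather than hard-coding one of the three completion methods, a small dispatch helper works (an illustrative sketch, not part of the library API):
from giantkelp_ai import AIAgent

def tiered_completion(agent, tier, prompt, **kwargs):
    # Map a tier name from the table above to the matching completion method
    methods = {
        "fast": agent.fast_completion,
        "smart": agent.smart_completion,
        "reasoning": agent.reasoning_completion,
    }
    return methods[tier](prompt, **kwargs)

answer = tiered_completion(AIAgent(provider="anthropic"), "fast", "Translate 'hello' to Spanish", max_tokens=20)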
Provider-Specific Models
# Anthropic
agent = AIAgent(provider="anthropic")
# Fast: claude-haiku-4-5
# Smart: claude-sonnet-4-5
# Reasoning: claude-opus-4-1
# OpenAI
agent = AIAgent(provider="openai")
# Fast: gpt-4o-mini
# Smart: gpt-4o
# Reasoning: o3
# Gemini
agent = AIAgent(provider="gemini")
# Fast: gemini-2.5-flash
# Smart: gemini-2.5-pro
# Reasoning: gemini-2.5-pro
# Groq
agent = AIAgent(provider="groq")
# Fast: llama-3.1-8b-instant
# Smart: llama-3.3-70b-versatile
# Reasoning: llama-3.3-70b-versatile
# DeepSeek
agent = AIAgent(provider="deepseek")
# Fast: deepseek-chat
# Smart: deepseek-chat
# Reasoning: deepseek-reasoner
Switching Providers
One of the key benefits of GiantKelp AI is provider flexibility:
# Start with Anthropic
agent = AIAgent(provider="anthropic")
response1 = agent.smart_completion("Explain AI")
# Switch to OpenAI (same code!)
agent = AIAgent(provider="openai")
response2 = agent.smart_completion("Explain AI")
# Switch to Groq (same code!)
agent = AIAgent(provider="groq")
response3 = agent.smart_completion("Explain AI")
# All three work identically!
Error Handling
GiantKelp AI provides comprehensive error handling:
from giantkelp_ai import AIAgent
try:
    agent = AIAgent(provider="anthropic")
    response = agent.smart_completion("Hello")
except ValueError as e:
    # Configuration or input errors
    print(f"Configuration error: {e}")
except FileNotFoundError as e:
    # File-related errors (images, documents)
    print(f"File error: {e}")
except NotImplementedError as e:
    # Feature not supported by provider (checked before RuntimeError, its parent class)
    print(f"Feature not available: {e}")
except RuntimeError as e:
    # API or operational errors
    print(f"Runtime error: {e}")
Logging and Debugging
Enable verbose logging for debugging:
import logging
# Configure logging
logging.basicConfig(level=logging.INFO)
# Enable verbose mode
agent = AIAgent(provider="anthropic", verbose=True)
# Now all operations will be logged
response = agent.smart_completion("Test")
Examples
Example 1: Content Generation
from giantkelp_ai import AIAgent
agent = AIAgent(provider="anthropic")
blog_post = agent.smart_completion(
user_prompt="Write a 300-word blog post about the future of AI in healthcare",
system_prompt="You are a professional medical technology writer",
max_tokens=500,
temperature=0.7
)
print(blog_post)
Example 2: Image Analysis Pipeline
from giantkelp_ai import AIAgent
import os
agent = AIAgent(provider="openai")
# Analyze multiple images
image_folder = "product_photos/"
analyses = []
for filename in os.listdir(image_folder):
    if filename.endswith((".jpg", ".png")):
        analysis = agent.image_completion(
            user_prompt="Describe this product image for an e-commerce catalog",
            image=os.path.join(image_folder, filename),
            smart_model=True,
            json_output=True
        )
        analyses.append({
            "filename": filename,
            "analysis": analysis
        })
print(analyses)
Example 3: Document Summarization
from giantkelp_ai import AIAgent
agent = AIAgent(provider="gemini")
# Summarize a research paper
summary = agent.document_completion(
user_prompt="""
Provide a structured summary with:
1. Main findings
2. Methodology
3. Conclusions
4. Limitations
""",
document="research_paper.pdf",
smart_model=True,
max_tokens=1000
)
print(summary)
Example 4: Multi-Provider Comparison
from giantkelp_ai import AIAgent
providers = ["anthropic", "openai", "gemini", "groq"]
prompt = "What is the meaning of life?"
results = {}
for provider in providers:
    try:
        agent = AIAgent(provider=provider)
        response = agent.fast_completion(prompt)
        results[provider] = response
    except Exception as e:
        results[provider] = f"Error: {e}"

for provider, response in results.items():
    print(f"\n{provider.upper()}:")
    print(response)
Example 5: Intelligent Customer Support
from giantkelp_ai import AIAgent
agent = AIAgent(provider="anthropic")
# Create support team
agent.create_handoff_team([
{
"name": "receptionist",
"instructions": """
You are the first point of contact. Be warm and welcoming.
Understand the customer's needs and route them to the right specialist.
""",
"type": "fast",
"handoffs_to": ["technical", "billing", "general"]
},
{
"name": "technical",
"instructions": "You solve technical problems. Be patient and thorough.",
"type": "smart"
},
{
"name": "billing",
"instructions": "You handle billing inquiries. Be clear and accurate.",
"type": "fast"
},
{
"name": "general",
"instructions": "You handle general questions and provide information.",
"type": "fast"
}
])
# Handle customer inquiry
customer_message = "I'm having trouble logging into my account"
response = agent.run_agent(customer_message, agent_name="receptionist")
print(response)
API Reference
AIAgent Class
Constructor
AIAgent(provider: str = "anthropic", verbose: bool = False)
Parameters:
- provider (str): LLM provider name - "anthropic", "openai", "gemini", "groq", or "deepseek"
- verbose (bool): Enable detailed logging
Methods
Text Completion Methods
fast_completion(user_prompt, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)
Fast model completion for quick responses.
smart_completion(user_prompt, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)
Smart model completion for complex tasks.
reasoning_completion(user_prompt, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)
Reasoning model completion for advanced problem-solving.
Parameters:
- user_prompt (str): User's input text
- system_prompt (str, optional): System instructions
- max_tokens (int, optional): Maximum tokens to generate
- temperature (float, optional): Sampling temperature (0.0-1.0)
- stream (bool): Enable streaming responses
- json_output (bool): Request JSON-formatted output
Returns: str or dict (if json_output=True) or stream object (if stream=True)
Image Analysis
image_completion(user_prompt, image, file_path=True, smart_model=False, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False)
Analyze images using vision-capable models.
Parameters:
- user_prompt (str): Question or instruction about the image
- image (str or bytes): Image file path or base64 data
- file_path (bool): True if image is a file path, False if base64
- smart_model (bool): Use smart model instead of fast
- Other parameters same as the completion methods
Returns: str or dict or stream object
Document Processing
document_completion(user_prompt, document, file_path=True, smart_model=False, system_prompt=None, max_tokens=None, temperature=None, stream=False, json_output=False, split_into_pages=False)
Process PDF documents.
Parameters:
- user_prompt (str): Question or instruction about the document
- document (str or bytes): Document file path or bytes
- file_path (bool): True if document is a file path
- smart_model (bool): Use smart model instead of fast
- split_into_pages (bool): Process each page independently
- Other parameters same as the completion methods
Returns: str or dict or stream object, or dict of page results if split_into_pages=True
Web Search
web_search(query, system=None, scope="fast", max_tokens=10000, temperature=None, max_results=20, thinking_budget=5000, country_code=None, city=None)
Perform real-time web searches.
Parameters:
- query (str): Search query
- system (str, optional): System prompt
- scope (str): "smart", "fast", or "reasoning"
- max_tokens (int): Maximum tokens
- temperature (float, optional): Sampling temperature
- max_results (int): Hint for number of results
- thinking_budget (int, optional): Thinking token budget (Anthropic only)
- country_code (str, optional): Country code for location-based search
- city (str, optional): City name for location-based search
Returns: str
Agent Methods
create_agent_sdk_agent(name, instructions, agent_type="smart", handoffs=[], store=True, **agent_kwargs)
Create an OpenAI Agents SDK agent.
create_handoff_team(team_config)
Create a team of agents with handoff relationships.
run_agent(user_prompt, agent=None, agent_name=None, async_mode=False, **runner_kwargs)
Execute a stored agent.
get_agent(name)
Retrieve a stored agent by name.
list_agents()
List all stored agents.
Utility Methods
normalize_stream(stream)
Normalize streaming responses to yield text chunks.
clean_json_output(text)
Parse and clean JSON output from LLM responses.
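A short sketch of both utilities together; it assumes clean_json_output accepts the raw text of a response and returns the parsed object (if json_output=True already gives you a dict, that call is unnecessary):
from giantkelp_ai import AIAgent

agent = AIAgent(provider="anthropic")

# Collect a streamed answer into a single string
stream = agent.smart_completion("Explain photosynthesis briefly", stream=True)
text = "".join(agent.normalize_stream(stream))

# Parse a JSON-mode response into a Python object if it arrives as text
raw = agent.fast_completion("List 3 colours as JSON", json_output=True)
data = agent.clean_json_output(raw) if isinstance(raw, str) else raw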
Best Practices
1. Choose the Right Model Tier
# Use fast for simple, high-volume tasks
summaries = [
agent.fast_completion(f"Summarize: {text}")
for text in texts
]
# Use smart for important, complex tasks
strategy = agent.smart_completion(
"Develop a market entry strategy for...",
max_tokens=2000
)
# Use reasoning for critical decisions
analysis = agent.reasoning_completion(
"Analyze the risks and opportunities of..."
)
2. Implement Proper Error Handling
import logging

logger = logging.getLogger(__name__)

def safe_completion(agent, prompt):
    try:
        return agent.smart_completion(prompt)
    except RuntimeError as e:
        # Log and retry with a different provider
        logger.error(f"Provider failed: {e}")
        backup_agent = AIAgent(provider="groq")
        return backup_agent.smart_completion(prompt)
    except Exception as e:
        logger.error(f"Unexpected error: {e}")
        return None
3. Use Streaming for Long Responses
# Better user experience with streaming
stream = agent.smart_completion(
"Write a comprehensive guide to...",
stream=True
)
for chunk in agent.normalize_stream(stream):
    print(chunk, end="", flush=True)
    # Update UI in real-time
4. Leverage JSON Mode for Structured Data
# Request structured output
user_data = agent.fast_completion(
f"Extract name, email, and phone from: {text}",
json_output=True
)
# Now you can use the structured data
send_email(user_data['email'])
5. Optimize Token Usage
# Be specific with max_tokens
agent.fast_completion(
"Yes or no: Is this spam?",
max_tokens=5 # Only need a short answer
)
# Use appropriate temperature
agent.smart_completion(
"Generate creative story ideas",
temperature=0.9 # Higher for creativity
)
agent.fast_completion(
"What is 2+2?",
temperature=0.1 # Lower for factual answers
)
Performance Tips
- Batch Processing: Process multiple items in parallel when possible (see the sketch after this list)
- Caching: Cache responses for repeated queries
- Provider Selection: Choose providers based on your use case (cost, speed, capabilities)
- Model Tiering: Use fast models for simple tasks, save smart/reasoning for complex ones
- Streaming: Use streaming for long-form content to improve perceived performance
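As a concrete illustration of the first two tips, the sketch below fans independent fast completions out over a thread pool and memoizes repeated prompts; the worker count and cache size are assumptions to tune against your provider's rate limits:
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

from giantkelp_ai import AIAgent

agent = AIAgent(provider="groq")

@lru_cache(maxsize=256)
def cached_fast(prompt):
    # Identical prompts are served from the cache instead of a new API call
    return agent.fast_completion(prompt)

prompts = [f"Summarize: {text}" for text in ["text one", "text two", "text three"]]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Independent requests run in parallel threads
    summaries = list(pool.map(cached_fast, prompts))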
Troubleshooting
Common Issues
Issue: "API key not found"
# Solution: Set environment variable
import os
os.environ['ANTHROPIC_API_KEY'] = 'your-key'
agent = AIAgent(provider="anthropic")
Issue: "Vision not supported for X provider"
# Solution: Use a provider that supports vision
agent = AIAgent(provider="anthropic") # Supports vision
# or
agent = AIAgent(provider="openai") # Supports vision
Issue: "Document processing failed"
# Solution: Check file exists and is a valid PDF
import os
if os.path.exists("document.pdf"):
    response = agent.document_completion(
        "Summarize",
        "document.pdf"
    )
Issue: "Rate limit exceeded"
# Solution: Implement retry logic with exponential backoff
import time
def completion_with_retry(agent, prompt, max_retries=3):
    for attempt in range(max_retries):
        try:
            return agent.fast_completion(prompt)
        except RuntimeError as e:
            if "rate limit" in str(e).lower():
                wait = 2 ** attempt
                time.sleep(wait)
            else:
                raise
    raise RuntimeError("Max retries exceeded")
Support
- Email: jonah@giantkelp.com
- Website: giantkelp.com
About GiantKelp
GiantKelp is an AI agency based in London, specializing in cutting-edge artificial intelligence solutions for businesses. We build intelligent systems that help organizations leverage the power of AI effectively.
Visit us: www.giantkelp.com