# 🤖 UltraGPT

A powerful and modular library for advanced AI-based reasoning and step pipelines with multi-provider support.
## 🌟 Features
- 🔄 Multi-Provider Support: Use OpenAI and Anthropic Claude models seamlessly
- 📝 Steps Pipeline: Break down complex tasks into manageable steps
- 🧠 Reasoning Pipeline: Advanced multi-iteration reasoning capabilities
- 🛠️ Tool Integration: Web search, calculator, math operations, and custom tools
- 🎯 Structured Output: Get structured responses using Pydantic schemas
- 🔧 Tool Calling: Execute custom tools with validated parameters
- 📊 Token Management: Comprehensive token tracking across providers
## 📦 Installation

```bash
pip install ultragpt

# For environment variable support (optional)
pip install python-dotenv
```

**Note:** Starting with version 4.0.0, Anthropic Claude support is included by default!
## 🚀 Quick Start

### Basic Usage (OpenAI)

```python
from ultragpt import UltraGPT

# Initialize with OpenAI (default)
ultragpt = UltraGPT(api_key="your-openai-api-key")

# Simple chat
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Write a story about an elephant."}
])

print("Response:", response)
print("Tokens used:", tokens)
```
### Multi-Provider Support

```python
from ultragpt import UltraGPT

# OpenAI (default)
ultragpt_openai = UltraGPT(api_key="your-openai-api-key")

# Claude
ultragpt_claude = UltraGPT(
    api_key="your-anthropic-api-key",
    provider="anthropic"
)

# Both work the same way!
response, tokens, details = ultragpt_claude.chat([
    {"role": "user", "content": "Hello Claude!"}
])
```
### Provider:Model Format

```python
# Use the provider:model format to target a specific model
ultragpt = UltraGPT(
    api_key="your-openai-api-key",
    claude_api_key="your-anthropic-api-key"  # For Claude models
)

# OpenAI models
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Hello!"}
], model="openai:gpt-4o")

# Claude models
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Hello!"}
], model="claude:claude-3-sonnet-20240229")
```
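The `provider:model` strings above follow a simple prefix convention. A minimal sketch of how such a string could be split (the `parse_model_spec` helper below is purely illustrative, not part of UltraGPT's API):

```python
def parse_model_spec(spec, default_provider="openai"):
    """Split a 'provider:model' string; fall back to the default provider."""
    if ":" in spec:
        provider, model = spec.split(":", 1)
        return provider, model
    return default_provider, spec

print(parse_model_spec("claude:claude-3-sonnet-20240229"))  # ('claude', 'claude-3-sonnet-20240229')
print(parse_model_spec("gpt-4o"))                           # ('openai', 'gpt-4o')
```

A bare model name without a prefix would simply inherit the instance's default provider.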
## 🌐 Web Search & Tools

### Google Search Integration

```python
from ultragpt import UltraGPT

ultragpt = UltraGPT(
    api_key="your-openai-api-key",
    google_api_key="your-google-api-key",
    search_engine_id="your-search-engine-id"
)

# Web search with scraping
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "What are the latest AI trends?"}
], tools=["web-search"], tools_config={
    "web-search": {
        "max_results": 3,
        "enable_scraping": True,
        "max_scrape_length": 2000
    }
})
```
### Built-in Tools

```python
# Use multiple tools at once
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Calculate 15% of 200 and check if 17 is prime"}
], tools=["calculator", "math-operations"])
```
## 🔧 Custom Tool Calling

### Define Custom Tools

```python
from pydantic import BaseModel
from ultragpt.schemas import UserTool

class EmailParams(BaseModel):
    recipient: str
    subject: str
    body: str

email_tool = UserTool(
    name="send_email",
    description="Send an email to a recipient",
    parameters_schema=EmailParams,
    usage_guide="Use when user wants to send an email",
    when_to_use="When user asks to send an email"
)

# Use custom tools
response, tokens = ultragpt.tool_call(
    messages=[{"role": "user", "content": "Send email to john@example.com about meeting"}],
    user_tools=[email_tool]
)
```
## 🧠 Advanced Pipelines

### Steps Pipeline

```python
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Plan a trip to Japan for 2 weeks"}
], steps_pipeline=True, steps_model="gpt-4o-mini")  # Use a cheaper model for steps
```

### Reasoning Pipeline

```python
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Solve this complex problem: ..."}
], reasoning_pipeline=True, reasoning_iterations=5)
```

### Mixed Provider Pipelines

```python
# Use OpenAI for the main response, Claude for reasoning
response, tokens, details = ultragpt.chat([
    {"role": "user", "content": "Complex analysis task"}
],
    model="openai:gpt-4o",                              # Main model
    reasoning_model="claude:claude-3-sonnet-20240229",  # Reasoning model
    reasoning_pipeline=True
)
```
## 📊 Structured Output

### Using Pydantic Schemas

```python
from pydantic import BaseModel

class AnalysisResult(BaseModel):
    sentiment: str
    confidence: float
    keywords: list[str]

# Get structured output (works with both providers)
result = ultragpt.chat_with_schema(
    messages=[{"role": "user", "content": "Analyze: 'I love this product!'"}],
    schema=AnalysisResult
)

print(result.sentiment)   # e.g. "positive"
print(result.confidence)  # e.g. 0.95
```
## 🔄 History Management

```python
# Enable conversation history tracking
ultragpt = UltraGPT(
    api_key="your-api-key",
    track_history=True,
    max_history=50  # Keep the last 50 messages
)

# Continue conversations naturally
response1 = ultragpt.chat([{"role": "user", "content": "My name is Alice"}])
response2 = ultragpt.chat([{"role": "user", "content": "What's my name?"}])
# response2 will remember Alice from response1
```
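A `max_history` cap like the one above presumably acts as a sliding window over the message list, keeping only the most recent entries. An illustrative sketch of that trimming behavior (not UltraGPT's actual internals):

```python
def trim_history(messages, max_history=50):
    """Keep only the most recent max_history messages."""
    return messages[-max_history:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(60)]
history = trim_history(history, max_history=50)

print(len(history))            # 50
print(history[0]["content"])   # msg 10 (the oldest 10 were dropped)
```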
## ⚙️ Configuration Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | str | Required | OpenAI API key |
| `claude_api_key` | str | None | Anthropic API key (for Claude models) |
| `provider` | str | `"openai"` | Default provider (`"openai"` or `"anthropic"`) |
| `model` | str | Auto-selected | Default model for the provider |
| `temperature` | float | 0.7 | Output randomness (0-2) |
| `reasoning_iterations` | int | 3 | Number of reasoning steps |
| `tools` | list | `[]` | Enabled tools |
| `verbose` | bool | False | Enable detailed logging |
| `track_history` | bool | False | Enable conversation history |
| `max_history` | int | 100 | Maximum messages to keep |
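The defaults in the table can be captured in a small config object. A hedged sketch mirroring the table (the field names match the documented parameters, but the `UltraGPTConfig` dataclass itself is illustrative and not part of the library):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UltraGPTConfig:
    api_key: str                          # required
    claude_api_key: Optional[str] = None  # for Claude models
    provider: str = "openai"              # "openai" or "anthropic"
    temperature: float = 0.7              # output randomness (0-2)
    reasoning_iterations: int = 3
    tools: list = field(default_factory=list)
    verbose: bool = False
    track_history: bool = False
    max_history: int = 100

cfg = UltraGPTConfig(api_key="your-openai-api-key")
print(cfg.provider, cfg.temperature)  # openai 0.7
```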
## 🛠️ Available Tools

### Web Search
- Google Custom Search with result scraping
- Configurable result limits and scraping depth
- Error handling and rate limiting

### Calculator
- Mathematical expression evaluation
- Complex calculations with step-by-step solutions
- Support for scientific functions

### Math Operations
- Range checking and validation
- Statistical analysis and outlier detection
- Prime number checking and factorization
- Sequence analysis (arithmetic/geometric patterns)
- Percentage calculations and ratios
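Two of the checks listed above can be illustrated with plain-Python equivalents (these helpers are illustrative stand-ins, not the math-operations tool's actual implementation):

```python
import math

def is_prime(n):
    """Primality check by trial division up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

def percentage(part_percent, whole):
    """Compute part_percent% of whole."""
    return part_percent / 100 * whole

print(is_prime(17))         # True
print(percentage(15, 200))  # 30.0
```

These mirror the earlier built-in-tools example ("Calculate 15% of 200 and check if 17 is prime").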
## 🌍 Environment Variables

Create a `.env` file for easy configuration:

```env
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_SEARCH_API_KEY=your-google-api-key
SEARCH_ENGINE_ID=your-search-engine-id
```

```python
import os
from dotenv import load_dotenv
from ultragpt import UltraGPT

load_dotenv()

ultragpt = UltraGPT(
    api_key=os.getenv("OPENAI_API_KEY"),
    claude_api_key=os.getenv("ANTHROPIC_API_KEY"),
    google_api_key=os.getenv("GOOGLE_SEARCH_API_KEY"),
    search_engine_id=os.getenv("SEARCH_ENGINE_ID")
)
```
## 📋 Requirements

- Python 3.6+
- OpenAI API key (for OpenAI models)
- Anthropic API key (for Claude models)
- Google Custom Search API (for the web search tool)

Built-in dependencies:

- `anthropic==0.60.0` - Claude API support (included by default)
- `openai>=1.59.3` - OpenAI API support
- `pydantic>=2.10.4` - Data validation and schemas
## 🚀 Examples

Check out the `examples/` directory for comprehensive usage examples:

- `example_tool_call.py` - Custom tool calling
- `example_claude_support.py` - Claude-specific features
- `example_multi_provider.py` - Multi-provider usage
- `example_history_control.py` - Conversation history
## 🤝 Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/improvement`)
3. Make your changes
4. Add tests if applicable
5. Commit your changes (`git commit -am 'Add new feature'`)
6. Push to the branch (`git push origin feature/improvement`)
7. Open a Pull Request
## 📝 License

This project is licensed under the MIT License - see the `LICENSE.rst` file for details.

## 👥 Author

Ranit Bhowmick

- Email: bhowmickranitking@duck.com
- GitHub: @Kawai-Senpai