# Model API Hub

Unified Python interface for multiple AI model APIs.

## What is Model API Hub?
Model API Hub is a unified Python SDK that lets you access multiple AI model APIs across different platforms and modalities with a consistent, simple interface.
Stop juggling different SDKs for each provider. Use one library for everything.
```python
# Same interface, different providers
from model_api_hub import deepseek_chat, siliconflow_chat, kimi_chat

# All work the same way
response = deepseek_chat("Hello!")
response = siliconflow_chat("Hello!")
response = kimi_chat("Hello!")
```
## Key Features
| Feature | Description |
|---|---|
| 18+ LLM Providers | OpenAI, Anthropic, DeepSeek, ZhipuAI, Kimi, SiliconFlow, and more |
| Streaming Support | Real-time streaming responses for all major providers |
| 5 Modalities | LLM, Vision-Language, Image Gen, Audio TTS, Video Gen |
| One-Line Setup | `pip install model-api-hub` and you're ready |
| Unified API | Same interface across all providers |
| Flexible Config | .env, YAML, or direct API keys |
| CLI Included | Test models directly from command line |
| Type Hints | Full type safety support |
## Quick Start

### Installation

```bash
pip install model-api-hub
```

### 1. Set Your API Key

```bash
# Create a .env file
echo 'DEEPSEEK_API_KEY=your_key_here' > .env
```
### 2. Start Coding

```python
from model_api_hub import deepseek_chat

# That's it. You're done.
response = deepseek_chat("Explain quantum computing in simple terms")
print(response)
```
## Usage Examples

### Language Models (LLM)

#### Synchronous Chat
```python
from model_api_hub import deepseek_chat, kimi_chat, siliconflow_chat

# DeepSeek
response = deepseek_chat(
    "Write a Python function to sort a list",
    system_prompt="You are a coding expert.",
)

# Kimi (Moonshot)
response = kimi_chat(
    "Summarize this article",
    temperature=0.5,
)

# SiliconFlow - access 50+ models
response = siliconflow_chat("Hello!", model="deepseek-ai/DeepSeek-V3")
```
#### Streaming Chat

```python
from model_api_hub import deepseek_chat_stream

# Stream responses in real time
for chunk in deepseek_chat_stream("Tell me a long story"):
    print(chunk, end="", flush=True)
```
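When the full text is also needed after streaming, the chunks can be accumulated while they are printed. A minimal sketch, with a stub iterator standing in for a real `deepseek_chat_stream(...)` call (the helper `stream_and_collect` is illustrative, not part of the package):

```python
def stream_and_collect(chunks):
    """Print chunks as they arrive and return the concatenated text."""
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)

# Stub generator standing in for deepseek_chat_stream("Tell me a long story")
story = stream_and_collect(iter(["Once ", "upon ", "a time."]))
```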
#### Multi-turn Conversation

```python
from model_api_hub.api.llm.deepseek_llm import create_client, get_completion

client = create_client()
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Python?"},
    {"role": "assistant", "content": "Python is a programming language..."},
    {"role": "user", "content": "What are its main features?"},
]
response = get_completion(client, messages)
```
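For longer sessions, the message list can be managed by a small wrapper. A sketch under stated assumptions: the `Conversation` class below is not part of the package, and `complete` is any callable that takes a message list and returns the reply text (for example, a lambda wrapping `get_completion(client, messages)`):

```python
class Conversation:
    """Minimal history-keeping wrapper (illustration, not part of the package)."""

    def __init__(self, complete, system_prompt="You are a helpful assistant."):
        self.complete = complete  # callable: list[dict] -> str
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = self.complete(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Stub completion standing in for a real provider call:
echo = lambda messages: "echo: " + messages[-1]["content"]
session = Conversation(echo)
print(session.ask("What is Python?"))  # history now holds 3 messages
```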
### Vision-Language Models (VLM)

```python
from model_api_hub.api.vlm.openai_vlm import chat

response = chat(
    prompt="What's in this image?",
    image_path="photo.jpg",
)
```
### Image Generation

```python
from model_api_hub.api.image.siliconflow_image_gen import generate

image_url = generate("A beautiful sunset over mountains")
```
### Text-to-Speech

```python
from model_api_hub.api.audio.openai_tts import synthesize

audio = synthesize("Hello, world!", voice="alloy", output_path="hello.mp3")
```
## Supported Models

### Language Models (LLM)

DeepSeek-Chat • DeepSeek-R1 • DeepSeek-Coder-V2 • DeepSeek-V3 • GPT-4o • GPT-4o-mini • GPT-4-Turbo • Claude-Sonnet-4.5 • Claude-Opus-4 • Gemini-Pro • Gemini-Flash • GLM-4.7-Flash • GLM-4 • GLM-4-Plus • GLM-4.5-Air • GLM-4.1-Thinking • Moonshot-v1-128k • Moonshot-v1-32k •
Kimi-K2 • MiniMax-ABAB6.5s • MiniMax-M2 • ERNIE-4.0 • ERNIE-4.5 • Qwen-Max • Qwen-Plus • Qwen-Turbo • Qwen2.5 • Qwen2.5-Coder • Qwen3 • Qwen2 • Qwen1.5 • Llama4 • Llama3.1 • Llama3-70B • Gemma3 • Gemma-2 •
Mistral-Large • Mixtral-8x22B • Command-R-Plus • InternLM3 • InternLM • InternLM2-20B • Baichuan • Yi (01.AI) • Yuan2.0 • Yuan2.0-M32 • Hunyuan-A13B • Hunyuan3D-2 • Spark-v3.5 • Phi4 • Phi-3 • MiniCPM • CharacterGLM
### Vision-Language Models (VLM)

GPT-4o-Vision • GPT-4V • Gemini-Pro-Vision • Qwen3-VL • Qwen2-VL • Qwen-VL-Plus • GLM-4V • MiniCPM-o • Yi-VL • InternVL • DeepSeek-VL • SpatialLM • LLaVA • CogVLM • BlueLM-Vision
### Image Generation Models

DALL-E 3 • DALL-E 2 • Kolors • Stable Diffusion XL • Stable Diffusion 3 • Recraft-v3 • Wanx • ERNIE-ViLG • Jimeng (Dreamina) • CogView • Hunyuan-Image • Playground-v2 • Kandinsky • DeepFloyd IF
### Audio Models

Whisper • Whisper-Large-v3 • TTS-1 • TTS-1-HD • ElevenLabs-Multilingual-v2 • ElevenLabs-Flash • Azure-TTS • Azure-Speech • MiniMax-TTS • Baidu-TTS • Qwen-Audio • ChatTTS • Fish-Speech • GPT-SoVITS
### Video Generation Models

Runway-Gen3 • Runway-Gen2 • Luma-Dream-Machine • Luma-Genie • Pika • Stable-Video-Diffusion • Jimeng-Video • CogVideo • VideoCrafter • ModelScope-Video • AnimateDiff
## Configuration

### Environment Variables (.env)

Create a `.env` file in your project root:
```bash
# LLM Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
DEEPSEEK_API_KEY=sk-...
KIMI_API_KEY=sk-...
ZHIPUAI_API_KEY=...
SILICONFLOW_API_KEY=sk-...
MINIMAX_API_KEY=...
YIYAN_API_KEY=...
DASHSCOPE_API_KEY=sk-...
MODELSCOPE_API_KEY=ms-...
XUNFEI_SPARK_API_KEY=...
GROQ_API_KEY=gsk_...
TOGETHER_API_KEY=...
MISTRAL_API_KEY=...
COHERE_API_KEY=...
PERPLEXITY_API_KEY=pplx-...
AZURE_OPENAI_API_KEY=...

# Other Services
ELEVENLABS_API_KEY=...
AZURE_SPEECH_KEY=...
STABILITY_API_KEY=...
RECRAFT_API_KEY=...
RUNWAY_API_KEY=...
LUMA_API_KEY=...
```
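The feature table above notes that keys can also be passed directly in code. A common precedence is "explicit key wins, environment variable as fallback"; the sketch below illustrates that idea only — `resolve_api_key` is a hypothetical helper, not part of the package, and the library's actual resolution logic may differ:

```python
import os

def resolve_api_key(provider_env_var, explicit_key=None):
    """Hypothetical helper: an explicitly passed key takes precedence,
    otherwise fall back to the named environment variable."""
    if explicit_key:
        return explicit_key
    key = os.environ.get(provider_env_var)
    if not key:
        raise RuntimeError(f"Set {provider_env_var} or pass an API key directly")
    return key

os.environ["DEEPSEEK_API_KEY"] = "sk-demo"
print(resolve_api_key("DEEPSEEK_API_KEY"))  # sk-demo
```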
### YAML Configuration

Create `config.yaml`:
```yaml
llm:
  openai:
    model: "gpt-4o"
    temperature: 0.7
    max_tokens: 4096
  deepseek:
    model: "deepseek-chat"
    temperature: 0.7
    max_tokens: 4096

vlm:
  openai:
    model: "gpt-4o"

image:
  siliconflow:
    model: "Kwai-Kolors/Kolors"
    size: "1024x1024"
```
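Per-call keyword arguments can sensibly override these YAML defaults. A sketch of that merge semantics, with plain dicts standing in for the parsed `config.yaml` (the `merged_params` helper is hypothetical, not the package's actual config code):

```python
# Defaults as they might look after parsing the "deepseek" section above
defaults = {"model": "deepseek-chat", "temperature": 0.7, "max_tokens": 4096}

def merged_params(defaults, **overrides):
    """Return defaults with any per-call overrides applied (later keys win)."""
    return {**defaults, **overrides}

print(merged_params(defaults, temperature=0.2))
```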
## Documentation
- LLM Usage Guide - Complete LLM documentation
- API Reference - Full API reference
- llm.txt - Quick reference for AI assistants
## Testing

Run tests for all providers:
```bash
# Test all LLMs (sync)
python tests/test_llm.py

# Test streaming
python tests/test_llm_streaming.py

# Test other modalities
python tests/test_vlm.py
python tests/test_image.py
python tests/test_audio.py
python tests/test_video.py
```
## CLI Usage

```bash
# Chat with a provider
model-api-hub chat deepseek "Hello!"

# List available providers
model-api-hub list

# Test a provider
model-api-hub test deepseek
```
## Architecture

```text
model_api_hub/
├── api/
│   ├── llm/          # Language Models (18+ providers)
│   ├── vlm/          # Vision-Language Models
│   ├── image/        # Image Generation
│   ├── audio/        # Text-to-Speech
│   └── video/        # Video Generation
├── utils/
│   └── config.py     # Configuration management
├── cli.py            # Command-line interface
└── __init__.py       # Public API exports
```
## Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.
### Adding a New Provider

1. Create a new file: `model_api_hub/api/llm/{provider}_llm.py`
2. Implement `chat()`, `chat_stream()` (optional), and `create_client()`
3. Add exports to `model_api_hub/api/llm/__init__.py`
4. Add tests in `tests/test_llm.py`
5. Update documentation
See llm.txt for detailed implementation guide.
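The three functions named above might be sketched as follows. This is a hypothetical skeleton, not the package's actual provider template: `EchoClient` stands in for a real HTTP client, and `MYPROVIDER_API_KEY` is a placeholder variable name.

```python
import os

class EchoClient:
    """Stand-in for a real provider client (illustration only)."""
    def complete(self, messages):
        return "echo: " + messages[-1]["content"]

def create_client(api_key=None):
    # A real module would read the provider's key and build its HTTP client;
    # "MYPROVIDER_API_KEY" is a placeholder name.
    api_key = api_key or os.environ.get("MYPROVIDER_API_KEY", "")
    return EchoClient()

def chat(prompt, system_prompt=None, client=None, **kwargs):
    client = client or create_client()
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return client.complete(messages)

def chat_stream(prompt, **kwargs):
    # Optional: yield the reply in chunks instead of all at once.
    for word in chat(prompt, **kwargs).split(" "):
        yield word + " "
```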
## License

MIT License - see the LICENSE file.
## Acknowledgments
Thanks to all the AI providers for their amazing APIs!
## Download files
### model_api_hub-0.1.0.tar.gz

Details for the file model_api_hub-0.1.0.tar.gz (source distribution):

- Download URL: model_api_hub-0.1.0.tar.gz
- Upload date:
- Size: 19.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.16

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `92a20be122bfc38e076573766a8e6b9f33d32be9d348f475fbe3794bd04d5a88` |
| MD5 | `f9e024ebb8f5cf5923fca6b6a7c86494` |
| BLAKE2b-256 | `04f56401af40c40c49919481899fa614744ba26b0a55f9809b38179bff23e3b5` |
### model_api_hub-0.1.0-py3-none-any.whl

Details for the file model_api_hub-0.1.0-py3-none-any.whl (built distribution):

- Download URL: model_api_hub-0.1.0-py3-none-any.whl
- Upload date:
- Size: 10.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.16

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `3b4606bf1caf90929112441d4f94a4a53924c359802f8858add220deb160faf3` |
| MD5 | `94e84e69a65140cca793995c1b01068a` |
| BLAKE2b-256 | `085a6875bbe7ab028d1281de3f67600def33aac95e38f042346cfe135c3f174e` |