⚡ Python AI SDK - OpenAI, MistralAI, Anthropic, Google Gemini, XAI Grok, Hugging Face
AIHuber
aihuber is the easiest and quickest way to communicate with AI providers. Free and open-source.
✨ Features
aihuber is compatible with the main AI providers:
Chat completion (Text generation)
AI Providers Comparison
| AI Provider | Buffering (sync & async) | Streaming (sync & async) | Endpoint |
|---|---|---|---|
| OpenAI | ✅ | ✅ | https://api.openai.com/v1/chat/completions |
| Mistral AI | ✅ | ✅ | https://api.mistral.ai/v1/chat/completions |
| Anthropic | ✅ | ✅ | https://api.anthropic.com/v1/messages |
| Google (Gemini) | ❌ | ❌ | https://generativelanguage.googleapis.com/v1/models/{model}:generateContent |
| Cohere | ✅ | ✅ | https://api.cohere.com/v2/chat |
| Together AI | ✅ | ✅ | https://api.together.xyz/v1/chat/completions |
| Replicate | ❌ | ❌ | https://api.replicate.com/v1/predictions |
| Hugging Face | ❌ | ❌ | https://api-inference.huggingface.co/models/{model_id} |
| Azure OpenAI | ❌ | ❌ | https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-name}/chat/completions?api-version=2024-02-01 |
| AWS Bedrock | ❌ | ❌ | https://bedrock-runtime.{region}.amazonaws.com/model/{model-id}/invoke |
| Perplexity | ✅ | ✅ | https://api.perplexity.ai/chat/completions |
| xAI (Grok) | ❌ | ❌ | https://api.x.ai/v1/chat/completions |
Legend
- ✅ = Available
- ❌ = Not Available
- Text in parentheses = Specific service/model name
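As the Basic Usage section below shows, a model is addressed with a provider-prefixed string such as `mistral:mistral-small-latest`. A minimal sketch of how such a string splits into provider and model name (`parse_model` is a hypothetical helper written for illustration, not part of the aihuber API):

```python
def parse_model(model: str) -> tuple[str, str]:
    """Split a provider-prefixed model string into (provider, model_name)."""
    provider, _, model_name = model.partition(":")
    if not provider or not model_name:
        raise ValueError(f"expected '<provider>:<model>', got {model!r}")
    return provider, model_name


print(parse_model("mistral:mistral-small-latest"))  # ('mistral', 'mistral-small-latest')
```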
🚀 Quick Start
Installation

```shell
pip install aihuber
```

Or with uv:

```shell
uv add aihuber
```
Basic Usage
```python
import asyncio
import os

from dotenv import load_dotenv
from pydantic import SecretStr

from aihuber import LLM, Message

load_dotenv()

MISTRAL_AI_TOKEN = SecretStr(os.getenv("MISTRAL_AI_TOKEN"))  # type: ignore


async def main():
    """Asynchronous chat completion example."""
    llm = LLM(model="mistral:mistral-small-latest", api_key=MISTRAL_AI_TOKEN)
    messages = [
        Message(role="system", content="You are a helpful Python developer assistant."),
        Message(role="user", content="Give me 10 numbers from 0"),
    ]

    # Stream the response chunk by chunk
    async_chat_completion = await llm.chat_completion_async(messages, stream=True)
    async for chunk in async_chat_completion:
        print(chunk)


if __name__ == "__main__":
    asyncio.run(main())
```
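With `stream=True` the call yields chunks as they arrive; to buffer the whole reply instead, you can collect the chunks yourself. The sketch below uses a stub async generator standing in for `llm.chat_completion_async(messages, stream=True)`, since it assumes the chunks are plain strings, which aihuber's actual chunk type may not be:

```python
import asyncio


async def fake_stream():
    """Stub standing in for llm.chat_completion_async(messages, stream=True)."""
    for chunk in ["0, 1, 2, ", "3, 4, 5, ", "6, 7, 8, 9"]:
        yield chunk


async def buffered() -> str:
    # Collect the streamed chunks into one complete response string
    parts = [chunk async for chunk in fake_stream()]
    return "".join(parts)


print(asyncio.run(buffered()))  # 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
```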
🐳 Docker Usage
```shell
docker build -t aihuber -f dockerfiles/aihuber.Dockerfile .
docker run -p 8080:8080 aihuber
```
📄 License