⚡ Python AI SDK - OpenAI, Mistral AI, Anthropic, Google Gemini, xAI Grok, Hugging Face


AIHuber

aihuber is the easiest and quickest way to communicate with AI providers. Free and open-source.


✨ Features

aihuber is compatible with the main AI providers:

Chat completion (Text generation)

AI Providers Comparison

| AI Provider | Buffering (sync & async) | Streaming (sync & async) | Endpoint |
| --- | --- | --- | --- |
| OpenAI | | | `https://api.openai.com/v1/chat/completions` |
| Mistral AI | | | `https://api.mistral.ai/v1/chat/completions` |
| Anthropic | | | `https://api.anthropic.com/v1/messages` |
| Google (Gemini) | | | `https://generativelanguage.googleapis.com/v1/models/{model}:generateContent` |
| Cohere | | | `https://api.cohere.com/v2/chat` |
| Together AI | | | `https://api.together.xyz/v1/chat/completions` |
| Replicate | | | `https://api.replicate.com/v1/predictions` |
| Hugging Face | | | `https://api-inference.huggingface.co/models/{model_id}` |
| Azure OpenAI | | | `https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-name}/chat/completions?api-version=2024-02-01` |
| AWS Bedrock | | | `https://bedrock-runtime.{region}.amazonaws.com/model/{model-id}/invoke` |
| Perplexity | | | `https://api.perplexity.ai/chat/completions` |
| xAI (Grok) | | | `https://api.x.ai/v1/chat/completions` |

Legend

  • ✅ = Available
  • ❌ = Not Available
  • Text in parentheses = Specific service/model name
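The model string used throughout (e.g. `mistral:mistral-small-latest` in Quick Start below) prefixes the model name with the provider, which can then be routed to the endpoints in the table above. A minimal sketch of that routing, assuming nothing about aihuber's internals; the `resolve_endpoint` helper and the trimmed `ENDPOINTS` dict are purely illustrative:

```python
# Hypothetical sketch: route a "provider:model" string to its chat endpoint.
# resolve_endpoint is NOT part of aihuber; it only illustrates the mapping
# implied by the comparison table above (a few providers shown).

ENDPOINTS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "mistral": "https://api.mistral.ai/v1/chat/completions",
    "anthropic": "https://api.anthropic.com/v1/messages",
    "xai": "https://api.x.ai/v1/chat/completions",
}


def resolve_endpoint(model_string: str) -> tuple[str, str]:
    """Split 'provider:model' and return (endpoint URL, model name)."""
    provider, _, model = model_string.partition(":")
    if not model or provider not in ENDPOINTS:
        raise ValueError(f"Unknown model string: {model_string!r}")
    return ENDPOINTS[provider], model


print(resolve_endpoint("mistral:mistral-small-latest"))
```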

🚀 Quick Start

Installation

pip install aihuber

Or with uv:

uv add aihuber

Basic Usage

import asyncio
import os

from dotenv import load_dotenv

from aihuber import LLM, Message
from pydantic import SecretStr

load_dotenv()

MISTRAL_AI_TOKEN = SecretStr(os.environ["MISTRAL_AI_TOKEN"])  # raises KeyError if unset


async def main():
    """Chat completion example with asynchronous"""
    llm = LLM(model="mistral:mistral-small-latest", api_key=MISTRAL_AI_TOKEN)

    messages = [
        Message(role="system", content="You are a helpful Python developer assistant."),
        Message(role="user", content="Give me 10 numbers from 0"),
    ]

    async_chat_completion = await llm.chat_completion_async(messages, stream=True)
    async for chunk in async_chat_completion:
        print(chunk)

if __name__ == "__main__":
    asyncio.run(main())
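With `stream=True` the response arrives incrementally; if you also need the full text at the end, accumulate the chunks as they arrive. A minimal sketch of that pattern, using a stand-in async generator (`fake_stream` below simulates the chunks and is not part of aihuber):

```python
import asyncio


async def fake_stream():
    # Stand-in for the async iterator returned by
    # chat_completion_async(messages, stream=True); NOT part of aihuber,
    # it only simulates incremental text chunks.
    for chunk in ["0, 1, 2", ", 3, 4", ", 5"]:
        yield chunk


async def collect(stream) -> str:
    """Print each chunk as it arrives and return the assembled response."""
    parts = []
    async for chunk in stream:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)


full = asyncio.run(collect(fake_stream()))
```

The same `collect` coroutine would work unchanged on the real stream, since it only relies on `async for`.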

🐳 Docker Usage

docker build -t aihuber -f dockerfiles/aihuber.Dockerfile .
docker run -p 8080:8080 aihuber

📄 License

MIT



Download files

Download the file for your platform.

Source Distribution

aihuber-0.2.0.tar.gz (113.2 kB)

Uploaded Source

Built Distribution


aihuber-0.2.0-py3-none-any.whl (20.0 kB)

Uploaded Python 3

File details

Details for the file aihuber-0.2.0.tar.gz.

File metadata

  • Download URL: aihuber-0.2.0.tar.gz
  • Upload date:
  • Size: 113.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aihuber-0.2.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `4ab7593df9c2e0ba0f91cfa9d21c39bfbb5e65cdaf04548315fce3b81a5f757f` |
| MD5 | `90953175d914fa0c5de446f6ef6436c6` |
| BLAKE2b-256 | `444475924bcd44088d03d1199ca0eeadc4368c690d63338634ef105328bf8a01` |


File details

Details for the file aihuber-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: aihuber-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 20.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for aihuber-0.2.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `84c6783586d168c2e18a923299ed4f8fd5f74111c116a8b24d04883054fa6f71` |
| MD5 | `74bf5fb477cd3a966b300dda475570e9` |
| BLAKE2b-256 | `bc1050e39b7605eeda66ceba87fe1f31e60985c4d23056588fe5c70d9856e910` |

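After downloading, the published digests can be checked locally. A minimal sketch using only the standard library's `hashlib`; the file name is the one listed above, and the verification lines are left commented out since they require the downloaded file:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()


# Example: verify the wheel against the digest published above.
# expected = "84c6783586d168c2e18a923299ed4f8fd5f74111c116a8b24d04883054fa6f71"
# assert sha256_of("aihuber-0.2.0-py3-none-any.whl") == expected
```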
