Professional Multimodal AI Engine for Onyx platform

ONYX AI Logo

💎 ONYX AI Gemma 4 Engine (E2B Edition)

A high-performance, professional FastAPI wrapper for Gemma Multimodal models with built-in 4-bit quantization and streaming support. Developed by ONYX (RUI Company).

🚀 Features

  • Zero Config Integration: Deploy a multimodal AI server in seconds.
  • Optimized Performance: Native 4-bit quantization using bitsandbytes for low VRAM/RAM usage.
  • Real-time Streaming: Built-in SSE (Server-Sent Events) for smooth, token-by-token generation.
  • Hardware Friendly: Optimized for both GPU and high-performance CPU inference.
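
Since the engine streams output over SSE, any standard Server-Sent Events parser can consume it. Below is a minimal sketch of client-side SSE parsing; the exact event names and payload format the engine emits are assumptions here, so check the server's actual stream before relying on them.

```python
def parse_sse_event(raw_event):
    """Parse one raw SSE event block into a dict of field -> value.

    SSE events are blocks of "field: value" lines separated by blank
    lines; multiple "data" lines are joined with newlines, per the
    SSE specification.
    """
    fields = {}
    data_lines = []
    for line in raw_event.splitlines():
        if not line or line.startswith(":"):  # blank line or SSE comment
            continue
        field, _, value = line.partition(":")
        # The spec strips a single leading space from the value.
        value = value[1:] if value.startswith(" ") else value
        if field == "data":
            data_lines.append(value)
        else:
            fields[field] = value
    if data_lines:
        fields["data"] = "\n".join(data_lines)
    return fields


# Example: a hypothetical token event as the engine might emit it.
event = parse_sse_event("event: token\ndata: Hello")
```

In practice you would read the response body incrementally, split on blank lines to get event blocks, and feed each block to a parser like this one.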

📦 Installation

Option 1: Via pip

You can install the engine directly from PyPI:

pip install onyx-AI-Gemma4

Option 2: requirements.txt

fastapi
uvicorn
transformers>=4.48.0
torch
accelerate
bitsandbytes
Pillow
torchvision
onyx-AI-Gemma4

💻 Usage

▶ Standard Script

from ONYXAI_Gemma4E2B import OnyxEngine

# Initialize the engine
engine = OnyxEngine(model_id="google/gemma-4-E2B-it")

# Run the server
if __name__ == "__main__":
    engine.run(host="0.0.0.0", port=7860)

🌐 Production / Hugging Face Spaces

from ONYXAI_Gemma4E2B import OnyxEngine
import uvicorn
import os

engine = OnyxEngine(model_id="google/gemma-4-E2B-it")
app = engine.app

@app.get("/")
def home():
    return {"message": "ONYX Engine is running!"}

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 7860))
    uvicorn.run(app, host="0.0.0.0", port=port)

🛠 API Usage

POST /predict

Example Request

{
  "messages": [
    {
      "role": "user",
      "content": "Explain the importance of AI in modern software engineering."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
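
A minimal Python client for this endpoint might look like the following. The request body mirrors the example above; the base URL matches the default port used in the usage examples, and the shape of the JSON response is an assumption, so adapt the return handling to what the engine actually sends back.

```python
import json
import urllib.request


def build_payload(prompt, temperature=0.7, max_tokens=1024):
    """Build the JSON body expected by the /predict endpoint."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


def predict(prompt, base_url="http://localhost:7860"):
    """POST a prompt to /predict and return the parsed JSON response."""
    body = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server from the usage section running, `predict("Hello!")` would send the same kind of request as the JSON example above.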

🔗 Links

Organization: ONYX / RUI Company
Author: Eng. Rawan Jassim

© 2026 ONYX. All rights reserved.


Download files

Download the file for your platform.

Source Distribution

onyx_ai_gemma4-0.1.6.tar.gz (3.9 kB)

Built Distribution

onyx_ai_gemma4-0.1.6-py3-none-any.whl (4.2 kB)

File details

Details for the file onyx_ai_gemma4-0.1.6.tar.gz.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.6.tar.gz
  • Upload date:
  • Size: 3.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.6.tar.gz
Algorithm Hash digest
SHA256 9b9df2cf755e197ea0bab3cde914b118bf3c409826132f536ad37345332710b4
MD5 8f701699212816592935a10f4d6e3ec5
BLAKE2b-256 cbfb67db35a112ed37d711a13850ae64211e54dd26054d698e60edb5d976d0a9

File details

Details for the file onyx_ai_gemma4-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 4.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.6-py3-none-any.whl
Algorithm Hash digest
SHA256 cf404ff1208a5e9c220210ceb8b971e607288405209fb910163d6ea1b2990bcb
MD5 8e2143a48899d36c43a5834c5383c3ba
BLAKE2b-256 0ec7fd4335f1685b042cc684901252e86569348f37b4237f65bb4aba910d7450
