Professional multimodal AI engine for the Onyx platform

Project description

💎 ONYX AI Gemma 4 Engine (E2B Edition)

A high-performance, professional FastAPI wrapper for Gemma multimodal models, with built-in 4-bit quantization and streaming support. Developed by ONYX (RUI Company).

🚀 Features

  • Zero Config Integration: Deploy a multimodal AI server in seconds.
  • Optimized Performance: Native 4-bit quantization using bitsandbytes for low VRAM/RAM usage.
  • Real-time Streaming: Built-in SSE (Server-Sent Events) for smooth, token-by-token generation.
  • Hardware Friendly: Optimized for both GPU and high-performance CPU inference.
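The streaming feature above uses standard Server-Sent Events: each frame is a `data: ...` line followed by a blank line, and a client reassembles the token chunks in order. The exact event payload shape is an assumption here, not taken from the package; this is only a minimal sketch of SSE parsing:

```python
# Minimal sketch of reassembling a Server-Sent Events stream such as the
# one the engine's streaming endpoint emits token by token.
# Assumption: each event's data field holds a plain text chunk.

def parse_sse(raw: str):
    """Yield the data payload of each SSE event in a raw text stream."""
    for line in raw.splitlines():
        if line.startswith("data: "):
            yield line[len("data: "):]

chunks = list(parse_sse("data: Hello\n\ndata:  world\n\n"))
print("".join(chunks))  # -> "Hello world"
```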

📦 Installation

Option 1: Via pip

You can install the engine directly from PyPI:

pip install onyx-AI-Gemma4

Option 2: Via requirements.txt

Add the following entries to your requirements.txt and install with pip install -r requirements.txt:

fastapi
uvicorn
transformers>=4.48.0
torch
accelerate
bitsandbytes
Pillow
torchvision
onyx-AI-Gemma4

💻 Usage

▶ Standard Script

from ONYXAI_Gemma4E2B import OnyxEngine

# Initialize the engine
engine = OnyxEngine(model_id="google/gemma-4-E2B-it")

# Run the server
if __name__ == "__main__":
    engine.run(host="0.0.0.0", port=7860)

🌐 Production / Hugging Face Spaces

from ONYXAI_Gemma4E2B import OnyxEngine
import uvicorn
import os

engine = OnyxEngine(model_id="google/gemma-4-E2B-it")
app = engine.app

@app.get("/")
def home():
    return {"message": "ONYX Engine is running!"}

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 7860))
    uvicorn.run(app, host="0.0.0.0", port=port)

🛠 API Usage

Endpoint: POST /predict

Example Request

{
  "messages": [
    {
      "role": "user",
      "content": "Explain the importance of AI in modern software engineering."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
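A call against this endpoint can be sketched with only the standard library. The base URL, default port, and JSON response shape are assumptions based on the snippets above, not documented guarantees:

```python
import json
import urllib.request

# Hypothetical client for the POST /predict endpoint described above.

def build_payload(prompt, temperature=0.7, max_tokens=1024):
    """Build the JSON body expected by POST /predict."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def predict(prompt, base_url="http://localhost:7860"):
    """Send one chat turn to the engine and return the decoded JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(predict("Explain the importance of AI in modern software engineering."))
```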

🔗 Links

Organization: ONYX / RUI Company
Author: Eng. Rawan Jassim

© 2026 ONYX. All rights reserved.

Download files

Download the file for your platform.

Source Distribution

onyx_ai_gemma4-0.1.8.tar.gz (3.9 kB)

Built Distribution

onyx_ai_gemma4-0.1.8-py3-none-any.whl (4.2 kB)

File details

Details for the file onyx_ai_gemma4-0.1.8.tar.gz.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.8.tar.gz
  • Upload date:
  • Size: 3.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.8.tar.gz

  • SHA256: 44a7d77e9a5f28c0c901b7561ae86a4e62cda4c180208abfca6a58467f3d0e20
  • MD5: f521652bca67c12b2320053f2c93e091
  • BLAKE2b-256: ea03ca616e31ccb349635b4be430b38455f269adccbfdb4d2a44a9738f9ab681

File details

Details for the file onyx_ai_gemma4-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.8-py3-none-any.whl
  • Upload date:
  • Size: 4.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.8-py3-none-any.whl

  • SHA256: a69801419d6cd4d195b7afbd1877a26458e0f7f336ae49730c05f3dee8d4d682
  • MD5: c44787634484d8d823be3cefa9483b31
  • BLAKE2b-256: abd929070b62ad1b44207ab73371b65539f14c07abcf9ebceb2de1441e69ab11
