Professional Multimodal AI Engine for Onyx platform


💎 ONYX AI Gemma 4 Engine (E2B Edition)

A high-performance, professional FastAPI wrapper for Gemma multimodal models with built-in 4-bit quantization and streaming support. Developed by ONYX (RUI Company).

🚀 Features

  • Zero Config Integration: Deploy a multimodal AI server in seconds.

  • Optimized Performance: Native 4-bit quantization using bitsandbytes for low VRAM/RAM usage.

  • Real-time Streaming: Built-in SSE (Server-Sent Events) for smooth, token-by-token generation.

  • Hardware Friendly: Optimized for both GPU and high-performance CPU inference.

  • 🌊 Text-Only Streaming: Real-time streaming currently supports text-based conversations only; the model responds token by token over SSE. Multimodal inputs (images/video) are processed through the standard predict endpoint without streaming.
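The exact SSE wire format is not documented here. Assuming the common framing where each event is a single `data: <token>` line, a client could consume the stream with a small helper like this (the `parse_sse_lines` function is illustrative, not part of the package):

```python
def parse_sse_lines(lines):
    """Yield the data payload of each SSE event from an iterable of text lines.

    Assumes the common framing where each event is one `data: ...` line,
    with events separated by blank lines.
    """
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            yield line[len("data:"):].strip()

# Against a live server, requests' iter_lines() would feed this helper:
#
#   import requests
#   resp = requests.post("http://localhost:7860/predict",
#                        json={"messages": [...], "stream": True},
#                        stream=True)
#   for token in parse_sse_lines(resp.iter_lines(decode_unicode=True)):
#       print(token, end="", flush=True)

if __name__ == "__main__":
    sample = ["data: Hello", "", "data: world", ""]
    print(list(parse_sse_lines(sample)))  # → ['Hello', 'world']
```

The URL and payload shape follow the server and API examples below; adjust them to your deployment.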

📦 Installation

Option 1: Via pip

You can install the engine directly from PyPI:

pip install onyx-AI-Gemma4

Option 2: Via requirements.txt

fastapi
uvicorn
transformers>=4.48.0
torch
accelerate
bitsandbytes
Pillow
torchvision
onyx-AI-Gemma4

💻 Usage

▶ Standard Script

from ONYXAI_Gemma4E2B import OnyxEngine

# Initialize the engine
engine = OnyxEngine(model_id="google/gemma-4-E2B-it")

# Run the server
if __name__ == "__main__":
    engine.run(host="0.0.0.0", port=7860)

🌐 Production

from ONYXAI_Gemma4E2B import OnyxEngine
import uvicorn
import os

engine = OnyxEngine(model_id="google/gemma-4-E2B-it")
app = engine.app

@app.get("/")
def home():
    return {"message": "ONYX Engine is running!"}

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 7860))
    uvicorn.run(app, host="0.0.0.0", port=port)

🛠 API Usage

Endpoint: POST /predict

Example Request

{
  "messages": [
    {
      "role": "user",
      "content": "Write a long story about space exploration."
    }
  ],
  "stream": true,
  "temperature": 0.7,
  "max_tokens": 1024
}
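Using only the request schema shown above, a Python client might look like the following sketch. The helper names (`build_predict_payload`, `call_predict`) and the use of the standard library's `urllib` are illustrative assumptions; the URL and port match the server examples above:

```python
import json
from urllib import request as urlrequest  # stdlib; the `requests` library works too

def build_predict_payload(prompt, stream=False, temperature=0.7, max_tokens=1024):
    """Build a /predict request body matching the example schema above."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def call_predict(prompt, base_url="http://localhost:7860"):
    """POST to /predict on a running OnyxEngine server (illustrative only)."""
    body = json.dumps(build_predict_payload(prompt)).encode()
    req = urlrequest.Request(
        f"{base_url}/predict",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Print the payload; call_predict() requires a running server.
    print(json.dumps(build_predict_payload("Hello"), indent=2))
```

With `stream` set to true the server responds over SSE rather than with a single JSON body, so a streaming client should read the response incrementally instead of calling `json.loads` on it.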

🔗 Links

Organization: ONYX / RUI Company
Author: Eng. Rawan Jassim

© 2026 ONYX. All rights reserved.

Project details


Download files

Download the file for your platform.

Source Distribution

onyx_ai_gemma4-0.1.11.tar.gz (4.0 kB)

Uploaded Source

Built Distribution


onyx_ai_gemma4-0.1.11-py3-none-any.whl (4.3 kB)

Uploaded Python 3

File details

Details for the file onyx_ai_gemma4-0.1.11.tar.gz.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.11.tar.gz
  • Upload date:
  • Size: 4.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.11.tar.gz
  • SHA256: cecb8463a7aede70fddfa772569c956f636d46c2f8f4d9965e0a41d3fe3a293a
  • MD5: d9ad2712de24b3429502712541e6c398
  • BLAKE2b-256: 72e2dfef8badb83bd1b9c59dc84cf451208a8db75d6666e9dc87eb1d5f5e332d


File details

Details for the file onyx_ai_gemma4-0.1.11-py3-none-any.whl.

File metadata

File hashes

Hashes for onyx_ai_gemma4-0.1.11-py3-none-any.whl
  • SHA256: 44ff843bfe1552f5bcf17ea25f17b1d434e5c5d0743ab509685844e8f97eca2c
  • MD5: a1cd454dd82044355591cda6de680341
  • BLAKE2b-256: 0e38073405ba54945afed438632786c934c53e582cd444440bfae00588874659

