Professional Multimodal AI Engine for Onyx platform

Project description

💎 ONYX AI Gemma 4 Engine (E2B Edition)

A high-performance, professional FastAPI wrapper for Gemma Multimodal models with built-in 4-bit quantization and streaming support. Developed by ONYX (RUI Company).

🚀 Features

  • Zero Config Integration: Deploy a multimodal AI server in seconds.
  • Optimized Performance: Native 4-bit quantization using bitsandbytes for low VRAM/RAM usage.
  • Real-time Streaming: Built-in SSE (Server-Sent Events) for smooth, token-by-token generation.
  • Hardware Friendly: Optimized for both GPU and high-performance CPU inference.
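
As a sketch of how the SSE streaming feature can be consumed on the client side: the helper below parses `data:` lines as they arrive. The streaming route and request flags are not documented on this page, so treat `stream_url` and the payload shape as placeholders and check the engine's source for the real interface.

```python
import json
import urllib.request

def parse_sse_line(raw: bytes):
    """Return the payload of an SSE 'data:' line, or None for other lines."""
    line = raw.decode("utf-8").strip()
    if line.startswith("data:"):
        return line[len("data:"):].strip()
    return None

def stream_tokens(stream_url: str, payload: dict):
    """Yield tokens from an SSE endpoint (URL is a placeholder assumption)."""
    req = urllib.request.Request(
        stream_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "text/event-stream",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # the server flushes one SSE line per token
            token = parse_sse_line(raw)
            if token is not None:
                yield token
```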

📦 Installation

Option 1: Via pip

You can install the engine directly from PyPI:

pip install onyx-AI-Gemma4

Option 2: requirements.txt

fastapi
uvicorn
transformers>=4.48.0
torch
accelerate
bitsandbytes
Pillow
torchvision
onyx-AI-Gemma4

💻 Usage

Standard Script

from ONYXAI_Gemma4E2B import OnyxEngine

# Initialize the engine
engine = OnyxEngine(model_id="google/gemma-4-E2B-it")

# Run the server
if __name__ == "__main__":
    engine.run(host="0.0.0.0", port=7860)

🌐 Production / Hugging Face Spaces

from ONYXAI_Gemma4E2B import OnyxEngine
import uvicorn
import os

engine = OnyxEngine(model_id="google/gemma-4-E2B-it")
app = engine.app

@app.get("/")
def home():
    return {"message": "ONYX Engine is running!"}

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 7860))
    uvicorn.run(app, host="0.0.0.0", port=port)

🛠 API Usage

Endpoint: POST /predict

Example Request

{
  "messages": [
    {
      "role": "user",
      "content": "Explain the importance of AI in modern software engineering."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
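
The request above can be sent with a short standard-library client like the one below. The response schema is not documented on this page, so the helper simply returns the decoded JSON for you to inspect.

```python
import json
import urllib.request

# Payload mirroring the example request above.
payload = {
    "messages": [
        {
            "role": "user",
            "content": "Explain the importance of AI in modern software engineering.",
        }
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
}

def predict(payload: dict, base_url: str = "http://localhost:7860") -> dict:
    """POST the payload to /predict and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The default port matches the `engine.run(..., port=7860)` example above; pass a different `base_url` if you changed it.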

🔗 Links

  • Organization: ONYX (RUI Company)
  • Author: Eng. Rawan Jassim

© 2026 ONYX. All rights reserved.

Download files

Download the file for your platform.

Source Distribution

onyx_ai_gemma4-0.1.7.tar.gz (3.9 kB)

Uploaded Source

Built Distribution

onyx_ai_gemma4-0.1.7-py3-none-any.whl (4.2 kB)

Uploaded Python 3

File details

Details for the file onyx_ai_gemma4-0.1.7.tar.gz.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.7.tar.gz
  • Upload date:
  • Size: 3.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.7.tar.gz
Algorithm Hash digest
SHA256 4ce4b8c685b52a4d88daff1d241b1a8e7cf4e55412049aaf7dafafd764b5f393
MD5 9ec1a7cbea68584e7ced731f670c1045
BLAKE2b-256 06d49241b04197aabaff58324fe8c4c74d5a04f1825aafeac9ea12bbb81764d0

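
The published hashes above can be used to pin the install. A minimal sketch with pip's hash-checking mode, using the sdist and wheel SHA256 digests from this page (note that `--require-hashes` then demands hashes for every dependency in the file as well):

```shell
# requirements.txt entry pinning the published artifact hashes;
# pip refuses to install anything whose digest does not match.
cat > requirements.txt <<'EOF'
onyx-AI-Gemma4==0.1.7 \
    --hash=sha256:4ce4b8c685b52a4d88daff1d241b1a8e7cf4e55412049aaf7dafafd764b5f393 \
    --hash=sha256:435402f5993617e9e0edffb67c298478706254ab897f7d9604e96b08c0666b43
EOF

pip install --require-hashes -r requirements.txt
```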

File details

Details for the file onyx_ai_gemma4-0.1.7-py3-none-any.whl.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.7-py3-none-any.whl
  • Upload date:
  • Size: 4.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.7-py3-none-any.whl
Algorithm Hash digest
SHA256 435402f5993617e9e0edffb67c298478706254ab897f7d9604e96b08c0666b43
MD5 44347e74a9caeaf873540f446bbc3268
BLAKE2b-256 e4b54616b84e4e7260b38f649a6097ef267806be670baf7fcb4c572073f95ce2

