Professional Multimodal AI Engine for the ONYX platform

Project description

![ONYX logo](https://onyxchat-ai.vercel.app/logo/Group%2071.png)

💎 ONYX AI Gemma 4 Engine (E2B Edition)

A high-performance, professional FastAPI wrapper for Gemma multimodal models with built-in 4-bit quantization and streaming support. Developed by ONYX (RUI Company).

🚀 Features

  • Zero Config Integration: Deploy a multimodal AI server in seconds.
  • Optimized Performance: Native 4-bit quantization using bitsandbytes for low VRAM/RAM usage.
  • Real-time Streaming: Built-in SSE (Server-Sent Events) for smooth, token-by-token generation.
  • Hardware Friendly: Optimized for both GPU and high-performance CPU inference.
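
The SSE streaming mentioned above frames each chunk as `data:` lines separated by blank lines on the wire. Below is a minimal client-side parsing sketch; the exact event format emitted by this engine's stream is an assumption here, and standard SSE framing is shown:

```python
def parse_sse(lines):
    """Yield the data payload of each SSE event from an iterable of lines."""
    buffer = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("data:"):
            # Accumulate data lines; an event may span several of them.
            buffer.append(line[len("data:"):].lstrip())
        elif line == "" and buffer:
            # A blank line terminates the current event.
            yield "\n".join(buffer)
            buffer = []
    if buffer:
        yield "\n".join(buffer)

stream = ["data: Hello", "", "data: world", ""]
print(list(parse_sse(stream)))  # → ['Hello', 'world']
```

In practice you would feed `parse_sse` the response line iterator of your HTTP client and append each yielded chunk to the generated text.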

📦 Installation

Option 1: Via pip

You can install the engine directly from PyPI:

```
pip install onyx-AI-Gemma4
```

Option 2: Via requirements.txt

Add the engine and its dependencies to your requirements.txt:

```
fastapi
uvicorn
transformers>=4.48.0
torch
accelerate
bitsandbytes
Pillow
torchvision
onyx-AI-Gemma4
```

💻 Usage

▶ Standard Script

```python
from ONYXAI_Gemma4E2B import OnyxEngine

# Initialize the engine
engine = OnyxEngine(model_id="google/gemma-4-E2B-it")

# Run the server
if __name__ == "__main__":
    engine.run(host="0.0.0.0", port=7860)
```

🌐 Production

```python
from ONYXAI_Gemma4E2B import OnyxEngine
import uvicorn
import os

engine = OnyxEngine(model_id="google/gemma-4-E2B-it")
app = engine.app

# You can extend the underlying FastAPI app with your own routes
@app.get("/")
def home():
    return {"message": "ONYX Engine is running!"}

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 7860))
    uvicorn.run(app, host="0.0.0.0", port=port)
```

🛠 API Usage

Endpoint

POST /predict

Example Request

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Explain the importance of AI in modern software engineering."
    }
  ],
  "temperature": 0.7,
  "max_tokens": 1024
}
```
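
The /predict endpoint can be called from any HTTP client. Below is a minimal sketch using only the Python standard library, assuming the server is running locally on port 7860; the request body follows the documented schema above, while the response format is not specified here and is returned as raw JSON:

```python
import json
import urllib.request


def build_predict_payload(prompt, temperature=0.7, max_tokens=1024):
    """Build a /predict request body matching the documented schema."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


def predict(prompt, base_url="http://localhost:7860"):
    """POST a prompt to the engine's /predict endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=json.dumps(build_predict_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Example: `predict("Explain the importance of AI in modern software engineering.")` sends the same request shown above.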

🔗 Links

  • Organization: ONYX / RUI Company
  • Author: Eng. Rawan Jassim

© 2026 ONYX. All rights reserved.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

onyx_ai_gemma4-0.1.9.tar.gz (3.9 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

onyx_ai_gemma4-0.1.9-py3-none-any.whl (4.2 kB)

Uploaded Python 3

File details

Details for the file onyx_ai_gemma4-0.1.9.tar.gz.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.9.tar.gz
  • Upload date:
  • Size: 3.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.9.tar.gz
  • SHA256: d7b6a6e408c717459f5136c1e532d0f105112d4c506e206dde74e0ddeea8603b
  • MD5: 6315930f9b657c8c9916097c848c03f6
  • BLAKE2b-256: 8b08627bcb0e90fa9455b34d91de07b0361b64de3bbb851346fcfdec4c5640cf


File details

Details for the file onyx_ai_gemma4-0.1.9-py3-none-any.whl.

File metadata

  • Download URL: onyx_ai_gemma4-0.1.9-py3-none-any.whl
  • Upload date:
  • Size: 4.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.11

File hashes

Hashes for onyx_ai_gemma4-0.1.9-py3-none-any.whl
  • SHA256: b46edebed9e4300589ee5ed02b61989b0d1e923b343f08b809cc2d06e8a0a6ce
  • MD5: 17b1810978df9e8f0b7df294843e10dd
  • BLAKE2b-256: d746010a474376d3501bc3bf5f0408d6facfee6e5a93c1f0e9d39fe1b72aab4c

