SOTA Multimodal Inference Engine (S2S, I2I, V2V) for Xoron-Dev.
🚀 Xoron-Dev: Unified Multimodal AI Model
A state-of-the-art multimodal MoE model that unifies text, image, video, and audio understanding and generation.
Architecture | Features | Installation | Usage | Training | Documentation
🏗️ Architecture Overview
Xoron-Dev is built on a modular, mixture-of-experts architecture designed for maximum flexibility and performance.
🧠 LLM Backbone (Mixture of Experts)
- 12 Layers, 1024d, 16 Heads - Optimized for efficient inference and training.
- Aux-Loss-Free MoE - 8 experts with top-2 routing and configurable shared-expert isolation (routing sketched after this list).
- Ring Attention - Memory-efficient processing for up to 128K context.
- Qwen2.5 Tokenizer - High-density 151K vocabulary for multilingual and code support.
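A top-2 MoE feed-forward layer of the kind listed above can be sketched in a few lines of PyTorch. This is a minimal illustration, not Xoron-Dev's implementation; the expert MLP shape, GELU activation, and shared-expert wiring are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Minimal top-2 mixture-of-experts FFN: each token is routed to 2 of
    n_experts experts, with an optional always-on shared expert.
    Illustrative sketch only, not the actual Xoron-Dev module."""

    def __init__(self, d_model=1024, n_experts=8, d_ff=4096, shared_expert=True):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)

        def make_expert():
            return nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )

        self.experts = nn.ModuleList(make_expert() for _ in range(n_experts))
        self.shared = make_expert() if shared_expert else None

    def forward(self, x):                        # x: (batch, seq, d_model)
        top_logit, top_idx = self.router(x).topk(2, dim=-1)
        gate = F.softmax(top_logit, dim=-1)      # normalize over the 2 winners
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(2):                   # the two routing slots
                mask = top_idx[..., k] == e      # tokens sent to expert e
                if mask.any():
                    out[mask] += gate[..., k][mask].unsqueeze(-1) * expert(x[mask])
        if self.shared is not None:              # shared expert sees every token
            out = out + self.shared(x)
        return out
```

Aux-loss-free balancing, as in the variant named above, typically nudges router logits toward under-used experts between steps instead of adding an auxiliary load-balancing loss term.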
👁️ Vision & Video
- SigLIP-2 Encoder - 384px native resolution with multi-scale support (128-512px).
- TiTok 1D Tokenization - Compressed visual representation (256 tokens) for faster processing.
- VidTok 3D VAE - Efficient spatiotemporal video encoding with 4x8x8 (time × height × width) compression; see the shape example after this list.
- 3D-RoPE & Temporal MoE - Sophisticated motion pattern recognition and spatial awareness.
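To make the 4x8x8 compression concrete, here is the shape arithmetic for an example clip (the clip size is illustrative, not a fixed model setting):

```python
# Illustrative shape arithmetic for 4x8x8 (time x height x width) compression;
# the clip size below is an example, not a fixed Xoron-Dev setting.
T, H, W = 16, 256, 256            # input clip: frames x height x width
ct, ch, cw = 4, 8, 8              # temporal and spatial compression factors
latent_shape = (T // ct, H // ch, W // cw)
print(latent_shape)               # (4, 32, 32) -> 4,096 latent positions
```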
🎤 Audio System
- Raw Waveform Processing - Direct 16kHz audio input/output, with no Mel-spectrogram front end (loading example after this list).
- Conformer + RMLA - Advanced speech-to-text with KV compression.
- BigVGAN Waveform Decoder - High-fidelity direct waveform generation with Snake activation.
- Zero-Shot Voice Cloning - Clone voices from short reference clips using speaker embeddings.
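Because the audio stack consumes raw 16 kHz waveforms directly, input preparation reduces to resampling. A hedged sketch using torchaudio (the model's actual preprocessing entry point may differ, and the file path is an example):

```python
import torchaudio

# Load an audio file and resample to the 16 kHz mono waveform the model expects.
wav, sr = torchaudio.load("reference_clip.wav")  # (channels, samples)
wav = wav.mean(dim=0, keepdim=True)              # downmix to mono
if sr != 16_000:
    wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16_000)(wav)
# wav is now a (1, num_samples) float tensor at 16 kHz.
```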
🌟 Features
Multimodal Capabilities
| Modality | Input | Output | Strategy |
|---|---|---|---|
| Text | 128K Context | Reasoning, Code, Agentic | MoE LLM |
| Image | 128-512px | Understanding & SFT | SigLIP + TiTok |
| Video | 8-24 Frames | Understanding | VidTok + 3D-RoPE |
| Audio | 16kHz Waveform | ASR & TTS | Conformer + BigVGAN |
Agentic & Tool Calling
- 250+ Special Tokens for structured agent behaviors.
- Native Tool Use: Execute shell commands, Python scripts, and Jupyter notebooks.
- Reasoning: Advanced Chain-of-Thought (`<|think|>`, `<|plan|>`) for complex tasks.
- Safety: Anti-hallucination tokens (`<|uncertain|>`, `<|cite|>`) and confidence scores.
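For illustration, a reasoning turn built from these control tokens might be serialized like this. Only `<|think|>`, `<|plan|>`, `<|uncertain|>`, and `<|cite|>` are documented above; every other marker in the sketch is a hypothetical placeholder:

```python
# Hypothetical prompt serialization: <|user|>, <|assistant|>, <|end|>, and
# <|/think|> are placeholders, NOT confirmed Xoron-Dev tokens.
prompt = (
    "<|user|>What is 17 * 24?<|end|>"
    "<|assistant|><|think|>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408<|/think|>"
    "The answer is 408.<|end|>"
)
```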
Optimization
- LoRA Variants: LoRA+, rsLoRA, and DoRA (r=32, α=64); configuration sketch after this list.
- Lookahead Optimizer: Enhanced stability and faster convergence.
- 8-bit Optimization: Save up to 75% optimizer memory with bitsandbytes.
- Continuous-Scale Training: Adaptive resolution sampling for optimal VRAM usage.
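The listed LoRA hyperparameters map naturally onto the Hugging Face `peft` library. A configuration sketch with the stated r=32, α=64 follows; whether Xoron-Dev drives `peft` directly is an assumption, and the target modules below are typical attention projections, not confirmed ones:

```python
from peft import LoraConfig

# Illustrative LoRA setup with the hyperparameters listed above (r=32, alpha=64).
# use_rslora enables rank-stabilized scaling (alpha / sqrt(r)); use_dora switches
# to weight-decomposed LoRA. target_modules are chosen for illustration only.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    use_rslora=True,
    use_dora=False,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
)
```

LoRA+ is a per-matrix learning-rate scheme rather than a config flag, so it would be handled in the optimizer setup instead.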
🚀 Installation
```bash
# Clone the repository
git clone https://github.com/nigfuapp-web/Xoron-Dev.git
cd Xoron-Dev

# Install dependencies
pip install -r requirements.txt
```
💻 Usage
Quick Start (Inference)
```python
from load import load_xoron_model

# Load model and tokenizer
model, tokenizer, device, config = load_xoron_model("Backup-bdg/Xoron-Dev-MultiMoe")

# Generate response
output = model.generate_text("Explain quantum entanglement.", tokenizer)
print(output)
```
CLI Training
The build.py script provides a powerful interface for training and building models.
```bash
# Build a new model from scratch
python build.py --build

# Targeted fine-tuning
python build.py --hf --text --math   # Fine-tune on math
python build.py --hf --text --agent  # Fine-tune on agentic tasks
python build.py --hf --video         # Fine-tune on video datasets
python build.py --hf --voice        # Fine-tune on audio/voice
```
Granular Text Training Flags
| Flag | Description |
|---|---|
| `--math` | Focus on mathematical reasoning and steps. |
| `--agent` | Tool use, code execution, and system operations. |
| `--software` | High-quality software engineering and coding. |
| `--cot` | Chain-of-Thought and logical reasoning. |
| `--medical` | Medical knowledge and clinical reasoning. |
| `--hallucination` | Anti-hallucination and truthfulness. |
🏋️ Training
Weighted Loss Strategy
The trainer applies specialized weights to ensure high performance on critical tokens:
- Reasoning (CoT): 1.5x
- Tool Calling: 1.3x
- Anti-Hallucination: 1.2x
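A minimal sketch of what such token-weighted loss can look like; how the per-token weight mask is built is an assumption, and the actual logic lives inside the trainer:

```python
import torch.nn.functional as F

def weighted_lm_loss(logits, labels, token_weights):
    """Cross-entropy where each target token carries its own weight, e.g.
    1.5 on CoT spans, 1.3 on tool calls, 1.2 on anti-hallucination tokens,
    1.0 elsewhere. logits: (B, S, V); labels, token_weights: (B, S)."""
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        reduction="none",
    )
    w = token_weights.reshape(-1)
    return (per_token * w).sum() / w.sum()
```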
Continuous-Scale Strategy
Xoron-Dev dynamically samples resolutions during training:
- Image: 128px to 384px (step=32)
- Video: 8 to 24 frames, 128px to 320px
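A hedged sketch of per-step resolution sampling over these ranges; the 50/50 modality split and the 4-frame step are illustrative assumptions, not documented settings:

```python
import random

def sample_train_shape():
    """Randomly pick an image resolution (128-384 px, step 32) or a video
    shape (8-24 frames at 128-320 px), mirroring the ranges above."""
    if random.random() < 0.5:                        # image batch (assumed split)
        return {"kind": "image", "resolution": random.randrange(128, 385, 32)}
    return {
        "kind": "video",
        "frames": random.choice(range(8, 25, 4)),    # 8, 12, 16, 20, 24 (assumed step)
        "resolution": random.randrange(128, 321, 32) # 128 ... 320
    }
```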
📦 Export & Quantization
Export your models for efficient deployment:
```bash
# Export to GGUF (for llama.cpp)
python build.py --hf --gguf --gguf-quant q4_k_m

# Export to ONNX
python build.py --hf --onnx --quant-bits 4
```
🤝 Contributing
Contributions are welcome! If you have ideas for new modalities or optimizations, please open an issue or PR.
📄 License
This project is licensed under the MIT License.