SAARA: Autonomous Document-to-LLM Data Engine
Built for the Google Gemini Hackathon, showcasing the power of Gemini 2.0 Flash and Gemma 2 models in autonomous AI training pipelines.
SAARA is an end-to-end autonomous data pipeline designed to transform raw, unstructured documents (PDFs, research papers) into high-quality, instruction-tuned datasets for fine-tuning Large Language Models (LLMs).
Why this exists: Creating high-quality datasets is the bottleneck in training domain-specific AI. SAARA automates the boring parts (OCR, chunking, labeling, and cleaning) so you can go from PDF to fine-tuned model in hours, not weeks.
Gemini & Gemma Integration
Gemini 2.0 Flash - AI Teacher & Evaluator
- Default Teacher Model: Uses Gemini 2.0 Flash for autonomous learning
- Quality Evaluation: Scores and improves model responses
- Data Generation: Creates high-quality training examples
- Self-Improvement: Iterative correction loop powered by Gemini
Gemma 2 - Fine-Tuning Targets
- Gemma 2 2B: Lightweight, CPU-trainable, perfect for domain-specific models
- Gemma 2 9B: Production-ready with excellent performance
- Pre-configured: Optimized LoRA settings for Gemma architecture
- First-Class Support: Gemma models are highlighted and recommended
Key Features
1. SOTA Vision-LLM OCR
- No more Garbled Text: Uses Moondream and Qwen2.5-VL (Vision-Language Models) to "read" PDFs visually.
- Handles complex double-column layouts, tables, and scientific diagrams that traditional OCR (Tesseract) fails on.
- Hybrid Fallback: Automatically switches between PyMuPDF (fast) and Vision OCR (accurate) based on page extractability.
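For example, a single paper can be pushed through the hybrid extractor from the shell (the file path below is a placeholder):
# Extract one PDF; each page is routed to PyMuPDF or Vision OCR automatically
saara process papers/example_paper.pdf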
2. Autonomous Data Labeling (Gemini-Powered)
- Uses Gemini 2.0 Flash as the default teacher model for:
- Instruction Tuning: "How do I treat X using Ayurveda?"
- Q&A Pairs: Fact-based extraction.
- Summarization: TL;DRs of complex sections.
- Classification: Topic tagging.
3. Data Distillation & Hygiene
- Self-Cleaning: The distill module removes low-quality generations, duplicates, and confabulations.
- ShareGPT Formatting: Automatically converts raw data into the industry-standard conversation format.
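A minimal sketch of one distillation run, assuming the input is a JSONL file of raw generations (the path is illustrative):
# Distill a raw dataset: strip duplicates and low-quality rows, emit clean training data
saara distill datasets/raw_dataset.jsonl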
4. Pre-training from Scratch
- Build Your Own LLM: Create custom models from 15M to 3B parameters.
- Custom Tokenizers: Train domain-specific BPE tokenizers on your data.
- Full Pipeline: Pre-train → Fine-tune → Evaluate → Deploy.
- Production-ready LLaMA-style architectures.
5. Native Fine-Tuning Support (Gemma Optimized)
- Gemma 2 First-Class Support: Pre-configured LoRA settings for optimal Gemma performance.
- One-Command Training: Built-in training loop using SFTTrainer (QLoRA).
- Multi-Format Support: Automatically handles ShareGPT, Alpaca, and Raw Text formats.
- Optimized for consumer GPUs (supports 4-bit quantization).
6. Model Evaluation & Self-Improvement (Gemini Judge)
- Gemini 2.0 as Judge: Test your fine-tuned model with automatic quality scoring.
- Self-Improvement Loop: Low-scoring responses are corrected by Gemini and used for next training round.
- Iterative Enhancement: Train → Evaluate → Improve → Repeat.
7. Model Deployment
- Local Chat: Interactive terminal testing with your model.
- Ollama Export: Convert to GGUF format for Ollama usage.
- HuggingFace Hub: Push your model to share with the community.
- Cloud Deployment: Docker + Google Cloud Run ready.
8. Neural Accelerator (NEW)
- Automatic GPU Optimization: Detects CUDA/CPU/MPS and configures optimal settings.
- Mixed Precision Training: FP16/BF16 for faster training with less memory.
- Gradient Accumulation: Train with larger effective batch sizes.
- Memory Efficient Attention: Flash Attention / Memory-Efficient SDPA.
- Smart Recommendations: Suggests optimal batch size, sequence length based on your GPU.
9. Neural Network Visualizer (NEW)
- Architecture Visualization: Beautiful console display of model layers.
- Live Training Dashboard: Real-time metrics, loss curves, and throughput.
- HTML Reports: Generate stunning training reports with Chart.js.
- Model Analysis: Inspect any PyTorch model's structure and parameters.
10. Cloud Runtime (NEW)
- Run on Google Colab: Full support without Ollama dependency.
- API-Based Labeling: Use Gemini, GPT-4, DeepSeek, Groq, or HuggingFace for text processing.
- Auto-Detection: Automatically detects Colab, Kaggle, SageMaker, etc.
- Optimized Settings: Recommends training parameters based on cloud GPU.
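On Colab or Kaggle, the two cloud commands from the CLI reference below are usually enough to get started:
# Detect the runtime (Colab, Kaggle, SageMaker, ...) and print environment info
saara cloud info
# Store API keys for Gemini, GPT-4, DeepSeek, Groq, or HuggingFace labeling
saara cloud setup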
11. AI-Enhanced Tokenizer (NEW)
- Domain-Aware Vocabulary: AI extracts medical, legal, code, or scientific terms.
- Protected Tokens: Domain terms are never split by BPE.
- Smart Segmentation: AI-guided subword merging for semantic coherence.
- Multi-Domain Support: Medical, legal, code, scientific, and general domains.
- Integrated Selection: Choose tokenizer during training/pretraining wizards.
- Multiple Providers: Auto-detect, Ollama, Gemini, OpenAI, or rule-based.
12. RAG Agent Builder (NEW)
- Build Knowledge Bases: Index PDFs, text files, and JSONL datasets.
- Semantic Search: ChromaDB-powered vector search with sentence-transformers.
- Interactive Chat: Query your documents with natural language.
- Multi-Step Wizard: Create RAG agents with back navigation and step indicators.
- REST API Server: Deploy as an API endpoint for integration.
- Citation Tracking: Responses include source references.
- Multiple Embedding Models: all-MiniLM-L6-v2, all-mpnet-base-v2, or Ollama embeddings.
Architecture
graph LR
A[Raw PDF] --> B(Vision OCR / Extractor)
B --> C{Chunker Strategy}
C --> D[Synthetic Labeling Agent]
D --> E[Raw Dataset JSONL]
E --> F(Data Distiller)
F --> G[Clean ShareGPT Dataset]
G --> H{Training Path}
H -->|Pre-train| I[Build New Model]
H -->|Fine-tune| J[Adapt Existing Model]
I --> K[Model Evaluation]
J --> K
K --> L{Score < 7?}
L -->|Yes| M[Generate Corrections]
M --> J
L -->|No| N((Deploy Model))
Installation
1. Clone the repository:
   git clone https://github.com/nikhil49023/Data-engine.git
   cd Data-engine
2. Install the CLI:
   pip install -e .
3. Set up Ollama:
   - Install Ollama
   - The setup wizard will help you install models automatically
Quick Start
First-time setup (recommended):
saara setup
The setup wizard will:
- ✅ Detect your hardware (GPU, VRAM, RAM)
- ✅ Recommend optimal models for your system
- ✅ Install selected vision and analyzer models
- ✅ Save configuration
Usage
Interactive Wizard (Recommended)
saara run
This launches a beautiful CLI wizard with 5 workflows:
| Option | Mode | Description |
|---|---|---|
| 1 | Dataset Creation | Extract data from PDFs → Generate training datasets |
| 2 | Model Training | Fine-tune LLMs on your prepared data |
| 3 | Model Evaluation | Test & improve models with the Gemini 2.0 judge |
| 4 | Model Deployment | Deploy locally (Ollama) or to cloud |
| 5 | Pre-training | Build & train a model from scratch |
Pre-training from Scratch (NEW)
Build your own language model from the ground up:
saara pretrain
Available Architectures:
| Name | Parameters | VRAM | Use Case |
|---|---|---|---|
| Nano | ~15M | 2GB+ | Testing, learning (CPU trainable) |
| Micro | ~50M | 4GB+ | Experimentation |
| Mini | ~125M | 6GB+ | Domain-specific pre-training |
| Small | ~350M | 8GB+ | Specialized tasks |
| Base | ~1B | 16GB+ | Production models |
| Large | ~3B | 24GB+ | High-capacity models |
Pre-training Sub-menu:
- Create Pre-training Dataset
- Build & Train New Model
- Train Custom Tokenizer
- Test Pre-trained Model
- List Pre-trained Models
Pre-training Dataset Creation:
- Extracts raw text from PDFs, markdown, and text files
- Cleans OCR artifacts and normalizes unicode
- Chunks text into optimal sizes for language modeling
- LLM-Enhanced Processing (Optional):
- Uses local LLM (Granite 4, Llama 3, Qwen) to clean and improve text
- Fixes OCR errors and expands abbreviations
- LLM-based quality scoring for more accurate filtering
- Quality filtering (removes low-quality/incoherent text)
- Deduplication (prevents model memorization)
- Outputs in JSONL format ready for training
- Optional train/validation split
Workflow:
Create Dataset → Train Tokenizer (optional) → Pre-train Model → Test → Fine-tune → Deploy
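As a rough command-level sketch of that workflow (each step is interactive; model and dataset choices happen inside the wizards):
saara pretrain                    # create dataset, train tokenizer, build & pre-train the model
saara train                       # fine-tune the pre-trained model on your dataset
saara evaluate <base> <adapter>   # score the result with the Gemini judge
saara deploy                      # export to Ollama, HuggingFace, or the cloud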
Dataset Creation Flow
- Select input PDF folder and output directory
- Choose Vision OCR model (Moondream/Qwen) - auto-detects available models
- Choose Analyzer model (Granite 4/Llama 3/Qwen 2.5/Mistral)
- Configure advanced options (chunk size, Q&A density)
- Pipeline automatically generates:
  - *_instruction.jsonl - Instruction tuning data
  - *_qa.jsonl - Q&A pairs
  - *_sharegpt.jsonl - Chat format (best for training)
  - *_summarization.jsonl - Summarization tasks
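The same pipeline can also be driven non-interactively on a whole folder (the directory name is a placeholder):
# Process every PDF in ./pdfs; the four JSONL variants above are written per document
saara batch ./pdfs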
Model Training Flow
The training wizard now supports:
- Gemma 2 Models: Recommended for best quality-to-cost ratio
- Custom Pre-trained: Your own pre-trained models
- Fine-tuned Adapters: Continue training existing adapters
Supported Base Models (Gemma First):
| Model | Size | Best For |
|---|---|---|
| ⭐ google/gemma-2-2b | 2B | Recommended - Efficient, CPU-trainable |
| ⭐ google/gemma-2-9b | 9B | Production-ready, high quality |
| google/gemma-2b | 2B | General Purpose |
| google/gemma-7b | 7B | Higher capacity |
| sarvamai/sarvam-1 | 2B | Indian Languages |
| TinyLlama/TinyLlama-1.1B | 1.1B | Fast Testing |
Output: models/{model-name}-finetuned/final_adapter/
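Training itself is wizard-driven, so a run is just the command below (model, dataset, and LoRA settings are chosen in the prompts):
# Launch the interactive fine-tuning wizard; the adapter is saved to the path above
saara train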
Model Evaluation Flow (Gemini-Powered)
Uses Gemini 2.0 Flash to evaluate your fine-tuned model:
- Runs test prompts through your model
- Scores each response (1-10) using Gemini
- Generates improved responses for low scores
- Creates correction data for next training round
Self-Improvement Cycle:
Train Model → Evaluate (Gemini 2.0) → Generate Corrections → Retrain → Repeat
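One evaluation round might look like this (the base model and adapter path are illustrative, following the output layout above):
# Score responses with Gemini 2.0 Flash and emit correction data for the next round
saara evaluate google/gemma-2-2b models/gemma-2-2b-finetuned/final_adapter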
Model Deployment Flow
| Option | Platform | Description |
|---|---|---|
| 1 | Local Chat | Interactive terminal chat |
| 2 | Ollama Export | Convert to GGUF format |
| 3 | HuggingFace | Push to HF Hub |
| 4 | Cloud Deploy | Docker + Google Cloud Run |
| 5 | Merge Model | Merge adapter with base |
CLI Commands
Core Commands
| Command | Description |
|---|---|
| saara run | Start interactive wizard |
| saara pretrain | Build & train model from scratch |
| saara setup | First-time hardware detection & model setup |
| saara version | Show version information |
Data Processing
| Command | Description |
|---|---|
| saara process <file> | Process a single PDF file |
| saara batch <dir> | Process all PDFs in directory |
| saara distill <input> | Generate synthetic training data |
Model Operations
| Command | Description |
|---|---|
| saara train | Fine-tune a model (interactive) |
| saara deploy | Deploy a trained model |
| saara evaluate <base> <adapter> | Evaluate model quality |
Model Management
| Command | Description |
|---|---|
| saara models list | List all available models |
| saara models install <name> | Install an Ollama model |
| saara models remove <name> | Remove a model |
| saara models status | Show hardware & model status |
| saara models info <name> | Show detailed model info |
| saara models storage | Show disk usage breakdown |
| saara models clear checkpoints | Delete all training checkpoints |
| saara models clear models --yes | Delete ALL trained models |
| saara models clear all --yes | Factory reset (delete everything) |
| saara models retrain <name> | Delete & retrain from scratch |
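For example, a quick disk cleanup after a few training rounds, using only the commands above:
saara models storage             # see what is taking up space
saara models clear checkpoints   # drop intermediate checkpoints, keep final models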
Accelerator & Visualizer (NEW)
| Command | Description |
|---|---|
| saara accelerator | Show GPU status & recommended settings |
| saara visualize | Visualize neural network architecture |
| saara visualize --report | Generate HTML training report |
| saara benchmark | Benchmark training performance |
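A plausible pre-flight check before a long training run, chaining the commands above:
saara accelerator          # inspect the GPU and get batch-size/sequence-length suggestions
saara benchmark            # measure raw training throughput
saara visualize --report   # afterwards, write an HTML report of the run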
Cloud Runtime (NEW)
| Command | Description |
|---|---|
| saara cloud info | Show cloud environment info |
| saara cloud setup | Configure cloud API keys |
| saara cloud quickstart | Show Colab quickstart guide |
AI Tokenizer (NEW)
| Command | Description |
|---|---|
| saara tokenizer train | Train AI-enhanced tokenizer |
| saara tokenizer train --domain medical | Train with medical vocabulary |
| saara tokenizer info -o path/to/tokenizer | Show tokenizer info |
| saara tokenizer test -o path/to/tokenizer | Test tokenization interactively |
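Putting these together, one possible session (the tokenizers/medical output path is a placeholder, assuming the tokenizer was saved under the project's tokenizers/ directory):
saara tokenizer train --domain medical       # build a medical-vocabulary tokenizer
saara tokenizer info -o tokenizers/medical   # inspect vocabulary and settings
saara tokenizer test -o tokenizers/medical   # try sample text interactively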
RAG Agent (NEW)
| Command | Description |
|---|---|
| saara rag create <name> | Create a new knowledge base |
| saara rag add <kb> <path> | Add documents to a knowledge base |
| saara rag chat <kb> | Interactive chat with knowledge base |
| saara rag search <kb> "query" | Search without generation |
| saara rag list | List all knowledge bases |
| saara rag info <kb> | Show knowledge base details |
| saara rag serve <kb> | Start RAG API server |
| saara rag delete <kb> | Delete a knowledge base |
| saara rag clear <kb> | Clear documents (keep KB) |
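An end-to-end RAG session using only the commands above (the knowledge-base name, document path, and query are placeholders):
saara rag create ayurveda-kb                   # create the knowledge base
saara rag add ayurveda-kb ./pdfs               # index a folder of documents
saara rag search ayurveda-kb "dosha balance"   # retrieval only, no generation
saara rag chat ayurveda-kb                     # chat with citation tracking
saara rag serve ayurveda-kb                    # expose it as a REST API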
Server
| Command | Description |
|---|---|
| saara serve | Start REST API server |
Project Structure
Data-engine/
├── setup.py               # Package setup
├── config.yaml            # Configuration settings
├── requirements.txt       # Dependencies
├── SAARA_Colab.ipynb      # Google Colab notebook (NEW)
├── saara/                 # Source code
│   ├── cli.py             # CLI entry point
│   ├── pipeline.py        # Core data pipeline
│   ├── pretrain.py        # Pre-training module
│   ├── train.py           # LLM fine-tuning module
│   ├── evaluator.py       # Model evaluation
│   ├── deployer.py        # Deployment utilities
│   ├── distiller.py       # Data cleaning
│   ├── model_manager.py   # Ollama model management
│   ├── accelerator.py     # Neural accelerator
│   ├── visualizer.py      # Training visualizer
│   ├── cloud_runtime.py   # Cloud runtime
│   ├── rag_engine.py      # RAG Agent engine (NEW)
│   └── splash.py          # SAARA splash screen
├── models/                # Saved models (pre-trained & fine-tuned)
├── datasets/              # Generated datasets
├── tokenizers/            # Custom tokenizers
├── knowledge_bases/       # RAG knowledge bases (NEW)
├── evaluations/           # Evaluation results
├── reports/               # Training reports
└── exports/               # Deployment artifacts
Roadmap
- Vision-LLM OCR (Moondream, Qwen)
- Autonomous data labeling
- Multi-format dataset generation
- Native fine-tuning with QLoRA
- Model evaluation with Granite 4
- Self-improvement training loop
- Local & cloud deployment
- Pre-training from scratch
- Custom tokenizer training
- Iterative adapter fine-tuning
- Neural Accelerator (GPU optimization)
- Training Visualizer (live dashboard, HTML reports)
- Cloud Runtime (Colab/Kaggle support)
- RAG Agent Builder (knowledge bases, semantic search, chat)
- Multi-modal dataset generation (images + text)
- Web UI dashboard
License
Proprietary License - Copyright © 2025-2026 Kilani Sai Nikhil. All Rights Reserved.
This software is provided under a proprietary license with the following terms:
✅ Permitted:
- Use the software for personal, educational, or commercial purposes
- Reference in academic/educational contexts with attribution
❌ Not Permitted:
- Modify, alter, or create derivative works
- Reproduce, copy, or duplicate the software
- Distribute, sublicense, or sell the software
- Reverse engineer or decompile the software
See the LICENSE file for full details.
Author
Kilani Sai Nikhil - GitHub
Built with ❤️ for the AI community