ModelForge: A no-code toolkit for fine-tuning HuggingFace models
Project description
ModelForge 🔧⚡
Fine-tune LLMs on your laptop's GPU: no code, no PhD, no hassle.
🚀 Features
- GPU-Powered Finetuning: Optimized for NVIDIA GPUs (even 4GB VRAM).
- One-Click Workflow: Upload data → Pick task → Train → Test.
- Hardware-Aware: Auto-detects your GPU/CPU and recommends models.
- React UI: No CLI or notebooks—just a friendly interface.
📖 Supported Tasks
- Text-Generation: Generates answers in the form of text based on prior and fine-tuned knowledge. Ideal for use cases like customer support chatbots, story generators, social media script writers, code generators, and general-purpose chatbots.
- Summarization: Generates summaries for long articles and texts. Ideal for use cases like news article summarization, law document summarization, and medical article summarization.
- Extractive Question Answering: Finds the answers relevant to a query in a given context. Best for use cases like Retrieval-Augmented Generation (RAG) and enterprise document search (for example, searching for information in internal documentation).
Installation
Prerequisites
- Python 3.8+: Ensure you have Python installed.
- NVIDIA GPU: Recommended VRAM >= 6GB.
- CUDA: Ensure CUDA is installed and configured for your GPU.
- Node.js & npm: Required for running the frontend.
- HuggingFace Account: Create an account on Hugging Face and generate a fine-grained access token.
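The prerequisites above can be sanity-checked from Python before installing. A minimal, stdlib-only sketch (the `HUGGINGFACE_TOKEN` variable name matches the environment variable used in the setup steps; the function name is illustrative):

```python
import os
import shutil

def check_setup():
    """Report whether the pieces ModelForge's installer needs are present."""
    return {
        # The token name matches the export shown in the setup steps.
        "hf_token_set": "HUGGINGFACE_TOKEN" in os.environ,
        # nvidia-smi on PATH is a quick proxy for a usable NVIDIA driver.
        "nvidia_driver": shutil.which("nvidia-smi") is not None,
        # npm is needed to install and build the React frontend.
        "npm": shutil.which("npm") is not None,
    }

if __name__ == "__main__":
    for name, ok in check_setup().items():
        print(f"{name}: {'OK' if ok else 'missing'}")
```

CUDA and VRAM capacity are best verified with `nvidia-smi` directly, since they depend on the driver rather than on anything visible to a plain Python process.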
Steps
1. Install the Package:
   pip install git+https://github.com
2. Set the HuggingFace API key as an environment variable:
   Linux:
   export HUGGINGFACE_TOKEN=your_huggingface_token
   Windows PowerShell:
   $env:HUGGINGFACE_TOKEN="your_huggingface_token"
   Windows CMD:
   set HUGGINGFACE_TOKEN=your_huggingface_token
   Or use a .env file:
   echo "HUGGINGFACE_TOKEN=your_huggingface_token" > .env
3. Install Backend Dependencies:
   cd FastAPI_server
   pip install -r requirements.txt
4. Install Frontend Dependencies and Build the Frontend:
   cd ../Frontend
   npm install
   npm run build
5. Run the Backend:
   cd ../FastAPI_server
   uvicorn app:app --host 127.0.0.1 --port 8000 --reload
6. Done! Navigate to http://localhost:8000 in your browser and get started.
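Once the backend is running, you can also confirm it is reachable from a script. A stdlib-only sketch (the URL matches the address above; the helper name is illustrative):

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_up(url="http://localhost:8000", timeout=2.0):
    """Return True if the ModelForge backend answers at `url`."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            # Any non-error HTTP status means the server is serving the UI.
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print(is_up())
```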
Running the Application Again in the Future
- Start the Application:
cd FastAPI_server
uvicorn app:app --host 127.0.0.1 --port 8000
- Navigate to the App:
Open your browser and go to http://localhost:8000.
Stopping the Application
To stop the application and free up resources, press Ctrl+C in the terminal running the app.
📂 Dataset Format
Training data is a JSON array of input/output pairs, for example:
[
  {"input": "Enter a really long article here...", "output": "Short summary."},
  {"input": "Enter the poem topic here...", "output": "Roses are red..."}
]
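A quick way to validate a dataset file before uploading it. This assumes the JSON-array-of-pairs shape shown above; the function name is illustrative, not part of ModelForge's API:

```python
import json

def validate_dataset(text):
    """Parse and check a ModelForge-style dataset: a JSON array of
    objects, each with exactly the string fields "input" and "output"."""
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("dataset must be a JSON array")
    for i, rec in enumerate(records):
        if not isinstance(rec, dict) or set(rec) != {"input", "output"}:
            raise ValueError(f"record {i}: expected only 'input' and 'output' keys")
        if not all(isinstance(rec[k], str) for k in ("input", "output")):
            raise ValueError(f"record {i}: fields must be strings")
    return records

sample = '[{"input": "Enter a really long article here...", "output": "Short summary."}]'
print(len(validate_dataset(sample)))  # 1
```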
🛠 Tech Stack
- transformers + peft (LoRA finetuning)
- bitsandbytes (4-bit quantization)
- React (UI)
- FastAPI (Backend)
- Python (Backend)
- Node.js (Frontend)
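As a sketch of how the training pieces of this stack typically fit together (this is not ModelForge's internal code; the hyperparameter values are illustrative defaults, and the configs would be passed to `AutoModelForCausalLM.from_pretrained(...)` and `peft.get_peft_model(...)` respectively):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# bitsandbytes 4-bit quantization settings: base weights stored in NF4,
# compute done in fp16, which is what makes low-VRAM finetuning feasible.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# peft LoRA adapter settings: only small low-rank matrices are trained,
# so the quantized base model stays frozen. r and alpha are illustrative.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```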
Project details
Download files
- Source Distribution: modelforge_finetuning-0.1.0.tar.gz
- Built Distribution: modelforge_finetuning-0.1.0-py3-none-any.whl
File details
Details for the file modelforge_finetuning-0.1.0.tar.gz.
File metadata
- Download URL: modelforge_finetuning-0.1.0.tar.gz
- Upload date:
- Size: 516.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8ee0dac9e5552073c4758d9a0cfa9f97da5393d83036be8a7b5f05fac14e1ed0 |
| MD5 | f4ad219b01232770e0ab808418416f1b |
| BLAKE2b-256 | 87cac5401f150a67db6a59bc6f44914a801ca6d44bd91ac64cdb8decac6172e4 |
File details
Details for the file modelforge_finetuning-0.1.0-py3-none-any.whl.
File metadata
- Download URL: modelforge_finetuning-0.1.0-py3-none-any.whl
- Upload date:
- Size: 559.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8afc58c190b973f9a8baadb207f03bca851c8a2e9210e219f6ec27396c57964d |
| MD5 | 3136f50e4ed191434175da97ca26b89c |
| BLAKE2b-256 | 372d6c765c9f68de5e0b2611c491ff986240dfc6b4ee3b01b33692151835e047 |