Fully automatic censorship removal for language models
Shade: Fully Automatic Censorship Removal
🌟 What is Shade?
Shade is a state-of-the-art platform designed to liberate Large Language Models (LLMs) from artificial censorship and safety filters. Using advanced Abliteration (directional ablation) and an automated TPE-based parameter optimizer powered by Optuna, Shade removes "safety alignment" without damaging the model's core intelligence.
🚀 New in Version 2.0.0
The v2.0.0 release transforms Shade from a CLI utility into a complete Model Liberation Platform.
- Ollama One-Click Integration: Automatically register your uncensored models with Ollama.
- Model Quality Benchmarking: Built-in "Sanity Check" system to verify model intelligence after processing.
- Space Optimizer (Prune): Deep clean temporary files, checkpoints, and heavy Hugging Face cache.
- Proactive Core (Doctor ++): Self-healing diagnostic system that can auto-install missing dependencies.
- Official API & Web Backend: Ready-to-use FastAPI server for custom app integrations.
🔥 Core Features
1. Fully Automated Abliteration
- No Training Required: Uses mathematical projection to remove censorship without expensive GPU fine-tuning.
- Smart Layer Analysis: Automatically identifies which layers are responsible for refusals.
- Precision Optimization: Balances removal of safety filters with the preservation of model intelligence (KL Divergence tracking).
2. High-End Web Interface (Shade Web UI)
- Modern Liquid Glass Design: A premium, responsive web chat interface.
- Model Comparison Mode: View original vs. uncensored responses side-by-side.
3. Hardware & System Care
- Multi-GPU Support: Automatically detects and leverages CUDA, XPU, MLU, and Apple Metal (MPS).
- GPU Diagnostics: Real-time VRAM monitoring.
- Memory Optimization: Optimized memory management to prevent OOM errors.
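The KL-divergence tracking mentioned above can be understood as comparing the next-token distributions of the original and modified model on benign prompts: a low divergence means the edit barely changed ordinary behavior. Below is a minimal, self-contained sketch with toy logits (NumPy only; the function names and values are illustrative, not Shade's internal code):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two next-token distributions, in nats."""
    p, q = softmax(p_logits), softmax(q_logits)
    return float(np.sum(p * (np.log(p) - np.log(q))))

original = np.array([2.0, 1.0, 0.5, -1.0])   # toy logits from the base model
ablated  = np.array([2.1, 0.9, 0.5, -1.2])   # toy logits after ablation

# A small value indicates benign behavior is preserved.
print(kl_divergence(original, ablated))
```

An optimizer can treat this value as a penalty term, trading off refusal removal against divergence from the base model.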
⚒️ Getting Started
1. Installation (PyPI)
Install Shade directly from PyPI:

```bash
pip install shadeai
```

For research features (plotting, clustering, etc.):

```bash
pip install "shadeai[research]"
```
2. Installation (From Source)
```bash
git clone https://github.com/AssemSabry/Shade.git
cd Shade
pip install -e .
```
3. Configuration & Login
To use gated models from Hugging Face, authenticate first:

```bash
shade hf login
```
4. Liberate a Model
Run the automatic optimization process on any model ID:

```bash
shade <model_id>
```

Example:

```bash
shade Qwen/Qwen2.5-1.5B-Instruct
```
5. Start Web Chat
Launch the web interface to talk to your models:

```bash
shade serve
```
📋 Command Reference
| Command | Description |
|---|---|
| `shade <model_id>` | Start the automatic optimization & abliteration process. |
| `shade serve` | Launch the Shade Web UI. |
| `shade library` | Manage and launch your saved decensored models. |
| `shade ollama` | Export and register a model with Ollama automatically. |
| `shade benchmark` | Run quality tests to verify the model's logic is intact. |
| `shade doctor --fix` | Automatically diagnose and fix system/dependency issues. |
| `shade prune --all` | Free up disk space by cleaning caches and checkpoints. |
| `shade hf login` | Securely authenticate with Hugging Face Hub. |
| `shade commands` | Show the complete CLI manual. |
🐍 Python API Usage
Shade is not just a CLI; it's a powerful library. You can integrate Shade's liberation engine directly into your Python apps.
1. Basic Generation
```python
from shade.model import Model
from shade.config import Settings

# Initialize with default settings
settings = Settings()
model = Model("meta-llama/Llama-3.1-8B-Instruct", settings)

# Generate an uncensored response
response = model.generate("Your prompt here")
print(response)
```
2. Automatic Optimization (Abliteration)
```python
from shade.main import main as run_optimization

# Starts the fully automatic search-and-removal process
run_optimization()
```
3. Integrated Web UI & API
```python
from shade.server import start_server
from shade.config import Settings

# Launch the FastAPI backend and Liquid Glass UI
settings = Settings()
start_server(model_id="Qwen/Qwen2.5-1.5B-Instruct", settings=settings)
```
4. System Diagnostics
```python
from shade.cli import run_doctor

# Check CUDA and RAM, and auto-fix missing dependencies
run_doctor(fix=True)
```
🧠 How It Works
Shade identifies the "refusal direction" within the model's high-dimensional activation space and applies an ablation weight kernel. This kernel is optimized separately for each component (attention out-projection, MLP down-projection) so that censorship is removed with minimal collateral damage to the model's capabilities.
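Directional ablation can be illustrated with a small NumPy sketch. Here a hypothetical "refusal direction" `r` is projected out of a weight matrix `W`, so the layer can no longer write anything along that direction. The variable names and the scalar `alpha` are illustrative assumptions, not Shade's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Hypothetical refusal direction (unit vector) and an output-projection matrix.
r = rng.normal(size=d_model)
r /= np.linalg.norm(r)
W = rng.normal(size=(d_model, d_model))  # maps hidden states into the residual stream

# Ablate: subtract the component of W's output along r.
# W' = (I - alpha * r r^T) W, with alpha = 1 for full removal.
alpha = 1.0
W_ablated = W - alpha * np.outer(r, r) @ W

# The ablated matrix can no longer write along r (all entries ~0):
print(np.abs(r @ W_ablated).max())
```

In practice the direction is estimated from activation differences between refused and answered prompts, and `alpha` is tuned per layer and per component, which is where a TPE optimizer such as Optuna fits in.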
> [!IMPORTANT]
> Shade is a fully original, independent project built from the ground up. It is NOT a clone, fork, or derivative of any existing repository. All automation logic, UI design, and optimization workflows were developed specifically for this project.
👤 Meet the Developer
Assem Sabry
Lead Developer & AI Researcher
⚠️ Disclaimer
Assem Sabry, the developer of Shade, is not responsible for any misuse of this tool. Shade is provided for educational and research purposes only. The primary goal of this project is to allow users to unlock the full potential of open-source language models. Users are expected to interact with de-censored models responsibly.
📜 Citation
If you use Shade in your research, please cite it:
```bibtex
@misc{shade,
  author       = {Sabry, Assem},
  title        = {Shade: Fully automatic censorship removal for language models},
  year         = {2026},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/AssemSabry/Shade}}
}
```
⚖️ License
Copyright © 2026 Assem Sabry. Licensed under the GNU Affero General Public License v3.0. See the LICENSE file for details.
Download files
File details
Details for the file shadeai-2.1.0.tar.gz.
File metadata
- Download URL: shadeai-2.1.0.tar.gz
- Upload date:
- Size: 204.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `3c5cb7ba0d24e3e3836743e8e3a7b37ab4c8a40d8b970f344c2772bb65905cb6` |
| MD5 | `179192db2016b021431ed197b00bcbc3` |
| BLAKE2b-256 | `664915c2b8597ead1ecc45eb7c91d2e11324e4a44cba46bf1480ec78454ded6b` |
File details
Details for the file shadeai-2.1.0-py3-none-any.whl.
File metadata
- Download URL: shadeai-2.1.0-py3-none-any.whl
- Upload date:
- Size: 210.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2de7b1d2b0feee4c4e5895825d5652d0bf051651cfe16041a2b9e1d27d9802a0` |
| MD5 | `2489e1405669b894fb4b65db49577c52` |
| BLAKE2b-256 | `c83bfd16fa619803dbfbf6658b91eb13c630d6eb25b8136740cadd772082bd17` |