A comprehensive GUI toolkit for Large Language Models (LLMs) with GGUF support, document processing, email automation, and multi-backend inference
LLM Toolkit
A comprehensive toolkit for working with Large Language Models (LLMs) that provides an intuitive GUI interface for model loading, chat interactions, document summarization, and email automation. Built with modern Python technologies and designed for both developers and end-users.
Features
🤖 Multiple Model Backends
- GGUF Support: Optimized inference with ctransformers and llama-cpp-python
- Hugging Face Integration: Direct model loading from HF Hub (optional)
- Hardware Detection: Automatic GPU/CPU optimization
- Memory Management: Intelligent resource allocation
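Automatic hardware detection of this kind can be approximated in your own scripts. Below is a minimal, illustrative sketch (not the toolkit's internal detector) that checks whether an NVIDIA driver is visible before deciding how many layers to offload to the GPU:

```python
import shutil

def pick_gpu_layers(requested: int = 32) -> int:
    """Return the number of model layers to offload to the GPU.

    Falls back to 0 (CPU-only) when no NVIDIA driver is on PATH.
    This is an illustrative heuristic, not llmtoolkit's detector.
    """
    if shutil.which("nvidia-smi") is None:
        return 0  # no CUDA driver visible -> run on CPU
    return requested

print(pick_gpu_layers())
```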
💬 Advanced Chat Interface
- Interactive Conversations: Real-time chat with loaded models
- History Management: Persistent conversation storage
- Parameter Control: Fine-tune generation settings
- Context Awareness: Maintain conversation context
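Maintaining context means keeping as much recent history as fits the model's window. A crude character-based sketch of that idea (llmtoolkit's own context handling may differ, and real implementations count tokens rather than characters):

```python
def trim_history(messages, max_chars=2000):
    """Keep the most recent messages whose total length fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        total += len(msg["content"])
        if total > max_chars:
            break  # adding this message would exceed the budget
        kept.append(msg)
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "a" * 1500},
    {"role": "assistant", "content": "b" * 400},
    {"role": "user", "content": "c" * 300},
]
print(len(trim_history(history)))  # prints 2: the oldest message is dropped
```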
📄 Document Processing
- Multi-format Support: PDF, Word, and text documents
- Intelligent Summarization: AI-powered content extraction
- Chunked Processing: Handle large documents efficiently
- Batch Operations: Process multiple files simultaneously
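Chunked processing typically splits a long document into overlapping windows so that summaries don't lose sentences cut at a boundary. An illustrative splitter (not the toolkit's implementation):

```python
def chunk_text(text: str, size: int = 800, overlap: int = 100):
    """Split text into overlapping windows for piecewise summarization."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks

parts = chunk_text("x" * 2000, size=800, overlap=100)
print(len(parts), len(parts[0]))  # prints: 3 800
```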
📧 Email Automation
- Gmail Integration: Secure OAuth2 authentication
- AI-Powered Drafting: Generate professional emails
- Smart Replies: Context-aware response generation
- Bulk Operations: Marketing and communication automation
🎨 Modern User Interface
- Cross-Platform: Windows, macOS, and Linux support
- Theme Support: Dark and light mode options
- Responsive Design: Adaptive layout for different screen sizes
- Accessibility: Keyboard shortcuts and screen reader support
⚡ Performance & Reliability
- Multi-threading: Non-blocking UI operations
- Resource Monitoring: Real-time memory and CPU tracking
- Error Recovery: Graceful handling of failures
- Logging System: Comprehensive debugging information
Quick Start
1. Install the package: `pip install llmtoolkit`
2. Launch the application: `llmtoolkit`
3. Load a model and start chatting!
Installation
Basic Installation
```bash
pip install llmtoolkit
```
With Optional Dependencies
For Hugging Face transformers support:
```bash
pip install llmtoolkit[transformers]
```
For GPU acceleration:
```bash
pip install llmtoolkit[gpu]
```
For all features:
```bash
pip install llmtoolkit[all]
```
Usage
Command Line
After installation, you can launch the application with:
```bash
llmtoolkit
```
Command Line Options
```bash
llmtoolkit --help        # Show help message
llmtoolkit --version     # Show version information
llmtoolkit --model PATH  # Load a specific model on startup
llmtoolkit --debug       # Enable debug logging
```
Python Module
You can also run it as a Python module:
```bash
python -m llmtoolkit
```
Programmatic Usage
```python
import llmtoolkit

# Launch the GUI application
llmtoolkit.main()

# Or access specific components
from llmtoolkit.app.core import ModelService

model_service = ModelService()
```
Supported Model Formats
- GGUF (.gguf) - Recommended format for efficient inference
- GGML (.ggml) - Legacy format support
- Hugging Face - Direct model loading from HF Hub (with transformers extra)
- PyTorch (.bin, .pt, .pth) - PyTorch model files
- Safetensors (.safetensors) - Safe tensor format
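The mapping from file extension to loader can be sketched as a small lookup table. This is an illustration of the routing idea only; the toolkit's actual backend selection may differ:

```python
from pathlib import Path

# Illustrative mapping from file extension to a likely loader backend.
BACKENDS = {
    ".gguf": "llama-cpp-python / ctransformers",
    ".ggml": "ctransformers (legacy)",
    ".bin": "transformers (PyTorch)",
    ".pt": "transformers (PyTorch)",
    ".pth": "transformers (PyTorch)",
    ".safetensors": "transformers (safetensors)",
}

def suggest_backend(path: str) -> str:
    """Return a plausible backend for a model file, or 'unknown'."""
    return BACKENDS.get(Path(path).suffix.lower(), "unknown")

print(suggest_backend("models/mistral-7b.Q4_K_M.gguf"))
```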
System Requirements
- Python: 3.8 or higher
- Operating System: Windows, macOS, or Linux
- Memory: 4GB RAM minimum (8GB+ recommended for larger models)
- Storage: 2GB free space (plus space for models)
- GPU (optional): NVIDIA CUDA, AMD ROCm, or Apple Metal support
Configuration
The application stores configuration and data in:
- Windows:
%APPDATA%\llmtoolkit\ - macOS:
~/Library/Application Support/llmtoolkit/ - Linux:
~/.config/llmtoolkit/
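These per-platform conventions can be resolved in code with a small helper. A sketch of the convention only; the application resolves its own directory internally:

```python
import os
import sys
from pathlib import Path

def config_dir(app: str = "llmtoolkit") -> Path:
    """Return the per-platform configuration directory for `app`."""
    if sys.platform == "win32":
        return Path(os.environ.get("APPDATA", "")) / app
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / app
    return Path.home() / ".config" / app  # Linux and other POSIX

print(config_dir())
```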
Troubleshooting
Common Issues
Installation Problems:
- Ensure you have Python 3.8+ installed
- Try upgrading pip: `pip install --upgrade pip`
- For GPU support issues, check your CUDA/ROCm installation
Model Loading Issues:
- Verify model file format is supported (GGUF recommended)
- Check available system memory
- Ensure model file is not corrupted
GUI Not Starting:
- Install GUI dependencies: `pip install llmtoolkit[all]`
- On Linux, ensure X11 forwarding is enabled if using SSH
- Check system compatibility with PySide6
Performance Issues:
- Close other memory-intensive applications
- Use smaller models for limited hardware
- Enable GPU acceleration if available
Development
Setting up Development Environment
```bash
git clone https://github.com/hussainnazary2/LLM-Toolkit.git
cd LLM-Toolkit
pip install -e .[dev]
```
Running Tests
```bash
pytest
```
Code Formatting
```bash
black llmtoolkit/
isort llmtoolkit/
```
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built with PySide6 for the GUI framework
- Model loading powered by llama-cpp-python and ctransformers
- Optional Hugging Face integration via transformers
Changelog
See CHANGELOG.md for version history and updates.
Support
If you encounter any issues or have questions:
- Check the documentation
- Search existing issues
- Create a new issue if needed
- Contact the developer: hussainnazary475@gmail.com
Author
Hussain Nazary
- Email: hussainnazary475@gmail.com
- GitHub: @hussainnazary2
- Project: LLM-Toolkit
Made with ❤️ for the AI community