GptGpt 🤖
A lightweight GPT-based language model framework for training custom question-answering models on any domain. The package provides a transformer-based GPT architecture that you can train on your own Q&A datasets, whether for casual conversation, technical support, education, or any other domain.
Features
- 🧠 Custom GPT Architecture: Lightweight transformer model for any Q&A domain
- 🎯 Domain-Agnostic: Train on any question-answering dataset (casual chat, tech support, education, etc.)
- ⚡ Fast Inference: Optimized for quick question answering
- 🔧 Flexible Training: Easy to train on your own custom datasets
- 📦 Lightweight: Small model size suitable for edge deployment
- 🛠️ Complete Toolkit: Includes tokenizer training, model training, and inference utilities
Installation
From PyPI (Recommended)
```bash
pip install gptgpt
```
From Source
```bash
git clone https://github.com/sigdelsanjog/gptgpt.git
cd gptgpt
pip install -e .
```
With Optional Dependencies
```bash
# For development
pip install "gptgpt[dev]"

# For training
pip install "gptgpt[training]"

# All dependencies
pip install "gptgpt[dev,training]"
```
Quick Start
Inference (Generate Answers)
```python
from gptgpt.inference.generator import TextGenerator
from gptgpt.model.architecture import GPTTransformer
from gptgpt.model.configs.model_config import get_small_config

# Load model
config = get_small_config()
model = GPTTransformer(config)

# Load your trained checkpoint
# model.load_state_dict(torch.load('path/to/checkpoint.pt'))

# Create generator
generator = TextGenerator(
    model=model,
    tokenizer_path='path/to/tokenizer.model',
)

# Generate an answer
question = "What's your favorite programming language?"
answer = generator.generate(
    prompt=question,
    max_length=100,
    temperature=0.7,
)
print(f"Q: {question}")
print(f"A: {answer}")
```
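The `temperature` parameter rescales the model's output logits before a token is sampled: lower values sharpen the distribution toward the most likely token, higher values flatten it. A minimal pure-Python sketch of this step (independent of gptgpt's internals, which may differ):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits after temperature scaling.

    Lower temperature -> more deterministic; higher -> more diverse.
    """
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# At very low temperature the highest logit dominates almost completely.
idx = sample_with_temperature([2.0, 1.0, 0.1], temperature=0.1)
```

At `temperature=0.5` (as in the medical-definition example below) answers stay close to the model's most confident phrasing; values around 0.7 trade some of that confidence for variety.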
Using Command Line
```bash
# Generate answers
gptgpt-generate --prompt "How do I train a custom model?" --max-length 100

# Train a model
gptgpt-train --model-size small --num-epochs 10 --batch-size 16
```
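A console entry point like `gptgpt-generate` typically parses such flags with `argparse`. The sketch below is hypothetical (the installed gptgpt CLI may define different or additional options), but it shows the flag shapes used above:

```python
import argparse

def build_parser():
    """Hypothetical parser mirroring the gptgpt-generate flags shown above."""
    parser = argparse.ArgumentParser(prog="gptgpt-generate")
    parser.add_argument("--prompt", required=True,
                        help="Question to answer")
    parser.add_argument("--max-length", type=int, default=100,
                        help="Maximum number of tokens to generate")
    parser.add_argument("--temperature", type=float, default=0.7,
                        help="Sampling temperature")
    return parser

# argparse converts --max-length to the attribute args.max_length.
args = build_parser().parse_args(["--prompt", "What is GPT?", "--max-length", "50"])
```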
Training Your Own Model
```python
from gptgpt.training.train import main
from gptgpt.configs.train_config import get_default_config
from gptgpt.model.configs.model_config import get_small_config

# Configure training
train_config = get_default_config()
train_config.batch_size = 16
train_config.num_epochs = 10
train_config.learning_rate = 3e-4

# Start training
main()
```
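The `warmup_steps` and `learning_rate` settings in the training configuration suggest a warmup-style learning-rate schedule. Below is a minimal sketch of linear warmup to a constant rate (an assumption; the actual schedule used by gptgpt may differ, e.g. adding cosine decay):

```python
def warmup_lr(step, base_lr=3e-4, warmup_steps=100):
    """Linearly ramp the learning rate from ~0 to base_lr over
    warmup_steps optimizer steps, then hold it constant.

    A common schedule for transformer training; whether gptgpt uses
    exactly this shape is an assumption.
    """
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

lr_mid = warmup_lr(49)    # halfway through warmup: half of base_lr
lr_full = warmup_lr(500)  # after warmup: constant base_lr
```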
Model Architecture
The model uses a custom GPT-based transformer architecture:
- Embedding: Token + positional embeddings
- Transformer Blocks: Multi-head self-attention + feed-forward networks
- Parameters: ~10M (small), ~50M (medium)
- Context Length: 512 tokens
- Vocabulary: Custom SentencePiece tokenizer trained on your data
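The ~10M figure for the small model can be sanity-checked with a rough parameter count. The dimensions below (8k vocabulary, 6 layers, width 320) are illustrative assumptions, not gptgpt's actual configuration:

```python
def gpt_param_count(vocab_size, d_model, n_layers, context_len):
    """Rough parameter count for a GPT-style decoder (biases and
    layer norms ignored). Dimensions are hypothetical."""
    # Token embedding table plus learned positional embeddings.
    embeddings = vocab_size * d_model + context_len * d_model
    # Per block: 4 attention projections (Q, K, V, output) ...
    attention = 4 * d_model * d_model
    # ... plus a feed-forward network with the usual 4x expansion.
    feed_forward = 2 * 4 * d_model * d_model
    blocks = n_layers * (attention + feed_forward)
    return embeddings + blocks

# A 6-layer, 320-wide model with an 8k vocabulary lands near 10M.
total = gpt_param_count(vocab_size=8000, d_model=320, n_layers=6, context_len=512)
```

Note that weight tying between the token embedding and the output projection (common in small GPTs) would not change this estimate, since the table is only counted once here.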
Configuration
Model Sizes
```python
from gptgpt.model.configs.model_config import (
    get_tiny_config,    # ~2M parameters - for testing
    get_small_config,   # ~10M parameters - recommended
    get_medium_config,  # ~50M parameters - higher quality
)
```
Training Configuration
```python
from gptgpt.configs.train_config import TrainingConfig

config = TrainingConfig(
    batch_size=16,
    learning_rate=3e-4,
    num_epochs=10,
    warmup_steps=100,
    grad_clip=1.0,
)
```
Project Structure
```
gptgpt/
├── model/
│   ├── architecture/       # GPT transformer implementation
│   └── configs/            # Model configurations
├── inference/
│   ├── generator.py        # Text generation
│   └── sampling.py         # Sampling strategies
├── training/
│   ├── train.py            # Training script
│   ├── trainer.py          # Training loop
│   └── dataset.py          # Data loading
├── tokenizer/
│   └── train_tokenizer.py  # SentencePiece tokenizer
├── configs/
│   └── train_config.py     # Training configurations
└── utils/
    ├── checkpoints.py      # Model checkpointing
    └── logging.py          # Training logging
```
Requirements
- Python >= 3.8
- PyTorch >= 2.0.0
- sentencepiece >= 0.1.99
- numpy >= 1.24.0
- tqdm >= 4.65.0
Documentation
For detailed documentation, visit the GitHub repository: https://github.com/sigdelsanjog/gptgpt
Key Guides
Performance
| Model Size | Parameters | Training Time | Inference Speed |
|---|---|---|---|
| Tiny | ~2M | 2 hours | ~100 tokens/sec |
| Small | ~10M | 8 hours | ~80 tokens/sec |
| Medium | ~50M | 24 hours | ~50 tokens/sec |
Benchmarked on a single GTX 1080 (8 GB).
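The throughput column translates directly into per-answer latency; a quick back-of-the-envelope calculation for a 100-token answer at each model size:

```python
def answer_latency_seconds(num_tokens, tokens_per_sec):
    """Approximate time to generate an answer at a given decode rate."""
    return num_tokens / tokens_per_sec

# Using the table above, a 100-token answer takes roughly:
tiny = answer_latency_seconds(100, 100)   # 1.0 s
small = answer_latency_seconds(100, 80)   # 1.25 s
medium = answer_latency_seconds(100, 50)  # 2.0 s
```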
Examples
Medical Question Answering
```python
# Example 1: Symptoms inquiry
question = "What are the early signs of Alzheimer's disease?"
answer = generator.generate(question, temperature=0.7)

# Example 2: Treatment information
question = "How is Type 2 diabetes treated?"
answer = generator.generate(question, temperature=0.6)

# Example 3: Medical definitions
question = "What is hypertension?"
answer = generator.generate(question, temperature=0.5)
```
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Citation
If you use this model in your research, please cite:
```bibtex
@software{llm_med_2026,
  author = {Your Name},
  title  = {llm-med: Medical Question-Answering Language Model},
  year   = {2026},
  url    = {https://github.com/yourusername/medllm}
}
```
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- MedQuAD dataset creators
- PyTorch team
- Hugging Face for inspiration
Disclaimer
⚠️ Medical Disclaimer: This model is for research and educational purposes only. It should NOT be used for actual medical diagnosis or treatment decisions. Always consult qualified healthcare professionals for medical advice.
Support
- 📫 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📧 Email: your.email@example.com
Changelog
See CHANGELOG.md for version history.
Made with ❤️ for the medical AI community