llm-med
A lightweight medical question-answering language model trained on the MedQuAD dataset. This package provides a transformer-based GPT architecture optimized for medical domain question answering.
Features
- Custom GPT Architecture: lightweight transformer model designed for medical QA
- Medical Domain: trained on the MedQuAD dataset with medical terminology
- Fast Inference: optimized for quick medical question answering
- Flexible: easy to fine-tune on your own medical datasets
- Lightweight: small model size suitable for edge deployment
Installation
From PyPI (Recommended)
pip install llm-med
From Source
git clone https://github.com/yourusername/medllm.git
cd medllm
pip install -e .
With Optional Dependencies
# For development
pip install llm-med[dev]
# For training
pip install llm-med[training]
# All dependencies
pip install llm-med[dev,training]
Quick Start
Inference (Generate Medical Answers)
import torch

from inference.generator import MedicalQAGenerator
from model.architecture import GPTTransformer
from model.configs.model_config import get_small_config
# Load model
config = get_small_config()
model = GPTTransformer(config)
# Load your trained checkpoint
# model.load_state_dict(torch.load('path/to/checkpoint.pt'))
# Create generator
generator = MedicalQAGenerator(
model=model,
tokenizer_path='path/to/tokenizer.model'
)
# Generate answer
question = "What are the symptoms of diabetes?"
answer = generator.generate(
prompt=question,
max_length=100,
temperature=0.7
)
print(f"Q: {question}")
print(f"A: {answer}")
Using the Command Line
# Generate answers
medllm-generate --prompt "What causes hypertension?" --max-length 100
# Train model
medllm-train --model-size small --num-epochs 10 --batch-size 16
Training Your Own Model
from training.train import main
from configs.train_config import get_default_config
from model.configs.model_config import get_small_config
# Configure training
train_config = get_default_config()
train_config.batch_size = 16
train_config.num_epochs = 10
train_config.learning_rate = 3e-4
# Start training
main()
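Saving and restoring trained weights uses standard PyTorch state_dict calls (the package ships a utils/checkpoints.py helper, which may differ in detail). A minimal sketch, using a stand-in nn.Linear in place of GPTTransformer:

```python
import os
import tempfile

import torch
import torch.nn as nn

def save_checkpoint(model: nn.Module, path: str) -> None:
    # Persist only the weights; state_dicts stay portable across code changes.
    torch.save(model.state_dict(), path)

def load_checkpoint(model: nn.Module, path: str) -> nn.Module:
    model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()  # disable dropout for inference
    return model

# Demo with a stand-in module; GPTTransformer(config) works the same way.
path = os.path.join(tempfile.gettempdir(), "llm_med_demo.pt")
model = nn.Linear(4, 2)
save_checkpoint(model, path)
restored = load_checkpoint(nn.Linear(4, 2), path)
print(torch.equal(model.weight, restored.weight))  # True
```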
Model Architecture
The model uses a custom GPT-based transformer architecture:
- Embedding: Token + positional embeddings
- Transformer Blocks: Multi-head self-attention + feed-forward networks
- Parameters: ~10M (small), ~50M (medium)
- Context Length: 512 tokens
- Vocabulary: Custom SentencePiece tokenizer trained on medical text
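The parameter counts above can be estimated from the model dimensions alone. The sketch below uses hypothetical dimensions (the vocabulary size, width, and depth are assumptions, not the package's actual config) and ignores biases and LayerNorm:

```python
def gpt_param_count(vocab_size: int, d_model: int, n_layers: int, ctx_len: int) -> int:
    """Approximate parameter count of a GPT-style decoder.

    Each transformer block holds ~12 * d_model^2 weights:
    4 * d^2 for attention (Q, K, V, and output projections) and
    8 * d^2 for a feed-forward network with a 4x hidden expansion.
    Biases and LayerNorm parameters are ignored.
    """
    embeddings = vocab_size * d_model + ctx_len * d_model  # token + positional
    blocks = n_layers * 12 * d_model ** 2
    return embeddings + blocks

# Hypothetical "small" shape that lands near the ~10M figure above.
print(f"{gpt_param_count(vocab_size=8000, d_model=320, n_layers=6, ctx_len=512):,}")
# 10,096,640
```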
Configuration
Model Sizes
from model.configs.model_config import (
get_tiny_config, # ~2M parameters - for testing
get_small_config, # ~10M parameters - recommended
get_medium_config # ~50M parameters - higher quality
)
Training Configuration
from configs.train_config import TrainingConfig
config = TrainingConfig(
batch_size=16,
learning_rate=3e-4,
num_epochs=10,
warmup_steps=100,
grad_clip=1.0
)
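The warmup_steps field implies a warmup-then-decay learning-rate schedule. Whether llm-med decays with a cosine is an assumption; a common pure-Python version consistent with the values above:

```python
import math

def lr_at_step(step: int, base_lr: float = 3e-4, warmup_steps: int = 100,
               total_steps: int = 10_000, min_lr: float = 0.0) -> float:
    """Linear warmup to base_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # linear ramp
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(99))      # 0.0003 -- full base_lr at the end of warmup
print(lr_at_step(10_000))  # ~0.0  -- fully decayed
```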
Project Structure
llm-med/
├── model/
│   ├── architecture/       # GPT transformer implementation
│   └── configs/            # Model configurations
├── inference/
│   ├── generator.py        # Text generation
│   └── sampling.py         # Sampling strategies
├── training/
│   ├── train.py            # Training script
│   ├── trainer.py          # Training loop
│   └── dataset.py          # Data loading
├── tokenizer/
│   └── train_tokenizer.py  # SentencePiece tokenizer
├── configs/
│   └── train_config.py     # Training configurations
└── utils/
    ├── checkpoints.py      # Model checkpointing
    └── logging.py          # Training logging
Requirements
- Python >= 3.8
- PyTorch >= 2.0.0
- sentencepiece >= 0.1.99
- numpy >= 1.24.0
- tqdm >= 4.65.0
Documentation
For detailed documentation, see the GitHub repository.
Performance
| Model Size | Parameters | Training Time | Inference Speed |
|---|---|---|---|
| Tiny | ~2M | 2 hours | ~100 tokens/sec |
| Small | ~10M | 8 hours | ~80 tokens/sec |
| Medium | ~50M | 24 hours | ~50 tokens/sec |
Tested on a GTX 1080 (8 GB).
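Throughput figures like those in the table can be reproduced with a simple timing harness. The stub below stands in for MedicalQAGenerator.generate (the sleep simulates decoding; swap in a real generator to measure actual speed):

```python
import time

def tokens_per_second(generate_fn, prompt: str, n_tokens: int = 100) -> float:
    """Time one generation call and report decoded tokens per second."""
    start = time.perf_counter()
    generate_fn(prompt, max_length=n_tokens)
    return n_tokens / (time.perf_counter() - start)

# Stub standing in for a real generator; sleeps instead of decoding.
def stub_generate(prompt: str, max_length: int = 100) -> str:
    time.sleep(0.001 * max_length)  # pretend each token takes ~1 ms
    return "stub answer"

rate = tokens_per_second(stub_generate, "What is hypertension?")
print(f"~{rate:.0f} tokens/sec")
```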
Examples
Medical Question Answering
# Example 1: Symptoms inquiry
question = "What are the early signs of Alzheimer's disease?"
answer = generator.generate(question, temperature=0.7)
# Example 2: Treatment information
question = "How is Type 2 diabetes treated?"
answer = generator.generate(question, temperature=0.6)
# Example 3: Medical definitions
question = "What is hypertension?"
answer = generator.generate(question, temperature=0.5)
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (git checkout -b feature/AmazingFeature)
3. Commit your changes (git commit -m 'Add some AmazingFeature')
4. Push to the branch (git push origin feature/AmazingFeature)
5. Open a Pull Request
Citation
If you use this model in your research, please cite:
@software{llm_med_2026,
author = {Your Name},
title = {llm-med: Medical Question-Answering Language Model},
year = {2026},
url = {https://github.com/yourusername/medllm}
}
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- MedQuAD dataset creators
- PyTorch team
- Hugging Face for inspiration
Disclaimer
Medical Disclaimer: This model is for research and educational purposes only. It should NOT be used for actual medical diagnosis or treatment decisions. Always consult qualified healthcare professionals for medical advice.
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: your.email@example.com
Changelog
See CHANGELOG.md for version history.
Made with ❤️ for the medical AI community
File details
Details for the file gptgpt-0.2.0.tar.gz.
File metadata
- Download URL: gptgpt-0.2.0.tar.gz
- Upload date:
- Size: 46.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2faa8002f636a224eb8d2b8efbeac6038314688c99906606e7e01e9be11f1e9d |
| MD5 | 5daca433c6e5713f795d16cb82c91051 |
| BLAKE2b-256 | 7ba40fd5ecf76ee6af23f5b3ac6374fcb16da4822e589e9b40cf3c0008c1a9dc |
File details
Details for the file gptgpt-0.2.0-py3-none-any.whl.
File metadata
- Download URL: gptgpt-0.2.0-py3-none-any.whl
- Upload date:
- Size: 55.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.19
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0e64a001db729434bb8e56b6eb72accfe73cc9efaa9c7008dd81118c955e382c |
| MD5 | 30c213e0887078a8fc75f42a41022ab3 |
| BLAKE2b-256 | fadab60cfef3565e1fb0d5c5f101e91390330d9c01cd9d04621998df10c97868 |