
llm-med 🏥

A lightweight medical question-answering language model trained on the MedQuAD dataset. This package provides a transformer-based GPT architecture optimized for medical domain question answering.


Features

  • 🧠 Custom GPT Architecture: Lightweight transformer model designed for medical QA
  • 💊 Medical Domain: Trained on MedQuAD dataset with medical terminology
  • ⚡ Fast Inference: Optimized for quick medical question answering
  • 🔧 Flexible: Easy to fine-tune on your own medical datasets
  • 📦 Lightweight: Small model size suitable for edge deployment

Installation

From PyPI (Recommended)

pip install llm-med

From Source

git clone https://github.com/yourusername/medllm.git
cd medllm
pip install -e .

With Optional Dependencies

# For development
pip install llm-med[dev]

# For training
pip install llm-med[training]

# All dependencies
pip install llm-med[dev,training]

Quick Start

Inference (Generate Medical Answers)

import torch

from inference.generator import MedicalQAGenerator
from model.architecture import GPTTransformer
from model.configs.model_config import get_small_config

# Load model
config = get_small_config()
model = GPTTransformer(config)

# Load your trained checkpoint
# model.load_state_dict(torch.load('path/to/checkpoint.pt'))

# Create generator
generator = MedicalQAGenerator(
    model=model,
    tokenizer_path='path/to/tokenizer.model'
)

# Generate answer
question = "What are the symptoms of diabetes?"
answer = generator.generate(
    prompt=question,
    max_length=100,
    temperature=0.7
)

print(f"Q: {question}")
print(f"A: {answer}")

Using Command Line

# Generate answers
medllm-generate --prompt "What causes hypertension?" --max-length 100

# Train model
medllm-train --model-size small --num-epochs 10 --batch-size 16

Training Your Own Model

from training.train import main
from configs.train_config import get_default_config
from model.configs.model_config import get_small_config

# Configure training
train_config = get_default_config()
train_config.batch_size = 16
train_config.num_epochs = 10
train_config.learning_rate = 3e-4

# Start training
main()

Model Architecture

The model uses a custom GPT-based transformer architecture:

  • Embedding: Token + positional embeddings
  • Transformer Blocks: Multi-head self-attention + feed-forward networks
  • Parameters: ~10M (small), ~50M (medium)
  • Context Length: 512 tokens
  • Vocabulary: Custom SentencePiece tokenizer trained on medical text
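
The stated parameter counts follow from the listed components. As a rough cross-check, here is a back-of-the-envelope estimator for a standard GPT layout (token + positional embeddings, per-block attention + feed-forward + layernorms). The dimensions used below (vocabulary 8000, model width 320, 6 layers) are illustrative guesses that land near the stated ~10M, not the package's actual configuration:

```python
def gpt_param_count(vocab_size, d_model, n_layers, ctx_len, d_ff=None):
    """Rough parameter count for a GPT-style transformer."""
    d_ff = d_ff or 4 * d_model
    # Token + positional embeddings
    emb = vocab_size * d_model + ctx_len * d_model
    # Per block: Q/K/V/output projections (weights + biases)
    attn = 4 * d_model * d_model + 4 * d_model
    # Per block: two-layer feed-forward network (weights + biases)
    ffn = d_model * d_ff + d_ff + d_ff * d_model + d_model
    # Per block: two layernorms (scale + shift each)
    ln = 2 * (2 * d_model)
    final_ln = 2 * d_model
    return emb + n_layers * (attn + ffn + ln) + final_ln

# Hypothetical "small" dimensions
print(gpt_param_count(vocab_size=8000, d_model=320, n_layers=6, ctx_len=512))
# → 10122240 (~10M)
```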

Configuration

Model Sizes

from model.configs.model_config import (
    get_tiny_config,   # ~2M parameters - for testing
    get_small_config,  # ~10M parameters - recommended
    get_medium_config  # ~50M parameters - higher quality
)

Training Configuration

from configs.train_config import TrainingConfig

config = TrainingConfig(
    batch_size=16,
    learning_rate=3e-4,
    num_epochs=10,
    warmup_steps=100,
    grad_clip=1.0
)
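
The warmup_steps field above typically drives a linear learning-rate warmup: the rate ramps from zero to learning_rate over the first warmup_steps optimizer steps. A minimal sketch of that behavior (the package's actual scheduler, if any, may also decay afterwards):

```python
def warmup_lr(step, base_lr=3e-4, warmup_steps=100):
    """Linear warmup to base_lr, then constant."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr

print(warmup_lr(50))   # half the base rate, mid-warmup
print(warmup_lr(500))  # full base rate after warmup
```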

Project Structure

llm-med/
├── model/
│   ├── architecture/      # GPT transformer implementation
│   └── configs/           # Model configurations
├── inference/
│   ├── generator.py       # Text generation
│   └── sampling.py        # Sampling strategies
├── training/
│   ├── train.py           # Training script
│   ├── trainer.py         # Training loop
│   └── dataset.py         # Data loading
├── tokenizer/
│   └── train_tokenizer.py # SentencePiece tokenizer
├── configs/
│   └── train_config.py    # Training configurations
└── utils/
    ├── checkpoints.py     # Model checkpointing
    └── logging.py         # Training logging

Requirements

  • Python >= 3.8
  • PyTorch >= 2.0.0
  • sentencepiece >= 0.1.99
  • numpy >= 1.24.0
  • tqdm >= 4.65.0

Documentation

For detailed documentation, see the GitHub repository.

Performance

Model Size   Parameters   Training Time   Inference Speed
Tiny         ~2M          2 hours         ~100 tokens/sec
Small        ~10M         8 hours         ~80 tokens/sec
Medium       ~50M         24 hours        ~50 tokens/sec

Tested on a GTX 1080 (8 GB)
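
These throughput figures map directly onto answer latency: an N-token answer at R tokens/sec takes roughly N/R seconds of generation time. A trivial estimator, using figures from the table above:

```python
def answer_latency_s(num_tokens, tokens_per_sec):
    """Approximate wall-clock seconds to generate an answer."""
    return num_tokens / tokens_per_sec

# A 100-token answer on the "small" model at ~80 tokens/sec
print(answer_latency_s(100, 80))  # → 1.25
```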

Examples

Medical Question Answering

# Example 1: Symptoms inquiry
question = "What are the early signs of Alzheimer's disease?"
answer = generator.generate(question, temperature=0.7)

# Example 2: Treatment information
question = "How is Type 2 diabetes treated?"
answer = generator.generate(question, temperature=0.6)

# Example 3: Medical definitions
question = "What is hypertension?"
answer = generator.generate(question, temperature=0.5)
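
The decreasing temperature across these examples trades diversity for determinism: temperature sampling divides the logits by the temperature before the softmax, so lower values sharpen the distribution toward the top token. A self-contained sketch of that rescaling (illustrative, not the package's actual sampling code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; low temperature sharpens them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.5)   # lower temperature
soft = softmax_with_temperature(logits, 1.0)
# The top token gets more probability mass at lower temperature
print(sharp[0] > soft[0])  # → True
```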

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Citation

If you use this model in your research, please cite:

@software{llm_med_2026,
  author = {Your Name},
  title = {llm-med: Medical Question-Answering Language Model},
  year = {2026},
  url = {https://github.com/yourusername/medllm}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • MedQuAD dataset creators
  • PyTorch team
  • Hugging Face for inspiration

Disclaimer

โš ๏ธ Medical Disclaimer: This model is for research and educational purposes only. It should NOT be used for actual medical diagnosis or treatment decisions. Always consult qualified healthcare professionals for medical advice.

Changelog

See CHANGELOG.md for version history.


Made with ❤️ for the medical AI community
