
GptGpt 🤖

A lightweight GPT-based language model framework for training custom question-answering models in any domain. This package provides a transformer-based GPT architecture that you can train on your own Q&A datasets, whether the domain is casual conversation, technical support, education, or anything else.


Features

  • 🧠 Custom GPT Architecture: Lightweight transformer model for any Q&A domain
  • 🎯 Domain-Agnostic: Train on any question-answering dataset (casual chat, tech support, education, etc.)
  • ⚡ Fast Inference: Optimized for quick question answering
  • 🔧 Flexible Training: Easy to train on your own custom datasets
  • 📦 Lightweight: Small model size suitable for edge deployment
  • 🛠️ Complete Toolkit: Includes tokenizer training, model training, and inference utilities

Installation

From PyPI (Recommended)

pip install gptgpt

From Source

git clone https://github.com/sigdelsanjog/gptgpt.git
cd gptgpt
pip install -e .

With Optional Dependencies

# For development
pip install gptgpt[dev]

# For training
pip install gptgpt[training]

# All dependencies
pip install gptgpt[dev,training]

Quick Start

Inference (Generate Answers)

import torch

from gptgpt.inference.generator import TextGenerator
from gptgpt.model.architecture import GPTTransformer
from gptgpt.model.configs.model_config import get_small_config

# Build the model from a predefined config
config = get_small_config()
model = GPTTransformer(config)

# Load your trained checkpoint
# model.load_state_dict(torch.load('path/to/checkpoint.pt'))

# Create generator
generator = TextGenerator(
    model=model,
    tokenizer_path='path/to/tokenizer.model'
)

# Generate answer
question = "What's your favorite programming language?"
answer = generator.generate(
    prompt=question,
    max_length=100,
    temperature=0.7
)

print(f"Q: {question}")
print(f"A: {answer}")
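The temperature argument above controls how peaked the sampling distribution is: lower values make generation more deterministic. A minimal, plain-Python illustration of temperature-scaled sampling (independent of gptgpt's own sampling.py, which may differ):

```python
import math
import random

def sample_with_temperature(logits, temperature=0.7, seed=0):
    """Temperature-scale logits, softmax them, then draw one token index."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Near-zero temperature collapses onto the highest-logit token
print(sample_with_temperature([1.0, 5.0, 2.0], temperature=0.01))  # → 1
```

At temperature 0.01 almost all probability mass lands on the largest logit, which is why lower temperatures suit factual Q&A and higher ones suit open-ended chat.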

Using Command Line

# Generate answers
gptgpt-generate --prompt "How do I train a custom model?" --max-length 100

# Train model
gptgpt-train --model-size small --num-epochs 10 --batch-size 16

Training Your Own Model

from gptgpt.training.train import main
from gptgpt.configs.train_config import get_default_config
from gptgpt.model.configs.model_config import get_small_config

# Configure training
train_config = get_default_config()
train_config.batch_size = 16
train_config.num_epochs = 10
train_config.learning_rate = 3e-4

# Start training
# NOTE: main() takes no arguments here; confirm how it picks up
# train_config (or use the gptgpt-train CLI shown above).
main()
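The on-disk format gptgpt's dataset loader expects is not documented in this page; a common approach is to flatten question/answer pairs into plain text before tokenization. The `Q:`/`A:` convention below is an assumption for illustration only — check the package's dataset.py for the actual format:

```python
def format_qa_pairs(pairs):
    """Join (question, answer) tuples into plain training text.

    NOTE: the 'Q:'/'A:' markers are an illustrative convention,
    not necessarily what gptgpt's loader requires.
    """
    return "\n\n".join(f"Q: {q}\nA: {a}" for q, a in pairs)

corpus = format_qa_pairs([
    ("What is hypertension?", "Persistently elevated blood pressure."),
    ("How is Type 2 diabetes treated?", "Diet, exercise, and medication."),
])
print(corpus.splitlines()[0])  # → Q: What is hypertension?
```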

Model Architecture

The model uses a custom GPT-based transformer architecture:

  • Embedding: Token + positional embeddings
  • Transformer Blocks: Multi-head self-attention + feed-forward networks
  • Parameters: ~2M (tiny), ~10M (small), ~50M (medium)
  • Context Length: 512 tokens
  • Vocabulary: Custom SentencePiece tokenizer trained on your data
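The parameter counts above can be sanity-checked with back-of-the-envelope arithmetic: embeddings contribute roughly vocab_size·d_model + context·d_model weights, and each transformer block roughly 12·d_model² (4·d_model² for the attention projections plus 8·d_model² for a 4x-wide feed-forward). The config values below are hypothetical, chosen only to show how a "small"-scale count arises:

```python
def estimate_gpt_params(vocab_size, d_model, n_layers, context_len):
    """Rough parameter count for a GPT-style decoder (biases/LayerNorm ignored)."""
    embeddings = vocab_size * d_model + context_len * d_model
    per_block = 4 * d_model ** 2 + 8 * d_model ** 2  # attention + 4x-wide MLP
    return embeddings + n_layers * per_block

# Hypothetical config: 8k vocab, 384-dim embeddings, 6 layers, 512 context
print(estimate_gpt_params(8000, 384, 6, 512))  # → 13885440, i.e. ~14M
```

gptgpt's actual hyperparameters may differ; the point is that vocabulary size and d_model dominate the budget at this scale.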

Configuration

Model Sizes

from gptgpt.model.configs.model_config import (
    get_tiny_config,   # ~2M parameters - for testing
    get_small_config,  # ~10M parameters - recommended
    get_medium_config  # ~50M parameters - higher quality
)

Training Configuration

from gptgpt.configs.train_config import TrainingConfig

config = TrainingConfig(
    batch_size=16,
    learning_rate=3e-4,
    num_epochs=10,
    warmup_steps=100,
    grad_clip=1.0
)
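The warmup_steps and grad_clip fields follow standard practice: ramp the learning rate up over the first steps, then decay it, while clipping gradient norms at 1.0. gptgpt's actual schedule isn't shown on this page; a common linear-warmup/linear-decay sketch, using the config values above:

```python
def lr_at_step(step, base_lr=3e-4, warmup_steps=100, total_steps=10_000):
    """Linear warmup to base_lr over warmup_steps, then linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / (total_steps - warmup_steps)

print(lr_at_step(50))   # halfway through warmup: half of base_lr
print(lr_at_step(100))  # warmup complete: peak learning rate
```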

Project Structure

gptgpt/
├── model/
│   ├── architecture/      # GPT transformer implementation
│   └── configs/           # Model configurations
├── inference/
│   ├── generator.py       # Text generation
│   └── sampling.py        # Sampling strategies
├── training/
│   ├── train.py           # Training script
│   ├── trainer.py         # Training loop
│   └── dataset.py         # Data loading
├── tokenizer/
│   └── train_tokenizer.py # SentencePiece tokenizer
├── configs/
│   └── train_config.py    # Training configurations
└── utils/
    ├── checkpoints.py     # Model checkpointing
    └── logging.py         # Training logging

Requirements

  • Python >= 3.8
  • PyTorch >= 2.0.0
  • sentencepiece >= 0.1.99
  • numpy >= 1.24.0
  • tqdm >= 4.65.0

Documentation

For detailed documentation, visit the GitHub repository: https://github.com/sigdelsanjog/gptgpt


Performance

Model  | Parameters | Training Time | Inference Speed
-------|------------|---------------|----------------
Tiny   | ~2M        | 2 hours       | ~100 tokens/sec
Small  | ~10M       | 8 hours       | ~80 tokens/sec
Medium | ~50M       | 24 hours      | ~50 tokens/sec

Tested on a GTX 1080 (8 GB).

Examples

Medical Question Answering

# Example 1: Symptoms inquiry
question = "What are the early signs of Alzheimer's disease?"
answer = generator.generate(question, temperature=0.7)

# Example 2: Treatment information
question = "How is Type 2 diabetes treated?"
answer = generator.generate(question, temperature=0.6)

# Example 3: Medical definitions
question = "What is hypertension?"
answer = generator.generate(question, temperature=0.5)

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Citation

If you use this model in your research, please cite:

@software{gptgpt_2026,
  author = {Your Name},
  title = {gptgpt: Lightweight GPT-Based Question-Answering Framework},
  year = {2026},
  url = {https://github.com/sigdelsanjog/gptgpt}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • MedQuAD dataset creators
  • PyTorch team
  • Hugging Face for inspiration

Disclaimer

⚠️ Medical Disclaimer: This model is for research and educational purposes only. It should NOT be used for actual medical diagnosis or treatment decisions. Always consult qualified healthcare professionals for medical advice.

Support

Changelog

See CHANGELOG.md for version history.


Made with ❤️ for the medical AI community
