
🚀 LMFast


Democratized Small Language Model Training - Train, fine-tune, distill, and deploy sub-500M parameter models on Colab T4 in 30-40 minutes with enterprise-grade features.


✨ Features

| Feature | Description |
|---|---|
| 🎯 T4 Optimized | Train on a free Colab T4 (16GB) with QLoRA + gradient checkpointing |
| ⚡ Fast Training | Unsloth integration for 2-5x faster fine-tuning |
| 🧠 Distillation | Transfer knowledge from larger models to tiny ones |
| 🤖 Agents | Tool-using agents and orchestration framework |
| 📚 RAG | Lightweight document retrieval and indexing |
| 🌐 Browser | Deploy to the browser via ONNX/WebLLM (no server costs) |
| 🛡️ Guardrails | PII detection, toxicity filtering, prompt injection protection |
| 📊 Observability | Langfuse integration, metrics, attention visualization |
| 🚀 Fast Inference | vLLM backend with OpenAI-compatible API |
| 📦 Easy Export | GGUF, INT4, AWQ, GPTQ quantization |
| 🧩 MCP | Native Model Context Protocol server support |

🚀 Quick Start

Installation

```bash
# Basic installation
pip install lmfast

# With all features
pip install lmfast[all]

# Specific extras
pip install lmfast[fast]           # Unsloth for faster training
pip install lmfast[guardrails]     # Safety features
pip install lmfast[observability]  # Monitoring
pip install lmfast[inference]      # vLLM serving
```

Train in 5 Lines

```python
from lmfast import SLMTrainer, SLMConfig, TrainingConfig
from datasets import load_dataset

# Load data
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")

# Train
trainer = SLMTrainer(
    SLMConfig(model_name="HuggingFaceTB/SmolLM-135M"),
    TrainingConfig(max_steps=500)
)
trainer.train(dataset)
trainer.save("./my_slm")
```

CLI Usage

```bash
# Train a model
lmfast train --model HuggingFaceTB/SmolLM-135M --data yahma/alpaca-cleaned --output ./my_model

# Knowledge distillation
lmfast distill --teacher Qwen/Qwen2-1.5B --student HuggingFaceTB/SmolLM-135M --data my_data.json

# Start inference server
lmfast serve --model ./my_model --port 8000

# Export to GGUF
lmfast export --model ./my_model --output ./model.gguf --format gguf

# Interactive chat
lmfast generate --model ./my_model --interactive
```

📚 Documentation

Training

```python
from lmfast import SLMTrainer, SLMConfig, TrainingConfig

# Configure for T4 GPU
model_config = SLMConfig(
    model_name="HuggingFaceTB/SmolLM-135M",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA
)

training_config = TrainingConfig(
    max_steps=500,
    batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    lora_r=16,
    lora_alpha=32,
)

trainer = SLMTrainer(model_config, training_config)
trainer.train(dataset)
```
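A note on the batch settings above: with gradient accumulation, the optimizer steps once every `gradient_accumulation_steps` micro-batches, so only `batch_size` sequences live on the GPU at a time while gradients average over the union. A quick sanity check in plain Python (not part of lmfast):

```python
# Effective batch size under gradient accumulation: the optimizer
# updates once per `gradient_accumulation_steps` micro-batches,
# so gradients are averaged over their union.
batch_size = 4                   # per-device micro-batch
gradient_accumulation_steps = 4  # micro-batches per optimizer step

effective_batch = batch_size * gradient_accumulation_steps
print(effective_batch)  # 16
```

This is why the config keeps memory within T4 limits while still training with a reasonably large effective batch.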

Knowledge Distillation

```python
from lmfast.distillation import DistillationTrainer
from lmfast.core.config import DistillationConfig

config = DistillationConfig(
    teacher_model="Qwen/Qwen2-1.5B",
    temperature=2.0,
    alpha=0.5,
)

trainer = DistillationTrainer(
    student_model="HuggingFaceTB/SmolLM-135M",
    distillation_config=config,
)
trainer.distill(dataset)
```
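The `temperature` and `alpha` knobs map onto the standard knowledge-distillation loss: teacher and student logits are softened by temperature `T` before a KL term, which is blended with the usual hard cross-entropy via `alpha`. A minimal pure-Python sketch of that math (an illustration of the technique, not lmfast's internal code):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, target_idx,
                      temperature=2.0, alpha=0.5):
    """alpha * soft KL(teacher || student) + (1 - alpha) * hard CE."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    # KL divergence on the softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s)) * temperature ** 2
    # Hard cross-entropy against the ground-truth token at temperature 1.
    ce = -math.log(softmax(student_logits)[target_idx])
    return alpha * kl + (1 - alpha) * ce

loss = distillation_loss([2.0, 1.0, 0.1], [1.5, 1.2, 0.3], target_idx=0)
```

With `alpha=0.5` the student learns equally from the teacher's full output distribution and from the ground-truth labels.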

Guardrails

```python
from lmfast.guardrails import GuardrailsConfig, InputValidator, OutputFilter

config = GuardrailsConfig(
    enable_pii_detection=True,
    enable_toxicity_filter=True,
    enable_prompt_injection=True,
)

validator = InputValidator(config)
result = validator.validate(user_input)
if result.is_valid:
    # Process sanitized input
    output = model.generate(result.sanitized_input)
```
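To make concrete what PII detection involves, here is a minimal regex-based checker. This is a hypothetical stand-in for illustration only, not lmfast's implementation, which is configured through `GuardrailsConfig`:

```python
import re

# Minimal PII patterns: email addresses and US-style phone numbers.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def find_pii(text):
    """Return a list of (kind, match) pairs found in `text`."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        hits.extend((kind, m) for m in pattern.findall(text))
    return hits

def redact(text):
    """Replace every PII match with a [KIND] placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{kind.upper()}]", text)
    return text
```

A production detector also covers names, addresses, and IDs, typically with NER models rather than regexes alone, but the detect-then-sanitize flow is the same one `InputValidator` exposes via `result.sanitized_input`.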

Observability

```python
from lmfast.observability import SLMTracer, MetricsCollector

# Tracing (Langfuse integration)
tracer = SLMTracer(project_name="my_project")

with tracer.trace("inference") as span:
    span.set_attribute("model", "smollm-135m")
    response = model.generate(prompt)
    span.set_attribute("tokens", len(response))

# Metrics
collector = MetricsCollector()
collector.log("loss", 0.5, step=100)
collector.plot("loss")
```

Fast Inference

```python
from lmfast.inference import SLMServer

# Create server
server = SLMServer("./my_model", use_vllm=True)

# Generate
response = server.generate("Hello, how are you?")

# Batch generation
responses = server.generate_batch(["Prompt 1", "Prompt 2"])

# Start OpenAI-compatible API
server.serve(port=8000)
```
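Because the endpoint is OpenAI-compatible, any OpenAI client can talk to it by pointing its base URL at the server. As a sketch, this is the kind of request body such a client POSTs to a chat-completions route; the field names follow the OpenAI API, and the exact routes lmfast exposes may differ:

```python
import json

# Request body for an OpenAI-compatible chat completions endpoint
# (e.g. http://localhost:8000/v1/chat/completions on the server above).
payload = {
    "model": "./my_model",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_tokens": 128,
    "temperature": 0.7,
}
body = json.dumps(payload)
```

With the official `openai` Python package you would instead construct a client with `base_url="http://localhost:8000/v1"` and any placeholder API key, and it sends this payload for you.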

🎯 Supported Models

| Model | Parameters | T4 Compatible | Notes |
|---|---|---|---|
| SmolLM-135M | 135M | ✅ | Fastest training |
| SmolLM-360M | 360M | ✅ | Good balance |
| TinyLlama-1.1B | 1.1B | ✅ (with QLoRA) | More capable |
| Qwen2-0.5B | 500M | ✅ | Multilingual |
| Phi-3-mini | 3.8B | ⚠️ (tight) | Most capable |
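A rough rule of thumb behind the T4 column: 4-bit quantized weights take about half a byte per parameter, on top of which come LoRA adapters, optimizer state, and activations. A back-of-envelope estimate (an approximation that ignores that overhead):

```python
def int4_weight_mb(num_params):
    """Approximate 4-bit weight footprint: 0.5 bytes per parameter."""
    return num_params * 0.5 / 1024 ** 2

# SmolLM-135M fits easily; TinyLlama-1.1B needs QLoRA headroom.
print(round(int4_weight_mb(135_000_000)))    # ~64 MB of weights
print(round(int4_weight_mb(1_100_000_000)))  # ~525 MB of weights
```

This is why the sub-500M models train comfortably on a T4 while Phi-3-mini at 3.8B (~1.8 GB of 4-bit weights before any training state) is tight.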

📦 Package Structure

```
lmfast/
├── core/           # Config and model loading
├── training/       # Training and data processing
├── distillation/   # Knowledge distillation
├── guardrails/     # Safety and filtering
├── observability/  # Tracing and metrics
├── inference/      # Serving and quantization
└── cli/            # Command-line interface
```

🧪 Development

```bash
# Clone
git clone https://github.com/lmfast/lmfast
cd lmfast

# Create environment
conda env create -f environment.yml
conda activate lmfast

# Install in dev mode
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Format code
black lmfast/ tests/
ruff check lmfast/ tests/
```

📄 License

Apache 2.0 - See LICENSE for details.


🙏 Acknowledgments
