
🚀 LMFast


Democratized Small Language Model Training - Train, fine-tune, distill, and deploy sub-500M parameter models on Colab T4 in 30-40 minutes with enterprise-grade features.


✨ Features

| Feature | Description |
| --- | --- |
| 🎯 T4 Optimized | Train on free Colab T4 (12GB) with QLoRA + gradient checkpointing |
| ⚡ Fast Training | Unsloth integration for 2-5x faster fine-tuning |
| 🧠 Distillation | Transfer knowledge from larger models to tiny ones |
| 🤖 Agents | Tool-using agents and orchestration framework |
| 📚 RAG | Lightweight document retrieval and indexing |
| 🌐 Browser | Deploy to the browser via ONNX/WebLLM (no server costs) |
| 🛡️ Guardrails | PII detection, toxicity filtering, prompt injection protection |
| 📊 Observability | Langfuse integration, metrics, attention visualization |
| 🚀 Fast Inference | vLLM backend with OpenAI-compatible API |
| 📦 Easy Export | GGUF, INT4, AWQ, GPTQ quantization |
| 🧩 MCP | Native Model Context Protocol server support |

🚀 Quick Start

Installation

# Basic installation
pip install lmfast

# With all features
pip install lmfast[all]

# Specific extras
pip install lmfast[fast]        # Unsloth for faster training
pip install lmfast[guardrails]  # Safety features
pip install lmfast[observability]  # Monitoring
pip install lmfast[inference]   # vLLM serving

Train in 5 Lines

from lmfast import SLMTrainer, SLMConfig, TrainingConfig
from datasets import load_dataset

# Load data
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")

# Train
trainer = SLMTrainer(
    SLMConfig(model_name="HuggingFaceTB/SmolLM-135M"),
    TrainingConfig(max_steps=500)
)
trainer.train(dataset)
trainer.save("./my_slm")

CLI Usage

# Train a model
lmfast train --model HuggingFaceTB/SmolLM-135M --data yahma/alpaca-cleaned --output ./my_model

# Knowledge distillation
lmfast distill --teacher Qwen/Qwen2-1.5B --student HuggingFaceTB/SmolLM-135M --data my_data.json

# Start inference server
lmfast serve --model ./my_model --port 8000

# Export to GGUF
lmfast export --model ./my_model --output ./model.gguf --format gguf

# Interactive chat
lmfast generate --model ./my_model --interactive
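All of the export formats above (GGUF, INT4, AWQ, GPTQ) rest on weight quantization. As a rough illustration of what symmetric INT4 quantization does to a weight tensor — a sketch of the basic idea only, not the actual GGUF/AWQ/GPTQ algorithms:

```python
def quantize_int4(weights):
    """Symmetric INT4 quantization: map floats to integers in [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7  # 7 = largest positive int4 value
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction: integer * scale."""
    return [v * scale for v in q]

weights = [0.12, -0.7, 0.33, 0.05]
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)  # close to the originals, within one scale step
```

Real quantizers add per-group scales, activation-aware calibration (AWQ), or error-compensating rounding (GPTQ), but the storage saving is the same: 4 bits per weight plus a scale.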

📚 Documentation

Training

from lmfast import SLMTrainer, SLMConfig, TrainingConfig

# Configure for T4 GPU
model_config = SLMConfig(
    model_name="HuggingFaceTB/SmolLM-135M",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA
)

training_config = TrainingConfig(
    max_steps=500,
    batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    lora_r=16,
    lora_alpha=32,
)

trainer = SLMTrainer(model_config, training_config)
trainer.train(dataset)
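Two sanity checks on the numbers above (these are the standard accumulation and LoRA relationships, not lmfast-specific behavior): gradients are accumulated over `gradient_accumulation_steps` micro-batches before each optimizer step, and the LoRA update is conventionally scaled by `lora_alpha / lora_r`.

```python
# Values from the TrainingConfig above
batch_size = 4
gradient_accumulation_steps = 4
# Samples seen per optimizer step
effective_batch_size = batch_size * gradient_accumulation_steps

lora_r = 16
lora_alpha = 32
# Standard LoRA convention: the low-rank delta is multiplied by alpha / r
lora_scaling = lora_alpha / lora_r
```

So this config takes optimizer steps on an effective batch of 16 samples while only holding 4 in memory at a time, which is what makes a 12GB T4 workable.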

Knowledge Distillation

from lmfast.distillation import DistillationTrainer
from lmfast.core.config import DistillationConfig

config = DistillationConfig(
    teacher_model="Qwen/Qwen2-1.5B",
    temperature=2.0,
    alpha=0.5,
)

trainer = DistillationTrainer(
    student_model="HuggingFaceTB/SmolLM-135M",
    distillation_config=config,
)
trainer.distill(dataset)
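The `temperature` and `alpha` fields follow the standard knowledge-distillation recipe (Hinton et al.): soften both teacher and student logits at temperature T, penalize their divergence, and blend that with the ordinary hard-label loss via alpha. A minimal pure-Python sketch of one common form of that loss — illustrative only, lmfast's internal implementation may differ:

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T (T > 1 softens the distribution)."""
    exps = [math.exp(l / T) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=2.0, alpha=0.5):
    """alpha-weighted blend of soft (teacher KL) and hard (label CE) losses."""
    p_teacher = softmax(teacher_logits, T=temperature)
    p_student = softmax(student_logits, T=temperature)
    # KL(teacher || student) on softened distributions, scaled by T^2
    soft_loss = sum(pt * math.log(pt / ps)
                    for pt, ps in zip(p_teacher, p_student)) * temperature ** 2
    # Ordinary cross-entropy against the hard label (temperature 1)
    hard_loss = -math.log(softmax(student_logits)[true_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss

loss = distillation_loss([2.0, 0.5, -1.0], [2.5, 0.2, -0.8], true_label=0)
```

With `alpha=0.5` the student splits its attention evenly between mimicking the teacher's full output distribution and getting the labels right; `temperature=2.0` exposes the teacher's relative preferences among wrong answers, which is where most of the transferred knowledge lives.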

Guardrails

from lmfast.guardrails import GuardrailsConfig, InputValidator, OutputFilter

config = GuardrailsConfig(
    enable_pii_detection=True,
    enable_toxicity_filter=True,
    enable_prompt_injection=True,
)

validator = InputValidator(config)
result = validator.validate(user_input)
if result.is_valid:
    # Process sanitized input
    output = model.generate(result.sanitized_input)
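To make concrete what "PII detection" with a `sanitized_input` involves, here is a toy regex-based redactor — a sketch of the idea only, not lmfast's internals, which use more robust detectors:

```python
import re

# Toy patterns for two common PII types; production guardrails cover many more
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Return (sanitized_text, list_of_detected_pii_types)."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

sanitized, kinds = redact_pii("Contact alice@example.com, SSN 123-45-6789")
```

The validator then passes the redacted text downstream, so the model never sees the raw identifiers.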

Observability

from lmfast.observability import SLMTracer, MetricsCollector

# Tracing (Langfuse integration)
tracer = SLMTracer(project_name="my_project")

with tracer.trace("inference") as span:
    span.set_attribute("model", "smollm-135m")
    response = model.generate(prompt)
    span.set_attribute("tokens", len(response))

# Metrics
collector = MetricsCollector()
collector.log("loss", 0.5, step=100)
collector.plot("loss")
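Conceptually, a metrics collector like the one above is just a step-indexed log of named scalars. A minimal stand-in showing the shape of that data structure — a sketch, not lmfast's implementation:

```python
from collections import defaultdict

class SimpleMetrics:
    """Step-indexed scalar logger: log(name, value, step), then summarize."""
    def __init__(self):
        self.series = defaultdict(list)  # name -> [(step, value), ...]

    def log(self, name, value, step):
        self.series[name].append((step, value))

    def latest(self, name):
        return self.series[name][-1][1]

    def mean(self, name):
        values = [v for _, v in self.series[name]]
        return sum(values) / len(values)

m = SimpleMetrics()
m.log("loss", 0.9, step=0)
m.log("loss", 0.5, step=100)
```

Keeping the step alongside each value is what lets a `plot()` method draw a training curve rather than a bare histogram.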

Fast Inference

from lmfast.inference import SLMServer

# Create server
server = SLMServer("./my_model", use_vllm=True)

# Generate
response = server.generate("Hello, how are you?")

# Batch generation
responses = server.generate_batch(["Prompt 1", "Prompt 2"])

# Start OpenAI-compatible API
server.serve(port=8000)
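Because the API is OpenAI-compatible, any OpenAI-style client can talk to it. The request body below follows the standard chat-completions shape; the exact route (`/v1/chat/completions` is assumed here) should be checked against what `lmfast serve` actually exposes:

```python
import json

# Standard OpenAI-style chat-completions request body
payload = {
    "model": "my_model",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_tokens": 128,
}
body = json.dumps(payload)

# Against a running server, e.g. with the stdlib:
#   from urllib.request import Request, urlopen
#   req = Request("http://localhost:8000/v1/chat/completions",
#                 data=body.encode(),
#                 headers={"Content-Type": "application/json"})
#   print(urlopen(req).read())
```

This also means existing tooling (the `openai` Python client pointed at `base_url="http://localhost:8000/v1"`, LangChain, etc.) can be reused without code changes.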

🎯 Supported Models

| Model | Parameters | T4 Compatible | Notes |
| --- | --- | --- | --- |
| SmolLM-135M | 135M | ✅ | Fastest training |
| SmolLM-360M | 360M | ✅ | Good balance |
| TinyLlama-1.1B | 1.1B | ✅ (with QLoRA) | More capable |
| Qwen2-0.5B | 500M | ✅ | Multilingual |
| Phi-3-mini | 3.8B | ⚠️ (tight) | Most capable |

📦 Package Structure

lmfast/
├── core/           # Config and model loading
├── training/       # Training and data processing
├── distillation/   # Knowledge distillation
├── guardrails/     # Safety and filtering
├── observability/  # Tracing and metrics
├── inference/      # Serving and quantization
└── cli/            # Command-line interface

🧪 Development

# Clone
git clone https://github.com/lmfast/lmfast
cd lmfast

# Create environment
conda env create -f environment.yml
conda activate lmfast

# Install in dev mode
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Format code
black lmfast/ tests/
ruff check lmfast/ tests/

📄 License

Apache 2.0 - See LICENSE for details.


🙏 Acknowledgments

Download files

Download the file for your platform.

Source Distribution

lmfast-0.3.5.tar.gz (125.2 kB)

Uploaded Source

Built Distribution


lmfast-0.3.5-py3-none-any.whl (91.5 kB)

Uploaded Python 3

File details

Details for the file lmfast-0.3.5.tar.gz.

File metadata

  • Download URL: lmfast-0.3.5.tar.gz
  • Upload date:
  • Size: 125.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for lmfast-0.3.5.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | a1f679bbc82543d73819b2970dd763d23e6a495a4b7e104c47cf609ad83095cb |
| MD5 | 6a7ba5d28cd49f21d1b1c940f095d9fb |
| BLAKE2b-256 | de1ced8aea0c512c5edfc31d2e4c494c9b159bedad5a612ef1490117162312d2 |


Provenance

The following attestation bundles were made for lmfast-0.3.5.tar.gz:

Publisher: publish.yml on 2796gaurav/lmfast

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file lmfast-0.3.5-py3-none-any.whl.

File metadata

  • Download URL: lmfast-0.3.5-py3-none-any.whl
  • Upload date:
  • Size: 91.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for lmfast-0.3.5-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 874fbe0fd49ce32e830ce0b53fc29317d14c3d6750bf191c01baf3e9ba90bfb6 |
| MD5 | 865f1473db28aa8ac85416df236440c2 |
| BLAKE2b-256 | 2c6aa2f7add3bee2c2f92e20c481525aa6f3d0770a8738711b1ae7a2d8ddcc0d |


Provenance

The following attestation bundles were made for lmfast-0.3.5-py3-none-any.whl:

Publisher: publish.yml on 2796gaurav/lmfast

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
