
Production-grade LLM fine-tuning framework with CLI

Project description

xlmtec


xlmtec is a command-line toolkit for fine-tuning large language models. Describe your task in plain English, get a ready-to-run config, browse HuggingFace models, and train — all from the terminal.


Features

  • AI-powered config generation — describe your task, get a YAML config from Claude, Gemini, or GPT
  • Model Hub browser — search and inspect HuggingFace models without leaving the terminal
  • 5 fine-tuning methods — LoRA, QLoRA, Full, Instruction, DPO
  • Config validation — catch errors before training starts
  • Dry-run mode — preview your training plan without loading a model
  • Rich terminal UI — progress bars, panels, colour output throughout

Installation

# Core (lightweight — no ML deps)
pip install xlmtec

# With training support
pip install xlmtec[ml]

# With AI suggestions (pick your provider)
pip install xlmtec[claude]    # Anthropic
pip install xlmtec[gemini]    # Google
pip install xlmtec[codex]     # OpenAI
pip install xlmtec[ai]        # All three

# Everything
pip install xlmtec[full]
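Each extra pulls in a different set of optional dependencies. A minimal sketch of how to check which groups are importable in your environment; the module names mapped to each extra here are assumptions, not taken from the package metadata:

```python
import importlib.util

def importable(module):
    """True if the module can be found without actually importing it."""
    try:
        return importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:  # parent package itself is missing
        return False

# Assumed representative module per extra -- verify against the
# distribution's actual optional dependencies.
EXTRAS = {"ml": "torch", "claude": "anthropic",
          "gemini": "google.generativeai", "codex": "openai"}

for extra, module in EXTRAS.items():
    status = "ok" if importable(module) else "missing"
    print(f"[{extra}] {module}: {status}")
```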

Quickstart

1. Get an AI-generated config

xlmtec ai-suggest "fine-tune a small model for customer support" --provider claude

Outputs a ready-to-run YAML config and the exact command to run.

2. Browse models on HuggingFace

xlmtec hub search "bert" --task text-classification --limit 5
xlmtec hub trending
xlmtec hub info google-bert/bert-base-uncased

3. Validate your config

xlmtec config validate config.yaml

4. Train

# Preview without loading model
xlmtec train --config config.yaml --dry-run

# Start training
xlmtec train --config config.yaml

Commands

Command                                   Description
xlmtec ai-suggest "<task>"                Generate a config from plain English
xlmtec hub search "<query>"               Search HuggingFace models
xlmtec hub info <model-id>                Show model details
xlmtec hub trending                       List top trending models
xlmtec config validate <file>             Validate a YAML config
xlmtec train --config <file>              Fine-tune a model
xlmtec train --config <file> --dry-run    Preview the training plan
xlmtec recommend                          Recommend a method for your hardware
xlmtec evaluate                           Evaluate a fine-tuned model
xlmtec benchmark                          Compare multiple runs
xlmtec merge                              Merge a LoRA adapter into its base model
xlmtec upload                             Upload a model to the HuggingFace Hub
xlmtec --version                          Show the installed version

Fine-tuning methods

Method        VRAM               Best for
lora          Low (4–8 GB)       Most tasks, fast convergence
qlora         Very low (4 GB)    Large models on limited hardware
full          High (24 GB+)      Best quality, small models
instruction   Low (4–8 GB)       Prompt/response-style tasks
dpo           Low (4–8 GB)       Preference learning from pairs
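LoRA's low VRAM footprint comes from training two small low-rank factors instead of the full weight matrix. A rough sketch of the parameter arithmetic, using GPT-2's c_attn projection (768 × 2304) and the r=16 from the example config further down; the exact savings for a real run also depend on optimizer state and which modules are targeted:

```python
def full_params(d_in, d_out):
    # Updating the full weight matrix trains every entry.
    return d_in * d_out

def lora_trainable_params(d_in, d_out, r):
    # LoRA trains two factors instead: A (d_in x r) and B (r x d_out),
    # whose product approximates the weight update.
    return r * (d_in + d_out)

full = full_params(768, 2304)
lora = lora_trainable_params(768, 2304, 16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.1%}")
```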

AI Providers

Set your API key as an environment variable, then pass --provider:

export ANTHROPIC_API_KEY=sk-ant-...
xlmtec ai-suggest "summarise legal documents" --provider claude

export GEMINI_API_KEY=...
xlmtec ai-suggest "summarise legal documents" --provider gemini

export OPENAI_API_KEY=sk-...
xlmtec ai-suggest "summarise legal documents" --provider codex

Example config

model:
  name: gpt2

dataset:
  source: local_file
  path: data/train.jsonl

lora:
  r: 16
  alpha: 32
  target_modules: [c_attn]

training:
  output_dir: output/run1
  num_epochs: 3
  batch_size: 4
  learning_rate: 2e-4
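Once parsed, a config like this is just nested dictionaries. A minimal sketch of the kind of key check that `xlmtec config validate` performs; the required sections listed here are an assumption for illustration, not the tool's real schema:

```python
# Assumed required sections/keys -- xlmtec's actual schema may differ;
# use `xlmtec config validate` for authoritative checks.
REQUIRED = {
    "model": ["name"],
    "dataset": ["source", "path"],
    "training": ["output_dir", "num_epochs", "batch_size", "learning_rate"],
}

def missing_keys(config):
    """Return 'section' or 'section.key' entries absent from a parsed config."""
    problems = []
    for section, keys in REQUIRED.items():
        block = config.get(section)
        if block is None:
            problems.append(section)
            continue
        problems.extend(f"{section}.{key}" for key in keys if key not in block)
    return problems

# The example config above, as the dict yaml.safe_load would produce:
config = {
    "model": {"name": "gpt2"},
    "dataset": {"source": "local_file", "path": "data/train.jsonl"},
    "lora": {"r": 16, "alpha": 32, "target_modules": ["c_attn"]},
    "training": {"output_dir": "output/run1", "num_epochs": 3,
                 "batch_size": 4, "learning_rate": 2e-4},
}
print(missing_keys(config))  # prints []
```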

Development

git clone https://github.com/Abdur-azure/xlmtec.git
cd xlmtec
pip install -e ".[full,dev]"
pytest tests/ -v --ignore=tests/test_integration.py

Changelog

See CHANGELOG.md for full release history.

License

MIT

Download files

Download the file for your platform.

Source Distribution

xlmtec-3.27.0.tar.gz (153.5 kB)


Built Distribution


xlmtec-3.27.0-py3-none-any.whl (149.0 kB)


File details

Details for the file xlmtec-3.27.0.tar.gz.

File metadata

  • Download URL: xlmtec-3.27.0.tar.gz
  • Upload date:
  • Size: 153.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for xlmtec-3.27.0.tar.gz

Algorithm     Hash digest
SHA256        ff2e525a7c1b8c3f102564482ca4317b56d44a9fe43cd473f3b5316eb6bd0947
MD5           d00beb23caaa05c96c7435ea4141e2bd
BLAKE2b-256   fd699b851f2de42ffc75348d7ecd5144f837c9ce2e2f3d02455e778e1fc809b3

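The published hashes let you confirm a download was not corrupted or tampered with. A small stdlib sketch that streams a file through SHA-256 and compares it against the digest listed above:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream a file through SHA-256 so large downloads never load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest for the source distribution, from the table above.
EXPECTED = "ff2e525a7c1b8c3f102564482ca4317b56d44a9fe43cd473f3b5316eb6bd0947"
# assert sha256_of("xlmtec-3.27.0.tar.gz") == EXPECTED
```

`pip` performs an equivalent check automatically when hashes are pinned in a requirements file.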

File details

Details for the file xlmtec-3.27.0-py3-none-any.whl.

File metadata

  • Download URL: xlmtec-3.27.0-py3-none-any.whl
  • Upload date:
  • Size: 149.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for xlmtec-3.27.0-py3-none-any.whl

Algorithm     Hash digest
SHA256        5a11684669aa15249be7194b3b2f80c6435027799d1e6097adce664df04a83cb
MD5           83dd183fee9b7570c9d9e766eea1d989
BLAKE2b-256   b6661e69a470ae607c28a179596f19c763bce3e20f247e59cdcbea8ec18e7379

