
Production-grade LLM fine-tuning framework with CLI


xlmtec


xlmtec is a command-line toolkit for fine-tuning large language models. Describe your task in plain English, get a ready-to-run config, browse HuggingFace models, and train — all from the terminal.


Features

  • AI-powered config generation — describe your task, get a YAML config from Claude, Gemini, or GPT
  • Model Hub browser — search and inspect HuggingFace models without leaving the terminal
  • 5 fine-tuning methods — LoRA, QLoRA, Full, Instruction, DPO
  • Config validation — catch errors before training starts
  • Dry-run mode — preview your training plan without loading a model
  • Rich terminal UI — progress bars, panels, colour output throughout

Installation

# Core (lightweight — no ML deps)
pip install xlmtec

# With training support
pip install xlmtec[ml]

# With AI suggestions (pick your provider)
pip install xlmtec[claude]    # Anthropic
pip install xlmtec[gemini]    # Google
pip install xlmtec[codex]     # OpenAI
pip install xlmtec[ai]        # All three

# Everything
pip install xlmtec[full]

Quickstart

1. Get an AI-generated config

xlmtec ai-suggest "fine-tune a small model for customer support" --provider claude

Outputs a ready-to-run YAML config and the exact command to run.

2. Browse models on HuggingFace

xlmtec hub search "bert" --task text-classification --limit 5
xlmtec hub trending
xlmtec hub info google-bert/bert-base-uncased

3. Validate your config

xlmtec config validate config.yaml

4. Train

# Preview without loading model
xlmtec train --config config.yaml --dry-run

# Start training
xlmtec train --config config.yaml

Commands

Command                                  Description
xlmtec ai-suggest "<task>"               Generate a config from plain English
xlmtec hub search "<query>"              Search HuggingFace models
xlmtec hub info <model-id>               Show model details
xlmtec hub trending                      Top trending models
xlmtec config validate <file>            Validate a YAML config
xlmtec train --config <file>             Fine-tune a model
xlmtec train --config <file> --dry-run   Preview training plan
xlmtec recommend                         Get method recommendation for your hardware
xlmtec evaluate                          Evaluate a fine-tuned model
xlmtec benchmark                         Compare multiple runs
xlmtec merge                             Merge LoRA adapter into base model
xlmtec upload                            Upload model to HuggingFace Hub
xlmtec --version                         Show installed version

Fine-tuning methods

Method        VRAM               Best for
lora          Low (4–8 GB)       Most tasks, fast convergence
qlora         Very low (4 GB)    Large models on limited hardware
full          High (24 GB+)      Best quality, small models
instruction   Low (4–8 GB)       Prompt/response style tasks
dpo           Low (4–8 GB)       Preference learning from pairs
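The VRAM gap in the table comes down to how many parameters carry gradients and optimizer state. A back-of-the-envelope sketch (the byte counts below are common rules of thumb, not xlmtec's internal accounting):

```python
# Rough VRAM estimates for weights + gradients + Adam optimizer state.
# Assumptions: fp16 weights/gradients, fp32 Adam moments, and LoRA training
# roughly 1% of parameters. Activations and framework overhead are ignored.

def full_ft_gib(n_params: float) -> float:
    # fp16 weights (2 B) + fp16 grads (2 B) + fp32 Adam m and v (8 B)
    # + fp32 master weights (4 B) ~= 16 bytes per parameter
    return n_params * 16 / 2**30

def lora_gib(n_params: float, trainable_frac: float = 0.01) -> float:
    # Frozen fp16 base weights (2 B/param) plus full training state
    # only for the small adapter fraction (~14 B per trainable param)
    return (n_params * 2 + n_params * trainable_frac * 14) / 2**30

print(f"7B full fine-tune: {full_ft_gib(7e9):.0f} GiB")  # ~104 GiB
print(f"7B LoRA:           {lora_gib(7e9):.0f} GiB")     # ~14 GiB
```

QLoRA shrinks the first term further by holding the frozen base weights in 4-bit precision, which is why it fits large models on a single consumer GPU.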

AI Providers

Set your API key as an environment variable, then pass --provider:

export ANTHROPIC_API_KEY=sk-ant-...
xlmtec ai-suggest "summarise legal documents" --provider claude

export GEMINI_API_KEY=...
xlmtec ai-suggest "summarise legal documents" --provider gemini

export OPENAI_API_KEY=sk-...
xlmtec ai-suggest "summarise legal documents" --provider codex

Example config

model:
  name: gpt2

dataset:
  source: local_file
  path: data/train.jsonl

lora:
  r: 16
  alpha: 32
  target_modules: [c_attn]

training:
  output_dir: output/run1
  num_epochs: 3
  batch_size: 4
  learning_rate: 2e-4
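The dataset path above points to a JSON Lines file, one example per line. As an illustrative sketch, such a file can be generated like this (the prompt/response field names are an assumption for illustration; check xlmtec's documentation for the exact schema it expects):

```python
import json

# Two toy customer-support examples. The "prompt"/"response" keys are
# assumed field names, not confirmed against xlmtec's dataset loader.
examples = [
    {"prompt": "Where is my order?",
     "response": "Orders ship within 2 business days; check your email for a tracking link."},
    {"prompt": "How do I reset my password?",
     "response": "Use the 'Forgot password' link on the login page."},
]

# Write one JSON object per line, the JSONL convention.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```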

Development

git clone https://github.com/Abdur-azure/xlmtec.git
cd xlmtec
pip install -e ".[full,dev]"
pytest tests/ -v --ignore=tests/test_integration.py

Changelog

See CHANGELOG.md for full release history.

License

MIT

