
LLM fine-tuning hardware planner with CLI and API.


Train in Silence

The first Task-Aware MCP server for LLM fine-tuning. Stop comparing GPU prices. Start training.



You want to fine-tune an LLM. You open Vast.ai, RunPod, AWS, and more -- a dozen tabs, a dozen pricing models, a dozen different ways to describe a GPU. Which options can actually run your job, and which is cheapest or fastest? An hour later you're still in a spreadsheet and haven't written a single line of training code.

Train in Silence is the first Task-Aware MCP server for LLM fine-tuning. It doesn't just list prices; it understands your workload. Describe your training job once, and it estimates the required VRAM and FLOPs, then returns the cheapest, fastest, and most balanced hardware options across a dozen cloud providers -- in seconds.
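To make "estimates the required VRAM" concrete, here is a minimal back-of-envelope sketch of the kind of sizing a task-aware planner performs. This is not TIS's actual estimator -- the function name and the 4x mixed-precision multiplier (weights + gradients + Adam optimizer states, before activations) are illustrative rules of thumb:

```python
def estimate_vram_gb(params_b: float, bytes_per_param: int = 2,
                     optimizer_multiplier: float = 4.0) -> float:
    """Rough VRAM estimate for full fine-tuning.

    params_b: model size in billions of parameters.
    bytes_per_param: 2 for fp16/bf16 weights.
    optimizer_multiplier: a common rule of thumb for weights +
    gradients + Adam states in mixed precision (~4x the weight
    memory), before counting activations.
    """
    weight_gb = params_b * 1e9 * bytes_per_param / 1e9
    return weight_gb * optimizer_multiplier

# A 7B model in bf16 needs on the order of 56 GB before activations,
# which already rules out a single 48 GB card for full fine-tuning:
print(estimate_vram_gb(7))  # -> 56.0
```

Estimates like this are what let a planner discard non-viable offers before ever comparing prices.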

Quickstart

Option A: Ask Claude Code (recommended)

Install the library and register it as a tool in Claude Code:

pip install train-in-silence
claude mcp add tis --scope user -- tis-mcp

Then just ask in natural language:

> I want to run the fine-tune code in my current directory, and finish it within 20 hours.
  Find me the best GPU options across Vast.ai, RunPod, and Lambda.

Claude Code calls TIS behind the scenes and returns a structured recommendation -- no YAML, no config files, no manual comparison.

Option B: CLI

pip install train-in-silence

$ tis recommend examples/request.yaml

  Found 5 viable configurations
  Lowest cost: $4.32 | Fastest runtime: 2.1 hours

  #1 [cheapest]  RunPod 1x A6000 (48 GB)    $4.32 / 6.8 h
  #2 [fastest]   Vast.ai 2x A100 (80 GB)    $9.10 / 2.1 h
  #3 [balanced]  RunPod 1x A100 (80 GB)     $6.40 / 3.2 h
  ...

Note: Output above is illustrative. Actual results depend on live market data.
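For orientation, a request file might look like the sketch below. The field names here are assumptions for illustration, not the actual schema -- see examples/request.yaml in the repository for the real format:

```yaml
# Illustrative shape only; field names are hypothetical.
model: meta-llama/Llama-3.1-8B
method: lora
dataset_tokens: 50000000
epochs: 3
max_hours: 20
providers: [vastai, runpod, lambda]
```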

Use It Your Way

| Channel | Command | Docs |
| --- | --- | --- |
| CLI | tis recommend request.yaml | CLI Guide |
| REST API | uvicorn tis.api.server:app | API Reference |
| Claude Code | claude mcp add tis --scope user -- tis-mcp | MCP Guide |
| Claude Desktop | Add tis-mcp to claude_desktop_config.json | MCP Guide |
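For the Claude Desktop row, the standard MCP server entry format looks like this -- assuming tis-mcp is on your PATH after pip install:

```json
{
  "mcpServers": {
    "tis": {
      "command": "tis-mcp"
    }
  }
}
```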

Market Providers

TIS aggregates live pricing across a dozen GPU clouds. API keys are optional: if not provided, TIS automatically falls back sequentially to universal live aggregators (GPUHunt/GPUFinder) or bundled sample data.
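The sequential fallback described above can be sketched as a simple chain: try each source in order and take the first one that returns offers. This is an illustrative pattern, not TIS's internal code -- the fetcher functions are hypothetical stand-ins for the official provider APIs, the GPUHunt/GPUFinder aggregators, and the bundled sample data:

```python
def fetch_offers_with_fallback(sources):
    """Try each pricing source in order and return the first
    non-empty offer list, tagged with its source-of-truth label."""
    for name, fetch in sources:
        try:
            offers = fetch()
        except Exception:
            continue  # source unreachable, try the next one
        if offers:
            return name, offers
    raise RuntimeError("no pricing source returned offers")

def _aggregator_down():
    raise TimeoutError("aggregator unreachable")

# Hypothetical fetchers for illustration:
sources = [
    ("live:official", lambda: []),      # no API key -> nothing returned
    ("live:gpuhunt", _aggregator_down), # aggregator times out
    ("sample", lambda: [{"gpu": "A100 80GB", "usd_per_hr": 1.89}]),
]
source, offers = fetch_offers_with_fallback(sources)
print(source)  # -> sample
```

Tagging each result with the source name is what lets every recommendation carry its Source of Truth downstream.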

| Provider Class | Included Platforms | Auth Required |
| --- | --- | --- |
| Dedicated | Vast.ai, RunPod | Optional (highly recommended) |
| Aggregated | Vast.ai, RunPod, AWS, CoreWeave, Lambda Labs, Tensordock, Vultr, GCP, Azure, OCI, Nebius, CloudRift, Cudo Compute, Verda | None (auto-fallback) |

Every recommendation clearly identifies its Source of Truth (e.g., live:official, live:gpuhunt, live:gpufinder, or sample) so you always know how fresh the data is. -> Provider details

Architecture at a Glance

YAML request -> Estimator -> Market Aggregator -> Optimizer -> Pareto Frontier -> Ranked Output
                  |                |                 |
              VRAM/FLOPs     10+ GPU Clouds    Cost vs. Time

Each recommendation shows where the data came from (live or sample) and flags any estimated fields -- no silent guesswork. -> Architecture deep-dive
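The Pareto Frontier step in the pipeline above keeps only offers that no other offer beats on both cost and time. A minimal sketch of that idea, using made-up numbers in the style of the quickstart output (not real market data):

```python
def pareto_frontier(options):
    """Keep options not dominated on (cost, hours): an option is
    dominated if another option is no worse on both axes and
    strictly better on at least one."""
    frontier = []
    for a in options:
        dominated = any(
            b["cost"] <= a["cost"] and b["hours"] <= a["hours"]
            and (b["cost"] < a["cost"] or b["hours"] < a["hours"])
            for b in options
        )
        if not dominated:
            frontier.append(a)
    return sorted(frontier, key=lambda o: o["cost"])

options = [
    {"name": "1x A6000", "cost": 4.32, "hours": 6.8},
    {"name": "2x A100",  "cost": 9.10, "hours": 2.1},
    {"name": "1x A100",  "cost": 6.40, "hours": 3.2},
    {"name": "1x V100",  "cost": 7.00, "hours": 8.0},  # dominated by the A6000
]
print([o["name"] for o in pareto_frontier(options)])
# -> ['1x A6000', '1x A100', '2x A100']
```

The surviving frontier maps directly onto the cheapest / balanced / fastest labels in the ranked output.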

Known Limitations

  • Estimation model is fixed with no built-in calibration; future versions will calibrate using real runtimes.
  • Upstream Provider API schema changes will require synchronized mapping updates.

🚧 Project Status & Contribution

This project is currently in an experimental development stage.

  • Issues & Suggestions: If you encounter any bugs, inaccurate estimations, or have suggestions for improvement, please feel free to submit a GitHub Issue.
  • Contribute: If you'd like to improve the code or supplement hardware metadata, Pull Requests are highly welcome! We look forward to refining this LLM hardware planner with the community.
