LLM fine-tuning hardware planner with CLI and API.
Train in Silence
The first Task-Aware MCP server for LLM fine-tuning. Stop comparing GPU prices. Start training.

You want to fine-tune an LLM. You open Vast.ai, RunPod, AWS, and more: a dozen tabs, a dozen pricing models, a dozen different ways to describe a GPU. Which option can actually run your job, and which does it most cheaply and quickly? An hour later you're still in a spreadsheet and haven't written a single line of training code.
Train in Silence is the first Task-Aware MCP server for LLM fine-tuning. It doesn't just list prices; it understands your workload. Describe your training job once, and it calculates the required VRAM/FLOPs to return the cheapest, fastest, and most balanced hardware options across a dozen cloud providers -- in seconds.
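The VRAM side of such an estimate can be sketched with a standard rule of thumb. This is a generic back-of-envelope calculation, not TIS's internal estimator: full fine-tuning with Adam in mixed precision needs roughly 12 bytes per parameter (fp16 weights + fp16 gradients + fp32 optimizer states), plus activation overhead.

```python
# Generic VRAM rule of thumb for full fine-tuning -- NOT the library's formula.
# 12 bytes/param ~= 2 (fp16 weights) + 2 (fp16 grads) + 8 (fp32 Adam states).
def estimate_vram_gb(params_billion: float, bytes_per_param: int = 12,
                     activation_overhead: float = 1.2) -> float:
    base_gb = params_billion * bytes_per_param  # 1e9 params * N bytes ~= N GB
    return base_gb * activation_overhead

# A 7B model lands around ~100 GB for full fine-tuning, which is why
# multi-GPU setups or parameter-efficient methods like LoRA are common.
print(round(estimate_vram_gb(7), 1))  # -> 100.8
```

Numbers like this explain why "which GPU fits my job" is not answerable from a price list alone.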
Quickstart
Option A: Ask Claude Code (recommended)
Install the library and register it as a tool in Claude Code:
pip install train-in-silence
claude mcp add tis --scope user -- tis-mcp
Then just ask in natural language:
> I want to run the fine-tune code in my current directory, and finish it within 20 hours.
Find me the best GPU options across Vast.ai, RunPod, and Lambda.
Claude Code calls TIS behind the scenes and returns a structured recommendation -- no YAML, no config files, no manual comparison.
Option B: CLI
pip install train-in-silence
tis recommend examples/request.yaml
Found 5 viable configurations
Lowest cost: $4.32 | Fastest runtime: 2.1 hours
#1 [cheapest] RunPod 1x A6000 (48 GB) $4.32 / 6.8 h
#2 [fastest] Vast.ai 2x A100 (80 GB) $9.10 / 2.1 h
#3 [balanced] RunPod 1x A100 (80 GB) $6.40 / 3.2 h
...
Note: Output above is illustrative. Actual results depend on live market data.
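For reference, a request file like examples/request.yaml could look like the sketch below. The field names here are illustrative assumptions, not the library's confirmed schema; see the shipped example for the real format.

```yaml
# Hypothetical request sketch -- field names are assumptions, not the real schema.
model: meta-llama/Llama-3.1-8B
method: lora              # or full fine-tuning
dataset_tokens: 50000000
max_hours: 20
providers: [vastai, runpod, lambda]
```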
Use It Your Way
| Channel | Command | Docs |
|---|---|---|
| CLI | `tis recommend request.yaml` | CLI Guide |
| REST API | `uvicorn tis.api.server:app` | API Reference |
| Claude Code | `claude mcp add tis --scope user -- tis-mcp` | MCP Guide |
| Claude Desktop | Add `tis-mcp` to `claude_desktop_config.json` | MCP Guide |
Market Providers
TIS aggregates live pricing across a dozen GPU clouds. API keys are optional: if none are provided, TIS falls back, in order, to universal live aggregators (GPUHunt/GPUFinder) and finally to bundled sample data.
| Provider Class | Included Platforms | Auth Required |
|---|---|---|
| Dedicated | Vast.ai, RunPod | Optional (Highly Recommended) |
| Aggregated | Vast.ai, RunPod, AWS, CoreWeave, Lambda Labs, Tensordock, Vultr, GCP, Azure, OCI, Nebius, CloudRift, Cudo Compute, Verda | None (Auto-fallback) |
Every recommendation clearly identifies its Source of Truth (e.g., live:official, live:gpuhunt, live:gpufinder, or sample) so you always know how fresh the data is. -> Provider details
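The fallback behavior described above follows a common sequential pattern: try each source in priority order and tag whatever succeeds with its origin label. The sketch below is a generic illustration using the README's source labels; the fetcher signatures are hypothetical, not TIS's real API.

```python
# Sequential-fallback sketch. Source labels mirror the README;
# the fetch callables and return shape are illustrative assumptions.
from typing import Callable, Optional

def fetch_offers(sources: list[tuple[str, Callable[[], Optional[list]]]]):
    for label, fetch in sources:
        try:
            offers = fetch()
        except Exception:
            continue  # a failing source never blocks the next one
        if offers:
            return label, offers  # e.g. ("live:official", [...])
    return "sample", []  # bundled sample data as the last resort

label, offers = fetch_offers([
    ("live:official", lambda: None),  # no API key configured -> no data
    ("live:gpuhunt", lambda: [{"gpu": "A100", "usd_hr": 1.9}]),
])
print(label)  # -> live:gpuhunt
```

The returned label is what lets every recommendation carry its source of truth downstream.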
Architecture at a Glance
YAML request -> Estimator -> Market Aggregator -> Optimizer -> Pareto Frontier -> Ranked Output

- Estimator: VRAM/FLOPs requirements
- Market Aggregator: 10+ GPU clouds
- Optimizer: cost vs. time trade-off
Each recommendation shows where the data came from (live or sample) and flags any estimated fields -- no silent guesswork. -> Architecture deep-dive
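The cost-vs-time Pareto step can be sketched in a few lines: an option survives only if no other option is both cheaper and faster. This is a generic illustration with made-up numbers, not TIS's optimizer code.

```python
# Minimal cost-vs-time Pareto frontier sketch with illustrative data.
def pareto_frontier(options):
    # options: list of (name, cost_usd, hours)
    frontier = []
    for name, cost, hours in options:
        dominated = any(c <= cost and h <= hours and (c < cost or h < hours)
                        for _, c, h in options)
        if not dominated:
            frontier.append((name, cost, hours))
    return frontier

options = [("1x A6000", 4.32, 6.8), ("2x A100", 9.10, 2.1),
           ("1x A100", 6.40, 3.2), ("1x V100", 7.00, 8.0)]
# 1x V100 is both pricier and slower than 1x A6000, so it is dropped;
# the other three each win on cost, speed, or the balance between them.
print(pareto_frontier(options))
```

Ranking the surviving frontier points is what yields the cheapest / fastest / balanced picks shown in the CLI output.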
Known Limitations
- The estimation model is fixed, with no built-in calibration; future versions will calibrate against real measured runtimes.
- Changes to upstream provider API schemas require matching updates to TIS's field mappings.
🚧 Project Status & Contribution
This project is currently in an experimental stage of development.
- Issues & Suggestions: If you encounter any bugs, inaccurate estimations, or have suggestions for improvement, please feel free to submit a GitHub Issue.
- Contribute: If you'd like to improve the code or supplement hardware metadata, Pull Requests are highly welcome! We look forward to refining this LLM hardware planner with the community.
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file train_in_silence-0.1.5.tar.gz.
File metadata
- Download URL: train_in_silence-0.1.5.tar.gz
- Upload date:
- Size: 56.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `51454958699e5faf8ad2c136aace6a31d23c053258ae60f409085c30797f186f` |
| MD5 | `3a5495ecbad768f93e7257d5e7e226ec` |
| BLAKE2b-256 | `7e4ca1b88019646f33af67317f970f9b1c1c23fcdfcc91e743347603aede8023` |
File details
Details for the file train_in_silence-0.1.5-py3-none-any.whl.
File metadata
- Download URL: train_in_silence-0.1.5-py3-none-any.whl
- Upload date:
- Size: 63.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `654f1d041a30c223d7810812e85bc3023f4315dbf852987cf0f52dca0aeb01f2` |
| MD5 | `7a951c45c2e86361f6ef6f354c63df4b` |
| BLAKE2b-256 | `7992d1cfa84de7e7cd372a73bfe29c37ef40f198124433838f9fe565cb967520` |