Implementations of post-training algorithms using the Tinker API
Tinker Cookbook
We provide two libraries for the broader community to customize their language models: tinker and tinker-cookbook.

- tinker is a training SDK for researchers and developers to fine-tune language models. You send API requests to us, and we handle the complexities of distributed training.
- tinker-cookbook includes realistic examples of fine-tuning language models. It builds on the Tinker API and provides common abstractions for fine-tuning language models.
Installation
- Sign up for Tinker here.
- Once you have access, create an API key from the console and export it as the environment variable TINKER_API_KEY.
- Install tinker-cookbook (includes the tinker SDK as a dependency):

```shell
# Latest stable release from PyPI
uv pip install tinker-cookbook

# Or install the nightly build
uv pip install 'tinker-cookbook @ git+https://github.com/thinking-machines-lab/tinker-cookbook.git@nightly'
```
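Before making any API calls, it can help to confirm the key is actually visible to Python. A minimal stdlib-only sanity check (the helper name `tinker_env_ready` is introduced here for illustration, not part of the SDK):

```python
import os

def tinker_env_ready(env=os.environ):
    """Return True if TINKER_API_KEY is set to a non-empty value."""
    return bool(env.get("TINKER_API_KEY"))

if __name__ == "__main__":
    print("TINKER_API_KEY set:", tinker_env_ready())
```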
Tinker
Refer to the docs to start from the basics. Here we introduce a few Tinker primitives, the basic components for fine-tuning LLMs:
```python
import tinker

service_client = tinker.ServiceClient()
training_client = service_client.create_lora_training_client(
    base_model="meta-llama/Llama-3.2-1B", rank=32,
)

# Training primitives
training_client.forward_backward(...)
training_client.optim_step(...)
training_client.save_state(...)
training_client.load_state(...)

# Sampling primitives
sampling_client = training_client.save_weights_and_get_sampling_client()
sampling_client.sample(...)
```
See tinker_cookbook/recipes/sl_loop.py and tinker_cookbook/recipes/rl_loop.py for minimal examples of using these primitives to fine-tune LLMs.
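To show how the training primitives compose, here is a minimal sketch of a supervised loop. The exact argument shapes of `forward_backward` and `optim_step` (batch format, optimizer config) are assumptions here; see tinker_cookbook/recipes/sl_loop.py for the real loop.

```python
def train(training_client, batches):
    """One forward/backward pass plus an optimizer step per batch."""
    for batch in batches:
        training_client.forward_backward(batch)  # accumulate gradients on the batch
        training_client.optim_step()             # apply the optimizer update
    training_client.save_state()                 # checkpoint when done
```

The loop is deliberately generic: any client object exposing these three methods can drive it.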
Tutorials
New to Tinker? The tutorials/ directory has 6 progressive marimo notebooks that guide you from your first API call to building custom RL training pipelines:
| # | Notebook | What you'll learn |
|---|---|---|
| 01 | Hello Tinker | Architecture overview, client hierarchy, sampling |
| 02 | First SFT | Renderers, datum construction, training loop, Kimi K2.5 scaling demo |
| 03 | Efficient Sampling | Concurrent futures, num_samples, batch evaluation throughput |
| 04 | First RL | GRPO on GSM8K: rewards, advantages, degenerate groups |
| 05 | Cookbook RL Abstractions | Env, EnvGroupBuilder, RLDataset, ProblemEnv |
| 06 | Custom RL Environment | Build your own ProblemEnv and RLDataset |
Run any tutorial with marimo edit tutorials/01_hello_tinker.py. Rendered versions are available on the Tinker docs site.
To download the weights of any model:
```python
rest_client = service_client.create_rest_client()
future = rest_client.get_checkpoint_archive_url_from_tinker_path(sampling_client.model_path)
with open("model-checkpoint.tar.gz", "wb") as f:
    f.write(future.result())
```
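Once downloaded, the archive can be unpacked with the standard library. A small sketch (the helper name `extract_checkpoint` is introduced here for illustration, not part of the SDK):

```python
import tarfile
from pathlib import Path

def extract_checkpoint(archive_path, dest_dir):
    """Extract a .tar.gz checkpoint archive and return its member names."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        names = tar.getnames()   # list members before extracting
        tar.extractall(dest_dir)
        return names
```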
Tinker Cookbook
Besides these primitives, we also offer the Tinker Cookbook (a.k.a. this repo), a library of abstractions to help you customize training environments.
tinker_cookbook/recipes/sl_basic.py and tinker_cookbook/recipes/rl_basic.py contain minimal examples to configure supervised learning and reinforcement learning.
We also include a wide range of more sophisticated examples in the tinker_cookbook/recipes/ folder:
- Chat supervised learning: supervised fine-tuning on conversational datasets like Tulu3.
- Math reasoning: improve LLM reasoning capability by rewarding it for answering math questions correctly.
- Preference learning: showcases a three-stage RLHF pipeline: 1) supervised fine-tuning, 2) learning a reward model, 3) RL against the reward model.
- Tool use: train LLMs to better use retrieval tools to answer questions more accurately.
- Prompt distillation: internalize long and complex instructions into LLMs.
- Multi-Agent: optimize LLMs to play against another LLM or themselves.
Each example lives in its own subfolder, and its README.md walks you through the key implementation details, the commands to run it, and the expected performance.
Documentation
For the full Tinker documentation, visit tinker-docs.thinkingmachines.ai.
Import our utilities
Tinker Cookbook includes several utilities. Here's a quick overview:

- renderers converts tokens to and from structured chat message objects.
- hyperparam_utils helps calculate hyperparameters suitable for LoRA.
- evaluation provides abstractions for evaluating Tinker models, and inspect_evaluation shows how to integrate with InspectAI to make evaluating on standard benchmarks easy.
Claude Code Skills
Tinker Cookbook ships with Claude Code skills that teach Claude how to use the Tinker API. Install them so Claude can help you write training code in any project:
/plugin marketplace add thinking-machines-lab/tinker-cookbook
Then install the tinker plugin from the Discover tab (/plugin → Discover). Once installed, the following skills are available:
| Command | What it does |
|---|---|
| /tinker:core | Getting started: installation, models, SDK basics, hyperparameters |
| /tinker:sft | Supervised fine-tuning, datasets, renderers, distillation |
| /tinker:rl | Reinforcement learning: GRPO, custom environments, multi-turn |
| /tinker:preferences | DPO and RLHF pipelines |
| /tinker:ops | Checkpoints, weight export, logging, evaluation |
| /tinker:debug | Diagnose slow training, hangs, output mismatches, errors |
| /tinker:dev | Contributing to this repo: tests, CI, recipes |
Skills also trigger automatically based on context — ask Claude to "set up SFT training" and it will load the right skill without a slash command. Skills update automatically when the repo is updated.
Development Setup
```shell
uv sync --extra dev
pre-commit install
```
This installs dev dependencies and registers pre-commit hooks that run ruff formatting and linting on every commit. CI enforces these checks on all pull requests.
Contributing
This project is built in the spirit of open science and collaborative development. We believe that the best tools emerge through community involvement and shared learning.
We welcome PR contributions after our private beta is over. If you have any feedback, please email us at tinker@thinkingmachines.ai.
Citation
If you use Tinker for your research, please cite it as:
Thinking Machines Lab, 2025. Tinker. https://thinkingmachines.ai/tinker/.
Or use this BibTeX citation:
```bibtex
@misc{tml2025tinker,
  author = {Thinking Machines Lab},
  title = {Tinker},
  year = {2025},
  url = {https://thinkingmachines.ai/tinker/},
}
```