Nano: A minimal, zero-frills coding-agent for research on agent-in-the-loop training
A minimal, no‑magic coding‑agent for:
- agent‑in‑the‑loop reinforcement learning
- understanding coding agents in clear, minimal terms
- running neat little code fixes with modern LLMs
What it is
Nano is a zero‑bloat wrapper that turns any tool-enabled LLM into a coding agent with two tools:

```
shell(cmd)          # ls, cat, grep … (stateful, runs in rbash)
apply_patch({...})  # search/replace on one file
```
Note: Nano runs commands in rbash (restricted bash), which provides a safer execution environment by limiting access to certain operations.
Nothing else.
No internal state modeling, no fuzzy patching, no hidden prompts or repo graphs.
You get the raw reasoning, tool calls, and results: exactly what the model saw and did.
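To make the search/replace tool concrete, here is a hedged sketch of what such a tool can do. The exact payload schema and failure behavior of Nano's `apply_patch` are assumptions here, not its documented API:

```python
def apply_patch(text: str, search: str, replace: str) -> str:
    """Replace exactly one occurrence of `search`; fail loudly otherwise.
    Illustrative sketch only -- not Nano's actual implementation."""
    count = text.count(search)
    if count != 1:
        raise ValueError(f"search block matched {count} times, expected exactly 1")
    return text.replace(search, replace, 1)

source = "def add(a, b):\n    return a - b\n"
patched = apply_patch(source, "return a - b", "return a + b")
print(patched)
```

Requiring a unique match keeps patches unambiguous: the model must quote enough surrounding context to pin down one location, which is easy to verify and easy to learn from.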
Why it exists
Most coding agents (e.g. Aider, SWE-Agent, Devin) are designed to perform well. To achieve that, they bake in layers of human-designed heuristics:
navigation memory, prompt rewriting, hand-crafted repo maps, retry logic...
These make agents more capable, but also more opaque. They're hard to analyze, and thus harder to adopt.
Nano takes the opposite stance:
Inspired by The Bitter Lesson, we believe that long-term performance comes not from human intuition, but from letting models learn their own strategies, even if they start out worse.
That's what Nano tries to provide.
Install
```shell
git clone git@github.com:BjarniHaukur/nano-agent.git && cd nano-agent && pip install -e .
# or
pip install nano-agent  # TODO: publish
```
Then you just need an API key for your chosen provider, or host the model yourself with vLLM. See the litellm documentation for more details.
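One possible setup, assuming OpenRouter as the provider (the key below is a placeholder, and the vLLM command is one common way to self-host an OpenAI-compatible endpoint, not something this README prescribes):

```shell
# Hosted provider: litellm-style routing reads the key from the environment.
export OPENROUTER_API_KEY="sk-or-placeholder"

# Or self-host with vLLM (example model name, example port):
# vllm serve Qwen/Qwen3-8B --port 8000
```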
Example: rollout to tensors
```python
from transformers import AutoTokenizer

from nano_agent import Agent

agent = Agent(model="openrouter/qwen/qwen3-8b", thinking=True)
agent.run("There is a bug in this repo...", repo_root=".")

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
tokens = tokenizer.apply_chat_template(
    agent.messages,
    tools=agent.tools,
    tokenize=True,
    return_tensors="pt",
)
```
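Once you have the message list, RL fine-tuning typically applies loss only to the assistant's own tokens. A framework-agnostic sketch of masking at message granularity (this helper is ours, not part of Nano or transformers; per-token masks follow the same idea after tokenization):

```python
def assistant_mask(messages):
    """1 for messages whose tokens should receive loss, 0 otherwise."""
    return [1 if m["role"] == "assistant" else 0 for m in messages]

# Toy transcript in the standard OpenAI chat format:
messages = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "Fix the bug."},
    {"role": "assistant", "content": "<think>...</think> I'll run ls."},
    {"role": "tool", "content": "main.py"},
    {"role": "assistant", "content": "Patched."},
]
print(assistant_mask(messages))  # → [0, 0, 1, 0, 1]
```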
Example: minimal SWE‑Gym rollout
```python
import tempfile

from git import Repo  # GitPython
from datasets import load_dataset

from nano_agent import Agent

run = load_dataset("SWE-Gym/SWE-Gym", split="train[:1]")[0]
tempdir = tempfile.mkdtemp()
Repo.clone_from(f"https://github.com/{run['repo']}.git", tempdir)

agent = Agent(
    model="hosted_vllm/qwen/qwen3-8b",
    api_base="http://localhost:8000/v1",
    thinking=True,  # enables <think> ... </think> reasoning blocks
)

diff = agent.run(run["problem_statement"], repo_root=tempdir)

print(diff)  # the unified diff produced by the agent
print(agent.messages, agent.tools)  # also logged under .nano/<timestamp>/
```
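Since `agent.run()` returns a unified diff, reward shaping can start from simple diff inspection. A small helper of our own (not part of Nano) that lists the files a unified diff touches, based on the standard `+++ b/...` headers:

```python
def files_touched(diff: str) -> list[str]:
    """Extract target paths from unified-diff headers (`+++ b/...` lines)."""
    files = []
    for line in diff.splitlines():
        if line.startswith("+++ "):
            path = line[4:].strip()
            if path.startswith("b/"):
                path = path[2:]
            if path != "/dev/null":  # deleted files have no target path
                files.append(path)
    return files

example = "--- a/src/app.py\n+++ b/src/app.py\n@@ -1 +1 @@\n-x = 1\n+x = 2\n"
print(files_touched(example))  # → ['src/app.py']
```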
Use with HuggingFace TRL
Because Nano can communicate with any OpenAI-compatible endpoint and produces token-level message logs, it works "cleanly" as a data generator inside TRL's GRPOTrainer.
Note: "cleanly" refers to modifications made in our TRL fork to enable direct agent integration. These changes support the CodeRepairRL project but may not be merged into the main HuggingFace repository.
To use it:
- Write a rollout client that wraps Agent.run()
- Extract the diff and messages for each training example
- Feed those into TRL's reward modeling or fine-tuning pipelines
This lets you train models that learn to use tools directly, grounded in interaction data — no custom env needed.
The agent will often fail at first; those failures are useful training signal. With outcome supervision we can directly reinforce the behaviors that lead to successful fixes, progressively refining the agent's strategies.
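The steps above can be sketched as a grouped rollout function. The `Agent` usage follows the earlier examples, but the grouping, the reward rule, and `StubAgent` are our own illustrative assumptions, not Nano or TRL APIs:

```python
def rollout_group(agent_factory, task, repo_root, reward_fn, n=4):
    """Run n independent agents on one task and score each outcome
    (a group of scored rollouts is the unit GRPO-style training compares)."""
    group = []
    for _ in range(n):
        agent = agent_factory()
        diff = agent.run(task, repo_root=repo_root)  # wrap Agent.run()
        group.append({
            "messages": agent.messages,              # token-level log for the trainer
            "diff": diff,
            "reward": reward_fn(diff),               # outcome supervision
        })
    return group

# Toy stand-ins so the sketch runs without an LLM endpoint:
class StubAgent:
    def run(self, task, repo_root):
        self.messages = [{"role": "assistant", "content": "patched"}]
        return "+++ b/fix.py"

rollouts = rollout_group(StubAgent, "fix the bug", ".", lambda d: float(bool(d)))
print(len(rollouts), rollouts[0]["reward"])  # → 4 1.0
```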
Citation
```bibtex
@misc{nano-agent2025,
  author       = {Bjarni Haukur},
  title        = {Nano: a minimalist coding agent for agent-in-the-loop training},
  howpublished = {\url{https://github.com/BjarniHaukur/nano-agent}},
  year         = {2025}
}
```
File details
Details for the file nano_agent-0.1.2.tar.gz (source distribution).
- Size: 10.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11

| Algorithm | Hash digest |
|---|---|
| SHA256 | deb18ab7167d22c8936bb0ee66e30e82adf8bd45f03b441c36021f7043137b16 |
| MD5 | 2e21c2ecf229ddcef1dd18c0d06383b4 |
| BLAKE2b-256 | d58df6ec958433207aa0bcfbd5be4e4f9f1cb1fb8b5ef9210eaa9c0d5995d929 |
File details
Details for the file nano_agent-0.1.2-py3-none-any.whl (built distribution).
- Size: 9.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11

| Algorithm | Hash digest |
|---|---|
| SHA256 | 9d71c35c184e58ac121c4c083a8bc7951a27ceb0acf7fcf04e8c1dd8b41cedb0 |
| MD5 | 6edfe82b500cce5018c47d0980f29cd7 |
| BLAKE2b-256 | 22b4a333d6b63722addaeeaf06c2aaa53ac335c1e43ea5ea30e323b4024b127b |