Lightweight all-in-one framework for building, instrumenting, and fine-tuning LLM code agents.
Project description
HAMLET is a lightweight, all-in-one framework for building and iterating on LLM agents. It lets you define agents, launch a Gradio GUI, instrument runs with Langfuse, and fine-tune models using the built-in GRPO trainer. The framework builds primarily on two open-source projects: smolagents for the core agent structure (src/hamlet/core) and verifiers for the training stack (src/hamlet/train).
Installation
Install the published package when you just need the library, or clone the repo if you plan to modify HAMLET.
Option A: Install the package
# inside your project
uv add minds-hamlet
# or with pip
pip install minds-hamlet
This exposes the hamlet Python package for imports.
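To confirm the install worked, a quick stdlib check (a minimal sketch; it only verifies that the hamlet package described above is importable):

```python
import importlib.util

# Look up the 'hamlet' package without importing it. find_spec returns
# None when the distribution is not installed in the current environment.
spec = importlib.util.find_spec("hamlet")
if spec is None:
    print("hamlet not found: run 'uv add minds-hamlet' or 'pip install minds-hamlet'")
else:
    print(f"hamlet available at {spec.origin}")
```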
Option B: Work from source
Follow the steps below to set up a local development environment. The project uses uv to manage Python dependencies because it keeps dependency resolution fast and lock files reproducible. You can still use pip, but uv is the recommended path.
1. Prerequisites
- Python 3.10 or newer
- uv (install with the command below or follow the instructions in the uv repository)
pip install uv
2. Clone the repository
git clone https://github.com/MINDS-THU/HAMLET.git
cd HAMLET
3. Install dependencies
Install the base runtime:
uv sync
Optional extras:
- uv sync --extra tools for the toolchain utilities (file editing, retrieval, visual QA, etc.).
- uv sync --extra train for the training stack (GRPO trainer, vLLM client, etc.).
4. Use the environment
- Run commands inside the uv-managed environment with uv run, e.g. uv run pytest or uv run python examples\gradio_gui_example.py.
- Alternatively, activate the virtual environment directly: .\.venv\Scripts\activate on Windows or source .venv/bin/activate on Unix shells.
5. Configure API keys
Create a .env file (or export environment variables) with the credentials your agent or tools need. Common entries include OPENAI_API_KEY, OPENAI_BASE_URL, and any service-specific tokens (HuggingFace, Langfuse, etc.). The examples load this file via dotenv, so keep it at the repo root and never commit it.
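As a minimal stdlib sketch of what that dotenv loading amounts to (the examples actually use the python-dotenv package; the variable names here are illustrative):

```python
import os

def load_env_file(path=".env"):
    """Load KEY=VALUE pairs from a .env file into os.environ,
    skipping blank lines and comments. Existing variables win."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env_file()
api_key = os.getenv("OPENAI_API_KEY")  # None if neither .env nor the shell set it
```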
Getting Started
All commands below assume you are in the repository root after running uv sync.
1. Gradio GUI walkthrough
uv run python examples\gradio_gui_example.py
Launches the interactive UI so you can chat with an agent, observe tool traces, and inspect any artifacts the agent generates.
2. Parallel code blocks
uv run python examples\parallel_code_blocks_example.py
Shows how HAMLET dispatches multiple LLM-generated code blocks concurrently and uses an optional early-stop snippet (Early Stop Strategy: code) to cancel the remaining executions once a shared goal is met.
Example run:
Thought:
I'll compute the sum of squares 1^2 + 2^2 + ... + 75^2 using two independent strategies:
1) Closed-form formula: n(n+1)(2n+1)/6.
2) Direct summation (Python sum over a generator).
I'll print the computed totals (using the same variable name total in both blocks). I'll use Early Stop Strategy: code to check that the computed total lies within the 0–500000 bound (0 <= total <= 500000).
Code#1:
<code>
# Code#1: closed-form formula for sum of squares
n = 75
total = n * (n + 1) * (2 * n + 1) // 6
print("Method: formula, total:", total)
</code>
Code#2:
<code>
# Code#2: direct iterative summation
total = sum(i * i for i in range(1, 76))
print("Method: iterative sum, total:", total)
</code>
Early Stop Strategy: code
Early Stop Code:
<code>
# Early Stop Code: check whether 'total' is within the requested 0-500000 bound
early_stop_result = (0 <= total <= 500000)
</code>
Log excerpt:
Early stop strategy detected: 'code'. Will check each code execution result accordingly.
Execution logs for Code#1 (2 in total):
Method: formula, total: 143450
Executing parsed early stop code for Code#1 (2 in total)...
Evaluation for the execution result of Code#1 passed (2 in total), other code executions will be cancelled.
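The dispatch-and-cancel flow above can be sketched with stdlib concurrency (a hypothetical illustration, not HAMLET's internal API; the strategy and check functions mirror the example's two code blocks and its early-stop condition):

```python
import concurrent.futures

def sum_squares_formula(n=75):
    # Strategy 1: closed-form formula n(n+1)(2n+1)/6.
    return n * (n + 1) * (2 * n + 1) // 6

def sum_squares_iterative(n=75):
    # Strategy 2: direct summation over a generator.
    return sum(i * i for i in range(1, n + 1))

def early_stop(total):
    # Mirrors the Early Stop Code: accept totals within 0..500000.
    return 0 <= total <= 500_000

def run_with_early_stop(strategies):
    """Run all strategies concurrently; return the first result that
    passes the early-stop check and cancel any not-yet-started work."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn) for fn in strategies]
        for fut in concurrent.futures.as_completed(futures):
            result = fut.result()
            if early_stop(result):
                for other in futures:
                    other.cancel()  # only cancels futures that have not begun
                return result
    return None

print(run_with_early_stop([sum_squares_formula, sum_squares_iterative]))  # 143450
```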
3. Structured schema I/O
uv run python examples\structured_schema_example.py
Demonstrates attaching JSON schemas to agent inputs and outputs so downstream automation receives validated, typed payloads. The sample defines:
class PersonaRequest(BaseModel):
    persona: str
    goal: str
    tone: str

class PersonaResponse(BaseModel):
    summary: str
    talking_points: list[str]
    next_steps: list[str]
HAMLET hands these schemas to the LLM, validates the JSON that comes back, and surfaces parsing errors if the reply drifts from the contract.
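That validation round-trip can be sketched directly with Pydantic (a minimal illustration of the contract check; how HAMLET wires it internally may differ):

```python
from pydantic import BaseModel, ValidationError

class PersonaResponse(BaseModel):
    summary: str
    talking_points: list[str]
    next_steps: list[str]

# A well-formed LLM reply parses into a typed object.
good = '{"summary": "Pitch", "talking_points": ["a", "b"], "next_steps": ["ship"]}'
resp = PersonaResponse.model_validate_json(good)

# A reply that drifts from the contract raises ValidationError instead of
# silently passing a malformed payload downstream.
bad = '{"summary": "Pitch", "talking_points": "not a list"}'
try:
    PersonaResponse.model_validate_json(bad)
except ValidationError as err:
    print(f"{err.error_count()} schema violations")
```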
Tip: prepend uv run to each script while uv manages the environment, or activate .venv if you prefer calling python directly.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file minds_hamlet-0.1.0.tar.gz.
File metadata
- Download URL: minds_hamlet-0.1.0.tar.gz
- Upload date:
- Size: 9.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.22
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 75ce5751357781f1a266d821677bf90f52a5f4a053ebb2d1887cae1d5114be50 |
| MD5 | cdcb4afbbd61480d251db25630b589a7 |
| BLAKE2b-256 | 6bed4eef0c7918ed7c86c8787158226d7cdfd4cb6d61989365315299fd412111 |
File details
Details for the file minds_hamlet-0.1.0-py3-none-any.whl.
File metadata
- Download URL: minds_hamlet-0.1.0-py3-none-any.whl
- Upload date:
- Size: 8.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.22
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | eb1fe721841b7d9df7734d2cde28aad79bb0b5ac6fdcaba6d18a59048256492d |
| MD5 | 628c7f1e8617ee6bf7ce45b104f9120b |
| BLAKE2b-256 | 6b08a5de3089d8b891647e7e52d3166f83e69e1c488d234b1f8b81311a34bff5 |