
MultiAgentTrainer

Collect data from multiple sources, run autonomous LLM training experiments using autoresearch, and fine-tune open-source or managed models on your corpus. Configure everything in a YAML file and let mat handle ingestion, corpus building, training, and fine-tuning.

MultiAgentTrainer can be used standalone, but is designed as a companion to AgentTester — use AgentTester to evaluate and compare coding agents, then use MultiAgentTrainer to train models on the data those agents produce and consume.

Install

uv pip install -e ".[dev]"

# For open-source fine-tuning (HuggingFace + PEFT/LoRA):
uv pip install -e ".[opensource]"

Quick Start

# List configured data sources
mat sources

# Ingest data sources without training (inspect the corpus)
mat ingest

# Run the full pipeline: ingest → corpus → train
mat train

# Run with overrides
mat train --max-experiments 10 --output-dir ./my-runs

# Label a run so it shows up clearly in mat watch
mat train --name llama3

# Run multiple models in parallel and watch progress live
mat train --config llama3.yaml --name llama3 &
mat train --config mistral.yaml --name mistral &
mat watch

# Check past training runs
mat status

Data Sources

Configure data sources in multiagenttrainer.yaml:

sources:
  # Local git repository
  - type: local_repo
    path: /home/user/my-project

  # Any git-cloneable URL
  - type: remote_repo
    url: "https://github.com/user/repo.git"
    branch: main

  # GitHub repository (web URL)
  - type: github_repo
    url: "https://github.com/user/repo"

  # All repos in a GitHub organisation
  - type: github_org
    url: "https://github.com/my-org"
    max_repos: 50
    visibility: all   # all | public | private

  # AWS Bedrock knowledge base
  - type: bedrock_knowledge_base
    knowledge_base_id: "ABCDEF1234"
    region: "us-east-1"
    query: "training data for code generation"
    max_results: 100
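
The include, exclude, and name fields listed as optional in the Source Types table below can also be set on a source entry. A hypothetical local_repo entry using them (the field names come from that table; the glob syntax and the comment about name are assumptions):

```yaml
sources:
  - type: local_repo
    path: /home/user/my-project
    name: my-project-src        # label shown for this source (assumed behavior)
    include:
      - "*.py"
      - "*.md"
    exclude:
      - "test_*"
```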

Configuration

Copy config.example.yaml to multiagenttrainer.yaml in your working directory.

Top-level sections

Section        Description
autoresearch   Autoresearch repo URL/path, branch, train time, optional program.md override
sources        List of data sources to ingest
training       Agent command, max experiments, output directory, execution target

Execution targets

By default experiments run locally. Set training.execution.type to run on a remote host or inside a container instead.

SSH — rsync the workspace to a remote machine and run experiments over SSH:

training:
  execution:
    type: ssh
    ssh_host: user@gpu-box.example.com   # required
    ssh_key: ~/.ssh/id_ed25519           # optional; uses SSH default otherwise
    remote_dir: /tmp/mat-runs            # base dir on the remote host

Docker — copy the workspace into a running container and exec commands inside it:

training:
  execution:
    type: docker
    container: my-training-container     # required; must already be running
    container_dir: /tmp/mat-runs         # base dir inside the container

Source Types

Type                     Required fields     Optional fields
local_repo               path                include, exclude, name
remote_repo              url                 branch, name
github_repo              url                 branch, name
github_org               url                 max_repos, visibility, name
bedrock_knowledge_base   knowledge_base_id   region, query, max_results, name
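
The include and exclude fields above presumably hold glob patterns applied when the corpus is built. A minimal sketch of how such filtering could work, assuming fnmatch-style globs (the helper name and signature are hypothetical, not the package's actual API):

```python
from fnmatch import fnmatch
from pathlib import Path


def select_files(root: Path, include: list[str], exclude: list[str]) -> list[Path]:
    """Return files under root matching any include glob and no exclude glob."""
    selected = []
    for path in sorted(root.rglob("*")):
        if not path.is_file():
            continue
        rel = path.relative_to(root).as_posix()
        # An empty include list means "take everything"; otherwise at least
        # one include pattern must match the relative path.
        if include and not any(fnmatch(rel, pat) for pat in include):
            continue
        if any(fnmatch(rel, pat) for pat in exclude):
            continue
        selected.append(path)
    return selected
```

For example, include=["*.py"] with exclude=["test_*"] keeps source files while dropping tests. Note that fnmatch patterns are not path-aware ("*" also matches "/"), so simple suffix patterns behave predictably here.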

Fine-Tuning

Fine-tune models directly on your ingested corpus using mat finetune. Two backends are supported today; more can be added by subclassing FineTuner.

Open-source models (HuggingFace + LoRA/QLoRA)

Requires a local GPU and pip install 'multiagenttrainer[opensource]'.

# multiagenttrainer.yaml
finetuner:
  backend: opensource
  jobs_dir: ./finetune-jobs
  opensource:
    model_id: meta-llama/Llama-3.2-1B
    output_dir: ./finetuned-models
    lora_r: 16
    lora_alpha: 32
    num_epochs: 3
    batch_size: 4
    use_4bit: true          # QLoRA — requires bitsandbytes + CUDA

# Ingest sources first (or skip if you already have a corpus)
mat ingest

# Fine-tune on the ingested corpus
mat finetune start

# Or point at an arbitrary corpus file
mat finetune start --corpus /path/to/corpus.txt --name my-run

# List all jobs
mat finetune list

# Check a job
mat finetune status <job-id>

Training runs in-process and blocks until complete. The LoRA adapter and tokenizer are saved to output_dir/<job-id>/.

AWS Bedrock model customization

Uses your existing boto3 credentials. Submits a Bedrock customization job and returns immediately — poll with mat finetune status.

finetuner:
  backend: bedrock
  jobs_dir: ./finetune-jobs
  bedrock:
    base_model_id: amazon.titan-text-lite-v1
    region: us-east-1
    role_arn: arn:aws:iam::123456789012:role/BedrockFineTuningRole
    output_s3_uri: s3://my-bucket/finetuned-models/
    training_data_s3_uri: s3://my-bucket/training-data/
    customization_type: CONTINUED_PRE_TRAINING   # or FINE_TUNING
    epochs: 1

mat finetune start
mat finetune status <job-arn>
mat finetune cancel <job-arn>

Adding a new backend

Subclass FineTuner, implement its abstract methods, add a config dataclass, and register it in finetuner/registry.py:

# finetuner/finetuner.py
class AnthropicFineTuner(FineTuner):
    def prepare_dataset(self, corpus_path): ...
    def start_job(self, dataset, job_name): ...
    def get_status(self, job_id): ...
    def cancel_job(self, job_id): ...
    def describe(self): ...

# finetuner/registry.py
if cfg.backend == "anthropic":
    return AnthropicFineTuner(cfg.anthropic, jobs_dir, console)
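
The subclass-and-register pattern can be sketched end to end. Everything below is illustrative only: the real FineTuner base class and registry live in the package, and their signatures are assumed from the method names above; dispatch is shown with a plain dict rather than the package's registry.py:

```python
from abc import ABC, abstractmethod


class FineTuner(ABC):
    """Sketch of the backend interface (method names from the README, signatures assumed)."""

    @abstractmethod
    def prepare_dataset(self, corpus_path): ...
    @abstractmethod
    def start_job(self, dataset, job_name): ...
    @abstractmethod
    def get_status(self, job_id): ...
    @abstractmethod
    def cancel_job(self, job_id): ...
    @abstractmethod
    def describe(self): ...


class EchoFineTuner(FineTuner):
    """Toy backend: records calls instead of training anything."""

    def prepare_dataset(self, corpus_path):
        return {"path": corpus_path}

    def start_job(self, dataset, job_name):
        return f"job-{job_name}"

    def get_status(self, job_id):
        return "Completed"

    def cancel_job(self, job_id):
        return True

    def describe(self):
        return "echo backend (illustrative only)"


BACKENDS = {"echo": EchoFineTuner}


def make_finetuner(backend: str) -> FineTuner:
    """Dispatch on the configured backend name, as registry.py does with cfg.backend."""
    try:
        return BACKENDS[backend]()
    except KeyError:
        raise ValueError(f"unknown finetuner backend: {backend}") from None
```

A new backend only needs a subclass plus one registry entry; callers keep using the same interface regardless of backend.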

How It Works

  1. Ingest — Fetch data from all configured sources (clone repos, query Bedrock KBs)
  2. Build corpus — Walk fetched files, filter by include/exclude globs, concatenate into a single corpus
  3. Setup — Clone autoresearch, inject the corpus, optionally override program.md
  4. Train — Launch the agent command iteratively for up to max_experiments rounds
  5. Report — Generate a markdown report with experiment results, best val_bpb, and stats
  6. Fine-tune (optional) — Run mat finetune start to fine-tune a model on the same corpus
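
Steps 4 and 5 above amount to a loop that launches the agent command up to max_experiments times and reports the best val_bpb. A minimal sketch under assumed interfaces (the real Runner parses results out of the workspace; here each round is just a callable returning a score):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ExperimentResult:
    experiment_id: int
    val_bpb: float  # validation bits-per-byte; lower is better


def run_experiments(
    run_round: Callable[[int], float], max_experiments: int
) -> list[ExperimentResult]:
    """Launch up to max_experiments rounds, recording each round's val_bpb."""
    return [
        ExperimentResult(i, run_round(i)) for i in range(1, max_experiments + 1)
    ]


def best(results: list[ExperimentResult]) -> ExperimentResult:
    """The report's headline number: the experiment with the lowest val_bpb."""
    return min(results, key=lambda r: r.val_bpb)
```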

Development

uv pip install -e ".[dev]"
ruff check src/ tests/
ruff format src/ tests/
pytest

Docker

docker compose run --rm mat train
docker compose run --rm mat sources

Library Usage

import asyncio
from pathlib import Path
from multiagenttrainer import Ingester, Runner, load_config

async def main():
    cfg = load_config()
    ingester = Ingester(cfg.sources, Path(".staging"))
    ingester.fetch_all()
    ingester.build_corpus(Path("corpus.txt"))

    runner = Runner(cfg.autoresearch, cfg.training, name="my-run")
    workspace = runner.setup_workspace(Path("corpus.txt"))
    results = await runner.run_experiments(workspace)
    for r in results:
        print(f"experiment {r.experiment_id}: val_bpb={r.val_bpb}")

asyncio.run(main())

