# wide-research
Spawn parallel subtasks in Modal sandboxes — each running an OpenAI Agent on one input from a shared prompt template — and aggregate structured results (plus files or folders) back to your machine.
Think `Pool.map()` for research agents.
## Install

### CLI
```bash
uv tool install wide-research
wide-research --help     # or the short alias: wr --help
wide-research doctor     # check credentials + paths
```
`uv tool install` puts `wide-research` and `wr` on your PATH in an isolated uv-managed environment. Upgrade with `uv tool install --reinstall wide-research`; uninstall with `uv tool uninstall wide-research`.
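The same lifecycle commands as a copy-pasteable block (taken straight from the sentence above):

```bash
# Upgrade in place:
uv tool install --reinstall wide-research

# Remove the CLI and the wr alias:
uv tool uninstall wide-research
```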
### As a skill
```bash
npx skills add datacurve-ai/wide-research
```
Skill layout follows agentskills.io. You still need the CLI installed (above).
### From source (for development)
```bash
git clone https://github.com/datacurve-ai/wide-research ~/code/wide-research
cd ~/code/wide-research
uv sync --extra test
source .venv/bin/activate
```
## Credentials
`wide-research doctor` will tell you what's missing. If you use a custom env file, check it with `wide-research doctor --env-file PATH`. Short version:
- Modal: `modal token set --token-id … --token-secret …`
- OpenAI: `OPENAI_API_KEY` and optional `OPENAI_BASE_URL` — any of (first hit wins):
  - `--env-file PATH`
  - `./.env` (current directory)
  - `~/.wide-research/.env` (recommended for global installs)
  - shell environment
  - repo `.env` (source checkouts only)
The OpenAI credential and base URL are used by the host-side Agent runner and are not injected into sandboxes. Use TOML `secrets = [...]` only for credentials that the sandbox itself should be able to read.
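A minimal `~/.wide-research/.env` might look like this (the values are placeholders):

```bash
# ~/.wide-research/.env (read by the host-side Agent runner, not the sandboxes)
OPENAI_API_KEY=sk-...
# Optional: point the runner at an OpenAI-compatible endpoint
OPENAI_BASE_URL=https://api.openai.com/v1
```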
## Where things live

Everything the tool owns is under `~/.wide-research/`:
- `~/.wide-research/.env` — credentials
- `~/.wide-research/runs/` — one subdirectory per run
Override with `WIDE_RESEARCH_HOME=/somewhere/else` (whole base) or `WIDE_RESEARCH_RUNS_DIR=/somewhere/else` (runs only).
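For example, to keep run artifacts on a larger disk (the paths below are illustrative):

```bash
# Relocate everything the tool owns, .env and runs included:
export WIDE_RESEARCH_HOME=/mnt/scratch/wide-research

# Or relocate only the per-run directories:
export WIDE_RESEARCH_RUNS_DIR=/mnt/scratch/wr-runs
```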
## Quickstart
`wide-research run` defaults to spawn-detached + auto-tail: it starts a worker process, then follows the live merged log in your terminal. Ctrl+C only detaches the tail; the worker keeps running. Use `--detach` / `-d` to spawn and exit immediately, or `--foreground` / `-f` to run in-process for scripts/CI.
```bash
# 1. Create the smallest smoke-test config.
cat > echo.toml <<'EOF'
brief = "Echo each input."
name = "echo_test"
title = "Echo Test"
target_count = 3
inputs = ["alpha", "beta", "gamma"]
prompt_template = """
Return the input exactly, then call submit(success=true, output={"echo": "{{ input }}"}).
"""
[[output_schema]]
name = "echo"
type = "string"
title = "Echo"
description = "The echoed input."
EOF
```
```bash
# 2. Dry-run — validate a config without spawning sandboxes.
wide-research run echo.toml --dry-run

# 3. Render just the first prompt.
wide-research plan echo.toml --index 0

# 4. Smoke-test, then full run. Use -f / --foreground for scripts/CI:
wide-research run echo.toml --sample 1 --foreground
wide-research run echo.toml              # detaches, then auto-tails
wide-research run echo.toml --detach     # pure detached; wr wait to sync

# 5. Look at past runs.
wide-research list
wide-research inspect ~/.wide-research/runs/<dir>
wide-research inspect ~/.wide-research/runs/<dir> --failed-only
```
If you cloned the repository, you can also run the checked-in examples under `examples/`.
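For instance (substitute a real file name from the directory):

```bash
ls examples/
wide-research run examples/<name>.toml --dry-run
```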
## Config shape

TOML. Six required top-level keys plus an `output_schema` array-of-tables:
```toml
brief = "Find the current CEO of each company."
name = "find_ceos"
title = "Find CEOs"
target_count = 3
inputs = ["Apple", "Microsoft", "Alphabet"]
prompt_template = """
Research the current CEO of {{ input }}.
Use web_search to verify, then call submit(success=true, output={"ceo": "..."}).
"""
[[output_schema]]
name = "ceo"
type = "string"
title = "CEO"
description = "Verified name of the current CEO."
```
See `skills/wide-research/references/CONFIG.md` for the full schema, and `skills/wide-research/references/PRESETS.md` for ready-to-paste `[resources]` / `[image]` blocks (research / coding / DinD / GPU).
## Design docs
- `docs/agent-design.md` — the harness/compute split, the SDK-native tool set, and how the loop terminates.
- `docs/file-io.md` — how files and directories move in and out of sandboxes (`<file>` tags, `mount_files`, `file`/`directory` output fields, size limits).
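A rough sketch of how that file I/O might appear in a config (the authoritative key shapes live in `docs/file-io.md` and CONFIG.md; the exact layout below is an assumption, not documented syntax):

```toml
# Assumption: mount_files lists local paths copied into each sandbox.
mount_files = ["data/companies.csv"]

# Assumption: a file-typed output field asks the agent to return a file,
# which wide-research copies back to the run directory on the host.
[[output_schema]]
name = "report"
type = "file"
title = "Report"
description = "Report generated inside the sandbox and returned to the host."
```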
## Inline / stdin configs
For short configs you'd rather not leave on disk:
```bash
# Inline string
wr run --inline "$(cat <<'EOF'
brief = "tiny inline smoke"
name = "inline_smoke"
title = "Inline Smoke"
prompt_template = "Echo {{ input }}"
target_count = 2
inputs = ["hi", "there"]
[[output_schema]]
name = "echo"
type = "string"
title = "Echo"
description = "echoed"
EOF
)"

# Stdin via '-'
cat my-config.toml | wr run -
wr plan - < my-config.toml
```
## Watching a run
`wr run` prints the run dir + pid and auto-tails by default. For a run started with `--detach`, or after Ctrl+C detaches the tail, follow along with:
```bash
wr tail ~/.wide-research/runs/<dir>              # watch what each subtask is doing
wr wait ~/.wide-research/runs/<dir>              # block silently; non-zero exit if any fail
tail -f ~/.wide-research/runs/<dir>/worker.log   # low-level worker log

# Stop a run early (tears down every live Modal sandbox first):
wr stop ~/.wide-research/runs/<dir>

# Or, if you'd rather have `run` block the current terminal:
wr run job.toml --foreground
```
## Cost estimates
Run summaries include token usage and best-effort cost estimates. By default, wide-research fetches the public OpenRouter model catalog from https://openrouter.ai/api/v1/models and caches it under `~/.wide-research/cache/` for a week. Set `WR_DISABLE_PRICE_FETCH=1` to avoid that network request, or provide your own prices with `WR_PRICE_TABLE_JSON`.
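For example (the JSON shape for `WR_PRICE_TABLE_JSON` is a guess; only the variable names come from the paragraph above):

```bash
# Skip the OpenRouter catalog fetch entirely:
export WR_DISABLE_PRICE_FETCH=1

# Or pin your own prices. This per-model USD-per-token mapping is an
# assumed format, not documented behavior:
export WR_PRICE_TABLE_JSON='{"gpt-4o": {"prompt": 2.5e-06, "completion": 1e-05}}'
```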
## License

MIT. See `LICENSE`.
## Publishing

Maintainers can build and validate the package locally with:
```bash
uv build
uvx twine check dist/*
```
Publish with `uv publish` after the package version has been bumped and the Modal/OpenAI example tests pass with live credentials.
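End to end, a release then looks like this (assuming credentials for `uv publish` are configured):

```bash
uv build
uvx twine check dist/*
uv publish
```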
## Layout

See `AGENTS.md`.