
# maestro-fetch

One interface. Any source. Agent-ready output.


Give it any URL -- web page, PDF, spreadsheet, cloud file, video, binary dataset -- and get back clean markdown or structured data. Smart routing picks the right adapter; pluggable browser backends handle anti-bot and authentication. No API key required.


## Quickstart

### For AI Agents

```
# Claude Code -- install as a skill (Vercel skills ecosystem)
npx skills add maestro-ai-stack/maestro-fetch -y -g

# Claude Code -- install as a plugin (marketplace)
/plugin marketplace add maestro-ai-stack/maestro-fetch
/plugin install maestro-fetch@maestro-fetch
```

Works with: Claude Code | Cursor | Codex | Gemini CLI | OpenCode | Trae -- and any agent that speaks MCP or CLI tools.

### For Developers

```bash
# Recommended (global command, no venv needed)
uv tool install maestro-fetch

# Or with all extras (PDF, media, browser, LLM, social)
uv tool install "maestro-fetch[all]"

# Classic pip
pip install maestro-fetch

mfetch "https://example.com"
```

Try it now:

```console
$ mfetch "https://api.worldbank.org/v2/country/CN/indicator/NY.GDP.MKTP.CD?format=json&per_page=5"

## GDP (current US$) - China

| Year | GDP (USD)            |
|------|----------------------|
| 2024 | $17,794,782,410,032  |
| 2023 | $17,662,434,751,902  |
| 2022 | $17,963,170,547,847  |
| 2021 | $17,734,062,645,371  |
| 2020 | $14,687,674,437,370  |

$ mfetch "https://arxiv.org/pdf/2301.07041"

## Dissociating language and thought in large language models ...
(full paper text as clean markdown)
```

If you find this useful, consider giving it a star -- it helps others discover the project.


## Why maestro-fetch?

|                     | maestro-fetch | Firecrawl | Jina Reader | crawl4ai |
|---------------------|---------------|-----------|-------------|----------|
| Source types        | 7 built-in adapters + community sources | Web pages only | Web pages only | Web pages only |
| PDF / Excel / CSV   | Native parsing (Docling) | Requires separate tool | No | No |
| Video transcription | yt-dlp + Whisper | No | No | No |
| Cloud storage       | Google Drive, Dropbox, Baidu Pan | No | No | No |
| Binary datasets     | GeoTIFF, NetCDF, Parquet, HDF5, ... | No | No | No |
| Browser backends    | 3 pluggable (bb-browser, Cloudflare, Playwright) | Hosted only | Hosted only | Playwright only |
| Hosting             | Self-hosted, no API key required | SaaS | SaaS | Self-hosted |
| Community adapters  | Extensible (economics, finance, climate, ...) | No | No | No |
| Cache               | SQLite with TTL and LRU eviction | No | No | No |

maestro-fetch treats "fetch" as a universal problem -- not just web scraping. Give it any URL and it figures out the rest: route to the right adapter, pick a browser backend if needed, parse the content, return markdown or structured data.


## Supported Sources

| Adapter | Source types | Examples |
|---------|--------------|----------|
| web | HTML pages, APIs, SPAs | Any URL; falls back through crawl4ai, httpx, Cloudflare, bb-browser, Playwright |
| doc | Documents and spreadsheets | .pdf, .xlsx, .xls, .ods, .csv |
| binary | Archives, geospatial, data science | .zip, .parquet, .tif, .nc, .hdf5, .shp, .feather |
| cloud | Cloud storage | Google Drive, Google Docs/Sheets, Dropbox |
| media | Video and audio | YouTube, Vimeo (transcription via yt-dlp + Whisper) |
| baidu_pan | Baidu Pan | pan.baidu.com links via OAuth + PCS API |
| browser | Authenticated / JS-heavy pages | Playwright interactive sessions |
| source | Community adapters | World Bank, FRED, NOAA, academic datasets, ... |

## CLI Usage

### Fetch any URL

```bash
mfetch "https://example.com"                       # auto-detect, markdown output
mfetch "https://example.com/report.pdf"            # PDF -> markdown
mfetch "https://example.com" --output json         # JSON output
mfetch "https://example.com" --timeout 120         # custom timeout
mfetch "https://example.com" --batch urls.txt      # batch from file
```

### Community source adapters

```bash
mfetch source update                               # pull latest adapters
mfetch source list                                 # show all adapters
mfetch source list --category economics            # filter by category
mfetch source info worldbank/gdp                   # show args and examples
mfetch source run worldbank/gdp CN                 # fetch World Bank GDP for China
```

### Interactive browser sessions

```bash
mfetch session start "https://login-required.com"
mfetch session fill "#email" "user@example.com"
mfetch session click "#submit"
mfetch session snapshot                            # current page as markdown
mfetch session screenshot                          # save screenshot
mfetch session end
```

### Cache management

```bash
mfetch cache list                                  # show cached entries
mfetch cache clear                                 # clear all
mfetch cache clear --older-than 7d                 # evict old entries
```

### Configuration

```bash
mfetch config init                                 # generate ~/.maestro-fetch/config.toml
mfetch config show                                 # display current config
```

## Python SDK

```python
from maestro_fetch import fetch, batch_fetch

# Auto-detect and fetch (inside an async context)
result = await fetch("https://example.com/data")
result.content       # markdown text
result.source_type   # "web" | "doc" | "cloud" | "media" | "binary"
result.tables        # list[pd.DataFrame] (if tabular data found)
result.metadata      # provenance dict
result.raw_path      # Path to cached raw file

# Batch with concurrency
results = await batch_fetch(urls, concurrency=10)

# LLM structured extraction (requires ANTHROPIC_API_KEY or OPENAI_API_KEY)
result = await fetch(
    "https://worldbank.org/report.pdf",
    schema={"country": str, "gdp": float},
    provider="anthropic",
)
```
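The concurrency cap on `batch_fetch` can be reproduced with a semaphore. This is a stdlib sketch of the pattern, not the library's implementation; `fetch_one` is a stand-in coroutine.

```python
import asyncio

async def fetch_one(url: str) -> str:
    # Stand-in for maestro_fetch.fetch(); real I/O would happen here.
    await asyncio.sleep(0)
    return f"content of {url}"

async def batch_fetch(urls: list[str], concurrency: int = 10) -> list[str]:
    # The semaphore caps how many fetches are in flight at once.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(url: str) -> str:
        async with sem:
            return await fetch_one(url)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(
    batch_fetch([f"https://example.com/{i}" for i in range(5)], concurrency=2)
)
```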

## Installation

```bash
# Core -- web, cloud, doc adapters. No API key needed.
pip install maestro-fetch

# Optional extras (quoted so shells like zsh don't expand the brackets)
pip install "maestro-fetch[pdf]"       # PDF and Excel parsing (Docling, openpyxl)
pip install "maestro-fetch[media]"     # YouTube/audio transcription (yt-dlp, Whisper)
pip install "maestro-fetch[browser]"   # Interactive sessions (Playwright)
pip install "maestro-fetch[anthropic]" # Claude LLM extraction
pip install "maestro-fetch[openai]"    # GPT LLM extraction
pip install "maestro-fetch[all]"       # Everything
```

### Development setup

```bash
git clone https://github.com/maestro-ai-stack/maestro-fetch.git
cd maestro-fetch
python3.11 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest tests/ -v
```

## Works With

maestro-fetch integrates as a tool or skill in these AI agent environments:

- Claude Code -- via skills ecosystem or plugin marketplace
- Cursor -- as a CLI tool in agent mode
- OpenAI Codex -- as a shell tool
- Gemini CLI -- as an MCP tool
- OpenCode / Trae -- via CLI or MCP bridge

See the maestro-fetch skill definition for integration details.


## Architecture

```
CLI / SDK  -->  Router (URL detection)  -->  Adapters: web | doc | cloud | media | binary | source
                                                 |
                                        Web fallback chain:
                                  crawl4ai -> httpx -> Cloudflare -> bb-browser -> Playwright
```

Router decision chain: (1) match a community source adapter (`@meta`) and dispatch to `source`; (2) match a built-in adapter and dispatch directly; (3) send everything else down the web fallback chain.
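The decision chain above can be sketched as a priority-ordered matcher. The host and extension rules below are illustrative guesses inferred from the adapter table, not the shipped router; only the adapter names come from the document.

```python
from urllib.parse import urlparse

# Extension and host sets taken from the Supported Sources table.
DOC_EXTS = {".pdf", ".xlsx", ".xls", ".ods", ".csv"}
BINARY_EXTS = {".zip", ".parquet", ".tif", ".nc", ".hdf5", ".shp", ".feather"}
CLOUD_HOSTS = {"drive.google.com", "docs.google.com", "www.dropbox.com"}
MEDIA_HOSTS = {"www.youtube.com", "youtu.be", "vimeo.com"}

def route(url: str, community_sources=frozenset()) -> str:
    # (1) community source adapters win first (e.g. "worldbank/gdp" specs).
    if url in community_sources:
        return "source"
    parsed = urlparse(url)
    host = parsed.hostname or ""
    filename = parsed.path.lower().rsplit("/", 1)[-1]
    ext = "." + filename.rsplit(".", 1)[-1] if "." in filename else ""
    # (2) built-in adapters matched by host or file extension.
    if host == "pan.baidu.com":
        return "baidu_pan"
    if host in CLOUD_HOSTS:
        return "cloud"
    if host in MEDIA_HOSTS:
        return "media"
    if ext in DOC_EXTS:
        return "doc"
    if ext in BINARY_EXTS:
        return "binary"
    # (3) everything else goes down the web fallback chain.
    return "web"
```

The ordering matters: host-based matches (cloud, media) run before extension checks, so a Google Sheets export URL ending in `.csv` would still reach the cloud adapter.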


Configuration

Config lives at ~/.maestro-fetch/config.toml. Generate with mfetch config init.

[cache]
max_size = "2GB"
default_ttl = 86400

[backends]
priority = ["bb-browser", "cloudflare", "playwright"]

Storage: ~/.maestro-fetch/ contains config.toml, cache.db, cache/, sources/, custom/, sessions/.
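The TTL half of the cache behavior can be sketched against the stdlib `sqlite3` module. The table name and columns here are invented for illustration; the real `cache.db` schema is not documented on this page.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")  # stand-in for ~/.maestro-fetch/cache.db
con.execute("CREATE TABLE cache (url TEXT PRIMARY KEY, body TEXT, fetched_at REAL)")

DEFAULT_TTL = 86400  # seconds, mirroring default_ttl in config.toml

def put(url: str, body: str) -> None:
    con.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
                (url, body, time.time()))

def get(url: str, ttl: float = DEFAULT_TTL):
    row = con.execute("SELECT body, fetched_at FROM cache WHERE url = ?",
                      (url,)).fetchone()
    # Stale entries count as misses and are evicted on read.
    if row is None or time.time() - row[1] > ttl:
        con.execute("DELETE FROM cache WHERE url = ?", (url,))
        return None
    return row[0]

put("https://example.com", "<html>")
hit = get("https://example.com")             # fresh entry: body returned
miss = get("https://example.com", ttl=-1.0)  # negative ttl forces expiry
```

LRU eviction against `max_size` would additionally track last-access time and total bytes, deleting the least recently used rows once the budget is exceeded.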


Contributing

Core improvements -- open issues and PRs on this repo.

New source adapters -- contribute to maestro-ai-stack/maestro-fetch-sources. Each adapter is a single Python file with an @meta header and an async def run(ctx, ...) function.
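A minimal adapter might look like the sketch below. The `@meta` field names and the `ctx` interface (`get_json`) are guesses based on the one-line description above, not the documented contract; check maestro-fetch-sources for the real format.

```python
import asyncio

# Hypothetical community source adapter, e.g. worldbank/gdp.py.
#
# @meta
# category: economics
# description: World Bank GDP (current US$) by country code

async def run(ctx, country: str = "CN") -> dict:
    # ctx is assumed to expose an async HTTP helper (an assumption,
    # not the documented API).
    url = ("https://api.worldbank.org/v2/country/"
           f"{country}/indicator/NY.GDP.MKTP.CD?format=json")
    return await ctx.get_json(url)

# Smoke-test with a stub context instead of a live HTTP client.
class StubCtx:
    async def get_json(self, url: str) -> dict:
        return {"requested": url}

result = asyncio.run(run(StubCtx(), country="JP"))
```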


## License

MIT


Built by Maestro -- Singapore AI product studio.
