DataForge

LLM data collection and synthetic fine-tuning dataset pipeline

An interactive CLI pipeline that turns websites into fine-tuning datasets for LLMs. It discovers URLs, scrapes content, chunks it, generates synthetic Q&A / instruction / conversation samples, scores them for quality, and exports to HuggingFace Hub, Kaggle, or local files.


Installation

uv (recommended)

uv tool install llm-web-crawler
dataforge

Update:

uv tool upgrade llm-web-crawler

Uninstall:

uv tool uninstall llm-web-crawler

pip

pip install llm-web-crawler
dataforge

Update:

pip install --upgrade llm-web-crawler

Uninstall:

pip uninstall llm-web-crawler

From source

git clone https://github.com/ianktoo/data-forge.git
cd data-forge
uv sync
uv run dataforge

Standalone executables (no Python required)

Download pre-built binaries for your platform from GitHub Releases:

Platform   File
Windows    dataforge-windows-x64.exe
macOS      dataforge-macos-x64
Linux      dataforge-linux-x64

Quick start

dataforge          # interactive guided pipeline
dataforge explore <url>   # preview URL discovery without running the full pipeline
dataforge config   # set your LLM provider and API key
dataforge sessions # list past sessions
dataforge resume <id>     # resume a paused session
dataforge update   # update to the latest version

Features

URL Discovery

  • Automatically finds and parses XML sitemaps (including sitemap indexes)
  • Checks robots.txt for Sitemap: directives (see the sketch after this list)
  • BFS crawler fallback — if no sitemap is found, crawls the site up to a configurable depth and page limit
  • SPA support — detects JavaScript-rendered pages (few links, rich body) and retries with Playwright if installed
  • Parallel discovery across multiple seed URLs
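
For illustration, discovery via robots.txt boils down to reading any Sitemap: lines it advertises. A minimal sketch (not DataForge's actual implementation; assumes httpx is available):

import httpx
from urllib.parse import urljoin

def sitemaps_from_robots(base_url: str) -> list[str]:
    # Fetch robots.txt and collect any advertised sitemap URLs.
    resp = httpx.get(urljoin(base_url, "/robots.txt"), timeout=10, follow_redirects=True)
    if resp.status_code != 200:
        return []
    return [
        line.split(":", 1)[1].strip()
        for line in resp.text.splitlines()
        if line.lower().startswith("sitemap:")
    ]

# sitemaps_from_robots("https://example.com")
# -> e.g. ["https://example.com/sitemap.xml"], or [] if no directive is present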

Zero-trust input handling

  • All user-supplied URLs are sanitised before entering the pipeline (illustrated below)
  • Strips control characters, URL fragments, and tracking parameters (utm_*, fbclid, gclid, etc.)
  • Auto-corrects bare domains (adds https://) and percent-encodes unsafe path characters
  • Non-HTML resources (images, PDFs, JS, CSS) are filtered from crawl candidates
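
A rough sketch of that sanitisation step, using only the standard library (hypothetical helper, not the actual code in utils/):

import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING = re.compile(r"^(utm_.*|fbclid|gclid)$")

def sanitise_url(raw: str) -> str:
    # Drop control characters and add a scheme to bare domains.
    raw = "".join(ch for ch in raw.strip() if ch.isprintable())
    if not raw.startswith(("http://", "https://")):
        raw = "https://" + raw
    scheme, netloc, path, query, _fragment = urlsplit(raw)
    # Remove tracking parameters and the fragment entirely.
    kept = [(k, v) for k, v in parse_qsl(query) if not TRACKING.match(k)]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))

# sanitise_url("example.com/page?utm_source=x&id=7#top") -> "https://example.com/page?id=7"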

Collection

  • Async HTTPX client with retry + exponential backoff (sketch below)
  • Per-domain rate limiting and robots.txt compliance
  • Pages saved as Markdown in the session directory
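
The retry behaviour can be pictured roughly as below (illustrative values only; the real client also applies per-domain rate limiting and robots.txt checks):

import asyncio
import httpx

async def fetch(url: str, retries: int = 3) -> str:
    # Retry transient failures with exponential backoff (1s, 2s, 4s, ...).
    async with httpx.AsyncClient(follow_redirects=True, timeout=30) as client:
        for attempt in range(retries):
            try:
                resp = await client.get(url)
                resp.raise_for_status()
                return resp.text
            except (httpx.TransportError, httpx.HTTPStatusError):
                if attempt == retries - 1:
                    raise
                await asyncio.sleep(2 ** attempt)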

Processing

  • Token-aware chunking with configurable size and overlap (example below)
  • Boilerplate removal (nav, footer, cookie notices, etc.)
  • Output as JSONL and Parquet
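
Conceptually, chunking steps through the token stream by size minus overlap; a sketch assuming a tiktoken tokenizer (the actual chunker lives in processors/):

import tiktoken

def chunk_text(text: str, size: int = 512, overlap: int = 64) -> list[str]:
    # Consecutive chunks share `overlap` tokens of context.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    step = size - overlap
    return [enc.decode(tokens[i:i + size]) for i in range(0, len(tokens), step)]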

Generation

  • Synthetic Q&A, instruction, and conversation samples via LiteLLM (see the sketch below)
  • Supports OpenAI, Anthropic, Groq, Together AI, and local Ollama
  • Custom system prompt support
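
Generation goes through LiteLLM's unified completion API, so switching providers is a model-string change. A minimal sketch (the prompt here is illustrative, not the built-in one):

import litellm

resp = litellm.completion(
    model="gpt-4o-mini",   # or e.g. "ollama/llama3.2" for a local model
    messages=[
        {"role": "system", "content": "Write one question-answer pair about the passage."},
        {"role": "user", "content": "<chunk of scraped text>"},
    ],
)
print(resp.choices[0].message.content)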

Quality

  • LLM-based quality scoring (1–5)
  • Configurable approval threshold (see below)
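
The approval step is then a simple filter; a sketch (field and variable names are assumptions, not DataForge's schema):

THRESHOLD = 3  # keep samples scoring at or above this on the 1-5 scale

def approve(samples: list[dict]) -> list[dict]:
    return [s for s in samples if s["quality_score"] >= THRESHOLD]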

Export

  • HuggingFace Hub (public or private datasets; see the example below)
  • Kaggle datasets
  • Local JSONL / Parquet / CSV
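
For the HuggingFace path, pushing approved samples with the datasets library looks roughly like this (repo name is a placeholder; not necessarily the exporter's exact code):

from datasets import Dataset

ds = Dataset.from_list(approved_samples)                  # list of sample dicts
ds.push_to_hub("your-username/my-dataset", private=True)  # needs a Hub write token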

CLI experience

  • Ghost-text inline autocomplete with Tab completion (powered by prompt_toolkit)
  • Typo correction for unknown commands with fuzzy closest-match suggestions
  • Contextual rotating tips at each pipeline stage
  • dataforge config prompts for API keys securely via getpass and saves to .env
  • Startup hint when no provider key is detected, with guidance to run dataforge config
  • User preferences persisted to ~/.config/dataforge/prefs.json (cross-project)

Configuration

DataForge reads settings from environment variables or a .env file in the working directory. Run dataforge config to set your provider and API key interactively.

Environment variables

Variable                    Default                  Description
OPENAI_API_KEY              (none)                   OpenAI key
ANTHROPIC_API_KEY           (none)                   Anthropic key
GROQ_API_KEY                (none)                   Groq key
TOGETHER_API_KEY            (none)                   Together AI key
OLLAMA_BASE_URL             http://localhost:11434   Ollama endpoint (no key needed)
DATAFORGE_LLM_PROVIDER      openai                   Active provider
DATAFORGE_LLM_MODEL         gpt-4o-mini              Model name
DATAFORGE_RATE_LIMIT        2.0                      Requests/sec per domain
DATAFORGE_MAX_PAGES         500                      Max pages scraped per session
DATAFORGE_MAX_CRAWL_PAGES   50                       Max pages found by BFS crawler
DATAFORGE_MAX_CRAWL_DEPTH   3                        Max link depth for BFS crawler
DATAFORGE_CHUNK_SIZE        512                      Tokens per chunk
DATAFORGE_CHUNK_OVERLAP     64                       Token overlap between chunks
DATAFORGE_LOG_LEVEL         INFO                     DEBUG / INFO / WARNING / ERROR
DATAFORGE_OUTPUT_DIR        ./output                 Session output directory
DATAFORGE_DB_PATH           ./dataforge.db           SQLite database path
HUGGINGFACE_TOKEN           (none)                   HuggingFace Hub write token
KAGGLE_USERNAME             (none)                   Kaggle username
KAGGLE_KEY                  (none)                   Kaggle API key
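
A typical .env for the default OpenAI provider might look like this (the key is a placeholder):

DATAFORGE_LLM_PROVIDER=openai
DATAFORGE_LLM_MODEL=gpt-4o-mini
OPENAI_API_KEY=sk-your-key-here
DATAFORGE_RATE_LIMIT=2.0
DATAFORGE_MAX_PAGES=500
DATAFORGE_OUTPUT_DIR=./output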

Using Ollama (fully local, no API key)

ollama serve
ollama pull llama3.2
dataforge config   # choose ollama / llama3.2
dataforge

Pipeline stages

Discovery → Collection → Processing → Generation → Quality → Export

Each stage is pausable and resumable. The session state is persisted to SQLite after every stage.


Development

git clone https://github.com/ianktoo/data-forge.git
cd data-forge
uv sync --extra dev
uv run pytest
uv run ruff check src/ tests/
uv run mypy src/

Releasing

# Bump version
uv version --bump patch   # or --bump minor / --bump major

# Commit, tag, push — CI handles the rest
git add pyproject.toml uv.lock
git commit -m "Bump version to $(uv version --short)"
git tag v$(uv version --short)
git push origin master --tags

GitHub Actions will:

  1. Build cross-platform executables (Windows, macOS, Linux) via PyInstaller
  2. Attach them to a GitHub Release
  3. Publish the package to PyPI via uv publish using Trusted Publishers

Project structure

data-forge/
├── src/dataforge/
│   ├── agents/          # pipeline stage agents (explorer, scraper, processor, …)
│   ├── cli/             # typer app, prompts, UI, prefs, tips
│   ├── collectors/      # HTTP client, sitemap parser, BFS crawler, HTML extractor
│   ├── config/          # pydantic-settings, provider registry
│   ├── exporters/       # local, HuggingFace, Kaggle
│   ├── generators/      # LiteLLM wrapper, synthetic sample generation
│   ├── processors/      # chunker, cleaner, formatter
│   ├── storage/         # SQLModel models, database session
│   └── utils/           # logger, rate limiter, URL sanitiser, errors
├── tests/
├── .github/workflows/
│   ├── build-executables.yml
│   └── publish-pypi.yml
├── pyproject.toml
└── uv.lock

License

See LICENSE for details.
