
WebRover 🚀

Python 3.10+ · MIT License

WebRover is a powerful Python library for generating high-quality datasets from web content, designed specifically for training Large Language Models and AI applications.


🌟 Features

  • Smart Web Scraping: Automatically find and scrape relevant content based on topics
  • Multiple Input Formats: Support for JSON, YAML, TXT, and Markdown topic files
  • Async Processing: Fast, concurrent scraping with built-in rate limiting
  • Quality Control: Built-in content validation and cleaning
  • LLM-Ready Output: Structured JSONL format perfect for model training
  • Error Handling: Robust error tracking and recovery mechanisms
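The concurrent scraping with a concurrency cap can be sketched with nothing but `asyncio`; the `fetch` coroutine and the limit of 10 below are illustrative placeholders, not WebRover's actual internals.

```python
import asyncio

async def fetch(url: str) -> str:
    # Placeholder for a real HTTP request (e.g. via aiohttp); sleeps to simulate I/O.
    await asyncio.sleep(0.01)
    return f"content of {url}"

async def scrape_all(urls, max_concurrent=10):
    # A semaphore caps the number of in-flight requests, giving crude rate limiting.
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(url):
        async with sem:
            return await fetch(url)

    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(scrape_all([f"https://example.com/{i}" for i in range(5)]))
```

Real scrapers usually add per-domain delays on top of a global cap, but a semaphore plus `gather` is the core of the pattern.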

🚀 Quick Start

Installation

pip install webrover

Basic Usage

from webrover import WebRover

# Initialize WebRover
rover = WebRover()

# Scrape content from topics
rover.scrape_topics(
    topics=["artificial intelligence", "machine learning"],
    num_websites=100
)

# Save the dataset
rover.save_dataset("my_dataset.jsonl")

Using Topic Files

# From JSON file
rover.scrape_topics(
    topics="topics.json",
    num_websites=100
)

# From Markdown list
rover.scrape_topics(
    topics="topics.md",
    num_websites=100
)

📖 Documentation

Supported Topic File Formats

JSON

{
    "topics": [
        "AI basics",
        "machine learning",
        "deep learning"
    ]
}

YAML

topics:
  - AI basics
  - machine learning
  - deep learning

Markdown

- AI basics
- machine learning
- deep learning

Output Structure

{
    "url": "https://example.com/article",
    "title": "Article Title",
    "content": "Article content...",
    "metadata": {
        "length": 1234,
        "has_title": true,
        "domain": "example.com"
    }
}

🛠️ Advanced Usage

# Initialize with custom output directory
rover = WebRover(output_dir="my_datasets")

# Get scraping statistics
stats = rover.get_stats()
print(f"Success rate: {stats['success_rate']*100:.1f}%")

# Access dataset programmatically
dataset = rover.get_dataset()

📊 Output Files

  • final_dataset/dataset.jsonl: Main dataset in JSONL format
  • websites_master.json: List of all discovered URLs
  • websites_completed.json: Successfully scraped URLs
  • websites_errors.json: Failed attempts with error details

🔄 Error Handling

WebRover automatically handles common issues:

  • Rate limiting
  • Network timeouts
  • Invalid URLs
  • Blocked requests
  • Malformed content
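Recovery mechanisms for transient failures like these usually boil down to retries with exponential backoff. The decorator below is a generic illustration of that pattern, not WebRover's actual implementation.

```python
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.1):
    """Retry a flaky callable with exponential backoff (illustrative sketch)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of retries: surface the error
                    time.sleep(base_delay * 2 ** attempt)  # 0.1s, 0.2s, 0.4s, ...
        return wrapper
    return decorator
```

A production scraper would narrow the caught exception types (e.g. timeouts and HTTP 429s) rather than retrying on everything.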

🚧 Limitations

  • Honoring robots.txt and per-site rate limits constrains throughput
  • Some sites may block automated access
  • Large datasets take proportionally longer to build
  • Google search may throttle high volumes of queries

🗺️ Roadmap

See FUTURE.md for planned features and improvements.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

Built with ❤️ by Area-25. Special thanks to all contributors.


WebRover: Build better datasets, train better models. 🚀
