
WebRover 🚀

Python 3.10+ License: MIT

WebRover is a Python library for generating high-quality datasets from web content, designed for training Large Language Models and powering AI applications.


🌟 Features

  • Smart Web Scraping: Automatically find and scrape relevant content based on topics
  • Multiple Input Formats: Support for JSON, YAML, TXT, and Markdown topic files
  • Async Processing: Fast, concurrent scraping with built-in rate limiting
  • Quality Control: Built-in content validation and cleaning
  • LLM-Ready Output: Structured JSONL format perfect for model training
  • Error Handling: Robust error tracking and recovery mechanisms

🚀 Quick Start

Installation

pip install webrover

Basic Usage

from webrover import WebRover

# Initialize WebRover
rover = WebRover()

# Scrape content from topics
rover.scrape_topics(
    topics=["artificial intelligence", "machine learning"],
    num_websites=100
)

# Save the dataset
rover.save_dataset("my_dataset.jsonl")

Using Topic Files

# From JSON file
rover.scrape_topics(
    topics="topics.json",
    num_websites=100
)

# From Markdown list
rover.scrape_topics(
    topics="topics.md",
    num_websites=100
)

📖 Documentation

Supported Topic File Formats

JSON

{
    "topics": [
        "AI basics",
        "machine learning",
        "deep learning"
    ]
}

YAML

topics:
  - AI basics
  - machine learning
  - deep learning

Markdown

- AI basics
- machine learning
- deep learning
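As a sketch of how these formats map to a plain list of topic strings, the JSON and Markdown variants above can be parsed with the standard library alone (YAML would need a third-party parser such as PyYAML). The `load_topics` helper below is a hypothetical illustration, not WebRover's internal loader:

```python
import json

def load_topics(path: str) -> list[str]:
    """Parse a topic file (JSON or Markdown list) into a list of topic strings."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if path.endswith(".json"):
        # JSON files use a top-level "topics" key, as shown above
        return json.loads(text)["topics"]
    # Markdown: keep only the lines that start with a "- " list bullet
    return [line.lstrip("- ").strip()
            for line in text.splitlines()
            if line.lstrip().startswith("- ")]
```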

Output Structure

{
    "url": "https://example.com/article",
    "title": "Article Title",
    "content": "Article content...",
    "metadata": {
        "length": 1234,
        "has_title": true,
        "domain": "example.com"
    }
}
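Since JSONL stores one JSON object per line, a dataset in this shape can be read back with nothing but the standard library. A minimal sketch (the `read_jsonl` helper is illustrative, not part of WebRover's API):

```python
import json

def read_jsonl(path: str) -> list[dict]:
    """Load a JSONL file into a list of dicts, one record per non-empty line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records
```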

🛠️ Advanced Usage

# Initialize with custom output directory
rover = WebRover(output_dir="my_datasets")

# Get scraping statistics
stats = rover.get_stats()
print(f"Success rate: {stats['success_rate']*100:.1f}%")

# Access dataset programmatically
dataset = rover.get_dataset()

📊 Output Files

  • final_dataset/dataset.jsonl: Main dataset in JSONL format
  • websites_master.json: List of all discovered URLs
  • websites_completed.json: Successfully scraped URLs
  • websites_errors.json: Failed attempts with error details
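Assuming the completed and error files each deserialize to a JSON list of entries (the exact schema isn't documented here), a quick post-run audit could be sketched like this; `summarize_run` is a hypothetical helper, not a WebRover function:

```python
import json

def summarize_run(completed_path: str, errors_path: str) -> dict:
    """Report scrape counts and success rate from the bookkeeping files."""
    with open(completed_path, encoding="utf-8") as f:
        completed = json.load(f)
    with open(errors_path, encoding="utf-8") as f:
        errors = json.load(f)
    total = len(completed) + len(errors)
    return {
        "completed": len(completed),
        "errors": len(errors),
        "success_rate": len(completed) / total if total else 0.0,
    }
```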

🔄 Error Handling

WebRover automatically handles common issues:

  • Rate limiting
  • Network timeouts
  • Invalid URLs
  • Blocked requests
  • Malformed content
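The usual recovery pattern for transient failures like timeouts and rate limiting is retry with exponential backoff. A generic sketch of that idea, independent of WebRover's internals:

```python
import time

def with_retries(fetch, url, max_attempts=3, base_delay=1.0):
    """Call fetch(url), retrying failed attempts with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fetch(url)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```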

🚧 Limitations

  • Respects robots.txt and site rate limits
  • Some sites may block automated access
  • Large datasets require more processing time
  • Google search may throttle excessive requests

🗺️ Roadmap

See FUTURE.md for planned features and improvements.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

Built with ❤️ by Area-25. Special thanks to all contributors.


WebRover: Build better datasets, train better models. 🚀

Download files

Download the file for your platform.

Source Distribution

webrover-0.1.6.tar.gz (10.0 kB)

Built Distribution

webrover-0.1.6-py3-none-any.whl (10.8 kB)

File details

Details for the file webrover-0.1.6.tar.gz.

File metadata

  • Download URL: webrover-0.1.6.tar.gz
  • Size: 10.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for webrover-0.1.6.tar.gz:

  • SHA256: df802ff9788ebaf5ecb22815acf55c2009b3c45374fd2d509aa6b4b963f1161e
  • MD5: e54abe7965320129662c8c7b688e14f1
  • BLAKE2b-256: b6178f529c4c053e21ba8be07f14b052e0614295980fed342ecbaf0f95192bdd


File details

Details for the file webrover-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: webrover-0.1.6-py3-none-any.whl
  • Size: 10.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for webrover-0.1.6-py3-none-any.whl:

  • SHA256: a8b4e94302c73eb41e766e74e1d4e85f468144fe6bc0e92e54f868bf3d53f0b1
  • MD5: fdedfa60ebb9d90a740d5120e25c8cb5
  • BLAKE2b-256: e34d8a099d3d48688f2a2b92c3ccbb7016d4e40787f2dc1d08bd1711516548a7

