Generate high-quality datasets from web content for AI training

WebRover 🚀

Python 3.10+ · License: MIT

WebRover is a Python library for generating high-quality datasets from web content, designed specifically for training Large Language Models and other AI applications.


🌟 Features

  • Smart Web Scraping: Automatically find and scrape relevant content based on topics
  • Multiple Input Formats: Support for JSON, YAML, TXT, and Markdown topic files
  • Async Processing: Fast, concurrent scraping with built-in rate limiting
  • Quality Control: Built-in content validation and cleaning
  • LLM-Ready Output: Structured JSONL format perfect for model training
  • Error Handling: Robust error tracking and recovery mechanisms

⚠️ Important Notes

Cloud Environment Compatibility

When using WebRover in cloud environments like Google Colab or Kaggle Notebooks, you may need to handle nested asyncio loops. This is a limitation of these environments, not WebRover itself. To resolve this:

  1. Install nest_asyncio:
pip install nest_asyncio
  2. Add these lines at the start of your notebook:
import nest_asyncio
nest_asyncio.apply()

This setup is only required for:

  • Google Colab
  • Kaggle Notebooks
  • Similar cloud-based Jupyter environments

It's not needed for:

  • Local Python scripts
  • Command line usage
  • Standard server deployments

🚀 Troubleshooting

Cloud Environment Issues

If you encounter asyncio-related errors in cloud environments (Google Colab, Kaggle Notebooks), it is due to how these environments handle nested event loops. Apply the nest_asyncio fix described under Cloud Environment Compatibility above: install nest_asyncio with pip, then call nest_asyncio.apply() at the start of your notebook.

Common Issues and Solutions

  1. Rate Limiting

    • Symptom: Many HTTP 429 errors
    • Solution: Decrease scraping speed by increasing sleep time between requests
  2. Memory Issues with Large Datasets

    • Symptom: Out of memory errors
    • Solution: Use smaller batch sizes or enable disk caching
  3. Blocked Access

    • Symptom: HTTP 403 Forbidden errors
    • Solution: Ensure your user agent is set correctly and respect robots.txt
  4. SSL Certificate Errors

    • Symptom: SSL verification failed
    • Solution: Update your Python SSL certificates or check network settings
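WebRover applies rate limiting internally, but if you wrap your own requests around it, the "slow down on HTTP 429" advice above can be sketched as a generic exponential-backoff helper. This is illustrative only and not part of the WebRover API; fetch_with_backoff and its parameters are hypothetical names:

```python
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Call a fetch() callable, doubling the sleep after each HTTP 429.

    fetch must return a (status_code, body) tuple.
    """
    delay = base_delay
    for attempt in range(max_retries):
        status, body = fetch()
        if status != 429:
            return status, body
        time.sleep(delay)  # back off before retrying
        delay *= 2        # exponential growth between attempts
    return status, body   # give up after max_retries
```

Doubling the delay between attempts is the standard way to recover from rate limiting without hammering the server.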

🚀 Quick Start

Installation

pip install webrover

Basic Usage

from webrover import WebRover

# Initialize WebRover
rover = WebRover()

# Scrape content from topics
rover.scrape_topics(
    topics=["artificial intelligence", "machine learning"],
    num_websites=100
)

# Save the dataset
rover.save_dataset("my_dataset.jsonl")

Using Topic Files

# From JSON file
rover.scrape_topics(
    topics="topics.json",
    num_websites=100
)

# From Markdown list
rover.scrape_topics(
    topics="topics.md",
    num_websites=100
)

📖 Documentation

Supported Topic File Formats

JSON

{
    "topics": [
        "AI basics",
        "machine learning",
        "deep learning"
    ]
}

YAML

topics:
  - AI basics
  - machine learning
  - deep learning

Markdown

- AI basics
- machine learning
- deep learning
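The JSON and Markdown formats above can be parsed with the standard library alone. A rough sketch of how such loading might work (load_topics is a hypothetical helper, not WebRover's actual implementation):

```python
import json
from pathlib import Path

def load_topics(path):
    """Load a topic list from a .json or .md file."""
    text = Path(path).read_text(encoding="utf-8")
    if path.endswith(".json"):
        # JSON files use the {"topics": [...]} shape
        return json.loads(text)["topics"]
    # Markdown files: keep lines that start with a list bullet
    return [line.lstrip("- ").strip()
            for line in text.splitlines()
            if line.lstrip().startswith("- ")]
```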

Output Structure

{
    "url": "https://example.com/article",
    "title": "Article Title",
    "content": "Article content...",
    "metadata": {
        "length": 1234,
        "has_title": true,
        "domain": "example.com"
    }
}
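Each line of the generated dataset.jsonl holds one record in this structure, so the file can be read back line by line. A sketch of a generic JSONL reader (read_jsonl is an illustrative helper, not part of WebRover):

```python
import json

def read_jsonl(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
```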

🛠️ Advanced Usage

# Initialize with custom output directory
rover = WebRover(output_dir="my_datasets")

# Get scraping statistics
stats = rover.get_stats()
print(f"Success rate: {stats['success_rate']*100:.1f}%")

# Access dataset programmatically
dataset = rover.get_dataset()
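Records obtained this way can be filtered with plain Python. Assuming each record follows the documented output structure, a sketch that keeps only substantial pages (filter_by_length and its threshold are illustrative, not a WebRover API):

```python
def filter_by_length(records, min_length=500):
    """Keep records whose scraped content meets a minimum length."""
    return [r for r in records if r["metadata"]["length"] >= min_length]
```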

📊 Output Files

  • final_dataset/dataset.jsonl: Main dataset in JSONL format
  • websites_master.json: List of all discovered URLs
  • websites_completed.json: Successfully scraped URLs
  • websites_errors.json: Failed attempts with error details

🔄 Error Handling

WebRover automatically handles common issues:

  • Rate limiting
  • Network timeouts
  • Invalid URLs
  • Blocked requests
  • Malformed content

🚧 Limitations

  • Respects robots.txt and site rate limits
  • Some sites may block automated access
  • Large datasets require more processing time
  • Google search may throttle excessive requests

🗺️ Roadmap

See FUTURE.md for planned features and improvements.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

Built with ❤️ by Area-25. Special thanks to all contributors.


WebRover: Build better datasets, train better models. 🚀

Download files

Download the file for your platform.

Source Distribution

webrover-0.1.8.tar.gz (11.9 kB)

Built Distribution


webrover-0.1.8-py3-none-any.whl (12.6 kB)

File details

Details for the file webrover-0.1.8.tar.gz.

File metadata

  • Download URL: webrover-0.1.8.tar.gz
  • Upload date:
  • Size: 11.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for webrover-0.1.8.tar.gz
Algorithm Hash digest
SHA256 1cc6699441c2a21ac1a5bc3d6ae8496caaa775551ff712619b3615318f5d7fb4
MD5 c01a524f1784ff498c4db63411979c21
BLAKE2b-256 8183bb881e4b1baff9f009eb49258f13f62e4413d579c8a22b3ab9998688b9b5


File details

Details for the file webrover-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: webrover-0.1.8-py3-none-any.whl
  • Upload date:
  • Size: 12.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for webrover-0.1.8-py3-none-any.whl
Algorithm Hash digest
SHA256 14961d2930f485050ed073a8c1b762b3af84c788aa519565aa770f311f0af1d1
MD5 beffa4555c446ff5b88fb7bc4eaa7ddd
BLAKE2b-256 a22302785fd995511522fd1e95b58b30059eb86a600d9d8740a536c8e9086578

