
Project description

CrawlBot

CrawlBot is a Scrapy-based project designed to crawl specified domains and extract various webpage components such as titles, headings, images, and links. This project supports dynamic configuration and can be used to run different spiders with specified start URLs.

Table of Contents

  • Installation
  • Usage
  • Project Structure
  • Contributing
  • License

Installation

To install the CrawlBot package, use pip:

```bash
pip install crawl_bot
```

Usage

Spiders

This project includes the following spiders:

  • BasicSpider: A basic spider that extracts titles, headings, images, links, etc.
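
For illustration, here is a rough sketch of what a spider like BasicSpider might look like. The class body, selectors, and yielded fields below are assumptions for demonstration, not the package's actual implementation:

```python
# Illustrative sketch only -- the selectors and yielded fields are assumptions,
# not the real BasicSpider shipped with crawl_bot.
import scrapy


class BasicSpider(scrapy.Spider):
    name = "basic_spider"

    def __init__(self, start_urls=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Accept start URLs at construction time so they can be supplied dynamically.
        self.start_urls = start_urls or []

    def parse(self, response):
        # Extract the page components mentioned above: title, headings, images, links.
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
            "headings": response.css("h1::text, h2::text").getall(),
            "images": response.css("img::attr(src)").getall(),
            "links": response.css("a::attr(href)").getall(),
        }
```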

Command-Line Usage

You can run the spiders from the command line using the run_spider command. Replace <spider_name> with the name of the spider you want to run and provide the start URLs:

```bash
run_spider <spider_name> <start_url> [<start_url> ...]
```

Example:

```bash
run_spider basic_spider http://example.com http://another-example.com
```

Programmatic Usage

You can also run the spiders programmatically from another Python script:

```python
from crawl_bot.run_spider import run_spider

spider_name = 'basic_spider'
start_urls = ['http://example.com', 'http://another-example.com']
run_spider(spider_name, start_urls)
```
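
Under the hood, a helper like run_spider would typically wrap Scrapy's CrawlerProcess. The sketch below shows one way such a helper could be written; it is an assumption about the internals, not the package's actual code:

```python
# Hypothetical sketch of a run_spider helper built on Scrapy's CrawlerProcess;
# the real crawl_bot.run_spider may be implemented differently.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def run_spider(spider_name, start_urls):
    # Load the project's settings.py, hand the spider name and start URLs to
    # the crawler, then block until the crawl finishes.
    process = CrawlerProcess(get_project_settings())
    process.crawl(spider_name, start_urls=start_urls)
    process.start()
```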

Project Structure

Here is an overview of the project structure:

  • scrapy.cfg: Scrapy configuration file.
  • my_scrapy_project/: Directory containing the Scrapy project.
    • items.py: Defines the items that will be scraped.
    • middlewares.py: Custom middlewares for the Scrapy project.
    • pipelines.py: Pipelines for processing scraped data.
    • settings.py: Configuration settings for the Scrapy project.
    • spiders/: Directory containing the spiders.
      • basic_spider.py: Basic spider implementation.
      • another_spider.py: Another example spider.
  • run_spider.py: Script to run the spiders.
  • setup.py: Setup script for installing the package.
  • MANIFEST.in: Configuration for including additional files in the package.
  • README.md: Project documentation.
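
As an example of how these pieces fit together, the sketch below shows what an item definition in items.py might look like. The class and field names are illustrative assumptions, not the project's actual schema:

```python
# Hypothetical items.py -- field names are assumptions chosen to match the
# components the spiders extract (title, headings, images, links).
import scrapy


class PageItem(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    headings = scrapy.Field()
    images = scrapy.Field()
    links = scrapy.Field()
```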

Contributing

We welcome contributions to CrawlBot! If you have an idea for a new feature or have found a bug, please open an issue or submit a pull request. Here's how you can contribute:

  1. Fork the repository.
  2. Create a new branch: `git checkout -b my-feature-branch`
  3. Make your changes and commit them: `git commit -m 'Add some feature'`
  4. Push to the branch: `git push origin my-feature-branch`
  5. Open a pull request.

Please ensure your code adheres to the project's coding standards and includes appropriate tests.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Download files

Download the file for your platform.

Source Distribution

crawl_bot-0.1.5.tar.gz (7.7 kB)

Uploaded Source

Built Distribution

crawl_bot-0.1.5-py3-none-any.whl (8.1 kB)

Uploaded Python 3

File details

Details for the file crawl_bot-0.1.5.tar.gz.

File metadata

  • Download URL: crawl_bot-0.1.5.tar.gz
  • Upload date:
  • Size: 7.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for crawl_bot-0.1.5.tar.gz:

  • SHA256: 43e0b19de07c529fbb6ed45e4eb26abd070b851ca583bd849922af1d83cbe04c
  • MD5: e6636eafc90cc337f90805fcdbbc0a12
  • BLAKE2b-256: 16b4268cdab21cb2d6de9bf7d403481e329796aa908abbc15fc0724c94088d4b


File details

Details for the file crawl_bot-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: crawl_bot-0.1.5-py3-none-any.whl
  • Upload date:
  • Size: 8.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for crawl_bot-0.1.5-py3-none-any.whl:

  • SHA256: a3425e856ca0c12a37020f9fc4c73e46c5146af56f5ee0b0539cd56445c1db58
  • MD5: 3023abb4f0a6d993156b63fd06cf325e
  • BLAKE2b-256: 6efc8db29c23bbcb5b23a07f5285008665baa32b8b5c472d0b16514c9c650aa8

