
Project description

CrawlBot

CrawlBot is a Scrapy-based project designed to crawl specified domains and extract various webpage components such as titles, headings, images, and links. This project supports dynamic configuration and can be used to run different spiders with specified start URLs.

Table of Contents

  • Installation
  • Usage
  • Project Structure
  • Contributing
  • License

Installation

To install the CrawlBot package, use pip:

```bash
pip install crawl_bot
```

Usage

Spiders

This project includes the following spiders:

  • BasicSpider: A basic spider that extracts titles, headings, images, links, etc. (see the sketch just below).
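
The exact implementation lives in the package's spider modules; as a hedged illustration only, a spider in this style might look roughly like the sketch below (the class name, field names, and selectors are assumptions, not crawl_bot's actual code).

```python
# Illustrative sketch only; the real BasicSpider in crawl_bot may use
# different field names, selectors, and configuration options.
import scrapy


class ExampleBasicSpider(scrapy.Spider):
    name = "example_basic_spider"

    def __init__(self, start_urls=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Accept start URLs at runtime, mirroring CrawlBot's dynamic configuration.
        self.start_urls = start_urls or []

    def parse(self, response):
        # Yield one item per crawled page with the components described above.
        yield {
            "url": response.url,
            "title": response.css("title::text").get(),
            "headings": response.css("h1::text, h2::text").getall(),
            "images": response.css("img::attr(src)").getall(),
            "links": response.css("a::attr(href)").getall(),
        }
```

A spider like this yields one item per page, which Scrapy then routes through the project's item pipelines.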

Command-Line Usage

You can run the spiders from the command line using the run_spider command. Replace <spider_name> with the name of the spider you want to run and provide the start URLs:

```bash
run_spider <spider_name> <start_url> ...
```

Example:

```bash
run_spider basic_spider http://example.com http://another-example.com
```

Programmatic Usage

You can also run the spiders programmatically from another Python script:

```python
from crawl_bot.run_spider import run_spider

spider_name = 'basic_spider'
start_urls = ['http://example.com', 'http://another-example.com']
run_spider(spider_name, start_urls)
```
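
As background, a helper like run_spider is typically a thin wrapper around Scrapy's CrawlerProcess. The sketch below shows that common pattern under that assumption; it is not crawl_bot's actual implementation, and the name run_spider_sketch is hypothetical.

```python
# Hypothetical wrapper in the spirit of crawl_bot.run_spider; the real
# module may differ in signature and behaviour.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings


def run_spider_sketch(spider_name, start_urls):
    # Load the Scrapy project's settings so pipelines and middlewares apply.
    process = CrawlerProcess(get_project_settings())
    # Scrapy resolves the spider name via the project's spider loader and
    # forwards keyword arguments to the spider's __init__.
    process.crawl(spider_name, start_urls=start_urls)
    process.start()  # Blocks until the crawl finishes.
```

Note that CrawlerProcess.start() starts the Twisted reactor and blocks, so a wrapper like this is normally invoked once per script run.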

Project Structure

Here is an overview of the project structure:

  • scrapy.cfg: Scrapy configuration file.
  • my_scrapy_project/: Directory containing the Scrapy project.
    • items.py: Defines the items that will be scraped (see the sketch after this list).
    • middlewares.py: Custom middlewares for the Scrapy project.
    • pipelines.py: Pipelines for processing scraped data.
    • settings.py: Configuration settings for the Scrapy project.
    • spiders/: Directory containing the spiders.
      • basic_spider.py: Basic spider implementation.
      • another_spider.py: Another example spider.
  • run_spider.py: Script to run the spiders.
  • setup.py: Setup script for installing the package.
  • MANIFEST.in: Configuration for including additional files in the package.
  • README.md: Project documentation.
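
As an illustration of the role items.py plays in this layout, an item declaration for the page components mentioned earlier might look like the following sketch; the class and field names are assumptions rather than the package's actual schema.

```python
# Illustrative item definition; crawl_bot's items.py may declare
# different fields.
import scrapy


class PageItem(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    headings = scrapy.Field()
    images = scrapy.Field()
    links = scrapy.Field()
```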

Contributing

We welcome contributions to CrawlBot! If you have an idea for a new feature or have found a bug, please open an issue or submit a pull request. Here's how you can contribute:

  1. Fork the repository.
  2. Create a new branch: git checkout -b my-feature-branch
  3. Make your changes and commit them: git commit -m 'Add some feature'
  4. Push to the branch: git push origin my-feature-branch
  5. Open a pull request.

Please ensure your code adheres to the project's coding standards and includes appropriate tests.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

crawl_bot-0.1.3.tar.gz (7.7 kB)

Built Distribution

crawl_bot-0.1.3-py3-none-any.whl (8.1 kB)

File details

Details for the file crawl_bot-0.1.3.tar.gz.

File metadata

  • Download URL: crawl_bot-0.1.3.tar.gz
  • Upload date:
  • Size: 7.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for crawl_bot-0.1.3.tar.gz:

  • SHA256: 20a44416e2f2da4458d1333af494879f34cdb4844d67b3b78fc926d64d1e60e9
  • MD5: c5f68413b2ec0b6c32d3d2d13ef92d44
  • BLAKE2b-256: b9b8f63b074f59106516064dda2de9dcf5249a6b06281975be63f4667f08b3eb
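
To verify a downloaded archive against the SHA256 digest listed above, a quick check along these lines works (assuming the file sits in the current directory):

```python
# Compare the downloaded sdist against the published SHA256 digest above.
import hashlib

expected = "20a44416e2f2da4458d1333af494879f34cdb4844d67b3b78fc926d64d1e60e9"
with open("crawl_bot-0.1.3.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("hash OK" if actual == expected else "hash mismatch")
```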


File details

Details for the file crawl_bot-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: crawl_bot-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 8.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for crawl_bot-0.1.3-py3-none-any.whl:

  • SHA256: 199b4ea464a83a80670813fbc0bdb76fd5b374ade620fdb92d98d24e401478b8
  • MD5: 1148f33e1e77385bb0c696c552ddffea
  • BLAKE2b-256: 8bec1c2561320de49aaa2da9073efcb74e16a03e6aa76d2cd962c77ad1472f38

