CobWeb

CobWeb is a Python library for web scraping. The library consists of two classes: Spider and Scraper.

Spider

The Spider class is used to crawl a website and identify internal and external links. It has the following methods:

__init__(self, url, max_hops = 0): Initializes a Spider object with the given URL and the maximum number of hops to crawl from the initial URL.
_getLinks(self): Crawls the website and identifies internal and external links.
_showLinks(self): Returns the set of internal and external URLs found during crawling.
__str__(self): Returns a string representation of the Spider object.
__repr__(self): Returns a string representation of the Spider object.
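
Pieced together from the method list above, a basic crawl might look like this. A sketch only: the leading underscores mark _getLinks and _showLinks as internal, so calling them directly is an assumption about the intended flow.

from CobWeb import Spider

# Crawl up to 2 hops out from the starting URL
spider = Spider("https://example.com", max_hops=2)

spider._getLinks()            # crawl and collect links
links = spider._showLinks()   # set of internal and external URLs found
print(links)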

Scraper

The Scraper class extends the functionality of the Spider class by scraping HTML content from web pages based on user-defined parameters. It has the following methods:

__init__(self, config): Initializes a Scraper object with the given configuration parameters.
run(self): The public entry point; scrapes HTML content from web pages according to the user-defined parameters and returns the results.
__str__(self): Returns a string representation of the Scraper object.
__repr__(self): Returns a string representation of the Scraper object.

Installation

You can install CobWeb using pip:

    pip install CobWeb-lnx

Config

The config is either an object in memory or a YAML file, which you can build by installing YAMLbuilder or by starting from the provided template! Example of a complete object:

config = {
    "url": "",         # starting URL
    "hops": 0,         # crawl depth (0 = single page)
    "tags": [],        # HTML tag names to scrape
    "classes": [],     # CSS class names to match
    "attrs": [],       # attribute names to match
    "attrV": [],       # attribute values to match
    "IDv": "",         # ID value to match
    "selectors": [],   # CSS selectors
}

Example of a YAML file (if you choose this route, call the config_parser function and give it a valid path!):

IDv:
attrV: []
attrs: []
classes: []
hops: 
selectors: []
tags:
    - 
    - 
url: 
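
If you go the YAML route, loading the file looks roughly like this. A sketch only: it assumes config_parser is importable from the CobWeb package and returns a config object that Scraper accepts; the import path and file name are illustrative.

from CobWeb import Scraper, config_parser

# Parse the YAML file into a config object (path is illustrative)
config = config_parser("config.yaml")

scraper = Scraper(config)
results = scraper.run()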

Example Usage

from CobWeb import Spider, Scraper

# Create a Spider object with a URL and maximum number of hops
spider = Spider("https://example.com", max_hops=10)

# Get the internal and external links
links = spider.run()
print(links)

# Create a Scraper object with a configuration dictionary.
# "hops" sets the crawl depth: the Scraper uses the Spider internally to
# gather more links and scrape those pages as well. To scrape a single
# page, set hops to 0 or omit it.
config = {
    "url": "https://example.com",
    "hops": 2,
    "tags": ["h1", "p"],
}
scraper = Scraper(config)

# Scrape HTML content from web pages based on user-defined parameters
results = scraper.run()

# Print the results: a dictionary of lists of scraped content, keyed by
# the tags, attributes, etc. provided in the config
print(results)
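
A quick way to inspect what came back; this sketch assumes the result shape described in the comment above (one list of matches per configured tag or attribute):

for key, matches in results.items():
    print(f"{key}: {len(matches)} matches")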
